\section{Introduction}
Recently there has been encouraging progress in nonperturbative studies
of QCD thermodynamics, which has stimulated a great deal of theoretical
activity. Phenomenological and microscopic models have been developed along
parallel and complementary lines, allowing one to predict a rich phase
structure at finite temperature, $T$, and chemical potential, $\mu_B$
\cite{AlfordRW98-99,RappSSV98,HalaszJSSV,Ratti}. The quark-gluon plasma (QGP)
has been a longstanding theoretical issue since the discovery of the asymptotic
freedom of QCD \cite{HalaszJSSV,Shuryak2006}. Besides the intrinsic theoretical
interest of this subject, such studies are important because they are directly
applicable to the regime under current experimental investigation at the Brookhaven
National Laboratory (BNL) Relativistic Heavy Ion Collider (RHIC).
In fact, extensive experimental work has been done with heavy-ion collisions at
CERN and Brookhaven to explore the $T-\mu_B$ phase diagram and look for
signatures of the QGP.
Theoretical studies have accumulated considerable evidence that a critical
end point (CEP) exists in the phase diagram of strongly interacting matter. Since Fodor and Katz,
who presented a phase diagram with the CEP within lattice calculations
\cite{Fodor:2004JHEP}, remarkable progress in this area has been made. It is an open
question whether a critical end point exists in the $T-\mu_B$ plane and, in particular,
how to predict its location. When only thermal effects are considered, universality
arguments \cite{Wilczek,RajagoplalW93} and lattice simulations \cite{Ukawa} indicate that
the order of the phase transition depends on the masses and flavors of quarks.
Considering also nonvanishing chemical potentials, a variety of models (see e.g.
\cite{Buballa:NPA1998,Buballa:PLB1999}) predict a second order phase transition point in
the phase diagram. This suggests that the phase diagram exhibits a CEP. At this point the
phase transition is of second order and long wavelength fluctuations appear, leading to
characteristic experimental consequences that can be detected by enhanced critical
fluctuations in heavy-ion collisions \cite{Stephanov:1998PRL,Hatta:2003PRD}. So, the
location of the CEP has become an important topic in effective model studies and lattice
calculations.
In fact, the phase diagram and QCD thermodynamics in general are becoming more
transparent due to the combination of research in several areas: perturbative QCD,
effective models, and lattice calculations.
The possible existence of such a point has recently been emphasized and its universal
critical properties have been discussed by several authors in the context of QCD inspired
models \cite{AchwarzKP99,Buballa:NPA1998,Buballa:PLB1999,PLBcosta}.
This point of the phase diagram is the special focus of the present article.
In a previous work \cite{PLBcosta}, we studied the phase diagram focusing our attention
on the CEP and the physics near it, through the behavior of the baryon number
susceptibility and the specific heat; the study was performed in the framework of the
SU(3) Nambu--Jona-Lasinio (NJL) model. Here, besides extending the investigation to other
observables, we make a comparative study of the phase diagram in the SU(2) and SU(3)
NJL models. Since more information can be extracted from the simpler version of the NJL
model, this systematic study is expected to provide a better understanding of the
interesting physics around the CEP/TCP (tricritical point).
Our main goal is to locate the critical end point and confront the results with
universality arguments.
Since the CEP is a genuine thermodynamic singularity, namely a second order critical
point, the order parameter and related observables, such as susceptibilities, can
provide relevant signatures of phase transitions.
We notice that susceptibilities in general are related to fluctuations through the
fluctuation-dissipation theorem, which makes it possible to observe signals of phase
transitions in heavy-ion reactions \cite{quat7}. The specific heat $C$, which is related
to the event-by-event temperature fluctuations \cite{quat8} and mean transverse momentum
fluctuations \cite{quat9} in heavy-ion reactions, is also a quantity of interest in our
calculation. These fluctuations should show a divergent behavior near the CEP.
After equilibration, the dense matter created in relativistic heavy-ion collisions
will expand along lines of constant entropy per baryon.
We remark that most of the work done in this area has been performed with nonstrange
quarks only and, when strange quarks are considered, mixing between the flavors $u$, $d$,
and $s$ has not been taken into account \cite{Barducci:1994PRD}. Our SU(3) version of
the NJL model includes a term that incorporates the axial anomaly of QCD and is
responsible for the mechanism of flavor mixing.
We relate the discontinuity of the order parameter to discontinuities of other physical
quantities such as, for instance, the entropy. We are particularly interested in
confronting our calculation, with respect to the notion of a second order phase
transition in the presence of nonvanishing current quark masses, with those of classical
mean field theories. From lattice calculations it is well known that the strange quark
mass plays a decisive role in the location of the CEP.
On the other hand, information on the nature of excitations and the strength of their
interaction in the QGP would be crucial in the experimental search. Also in this context
it is relevant to confront first-principle based approaches with the results
of phenomenological models like the NJL model.
We organize the work in four main steps. First, after the presentation of the model
formalism (Sec. II), we discuss the behavior of the equations of state and analyze the
chiral phase transition (Sec. III). The well known universality hypothesis of phase
transitions will be considered. Second, we study the behavior of relevant physical
quantities in the $T-\mu_B$ plane (Sec. IV). Third, we analyze the phase diagrams in
the $T-\mu_B$ plane looking for the location of the critical end point and the behavior
of susceptibilities (Sec. V). Finally, we discuss signs of \textit{partial} and
\textit{effective} restoration of chiral symmetry (Sec. VI), looking for the convergence
of chiral partners. We conclude in Sec. VII with a brief summary of our results.
\section{Formulation of the model}\label{model}
The Lagrangian of the SU(3) NJL model
\cite{njl,{Kunihiro:PR1994},Rehberg:1995PRC} is given by
\begin{eqnarray} \label{lagr}
{\mathcal L} &=& \bar{q} \left( i \partial \cdot \gamma - \hat{m} \right) q
+ \frac{g_S}{2} \sum_{a=0}^{8} \Bigl[ \left( \bar{q} \lambda^a q
\right)^2+ \left( \bar{q} (i \gamma_5)\lambda^a q \right)^2
\Bigr] \nonumber \\
&+& g_D \Bigl[ \mbox{det}\bigl[ \bar{q} (1+\gamma_5) q \bigr]
+ \mbox{det}\bigl[ \bar{q} (1-\gamma_5) q \bigr]\Bigr] \, .
\end{eqnarray}
The column vector $q = (u,d,s)$ represents the quark field with three flavors, $N_f=3$,
and three colors, $N_c=3$. $\lambda^a$ are the Gell--Mann matrices, $a = 0,1,\ldots,8$,
with ${\lambda^0=\sqrt{\frac{2}{3}} \, {\bf I}}$.
The Lagrangian (\ref{lagr}) is invariant under chiral SU$_L(3)\otimes$SU$_R(3)$
transformations if we put $m_i=0$, where $m_i$ are the current quark masses
($\hat{m}=\mbox{diag}(m_u,m_d,m_s)$).
The last term in (\ref{lagr}) breaks the U$_A(1)$ symmetry. This term is a
reflection of the axial anomaly in QCD.
The model Lagrangian (\ref{lagr}) can be put in a form suitable for the
bosonization procedure after an adequate treatment of the last term, which allows
one to obtain a four-quark interaction from the six-quark interaction.
Then the following effective quark Lagrangian is obtained:
\begin{eqnarray}
{\cal L}_{eff} &=& \bar q\,(\,i\, {\gamma}^{\mu}\,\partial_\mu\,-\,\hat m)\, q \,\,
+S_{ab}[\,(\,\bar q\,\lambda^a\, q\,)(\bar q\,\lambda^b\, q\,)]
+\,P_{ab}[(\,\bar q \,i\,\gamma_5\,\lambda^a\, q\,)\,(\,\bar q
\,i\,\gamma_5\,\lambda^b\, q\,)\,],
\label{lagr_eff}
\end{eqnarray}
where the projectors $S_{ab}\,, P_{ab}$ are given by:
\begin{eqnarray}
S_{ab} &=& g_S \delta_{ab} + g_D D_{abc}\left\langle \bar{q} \lambda^c q\right\rangle, \label{sab}\\
P_{ab} &=& g_S \delta_{ab} - g_D D_{abc}\left\langle \bar{q} \lambda^c q\right\rangle. \label{pab}
\end{eqnarray}
The constants $D_{abc}$ coincide with the SU(3) structure constants
$d_{abc}\,\,$ for $a,b,c =(1,2,\ldots ,8)$ and
$D_{0ab}=-\frac{1}{\sqrt{6}}\delta_{ab}$, $D_{000}=\sqrt{\frac{2}{3}}$.
The hadronization procedure is carried out by integrating over the quark fields
in the functional integral with (\ref{lagr_eff}). This yields the natural degrees
of freedom of low-energy QCD in the mesonic sector, with the following
effective action:
\begin{align}
W_{eff}[\varphi,\sigma] & =-\frac{1}{2}\left( \sigma^{a}S_{ab}^{-1}%
\sigma^{b}\right) -\frac{1}{2}\left( \varphi^{a}P_{ab}^{-1}\varphi
^{b}\right) \nonumber\\
& -i\mbox{Tr}\,\mbox{ln}\Bigl[i\gamma^{\mu}\partial_{\mu}-\hat{m}%
+\sigma_{a}\lambda^{a}+(i\gamma_{5})(\varphi_{a}\lambda^{a})\Bigr]\,.
\label{action}
\end{align}
The notation $\mbox{Tr}$ stands for the trace operation over discrete indices
($N_{f}$ and $N_{c}$) and integration over momentum. The fields $\sigma^{a}$
and $\varphi^{a}$ are scalar and pseudoscalar meson nonets, respectively.
The first variation of the action (\ref{action}) leads to the gap equations,
\begin{eqnarray}\label{gap}
M_i = m_i - 2g_{_S} \big <\bar{q_i}q_i \big > -2g_{_D}\big <\bar{q_j}q_j\big > \big <\bar{q_k}q_k \big >\,,
\end{eqnarray}
with $i,j,k =u,d,s$ cyclic. $M_i$ are the constituent quark masses and the quark
condensates are given by: $\big <\bar{q}_i q_i \big > = -i \mbox{Tr}[ S_i(p)]$, $S_i(p)$
being the quark Green function.
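As an illustration, the coupled gap equations (\ref{gap}) can be solved self-consistently at $T=0$. The sketch below is our own (not part of the original calculation): it uses the vacuum condensate $\left\langle \bar{q}_i q_i\right\rangle = -(N_c/\pi^2)\, M_i \int_0^\Lambda p^2/E_i\, dp$ with a sharp three-momentum cutoff, a damped fixed-point iteration, and the parameter set quoted at the end of this section.

```python
import numpy as np

LAMBDA = 602.3                        # three-momentum cutoff [MeV]
G_S = 3.67 / LAMBDA**2                # scalar coupling g_S [MeV^-2]
G_D = -12.36 / LAMBDA**5              # 't Hooft coupling g_D [MeV^-5]
M_CUR = np.array([5.5, 5.5, 140.7])   # current masses m_u, m_d, m_s [MeV]
NC = 3                                # number of colors

def condensate(M, npts=4000):
    """Vacuum condensate <qbar q> = -(N_c/pi^2) M * int_0^Lambda p^2/E dp."""
    p = np.linspace(0.0, LAMBDA, npts)
    f = p**2 * M / np.sqrt(p**2 + M**2)
    dp = p[1] - p[0]
    return -NC / np.pi**2 * dp * (0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])

def solve_gap(mix=0.5, tol=1e-9, itmax=1000):
    """Damped fixed-point iteration for the constituent masses (M_u, M_d, M_s)."""
    M = M_CUR + 300.0                 # rough starting guess
    for _ in range(itmax):
        phi = np.array([condensate(m) for m in M])
        # M_i = m_i - 2 g_S <q_i q_i> - 2 g_D <q_j q_j><q_k q_k>, (i,j,k) cyclic
        new = M_CUR - 2*G_S*phi - 2*G_D*np.array(
            [phi[1]*phi[2], phi[2]*phi[0], phi[0]*phi[1]])
        if np.max(np.abs(new - M)) < tol:
            return new
        M = mix*new + (1.0 - mix)*M
    return M
```

With the parameter set of Sec. II this iteration reproduces the vacuum constituent masses quoted there ($M_u = M_d \simeq 367.7$ MeV, $M_s \simeq 549.5$ MeV), and the flavor-mixing 't Hooft term is visible as the cyclic product of the other two condensates.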
The baryonic thermodynamic potential of the grand canonical ensemble, $\Omega(T, V, \mu_i)$,
is also obtained directly from the effective action (\ref{action}). We thus take the
temperature $T$, the volume $V$, and the chemical potential of the $i$-quark ($\mu_i$) as
the full set of independent state variables.
The relevant equations of state for the entropy $S$, the pressure $p$, and the particle
number $N_i$, as well as the internal energy $E$, follow from well known expressions like
the Gibbs-Duhem relation
\begin{equation}\label{tpot}
\Omega (T, V, \mu_i )= E- TS - \sum_{i=u,d,s} \mu _{i} N_{i}\,.
\end{equation}
The following expressions are obtained:
\begin{eqnarray}\label{energy}
E &=&- \frac{ N_c}{\pi^2} V\sum_{i=u,d,s}\left\{
\int p^2 dp \, \frac{p^2 + m_{i} M_{i}}{E_{i}}\, (1\,-\,n_{i}-\bar n_{i})\right\} \nonumber \\
&&- g_{S} \sum_{i=u,d,s}\, (\big < \bar{q}_{i}q_{i}\big > )^{2}
- 2 g_{D} \big < \bar{u}u\big > \big < \bar{d}d\big > \big < \bar{s}s\big > \,,
\end{eqnarray}
\begin{eqnarray}\label{entropy}
S =-\frac{ N_c}{\pi^2} V \sum_{i=u,d,s}
\int p^2 dp\,\,
\biggl\{ \bigl[ n_{i} \ln n_{i}+(1-n_{i})\ln (1-n_{i})%
\bigr] +\bigl[ n_{i}\rightarrow 1 - \bar n_{i} \bigr] \biggr\} \, ,
\end{eqnarray}
\begin{equation}\label{np}
N_i = \frac{ N_c}{\pi^2} V \int p^2 dp\,\,
\left( n_{i}-\bar n_{i} \right).
\end{equation}
$V$ is the volume of the system and the quark density is determined by the relation $\rho_i =
N_i / V$. In the previous equations, $n_i$ and $\bar n_i$ are the quark and antiquark
occupation numbers,
\begin{equation}
n_{i}= \frac{1}{1+ e^{\beta(E_{i} - \mu_{i})}},\hskip1cm \bar n_{i} = \frac{1}{1+
e^{\beta(E_{i} + \mu_{i})}}.
\end{equation}
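For illustration, these occupation numbers can be evaluated in a numerically stable way. This is a minimal sketch of ours; the overflow-safe form of the Fermi function is an implementation detail, not part of the model.

```python
import math

def fermi(x):
    """Numerically stable Fermi function 1/(1 + e^x), valid for any real x."""
    if x > 0:
        e = math.exp(-x)
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(x))

def occupation(p, M, T, mu):
    """Quark and antiquark occupation numbers (n_i, nbar_i) at momentum p.

    p, M, T, mu in the same units (e.g. MeV); E_i = sqrt(p^2 + M^2).
    """
    E = math.sqrt(p*p + M*M)
    return fermi((E - mu)/T), fermi((E + mu)/T)
```

At $\mu_i=0$ the two distributions coincide, and for $T\rightarrow 0$ the quark distribution approaches a step function at $E_i=\mu_i$.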
We define $\mu_B= \frac{1}{3} (\mu_u+\mu_d+\mu_s)$ and the baryonic matter density as
$\rho_B= \frac{1}{3} (\rho_u+\rho_d+\rho_s)$. As usual, the pressure and the energy density are
defined such that their values are zero in the vacuum state \cite{Buballa:2004PR}:
\begin{equation} \label{p}
p (\mu_i, T) = - \frac{1}{V}\left[ \Omega(\mu_i, T) - \Omega(0, 0) \right] ,
\end{equation}
\begin{equation}\label{e}
\epsilon(\mu_i, T) = \frac{1}{V}\left[E(\mu_i, T)-E(0,
0)\right].
\end{equation}
The baryon number susceptibility is the response of the baryon number density $\rho_B(T,
\mu_i)$ to an infinitesimal variation of the quark chemical potential $\mu_i$
\cite{McLerran:1987PRD}:
\begin{equation} \label{chi}
\chi_B = \frac{1}{3}\sum_{i=u,d,s}\left(\frac{\partial
\rho_i}{\partial\mu_i}\right)_{T}.
\end{equation}
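A direct way to evaluate Eq. (\ref{chi}) numerically is by central finite differences of the density integral (\ref{np}). The sketch below is ours and simplifies the problem: it treats a gas of quasi-quarks with fixed constituent masses, neglecting the implicit $\mu$-dependence of $M_i$ through the gap equations. All dimensionful quantities are in MeV.

```python
import numpy as np

NC = 3          # number of colors
PMAX = 3000.0   # upper integration limit [MeV], ample at these T and mu

def rho(mu, T, M, npts=3000):
    """Number density rho_i = (N_c/pi^2) int_0^PMAX p^2 (n_i - nbar_i) dp."""
    p = np.linspace(0.0, PMAX, npts)
    E = np.sqrt(p**2 + M**2)
    n    = 1.0 / (1.0 + np.exp(np.clip((E - mu)/T, -500, 500)))
    nbar = 1.0 / (1.0 + np.exp(np.clip((E + mu)/T, -500, 500)))
    f = p**2 * (n - nbar)
    dp = p[1] - p[0]
    return NC/np.pi**2 * dp * (0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])

def chi_B(mu, T, masses, h=0.01):
    """chi_B = (1/3) sum_i (d rho_i / d mu_i)_T by central differences."""
    return sum((rho(mu + h, T, M) - rho(mu - h, T, M)) / (2*h)
               for M in masses) / 3.0
```

In this simplified setting $\chi_B$ grows smoothly with $T$; the divergent behavior at the CEP discussed below only appears once the $(T,\mu)$-dependence of the constituent masses is kept.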
Another relevant observable, in the context of possible signatures of chiral symmetry
restoration in the hadron-quark transition and in the transition from hadronic matter to
the QGP \cite{McLerran:1987PRD,Asakawa:2000PRL,Blaizot:2001PLB}, is the specific heat,
which is defined by \cite{PLBcosta}
\begin {equation}\label{c}
C = \frac{T}{V}\left ( \frac{\partial S}{\partial T}\right)_{N_i}
= \frac{T}{V}\left[\left ( \frac{\partial S}{\partial T} \right)_{\mu_i}
- \frac{[(\partial N_i/\partial T)_{\mu_i}]^2}{(\partial N_i/\partial \mu_i)_T}\right],
\end {equation}
where we have transformed the derivative $(\partial S/\partial T)_{N_i}$ using the
Jacobian formalism. In fact, we work in the grand canonical ensemble, where
$(T,V,\mu_i)$ is the set of natural independent variables, while still holding $N_i$
and $V$ fixed.
By expanding the effective action (\ref{action}) over meson fields, we get an effective
meson action from which we can obtain the meson propagators. In the present work we are
only concerned with $\pi^0$ and $\sigma$ mesons. Starting with the pseudoscalar mesons we
have the effective meson action:
\begin{equation}
W_{eff}^{(2)}[\varphi]=-\frac{1}{2}\varphi^{a}\left[ P_{ab}^{-1}-\Pi_{ab}%
^{P}(P)\right] \varphi^{b}=-\frac{1}{2}\varphi^{a}(D_{ab}^{P}(P))^{-1}%
\varphi^{b}, \label{act2}%
\end{equation}
where $\Pi_{ab}^{P}(P)$ is the polarization operator,
\begin{equation}
\Pi_{ab}^{P}(P)=iN_{c}\int\frac{d^{4}p}{(2\pi)^{4}}\mbox{tr}_{D}\left[
S_{i}(p)(\lambda^{a})_{ij}(i\gamma_{5})S_{j}(p+P)(\lambda^{b})_{ji}%
(i\gamma_{5})\right], \label{actp}
\end{equation}
where $\mbox{tr}_{D}$ is the trace over Dirac matrices. The expression in square brackets
in (\ref{act2}) is the inverse non-normalized meson propagator $(D_{ab}^{P}(P))^{-1}$.
The inverse meson propagator for $\pi^0$ is given by
\begin{equation}
D^{-1}_{\pi^0} (P)= 1-P_{\pi^0} J_{uu}^P (P),
\end{equation}
with
\begin{equation}
P_{\pi^0}=g_{S}+g_{D}\left\langle\bar{q}_{s}q_{s}\right\rangle
\end{equation}
and where the polarization operator of the $\pi^0$ meson takes the form
\begin{equation}
J_{uu}^{P}(P_{0})=4\left[2I_{1}^{u}-P_{0}^{2}\,\,I_{2}^{uu}(P_{0})\right].
\label{ppij}
\end{equation}
The integrals $I_{1}^{i}$ and $I_{2}^{ij}(P_{0})$ are given in Appendix A.
The mass of the $\pi^0$ meson can be determined by the condition
$D_{\pi^0}^{-1}(M_{\pi^0},\mathbf{0})=0$ and the quark--meson coupling constant is
evaluated as
\begin{eqnarray}\label{mesq}
g_{\pi^0\overline{q}q}^{-2} = -\frac{1}{2 M_{\pi^0}} \frac{\partial}{\partial P_0}
\left[J_{uu}^{P}(P_0) \right]_{ \vert_{ P_0=M_{\pi^0}}}.
\end{eqnarray}
The procedure to describe scalar mesons is analogous.
We present below the most relevant steps.
Keeping now the scalar mesons only in (\ref{action}), we have the effective
meson action
\begin{equation}
W_{eff}^{(2)}[\sigma]=-\frac{1}{2}\sigma^{a}\left[ S_{ab}^{-1}-\Pi_{ab}%
^{S}(P)\right] \sigma^{b}=-\frac{1}{2}\sigma^{a}({D}_{ab}^{S}(P))^{-1}%
\sigma^{b}, \label{accao2}%
\end{equation}
with $\Pi_{ab}^{S}(P)$ being the polarization operator, which in the momentum
space has the form of (\ref {actp}) with ($i\gamma_5$) substituted by the
identity matrix.
To consider the $\sigma$ meson we take into account the matrix structure of the
propagator in (\ref{accao2}). For the isospin symmetry considered in the present work
$\left\langle\bar{q}_{u}\,q_{u}\right\rangle=\left\langle\bar{q}_{d}\,q_{d}\right\rangle$,
and the matrices ${S}_{ab}$ and ${\Pi}_{ab}^{S}$ are reduced to
\begin{equation}
{S}_{ab}\rightarrow\left(
\begin{array}
[c]{cc}%
S_{33} & 0\\
0 & \bar{S}_{ab}%
\end{array}
\right) \,\,\,\,\,\,\mbox{and}\,\,\,\,\,\,{\Pi}_{ab}^{S}\rightarrow\left(
\begin{array}
[c]{cc}%
\Pi_{33}^{S} & 0\\
0 & \bar{\Pi}_{ab}^{S}%
\end{array}
\right) ,
\end{equation}
where the matrix elements are given in Appendix A.
The mass of the $\sigma$ meson can be determined by the condition
$D_{\sigma}^{-1}(M_{\sigma},\mathbf{0})=0$, where
\begin{equation}
D_{\sigma}^{-1}=\left( \mathcal{A}+\mathcal{C}\right) -\sqrt
{(\mathcal{C}-\mathcal{A})^{2}+4\mathcal{B}^{2}}%
\end{equation}
with $\mathcal{A}=S_{88}-\Delta\Pi^S_{00}(P),\,
\mathcal{C}=S_{00}-\Delta\Pi^S_{88}(P),\,\mathcal{B}=-(S_{08}+\Delta\Pi^S_{08}(P))$
and $\Delta=S_{00}S_{88}-(S_{08})^{2}$.
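A quick consistency check, with arbitrary illustrative numbers rather than model values: the combination $(\mathcal{A}+\mathcal{C})-\sqrt{(\mathcal{C}-\mathcal{A})^{2}+4\mathcal{B}^{2}}$ is twice the smaller eigenvalue of the symmetric $2\times 2$ matrix with diagonal $(\mathcal{A},\mathcal{C})$ and off-diagonal $\mathcal{B}$ that mixes the $a=0,8$ scalar channels, so $D_{\sigma}^{-1}=0$ corresponds to the vanishing of the lower of the two mixed modes.

```python
import numpy as np

# Arbitrary stand-ins for A = S_88 - Delta*Pi_00(P), C = S_00 - Delta*Pi_88(P),
# and the mixing term B; any real values illustrate the identity.
A, B, C = 1.7, -0.4, 0.9

combo = (A + C) - np.sqrt((C - A)**2 + 4*B**2)
# smaller eigenvalue of the symmetric 2x2 mixing matrix [[A, B], [B, C]]
lam_min = np.linalg.eigvalsh(np.array([[A, B], [B, C]])).min()
```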
Finally, the model is fixed by the coupling constants $g_S$ and $g_D$, the cutoff in
three-momentum space, $\Lambda$, which is used to regularize the momentum-space integrals,
and the current quark masses $m_i$. For numerical calculations in physical conditions we
use the parameter set \cite{Rehberg:1995PRC,Costa:2003PRC,Costa:2005PRD70,Costa:2005PRD71}:
$m_u = m_d = 5.5$ MeV,
$m_s = 140.7$ MeV,
$g_S \Lambda^2 = 3.67$,
$g_D \Lambda^5 = -12.36$ and
$\Lambda = 602.3$ MeV,
which has been determined by fitting the values
$M_{\pi} = 135.0$ MeV,
$M_K = 497.7$ MeV,
$f_\pi = 92.4$ MeV, and
$M_{\eta'}= 960.8$ MeV.
For the quark condensates we obtain:
$\left\langle \bar{q}_{u}\,q_u\right\rangle = \left\langle\bar{q}_{d}\,q_d\right\rangle = - (241.9 \mbox{ MeV})^3$ and
$\left\langle\bar{q}_{s}\,q_s\right\rangle = - (257.7 \mbox{ MeV})^3$, and
for the constituent quark masses
$M_u= M_d= 367.7$ MeV and $M_s= 549.5$ MeV.
\section{Equations of state and phase transition}\label{phase}
We will start the discussion of the phase diagram of the NJL model (\ref{lagr}) by
analyzing the behavior of the pressure/energy per particle as a function of the baryonic
density, paying special attention to the Gibbs criteria.
Our model of strongly interacting matter can simulate either a region in the interior of a
neutron star or a hot and dense fireball created in a heavy-ion collision. In the present
work we focus our attention on the latter type of system, so we impose the condition
$\mu_e=0$, since electrons and positrons are not involved in the strong interaction. We
thus naturally obtain the chemical equilibrium condition $\mu_u=\mu_d=\mu_s=\mu_B$ that
will be used. This choice allows for equal constituent quark masses, $M_u=M_d$, and
approximates the physical conditions at RHIC.
In this respect, we recall that in a relativistic heavy-ion collision with a duration
of $\sim 10^{-22}\, s$, thermal equilibration is possible only for processes mediated
by the strong interaction, rather than full electroweak equilibrium.
Let us discuss our results for the pressure/energy per baryon at zero temperature that
are plotted in Fig. \ref{Fig:1} as a function of $\rho_B/\rho_0$ (solid lines),
where $\rho_0 = 0.17$ fm$^{-3}$ is the normal nuclear matter density. The pressure has
three zeros, respectively, at $\rho_B=0$, $0.43 \rho_0$, and $2.36\rho_0$, which correspond
to the extrema of the energy per particle. For $\rho_B < 0.2 \rho_0$ the pressure and
compressibility are positive, so the system can exist in a uniform gas phase, but it will
not survive indefinitely, since the zero density state is energetically favored; for $
0.2 \rho_0<\rho_B < 0.43 \rho_0$ the system is unstable since the compressibility is
negative, in fact $\rho_B=0.43 \rho_0$ corresponds to a maximum of the energy per
particle; for $ 0.43 \rho_0<\rho_B < 2.36 \rho_0$, the pressure is negative, and the
third zero of the pressure, $\rho_B=2.36 \rho_0$, corresponds to an absolute minimum of
the energy (see Fig. \ref{Fig:1} (right panel)).
The appearance of an absolute minimum of the energy implies the possibility for finite
droplets to be in mechanical equilibrium with the vacuum at zero pressure ($P=0$).
Above $\rho_B=2.36 \rho_0$, which we define as $\rho_B^{cr}$, we have again a uniform gas
phase. So, for densities $0<\rho_B<\rho_B^{cr}$ the equilibrium configuration is a mixed
phase. This is because the Gibbs criterion of equal $P$ and $\mu_B$ is satisfied and,
therefore, the phase transition is a first order one: the thermodynamic potential has
two degenerate minima at which two phases have equal pressure and chemical potential and
can coexist. Such a situation is possible in regions where the gap equations have several
solutions for the quark masses.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=1a.eps,width=8.5cm,height=8cm} &
\hspace*{-0.75cm}\epsfig{file=1b.eps,width=9cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-1.0cm} \caption{ Pressure (left) and energy per particle
(right) as a function of the density at different temperatures. The
points $A$ and $B$ (left panel) illustrate the Gibbs criteria.
Only for the $T=0$ line is the zero-pressure point located at
the minimum of the energy per particle.}
\label{Fig:1}
\end{figure}
Summarizing the results at $T=0$, the behavior described allows the following
interpretation: the uniform nonzero density phase will break up into stable droplets
with zero pressure and density $\rho_B^{cr} = 2.36 \rho_0$ in which chiral symmetry is
partially restored, surrounded by a nontrivial vacuum with $\rho_B=P=0$ (see also
\cite{Buballa:NPA1998,Mish2000,Buballa:2004PR,Costa:2003PRC,Rajagopal:1999NPA}).
In fact, for our choice of the parameters the critical point at $T=0$ satisfies the
condition $\mu_i<M_i^{vac}$ \cite{Buballa:2004PR,Scavenius}, where $M_i^{vac}$ is
the mass of the $i$-quark in the vacuum. This can be seen by comparing
$\mu_B^{cr}=361$ MeV (see the T-axis of Fig. 2, left panel) with the quark masses
$M_u^{vac}\,=\,M_d^{vac}\,=\,367.7$ MeV and $M_s^{vac}\,=\,549.5$ MeV.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=2a.eps,width=8.5cm,height=8cm} &
\hspace*{-0.5cm}\epsfig{file=2b.eps,width=8.5cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-1.0cm} \caption{Phase diagram in the SU(3) NJL model. The left
(right) part corresponds to the $T-\mu_B$ ($T-\rho_B$) plane. Solid
(dashed) line shows the location of the first order (crossover)
transition. The dotted lines show the location of the spinodal
boundaries of the two phase transitions (shown by shading in the
right plot).}
\label{Fig:2}
\end{figure}
As can be seen from Fig. \ref{Fig:1}, as the temperature increases, the first order
transition persists up to the CEP. At the CEP the chiral transition becomes of second
order. Along the line of a first order phase transition the thermodynamic potential has
two degenerate minima. These minima are separated by a finite potential barrier making
the potential nonconvex. The height of the barrier is largest at zero temperature and
finite quark chemical potential and decreases towards higher temperature. At the CEP the
barrier disappears and the potential flattens. This pattern is characteristic of a first
order phase transition: the two minima correspond, respectively, to the phases of broken
and restored symmetry. The borders of the coexistence area are marked by the dotted lines
in Fig. \ref{Fig:2}. The domain between the two dotted lines has metastable states which
are characterized by large fluctuations. They are also solutions of the gap equations
but their thermodynamic potential is higher than for the stable solutions. The left
dotted curves represent the beginning of the metastable solutions of restored symmetry
in the phase of broken symmetry, while the right dotted curves represent the end of the
metastable solutions of broken symmetry in the restored symmetric phase. We also
represent in Fig. \ref{Fig:2} (right panel) the region where the solutions of the gap
equations are unstable.
The location of the CEP is found to be at $T^{CEP} = 67.7$ MeV and
$\rho_B^{CEP}=1.68\rho_0$ ($\mu_B^{CEP} = 318.4$ MeV). For
temperatures above the CEP the thermodynamic potential has only one
minimum and the transition is washed out: a smooth crossover takes
place.
Finally, we will focus again on the energy per baryon. In Fig. \ref{Fig:1}
(right panel), we plot the density dependence of the energy per baryon at different
temperatures.
We observe that the two points, the zero of the pressure and the minimum of the energy
per particle, do not coincide at finite temperature. In fact, as can be seen from Fig.
\ref{Fig:1} (left panel), states with zero pressure are only possible up to the
maximal temperature $T_m \sim 38$ MeV.
We notice that the zero-pressure states persist up to temperatures of 70 MeV in a
two-flavor NJL model where equal chemical potentials of quarks and antiquarks are assumed
\cite{Mish2000}.
For $T<T_m$ the zero-pressure states are in the metastable density region and, as soon as
$T\neq 0$, they do not coincide with the minimum of the energy per particle.
The arguments just presented allow us to clarify the difference between confined quark
matter (in hadrons) and bound quark matter (droplets of quarks). As would be
expected, the binding mechanism is weaker than the confining one (nonexistent in the NJL
model). As a matter of fact, in spite of the existence of a binding energy for the
droplets of quarks at $T=0$, we verify that it is not possible to avoid the evaporation
of the bound quarks at arbitrarily small temperatures.
More detailed information concerning the structure of the phase
diagram will be given in Sec. V.
\section{Thermodynamic quantities in the $T-\mu_B$ plane}
For a better understanding of the thermodynamics of the phase transitions, we analyze in
this section the behavior of the thermodynamic quantities that are most relevant for
discussing the physics across the first order phase transition. With these quantities, we
can also discuss the latent heat, which is inherent to this type of transition.
The pressure is plotted on the left-hand side of Fig. \ref{Fig:3} (upper part); it
shows a continuous behavior at all points of the phase diagram.
In a first order phase transition a discontinuity occurs in the first derivatives of the
pressure (or the thermodynamic potential) with respect to $\mu_B$ and $T$, {\em i.e.},
the baryon number density and the entropy density, respectively.
In fact, as can be seen in the right side of Fig. \ref{Fig:3}, the entropy density
is discontinuous in the first order phase transition region
($T<T^{CEP},\,\mu_B>\mu_B^{CEP}$). A similar behavior is found for the energy density,
whose curves show that the first order phase transition, strong at $T=0$, turns into a
less abrupt one as the temperature increases (see Fig. \ref{Fig:3}, lower part).
In the crossover transition ($T>T^{CEP},\,\mu_B<\mu_B^{CEP}$) the thermodynamic
quantities change rapidly within a narrow range of values of $T$ and $\mu_B$, but the
pressure and all its derivatives remain continuous, as shown in Fig. \ref{Fig:3}.
The discontinuities of the entropy and energy densities disappear at the CEP, whose
location cannot be determined by universality arguments. The same is not true for the
local singular behavior of thermodynamic quantities around the CEP, which will be
discussed in the next section through the critical exponents.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=3a.eps,width=8.750cm,height=7.0cm} &
\hspace*{-0.25cm}\epsfig{file=3b.eps,width=8.0cm,height=6.25cm} \\
\end{tabular}
\epsfig{file=3c.eps,width=8.0cm,height=6.25cm}
\end{center}
\vspace{-0.5cm} \caption{Pressure (left side of upper part), entropy density (right
side of upper part) and energy density (lower part) as functions of the temperature and
the baryonic chemical potential.}
\label{Fig:3}
\end{figure*}
Let us now analyze what information concerning the latent heat we can extract from our
results. As already mentioned in Sec. III, along the line of first order phase transition
the thermodynamic potential has two degenerate minima that are separated by a finite
barrier. This barrier is largest at zero temperature and finite chemical potential and
decreases towards higher temperature. At the CEP the barrier disappears, which means
that there is no latent heat at this point.
As a grand canonical approach is applied to our model of strongly interacting matter, the
independent quantities $T$ and $\mu_B$ represent the state variables which can be
externally controlled.
Thus, the conjugates of the intensive variables $T$ and $\mu_B$ in the Legendre
transformation {---} the entropy density $s$ and the baryonic density $\rho_B$
{---} provide a more natural description.
By analyzing first the gap in the curves of the entropy (Fig. \ref{Fig:3}, right side of
upper part), we see that the latent heat decreases for small temperatures, which is not
the expected behavior.
This analysis is, however, not sufficient; both the baryonic density and the entropy
density contributions should be examined for more reliable information about the latent heat.
We recall that the gap of the baryonic density across the first order phase transition
is largest at zero temperature and finite chemical potential and vanishes at the CEP
(see Fig. \ref{Fig:2}, right panel).
The discontinuities in the energy density include both the entropy and the baryonic
density contributions and, as can be seen in Fig. \ref{Fig:3}, they display the expected
behavior: the latent heat increases for decreasing temperatures.
Finally, to understand the thermodynamics of matter created in relativistic heavy-ion
collisions, it is convenient to calculate thermodynamic quantities along lines of
constant entropy per baryon number, the so-called isentropic lines. Most of these studies
have been done in lattice calculations for two-flavor QCD at finite $\mu_B$
\cite{Ejiri1200}, where an unphysical mass spectrum corresponding to a too large pion
mass, $m_\pi \simeq 770$ MeV, has been used. Such studies predict that the effects of
the CEP change only slowly as the collision energy is varied, as a consequence of the
attractor character of the CEP \cite{Stephanov:1998PRL}.
Our model calculations for the isentropic lines in the $T-\mu_B$ plane are shown in Fig.
\ref{Fig:4}. The behavior we find is somewhat different from that claimed by other
authors \cite{Stephanov:1999PRD,Ejiri1200,Nonaka}, where a phenomenon of focusing of
trajectories towards the CEP is observed.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=4a.eps,width=8.50cm,height=8cm} &
\hspace*{-0.75cm}\epsfig{file=4b.eps,width=8.50cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-0.5cm}
\caption{Two perspectives of the entropy per baryon number in the
$T-\mu_B$ plane. The vicinity of the CEP is enlarged in the right panel.}
\label{Fig:4}
\end{figure}
The isentropic trajectories in the phase diagram (Fig. \ref{Fig:4}) indicate that the
slope of the trajectories becomes large at large $T$. This behavior is related to
the effects of the strange quark mass in our model. In fact, at high temperatures the
relation $\mu_s>M_s$ holds, allowing for a more pronounced decrease of $M_s$
\cite{Costa:2003PRC}.
Although the entropy and the baryon number density, at high temperatures, are sensitive
to the regularization procedure used \cite{Zhuang:1994NPA,Costa:2007}, this effect is not
relevant for the present situation. The same is not true with respect to the effects of
the value of the cutoff itself in the regime of low temperatures as will be shown below.
In a small range of $s/\rho_B$ around $0.7$ (see Fig. \ref{Fig:4}, right panel), we
observe a tendency of these isentropic lines to converge towards the CEP. These lines
come from the region of partially restored symmetry in the direction of the crossover
line.
For smaller values of $s/\rho_B$, the isentropic lines turn about the CEP and then
attain the first order transition line. For larger values of $s/\rho_B$ the isentropic
trajectories approach the CEP by the region where the chiral symmetry is still broken,
and also attain the first order transition line after bending toward the critical point.
As already pointed out in \cite{Scavenius}, this is a natural result in this type of
quark model, where there is no change in the number of degrees of freedom of the system
between the two phases. As the temperature decreases, a first order phase transition
occurs, the latent heat increases, and the formation of the mixed phase is
thermodynamically favored.
Finally, we remark that all isentropic trajectories terminate at $T=0$ directly on the
first order transition line, without the reheating in the mixed phase seen in the
``zigzag'' shape of \cite{Subramanian,Stephanov:1999PRD,Ejiri1200,Nonaka}.
It is also interesting to point out that, in the limit $T\rightarrow 0$, we verify
that $s \rightarrow 0$ and $\rho_B \rightarrow 0$, as it should be.
This behavior is in contrast to that of \cite{Scavenius} (right panel of Fig. 9), obtained
with the NJL model in the SU(2) sector, and is related to our more suitable choice of the
model parameters, mainly a lower value of the cutoff. It can be explained by the presence of
droplets at $T=0$, whose stability is quite sensitive to the choice of the model parameters.
In fact, as mentioned in Sec. III, our choice of the parameters has important effects:
we verify that, at $T=0$, the phase transition connects the vacuum state ($P=0,
\rho_B=0$) directly with the phase of partially restored chiral symmetry ($P=0,
\rho=\rho_B^{cr}$), and the critical point of the phase transition under these conditions
satisfies $\mu_i<M_i^{vac}$, where $M_i^{vac}$ is the mass of the $i$-quark
($i=u,d,s$) in the vacuum. This condition fulfills the criterion of stability of the
quark droplets \cite{Buballa:2004PR,Costa:2003PRC}. In addition, it is also crucial for
the fulfillment of the third law of thermodynamics in the limit $T\rightarrow 0$. This
cutoff effect plays an identical role in the formation of stable droplets in both the
SU(2) and SU(3) NJL models.
\section{Phase diagrams and susceptibilities in the vicinity of the
critical end point}
In this section we analyze in more detail the phase diagrams in different conditions
in the $T-\mu_B$ plane. Lattice QCD calculations have established the transition to a
phase where quarks and gluons are deconfined at temperatures larger than $\sim 150$ MeV
and zero baryon density. Depending on the number of quark flavors, $N_f=2$ or $N_f=3$, and
on the masses of the quarks, different situations can occur, and the transition from
hadronic matter to QGP may be a first order, second order, or crossover transition.
To confront the model results with the universality arguments, we will discuss the nature
of the critical points by changing the current quark masses in the SU(2) and SU(3) versions
of the NJL model.
\subsection{Characteristics of the $T-\mu_B$ phase diagram}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=5a.eps,width=8.50cm,height=8cm} &
\hspace*{-0.75cm}\epsfig{file=5b.eps,width=8.50cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-0.5cm} \caption{Phase diagram in the SU(2) (left) and SU(3) (right) NJL models.
The solid line represents the first order phase transition, the dashed line the
second order phase transition, and the dotted line the crossover transition.
The size of the critical region is also plotted for several values of
$\chi_B/\chi_B^{free}$. The TCP in the right panel is found for $m_u=m_d=0$ MeV and
$m_{s}=140.7$ MeV.}
\label{Fig:5}
\end{figure}
We start by analyzing the differences between the three-flavor NJL model and its simpler
version in the SU(2) sector. The phase diagrams for both models are presented in Fig.
\ref{Fig:5} as a function of $\mu_B$ and $T$.
Concerning the SU(2) model, and using physical values of the quark masses, $m_u = m_d =
5.5$ MeV, we find that the CEP is located at $T^{CEP}=79.9$ MeV and $\mu_B^{CEP} =
331.72$ MeV ($\rho_B^{CEP}=2.26 \rho_0$).
We also verified that, in the chiral limit, the transition is of second order at
$\mu_B=0$ and, as $\mu_B$ increases, the line of second order phase transitions ends
in a first order line at the TCP. The TCP is located at $\mu_B^{TCP}=286.1$ MeV and
$T^{TCP}=112.1$ MeV.
For the SU(3) NJL model, also in the chiral limit ($m_u=m_d=m_s=0$), we verify that the
phase diagram does not exhibit a TCP: chiral symmetry is restored via a first order
transition for all baryonic chemical potentials and temperatures (see right panel of Fig.
\ref{Fig:5}).
According to lattice analysis, this pattern of chiral symmetry restoration should persist
even when the strange quark acquires a nonzero current mass, provided it is lower
than a critical value ($m_s < m_s^{crit}$) and $m_u=m_d=0$ is kept. The value of
$m_s^{crit}$ is not settled yet, those found in lattice \cite{Laermann} or in model
calculations \cite{Hsu:1998PLB,Barducci:2005PRD} being lower than the physical strange
current quark mass ($m_s\approx 150$ MeV). We found $m_s^{crit}=18.3$ MeV in our model
\cite{PLBcosta}, lower than lattice values \cite{Laermann} but consistent with what is
expected in this type of model \cite{Barducci:2005PRD}.
When $m_s\geq m_{s}^{crit}$, at $\mu_B=0$, the
transition is of second order and, as $\mu_B$ increases, the line of second order
phase transitions ends in a first order line at the TCP. The TCP for $m_{s}=140.7$
MeV is the closest to the CEP \cite{PLBcosta} and is located at $\mu_B^{TCP}=265.9$ MeV
and $T^{TCP}=100.5$ MeV. If we choose $m_u=m_d\neq0$, instead of a second order transition
we have a smooth crossover, whose critical line ends in the first order line at the
CEP.
Using physical values for the quark masses
\cite{Rehberg:1995PRC,Costa:2005PRD70}, $m_u = m_d = 5.5$ MeV and $m_s = 140.7$ MeV, this
point is located at $T^{CEP}=67.7$ MeV and $\mu_B^{CEP} = 318.4$ MeV
($\rho_B^{CEP}=1.68\rho_0$).
We point out that both situations are in agreement with what is expected at $\mu_B=0$:
the chiral phase transition at the chiral limit is of second order for $N_f = 2$ and
first order for $N_f\geq3$ \cite{Pisarski:1984PRD}.
We also observe that the critical region is heavily stretched in the direction of the
crossover transition line, in both $N_f=2$ and $N_f=3$ cases, as shown in Fig.
\ref{Fig:5}.
To estimate the critical region around the CEP we calculate the dimensionless ratio
$\chi_B/\chi_B^{free}$, where $\chi_B^{free}$ is the baryon number susceptibility of a
free massless quark gas.
The left (right) panel of Fig. \ref{Fig:5} shows a contour plot
for two fixed ratios, $\chi_B/\chi_B^{free}=2.0, 3.0$, in the phase diagram around the CEP.
\subsection{Behavior of $\chi_B$ and $C$ in the vicinity of the critical end point
and their critical exponents}
The phenomenological relevance of fluctuations at finite temperature and chemical
potential around the CEP/TCP of QCD has been recognized by several authors.
If the critical region of the CEP is small, it is expected that most of the fluctuations
associated with the CEP will come from the mean field region around the
CEP \cite{Hatta:2003PRD}.
The size of the critical region around the CEP can be found by calculating the baryon
number susceptibility, the specific heat and their critical behaviors.
For a better understanding of the critical behavior of the system, we also analyze in
some detail what happens in the SU(2) case, a sector for which more information is
available in the literature \cite{Sasaki}.
As is well known, the baryon number susceptibility, $\chi_B$, and the specific heat,
$C$, diverge at $T = T^{CEP}$ \cite{Hatta:2003PRD,Schaefer:2006,PLBcosta}. In order to
make this statement more precise, we will focus on the values of a set of indices, the
so-called critical exponents, which describe the behavior near the critical point of
various quantities of interest (in our case $\epsilon$ and $\alpha$ are the critical
exponents of $\chi_B$ and $C$, respectively). The motivation for this study arises from
fundamental phase transition considerations, and thus transcends any particular system.
These critical exponents will be determined by finding two directions, temperature-like
and magnetic-field-like, in the $T-\mu_B$ plane near the CEP because, as pointed out in
\cite{Griffiths:1970PR}, the strength of the divergence is governed by the critical
exponents, whose values depend on the path along which the CEP is approached.
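Explicitly, the exponents listed in Table I parametrize the assumed power-law
divergences (a standard mean field parametrization; the amplitudes are non-universal
constants),
\begin{equation}
\chi_B \sim \left|\mu_B-\mu_B^{CEP}\right|^{-\epsilon}, \qquad
C \sim \left|T-T^{CEP}\right|^{-\alpha},
\end{equation}
with primed exponents referring to paths approaching the critical point from above.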
\begin{table}[t]
\begin {center}
\begin{tabular}{cccccccc}
\hline
{Quantity} & { critical exponents/path} && { SU(2) NJL} && { SU(3) NJL} && {Universality} \\
\hline \hline
& {$\epsilon\,/\,\,\rightarrow$\textcolor{red}{$\bullet$}} && { {$0.66 \pm 0.01$}}
&& {$0.67 \pm 0.01$} && {$2/3$} \\
{$\chi_B$} & {{$\epsilon^\prime$\,/\,\,\textcolor{red}{$\bullet$}$\leftarrow$}} &&
{$0.66 \pm 0.01$} && {$0.68 \pm 0.01$} && {$2/3$} \\
& $\gamma_B\,/\rightarrow$\textcolor{blue}{$\bullet$} &&
$0.51 \pm 0.01$ && $0.49 \pm 0.02$ && {$1/2$} \\
\hline
& {$\alpha\,/
\begin{array}{c}
\textcolor{red}{\bullet} \\
\uparrow
\end{array}$}
&& {$
\begin{array}{c}
\alpha=0.59\pm 0.01 \\
\alpha_1=0.45\pm 0.01
\end{array}$}
&& {$
\begin{array}{c}
0.61\pm 0.01 \\
$---$
\end{array}$}
&& {$
\begin{array}{c}
2/3 \\
$---$
\end{array}$} \\
{$C$}
& {$\alpha^\prime/
\begin{array}{c}
\downarrow \\
\textcolor{red}{\bullet}
\end{array}$}
&& {$0.69 \pm 0.01$} && {$0.67 \pm 0.01$} && {$2/3$} \\
& {$\alpha\,/
\begin{array}{c}
\textcolor{blue}{\bullet} \\
\uparrow
\end{array}$}
&& {$0.40 \pm 0.01$} && {$0.45 \pm 0.02$} && {$1/2$} \\
\hline
\end{tabular}
\begin{flushleft}
TABLE I: The arrow $\rightarrow\textcolor{red}{\bullet}$ $\left(
\begin{array}{c}
\textcolor{blue}{\bullet} \\
\uparrow
\end{array}
\right)$
indicates the path in the $\mu_B\,(T)-$ direction to the CEP (TCP)
for ${\mu_B<\mu_B^{CEP}}$ $({T<T^{TCP}}$).
\end{flushleft}
\end{center}
\label{tab:(I)}
\end{table}
Considering the baryon number susceptibility, if the path chosen is asymptotically
parallel to the first order transition line at the CEP, the divergence of $\chi_B$ scales
with an exponent $\gamma_B$. In the mean field approximation it is expected that $\gamma_B=1$
for this path. For directions not parallel to the tangent line the divergence scales with
the exponent $\epsilon =2/3$. These values are responsible for the elongation of the
critical region, $\chi_B$ being enhanced in the direction parallel to the first order
transition line (see Fig. \ref{Fig:5}).
To study the critical exponents for the baryon number susceptibility (Eq. \ref{chi}) we
will start with a path parallel to the $\mu_B$-axis in the $T-\mu_B$ plane, from lower
$\mu_B$ towards the critical $\mu_B^{CEP}$, at fixed temperature $T = T^{CEP}$. Using a
linear logarithmic fit
\begin{equation}
\ln \chi_B = -\epsilon \ln |\mu_B -\mu_B^{CEP} | + c_1 ,
\end{equation}
where the term $c_1$ is independent of $\mu_B$, we obtain $\epsilon = 0.67\pm 0.01$,
which is consistent with the mean field theory prediction $\epsilon = 2/3$.
We also study the baryon number susceptibility from higher $\mu_B$ towards the
critical $\mu_B^{CEP}$. The logarithmic fit used now is
$\ln \chi_B = -\epsilon' \ln |\mu_B -\mu_B^{CEP}| + c'_1$.
Our result shows that $\epsilon' = 0.68\pm 0.01$, which is very close to the value
of $\epsilon$. This means that the size of the region we observe is approximately
the same, independent of the direction chosen for the path parallel to the
$\mu_B$-axis.
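As an illustration of this procedure, the exponent extraction reduces to a linear
least-squares fit in log-log variables. The sketch below uses synthetic data obeying an
exact mean field power law; the prefactor and the sampling of $\mu_B$ are choices made
for the example, not model output:

```python
import numpy as np

# Synthetic susceptibility data obeying chi_B ~ |mu_B - mu_c|^(-2/3),
# mimicking the mean field divergence discussed in the text.
mu_c = 318.4                         # CEP chemical potential in MeV (illustrative)
mu = mu_c - np.logspace(-3, 1, 40)   # approach the CEP from below
chi = 5.0 * np.abs(mu - mu_c) ** (-2.0 / 3.0)

# Linear fit  ln(chi_B) = -eps * ln|mu_B - mu_c| + c1
slope, intercept = np.polyfit(np.log(np.abs(mu - mu_c)), np.log(chi), 1)
eps = -slope
print(f"fitted epsilon = {eps:.3f}")   # close to 2/3
```

In practice the fit window must stay close enough to the critical point for the
asymptotic scaling to hold, which is why the quoted exponents carry error bars.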
These critical exponents, calculated in both SU(2) and SU(3) NJL models,
are presented in Table I.
For comparison purposes with the universality/mean field predictions, the calculated
critical exponents at the TCP are also presented in Table I.
It is found that
the critical exponent of $\chi_B$ at the TCP, $\gamma_B$, has the value
$\gamma_B=0.49\pm0.02$ for the SU(3) NJL model and $\gamma_B=0.51\pm0.01$ for the SU(2)
NJL model. These results are in agreement with the mean field value ($\gamma_B=1/2$)
and show that the behavior of the baryon number susceptibility is similar in both the
SU(2) and SU(3) versions of the model.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=6a.eps,width=8.50cm,height=8cm} &
\hspace*{-0.75cm}\epsfig{file=6b.eps,width=8.50cm,height=8cm} \\
\hspace*{-0.5cm}\epsfig{file=6c.eps,width=8.50cm,height=8cm} &
\hspace*{-0.75cm}\epsfig{file=6d.eps,width=8.50cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-0.5cm} \caption{ Upper part: Specific heat as a function of $|T-T^{CEP}|$ at
the fixed chemical potential $\mu^{CEP}_B$ for SU(2) (left) and SU(3) (right) NJL models.
Lower part: Specific heat as a function of $|T-T^{TCP}|$ at the fixed chemical potential
$\mu^{TCP}_B$ for SU(2) (left) and SU(3) (right) NJL models. }
\label{Fig:6}
\end{figure}
Turning now to the specific heat (Eq. \ref{c}) around the CEP, we have used a
path parallel to the $T$-axis in the $T-\mu_B$ plane from lower/higher $T$ towards the
critical $T^{CEP}$ at fixed $\mu_B = \mu_B^{CEP}$. In Fig. \ref{Fig:6} (upper part) we
plot $C$ as a function of $T$ close to the CEP in a logarithmic scale for both the SU(2)
and SU(3) calculations.
In this case we use the linear logarithmic fit $\ln C = -\alpha \ln |T -T^{CEP}| + c_2$,
where the term $c_2$ is independent of $T$.
Starting with the SU(2) case, we observe in the left panel that, for the region
$T<T^{CEP}$, the slope of the data points changes for values of $|T-T^{CEP}|$
around $0.3$ MeV. We have fitted the data for $|T-T^{CEP}|<0.3$ MeV and $|T-T^{CEP}|>0.3$
MeV separately and obtained, respectively, the critical exponents $\alpha=0.59\pm 0.01$
and $\alpha_1=0.45\pm 0.01$, which exhibit a linear behavior over several orders of
magnitude (see also Table I). As pointed out in \cite{Hatta:2003PRD}, this change of the
exponent can be interpreted as a crossover between different universality classes, with
the CEP being affected by the TCP. It seems that in our model the effect of the hidden TCP
on the CEP is relevant for the specific heat, contrary to what happens for $\chi_B$.
We also observe that there is no clear evidence of a change of slope of the fitted
data points in the three-flavor NJL model (see Fig. \ref{Fig:6}, right panel of the
upper part, and Table I).
In fact, now we only obtain a critical exponent
$\alpha=0.61\pm 0.01$ when the critical point is approached from below. When the critical
point is approached from above, the trivial exponent $\alpha^\prime =0.67\pm 0.01$ is
obtained.
To explore the possible effect of the hidden TCP on the CEP, as suggested in Refs.
\cite{Hatta:2003PRD,Schaefer:2006}, we analyze the behavior of the specific heat around
the TCP.
As shown in Fig. \ref{Fig:6} (lower part) and Table I, we find the nontrivial critical
exponents $\alpha=0.40\pm0.01$ and $\alpha=0.45\pm0.01$ for the SU(2) and SU(3) cases,
respectively.
These results, in spite of being close, are not in agreement with the respective mean
field value ($\alpha=1/2$).
However, they may explain the crossing effect observed.
We notice that the closest distance between the TCP and the CEP in the phase diagram
occurs in the $T$-direction ($(T^{TCP} - T^{CEP})<(\mu_B^{CEP} - \mu_B^{TCP})$).
The inconsistency with the mean field values only occurs for the exponent $\alpha$,
as can be seen from Table I.
According to what was suggested by universality arguments in \cite{Hatta:2003PRD}, it
was expected that $\chi_B$ and $C$ should be essentially the same near the TCP and the
CEP, which would imply $\alpha=\epsilon=2/3$ at the CEP.
Nevertheless, we observe that the nontrivial values of $\alpha$ at the TCP and at the CEP
are consistent within the NJL model for both the SU(2) and SU(3) versions, and they
reflect the effect of the TCP on the CEP. We also stress that the universality arguments
are so general that they give no quantitative results and, due to the lack of information
from the lattice simulations, they should be confronted with model calculations.
Our results seem particularly interesting because the NJL model shares with QCD some
features, such as the dynamics of chiral symmetry.
In particular, the physics underlying the critical singularities in the QCD diagram is
associated with this fundamental property of the strong interaction.
The NJL model is thus a useful framework, offering insight into the difficult task of
analyzing the QCD phase diagram at finite temperature and chemical potential.
The eventual difference between the values of the $C$ and $\chi_B$
critical exponents can be interesting for heavy-ion collision experiments.
\section{Partial and effective restoration of chiral symmetry}
As we have shown in previous sections, thermodynamics provides a well-established
procedure, for instance the Gibbs criterion, to determine the critical points for the
phase transition in the first order region. It follows that these critical points are
signaled by the discontinuity of several relevant observables (masses, quark
condensates) at some critical chemical potential, a situation that does not happen in the
crossover region, where these observables are continuous. At present, the most commonly
accepted criterion, which will be used here, to define the critical point in the
crossover region is to identify this point with the inflection point of the quark masses,
$\partial^{2}M/\partial T^{2}=0$ \cite{Buballa:2004PR}, or, equivalently, of the quark
condensates \cite{Alles,Ruivo05}. This criterion is numerically equivalent to the one
first proposed by M. Asakawa and K. Yazaki, which defines as the critical point the one
where the constituent quark masses decrease to half of their values in the vacuum
($M_{u}=M_{u}(0)/2$) \cite{Asakawa:1989NPA}. From this point on
the quark masses decrease quickly.
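As a simple numerical illustration of this inflection-point criterion, one can locate the
zero of $\partial^{2}M/\partial T^{2}$ on a sampled mass curve. The tanh profile below is
a hypothetical stand-in for a crossover-like $M(T)$, with the inflection placed at
$T=200$ MeV by construction; it is not the NJL result:

```python
import numpy as np

# Hypothetical crossover-like mass curve: M drops from ~350 MeV towards
# the current mass, with the inflection placed at T = 200 MeV.
T = np.linspace(50.0, 350.0, 601)                # temperature grid in MeV
M = 180.0 - 170.0 * np.tanh((T - 200.0) / 40.0)  # constituent mass in MeV

# Second derivative on the grid; the pseudo-critical temperature is
# identified with the zero crossing of d2M/dT2.
d2M = np.gradient(np.gradient(M, T), T)
i = np.argmin(np.abs(d2M[5:-5])) + 5   # trim edges where np.gradient degrades
T_inflection = T[i]
print(f"T_c from inflection criterion: {T_inflection:.1f} MeV")
```

The same search applied to the quark condensate instead of the mass gives a numerically
equivalent pseudo-critical temperature, as stated above.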
Both in the first order and in the crossover regions, it is verified that the quark
masses, especially those of the non-strange quarks, decrease strongly at the critical
point. However, at this point several observables that violate chiral symmetry are still
far from zero, such as the quark condensates, the pion decay constant, and the difference
between the masses of the chiral partners.
One can say, therefore, that at the critical point there occurs only a \textit{partial}
restoration of chiral symmetry.
In view of what was said above we use the following criteria: we define the point in the
$T- \mu_B$ plane for the phase transition associated with \textit{partial} restoration of
chiral symmetry as the inflection (discontinuity) point of the quark masses, and define
the point of \textit{effective} restoration of chiral symmetry as the one where the
masses of the chiral partners become degenerate.
This is also signaled by the merging of the $\pi^0$ and $\sigma$ spectral functions
\cite{Hubert}.
As we include the strange sector in this study, the consequences of the nonvanishing
anomaly term (mixing effects) on the strangeness content of mesons and mixing angles must
be analyzed. In fact, as the temperature (density) increases, the mixing angles get
close to their ideal values and the strangeness content of the mesons changes
\cite{Costa:2005PRD70,Costa:2005PRD71}: the masses of the mesons that become almost
non-strange, $\sigma$ and $\eta$, converge, respectively, with those of the non-strange
mesons $\pi^0$ and $a_0$, while that of the $\eta^\prime$, which becomes essentially
strange, does not get close to that of the $f_0$ (see \cite{Costa:2005PRD71} (Fig. 2, Case I));
the convergence of the chiral partners $(\kappa, K)$, which have a $\bar u s$ structure,
occurs at higher temperatures and is probably slowed by the small decrease of the
constituent strange quark mass, $M_s$.
For the purpose of discussing the {\em effective} restoration of chiral symmetry,
we restrict our analysis to the chiral partners $(\pi^0, \sigma)$, which behave in a
manner qualitatively similar to the pair $(a_0, \eta)$.
The behavior of the masses of the chiral partners ($\pi^{0},\sigma$) in the limiting
cases ($T\neq 0$, $\rho_B=0$) and ($T=0$, $\rho_B\neq 0$) is qualitatively similar and
well known from the literature: they converge at a certain value of the temperature
(density). The main difference between the finite temperature and the finite density cases
is that, in the first one, the degeneracy of the chiral partners occurs in a range of
temperatures where the mesons are no longer bound states: the $\pi^{0}$ dissociates into a
$q\bar{q}$ pair at the Mott temperature $T_{M\,\pi^0}=212$ MeV
\cite{Rehberg:1995PRC,Costa:2003PRC}, and the $\sigma$ at the Mott temperature
$T_{M\,\sigma}=160$ MeV; in the finite density case, the mesons are always bound states.
Interesting information can be obtained by calculating the masses of the $\pi^0$ and
$\sigma$ mesons as functions of $T$ and $\rho_B (\mu_B)$, which allows us to obtain a curve
in the $T-\rho_{B}(\mu_B)$ plane. This curve defines the line where the mesons become
degenerate (Fig. \ref{Fig:7}). In Fig. \ref{Fig:7} we also
represent the ``Mott lines'' for the $\pi^{0}$ and the
$\sigma$, as well as the critical line. As can be seen, the phase transition
associated with \textit{partial} restoration of chiral symmetry occurs above the Mott line
for the pion and below the Mott line for the sigma in most of the first order phase
transition region; the opposite happens in the crossover region. Concerning the
\textit{effective} restoration of chiral symmetry, one can see, from the line of
convergence of the chiral partners, that it happens after the \textit{partial}
restoration of chiral symmetry and the dissociation of the two mesons.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.5cm}\epsfig{file=7a.eps,width=8.5cm,height=8cm} &
\hspace*{-0.5cm}\epsfig{file=7b.eps,width=8.5cm,height=8cm} \\
\end{tabular}
\end{center}
\vspace{-1.0cm}\caption{ The \textit{effective} restoration of chiral symmetry, the phase
transition and the Mott lines for $\pi^{0}$ and $\sigma$ mesons in the
$T-\rho_B(\mu_B)$ plane.}
\label{Fig:7}
\end{figure}
As we have already seen, there are dramatic changes in the behavior of
some thermodynamic functions, such as the specific heat and the quark number
susceptibilities, around the CEP.
So, due to their role as signals of the restoration of chiral symmetry, it is pertinent
to discuss the behavior of the chiral partners ($\pi^0$, $\sigma$).
First we notice, in Fig. \ref{Fig:7}, that the
two Mott lines cross in the first order region at a point just below the CEP. This is
probably a remnant of the situation in the chiral limit, where the transition is of second
order and the pion and sigma dissociate at the same point.
In Fig. \ref{Fig:8} we plot the pion and sigma masses as functions of the
baryonic chemical potential for three different temperatures: $T=40$ MeV $<T^{CEP}$,
$T^{CEP}=67.7$ MeV and $T=100$ MeV $>T^{CEP}$. For $T=40$ MeV and $\mu_B\approx 350$
MeV, a discontinuity is visible in the evolution of the masses, signaling a first
order phase transition. However, according to our criterion, the \textit{effective}
restoration of the chiral symmetry only happens at
$\mu_B\approx 380$ MeV. At the CEP ($T=67.7$ MeV; $\mu_B = 318.4$ MeV), the sharp
decrease (increase) of the sigma (pion) meson mass reflects the nature of the second
order phase transition. Once again the \textit{effective} restoration of chiral symmetry
only happens at $\mu_B\approx 370$ MeV. When $T=100$ MeV $>T^{CEP}$, we have a crossover
and the meson masses have a smooth behavior. In this case, the \textit{effective}
restoration of the chiral symmetry happens at $\mu_B\approx 355$ MeV.
\begin{figure}[t]
\begin{center}
\epsfig{file=8.eps,width=10cm,height=9cm}
\end{center}
\vspace{-1.0cm} \caption{Degeneracy of the chiral partners ($\pi^0$, $\sigma$) for
different temperatures around the CEP.}
\label{Fig:8}
\end{figure}
\section{Conclusions}
The properties of the QCD transition at vanishing chemical potential depend on the
number of quark flavors and on their masses. Critical temperatures from $T_c\approx 155$
MeV up to as high as $T_c\approx 260$ MeV have been reported in the literature. Presently,
considering also nonvanishing chemical potential, some lattice calculations locate the CEP at
$T\approx 160$ MeV and $\mu_B \approx 360$ MeV \cite{Fodor:2004JHEP}. However,
the existence and location of the CEP are not conclusive even for lattice calculations
\cite{Forcrand}.
We have shown that our model calculation reproduces the qualitative features of the
phase structure, and we have also obtained the location of the CEP. We obtained, at zero
baryon chemical potential in the SU(3) NJL model, values for the critical temperature
around $120-200$ MeV. The transition is first order in the chiral limit
($m_u=m_d=m_s=0$). Furthermore, when $m_u=m_d=0$ and $m_s>m_{s}^{crit}$
($m_{s}^{crit}=18.3$ MeV) the transition is of second order ending in a first order line
at the TCP. Finally, when also $m_u=m_d\neq0$, there is a crossover for all values of
$m_s$ and the location of the CEP depends strongly on the strange quark mass.
Contrarily to what happens in the three-flavor NJL model, we find a
TCP in the two-flavor NJL model in the chiral limit.
This agrees with what is expected at $\mu_B=0$: for $m_i=0$ the chiral restoration
happens via a second order phase transition for $N_f = 2$, and via a first order for
$N_f\geq3$.
For realistic values of the current quark masses the CEP is
located at $T^{CEP} = 79.9$ MeV and $\mu^{CEP}_B = 331.7$ MeV for
$N_f=2$, and at $T^{CEP} = 67.7$ MeV and $\mu^{CEP}_B = 318.4$ MeV
for $N_f=3$.
The pattern characteristic of a first order phase transition has also been analyzed
through several equations of state and the latent heat. For example, we verified
that states (droplets) in mechanical equilibrium with the vacuum state at $P=0$
are found at zero temperature.
This leads to nontrivial consequences for the behavior of the isentropic
trajectories which terminate at $T=0$ at the first order transition line.
Our choice of the model parameters, which leads to a first order
phase transition that is stronger than in other treatments of the NJL model,
is crucial to attain this result.
We have studied the baryon number susceptibility and the specific heat, which are related
to event-by-event fluctuations of $\mu_B$ or $T$ in heavy-ion collisions. For $\chi_B$,
we conclude that the critical exponents obtained around the CEP in both the two- and
three-flavor NJL models are consistent with the mean field values
$\epsilon=\epsilon'=2/3$.
For the specific heat we obtain nontrivial exponents $1/2<\alpha<2/3$ around the CEP,
indicating a crossover between different universality classes
\cite{Hatta:2003PRD,Schaefer:2006}. This effect is more clearly visible for the critical
exponent of the specific heat in the SU(2) version of the NJL model, where a crossover
from $\alpha$ to $\alpha_1$ is also observed.
Nevertheless, we notice that the values of $\alpha$ at the TCP and at the CEP are
consistent within both versions of the NJL model.
A better insight into the difficult task of analyzing the phase diagram of QCD can
be provided by an extension of the NJL model where quarks interact with the temporal
gluon field represented by the Polyakov loop dynamics
\cite{Ratti,Hubert,Sasaki,Megias:2006PRD}. Work in this direction is in progress.
Concerning the behavior of the chiral partners in the vicinity of the CEP, we verified
that the two Mott lines, for $\sigma$ and $\pi^0$ respectively, cross at a point just
below the CEP. On the other hand, there is a sharp decrease (increase) of the sigma
(pion) meson mass at the CEP, which reflects the nature of the second order phase
transition at this point.
\begin{acknowledgments}
Work supported by Grant No. SFRH/BPD/23252/2005 (P. Costa), Centro de
F\'{\i}sica Te\'orica, FCT under Project No. POCI/FP/63945/2005 and
under Project No. PTDC/FP/63912/2005 (GTAE).
\end{acknowledgments}
\section{Background}
The class of algorithms commonly referred to as direct $N$-body algorithms is still
one of the most commonly used in simulations in astrophysics. These algorithms are
relatively simple in concept, but can be applied to a wide range of problems: from the
simulation of few-body problems, such as planetary stability, to star clusters and even
small-scale galaxy simulations.
However, these algorithms are also computationally expensive, as they scale
as $O(N^2)$. This makes the method unsuitable for large $N$ ($>10^6$); for such
large $N$ simulations one usually resorts to a lower precision method like the Barnes-Hut
tree-code method~\cite{1986Natur.324..446B} or the Particle Mesh method, which both scale
as $O(N \log N)$ (e.g.~\cite{1969JCoPh...4..306H, 1981csup.book.....H}).
These methods, although faster, are also notably less accurate and not suitable for
simulations that rely on the high accuracy that direct summation, coupled with higher
order integrators, offers.
On the other end of the spectrum one can find even higher accuracy methods,
which use arbitrary precision~\cite{2014ApJ...785L...3P}. The work
of~\cite{2014ApJ...785L...3P} indicates that the accuracy offered by
the default (double precision)
direct $N$-body methods is sufficient for most scientific problems.
The direct $N$-body algorithm is deceptively simple: in its fundamental form it performs
$N^2$ gravitational force computations, a parallel problem that can be efficiently
implemented on almost any computer architecture in a limited number of lines of code.
A number of good examples can be found on the {\tt Nbabel.org} website.
This site contains examples of a simple $N$-body simulation code
implemented in a wide range of programming languages.
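To make this concrete, a minimal (unoptimized) direct-summation acceleration kernel can
be written in a few lines, in the spirit of the Nbabel examples. This is a generic sketch
with $G=1$ units and an arbitrary softening parameter, not code from any of the libraries
discussed here:

```python
import numpy as np

def accelerations(pos, mass, eps2=1e-4):
    """All-pairs gravitational accelerations, O(N^2), with softening eps2 (G = 1)."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                     # vectors from body i to all bodies
        r2 = np.sum(dr * dr, axis=1) + eps2   # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # exclude the self-interaction
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
    return acc

# Two equal unit masses one length unit apart: |a| close to 1 on each body.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
print(accelerations(pos, mass))
```

The $i$-loop is trivially parallel, which is precisely why this shared time-step form
maps so well onto GPUs and other parallel architectures.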
However, in practice there are many variations of the algorithm in use,
with up to eighth order integrations~\cite{2008NewA...13..498N} and algorithmic
extensions such as block time-stepping~\cite{1986LNP...267..156M} and
neighbour schemes~\cite{1973JCoPh..12..389A};
see~\cite{2012EPJST.210..201B} and references therein
for more examples. These variations transform the simple
$O(N^2)$ shared time-step implementation into a complex method,
where the amount of parallelism can differ per time-step.
Especially the dynamic block time-stepping method adds complexity
to the algorithm, since the number of particles that participate
in the computations changes with each integration step.
This variable number of particles involved in computing forces
requires different parallelisation strategies.
In the worst case, only one particle is integrated, which eliminates most of the standard
parallelisation methods for $N^2$ algorithms.
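A sketch of the scheduling step that causes this variable parallelism: in a block
time-step scheme only the particles whose next update time equals the global minimum are
integrated. The snippet below is a generic illustration with power-of-two time steps, not
Sapporo2 code:

```python
import numpy as np

# Per-particle next update times on a power-of-two block time-step grid.
t_next = np.array([0.25, 0.5, 0.25, 1.0, 0.5])

# The system advances to the earliest pending time; only the "active"
# particles that reach this time have their forces recomputed.
t_sys = t_next.min()
active = np.where(t_next == t_sys)[0]
print(t_sys, active)   # only 2 of the 5 particles are integrated this step
```

The size of `active` changes from step to step, down to a single particle in the worst
case, which is why the force kernel must remain efficient for very small blocks.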
There is extensive literature on high performance direct $N$-body methods, with the
first being described in 1963~\cite{1963MNRAS.126..223A}. The method has been efficiently
implemented on parallel machines~\cite{1986LNP...267..156M}, vector
machines~\cite{1988CeMec..45...77H},
and dedicated hardware such as the GRAPEs~\cite{1998sssp.book.....M}.
For an overview we refer the interested reader to the following
reviews~\cite{2012EPJST.210..201B, 2003gmbp.book.....H, 2011EPJP..126...55D}.
Furthermore, there has been extensive work on accelerating $N$-body methods
using GPUs, and several $N$-body libraries have appeared to ease the development
of $N$-body integrators that use the GPU. The first library that offered
support for the GRAPE API
was {\tt Kirin}~\cite{2008NewA...13..103B}; however, this library only
supports single precision and is therefore less accurate than the GRAPE.
The {\tt Yebisu} library~\cite{keigo_thesis} introduced
support for double-single precision\footnote{In this precision, the number of significant
digits is 14, compared to 16 in IEEE double precision. Using a pair of floating point
numbers, double precision accuracy is approximated through single precision floating
point operations.},
which achieved accuracy comparable
to the GRAPE. The library also featured support for fourth and sixth order
Hermite integrators, in combination with minimized data transfer by performing
the prediction on the GPU. This library, however, is not compatible with
the GRAPE API and only supports a single GPU. In our previous work
{\tt Sapporo1}~\cite{Gaburov2009630}, we added support for multiple GPUs
in combination with the GRAPE API and double-single precision.
Apart from libraries there are also $N$-body integrators that
come with built-in support for GPU hardware. For example in~\cite{2011hpc..conf....8B},
the authors combine {\tt Yebisu} and {\tt phiGRAPE}~\cite{2007NewA...12..357H} in the
new {\tt phiGPU} code. This code is able to run on multiple GPUs and supports
up to eighth order accuracy. In~\cite{2013JCoPh.236..580C, 2013CoPhC.184.2528C},
the authors introduce the {\tt HiGPUs} $N$-body code. This standalone
code contains a sixth order integrator, and supports CUDA, OpenCL
and IEEE-754 double precision accuracy. Finally, there is {\tt NBODY6}, which
uses GPU acceleration together with an Ahmad-Cohen neighbour
scheme~\cite{1973JCoPh..12..389A, 2012MNRAS.424..545N}.
In this paper we present our direct $N$-body library, {\tt Sapporo2}.
Since we focus on the library itself, we do not make a full comparison with the
standalone software packages mentioned above.
The library contains built-in support for the second order leap-frog (GRAPE-5),
fourth order Hermite (GRAPE-6) and sixth order Hermite integrators.
The numerical precision can be specified at run time and depends on
requirements for performance and accuracy. Furthermore, the library can keep track of
each particle's nearest neighbour and return a list of all particles within a certain radius.
Depending on the available hardware the library operates with either CUDA or OpenCL, and can
optionally use multiple GPUs, provided they are installed in the same compute node.
The library computes the gravitational force on particles
that are integrated with block time-step algorithms.
However, the library can trivially be applied to any other $O(N^2)$
particle method by replacing the force equations.
For example, methods that compute the Coulomb
interactions~\cite{VanGorp2011192} or molecular
dynamics~\cite{12392506} use similar methods as presented in this work.
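As an illustration of this point, consider the following plain {\tt C++} sketch (our own, not the {\tt Sapporo2} interface), in which the pairwise interaction is a template parameter, so that the same $O(N^2)$ loop can serve gravity, Coulomb interactions or molecular dynamics by swapping only the force kernel:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: the pairwise interaction f is a template parameter,
// so the same O(N^2) double loop serves different force laws.
// (Illustrative only; not the Sapporo2 interface.)
template <typename Pairwise>
std::vector<double> all_pairs(const std::vector<double>& x, Pairwise f) {
    std::vector<double> out(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i)      // sinks
        for (std::size_t j = 0; j < x.size(); ++j)  // sources
            if (i != j) out[i] += f(x[i], x[j]);
    return out;
}
```

Passing a lambda that evaluates the softened gravitational pair force recovers the $N$-body case, while a Coulomb or molecular-dynamics kernel changes only the functor.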
\section{Methods}
\label{Sapporo2:Sect:Method}
With Graphics Processing Units (GPUs) having been readily available in the computational
astrophysics community for over 5 years, we defer a full description of their specifics and
peculiarities to the literature~\cite{2012EPJST.210..201B, 2008NewA...13..103B,Nyland_nbody,CUDAGuide5.5}.
Here we only give a short overview to set the context for the following sections.
In GPU enabled programs we distinguish two parts of code.
The `host' code, used to control the GPU, is executed on the CPU; whereas the `device'
code, performing the majority of the computations, is executed on the GPU.
Each GPU consists of a set of multiprocessors
and each of these multiprocessors contains a set of computational units. We send
work to the GPU in blocks for further processing
by the multiprocessors. In general a GPU requires a large number of these blocks to
saturate the device in order to hide most of the latencies
that originate from communication with the off-chip memory.
These blocks contain a number of threads that perform computations.
These threads are grouped together in `warps' for NVIDIA machines or `wavefronts' on AMD
machines. Threads that are grouped together
share the same execution path and program counter. The smaller the number of threads
that are grouped together, the smaller the impact of thread divergence. On current devices
a warp consists of 32 threads and a wavefront contains 64 threads. This difference
in size has effects on the performance (see Section~\ref{Sapporo2:Sect:Results}).
\subsection{Parallelisation method}
\label{Sapporo2:Sect:Method:Par}
To solve the mutual forces for an $N$-body system the forces exerted by
the $j$-particles (sources) onto the $i$-particles (sinks) have to be
computed. Depending on the algorithm used, the sources and sinks can
belong either to the same or to completely different particle sets,
and the two sets are not required to have the same size.
In worst case situations this algorithm scales as $O(N^2)$, but since
each sink particle can be computed independently it is trivial to parallelise
within a single time-step.
The amount of parallelism, however, depends on the number of sink
particles. For example, in high precision gravitational direct
$N$-body algorithms that employ block time-stepping the number
of sink particles ranges between 1 and $N$.
In general the number of sinks is smaller than the number of sources,
because only the particles of which the position and velocity
require an update are integrated~\cite{1986LNP...267..156M}.
As a consequence the amount of available parallelism in this algorithm
is very diverse, and depends directly on the number of active sink particles.
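The varying amount of parallelism can be made concrete with a small sketch (illustrative plain {\tt C++}; the names are ours, not the {\tt Sapporo2} API) of how a block time-step scheme selects its sink particles: only the particles whose next update time equals the global minimum are integrated in the current step, so the sink count can be anywhere between 1 and $N$.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Each particle carries its current time t and its individual step dt.
struct Particle { double t, dt; };

// Sinks of the current block-step: all particles sharing the smallest
// next update time t + dt. Their number varies between 1 and N.
std::vector<std::size_t> select_sinks(const std::vector<Particle>& p) {
    double t_next = p[0].t + p[0].dt;
    for (const Particle& q : p) t_next = std::min(t_next, q.t + q.dt);
    std::vector<std::size_t> active;
    for (std::size_t i = 0; i < p.size(); ++i)
        if (p[i].t + p[i].dt == t_next) active.push_back(i);
    return active;
}
```

Because individual time-steps are commonly constrained to powers of two, many particles share the same next update time, but in unfavourable cases the active set contains a single particle.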
Currently there are two commonly used methods for solving $N^2$-like
algorithms using GPUs.
The first parallelises over the sink
particles~\cite{2007astro.ph..3100H, 2008NewA...13..103B, Nyland_nbody}
and launches a separate compute thread for each sink particle.
This is efficient when
the number of sinks is large ($> 10^4$), because then the number of compute
threads is sufficiently high to saturate the GPU. However, when the number of sink particles
is small ($\leq 10^4$) there are not enough active compute threads to
hide the memory and instruction latencies. As a result, the GPU will be under-utilized
and only reaches a fraction of the available peak performance.
We expect that future devices require an even larger number of running threads
to reach peak performance, in which case the number of sink particles has to be even larger to
continuously saturate the device.
However, adjusting the number of sink particles to maintain
parallel efficiency is not ideal, because one then artificially
increases the amount of work (and run time) in favour of efficiency.
Therefore, a second
method was introduced in {\tt Sapporo1}~\cite{Gaburov2009630} which takes
a slightly different approach. In {\tt Sapporo1} we parallelise over
the source particles and keep the number of sink particles that are
concurrently integrated fixed.
The source particles are split into subsets, each of which
forms the input against which a set of sink particles is integrated.
The smaller the number of sink particles the more subsets of source particles
we can make.
With enough subsets it is possible to saturate the GPU: if the product
of the number of sink and source particles is large
enough\footnote{The exact number required to reach peak performance depends on the
architecture used, but if the total number of gravitational interactions is $\geq 10^6$
it is possible to saturate the GPU.},
high performance can be reached even if the number of sinks or sources is small.
Of the two parallelisation methods the first one is most efficient when
using a shared-time step algorithm, because fewer steps are involved in
computing the gravity. However, the {\tt Sapporo1} method is more suitable for
the block time-stepping algorithms commonly used in high precision
gravitational $N$-body simulations, even though it requires an extra step
to combine the partial results from the different subsets. The {\tt Sapporo1} method
is therefore also applied in this work.
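The strategy can be sketched on the CPU as follows (plain {\tt C++} with unit masses for brevity; on the GPU each subset maps to a thread-block and the partial results are combined on the device): the sources are split into fixed-size subsets, a partial force is accumulated per subset, and the extra combination step sums the partial results.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Softened gravitational force on one sink from sources [begin, end),
// assuming unit masses for brevity.
Vec3 partial_force(const Vec3& sink, const std::vector<Vec3>& src,
                   std::size_t begin, std::size_t end, double eps2) {
    Vec3 acc{0.0, 0.0, 0.0};
    for (std::size_t j = begin; j < end; ++j) {
        double dx = src[j].x - sink.x;
        double dy = src[j].y - sink.y;
        double dz = src[j].z - sink.z;
        double r2 = dx * dx + dy * dy + dz * dz + eps2;
        double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
        acc.x += dx * inv_r3; acc.y += dy * inv_r3; acc.z += dz * inv_r3;
    }
    return acc;
}

// Split the sources into subsets, compute a partial force per subset,
// then combine the partial results (the extra reduction step).
Vec3 force_by_subsets(const Vec3& sink, const std::vector<Vec3>& src,
                      std::size_t subset, double eps2) {
    Vec3 total{0.0, 0.0, 0.0};
    for (std::size_t b = 0; b < src.size(); b += subset) {
        Vec3 p = partial_force(sink, src, b,
                               std::min(b + subset, src.size()), eps2);
        total.x += p.x; total.y += p.y; total.z += p.z;
    }
    return total;
}
```

The subset decomposition leaves the result unchanged up to floating point summation order, which is also why the number of subsets can be chosen freely to match the amount of parallelism the device needs.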
With {\tt Sapporo1} having been around for 5 years, we completely rewrote it
and renamed it {\tt Sapporo2}; the new version is compatible with current
hardware and, using the supplied test scripts, is easy to tune for future
generations of accelerator devices and algorithms.
The following paragraphs describe the implementation and the choices we made.
\subsection{Implementation}
\subsubsection{CUDA and OpenCL}
When NVIDIA introduced the CUDA framework in 2007 it came with
compilers, run time libraries and examples. CUDA is an extension to the {\tt C} programming
language and as such introduced language changes. These extensions are
part of the device code and, more importantly, of the host code\footnote{The most notable
addition is the {\tt `$<<< >>>$'} construction to start compute kernels.}. The use of
these extensions requires that the host code is compiled using the compiler supplied
by NVIDIA. With the introduction of the `driver API'\footnote{
The driver API requires the use of the low-level functions formatted
as {\tt cuFooBar()} while the run time API uses the higher level
functions formatted as {\tt cudaFooBar()}.}
this was no longer required. The
`driver API' does not require modifications to the {\tt C} language for the host code. However,
writing CUDA programs with the `driver API' is more involved than with the `run time API',
since actions that were previously done by the NVIDIA compiler now have to be performed by
the programmer.
When the OpenCL programming language was introduced in 2009 it came with a set of
extensions to the {\tt C} language to be used in the device code. There are
no changes to the language used for writing the host code; instead, OpenCL comes with
a specification of functions to interact with the device. This
specification is very similar to the specification used in the CUDA driver API and
follows the same program flow.
In order to support both OpenCL and CUDA in {\tt Sapporo2} we
exploited the similarity between the CUDA driver API and the OpenCL API. We developed a
set of {\tt C++} classes on top of these APIs which offer a unified interface for the host
code. The classes encapsulate a subset of the OpenCL and CUDA functions for creating
device contexts, memory buffers (including functions to copy data) and kernel operations
(loading, compiling, launching). Then, depending on which class is included at compile time
the code is executed using OpenCL or CUDA. The classes have no support for the more advanced
CUDA features such as OpenGL and Direct3D interoperability.
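The shape of such a unified interface can be sketched as follows (a hypothetical illustration, not the actual {\tt Sapporo2} class layout); the same abstract class would be implemented once on top of the CUDA driver API and once on top of OpenCL, with a host-memory mock shown here so the interface can be exercised without a device:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Abstract interface shared by the CUDA-driver-API and OpenCL backends.
// (Hypothetical sketch, not the actual Sapporo2 classes.)
class DeviceBuffer {
public:
    virtual ~DeviceBuffer() = default;
    virtual void copy_to_device(const void* src, std::size_t bytes) = 0;
    virtual void copy_to_host(void* dst, std::size_t bytes) = 0;
};

// Mock backend that keeps the "device" memory on the host, so the
// interface can be tested without a GPU.
class HostMockBuffer : public DeviceBuffer {
    std::vector<unsigned char> mem_;
public:
    explicit HostMockBuffer(std::size_t bytes) : mem_(bytes) {}
    void copy_to_device(const void* src, std::size_t bytes) override {
        std::memcpy(mem_.data(), src, bytes);
    }
    void copy_to_host(void* dst, std::size_t bytes) override {
        std::memcpy(dst, mem_.data(), bytes);
    }
};
```

Selecting which concrete backend is compiled in, as described above, then requires no changes to the host code that uses the interface.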
\paragraph{Kernel-code}
With the wrapper classes the host code is language independent.
For the device code this is not the case: even though the languages are
based on similar principles, the support in CUDA for advanced features
such as {\tt C++} templates and printing and debugging functionality makes it
much more convenient to develop in pure CUDA. Once we have a working CUDA version
we convert it to OpenCL. The use of templates in particular
reduces the amount of code. In the CUDA version all possible
kernel combinations are implemented using a single file with templates.
For OpenCL a separate file has to be written for each combination of
integrator and numerical precision.
\\
The method used to compute the gravitational force is comparable to
the method used in {\tt Sapporo1} with only minor changes to allow
double precision data loads/stores and more efficient loop
execution.
\subsubsection{Numerical Accuracy}
\label{Sapporo2:sect:numericalAccuracy}
During the development of {\tt Sapporo1} (before the GT200 chips) GPUs lacked support for IEEE-754 double precision
computations and therefore all the compute work was done in either single or double-single precision.
The resulting force computation had a precision similar to that of the, at that time,
commonly used GRAPE hardware~\cite{1998sssp.book.....M, Gaburov2009630}.
This level of accuracy is sufficient for the fourth order Hermite
integration scheme~\cite{1992PASJ...44..141M,2014arXiv1402.6713P}. Currently, however,
there are integrators that accurately solve the equations of motion of stars around black holes,
planets around stars, and similar systems with high mass ratios. For these kinds of
simulations one often prefers IEEE-754 double precision to solve the equations of
motion.
The current generation of GPUs supports IEEE-754 double precision, which enables computations that
require this high level of accuracy. The data in {\tt Sapporo2} is, therefore, always
stored in double precision. The advantage is that we can easily add additional
higher order integrators that require
double precision accuracy computations, without having to rewrite major parts of the
host code. Examples of such integrators are the sixth and eighth order Hermite
integrators~\cite{2008NewA...13..498N}.
The performance impact of double precision storage on algorithms that do not require double
precision computations is limited. Before the actual computations are executed, the particle
properties are converted to either {\tt float} or {\tt double-single}, so the storage precision does
not influence the computational performance. The penalty for loading and storing double the
amount of data is relatively small, as can be seen in the results section where {\tt Sapporo1}
is compared to {\tt Sapporo2}.
\subsubsection{Multiple GPUs}
Our new $N$-body library can distribute the computational work over
multiple GPUs, as long as they are installed in the same system. While in {\tt Sapporo1} this
was implemented using the {\tt boost} threading library,
this is now handled using {\tt OpenMP}.
The multi-GPU parallelisation is achieved by parallelising over the source particles.
In {\tt Sapporo1} each GPU contained a copy of all source particles
(as in~\cite{2007NewA...12..357H}), but
in {\tt Sapporo2} the source particles are distributed over the devices using the
round-robin method. Each GPU now only holds a subset of the source
particles (similar to {\tt PhiGPU, HiGPU} and {\tt NBODY6}) which
reduces memory requirements, transfer time and the time to execute the
prediction step on the source particles. However, the order of the particle distribution,
and therefore the order in which the additions are executed, differs between
{\tt Sapporo1} and {\tt Sapporo2}. This in turn can lead to differences in the least
significant digit when comparing the forces computed by the two versions.
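The round-robin distribution itself is straightforward (illustrative sketch): particle $i$ is assigned to device $i \bmod n_\text{GPU}$, so each device holds roughly $N/n_\text{GPU}$ source particles.

```cpp
#include <vector>

// Round-robin distribution of source particle indices over n_gpu devices.
// (Illustrative; Sapporo2 distributes the actual particle data this way.)
std::vector<std::vector<int>> round_robin(int n_particles, int n_gpu) {
    std::vector<std::vector<int>> per_device(n_gpu);
    for (int i = 0; i < n_particles; ++i)
        per_device[i % n_gpu].push_back(i);
    return per_device;
}
```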
\subsubsection{Other differences}
The final difference between {\tt Sapporo1} and {\tt Sapporo2} is the way
the partial results of the parallelisation blocks are combined.
{\tt Sapporo1} contains two computational kernels to solve the gravitational
forces. The first computes the partial forces for the individual blocks
of source particles, and the second sums the partial results. With the use
of atomic operators these two kernels can be combined,
which reduces the complexity of maintaining two compute kernels when
adding new functionality, at a minimal performance impact.
The expectation is that future devices require more active threads to saturate the GPU,
but at the same time offer improved atomic performance. The single kernel method that we
introduced here will automatically scale to future devices and offers less overhead than
launching a separate reduction kernel. This reduced overhead results in slightly
better performance (few \%) on current architectures compared to the original
two kernel method.
In total we now require three GPU kernels to compute gravity:
one copy kernel to move particles from CPU buffers to GPU buffers,
one kernel to predict the particles to the new time-step and, finally,
the gravity kernel to compute the forces.
\section{Results}
\label{Sapporo2:Sect:Results}
In astrophysics the currently most commonly used integration method is the
fourth order Hermite integrator~\cite{1992PASJ...44..141M}. This integrator
requires the velocity, the acceleration and the first time derivative of the acceleration (jerk)
to be computed. The integrator furthermore requires information about the nearest
neighbouring particle, in order to detect collisional events or binary formation.
Finally, the more advanced integrators such as {\tt NBODY4}~\cite{1999PASP..111.1333A}
and Kira~\cite{2001MNRAS.321..199P} require a list of particles within a given radius from
each particle to determine the perturber list. All of this is what
{\tt Sapporo1} computes and how the GRAPE hardware operates~\cite{1998sssp.book.....M}.
The numerical precision used in this method is the double-single variant.
In order to compare the new implementation with the results of {\tt Sapporo1}, all
results in this section, unless indicated otherwise, refer to the double-single fourth
order Hermite integrator. Furthermore, we have enabled the computation of
the nearest neighbour and the list of nearby particles, as does {\tt Sapporo1}.
However, if the user does not require this information,
it can be disabled by changing a template parameter in the code.
For the performance tests we used different machines, depending on which GPU was used.
All the machines with NVIDIA GPUs have the {\tt CUDA 5.5} toolkit and drivers installed.
For the machine with the AMD card we used version 2.8.1.0 of the APP-SDK toolkit and driver version 13.4.
The full list of GPUs used can be found in Tab.~\ref{Sapporo2:Tab:GPUs}; the table shows
properties such as clock speed and number of cores. In order to compare the various GPUs
we also show the theoretical performance relative to the {\tt GTX480}. Since the
theoretical performance is not always reachable, we also show the relative practical
performance, computed with a simple single precision $N$-body kernel designed for
shared time-steps, similar to the $N$-body example in the CUDA SDK~\cite{Nyland_nbody}.
\subsection{Thread-block configuration}
\label{Sapporo2:sect:tbc}
{\tt Sapporo2} is designed around the concept of processing a fixed number of
sink particles for a block time-step algorithm (see Section~\ref{Sapporo2:Sect:Method:Par}).
Therefore, the first thing to determine is the smallest number of sink particles that gives full GPU performance.
To achieve full performance the computation units on the GPUs have to be saturated with work.
The GPU consists of a number of multiprocessors and the computation units are spread over these multiprocessors.
When the host code sends work to the GPU this is done in sets of thread-blocks. Each thread-block is executed
on a multiprocessor. The blocks contain a (configurable) number of threads that can work together, while the blocks
themselves are treated as independent units of work.
In this section we determine the optimal number of blocks and the number of threads per block to saturate the GPU
when performing the gravity computations.
We test a range of configurations where we vary the number of blocks per multi-processor
and the number of threads per block. The results for four different GPU architectures
are presented in Fig.~\ref{Sapporo2:fig:BlockAndThreadConfigs}. In this figure each line
represents a certain number of blocks per multi-processor, $N_\text{blocks}$. The x-axis
indicates the number of threads in a thread-block, $N_\text{threads}$. The range of this
axis depends on the hardware. For the {\tt HD7970} architecture we cannot
launch more than $N_\text{threads}=256$, and for the {\tt GTX480} the limit is
$N_\text{threads}=576$. For the two {\tt Kepler} devices, the {\tt GTX680} and the {\tt K20m}, we can launch
up to $N_\text{threads}=1024$, giving these last two devices the largest set of configuration options.
The y-axis shows the wall-clock time required to compute the forces using the
indicated configuration; the bottom line indicates the optimal configuration.
For the {\tt GTX680} and the {\tt K20m} the $N_\text{blocks}$ configurations
reach similar performance when $N_\text{threads} > 512$. This indicates
that at that point there are so many active threads per multi-processor, that there
are not enough resources (registers and/or shared-memory) to accommodate multiple thread-blocks
per multi-processor at the same time. To make the code suitable for
block time-steps, the configuration with the smallest number of threads that
still gives the highest performance is the most suitable. For the {\tt HD7970} this
is $N_\text{threads}=256$, while for the {\tt Kepler} architectures $N_\text{threads}=512$
gives a slightly lower execution time than $N_\text{threads}=256$ and $N_\text{threads}=1024$.
However, we chose to use $N_\text{threads}=256$ for all configurations and
use 2D thread-blocks on the {\tt Kepler} devices to launch 512 or 1024 threads.
By 2D thread-blocks we mean that we launch multiple threads
per $i$-particle, whereby each thread computes a part of the $j$-particles.
This way we increase the total number of threads which the hardware can
schedule in order to hide the memory latencies. This helps to improve
the performance especially when the number of active $i$-particles is $\leq 128$,
and is discussed in more detail in the next section.
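A CPU analogue of this 2D decomposition (illustrative sketch): $n_y$ threads share one $i$-particle, each sums a strided slice of the $j$-particles, and the partial sums are reduced afterwards (on the GPU this reduction happens in shared memory).

```cpp
#include <cstddef>
#include <vector>

// ny "threads" share one sink particle: thread t sums the contributions
// of sources t, t+ny, t+2*ny, ..., and the partial sums are reduced at
// the end. The reduction reproduces the plain serial sum.
double strided_sum(const std::vector<double>& w, int ny) {
    std::vector<double> partial(ny, 0.0);
    for (int t = 0; t < ny; ++t)
        for (std::size_t j = t; j < w.size(); j += ny)
            partial[t] += w[j];
    double total = 0.0;
    for (double p : partial) total += p;   // reduction step
    return total;
}
```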
For each architecture the default configuration is indicated with the
circles in Fig.~\ref{Sapporo2:fig:BlockAndThreadConfigs}.
\subsection{Block-size / active-particles}
Now we inspect the performance of {\tt Sapporo2} in combination with a block time-step algorithm.
We measured the time to compute the gravitational forces using either the
NVIDIA GPU Profiler or the built-in event timings of OpenCL.
The number of active sink particles, $N_\text{active}$, is varied between 1 and
the optimal $N_\text{threads}$ as specified in the previous paragraph.
The results are averaged over 100 runs and presented in Fig.~\ref{Sapporo2:fig:threadPerformance}.
We used 131072 source particles which is enough to saturate the GPU
and is currently the average number of particles used in direct $N$-body simulations
that employ a block time-step integration method.
The straight striped lines in Fig.~\ref{Sapporo2:fig:threadPerformance} indicate the theoretical linear scaling from
$(0,0)$ to $(256, X)$ where $X$ is the execution time of the indicated GPU
when $N_\text{active}=256$. Visible in the figure are the jumps in the
execution time that coincide with the warp (wavefront) size of 32 (64).
For NVIDIA devices we can start 2D thread-blocks for all values of $N_\text{active}$,
since the maximum number of threads that can be active on the device is $\geq 512$.
The effect of this is visible in the more responsive execution times
of the NVIDIA devices when decreasing $N_\text{active}$ compared to the AMD device. Each time
$N_\text{active}$ drops below a multiple of the maximum number of active threads,
the execution time will also decrease. For $N_\text{active} \aplt 64$ the execution
time goes down linearly with decreasing $N_\text{active}$, because multiple blocks can be
started for any value of $N_\text{active}$. The lines indicated with `1D' in the
legend show the execution time if we do not subdivide the work further using
2D thread-blocks. This under-utilizes the GPU and results in increased execution
times for $N_\text{active} < 128$.
The performance difference between {\tt CUDA} and {\tt OpenCL} is minimal,
which indicates that the compute parts of both implementations exhibit similar
behaviour.
For most values of $N_\text{active}$ the timings of {\tt Sapporo1} and {\tt Sapporo2}
are comparable. Only for $N_\text{active} < 64$ do we see a slight advantage for {\tt Sapporo1},
where the larger data loads of {\tt Sapporo2} result in a slightly longer execution time.
However, the improvements made in {\tt Sapporo2} result in higher performance and
a more responsive execution time compared to {\tt Sapporo1} when $128 \leq N_\text{active} < 160$.
For the {\tt HD7970}, there is barely any improvement when $N_\text{active}$ decreases from 256 to 128.
There is a slight drop in the execution time at $N_\text{active}=192$, which
coincides with one less active wavefront compared to $N_\text{active}=256$.
When $N_\text{active} \leq 128$ we can launch 2D blocks and the performance improves again
and approaches that of the NVIDIA hardware, but the larger wavefront size compared to
the warp size causes the execution times to be less responsive to changes of $N_\text{active}$.
\subsection{Range of N}
\label{Sapporo2:Sect:RangeOfN}
Now that we have selected the thread-block configuration, we continue by
testing the performance when computing the gravitational forces
using $N_\text{sink}$ and $N_\text{source}$ particles, resulting in
$N_\text{sink} \times N_\text{source}$
force computations (we set $N_\text{sink} = N_\text{source}$).
The results are presented in the left panel of
Fig.~\ref{Sapporo2:fig:NxNPerformance}. This figure shows the results for the
five GPUs using {\tt CUDA, OpenCL, Sapporo1} and {\tt Sapporo2}.
The execution time includes the time required to send the input
data and retrieve the results from the device.
The difference between {\tt Sapporo1} and {\tt Sapporo2} (both the {\tt CUDA}
and {\tt OpenCL} versions) on the {\tt K20m} GPU is negligible. {\tt Sapporo1} is
slightly faster for $N < 10^4$, because of the increased data-transfer
sizes in {\tt Sapporo2}, which influence the performance more when the number of
computations is relatively small.
{\tt Sapporo2} is slightly faster than {\tt Sapporo1} when $N \geq 10^4$,
because of the various optimisations added to the new version.
The difference between the {\tt GTX680}, {\tt K20m} and {\tt HD7970} configurations
is relatively small, while the {\tt GTX Titan} is almost $1.5\times$ faster and the
{\tt GTX480} almost $2\times$ slower than these three cards. These numbers are not
unexpected when inspecting their theoretical performance (see Tab.~\ref{Sapporo2:Tab:GPUs}).
For $N < 10^5$ we further see that the
performance of the {\tt HD7970} is lower than for the NVIDIA cards. This difference
is caused by slower data transfer rates between the host and device for the {\tt HD7970}.
Something similar can be seen when we compare the {\tt OpenCL} version of the {\tt K20m} with the
{\tt CUDA} version.
Close inspection of the timings indicates that this difference is caused by
longer CPU-GPU transfer times in the {\tt OpenCL} version when transferring small
amounts of data ($< 100$KB), which, for small $N$, form a larger part of the total
execution time.
\subsection{Double precision vs Double-single precision}
As mentioned in Section~\ref{Sapporo2:sect:numericalAccuracy} the higher order integrators
require the use of double precision computations. Therefore,
we test the performance impact when using full native
double precision instead of double-single precision.
For this test we use the {\tt GTX680}, {\tt K20m} and the {\tt HD7970}.
The theoretical peak performance when using double precision computations is
lower than the peak performance when using single precision computations.
The double precision performance of the {\tt K20m} is one third of its single
precision performance; for the {\tt GTX680} this ratio is $\frac{1}{24}$
and for the {\tt HD7970} it is one fourth.
As in the previous section we use the wall-clock time required to
perform $N^2$ force computations (including the data send and receive time)
to compare the devices. The results are presented in the
right panel of Fig.~\ref{Sapporo2:fig:NxNPerformance}; here the double precision
timings are indicated with open symbols and the double-single timings
with filled symbols.
As in the previous paragraph, when using double-single precision
the performance is comparable for all three devices.
However, when using double precision the differences
become clearer. As expected from the theoretical numbers, the
{\tt GTX680} is slower than the other two devices.
The performance of the {\tt K20m} and the {\tt HD7970} are comparable for $N > 10^4$.
For smaller $N$ the performance is more influenced by the transfer rates between
the host and the device than by its actual compute speed.
Taking a closer look at the differences, we see that the performance of the
{\tt GTX680} in full double precision is $\sim10\times$ lower than when
using double-single precision. For the other two cards the double precision
performance is roughly $2.8\times$ lower.
For all the devices this is roughly a factor of 2 away from what can be
expected based on the specifications. This difference can be explained by the
fact that the number of operations is not exactly the same for the two
versions\footnote{Double-single requires more computations than the
single precision on which the theoretical numbers are based.} and that even in the
double-single method we use the special function units to compute the {\tt rsqrt}\footnote{An
optimized function that computes the reciprocal square root ($1 / \sqrt{x}$).}.
Another reason for the discrepancy between the practical and theoretical numbers is
that we keep track of the nearest neighbours, which requires the same operations in the
double-single and the double precision implementations.
Combining this with the knowledge that we
already execute a number of double precision operations to perform atomic
additions and data reads explains the observed difference between the
theoretical and empirically found performance numbers.
\subsection{Sixth order performance}
The reason to use sixth order integrators rather than lower order integrators is that,
on average, they are able to take larger time-steps. They are also better at handling systems
that contain large mass ratios (for example when the system contains a supermassive black hole).
The larger time-step results in more active particles per block-step, which improves the GPU efficiency.
However, it also requires more operations than a fourth order integrator, something
which is discussed in detail in~\cite{2008NewA...13..498N}.
Previous work~\cite{keigo_thesis, 2013JCoPh.236..580C, 2013CoPhC.184.2528C}
indicates that double-single accuracy is sufficient for a sixth order integrator. However,
to give the user the choice we implemented both a double-single and a double precision
version of this method.
The performance results of these versions are presented in Fig.~\ref{Sapporo2:fig:4thvs6h}.
As in the previous figures we present the time to
compute $N^2$ forces.
Presented are the performance of the sixth and fourth order kernels using double precision
and using double-single precision. As expected, the sixth order requires more time
than the fourth order, as it executes more operations.
The difference between the fourth order in double-single precision and the sixth order in
double-single precision is about a factor 2. When we use double precision instead of
double-single precision for the sixth order method then the execution time goes up
by another factor of 2. The difference between the double precision fourth order and the
double precision sixth order is about a factor of 1.4.
The factor 2 difference in performance is relatively small and expected from the operation
count. Therefore, if the sixth order integrator allows time-steps that are two or more times larger
than those of a fourth order integrator, the total execution time will go down. This,
combined with the benefits of the sixth order integrator, such as
being able to integrate high mass ratios where high accuracy is required to trace tight orbits,
makes the sixth order method a viable solution for $N$-body simulations.
\subsection{Multi-GPU}
As described in Section~\ref{Sapporo2:Sect:Method}, {\tt Sapporo2} supports multiple
GPUs in parallel. The parallelised parts are the force computation,
data transfer and prediction of the source particles. The transfer of
the sink-particle properties to the device and the transfer of the force computation
results back from the device are serial operations.
These operations have a small but constant overhead, independent of the
number of GPUs.
For the measurements in this section we use the total wall-clock
time required to compute the forces on $N$ particles (as in Section~\ref{Sapporo2:Sect:RangeOfN}).
The speed-up compared to 1 GPU is presented in Fig.~\ref{Sapporo2:fig:multiGPUPerformance}.
The timings are from the {\tt K20m} GPUs, which have enough memory to store
up to $8\times10^6$ particles. We use shared time-steps for these timings.
For $N > 10^4$ it is efficient to use all available GPUs in the system
and for $N \leq 10^4$ all multi-GPU configurations show similar performance.
The only exception here is when $N = 10^3$ at which point the overhead of
using 4 GPUs is larger than the gain in compute power.
For large enough $N$ the
scaling is near perfect ($T_\text{single-GPU}/T_\text{multi-GPU}$),
since the execution time is dominated by the computation
of the gravitational interactions. Note that for these experiments we have to transfer
the full data sets to the GPU; this is why the scaling for small $N$ is less than perfect,
as it takes time to transfer the data over the PCI-Express bus.
For block time-step simulations the number of particles being transferred, per time-step, will be
smaller. However, the compute time is also smaller as fewer particles have to be integrated.
Therefore, the scaling for small $N$ will stay less than perfect in all situations.
\subsection{Block time-step simulations}
To test the performance of the multi-GPU implementation for
block time-step simulations with {\tt Sapporo2} we use a sixth order Hermite integrator
with block time-steps~\cite{2012ApJ...753...85F, 2008NewA...13..498N}.
We perform simulations of Plummer~\cite{1915MNRAS..76..107P} spheres using 1
and 4 GPUs with double-single (DS) and full double precision (DP) accuracy. The number of
particles used ranges from 16k up to 512k particles. For each simulation
we record the execution time, the energy error, the average number of active
particles per block-step and the speed-up of using 4 GPUs over 1 GPU.
The chosen time-step criterion is critical when performing block time-step simulations.
For fourth order Hermite the method most commonly used is the Aarseth
method~\cite{2003gnbs.book.....A}.
For the sixth order a generalized version of the Aarseth criterion can
be used, as described in~\cite{2008NewA...13..498N}.
However, this generalized version is unstable when the force computation is not accurate
enough\footnote{Keigo Nitadori \& Michiko Fujii, private communication.}.
Specifically, rounding errors in the jerk and snap computation can cause the
time-step to go to zero. Before running production simulations one should
carefully consider which accuracy and time-step method to use; however, a full analysis of
the best time-step method for these situations is beyond the scope of this work.
In \cite{2015arXiv150101040S} the authors work
around this time-step problem by taking the average of the Aarseth fourth order method and the sixth order
extension to compute the time-step (their Eq.~8). In order to compare
the timing and accuracy of our simulations
we use this average method for both our
DS and DP simulations. Note that using the sixth order
time-step computation together with DS force computation may result
in a time-step that approaches zero, while the sixth order time-step combined with
full DP force computation works without problems.
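For reference, the standard fourth order Aarseth criterion discussed above can be written down compactly. The sketch below is an illustration of that criterion (with $a^{(1)}$ the jerk, $a^{(2)}$ the snap and $a^{(3)}$ the crackle), not code taken from {\tt Sapporo2}; the sixth order extension and the averaged criterion (Eq.~8 of the cited work) are not reproduced here.

```python
import numpy as np

def aarseth_dt(a, a1, a2, a3, eta=0.01):
    """Standard fourth order Aarseth time-step criterion.

    a..a3 are the acceleration and its first three time derivatives
    (jerk, snap, crackle) of one particle, given as 3-vectors.
    """
    na, n1 = np.linalg.norm(a), np.linalg.norm(a1)
    n2, n3 = np.linalg.norm(a2), np.linalg.norm(a3)
    return np.sqrt(eta * (na * n2 + n1 ** 2) / (n1 * n3 + n2 ** 2))
```

Rounding errors in the computed higher derivatives enter such a criterion directly, which illustrates why reduced-precision force computation can drive the time-step towards zero.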
For these simulations we set $\eta_4=0.01$ and $\eta_6=0.1$ and simulate
the model for one $N$-body time-unit. The presented execution times cover the full execution
from the start to the end of a simulation. The time therefore includes all required
operations on the GPU side (predict, gravity, particle copy) as well as on the host side
(corrections, time-step computation, particle copies).
During the simulation the size of $N_\text{active}$ varies between 1 and $N$.
The resulting data for the simulations are presented in Fig.~\ref{Sapporo2:fig:hermite6}.
The figure contains four panels: the top left panel presents the absolute
execution time; the top right panel the speed-up when scaling from 1 to 4 GPUs;
the bottom left panel the average number of particles being
integrated, $N_\text{active}$; and the bottom right panel the energy error
at the end of the simulation.
For all panels the solid lines indicate the simulations that use a single GPU
and the dashed lines indicate the simulations with four GPUs. The square symbols
indicate the simulations that use DS accuracy and the DP runs are
indicated by the round symbols.
The execution time scales, as expected, as $O(N^2)$, and we can see
in the bottom left panel that the average number of active particles
increases with the total number of particles.
There are a number of other things we can see in the figures. First of all
we can see that the full double precision simulations run faster than
the double-single simulations. Even though the compute work is faster for the
double-single version (as we saw in Fig.~\ref{Sapporo2:fig:multiGPUPerformance}),
the reduced accuracy forces
the integrator to take more, and smaller, time-steps. This can be seen from the
average number of particles per block
which is smaller for the DS simulations than for the DP simulations.
Another thing to note is that the results of the single GPU DS simulations are
slightly different from those of the four GPU DS simulations. This is another consequence
of the reduced accuracy, the changed addition order when running on more than
a single GPU results in rounding differences. For DP the results for single
and multi GPU simulations are so similar that the differences are not visible in the figures.
The DP simulations are not only faster, they also produce an energy error
that is almost two orders of magnitude smaller than that of the DS
simulations. The energy error for the DP simulations is around
$10^{-12}$ and that of the DS simulations around $10^{-10}$.
In Fig.~\ref{Sapporo2:fig:multiGPUPerformance} we saw that the speed-up when going from 1 to 4 GPUs
increases from 1x to 4x as the number of particles increases.
We see a similar effect occurring in the bottom right panel; when the number of active
particles increases the speed-up also increases.
The jump in speed-up for the DS when going from
256k particles to 512k particles is caused by the increase of
$N_\text{active}$ between 256k and 512k.
These simulations show that the benefit of using more than a single GPU depends on the dataset
size, the accuracy used, and the average size of $N_\text{active}$. It is therefore
important that one knows these numbers when performing many simulations.
Especially, when using a sixth order integrator, as we did here, it is critical that one
chooses a time-step method that is suitable for the used accuracy.
\section{Discussion and CPU support}
\subsection{CPU}
With the availability of CPUs with 8 or more cores that support advanced vector instructions
there is the recurring question whether it is faster to compute the gravity on the CPU
than on the GPU, especially since there is no need to transfer data between the host
and the device, an operation which
can be relatively costly when the number of particles is $\leq1024$. To test exactly for
which number of particles the CPU is faster than the GPU we added a CPU implementation to
{\tt Sapporo2}. This CPU version uses SSE2 vector instructions and OpenMP
parallelisation and can be run in single or in double precision.
The only kernel implemented is the fourth order integrator, including
support for neighbour lists and nearest neighbours (particle-ID and distance).
Because the performance of the GPU depends on the combination of sink and source
particles we test a grid of combinations for the number of sink and source particles
when measuring the time to compute the gravitational forces.
The results for the CPU (a Xeon E5620 at 2.4~GHz), using a single core, are presented in Fig.~\ref{Sapporo2:fig:CPUvsGPU}a.
In this figure (and all the following figures) the x-axis indicates the number of sinks
and the y-axis the number of sources.
The execution time is indicated by the colour from blue (fastest) to red (slowest).
The smooth transition from blue to red from the bottom left corner
to the top right indicates that the performance does not preferentially depend on either
the source or sink particles, but rather on the combined number of interactions.
This matches our expectations, because the parallelisation granularity on the CPU is as
small as the vector width, which is 4.
On the GPU this granularity is much higher, as presented in Fig.~\ref{Sapporo2:fig:CPUvsGPU}b:
here we see bands of different colour every 256 particles, which corresponds to
the number of threads used in a thread-block ($N_\text{threads}$).
With 256 sink particles we achieve the optimal
performance of a block; however, with 257 sink particles we process the first 256
sinks using optimal settings while the 257th sink particle is processed relatively inefficiently.
This granularity becomes less obvious when we increase the number of interactions
as presented in Fig.~\ref{Sapporo2:fig:CPUvsGPU}c. Here we see the same effect appearing as with
the CPU (Fig.~\ref{Sapporo2:fig:CPUvsGPU}a), where the granularity becomes less visible once we
saturate the device and use completely filled thread-blocks for most of the particles.
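This banding can be captured by a simple occupancy estimate. The sketch below is illustrative only (the default $N_\text{threads}=256$ is taken from the text): a partially filled last thread-block still occupies a full block's worth of resources.

```python
import math

def block_efficiency(n_sinks, n_threads=256):
    """Fraction of launched GPU threads that map to actual sink particles.

    Sinks are processed in thread-blocks of n_threads; the last,
    possibly partially filled block still costs a full block.
    """
    n_blocks = math.ceil(n_sinks / n_threads)
    return n_sinks / (n_blocks * n_threads)
```

At 256 sinks the efficiency is 1.0, while at 257 sinks it drops to just above 0.5, which is exactly the banding visible every 256 particles in the figure.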
The final panel, Fig.~\ref{Sapporo2:fig:CPUvsGPU}d, indicates per combination of source
and sink particles which CPU or GPU configuration is the fastest.
For the CPU we measured the execution time when using 1,2,4 or 8 cores. In this panel
the colours indicate the method which gives the shortest execution time. Furthermore,
it indicates whether, and by how much, the GPU is faster than the 8 cores of the CPU.
When either the number of sinks or the number of sources is relatively small
($\leq 100$) the CPU implementation performs best.
However, when the number of sinks or sources is $>100$ the
GPU outperforms the CPU. When using a CPU implementation that uses the AVX or AVX2
instruction sets the borders of these regions would shift slightly upwards. The
CPU would then be faster for a larger number of source/sink particles, but only
by at most a factor of 2 to 4 more particles. The data of Fig.~\ref{Sapporo2:fig:CPUvsGPU}
confirms that our choice to implement the {\tt Sapporo2} library for the GPU is an
efficient method for realistic data-set sizes.
Although our implementation uses SSE2 instructions it is not as advanced as the implementation
of~\cite{2012NewA...17...82T}. For example, we use intrinsic functions while they use the assembly operations directly.
This is also visible when we compare their performance with ours: the implementation we tested here reaches
about 60\% of their performance; however, they do not compute the nearest neighbour particle
and do not keep track of the neighbourlist, both of which have a significant impact on the performance
as they cause divergence in the execution stream.
\subsection{XeonPhi}
Because the {\tt Sapporo2} library can be built with {\tt OpenCL} it should, theoretically, be
possible to run on any device that supports {\tt OpenCL}. To put this to the test, we compiled
the library with the Intel {\tt OpenCL} implementation.
However, although the code compiled without problems it did not produce correct results.
We tested the library both on an Intel CPU and the Intel {\tt XeonPhi} accelerator. Neither the CPU,
nor the {\tt XeonPhi} produced correct results. Furthermore, the performance
of the {\tt XeonPhi} was about 100$\times$ lower than what can be expected from its theoretical
peak performance. We made some changes to the configuration parameters such as
$N_\text{threads}$ and $N_\text{blocks}$, however this did not result in any presentable performance.
We suspect that the Intel {\tt OpenCL} implementation, especially for the {\tt XeonPhi}, contains a number of limitations that
cause it to generate poorly performing and/or incorrect code. Therefore, the {\tt Sapporo2} library
is not portable to Intel architectures with their current {\tt OpenCL} implementation\footnote{
A short test on an AMD CPU gave correct results; we therefore suspect it is something intrinsic to
the Intel {\tt OpenCL} environment.}.
This does not imply that the {\tt XeonPhi} has bad performance in general, since it is possible to
achieve performance on $N$-body codes that is comparable to GPUs. However, this requires code
that is specifically tuned to the {\tt XeonPhi} architecture (K. Nitadori, private
communication~\footnote{Also see {\tt https://github.com/nitadori/Hermite} and \\
{\tt http://research.colfaxinternational.com/post/2013/01/07/Nbody-Xeon-Phi.aspx}.}).
\section{Conclusion}
The {\tt Sapporo2} library presented here makes it easy to enable GPU acceleration
for direct $N$-body codes. We have seen that the difference between the
{\tt CUDA} and {\tt OpenCL} implementation is minimal, when there are enough
particles to make the simulation compute limited. However, if many small data
transfers are required, for example when the integrator takes very small time-steps
with few active particles, the {\tt CUDA} implementation will be faster.
Apart from the fourth and sixth order integrators presented here, the library
also contains a second order implementation, and because data is stored
in double precision it can be trivially expanded with an eighth order integrator.
The performance gain when using multiple GPUs implies that it is efficient to
configure GPU machines that contain more than 1 GPU. This will improve the time
to solution for simulations with more than $10^4$ particles.
The {\tt OpenCL} support and built-in tuning methods allow easy extension to
other {\tt OpenCL} supported devices. However, this would require a mature {\tt OpenCL}
library and matching hardware that supports atomic operations and double precision data types. For the {\tt CUDA}
devices this is not a problem since the current {\tt CUDA} libraries already have
mature support for the used operations and we expect that the library automatically scales to future architectures.
The only property that has to be set is the number of thread-blocks per multiprocessor
and this can be easily identified using the figures as presented in Section~\ref{Sapporo2:sect:tbc}.
The library is freely available either as part of the AMUSE software
package~\cite{2013CoPhC.184..456P}, which can be downloaded
from http://www.amusecode.org, or as a standalone library from https://github.com/treecode/sapporo2/.
\begin{backmatter}
\theendnotes
\section*{Competing interests}
The authors declare that they have no competing interests.
\section*{Acknowledgements}
We thank the anonymous reviewers for their extensive and helpful comments.
This work was supported
by the Netherlands Research Council NWO (grants \#643.200.503,
\# 639.073.803, \#614.061.608, \# 612.071.503, \#643.000.802).
\bibliographystyle{bmc-mathphys}
\subsection{Details on the time-resolved X-ray diffraction experiment}
The femtosecond X-ray experiment is performed at the FEMTO slicing source of the Swiss Light Source (PSI, Villigen, Switzerland)\cite{Beaud2007}. The femtosecond X-ray pulse used is superimposed on a $\sim50$~ps background pulse, which contributes about 21$\%$ of the intensity. This picosecond background is systematically recorded by looking at the X-ray pulse from the single electron bunch arriving just before the sliced bunch (1 $\mu$s before). For the scans as a function of rotation angle $\varphi$, we also perform pump-probe scans using as a probe the X-ray pulse at a delay after the femtosecond portion has relaxed, leaving only the long-time background pulse. These scans are subtracted from the rotation scans in order to avoid any artifact arising from the picosecond background.
The sample was previously oriented at the Material Science beamline of the Swiss Light Source \cite{Willmott2013}, where the orientation matrix was computed using the Bragg reflections (202), (-202), (0-22), and the (2-42). During the time-resolved experiment the orientation matrix was verified using the reflections (202) and (2-q 0 2-q), where q = 0.341.
\subsection{Excitation density calculations}
The X-ray, the optical 800 nm pump, and the 400 nm probe do not have the same penetration depth; therefore, instead of using the absorbed fluence we use the average excitation density. We divide the sample into n layers of thickness $dz$ starting at the surface, with i ranging from 0 to n. The excitation density for the i$^{th}$ layer is:
\begin{equation}
n(i)=\frac{F(1-R)(e^{\frac{-i dz}{\alpha_L}}-e^{\frac{-(i+1) dz}{\alpha_L}})}{dz}
\end{equation}
where F is the incoming fluence, R is the reflectivity of the pump laser at the specific angle of the experiment (R = 0.06 for trXRD and R = 0.11 for MOKE \cite{Lee2002}), $\alpha_L$ is the penetration depth of the laser (23 nm for MOKE and 21 nm for trXRD). Here $F(1-R)$ is the absorbed fluence.\\
We then compute the average excitation density seen by the probe:
\begin{equation}
\left< n_0 \right>=\sum_{i=0}^{n}n(i)\frac{1}{\alpha_p} e^{-idz/\alpha_p}
\end{equation}
where $\alpha_p$ is the penetration depth of the probe (55 nm for trXRD and 16 nm for MOKE).
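The two expressions above can be combined into a short numerical sketch. The layer profile follows Eq.~(1) exactly; for the probe average we use discrete weights $e^{-i\,dz/\alpha_p}$ normalised to sum to one, which is a discretised form of the exponential probe weighting (an assumption of this sketch; the parameter values in the comment are the ones quoted in the text).

```python
import math

def excitation_profile(F, R, alpha_L, dz, n):
    """Excitation density n(i) in each layer of thickness dz (Eq. 1)."""
    F_abs = F * (1.0 - R)  # absorbed fluence
    return [F_abs * (math.exp(-i * dz / alpha_L)
                     - math.exp(-(i + 1) * dz / alpha_L)) / dz
            for i in range(n)]

def averaged_excitation(F, R, alpha_L, alpha_p, dz, n):
    """Probe-weighted average excitation density <n0> (normalised weights)."""
    prof = excitation_profile(F, R, alpha_L, dz, n)
    weights = [math.exp(-i * dz / alpha_p) for i in range(n)]
    return sum(ni * wi for ni, wi in zip(prof, weights)) / sum(weights)

# trXRD parameters from the text: R = 0.06, alpha_L = 21 nm, alpha_p = 55 nm
```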
\subsection{Static magnetic measurement}
In order to verify that the sample magnetization is saturated for the pump-probe experiment, we perform static magnetic optical Kerr effect measurements with no pump pulse. Fig. \ref{hyst} shows that a field of 0.7 T is enough to completely saturate the bulk sample. Therefore we have performed the pump-probe experiment using 0.7 T.
\begin{figure}
\includegraphics[angle=0,width=1\linewidth,clip=true]{hyst.pdf}
\caption{Magnetization of bulk sample using static magnetic optical Kerr effect.}
\label{hyst}
\end{figure}
\subsection{Temperature calculations}
Neglecting transport effects, we estimate the temperature after achieving local thermal equilibrium as:
\begin{equation}
T_f=T_i+\frac{\left< n_0 \right>}{C_p}
\end{equation}
where $T_f$ and $T_i$ are the final and initial temperatures, and $C_p$ is the sample's heat capacity (95 J K$^{-1}$ mol$^{-1}$ \cite{Uijttewaal2009}). The estimated final temperatures are mentioned in the main text. We neglect heat diffusion since it occurs on a timescale of hundreds of picoseconds to nanoseconds, which is outside our measurement window.
\end{document}
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{figures/figure1.png}
\caption{(a) the raw waveform image of a music track, (b) the corresponding Log-Mel spectrogram.}
\label{fig1}
\end{figure}
Supervised learning has hit a bottleneck. On the one hand, it relies heavily on expensive manual tags and is subject to label errors and false correlations; on the other hand, the amount of labeled data is much smaller than that of unlabeled data. As a promising alternative, self-supervised learning has drawn massive attention for its data efficiency and generalization ability. Recently, breakthroughs in contrastive learning, such as SimCLR \cite{Chen2020}, MoCo \cite{He2020}, BYOL \cite{Grill2020}, Deep Cluster \cite{Caron2018}, and SDCLR \cite{Jiang2021}, shed light on the potential of contrastive learning for learning self-supervised representation. Contrastive learning has increasingly become dominant in self-supervised learning owing to its competitive experimental performance compared with conventional supervised methods.
In the Music Information Retrieval (MIR) community, many researchers have made great efforts to learn effective music representation applied in different music-related tasks, such as music classification \cite{VandenOord2014, Choi2016, Choi2017, Pons2018, Pons2019, Lee2019, Wu2021}, cover song identification \cite{Xu2018, Yu2019, Yesiler2020, Yu2020, Jiang2020}, and music generation \cite{Ren2020, Huang2021, Liu2021, Chen2021, Ren2021}. However, most of them learn music representation in a supervised manner. Because the labeled datasets upon which supervised learning methods depend are costly and time-consuming to build, the performance of supervised learning methods is limited. For that reason, some audio researchers have adopted contrastive learning methods to train neural networks.
The underlying idea of contrastive learning applied in music is to minimize the distance among the audio segments from the same input while minimizing the similarity among the audio segments from different inputs. CLMR \cite{Spijkervet2021} uses a simple contrastive learning framework for music representation whose encoder directly encodes raw waveforms of songs. Although it performs well in the downstream classification task, encoding raw waveforms can hardly encode the frequency distribution into the final representation of music. To make the model understand the music in the time and frequency domains, unlike CLMR, COLA \cite{Saeed2021} encodes the Log-Mel spectrogram of music so that the time-frequency information can be embedded into the music representation. A Mel spectrogram is a spectrogram where the frequencies are converted to the Mel scale. More details about the Log-Mel spectrogram can be found in Section \ref{logMel}. BYOL-A \cite{Niizumi2021}, adopting the network structure of BYOL, which owns an online network and a target network, also adopts the Log-Mel spectrogram of the music as model input. Nevertheless, there exists an issue that these methods encode \textbf{all} frames of the spectrogram into the music representation space. That is harmful to the quality of the learned music representation since not all frames impact the music positively. As shown in Figure \ref{fig1}, the onset of the track may be silent or directly missing, resulting in the absence of valid content in these starting frames. The quality of music downloaded from the web is uneven. A song may lose its content at the beginning or any other position, while other songs may contain noisy frames. These frames are unimportant parts of the whole music. On the other hand, we argue that each frame has a different status in characterizing music. For example, the drastic parts of rock music are more appropriate than the mild parts when representing the characteristics of a particular song.
In other words, the drastic parts are more representative of rock music. Therefore, when learning music representation, we must restrict the non-critical parts of music while augmenting the role of the crucial parts.
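The pull-together/push-apart objective described at the start of this section is typically instantiated as the NT-Xent loss used by SimCLR-style frameworks such as CLMR and COLA. The numpy sketch below shows that standard formulation for a batch of positive pairs; it is illustrative and not necessarily this paper's exact objective.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for a batch of positive pairs.

    z1, z2: (B, D) embeddings of two augmented segments per track.
    Segments from the same track are pulled together; all other
    segments in the batch act as negatives and are pushed apart.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2B, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    B = len(z1)
    pos = np.concatenate([np.arange(B, 2 * B), np.arange(B)])
    m = sim.max(axis=1, keepdims=True)                # stable log-softmax
    log_p = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_p[np.arange(2 * B), pos].mean()
```

When the two views of each track agree, the loss is small; misaligned pairs are penalised.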
In order to address the above challenge, we propose to mask some frames within a piece of music with a \textbf{P}ositive-n\textbf{E}gative frame mask for \textbf{M}usic \textbf{R}epresentation. Specifically, a predicting module is designed to capture the correlation among frames. Simultaneously, an asymmetrical structure module utilizes the parameters of multi-head attention layers from the transformer encoder to produce the positive-negative mask. The positive mask will erase the existence of inessential frames. Thus, the remaining crucial frames will be encoded and projected into the contrastive learning space to obtain the augmented positive representation. In turn, we can get the counterfactual \cite{Zhang_Yao_Zhao_Chua_Wu_2021} negative representation by adopting the negative mask. Moreover, we design a contrastive learning objective for positive-negative representation pairs. These masks and loss functions can make the model pay more attention to the critical frames and preserve the music's global semantic information while reducing the non-critical frames' adverse effects.
We pre-train the model on several public musical datasets and employ labeled data to train classifiers based on the self-supervised representation learned by PEMR. The classifiers achieve state-of-the-art performance in the \textbf{music classification} task. We fine-tune the encoder pre-trained on one dataset on another dataset for classification to evaluate transferability and generalization capability. Besides, we apply the pre-trained encoder in the \textbf{cover song identification} task and fine-tune it. We can obtain more advanced performance for cover song identification by incorporating our pre-trained encoder into the current advanced supervised model. In summary, the contribution of this work is threefold:
\begin{itemize}
\item We propose to mask some crucial or inessential parts of music so that the inessential parts will be limited and the critical parts will be boosted when learning music representation.
\item We devise an asymmetrical mask generation module, generating the positive and negative masks for input music and design a contrastive learning loss function. We incorporate them into the contrastive learning framework for learning more effective music representation.
\item The extensive experimental results show that our learned musical representation achieves state-of-the-art performance on a downstream classification task. Furthermore, our learned representation improves the performance of cover song identification, demonstrating its effectiveness and transferability.
\end{itemize}
\section{Related Work}
\subsection{Contrastive Learning}
To address the ever-growing amount of unlabeled data, many self-supervised methods \cite{Devlin2019, Fausk2007, Kipf2016, Razavi2019, Goodfellow2020, Caron2018, Caron2020} have been proposed in several areas, especially Computer Vision and Natural Language Processing. Since \cite{Hadsell2006}, which contrasts positive pairs against negative pairs to learn representation, contrastive learning has attracted a great deal of attention from both academia and industry. Contrastive Predictive Coding \cite{Henaff2019} is an unsupervised objective that learns predictable representation. CMC \cite{Tian2020} is view-agnostic and can scale to any number of views by maximizing mutual information between different views of the same observation. MoCo \cite{He2020} views contrastive learning as a dictionary look-up and builds a dynamic queue containing samples of the current and previous mini-batches together with a moving-averaged encoder. Another method, SimCLR \cite{Chen2020}, is a simple framework for contrastive learning without a memory bank. Recently, BYOL \cite{Grill2020} proposed a new architecture for contrastive learning, which consists of an online network and a target network. They train the networks only with various augmented views of an identical image, without negative pairs. To avoid collapsed solutions and minimize redundancy, \cite{Zbontar2021} contrasts samples along the feature dimension.
Many researchers in the music community have attempted to apply contrastive learning for learning music representation. \cite{Saeed2021} designs a common contrastive model for learning general-purpose audio representation. \cite{Spijkervet2021} also uses SimCLR \cite{Chen2020} framework for pre-training the model. \cite{Niizumi2021} introduces BYOL \cite{Grill2020} for audio and achieves advanced results in various downstream tasks.
\subsection{Masking Strategy in Music Representation}
Masking strategy has played a significant role in the NLP community. The success of BERT \cite{Devlin2019}, which randomly masks some tokens in the input sequence and learns to reconstruct the masked tokens from the output of the transformer encoder, has shown its superiority in learning contextual information among tokens and has attracted the attention of researchers in the audio domain. For example, MusicBERT \cite{Zeng2021} devised a bar-level masking strategy as the pre-training mechanism to understand symbolic music. Mockingjay \cite{Liu2020} is designed to predict the masked frame through jointly conditioning on both past and future contexts. \cite{Zhao2021} proposes two pre-training objectives, including Contiguous Frames Masking (CFM) and Contiguous Channels Masking (CCM), designed to adapt BERT-like masked reconstruction pre-training to continuous acoustic frame domain.
\section{Proposed Method}
\begin{figure*}[t] \begin{center}
\includegraphics[width=\textwidth]{figures/new_overview.pdf}
\caption{
The overall framework of our proposed method for music representation learning. The predicting module captures the correlation among frames.
}
\label{fig:overview}
\end{center} \end{figure*}
The overall architecture of our pre-training framework is shown in Figure \ref{fig:overview}. Our networks consist of a predicting module that utilizes a transformer encoder to learn contextual correlation among frames, an asymmetrical positive-negative mask generating module, and a contrastive learning module. After pre-training with music datasets, we utilize the pre-trained \textbf{Encoder}, as previous works do \cite{Chen2020, He2020, Grill2020}, to obtain the general music representation for various downstream tasks, such as music classification and cover song identification.
\subsection{Sampling and Augmentation} The function of this unit is to randomly select two segments from the same waveform and apply some augmentation methods to the selected segments. We use the same group of music augmentations as in CLMR \cite{Spijkervet2021}. The group includes polarity inversion, noise, gain, filter, delay, and pitch shift. Each augmentation is randomly applied according to its preset probability.
\subsection{Log-Mel Spectrogram}\label{logMel}
A digital audio signal represents how the voltage amplitude varies over time. According to the Fourier theorem, every signal can be decomposed into a set of cosine and sine waves that add up to the original signal, \textit{i.e.}, an audio signal is composed of several single-frequency sound waves. We use the Mel spectrogram, generated by the Short-Time Fourier Transform (STFT) and Mel-scale filter banks, to capture the time-domain and frequency-domain information of the raw music waveform. The Mel scale aims to mimic the perception of the human ear, whose sensitivity varies with frequency. In the deep learning domain, \cite{Conference2014} trained convolutional networks to autonomously discover frequency decompositions from raw audio. For simplicity, we use the STFT and Mel-scale filters to obtain the Mel spectrogram of music and convert it to a logarithmic scale, \textit{i.e.}, the Log-Mel spectrogram.
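As an illustration of the pipeline just described (framing, STFT, Mel-scale filter bank, logarithm), a minimal numpy-only sketch is given below. The frame size, hop length and number of Mel bands are arbitrary example values, not the settings used in our experiments.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(y, sr, n_fft=1024, hop=512, n_mels=64):
    """Minimal Log-Mel spectrogram: STFT power -> Mel filter bank -> log."""
    # Frame the signal with a Hann window and take the power spectrum.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (frames, n_fft//2+1)

    # Triangular Mel filter bank spanning 0 .. sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)

    mel_power = power @ fbank.T                        # (frames, n_mels)
    return 10.0 * np.log10(np.maximum(mel_power, 1e-10))  # log scale (dB)
```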
\subsection{Predicting Module}\label{trm}
Before generating masks, we should learn the correlation of each frame to the input music so that the masks are accurate. We use a random masking strategy on the input before feeding it to the transformer encoder. We then use a predicting layer to recover the masked positions from the output of the transformer encoder, yielding a predicting loss. The transformer encoder primarily uses self-attention mechanisms together with learned or sinusoidal position information. Each layer consists of a self-attention sub-layer followed by a position-wise fully connected feed-forward network sub-layer.
Specifically, we view a frame as a token. In order to make the transformer encoder more stable and accurate when modeling the correlation among all tokens from a music fragment, we train it with a random masking strategy \cite{Devlin2019, Zhao2021, Liu2020}. We denote the frame set of a spectrogram as $\mathbf{F}=\left(\mathbf{f}_{1}, \mathbf{f}_{2}, \ldots, \mathbf{f}_{L}\right)$, where $\mathbf{F} \in \mathbb{R}^{L \times D}$. We append a learnable [CLS] token embedding in front of all frames, denoted as $\mathbf{X}=\left(\mathbf{c}, \mathbf{x}_{1}, \mathbf{x}_{2}, \ldots, \mathbf{x}_{L}\right)$, where $\mathbf{X} \in \mathbb{R}^{(L+1) \times D}$, so that we can aggregate information of all frames into [CLS] after the attention operation. Then, we utilize a multi-head attention mechanism to calculate attention scores between a query and a key and apply them to a value, which allows the model to focus on various parts of the frame sequence. The formulation of multi-head attention is,
\begin{align}
\notag
\mathbf{Y}_{n, h} = \operatorname{Attention}(\mathbf{Q}_{n,h}, \mathbf{K}_{n,h}, \mathbf{V}_{n,h}) \\
=\operatorname{Softmax}\left(\frac{\mathbf{Q}_{n,h} \mathbf{K}_{n,h}^{T}}{\sqrt{D}}\right) \mathbf{V}_{n,h}
\end{align}
where $\mathbf{Q}_{n,h}$, $\mathbf{K}_{n,h}$, $\mathbf{V}_{n,h}$ are the query, key and value, respectively. \textit{n} and \textit{h} are the indices of the layer and the attention head, respectively. They are calculated by $\mathbf{Q}_{n, h} = \mathbf{X}\mathbf{W}_{n, h}^{Q}$, $\mathbf{K}_{n,h} = \mathbf{X}\mathbf{W}_{n,h}^{K}$ and $\mathbf{V}_{n,h} = \mathbf{X}\mathbf{W}_{n,h}^{V}$. The $\mathbf{W}_{n,h}^{Q}$, $\mathbf{W}_{n,h}^{K}$ and $\mathbf{W}_{n,h}^{V}$ $\in \mathbb{R}^{D \times D}$, which are the corresponding weight matrices. The attention scores between $\mathbf{Q}_{n,h}$ and $\mathbf{K}_{n,h}$ are divided by $\sqrt{D}$ to avoid large values of the dot product. Because self-attention is not aware of token order, we add a sinusoidal position embedding to the frame sequence before it is input to self-attention.
The multi-head attention aggregates contextual information through learnable weights, but it is still a linear model. To introduce non-linearity, the multi-head attention output is fed to a position-wise feed-forward network (FFN). Specifically, within the $n$th layer of the transformer encoder, we concatenate the outputs of all attention heads and apply a linear transformation to get $\mathbf{Y}_{n}$, and input it into the FFN to obtain the output $\mathbf{X}_{n}$,
\begin{align}
\mathbf{Y}_{n} = \operatorname{Concat}(\mathbf{Y}_{n, 1}, \mathbf{Y}_{n, 2}, \ldots, \mathbf{Y}_{n, h}, \ldots, \mathbf{Y}_{n, H})\mathbf{W}_{n}
\end{align}
\begin{align}
\mathbf{X}_{n} = \operatorname{ReLU}(\mathbf{Y}_{n}\mathbf{W}_{n,1}+\mathbf{b}_{n,1})\mathbf{W}_{n,2}+\mathbf{b}_{n, 2}
\end{align}
where $\mathbf{W}_{n} \in \mathbb{R}^{HD \times D}$, $\mathbf{W}_{n,1}$ and $\mathbf{W}_{n,2} \in \mathbb{R}^{D \times D}$, $\mathbf{b}_{n,1}$ and $\mathbf{b}_{n,2} \in \mathbb{R}^{D}$, and $H$ is the number of attention heads. We randomly mask the frames and input them into the above transformer encoder, then use an FFN to predict the masked content from the transformer encoder's output, so that the model learns robust contextual information between frames. The output of the transformer encoder here is denoted as $\mathbf{P}$. Other details of the transformer, such as positional encoding and residual connections, can be found in \cite{Vaswani2017}. The protocol of random masking follows \cite{Devlin2019}.
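The concatenation and FFN steps above can be sketched as follows. This is a minimal NumPy illustration with random weights; residual connections and layer normalization are omitted:

```python
import numpy as np

def ffn_layer(Y_heads, Wn, W1, b1, W2, b2):
    """Concatenate the H head outputs, project, and apply the position-wise FFN.

    Y_heads: list of H arrays of shape (L+1, D) -- per-head attention outputs.
    Wn: (H*D, D); W1, W2: (D, D); b1, b2: (D,) -- all random, for illustration.
    """
    Y = np.concatenate(Y_heads, axis=-1) @ Wn          # linear merge of heads
    return np.maximum(Y @ W1 + b1, 0.0) @ W2 + b2      # ReLU gives non-linearity

rng = np.random.default_rng(1)
H, L, D = 3, 8, 16
Y_heads = [rng.normal(size=(L + 1, D)) for _ in range(H)]
Wn = rng.normal(size=(H * D, D))
W1, W2 = rng.normal(size=(D, D)), rng.normal(size=(D, D))
b1, b2 = rng.normal(size=D), rng.normal(size=D)
X_out = ffn_layer(Y_heads, Wn, W1, b1, W2, b2)
print(X_out.shape)  # (9, 16)
```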
\subsection{Generating Positive-Negative Frame Mask}
Not all frames of a piece of music play an equivalent role in characterizing a song. Randomly masking the frames of music is a straightforward way to avoid the adverse effect of inconsequential frames. However, this may mask the critical frames of the music, resulting in an inaccurate learned representation. Simultaneously, trivial frames will be retained and encoded, which deteriorates the learned representation to a certain extent. Therefore, it is indispensable to approximately quantify the importance of a single frame to the entire piece so that frames can be masked selectively. We argue that the crucial frames help us learn a more distinct music representation, so that the network can identify different music more efficiently and accurately. As described above, noisy or inessential parts are detrimental to the music representation, so reducing the effect of non-critical frames is necessary. Hence, we generate positive and negative masks to create augmented positive and counterfactual negative representations. The contrastive learning loss functions then maximize the agreement between positives and the distance between positives and negatives. More details can be seen in Section \ref{loss}.
We design an asymmetrical module to obtain the positive-negative mask. First, we randomly select two fragments from the same raw music waveform. After applying several augmentation approaches, we get their Log-Mel spectrograms, each produced by stacking many frames. Before inputting them into the transformer encoder, we add a [CLS] token vector in front of their frame sequences. As illustrated above, this vector $\mathbf{c}$ will contain the information of all frames within a sequence after being encoded by the transformer encoder. The first tokens of the two branches are $\mathbf{c}^{'}$ and $\mathbf{c}^{''}$ respectively. We use the query vectors of $\mathbf{c}^{'}$ and $\mathbf{c}^{''}$ to calculate attention scores against the keys of the frames $\mathbf{F}^{''}$; these scores are used to select a certain percentage of frames to be masked. Specifically, within the last layer of the transformer encoder, we take out $\mathbf{CLS}_{h}^{'Q}$ and $\mathbf{CLS}_{h}^{''Q}$ of the added tokens in both branches. Then, we utilize these queries to calculate attention scores with $\mathbf{F}_{h}^{''K}$,
\begin{equation}
\begin{split}
\mathbf{s} = \frac{1}{2}\sum_{h=1}^{H} \Big(\operatorname{Softmax}\Big(\mathbf{CLS}_{h}^{'Q} \cdot \frac{\mathbf{F}_{h}^{''K}}{\sqrt{D}}\Big)\\
+ \operatorname{Softmax}\Big(\mathbf{CLS}_{h}^{''Q} \cdot \frac{\mathbf{F}_{h}^{''K}}{\sqrt{D}}\Big)\Big)
\label{scores}
\end{split}
\end{equation}
where $\mathbf{F}_{h}^{''K}$ contains the keys in the $h$th attention head, $\mathbf{CLS}_{h}^{'Q}$ and $\mathbf{CLS}_{h}^{''Q}$ $\in \mathbb{R}^{1 \times D}$, $\mathbf{F}_{h}^{''K} \in \mathbb{R}^{D \times L}$, and $\mathbf{s} \in \mathbb{R}^{1 \times L}$. Frames with high values in $\mathbf{s}$ are crucial to both music fragments. According to $\mathbf{s}$, we can screen out a certain proportion of the frames with the lower attention weights. The remaining crucial frames preserve both global and local information, since the two segments are located at different positions of the whole track and these frames are critical to both segments. Specifically, we rank $\mathbf{s}$ in ascending order and set the value ranked at ratio $\boldsymbol{r}$ as the threshold $\boldsymbol{t}$. We set the ratio $\boldsymbol{r}$ to 10\% as the default value. We obtain the positive mask matrix $\mathbf{M} = (\mathbf{m}_{1}, \mathbf{m}_{2}, \ldots, \mathbf{m}_{i}, \ldots, \mathbf{m}_{L})$ as follows,
\begin{equation}
\mathbf{m}_{i} = \begin{cases}\mathbf{0}, & \mathbf{s}_{i} < \boldsymbol{t} \\
\mathbf{e}, & \text{otherwise} \end{cases}
\end{equation}
where $\mathbf{e}$ is a unit vector and $\mathbf{0}$ is a zero vector. The negative mask is $\overline{\mathbf{M}} = 1 - \mathbf{M}$. We apply the positive and negative masks to the input frames $\mathbf{F}^{''}$ to obtain the augmented positive frames $\mathbf{F}_{pos}^{''}$ and counterfactual negative frames $\mathbf{F}_{neg}^{''}$ respectively. The $\mathbf{F}_{pos}^{''}$ and $\mathbf{F}_{neg}^{''}$ are then encoded and projected into the positive representation $\mathbf{Z}^{''}_{pos}$ and negative representation $\mathbf{Z}^{''}_{neg}$.
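A minimal sketch of this thresholding step, assuming the scores $\mathbf{s}$ have already been computed; the exact handling of ties at the threshold is an implementation choice:

```python
import numpy as np

def pos_neg_masks(s, r=0.10):
    """Positive/negative frame masks from importance scores s.

    s: (L,) attention-based frame importance; r: fraction of the lowest-scoring
    frames to drop from the positive view (10% by default, as in the text).
    """
    L = s.shape[0]
    k = min(int(round(r * L)), L - 1)
    t = np.sort(s)[k]                     # threshold: value ranked at ratio r
    M = (s >= t).astype(float)            # m_i = 0 where s_i < t, else kept
    return M, 1.0 - M                     # negative mask is the complement

s = np.array([0.9, 0.05, 0.4, 0.8, 0.01, 0.3, 0.7, 0.6, 0.2, 0.5])
M, M_neg = pos_neg_masks(s, r=0.2)
print(M.astype(int))  # [1 0 1 1 0 1 1 1 1 1]: the two lowest scores are dropped
```

The masks are then applied frame-wise, e.g. `F_pos = F * M[:, None]`, matching the element-wise masking of $\mathbf{F}^{''}$.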
\subsection{Encoder and Projection Head}\label{encoder}
Following the common setting in contrastive learning \cite{Chen2020, Grill2020, Spijkervet2021}, we apply a neural network encoder $f(\cdot)$ to extract representation vectors from augmented examples and use an MLP with one hidden layer as the projection head $g(\cdot)$ to map representations to the latent space where the contrastive loss is applied. We adopt a Fully Convolutional Network (FCN) \cite{Choi2016} as our base encoder. The dimensionality of the representation vectors is $D_{e}$ = 512 from the encoder and $D_{p}$ = 256 from the projection head.
\subsection{Pre-training Objective Function}
\label{loss}
To train the model, we adopt the Huber loss \cite{Girshick2015} and the Barlow Twins loss \cite{Zbontar2021} as our pre-training objectives. In Section \ref{trm}, a prediction layer predicts the randomly disturbed input from the output of the transformer encoder. Let the set $\mathbf{I}$ contain the indices of all masked frames. We calculate the prediction loss $\mathcal{L}_{pred}$ as follows,
\begin{equation}
\mathcal{L}_{pred} = \sum_{i \in \mathbf{I}} \sum_{j=0}^{D} \operatorname{smooth}_{L_{1}}(\mathbf{X}_{i,j} - \mathbf{P}_{i,j}),
\end{equation}
\begin{equation}
\label{smooth}
\operatorname{smooth}_{L_{1}}(x) = \begin{cases}0.5 \cdot x^{2}, & |x|<1 \\ |x|-0.5 , & otherwise \end{cases}
\end{equation}
The L2 loss is more sensitive to outliers due to the square function. To stabilize training, we follow \cite{Girshick2015} and use the L1 loss when $|x|$ is larger than 1, as in Equation \ref{smooth}, so that $\mathcal{L}_{pred}$ is less sensitive to outliers. In Section \ref{encoder}, we feed a batch of $\mathbf{F}^{'}$ and the augmented positive and counterfactual negative versions of a batch of $\mathbf{F}^{''}$ into the encoder and projection head to respectively obtain $\mathbf{Z}^{'}$, $\mathbf{Z}^{''}_{pos}$, $\mathbf{Z}_{neg}^{''} \in \mathbb{R}^{B \times D_{p}}$, where $B$ is the batch size. $\mathbf{Z}^{'}$ and $\mathbf{Z}^{''}_{pos}$ are treated as positive samples in the contrastive space, while $\mathbf{Z}^{''}_{neg}$ provides the negative samples. We compute the contrastive loss between $\mathbf{Z}^{'}$ and $\mathbf{Z}^{''}_{pos}$, denoted as $\mathcal{L}_{pos}$, in the following manner,
\begin{equation}
\mathcal{L}_{pos} = \sum_{i=0}^{D_{p}}(1 - \mathbf{U}_{i,i})^{2} + \lambda \sum_{i=0}^{D_{p}} \sum_{j \neq i}^{D_{p}} \mathbf{U}_{i, j}^{2}
\label{cont}
\end{equation}
where $\mathbf{U} \in \mathbb{R}^{D_{p} \times D_{p}}$ is the cross-correlation matrix between $\mathbf{Z}^{'}$ and $\mathbf{Z}_{pos}^{''}$. $\mathcal{L}_{pos}$ is the same as BTLoss \cite{Zbontar2021}. Meanwhile, we design a contrastive loss for the negative samples,
\begin{equation}
\mathcal{L}_{neg} = \lambda \sum_{i=0}^{D_{p}} \mathbf{V}_{i,i}^{2}\label{contneg}
\end{equation}
where $\mathbf{V} \in \mathbb{R}^{D_{p} \times D_{p}}$ is the cross-correlation matrix between $\mathbf{Z}^{'}$ and $\mathbf{Z}_{neg}^{''}$, and $\lambda$ is a hyperparameter that trades off the importance of $\mathcal{L}_{neg}$ and the second term of $\mathcal{L}_{pos}$. The losses $\mathcal{L}_{pos}$ and $\mathcal{L}_{neg}$ contrast data samples along the feature dimension, which prevents trivial constant solutions. Our final loss is $\mathcal{L} = \mathcal{L}_{pred} + \mathcal{L}_{pos} + \mathcal{L}_{neg}$.
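The loss components above can be sketched as follows. The per-feature standardization inside the cross-correlation and the default value of $\lambda$ follow the Barlow Twins convention and are illustrative assumptions, not values specified here:

```python
import numpy as np

def smooth_l1(x):
    # Huber/smooth-L1: quadratic near zero, linear beyond |x| = 1.
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x ** 2, ax - 0.5)

def cross_corr(Za, Zb):
    # Cross-correlation matrix over the batch after per-feature standardization.
    Za = (Za - Za.mean(0)) / Za.std(0)
    Zb = (Zb - Zb.mean(0)) / Zb.std(0)
    return (Za.T @ Zb) / Za.shape[0]

def loss_pos(U, lam=5e-3):
    # Pull the diagonal of U toward 1 and the off-diagonal toward 0.
    off = U - np.diag(np.diag(U))
    return ((1.0 - np.diag(U)) ** 2).sum() + lam * (off ** 2).sum()

def loss_neg(V, lam=5e-3):
    # Push positives and counterfactual negatives apart on the diagonal.
    return lam * (np.diag(V) ** 2).sum()

rng = np.random.default_rng(0)
Zp, Zpos, Zneg = (rng.normal(size=(48, 16)) for _ in range(3))
total = (smooth_l1(rng.normal(size=5)).sum()
         + loss_pos(cross_corr(Zp, Zpos))
         + loss_neg(cross_corr(Zp, Zneg)))
print(float(total))
```

When the two projections are identical, the diagonal of the cross-correlation matrix is exactly 1 and the first term of $\mathcal{L}_{pos}$ vanishes, which is the intended optimum for positives.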
\section{Experimental Evaluation}
The primary purpose of unsupervised learning is to learn generalized and transferable representations. Therefore, it is necessary to verify whether our learned music representation offers strong generalization and transferability. Following the commonly used evaluation protocol for pre-trained representations \cite{Chen2020, He2020}, we first perform linear evaluation, semi-supervised learning, and transfer learning for music classification. We then transfer the learned representation to another dataset for cover song identification.
\subsection{Experimental Setting}
\subsubsection{Dataset.} We experiment with several publicly available datasets often used for classification and cover song identification. Details of the datasets are as follows:
\begin{itemize}[leftmargin=*]
\item \textbf{MagnaTagATune (MTAT):} The annotations of MagnaTagATune were collected by Edith Law’s TagATune game \cite{Law2009}. The dataset includes 25863 pieces of music, each 29 seconds long, and each track has multiple tags. The clips span a broad range of genres like Classical, New Age, Electronica, Rock, Pop, World, Jazz, Blues, Metal, Punk, and more. We split it into train/valid/test with the same ratio as \cite{Spijkervet2021}, obtaining 18706/1825/5329 tracks;
\item \textbf{GTZAN\footnote{http://marsyas.info/downloads/datasets.html} : } The dataset consists of approximately 1000 audio tracks, each 30 seconds long. It contains ten genres.
\item \textbf{Second Hand Songs 100K (SHS100K):} \textbf{A cover version is a new performance or recording by a musician other than the original performer of the song.} We crawled raw audio through youtube-dl\footnote{https://github.com/ytdl-org/youtube-dl} using the URLs provided on GitHub\footnote{https://github.com/NovaFrost/SHS100K2}. Due to copyright restrictions, we were able to obtain 9733 songs from YouTube. Every song has many cover versions; all cover versions of the 9733 songs add up to 104612. Following the experimental setting of \cite{Yu2019}, we selected the songs with more than 5 cover versions for training. We randomly selected tracks from the remaining records to construct two subsets for validation and testing, respectively. The ratio among training, validation, and testing sets is 8:1:1. We get 6000 songs with 84153 versions for training, and 1941 songs with their 10456 cover songs for testing;
\item \textbf{Covers80\footnote{https://labrosa.ee.columbia.edu/projects/coversongs/covers80/}:} There are 80 songs in Covers80, and every song has 2 cover versions.
\end{itemize}
\begin{table}[t]
\centering
\caption{The statistics of all datasets.}
\begin{tabular}{c c c c}
\toprule
Dataset & train & validation & test \\
\midrule
MagnaTagATune & 18,706 & 1,825 & 5,329 \\
GTZAN & 930 & - & - \\
SHS100K & 9,999 & - & 1,004 \\
Covers80 & - & - & 160 \\
\bottomrule
\end{tabular}
\label{dataset}
\end{table}
\subsubsection{Metrics.} To evaluate our learned music representation, we follow the commonly used linear evaluation setting \cite{Kolesnikov2019, Chen2020, Spijkervet2021, Bachman2019}, where a linear classifier is trained on top of a pre-trained encoder whose parameters are not updated. Moreover, we train a multi-layer perceptron (MLP) to observe whether the performance improves with a deeper classifier. We choose ROC-AUC and PR-AUC to measure the effect of the classifier comprehensively. For the cover song identification task, we use the widely used evaluation metrics\footnote{https://www.musicir.org/mirex/wiki/2020:Audio\_Cover\_Song\_Identification} mean average precision (MAP), mean rank of the first correctly identified cover (MR1), and precision at 10 (Precision@10). Precision@10 is the mean ratio of identical versions recognized successfully in the top-10 ranking list, obtained by ranking all records by the similarity between the query and the references. We calculate scalar products between two music representations to judge their similarity.
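For concreteness, Precision@10 and MR1 can be computed from a similarity matrix as sketched below. This is an illustrative implementation; it assumes every query has at least one true cover among the references:

```python
import numpy as np

def rank_metrics(sim, q_labels, r_labels):
    """Precision@10 and MR1 from a query-by-reference similarity matrix.

    sim: (Q, R) scalar-product similarities; labels identify song versions.
    """
    p10, mr1 = [], []
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])                # rank references by similarity
        rel = r_labels[order] == q_labels[i]       # correct covers in rank order
        p10.append(rel[:10].mean())                # hit ratio in the top-10 list
        mr1.append(1 + int(np.argmax(rel)))        # rank of first correct cover
    return float(np.mean(p10)), float(np.mean(mr1))

# Tiny worked example: two queries, four references.
q_labels = np.array([0, 1])
r_labels = np.array([0, 1, 0, 1])
sim = np.array([[0.9, 0.1, 0.8, 0.2],
                [0.1, 0.9, 0.2, 0.8]])
print(rank_metrics(sim, q_labels, r_labels))  # (0.5, 1.0)
```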
\subsubsection{Implementation Details}
The transformer encoder in the Predicting Module shares its parameters with the transformer encoder in the Positive-Negative Mask Generation Module. The basic encoders, a fully convolutional network with 4 layers \cite{Choi2016}, share parameters between the two branches. The encoder outputs a 512-dimensional feature as the representation. An MLP serving as the projection head maps the representation to a smaller space; its output dimension is 256. It is worth mentioning that the encoders and projection heads of all unsupervised methods used in the experiments are identical. We use the Adam optimizer with a learning rate of 0.0003 and a weight decay of 1.0 $\times 10^{-6}$; other hyperparameters are kept at their defaults. We set the batch size to 48 and train for 300 epochs, taking about 30 hours on a GPU. At the spectrogram extraction stage, the hop size is 128 during time-frequency transformation. The STFT is performed using a 256-point FFT, and the number of mel-bands is set to 128. The transformer encoder consists of 3 layers, and its multi-head attention sub-layer has 3 heads.
\begin{table}[t]
\centering
\caption{The performance of some advanced supervised and self-supervised methods in music classification tasks is all trained on the MagnaTagATune dataset. For the unsupervised models, the scores are obtained by linear classifiers. * represents the performance of an MLP classifier.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c c c c c}
\toprule
& Method & Param & ROC-AUC & PR-AUC \\
\midrule
\multirow{4}{*}{Supervised}
& 1D-CNN & 382K & 85.6 & 29.6 \\
& SampleCNN & 2394K & 88.6 & 34.4 \\
& Musicnn & 228K & 89.1 & 34.9 \\
& Timber CNN & 220K & 89.3 & - \\
& FCN-4 & 370K & 89.4 & - \\
\midrule
\multirow{5}{*}{Self-Supervised}
& MoCo & 370K & 87.0 & 32.1 \\
& MoCo v2 & 370K & 87.9 & 33.2 \\
& CLMR & 2394K & 88.5 & 35.4 \\
& SimCLR & 370K & 88.7 & 34.8 \\
& BYOL & 370K & 89.1 & 35.8 \\
& PEMR(ours) & 370K & \textbf{89.6} & \textbf{36.9} \\
\midrule
\multirow{5}{*}{Self-Supervised}
& CLMR* & 2394K & 89.3 & 35.9 \\
& MoCo* & 370K & 89.5 & 36.3 \\
& MoCo v2* & 370K & 89.8 & 36.6 \\
& SimCLR* & 370K & 89.8 & 36.9 \\
& BYOL* & 370K & 89.9 & 37.0 \\
& PEMR(ours)* & 370K & \textbf{90.3} & \textbf{38.0} \\
\bottomrule
\end{tabular}}
\label{modPer}
\end{table}
\subsection{Music Classification}
We select some traditional and advanced supervised baselines in music classification, as well as some state-of-the-art self-supervised baselines. To make the results more persuasive, we implement several advanced contrastive learning models for music representation.
\subsubsection{Linear Evaluation.}
Table \ref{modPer} shows the performance comparison of many approaches in the music classification task, including supervised and self-supervised methods. We follow \cite{Chen2020, He2020} to calculate the number of parameters of encoders in self-supervised methods, and use the thop package\footnote{https://pypi.org/project/thop/} to obtain the model size. Following a standard linear evaluation setting \cite{Chen2020, Grill2020, He2020}, we use the training set of MTAT to pre-train the models. After that, we train linear classifiers on top of the frozen pre-trained encoder and evaluate on the MTAT test set. The applied encoders are the same for all methods except CLMR, which uses SampleCNN \cite{Lee2019} and takes the raw waveform as model input. To ensure a faithful comparison, the metric values of the baselines are copied directly from their papers, where Timber CNN and FCN-4 do not report PR-AUC values. Our method achieves the best performance under the linear evaluation protocol. We attribute the empirical results to the positive-negative frame mask, which makes the network preserve context while concentrating on the critical parts of the music. The comparison between CLMR and other self-supervised methods demonstrates the advantage of the Log-Mel spectrogram. In addition, we train the MLP to observe whether the performance can be improved when introducing more parameters. The experimental results of PEMR reach the best values of 90.3\% in ROC-AUC and 38.0\% in PR-AUC.
\subsubsection{Semi-Supervised Learning.}
Labeling data for deep learning problems often requires skilled human agents. As a result, the costs associated with the labeling process can make large fully labeled training sets infeasible, while obtaining unlabeled data is relatively inexpensive. In such situations, semi-supervised learning is of great practical value. To estimate whether our learned music representation still performs well in a semi-supervised classification task, we decrease the percentage of labeled training data during the fine-tuning stage. Specifically, we randomly sample 1\% and 10\% labeled data from the MTAT training dataset, as \cite{Beyer2019, Chen2020} do. We feed these few labeled data directly to the pre-trained base encoders and linear classifiers for training. The evaluation results of the previous approaches and PEMR are shown in Table \ref{semiPer}. Compared with other musical self-supervised methods, PEMR generates a more generalized music representation even when labeled music for learning is inadequate. Besides, we randomly initialize the base encoder FCN with a linear classifier and train it with the same sampled labeled data. Our performance substantially exceeds the model trained from scratch. These empirical results prove the significance of our pre-trained music representation in a label-scarce scenario.
\begin{table}[t]
\centering
\caption{We fine-tune the pre-trained encoders and linear classifiers with different quantities of labeled data.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c c c c c}
\toprule
\multirow{3}{*}{Method} & \multicolumn{4}{c}{Label Fraction} \\
& \multicolumn{2}{c}{1\%} & \multicolumn{2}{c}{10\%} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& ROC-AUC & PR-AUC & ROC-AUC & PR-AUC \\
\midrule
FCN & 73.2 & 19.7 & 86.3 & 30.8 \\
\midrule
MoCo & 74.3 & 18.6 & 87.4 & 33.1 \\
MoCo v2 & 75.3 & 20.1 & 87.2 & 32.4 \\
CLMR & 77.3 & 22.6 & 87.0 & 32.9 \\
SimCLR & 73.1 & 18.6 & 87.2 & 32.5 \\
BYOL & 76.2 & 22.4 & 87.6 & 33.4 \\
PEMR(ours) & \textbf{77.3} & \textbf{24.4} & \textbf{88.0} & \textbf{34.0} \\
\bottomrule
\end{tabular}}
\label{semiPer}
\end{table}
\subsubsection{Transfer Learning}
\begin{table}[t]
\centering
\caption{We pre-train the models in the GTZAN dataset and transfer their learned parameters to another dataset for training. We evaluate the transfer capacity in both linear evaluation and fine-tuning settings.}
\begin{tabular}{c c c c c}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{c}{Linear Evaluation} & \multicolumn{2}{c}{Fine-tuned} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& ROC-AUC & PR-AUC & ROC-AUC & PR-AUC \\
\midrule
FCN & - & - & 89.8 & 36.8 \\
\midrule
MoCo & 74.8 & 18.9 & 89.6 & 36.6 \\
MoCo v2 & 78.4 & 21.3 & 89.8 & 36.8 \\
CLMR & 81.9 & 26.2 & 89.7 & 36.1 \\
BYOL & 86.7 & 32.0 & 89.3 & 36.0 \\
SimCLR & 86.8 & 32.1 & 89.6 & 36.6 \\
PEMR(ours) & \textbf{87.9} & \textbf{33.8} & \textbf{90.0} & \textbf{37.3} \\
\bottomrule
\end{tabular}
\label{tab:transCLS}
\end{table}
Specifically, we adopt the whole GTZAN dataset for pre-training the model and employ the MTAT training dataset to train classifiers that are evaluated on the MTAT testing dataset. To reveal the superiority of PEMR more clearly, we compare several self-supervised methods for music representation, including previous approaches and ours. As shown in Table \ref{tab:transCLS}, the classifiers based on the encoder pre-trained by PEMR perform best under both linear evaluation and fine-tuning settings. More importantly, in the fine-tuning setting we surpass the same network trained in the supervised learning paradigm. Furthermore, although we freeze the pre-trained encoder when training the classifier, we still obtain competitive results of 87.9\% in ROC-AUC and 33.8\% in PR-AUC, comparable to the supervised FCN trained from scratch.
\subsection{Cover Song Identification}
The task of cover song identification is to identify alternative versions of previous musical works. Since different versions of a song are performed by various artists or musicians and instruments, they may vary significantly in pitch, rhythm, structure, and even in fundamental aspects related to the harmony and melody of the song. Recently, cover song recognition has attracted attention because it has the potential to serve as a benchmark for other musical similarity and retrieval algorithms. Chord analysis, melodic extraction, and musical similarity are all closely related to cover song identification, another area of music analysis where artificial intelligence is used. Before \cite{Xu2018}, research primarily involved hand-crafted features, which was intolerable when facing large-scale datasets. Given this, \cite{Xu2018} proposed deep learning methods that learn to extract features efficiently for cover song identification. \cite{Yu2019} and \cite{Yu2020} devised TPP-Net and CQT-Net, which can naturally be adapted to deal with key transposition in cover songs, and designed a training scheme to make their models more robust. We select these advanced models as our baselines for the cover song identification task. Their main goal is to learn high-quality representations of songs using supervised methods; there is still great room for applying self-supervised learning to this task.
Pre-training for music representation is greatly meaningful if the pre-trained representation can be transferred to other downstream tasks whose training datasets have little labeled data. After pre-training on the MTAT training dataset, we obtain the encoder and fine-tune it with datasets from the cover song identification domain. The details are as follows: 1) the network we train consists of the FCN and CQT-Net; 2) the SHS100K training set is used to fine-tune the network; 3) we extract music representations through the trained network and evaluate their performance on the SHS100K testing set and the Covers80 dataset. It is worth mentioning that \textbf{many songs could not be downloaded from YouTube due to invalid copyright}, resulting in a large difference between our downloaded SHS100K and the SHS100K used in previous methods. We therefore randomly sample data from SHS100K to construct a subset, namely SHS100K-SUB, and split it into train, validation, and test sets with the same ratio as \cite{Yu2019, Yu2020}. Table \ref{tab:coverPer} exhibits our experimental results. We randomly initialize the parameters of the FCN and CQT-Net; after training on the SHS100K-SUB training set, their performance on the SHS100K-SUB test set or Covers80 cannot surpass the individual CQT-Net. Nevertheless, we improve the model's performance on both datasets by incorporating the pre-trained FCN with CQT-Net.
\begin{table}[t]
\centering
\caption{Transfer learning for cover song identification. The music representation encoder used in this task is pre-trained with the out-of-domain dataset.}
\begin{tabular} {c c c c c c c}
\toprule
Method & & & & MAP & Precision@10 & MR1 \\
\midrule
& & & & \multicolumn{3}{c}{SHS100K-SUB} \\
\midrule
Ki-Net & & & & 0.112 & 0.156 & 68.33\\
TPP-Net & & & & 0.267 & 0.217 & 35.75 \\
FCN & & & & 0.289 & 0.230 & 34.86 \\
CQT-Net & & & & 0.446 & 0.323 & \textbf{18.09} \\
\midrule
\multicolumn{6}{l}{\textit{Fine-tuned:}} \\
(rand. FCN) + CQT-Net & & & & 0.433 & 0.317 & 21.13 \\
(pre. FCN) + CQT-Net & & & & \textbf{0.484} & \textbf{0.341} & 20.68 \\
\midrule
& & & & \multicolumn{3}{c}{Covers80} \\
\midrule
Ki-Net & & & & 0.368 & 0.052 & 32.10\\
TPP-Net & & & & 0.5 & 0.068 & 17.08 \\
FCN & & & & 0.529 & 0.073 & 12.50 \\
CQT-Net & & & & 0.666 & 0.077 & 12.20 \\
\midrule
\multicolumn{6}{l}{\textit{Fine-tuned:}} \\
(rand. FCN) + CQT-Net & & & & 0.624 & 0.079 & 14.43 \\
(pre. FCN) + CQT-Net & & & & \textbf{0.668} & \textbf{0.081} & \textbf{10.52} \\
\bottomrule
\end{tabular}
\label{tab:coverPer}
\end{table}
\subsection{Ablation Study}
This section analyzes the impact on learning high-quality music representation of the augmented positive representation and the counterfactual negative representation generated from the positive-negative mask. Besides, we vary the vital parameter \textit{ratio}, which controls the proportion of masked frames in the input data when creating the positive-negative mask. Specifically, for each ratio $\boldsymbol{r} \in$ [0.01, 0.1, 0.3, 0.5], we pre-train the network from scratch for 300 epochs. The experimental results are shown in Table \ref{ablation} and plotted in Figure \ref{ratio}, respectively. We find that:
\begin{itemize}[leftmargin=*]
\item We pre-train the baseline model without any masking strategy; Based on the baseline, we only add a positive mask into the model. Their experimental results are shown in Table \ref{ablation}. Generating the augmented positive frames and the counterfactual negative frames is beneficial for the model to learn effective music representation.
\item In Figure \ref{ratio}, the model achieves the best result when $\boldsymbol{r}$ is 0.1, while suffering performance drops if $\boldsymbol{r}$ continues to grow. We ascribe this phenomenon to: 1) the mild effect of the mask. When $\boldsymbol{r}$ is too small, for example 0.01, few low-score frames are selected to construct the negative frame sequence $\mathbf{F}_{neg}^{''}$. Most of the noisy or inessential frames are retained in the positive frame sequence $\mathbf{F}_{pos}^{''}$, so the model cannot focus on the crucial elements. 2) the excessive effect of the mask. The negative frame sequence will contain a large number of crucial frames if $\boldsymbol{r}$ is too large. That destroys the positive frame sequence, causing the model to learn an inaccurate music representation.
\end{itemize}
\begin{table}[t]
\centering
\caption{We experiment with a baseline contrastive framework without the mask. Then, we apply the positive mask and the negative mask in order. }
\begin{tabular}{c c c c c}
\toprule
Method & & ROC-AUC & & PR-AUC \\
\midrule
w.o. mask & & 89.1 & & 36.2 \\
pos. mask & & 89.4 & & 36.6 \\
pos.+ neg. mask (PEMR) & & 89.6 & & 36.9 \\
\bottomrule
\end{tabular}
\label{ablation}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/Ratio.jpg}
\caption{
The variation of the performance with different masked ratios.
}
\label{ratio}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/epochs.jpg}
\caption{Linear music classifier trained on the top of our pre-trained encoder pre-trained with different epochs.}
\label{epochs}
\end{figure}
\subsection{Training Epochs}
Figure \ref{epochs} shows the impact of the number of training epochs. When training time is relatively short, the number of training epochs is a critical factor influencing the final performance. With more training epochs, the gaps between different epoch counts decrease or disappear.
\section{Conclusion and Future Work}
In this paper, we propose to mask the critical and the unimportant or noisy regions of music under the contrastive learning framework, so that the model can concentrate on the crucial parts of the music and thus learn a more remarkable and effective representation. We devise an asymmetrical module that obtains the positive-negative mask by utilizing the transformer encoder's attention weights. Our pre-trained representation is applied to music-related downstream tasks: music classification and cover song identification. The experimental results on both tasks demonstrate that the positive-negative mask helps the model learn a more effective music representation with strong generalization ability and transferability.
However, there are still many challenges in the Music Information Retrieval (MIR) community beyond music classification and cover song identification, such as music recommendation, music source separation, instrument recognition, and music generation. Applying pre-trained music representations to these areas is a promising way to address such challenges. We look forward to more research work in this field.
\section{Acknowledgments}
The work is supported by the Zhejiang Natural Science Foundation (LR19F020006) and the National Natural Science Foundation of China (No. 61836002, 62072397, 62037001).
\clearpage
\balance
\bibliographystyle{ACM-Reference-Format}
In many applications measured data can be represented in a matrix
$X_{m\times n},$ for which only a relatively small number of entries
are observed. The problem is to ``complete'' the matrix based on the
observed entries, and has been dubbed the matrix completion problem
~\cite{cai-2008,candes:recht,recht-2007,candes-2009,monti-09}. The
``Netflix'' competition is a primary example, where the data is the basis
for a recommender
system. The rows correspond to viewers and the columns to movies, with
the entry $X_{ij}$ being the rating $\in\{1,\ldots,5\}$ by viewer $i$ for movie
$j$. There are 480K viewers and 18K movies, and hence 8.6 billion
($8.6 \times 10^9$) potential
entries. However, on average each viewer rates about 200 movies, so
only 1.2\% or $10^8$ entries are observed.
The task is to predict the ratings viewers would give for the movies
they have not yet rated.
These problems can be phrased as learning an unknown parameter (a matrix $Z_{m \times n}$) of very high dimensionality, based on very few observations. In order for such inference to be meaningful, we assume that the parameter $Z$ lies in a much lower dimensional manifold. In this paper, as is relevant in many real life
applications, we assume that $Z$ can be
well represented by a matrix of low rank, i.e. $Z\approx
V_{mk}G_{kn}$, where $k\ll\min(n,m)$. In this recommender system
example, low rank structure suggests that movies can be grouped into a
small number of ``genres'', with $G_{\ell j}$ the relative score for
movie $j$ in genre $\ell$. Viewer $i$ on the other hand has an
affinity $V_{i\ell}$ for genre $\ell$, and hence the modeled score for
viewer $i$ on movie $j$ is the sum $\sum_{\ell=1}^kV_{i\ell}G_{\ell
j}$ of genre affinities times genre scores.
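A toy numerical illustration of this low-rank model (the sizes below are arbitrary, chosen only to make the rank property visible):

```python
import numpy as np

# Toy version of the genre model: k latent genres, viewer affinities V and
# genre scores G; the modeled rating is Z[i, j] = sum_l V[i, l] * G[l, j].
rng = np.random.default_rng(0)
m, n, k = 6, 5, 2                       # viewers, movies, genres (k << min(m, n))
V = rng.normal(size=(m, k))             # V[i, l]: affinity of viewer i for genre l
G = rng.normal(size=(k, n))             # G[l, j]: score of movie j in genre l
Z = V @ G

# A product of an m-by-k and a k-by-n matrix has rank at most k.
print(Z.shape, np.linalg.matrix_rank(Z))  # (6, 5) 2
```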
Very recently
~\cite{candes:recht,candes-2009,monti-09} showed theoretically that
under certain assumptions on the entries of the matrix, locations and
proportion of unobserved entries, the true underlying matrix can be
recovered with very high accuracy. Typically we view the observed
entries in $X$ as the corresponding entries from $Z$ contaminated with noise.
For a matrix $X_{m\times n}$ let $\Omega\subset \{1,\ldots,m\}\times
\{1,\ldots,n\}$ denote the indices of observed entries. We consider the
following optimization problem:
\begin{eqnarray}
\mathrm{minimize} && \mathrm{rank} (Z) \nonumber \\
\mathrm{subject\;\;to} && \sum_{(i,j)\in \Omega} (Z_{ij}-X_{ij})^2 \leq \delta,
\label{crit:one}
\end{eqnarray}
where $\delta\geq 0$ is a regularization parameter controlling the
tolerance in training error. The rank constraint in (\ref{crit:one})
makes the problem for general $\Omega$
combinatorially hard \cite{Nat-03}. For a fully-observed $X$, on the
other hand, the solution is given by the singular
value decomposition (SVD) of $X$. The following seemingly small
modification to (\ref{crit:one})
\begin{eqnarray}
\mathrm{minimize} && \|Z\|_* \nonumber \\
\mathrm{subject\;\;to} && \sum_{(i,j)\in \Omega} (Z_{ij}-X_{ij})^2 \leq \delta
\label{crit:relax}
\end{eqnarray}
makes the problem convex \cite{fazel-thes}. Here $\|Z\|_*$ is the
nuclear norm, or the sum of the singular values of $Z$. Under many
situations the nuclear norm is an effective convex relaxation to the rank
constraint as explored in
~\cite{fazel-thes,candes:recht,candes-2009,recht-2007}. Optimization
of (\ref{crit:relax}) is a semi-definite programming problem
\cite{BV2004,fazel-thes} and can be solved efficiently for small
problems, using modern convex optimization software like SeDuMi and
SDPT3. However, since these algorithms are based on second order methods
\cite{int-point}, the problems become prohibitively expensive if the
dimensions of the matrix exceed a hundred \cite{cai-2008}. In this paper we
propose an algorithm that scales to large problems with $m, n\approx
10^4$--$10^5$ or even larger. We obtain a rank-11 solution to
(\ref{crit:relax}) for a problem of size $(5\times 10^5)\times (5\times
10^5)$ and $|\Omega|=10^4$ observed entries in under 11 minutes
in MATLAB. For the same sized matrix with
$|\Omega|=10^5$ we obtain a rank-$52$ solution in under 80 minutes.
\cite{candes-2009,cai-2008,candes:recht} consider the criterion
\begin{eqnarray}
\mathrm{minimize} && \|Z\|_* \nonumber \\
\mathrm{subject\;\;to} && Z_{ij}=X_{ij}, \; \forall (i,j)\in \Omega
\label{crit:candes}
\end{eqnarray}
When $\delta=0$, criterion (\ref{crit:relax}) is equivalent to
(\ref{crit:candes}), in that it requires the training error to be
zero.
\cite{candes-2009,candes:recht}
further develop theoretical properties establishing the equivalence of
the rank minimization and the nuclear norm minimization problems
(\ref{crit:one},\ref{crit:candes}). Cai et al.~\cite{cai-2008}
propose a first-order singular-value-thresholding
algorithm scalable to large matrices for the problem
(\ref{crit:relax}) with $\delta=0.$ They comment on the problem
(\ref{crit:relax}), with $\delta>0$, and suggest that it becomes
prohibitive for large scale problems. Hence they consider the
$\delta>0$ case to be unsuitable for matrix completion.
We believe that (\ref{crit:candes}) will almost always be too rigid,
as it will force the procedure to overfit.
If minimization of prediction error is our main goal, then the
solution $Z^*$ will typically lie somewhere in the interior of the path
(Figure~\ref{fig:eg-2}), indexed by $\delta$.
In this paper we provide an algorithm for computing solutions of
(\ref{crit:relax}), on a grid of $\delta$ values, based on warm
restarts. The algorithm is inspired by Hastie et al.'s SVD-impute
\cite{svd-imp,olga01:_missin_dna} and is very different from the proximal forward-backward
splitting method of \cite{cai-2008,pfbs1,fpc}, which requires the choice
of a step size. In \cite{fpc}, the SVD step becomes prohibitive,
so some randomized algorithms are used for the computation. Our
algorithm is very different, and by exploiting matrix structure
can solve problems much larger than those in \cite{fpc}.
Our algorithm requires the computation of a low-rank SVD of a matrix (which
is not sparse) at every iteration. Here we crucially exploit the
problem matrix structure:
\begin{eqnarray}
Y= Y_{SP} \;\;(\mbox{Sparse})\quad +\quad Y_{LR}\;\;(\mbox{Low Rank})
\label{decomp}
\end{eqnarray}
In (\ref{decomp}) $Y_{SP}$ has the same sparsity structure as the
observed $X$, and $Y_{LR}$ has the rank $r \ll \min(m,n)$ of the estimated
$Z$. For large scale problems, we use iterative methods based on
Lanczos bidiagonalization with partial re-orthogonalization (as in the
PROPACK algorithm \cite{larsen98:_lancz}), for computing the first few singular
vectors/values of $Y.$ Due to the specific structure of
(\ref{decomp}), multiplication by $Y$ and $Y'$ can both be done in a
cost-efficient way.
\section{Algorithm and Convergence analysis}
\label{sec:nuc-norm-reg}
\subsection{Notation} \label{sec:notation}
We adopt the notation of \cite{cai-2008}. Define a matrix $P_{\Omega}(Y)$ (with dimension $m\times n$)
\begin{eqnarray} \label{notn:proj}
P_{\Omega}(Y)\; (i,j) = \left\{\begin{array}{ll}
Y_{i,j} & \mbox{if $(i,j) \in \Omega$}\\
0 & \mbox{if $(i,j) \notin \Omega$},\end{array} \right.
\end{eqnarray}
which is a projection of the matrix $Y_{m\times n}$ onto the observed entries.
In the same spirit, define the complementary projection
$P^{\perp}_{\Omega}(Y)$ via
$P^{\perp}_{\Omega}(Y)+P_{\Omega}(Y)=Y.$
Using (\ref{notn:proj}) we can rewrite
$\sum_{(i,j) \in \Omega} (Z_{ij}-X_{ij})^2$ as $\|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2$.
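In code, $P_{\Omega}$ and $P_{\Omega}^{\perp}$ are simple masked copies (a sketch; \texttt{omega\_mask} is a boolean matrix marking the observed entries):

```python
import numpy as np

def P_omega(Y, omega_mask):
    """Keep entries of Y at observed positions, zero elsewhere."""
    return np.where(omega_mask, Y, 0.0)

def P_omega_perp(Y, omega_mask):
    """Complementary projection: P_omega(Y) + P_omega_perp(Y) = Y."""
    return np.where(omega_mask, 0.0, Y)
```

With these, the training error $\|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2$ is simply `np.sum(P_omega(Z - X, omega_mask)**2)`.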
\subsection{Nuclear norm regularization}
We present the following lemma, given in \cite{cai-2008}, which forms a basic ingredient in our algorithm.
\begin{lem}
Suppose the matrix $W_{m\times n}$ has rank $r$. The solution to the
convex optimization problem
\begin{eqnarray}
\mathrm{minimize}_Z \quad \mbox{$\frac12$} \|Z-W\|_F^2 + \lambda \|Z\|_*
\label{nuc-norm-basic}
\end{eqnarray}
is given by $\hat W = \mathbf S_\lambda(W)$ where
\begin{eqnarray}
\mathbf S_\lambda(W) \equiv U D_\lambda V'\quad \mbox{ with } \quad D_\lambda=\mathrm{diag}\left[(d_1-\lambda)_+,\ldots,(d_r-\lambda)_+\right],
\label{svt}
\end{eqnarray}
where $W=UDV'$ is the SVD of $W$,
$D=\mathrm{diag}\left[d_1,\ldots,d_r \right]$, and $t_+=\max(t,0).$
\end{lem}
The notation $\mathbf S_\lambda(W)$ refers to {\em soft-thresholding} \cite{DJ95}.
The proof follows by looking at the sub-gradient of the function to be
minimized, and is given in \cite{cai-2008}.
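A direct dense sketch of the soft-thresholding operator $\mathbf S_\lambda$ follows; for large matrices one would instead use a truncated SVD, as discussed later:

```python
import numpy as np

def soft_threshold_svd(W, lam):
    """S_lambda(W) = U diag((d_i - lam)_+) V': shrink every singular
    value of W by lam and clip at zero."""
    U, d, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt
```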
\subsection{Algorithm} \label{sec:algo}
Problem (\ref{crit:relax}) can be written in its equivalent Lagrangian form
\begin{equation}
\mathrm{minimize}_Z \quad \mbox{$\frac12$} \|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2 + \lambda \|Z\|_*
\label{crit:dual}
\end{equation}
Here $\lambda\geq 0$ is a regularization parameter controlling the
nuclear norm of the minimizer $ \hat{Z}_\lambda$ of
(\ref{crit:dual}) (with a 1-1 mapping to $\delta>0$ in (\ref{crit:relax})).
We now present an algorithm for computing a series
of solutions to (\ref{crit:dual}) using warm starts.
Define
$f_\lambda(Z)= \mbox{$\frac12$} \|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2 + \lambda \|Z\|_*$.
\begin{algorithm}
\caption{\textbf{Soft-Impute}} \label{algo1}
\begin{enumerate}
\item Initialize $Z^{\mathrm{old}}=0$ and create a decreasing grid
$\Lambda$ of values $\lambda_1>\ldots>\lambda_K$.
\item For every fixed $\lambda=\lambda_1,\;\lambda_2,\ldots\in\Lambda$
iterate till convergence:
\begin{enumerate}
\item\label{item:1} Compute
$ Z^{\mathrm{new}}\leftarrow\mathbf S_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z^{\mathrm{old}}))$
\item
If $\quad\frac{|f_\lambda(Z^{\mathrm{new}})-f_\lambda(Z^{\mathrm{old}})|}{|f_\lambda(Z^{\mathrm{old}})|}< \epsilon,\quad$ go to step~\ref{item:2}.
\item Assign $Z^{\mathrm{old}}\leftarrow Z^{\mathrm{new}}$ and go to step~\ref{item:1}.
\item\label{item:2} Assign $\hat Z_\lambda\leftarrow Z^{\mathrm{new}}$ and $Z^{\mathrm{old}}\leftarrow Z^{\mathrm{new}}$
\end{enumerate}
\item Output the sequence of solutions $\hat Z_{\lambda_1},\ldots,\hat Z_{\lambda_K}.$
\end{enumerate}
\end{algorithm}
The algorithm repeatedly replaces the missing entries with the current
guess, and then updates the guess by solving~(\ref{crit:dual}).
Figure~\ref{fig:eg-2} shows some examples of solutions using
Algorithm~\ref{algo1} (blue curves). We see test and training error in
the left two columns as a function of the nuclear norm, obtained from
a grid of values $\Lambda$. These error curves show a smooth and very
competitive performance.
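For concreteness, here is a minimal dense sketch of \textbf{Soft-Impute} for a single value of $\lambda$ (the tolerance and iteration cap are illustrative; a scalable version would use the sparse-plus-low-rank structure exploited later):

```python
import numpy as np

def soft_threshold_svd(W, lam):
    U, d, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt

def soft_impute(X, mask, lam, eps=1e-7, max_iter=500):
    """Iterate Z <- S_lam(P_Omega(X) + P_Omega_perp(Z)) until the
    relative change in the objective f_lam is below eps."""
    def f(Z):
        resid = np.where(mask, Z - X, 0.0)
        return 0.5 * np.sum(resid ** 2) + lam * np.linalg.svd(Z, compute_uv=False).sum()
    Z = np.zeros_like(X, dtype=float)
    f_old = f(Z)
    for _ in range(max_iter):
        Z = soft_threshold_svd(np.where(mask, X, Z), lam)  # fill in, then shrink
        f_new = f(Z)
        if abs(f_new - f_old) <= eps * max(abs(f_old), 1.0):
            break
        f_old = f_new
    return Z
```

When all entries are observed, the iteration reaches its fixed point after one soft-thresholded SVD of $X$, matching the fully-observed case discussed in the introduction.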
\subsection{Convergence analysis} \label{conv-ana}
In this section we prove that Algorithm~\ref{algo1} converges to the
solution to~(\ref{crit:relax}).
For an arbitrary matrix $\tilde Z,$ define
\begin{equation}
Q_\lambda(Z|\tilde Z)=\mbox{$\frac12$} \|P_{\Omega}(X)+P_{\Omega}^{\perp}(\tilde Z)-Z\|_F^2 + \lambda \|Z\|_*,
\label{surr:defn}
\end{equation}
a surrogate of the objective function $f_\lambda(Z)$. Note that
$f_\lambda(\tilde Z)=Q_\lambda(\tilde Z|\tilde Z)$ for any $\tilde Z$.
\begin{lem}\label{sec:convergence-analysis}
For every fixed $\lambda\geq 0,$ define a sequence $Z_\lambda^k$ by
\begin{eqnarray}
Z_\lambda^{k+1}&=&\arg\min_{Z} Q_\lambda(Z|Z_\lambda^k),
\label{surr}
\end{eqnarray}
with $Z_\lambda^0=0$.
The sequence $Z_\lambda^k$ satisfies
\begin{eqnarray}
f_\lambda(Z_\lambda^{k+1})\leq Q_\lambda(Z_\lambda^{k+1}|Z_\lambda^k)\leq
f_\lambda(Z_\lambda^k)
\label{wedge}
\end{eqnarray}
\end{lem}
\begin{proof}
\begin{eqnarray}
f_\lambda(Z_\lambda^k)&=& \mbox{$\frac12$} \|P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k)-Z_\lambda^k\|_F^2 + \lambda \|Z_\lambda^k\|_* \nonumber\\
&\geq& \min_Z \{\mbox{$\frac12$}\|P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k)-Z\|_F^2 + \lambda \|Z\|_* \}\nonumber\\
&=& Q_\lambda(Z_\lambda^{k+1}|Z_\lambda^k) \nonumber\\
&=& \mbox{$\frac12$} \|\{P_{\Omega}(X)-P_{\Omega}(Z_\lambda^{k+1})\}\;+\{P_{\Omega}^{\perp}(Z_\lambda^k)-P_{\Omega}^{\perp}(Z_\lambda^{k+1})\}\|_F^2 + \lambda \|Z_\lambda^{k+1}\|_* \nonumber\\
&=& \mbox{$\frac12$} \; \{ \|P_{\Omega}(X)-P_{\Omega}(Z_\lambda^{k+1})\|_F^2 +
\| P_{\Omega}^{\perp}(Z_\lambda^k)-P_{\Omega}^{\perp}(Z_\lambda^{k+1})\|_F^2 \} + \lambda \|Z_\lambda^{k+1}\|_* \nonumber\\
&\geq& \mbox{$\frac12$} \; \|P_{\Omega}(X)-P_{\Omega}(Z_\lambda^{k+1})\|_F^2 + \lambda \|Z_\lambda^{k+1}\|_* \nonumber\\
&=&Q_\lambda(Z_\lambda^{k+1}|Z_\lambda^{k+1}) \nonumber
\label{proof-monotone}
\end{eqnarray}
\end{proof}
\begin{lem}\label{lem:nonexpansive}
The nuclear norm shrinkage operator $\mathbf S_\lambda(\cdot)$ satisfies the following
for any $W_1,\;W_2$ (with matching dimensions)
\begin{eqnarray}
\|\mathbf S_\lambda(W_1)-\mathbf S_\lambda(W_2)\|_F^2 \leq \|W_1-W_2\|_F^2
\label{nonexpansive}
\end{eqnarray}
\end{lem}
\begin{proof}
We omit the proof here for the sake of brevity. The details work out
by expanding the operator $\mathbf S_\lambda(\cdot)$ in terms of the
singular value decomposition of $W_1$ and $W_2.$ Then we use trace
inequalities for the product of two matrices \cite{trace-ineq1}
where one is real symmetric, the other arbitrary. A proof of this
Lemma also appears in \cite{fpc}, though the method is different
from ours.
\end{proof}
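The inequality (\ref{nonexpansive}) is also easy to spot-check numerically (a sanity check, not a proof):

```python
import numpy as np

def soft_threshold_svd(W, lam):
    U, d, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt

# Spot-check ||S(W1) - S(W2)||_F <= ||W1 - W2||_F on random pairs
rng = np.random.default_rng(2)
for _ in range(200):
    W1 = rng.standard_normal((4, 3))
    W2 = rng.standard_normal((4, 3))
    lhs = np.linalg.norm(soft_threshold_svd(W1, 0.7) - soft_threshold_svd(W2, 0.7))
    assert lhs <= np.linalg.norm(W1 - W2) + 1e-9
```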
\begin{lem}\label{lem:stationary}
Suppose the sequence $Z_\lambda^k$ obtained from (\ref{surr})
converges to $Z_\lambda^{\infty}.$ Then $Z_\lambda^{\infty}$ is a stationary point of $f_\lambda(Z)$.
\end{lem}
\begin{proof}
The sub-gradients of the nuclear norm $\|Z\|_*$ are given by ~\cite{cai-2008}
\begin{eqnarray}
\partial \|Z\|_* =\{ UV' + W : W_{m\times n},\;U'W=0,\;WV=0,\; \|W\|_2 \leq 1 \}
\label{sub:nuc-norm}
\end{eqnarray}
where $Z=UDV'$ is the SVD of $Z$.
Since $Z_\lambda^k$ minimizes $Q_\lambda(Z|Z_\lambda^{k-1})$,
it satisfies:
\begin{eqnarray}
0 \in -(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^{k-1})-Z_\lambda^k) + \partial \|Z_\lambda^k\|_*\;\;\forall k
\label{surr:stationary}
\end{eqnarray}
Since $Z_\lambda^k \rightarrow Z_\lambda^{\infty},$
\begin{eqnarray}
(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^{k-1})-Z_\lambda^k) \longrightarrow
(P_{\Omega}(X)-P_{\Omega}(Z_\lambda^{\infty})).
\label{lim:one}
\end{eqnarray}
For every $k,$ a sub-gradient $p(Z_\lambda^k) \in \partial \|Z_\lambda^k\|_*$ corresponds to a tuple $(u_{k},v_{k},w_{k}).$
Then (passing on to a subsequence if necessary),
$(u_{k},v_{k},w_{k})\rightarrow (u_{\infty},v_{\infty},w_{\infty})$ and this limit corresponds to
$p(Z_\lambda^{\infty})\in \partial \|Z_\lambda^{\infty}\|_*$.
Hence, from (\ref{surr:stationary}) and (\ref{lim:one}), passing to the limit,
\begin{eqnarray}
\mathbf{0} \in -(P_{\Omega}(X) - P_{\Omega}(Z_\lambda^{\infty})) + \partial \|Z_\lambda^{\infty}\|_*
\label{lim:infty}
\end{eqnarray}
This proves the stationarity of the limit $Z_\lambda^{\infty}$.
\end{proof}
\begin{thm}
The sequence $Z_\lambda^k$ defined in Lemma~\ref{sec:convergence-analysis}
converges to $Z_\lambda^{\infty}$ which solves
\begin{eqnarray}
\min_Z \mbox{$\frac12$} \|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2 + \lambda \|Z\|_*
\label{fixed:pt}
\end{eqnarray}
\end{thm}
\begin{proof}
First observe that the sequence $Z_\lambda^k$ is bounded; to show that it converges it suffices to show that it has a unique accumulation point.
Observe that
\begin{eqnarray}
\|Z_\lambda^{k+1}-Z_\lambda^k\|_F^2&=&
\|\mathbf S_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k))-\mathbf S_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^{k-1}))\|_F^2 \nonumber\\
(\mbox{by Lemma~\ref{lem:nonexpansive}})&\leq&\|\left(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k)\right)-\left(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^{k-1})\right)\|_F^2 \nonumber \\
&=&\|P_{\Omega}^{\perp}(Z_\lambda^k-Z_\lambda^{k-1})\|_F^2 \nonumber\\
&\leq&\|Z_\lambda^k-Z_\lambda^{k-1}\|_F^2 \label{contrac}
\end{eqnarray}
Due to boundedness, every infinite subsequence of $Z_\lambda^k$ has a
further subsequence that converges. If the sequence $Z_\lambda^k$ has
two distinct limit points then for infinitely many $k'\geq 0,$
$\|Z_\lambda^{k'}-Z_\lambda^{k'-1}\|_F\geq \epsilon,$ for some
$\epsilon>0$. Using (\ref{contrac}) this contradicts the convergence
of any subsequence of $Z_\lambda^k.$ Hence the sequence $Z_\lambda^k$
converges. Using Lemma~\ref{lem:stationary}, the limit
$Z_\lambda^{\infty}$ is a stationary point of $f_\lambda(Z)$ and hence
its minimizer.
\end{proof}
\section{From soft to hard-thresholding} \label{sec:soft-hard} The
nuclear norm behaves like an $\ell_1$ norm, and can be viewed as a soft
approximation of the $\ell_0$ norm or rank of a matrix. In penalized
linear regression for example, the $\ell_1$ norm or LASSO \cite{Ti96}
is widely used as a convex surrogate for the $\ell_0$ penalty or
best-subset selection. The LASSO performs very well on a wide variety
of situations in producing a parsimonious model with good prediction
error. However, if the underlying model is very sparse, then the LASSO
with its uniform shrinkage can overestimate the number of non-zero
coefficients. In such situations concave penalized regressions are
gaining popularity as a surrogate to $\ell_0$. By analogy for matrices, it makes
sense to go beyond the nuclear norm minimization problem to more
aggressive penalties bridging the gap between $\ell_1$ and $\ell_0$.
We propose minimizing
\begin{eqnarray}
f_{p,\lambda}(Z)&=& \mbox{$\frac12$} \|P_{\Omega}(Z)-P_{\Omega}(X)\|_F^2 + \lambda \sum_j p(\lambda_j(Z);\gamma)
\label{crit:relax-concave}
\end{eqnarray}
where $p(|t|;\gamma)$ is concave in $|t|.$ The parameter $\gamma \in
[\gamma_{\inf},\gamma_{\sup}]$ controls the degree of concavity, with
$p(|t|;\gamma_{\inf})=|t|$ ($\ell_1$ penalty), on one end and
$p(|t|;\gamma_{\sup})=|t|^0$ ($\ell_0$ penalty) on the other. In
particular for the $\ell_0$ penalty denote $f_{p,\lambda}(Z)$ by
$f_{H,\lambda}(Z)$ for ``hard'' thresholding. See
\cite{FJ08,Fan01,Zhang07} for examples of such penalties.
Criterion (\ref{crit:relax-concave}) is no longer convex and hence
becomes more difficult. It can be shown that Algorithm~\ref{algo1}
can be modified in a suitable fashion for the penalty
$p(\cdot;\gamma).$ This algorithm also has guaranteed convergence
properties.
The details of these arguments and statistical properties will be
studied in a longer version of this paper. As a concrete example, we
present here some features of the $\ell_0$ norm regularization on
singular values.
The version of (\ref{nuc-norm-basic}) for the $\ell_0$ norm is
\begin{eqnarray}
\min_Z \mbox{$\frac12$} \|Z-W\|_F^2 + \lambda \|Z\|_0.
\label{l0-norm-basic}
\end{eqnarray}
The solution is given by a reduced-rank SVD of $W$; for every
$\lambda$ there is a corresponding number $q=q(\lambda)$ of singular
values to be retained in the SVD. As in (\ref{svt}), the
thresholding operator resulting from (\ref{l0-norm-basic}) is
\begin{eqnarray}
\mathbf S^H_\lambda(W) = U D_q V'\quad \mathrm{where} \quad D_{q}=\mathrm{diag}\left(d_1,\ldots,d_q,0,\ldots,0\right)
\label{svt-hard}
\end{eqnarray}
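In code, the hard-thresholding operator is a plain rank-$q$ truncated SVD (a sketch; we parametrize by $q=q(\lambda)$ directly):

```python
import numpy as np

def hard_threshold_svd(W, q):
    """S^H: keep the q largest singular values of W, zero out the rest
    (a rank-q truncated SVD). numpy returns d sorted in decreasing order."""
    U, d, Vt = np.linalg.svd(W, full_matrices=False)
    d_trunc = np.where(np.arange(d.size) < q, d, 0.0)
    return U @ np.diag(d_trunc) @ Vt
```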
Similar to \textbf{Soft-Impute} (Algorithm \ref{algo1}),
the algorithm \textbf{Hard-Impute} for the $\ell_0$ penalty is given by Algorithm \ref{algo2}.
\begin{algorithm}
\caption{\textbf{Hard-Impute}}\label{algo2}
\begin{enumerate}
\item Create a decreasing grid
$\Lambda$ of values $\lambda_1>\ldots>\lambda_K$. Initialize
$\tilde Z_{\lambda_k},\;k=1,\ldots,K$ (see Section~\ref{pp-and-init}).
\item For every fixed $\lambda=\lambda_1,\;\lambda_2,\ldots\in\Lambda$
iterate till convergence:
\begin{enumerate}
\item Initialize $Z^{\mathrm{old}}\leftarrow \tilde Z_\lambda$.
\item\label{item:1} Compute
$ Z^{\mathrm{new}}\leftarrow\mathbf S_\lambda^H(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z^{\mathrm{old}}))$
\item If
$\quad\frac{|f_\lambda(Z^{\mathrm{new}})-f_\lambda(Z^{\mathrm{old}})|}{|f_\lambda(Z^{\mathrm{old}})|}< \epsilon,\quad$
go to step~\ref{item:2}.
\item Assign $Z^{\mathrm{old}}\leftarrow Z^{\mathrm{new}}$ and go to step~\ref{item:1}.
\item\label{item:2} Assign $\hat Z_{H,\lambda}\leftarrow Z^{\mathrm{new}}$.
\end{enumerate}
\item Output the sequence of solutions $\hat Z_{H,\lambda_1},\ldots,\hat Z_{H,\lambda_K}.$
\end{enumerate}
\end{algorithm}
\subsection{Post-processing and Initialization} \label{pp-and-init}
Because the $\ell_1$ norm regularizes by shrinking the singular
values, the number of singular values retained (through
cross-validation, say) may exceed the actual rank of the matrix. In
such cases it is reasonable to {\em undo} the shrinkage of the chosen
models, which might permit a lower-rank solution.
If $Z_\lambda$ is the solution to~(\ref{crit:dual}), then its
\emph{post-processed} version $Z^u_\lambda$,
obtained by ``unshrinking'' the singular values of the matrix $Z_\lambda$,
is given by
\begin{eqnarray}
\alpha&=& \argmin_{\alpha_i\geq 0,\;i=1,\ldots,r_\lambda} \quad \|P_{\Omega}(X)-\sum_{i=1}^{r_\lambda} \alpha_i P_{\Omega}(u_iv_i')\|^2 \label{post-process}\\
Z^u_\lambda&=& UD_\alpha V',\nonumber
\end{eqnarray}
where $D_\alpha=\mbox{diag}(\alpha_1,\ldots,\alpha_{r_\lambda})$.
Here $r_\lambda$ is the rank of $Z_\lambda$ and $Z_\lambda=UD_\lambda
V'$ is its SVD. The estimation in (\ref{post-process}) can be done via ordinary
least squares, which is feasible because of the sparsity of
$P_{\Omega}(u_iv_i')$ and that $r_\lambda$ is small.\footnote{Observe that
the $P_{\Omega}(u_iv_i'),\; i=1,\ldots,r_\lambda$ are not orthogonal, though
the $u_iv_i'$ are.}
If the least squares solutions
$\boldsymbol\alpha$ do not meet the positivity constraints, then the
negative sign can be absorbed into the corresponding singular vector.
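A least-squares sketch of the unshrinking step (\ref{post-process}) follows (illustrative; positivity is not enforced here, relying on the sign-absorption remark above):

```python
import numpy as np

def unshrink(X, mask, U, V):
    """Refit the singular values: regress the observed entries of X on
    the masked outer products P_Omega(u_i v_i') by ordinary least squares."""
    r = U.shape[1]
    obs = np.flatnonzero(mask.ravel())
    # Design matrix: column i is P_Omega(u_i v_i') restricted to observed entries
    A = np.column_stack(
        [np.outer(U[:, i], V[:, i]).ravel()[obs] for i in range(r)]
    )
    alpha, *_ = np.linalg.lstsq(A, X.ravel()[obs], rcond=None)
    return U @ np.diag(alpha) @ V.T
```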
In many simulated examples we have observed that this post-processing
step gives a good estimate of the underlying true rank of the matrix
(based on prediction error). Since fixed points of
Algorithm~\ref{algo2} correspond to local minima of the
function~(\ref{crit:relax-concave}), well-chosen warm starts $\tilde
Z_\lambda$ are helpful. A reasonable prescription for warm starts is
the nuclear norm solution via (\textbf{Soft-Impute}), or the post
processed version~(\ref{post-process}). The latter appears to
significantly speed up convergence for \textbf{Hard-Impute}.
\subsection{Computation} \label{comp} The computationally demanding
part of Algorithms~\ref{algo1} and \ref{algo2} is in
$\mathbf S_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k))$ or
$\mathbf S^{H}_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_{H,\lambda}^k))$.
These require calculating a low-rank SVD of the matrices of
interest, since the underlying model assumption is that $\mathrm{rank}(Z) \ll
\min\{m,n\}$. In Algorithm~\ref{algo1}, for fixed $\lambda,$ the entire sequence of matrices
$Z_\lambda^k$ has explicit low-rank representations of the form $U_k
D_k V'_k$ corresponding to
$\mathbf S_\lambda(P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^{k-1}))$.
In addition, observe that $P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k)$ can be rewritten as
\begin{eqnarray}
P_{\Omega}(X)+P_{\Omega}^{\perp}(Z_\lambda^k) = \left\{P_{\Omega}(X)-P_{\Omega}(Z_\lambda^k)\right\}\; (\mbox{Sparse}) \quad +
\quad Z_\lambda^k \; (\mbox{Low Rank})
\label{sparse-low-rank-upd}
\end{eqnarray}
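The structured multiplication enabled by (\ref{sparse-low-rank-upd}) can be sketched as follows (COO-style triplets for the sparse part; \texttt{d} holds the singular values of the low-rank part; names are illustrative):

```python
import numpy as np

def structured_matvec(rows, cols, vals, U, d, V, b):
    """Compute W b for W = Sparse + U diag(d) V' without forming W:
    O((m+n)r) for the low-rank part, O(|Omega|) for the sparse part."""
    out = U @ (d * (V.T @ b))              # low-rank part
    np.add.at(out, rows, vals * b[cols])   # sparse part (handles repeated rows)
    return out
```

The transposed product $W'b$ has the same cost by symmetry, which is exactly what Lanczos bidiagonalization needs.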
In the numerical linear algebra literature, there are very efficient
direct matrix factorization methods for calculating the SVD of
matrices of moderate size (at most a few thousand). When the matrix is
sparse, larger problems can be solved but the computational cost
depends heavily upon the sparsity structure of the matrix. In general
however, for large matrices one has to resort to indirect iterative
methods for calculating the leading singular vectors/values of a
matrix. There is a large body of research in numerical linear algebra on
developing sophisticated algorithms for this purpose. In this paper
we will use the PROPACK algorithm \cite{propack,larsen98:_lancz}
because of its low storage requirements, effective flop count and its
well documented MATLAB version. The algorithm for calculating the
truncated SVD for a matrix $W$ (say), becomes
efficient if multiplication operations $Wb_1$ and $W'b_2$
(with $b_1 \in \Re^n,\;b_2 \in \Re^m$) can be done with minimal cost.
Our algorithms \textbf{Soft-Impute} and \textbf{Hard-Impute} both
require repeated computation of a truncated SVD for a matrix
$W$ with structure as in (\ref{sparse-low-rank-upd}).
Note that in (\ref{sparse-low-rank-upd}) the term
$P_{\Omega}(Z_\lambda^k)$ can be computed in
$O(|\Omega|r)$ flops using only the required outer products.
The cost of computing the truncated SVD will depend upon the cost in
the operations $Wb_1$ and $W'b_2$ (which are equal). For the sparse
part these multiplications cost $O(|\Omega|)$. Although it costs
$O(|\Omega|r)$ to create the matrix $P_{\Omega}(Z_\lambda^k)$, this
is used for each of the $r$ such multiplications (which also cost
$O(|\Omega|r)$), so we need not include that cost here.
The low-rank part costs $O((m+n)r)$ for the multiplication
by $b_1$. Hence the total cost is $O(|\Omega|)+O((m+n)r)$ per multiplication.
For the reconstruction problem to be theoretically meaningful in the
sense of ~\cite{candes-2009}, we require that $|\Omega|\approx
nr\,\mathrm{poly}(\log n).$ Hence introducing the low-rank
part does not add any further complexity in the multiplication by $W$
and $W'$. So the dominant cost in calculating the truncated SVD in
our algorithm is $O(|\Omega|)$. The \textbf{SVT} algorithm
\cite{cai-2008} for exact matrix completion (\ref{crit:candes})
involves calculating the SVD of a sparse matrix with cost
$O(|\Omega|).$ This implies that the computational cost
of our algorithm and that of
~\cite{cai-2008} is the same. Since the true rank of the matrix
$r\ll \min\{m,n\},$ the computational cost of evaluating the
truncated SVD (with rank $\approx r$) is linear in matrix dimensions.
This justifies the large-scale computational feasibility of our
algorithm.
The PROPACK package does not allow one to request (and hence compute)
only the singular values larger than a threshold $\lambda$ --- one
has to specify the number in advance. So once all the computed
singular values fall above the current threshold $\lambda$, our
algorithm increases the number to be computed until the smallest is
smaller than $\lambda$. In large scale problems, we put an absolute
limit on the maximum number.
\section{Simulation Studies} \label{sec:simu} In this section we
study the training and test errors achieved by the matrices estimated
by our proposed algorithms and by those of \cite{cai-2008,monti-09}. The
Reconstruction algorithm (\textbf{Rcon}) described in \cite{monti-09}
considers criterion~(\ref{crit:one}) (in presence of noise).
For every fixed rank $r$ it uses a bi-convex algorithm on a
Grassmannian manifold for computing a rank-$r$ approximation $USV'$
(not the SVD). It uses a suitable starting point obtained
by performing a sparse SVD on a \emph{clean} version of the observed
matrix $P_{\Omega}(X).$ To summarize, we look at the performance
of the following methods:
\begin{itemize}
\item (a) \textbf{Soft-Impute} (algorithm \ref{algo1}); (b) Post-processing on the output of Algorithm \ref{algo1},
(c) \textbf{Hard-Impute} (Algorithm \ref{algo2}) starting with the output of (b).
\item \textbf{SVT} algorithm by \cite{cai-2008}
\item \textbf{Rcon} reconstruction algorithm by \cite{monti-09}
\end{itemize}
In all our simulation studies we took the underlying model as
$Z_{m\times n}=U_{m \times r} V'_{r \times n} + \mathrm{noise};$ where
$U$ and $V$ are random matrices with standard normal Gaussian entries,
and $\mathrm{noise}$ is iid Gaussian. $\Omega$ is uniformly random
over the indices of the matrix with $p\%$ of the entries missing.
These are the models under which the coherence conditions hold true
for the matrix completion problem to be meaningful as pointed out in
~\cite{candes-2009,monti-09}. The signal to noise ratio for the model
and the test-error (standardized) are defined as
\begin{eqnarray}
\mathrm{SNR}=\sqrt{\frac{\mbox{var}(UV')}{\mbox{var}(\mathrm{noise})}};\quad
\mathrm{testerror}=\frac{\|P_{\Omega}^{\perp}(UV'-\hat Z)\|_F^2}{\|P_{\Omega}^{\perp}(UV')\|_F^2}
\end{eqnarray}
In Figure~\ref{fig:eg-2}, results corresponding to the training and
test errors are shown for all algorithms mentioned above ---
nuclear norm (left two panels) and rank (right two panels) --- in three
problem instances. Since \textbf{Rcon} only uses rank, it is excluded
from the left panels.
In all examples
$(m,n)=(100,100).$ SNR, true rank and percentage of missing
entries are indicated in the figures. There is a unique correspondence
between $\lambda$ and nuclear norm. The plots vs the rank indicate
how effective the nuclear norm is as a rank approximation --- that is
whether it recovers the true rank while minimizing prediction
error. We summarize our findings in the caption of the figure.
In addition we performed some large scale simulations in
Table~\ref{tab:one} for our algorithm on different problem sizes. The
problem dimensions, SNR, number of iterations till convergence and
time in seconds are reported. All computations are done in MATLAB and
the MATLAB version of PROPACK is used.
\subsubsection*{Acknowledgements}
\label{sec:acknowledgements}
We thank Emmanuel Candes, Andrea Montanari and Steven Boyd for helpful discussions. Trevor Hastie was partially supported by grant DMS-0505676 from the National
Science Foundation, and grant 2R01 CA 72028-07 from the National Institutes of
Health.
\begin{figure}[htp]
\centering
Type a \hspace{3mm} $50\%$ missing entries with SNR=1, true rank =10\\
\begin{psfrags}
\psfrag{SNR1}[][b]{\small{Test error}}
\psfrag{TrainMiss0.5}[][b]{\small{Training error}}
\includegraphics[width=3.2in,height=2in]{mpaper_111.ps}
\includegraphics[width=3.2in,height=2in]{mpaper_211.ps}
\end{psfrags}\\ \vspace{.5cm}
Type b \hspace{3mm} $50\%$ missing entries with SNR=1, true rank =6\\
\begin{psfrags}
\psfrag{SNR1}[][b]{\small{Test error}}
\psfrag{TrainMiss0.5}[][b]{\small{Training error}}
\includegraphics[width=3.2in,height=2in]{mpaper_18.ps}
\includegraphics[width=3.2in,height=2in]{mpaper_28.ps}
\end{psfrags}\\\vspace{.5cm}
Type c \hspace{3mm} $80\%$ missing entries with SNR=10, true rank =5\\
\begin{psfrags}
\psfrag{SNR10}[][b]{\small{Test error}}
\psfrag{TrainMiss0.8}[][b]{\small{Training error}}
\includegraphics[width=3.2in,height=2in]{mpaper_112.ps}
\includegraphics[width=3.2in,height=2in]{mpaper_212.ps}
\end{psfrags}
\caption{L1: solution for \textbf{Soft-Impute}; L1-U: Post
processing after \textbf{Soft-Impute}; L1-L0
\textbf{Hard-Impute} applied to L1-U; C : \textbf{SVT}
algorithm; M: \textbf{Rcon} algorithm. \textbf{Soft-Impute}
performs well in the presence of noise (top and middle
panel). When the noise is low, \textbf{Hard-Impute} can
improve its performance. The post-processed version
tends to get the correct rank in many situations as in Types b,c.
In Type b, the post-processed version does better than the rest in prediction error.
In all the situations \textbf{SVT} algorithm does very poorly in
prediction error, confirming our claim that (\ref{crit:candes}) causes
overfitting. \textbf{Rcon} predicts poorly as well apart from Type c, where it gets
better error than \textbf{Soft-Impute}. However \textbf{Hard-Impute} and
\textbf{Rcon} have the same performance there.
}
\label{fig:eg-2}
\end{figure}
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c}
$(m,n)$ & $|\Omega|$ & true rank ($r$) & SNR& effective rank ($\hat r$) & \# Iters & time(s)\\\hline
$(3 \times 10^4,10^4)$& $10^4$ & $15$ & $1$ & $(13,47,80)$ & $(3,3,3)$ & $(41.9,124.7,305.8)$\\
$(5\times 10^4,5 \times 10^4)$& $10^4$ & $15$ & $1$ & $8$ & $80$ & $237$\\
$(10^5,10^5)$ & $10^4$ & $15$ & $10$ & $(5,14,32,62)$ & $(3,3,3,3)$ & $(37,74.5,199.8,653)$\\
$(10^5,10^5)$ & $10^5$ & $15$ & $10$ & $(18,80)$ & $(3,3)$ & $(202, 1840)$\\
$(5\times10^5,5\times10^5)$ & $10^4$ & $15$ & $10$ & $11$ & $ 3$ & $628.14$ \\
$(5\times10^5,5\times10^5)$ & $10^5$ & $15$ & $1$ & $(3,11,52)$ & $(3,3,3)$ & $(341.9,823.4,4810.75)$ \\\hline
\end{tabular}
\caption{Performance of \textbf{Soft-Impute} on different problem instances.} \label{tab:one}
\end{table}
\newpage
\bibliographystyle{alpha}
|
1,108,101,565,220 | arxiv | \subsection{\@startsection{subsection}{3
\usepackage[inner=1in, outer=1in, bottom =2.9cm, top =2.9cm]{geometry}
\renewcommand{\baselinestretch}{1.057}
\pretolerance=10000
\tolerance=2000
\emergencystretch=10pt
\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}
\makeatletter
\newcommand*\Cdot{\mathpalette\Cdot@{.5}}
\newcommand*\Cdot@[2]{\mathbin{\vcenter{\hbox{\scalebox{#2}{$\m@th#1\circled{$1$}$}}}}}
\makeatother
\usepackage{amsmath, amssymb, amsfonts, amsthm}
\usepackage{graphicx, overpic}
\usepackage{mathrsfs}
\usepackage{mathtools}
\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\usepackage{enumerate}
\usepackage{enumitem}
\usepackage{shuffle}
\usepackage{lipsum}
\usepackage{color}
\usepackage{phoenician}
\usepackage{xkeyval}
\usepackage{tikz}
\usepackage{pst-node}
\usepackage{tikz-cd}
\usepackage[OT2,T1]{fontenc}
\usepackage{thm-restate}
\usepackage{float}
\usepackage{bold-extra}
\usepackage{microtype}
\usepackage[all,cmtip]{xy}
\usepackage{etoolbox}
\usepackage[strict]{changepage}
\usepackage{esvect}
\usepackage{wrapfig}
\usepackage{caption}
\usepackage{cite}
\usepackage{eucal}
\usepackage{mathrsfs}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\usepackage{bbding}
\usepackage{array}
\usepackage{pict2e}
\usepackage{makecell}
\usepackage{stmaryrd}
\usepackage{lmodern}
\usepackage{amsbsy}
\usepackage{esvect}
\usepackage{bbding}
\usepackage{thm-restate}
\usepackage[toc,page, titletoc,title]{appendix}
\graphicspath{ {figures/} }
\usepackage{url}
\usepackage[pagebackref]{hyperref}
\newtagform{red}{\color{blue}(}{)}
\colorlet{linkequation}{blue}
\newcommand*{\refeqq}[1]
\begingroup
\hypersetup{
linkcolor=linkequation,
linkbordercolor=linkequation,
\ref{#1
\endgroup
}
\makeatletter
\newcommand{\colim@}[2]{%
\vtop{\m@th\ialign{##\cr
\hfil$#1\operator@font colim$\hfil\cr
\noalign{\nointerlineskip\kern1.5\ex@}#2\cr
\noalign{\nointerlineskip\kern-\ex@}\cr}}%
}
\newcommand{\colim}{%
\mathop{\mathpalette\colim@{\rightarrowfill@\scriptscriptstyle}}\nmlimits@
}
\renewcommand{\varprojlim}{%
\mathop{\mathpalette\varlim@{\leftarrowfill@\scriptscriptstyle}}\nmlimits@
}
\renewcommand{\varinjlim}{%
\mathop{\mathpalette\varlim@{\rightarrowfill@\scriptscriptstyle}}\nmlimits@
}
\makeatother
\makeatletter
\providecommand*{\twoheadrightarrowfill@}{%
\arrowfill@\relbar\relbar\twoheadrightarrow
}
\providecommand*{\twoheadleftarrowfill@}{%
\arrowfill@\twoheadleftarrow\relbar\relbar
}
\providecommand*{\xtwoheadrightarrow}[2][]{%
\ext@arrow 0579\twoheadrightarrowfill@{#1}{#2}%
}
\providecommand*{\xtwoheadleftarrow}[2][]{%
\ext@arrow 5097\twoheadleftarrowfill@{#1}{#2}%
}
\makeatother
\makeatletter
\newcommand*{\relrelbarsep}{.386ex}
\newcommand*{\relrelbar}{%
\mathrel{%
\mathpalette\@relrelbar\relrelbarsep
}%
}
\newcommand*{\@relrelbar}[2]{%
\raise#2\hbox to 0pt{$\m@th#1\relbar$\hss}%
\lower#2\hbox{$\m@th#1\relbar$}%
}
\providecommand*{\rightrightarrowsfill@}{%
\arrowfill@\relrelbar\relrelbar\rightrightarrows
}
\providecommand*{\leftleftarrowsfill@}{%
\arrowfill@\leftleftarrows\relrelbar\relrelbar
}
\providecommand*{\xrightrightarrows}[2][]{%
\ext@arrow 0359\rightrightarrowsfill@{#1}{#2}%
}
\providecommand*{\xleftleftarrows}[2][]{%
\ext@arrow 3095\leftleftarrowsfill@{#1}{#2}%
}
\makeatother
\DeclareSymbolFont{cyrletters}{OT2}{wncyr}{m}{n}
\DeclareMathSymbol{\Sh}{\mathalpha}{cyrletters}{"58}
\usetikzlibrary{tikzmark,decorations.pathreplacing}
\usetikzlibrary{shapes,shadows,arrows}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{positioning,calc}
\tikzset{near start abs/.style={xshift=1cm}}
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\DeclareSymbolFont{symbolsC}{U}{txsyc}{m}{n}
\DeclareMathSymbol{\Searrow}{\mathrel}{symbolsC}{117}
\hypersetup{
colorlinks=true,
linkcolor=black,
filecolor=black,
urlcolor=blue,
citecolor= red,
}
\urlstyle{same}
\DeclareSymbolFont{extraup}{U}{zavm}{m}{n}
\DeclareMathSymbol{\varheart}{\mathalpha}{extraup}{86}
\DeclareMathSymbol{\vardiamond}{\mathalpha}{extraup}{87}
\DeclareMathSymbol{\varclub}{\mathalpha}{extraup}{84}
\DeclareMathSymbol{\varspade}{\mathalpha}{extraup}{85}
\newcommand{\sspade}{{\footnotesize\text{$\spadesuit$}}}
\newcommand{\sheart}{{\footnotesize\text{$\varheart$}}}
\newcommand{\bigslant}[2]{{\raisebox{.2em}{$#1$}\left/\raisebox{-.2em}{$#2$}\right.}}
\theoremstyle{definition}
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}{Corollary}[thm]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{conj}[thm]{Conjecture}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{ex}{Example}[section]
\newtheorem{exs}[ex]{Examples}
\newtheorem{remark}{Remark}[section]
\newtheorem{obs}{Observation}[section]
\newtheorem{claim}{Claim}[section]
\newcommand{\gG}{\Gamma}
\newcommand{\gL}{\Lambda}
\newcommand{\gl}{\lambda}
\newcommand{\gs}{\sigma}
\newcommand{\gu}{\upsilon}
\newcommand{\ga}{\alpha}
\newcommand{\gd}{\delta}
\newcommand{\gb}{\beta}
\newcommand{\gam}{\gamma}
\newcommand{\gep}{\varepsilon}
\newcommand{\gp}{\pi}
\newcommand{\gr}{\rho}
\newcommand{\gf}{\varphi}
\newcommand{\go}{\omega}
\newcommand{\sfVec}{\textsf{Vec}}
\newcommand{\sfS}{\textsf{S}}
\newcommand{\sfSet}{\textsf{Set}}
\newcommand{\tB}{\mathtt{B}}
\newcommand{\tH}{\mathtt{H}}
\newcommand{\tQ}{\mathtt{Q}}
\newcommand{\tK}{\mathtt{K}}
\newcommand{\tM}{\mathtt{M}}
\newcommand{\tE}{\mathtt{E}}
\newcommand{\tW}{\mathtt{W}}
\newcommand{\tX}{\mathtt{X}}
\newcommand{\tY}{\mathtt{Y}}
\newcommand{\tF}{\mathtt{F}}
\newcommand{\tC}{\mathtt{C}}
\newcommand{\tL}{\mathtt{L}}
\newcommand{\tP}{\mathtt{P}}
\newcommand{\tA}{\mathtt{A}}
\newcommand{\tG}{\mathtt{\Gamma}}
\newcommand{\tb}{\mathtt{b}}
\newcommand{\thh}{\mathtt{h}}
\newcommand{\tq}{\mathtt{q}}
\newcommand{\tk}{\mathtt{k}}
\newcommand{\tm}{\mathtt{m}}
\newcommand{\te}{\mathtt{e}}
\newcommand{\tw}{\mathtt{w}}
\newcommand{\tf}{\mathtt{f}}
\newcommand{\tc}{\mathtt{c}}
\newcommand{\tl}{\mathtt{l}}
\newcommand{\tp}{\mathtt{p}}
\newcommand{\ta}{\mathtt{a}}
\newcommand{\tbM}{\text{{\ttfamily \fontseries{b}\selectfont \large M}}}
\newcommand{\tbC}{\text{{\ttfamily \fontseries{b}\selectfont \large C}}}
\newcommand{\bY}{\mathbb{Y}}
\newcommand{\bR}{\mathbb{R}}
\newcommand{\bN}{\mathbb{N}}
\newcommand{\bC}{\mathbb{C}}
\newcommand{\bK}{\mathbb{K}}
\newcommand{\bB}{\mathbb{B}}
\newcommand{\bZ}{\mathbb{Z}}
\newcommand{\bH}{\mathbb{H}}
\newcommand{\bF}{\mathbb{F}}
\newcommand{\bP}{\mathbb{P}}
\newcommand{\bQ}{\mathbb{Q}}
\newcommand{\bS}{\mathbb{S}}
\newcommand{\bX}{\mathbb{X}}
\newcommand{\cC}{\CMcal{C}}
\newcommand{\cD}{\CMcal{D}}
\newcommand{\cO}{\CMcal{O}}
\newcommand{\cP}{\CMcal{P}}
\newcommand{\cM}{\CMcal{M}}
\newcommand{\cL}{\CMcal{L}}
\newcommand{\cF}{\CMcal{F}}
\newcommand{\cR}{\CMcal{R}}
\newcommand{\cG}{\CMcal{G}}
\newcommand{\cW}{\CMcal{W}}
\newcommand{\cI}{\CMcal{I}}
\newcommand{\cH}{\CMcal{H}}
\newcommand{\cB}{\CMcal{B}}
\newcommand{\cJ}{\CMcal{J}}
\newcommand{\cS}{\CMcal{S}}
\newcommand{\cA}{\CMcal{A}}
\newcommand{\cT}{\CMcal{T}}
\newcommand{\cK}{\CMcal{K}}
\newcommand{\cX}{\CMcal{X}}
\newcommand{\cQ}{\CMcal{Q}}
\newcommand{\cV}{\CMcal{V}}
\newcommand{\cU}{\CMcal{U}}
\newcommand{\cY}{\CMcal{Y}}
\newcommand{\cZ}{\CMcal{Z}}
\newcommand{\CAT}{\operatorname{CAT}}
\newcommand{\iso}{\CMcal{\sim}}
\newcommand{\ad}{\operatorname{ad}}
\newcommand{\bs}{\backslash}
\newcommand{\link}{\operatorname{link}}
\newcommand{\dett}{\operatorname{det}}
\newcommand{\image}{\operatorname{image}}
\newcommand{\op}{\operatorname{op}}
\newcommand{\Inn}{\operatorname{Inn}}
\newcommand{\cov}{\operatorname{cov}}
\newcommand{\Out}{\operatorname{Out}}
\newcommand{\id}{\operatorname{id}}
\newcommand{\Hom}{\operatorname{Hom}}
\newcommand{\inv}{\operatorname{inv}}
\newcommand{\spec}{\operatorname{Spec}}
\newcommand{\PSL}{\operatorname{PSL}}
\newcommand{\proj}{\operatorname{proj}}
\newcommand{\sub}{\operatorname{sub}}
\newcommand{\FG}{\operatorname{FG}}
\newcommand{\card}{\operatorname{card}}
\newcommand{\Aut}{\operatorname{Aut}}
\newcommand{\PG}{\operatorname{PG}}
\newcommand{\Cir}{\operatorname{Cyc}}
\newcommand{\maximum}{\operatorname{max}}
\newcommand{\Sym}{\operatorname{Sym}}
\newcommand{\val}{\mathrm{val}}
\newcommand{\DAG}{\textbf{DAG} }
\newcommand{\bE}{\textbf{E}}
\newcommand{\bp}{\textbf{p}}
\newcommand{\bO}{\textbf{O}}
\newcommand{\bA}{\textbf{a}}
\newcommand{\sS}{\textsf{S}}
\newcommand{\bq}{\textbf{q}}
\newcommand{\Shd}{ \textbf{Shd}}
\newcommand{\Lie}{ \textbf{Lie} }
\newcommand{\Zie}{\textbf{Zie}}
\newcommand{\Com}{\textbf{Com}}
\newcommand{\Ass}{\textbf{Ass}}
\newcommand{\Lay}{\textbf{Lay}}
\newcommand{\LLay}{\textbf{LLay}}
\newcommand{\bL}{\textbf{L}}
\newcommand{\Gam}{\boldsymbol{\Gamma}}
\newcommand{\Sig}{\boldsymbol{\Sigma}}
\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}
\DeclareFontShape{U}{mathx}{m}{n}{
<5> <6> <7> <8> <9> <10>
<10.95> <12> <14.4> <17.28> <20.74> <24.88>
mathx10
}{}
\DeclareSymbolFont{mathx}{U}{mathx}{m}{n}
\DeclareFontSubstitution{U}{mathx}{m}{n}
\DeclareMathAccent{\widecheck}{0}{mathx}{"71}
\DeclareMathAccent{\wideparen}{0}{mathx}{"75}
\def\cs#1{\texttt{\char`\\#1}}
\newcommand{\lie}{\hat{\boldsymbol{\Gamma}}}
\newcommand{\adlie}{\check{\boldsymbol{\Gamma}}}
\newcommand{\ten}{\hat{\boldsymbol{\Sigma}}}
\newcommand{\adten}{\check{\boldsymbol{\Sigma}}}
\newcommand{\shuff}{\hat{\boldsymbol{\Sigma}}^\ast}
\newcommand{\adshuff}{\check{\boldsymbol{\Sigma}}^\ast}
\newcommand{\colie}{\hat{\boldsymbol{\Gamma}}^\ast}
\newcommand{\adcolie}{\check{\boldsymbol{\Gamma}}^\ast}
\newcommand{\Br}{\textbf{Br}}
\newcommand{\adBr}{\textbf{Br}^\vee}
\newcommand{\res}{\parallel}
\newcommand{\Set}{\textsf{Set}^\times}
\newcommand{\pl}[1]{[\check{\mathtt{M}}_{#1}]}
\newcommand{\altpl}[1]{[\check{\mathtt{L}}_{#1}]}
\newcommand{\Sint}{ \textsf{\emph{S}}_{\text{int}} }
\newcommand{\formj}{\emph{\texttt{j}}\, }
\newcommand{\formg}{\emph{\texttt{g}}}
\newcommand{\bT}{\mathbb{T}}
\newcommand{\fp}{\mathfrak{p}}
\newcommand{\ssS}{\emph{\textsf{S}}}
\newcommand{\ssA}{\emph{\textsf{A}}}
\makeatletter
\newcommand*\bigcdot{\mathpalette\bigcdot@{.5}}
\newcommand*\bigcdot@[2]{\mathbin{\vcenter{\hbox{\scalebox{#2}{$\m@th#1\bullet$}}}}}
\makeatother
\newcommand{\comma}{\text{,}}
\makeatletter
\newcommand{\adjunction}{\@ifstar\named@adjunction\normal@adjunction}
\newcommand{\normal@adjunction}[4]{%
#1\colon #2%
\mathrel{\vcenter{%
\offinterlineskip\m@th
\ialign{%
\hfil$##$\hfil\cr
\longrightharpoonup\cr
\noalign{\kern-.3ex}
\smallbot\cr
\longleftharpoondown\cr
}%
}}%
#3 \noloc #4%
}
\newcommand{\named@adjunction}[4]{%
#2%
\mathrel{\vcenter{%
\offinterlineskip\m@th
\ialign{%
\hfil$##$\hfil\cr
\scriptstyle#1\cr
\noalign{\kern.1ex}
\longrightharpoonup\cr
\noalign{\kern-.3ex}
\smallbot\cr
\longleftharpoondown\cr
\scriptstyle#4\cr
}%
}}%
#3%
}
\newcommand{\longrightharpoonup}{\relbar\joinrel\rightharpoonup}
\newcommand{\longleftharpoondown}{\leftharpoondown\joinrel\relbar}
\newcommand\noloc{%
\nobreak
\mspace{6mu plus 1mu}%
{:}%
\nonscript\mkern-\thinmuskip
\mathpunct{}%
\mspace{2mu}%
}
\newcommand{\smallbot}{%
\begingroup\setlength\unitlength{.15em}%
\begin{picture}(1,1)
\roundcap
\polyline(0,0)(1,0)
\polyline(0.5,0)(0.5,1)
\end{picture}%
\endgroup
}
\makeatother
\newcommand{\leftrarrows}{\mathrel{\raise.75ex\hbox{\oalign{%
$\scriptstyle\leftarrow$\cr
\vrule width0pt height.5ex$\hfil\scriptstyle\relbar$\cr}}}}
\newcommand{\lrightarrows}{\mathrel{\raise.75ex\hbox{\oalign{%
$\scriptstyle\relbar$\hfil\cr
$\scriptstyle\vrule width0pt height.5ex\smash\rightarrow$\cr}}}}
\newcommand{\Rrelbar}{\mathrel{\raise.75ex\hbox{\oalign{%
$\scriptstyle\relbar$\cr
\vrule width0pt height.5ex$\scriptstyle\relbar$}}}}
\newcommand{\longleftrightarrows}{\leftrarrows\joinrel\Rrelbar\joinrel\lrightarrows}
\makeatletter
\def\leftrightarrowsfill@{\arrowfill@\leftrarrows\Rrelbar\lrightarrows}
\newcommand{\xleftrightarrows}[2][]{\ext@arrow 3399\leftrightarrowsfill@{#1}{#2}}
\makeatother
\newcommand{\la}{\langle}
\newcommand{\ra}{\rangle}
\newcommand{\lf}{\lfloor}
\newcommand{\rf}{\rfloor}
\newcommand{\wt}{\widetilde}
\newcommand{\wh}{\widehat}
\newcommand{\til}{\tilde}
\newcommand{\onetwothree}{\vcenter{\hbox{\includegraphics[scale=0.25]{checkMMM}}}}
\newcommand{\noderight}{\textcolor{red}{\downarrow}}
\newcommand{\nodeleft}{\textcolor{blue}{\uparrow}}
\definecolor{Red}{rgb}{0.8,0,0.2}
\newcommand{\GG}[1]{}
\makeatletter
\def\@footnotecolor{red}
\define@key{Hyp}{footnotecolor}{%
\HyColor@HyperrefColor{#1}\@footnotecolor
}
\def\@footnotemark{%
\leavevmode
\ifhmode\edef\@x@sf{\the\spacefactor}\nobreak\fi
\stepcounter{Hfootnote}%
\global\let\Hy@saved@currentHref\@currentHref
\hyper@makecurrent{Hfootnote}%
\global\let\Hy@footnote@currentHref\@currentHref
\global\let\@currentHref\Hy@saved@currentHref
\hyper@linkstart{footnote}{\Hy@footnote@currentHref}%
\@makefnmark
\hyper@linkend
\ifhmode\spacefactor\@x@sf\fi
\relax
}
\makeatother
\hypersetup{footnotecolor=blue}
\makeatletter
\patchcmd{\@startsection}
{\@afterindenttrue}
{\@afterindentfalse}
{}{}
\makeatother
\title[Hopf Monoids in Perturbative Algebraic Quantum Field Theory]{Hopf Monoids in Perturbative Algebraic\\ Quantum Field Theory}
\author{William Norledge}
\address{Pennsylvania State University}
\email{[email protected]}
\thanks{This paper is an abridged version of `Species-theoretic foundations of perturbative quantum field theory', arXiv:2009.09969}
\begin{document}
\usetagform{red}
\renewcommand{\chapterautorefname}{Chapter}
\renewcommand{\sectionautorefname}{Section}
\renewcommand{\subsectionautorefname}{Section}
\begin{abstract}
We develop an algebraic formalism for perturbative quantum field theory (pQFT) which is based on Joyal's combinatorial species. We show that certain basic structures of pQFT are correctly viewed as algebraic structures internal to species, constructed with respect to the Cauchy monoidal product. Aspects of this formalism have appeared in the physics literature, particularly in the work of Bogoliubov-Shirkov, Steinmann, Ruelle, and \hbox{Epstein-Glaser-Stora}. In this paper, we give a fully explicit account in terms of modern theory developed by \hbox{Aguiar-Mahajan}. We describe the central construction of causal perturbation theory as a homomorphism from the Hopf monoid of set compositions, decorated with local observables, into the Wick algebra of microcausal polynomial observables. The operator-valued distributions called (generalized) time-ordered products and (generalized) retarded products are obtained as images of fundamental elements of this Hopf monoid under the curried homomorphism. The perturbative \hbox{S-matrix} scheme corresponds to the so-called universal series, and the property of causal factorization is naturally expressed in terms of the action of the Hopf monoid on itself by Hopf powers, called the Tits product. Given a system of fully renormalized time-ordered products, the perturbative construction of the corresponding interacting products is via an up biderivation of the Hopf monoid, which recovers Bogoliubov's formula.
\end{abstract}
\maketitle
\vspace{-6.5ex}
\setcounter{tocdepth}{1}
\hypertarget{foo}{ }
\tableofcontents
\section*{Introduction}\label{intro}
The theory of species is a richer, categorified version of analyzing combinatorial structures in terms of generating functions, going back to André Joyal \cite{joyal1981theorie}, \cite{joyal1986foncteurs}, \cite{bergeron1998combinatorial}. In this approach, one sees additional structure by encoding processes of \emph{relabeling} combinatorial objects, that is, by modeling combinatorial objects as presheaves on the category $\sfS$ of finite sets $I$ (the labels) and bijections $\sigma$ (relabelings). In this paper, we are concerned with species $\textbf{p}$ valued in complex vector spaces, i.e. functors of the form
\[
\textbf{p}:\sfS^{\op}\to \sfVec, \qquad I\mapsto \textbf{p}[I]
,\quad
\sigma \mapsto \textbf{p}[\sigma]
\]
where $\sfVec$ is the category of complex vector spaces. Explicitly, $\textbf{p}$ consists of a complex vector space $\textbf{p}[I]$ for each finite set $I$, and a bijective linear map $\textbf{p}[\sigma]:\textbf{p}[I]\to \textbf{p}[J]$ for each bijection $\sigma:J\to I$ such that composition of bijections is preserved.
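For example, two standard species, which reappear later in this paper, are the exponential species $\textbf{E}$ and the species of linear orders $\textbf{L}$, given by
\[
\textbf{E}[I]=\bC
,\qquad
\textbf{L}[I]=\text{span}_{\bC}\big\{ \text{linear orders } \ell \text{ on } I \big\}
\]
with $\textbf{E}[\sigma]$ the identity on $\bC$, and $\textbf{L}[\sigma]$ the linearization of relabeling linear orders along $\sigma$.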
A highly structured theory of gebras\footnote{\ meaning (co/bi/Hopf)algebras and Lie (co)algebras} internal to vector species has been developed by \hbox{Aguiar-Mahajan} \cite{aguiar2010monoidal}, \cite{aguiar2013hopf}, building on the work of Barratt \cite{barratt1978twisted}, Joyal \cite{joyal1986foncteurs}, Schmitt \cite{Bill93}, Stover \cite{stover1993equivalence}, and others. For the internalization, one uses the Day convolution monoidal product $\textbf{p}\bigcdot\textbf{q}$ with respect to disjoint union and tensor product, given by
\[
\textbf{p}\bigcdot\textbf{q}[I] = \textbf{p} \otimes_{\text{Day}} \textbf{q}[I]= \bigoplus_{S\sqcup T=I} \textbf{p}[S]\otimes \textbf{q}[T]
.\]
This may be viewed as a categorification of the Cauchy product of formal power series.\footnote{\ from the perspective of $\textsf{S}$-colored (co)operads, as defined in e.g. \cite[Section 3]{MR3134040}, there is an equivalent description of these gebras as (co)algebras over the left (co)action (co)monads of the (co)operads $\Com^{ (\ast) }$, $\Ass^{ (\ast) }$, $\Lie^{ (\ast) }$ \cite[Appendix B.5]{aguiar2010monoidal}, which relates the gebras of this paper to structures such as cyclic operads, which already appear in mathematical physics} Various decategorifications of \hbox{Aguiar-Mahajan's} theory recover the plethora of graded combinatorial Hopf algebras which have been studied \cite[Chapter 15]{aguiar2010monoidal}.
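Indeed, writing $\textbf{p}[n]:=\textbf{p}[\{1,\dots,n\}]$ and assuming finite dimensionality, at the level of exponential generating functions we have
\[
\mathcal{E}_{\textbf{p}\bigcdot \textbf{q}}(x)
=
\mathcal{E}_{\textbf{p}}(x)\, \mathcal{E}_{\textbf{q}}(x)
,\qquad \text{where}\quad
\mathcal{E}_{\textbf{p}}(x):=\sum_{n\geq 0} \dim \textbf{p}[n]\, \frac{x^n}{n!}
\]
since $\dim \textbf{p}\bigcdot \textbf{q}[n]=\sum_{k=0}^{n}\binom{n}{k}\dim \textbf{p}[k]\dim \textbf{q}[n-k]$.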
On the other hand, quantum field theory (QFT) may be viewed as a kind of modern infinite dimensional calculus. Perturbative quantum field theory (pQFT) is the part of QFT which considers Taylor series approximations of smooth functions. By an argument of Dyson \cite{Dyson52}, Taylor series of realistic pQFTs are expected to have vanishing radius of convergence.
Nevertheless, if an actual smooth function of a non-perturbative quantum field theory is being approximated, then these Taylor series are asymptotic series, and so one might expect their truncations to agree to reasonable precision with experiment. This is indeed the case.
There are two main synthetic approaches to (non-perturbative) QFT, which grew out of the failure to make sense of the path integral analytically. There is functorial quantum field theory (FQFT), which formalizes the Schr\"odinger picture by assigning time evolution operators to cobordisms between spacetimes. There is also algebraic quantum field theory (AQFT), going back to \cite{haagkas64}, which formalizes the Heisenberg picture by assigning $\text{C}^\ast$-algebras of observables to regions of spacetime. Low dimension examples of AQFTs/Wightman field theories were rigorously constructed in seminal work of \hbox{Glimm-Jaffe} and others \cite{MR247845}, \cite{MR272301}, \cite{MR363256}.
Perturbative algebraic quantum field theory (pAQFT) \cite{rejzner2016pQFT}, \cite{dutsch2019perturbative}, \cite[\href{https://ncatlab.org/nlab/show/geometry+of+physics+--+perturbative+quantum+field+theory}{nLab}]{perturbative_quantum_field_theory}, due to Brunetti, D\"utsch, Fredenhagen, Hollands, Rejzner, Wald, and others, is (mathematically precise, realistic) pQFT based on causal perturbation theory \cite{steinbook71}, \cite{ep73roleofloc}, \cite{MR1359058}, due to St\"uckelberg, Bogoliubov, Steinmann, Epstein, Glaser, Stora, and others. See \cite[Foreword]{dutsch2019perturbative} for an account of the history. Following \cite{Slavnov78}, \cite{klaus2000micro}, \cite{dutfred00}, in which one takes the algebraic adiabatic limit to handle \hbox{IR-divergences}, pAQFT satisfies the Haag-Kastler axioms of AQFT, but with \hbox{$\text{C}^\ast$-algebras} replaced by formal power series $\ast$-algebras, reflecting the fact that pQFT deals with Taylor series approximations. In this paper, we show that the construction and structure of these formal power series algebras is naturally described in terms of gebra theory internal to species.
For simplicity, we restrict ourselves to the Klein-Gordon real scalar field on Minkowski spacetime $\cX\cong \bR^{p,1}$, $p\in \bN$ (pAQFT may be applied in more general settings, see e.g. \cite{MR2455327}). Therefore for us, an off-shell field configuration $\Phi$ is a smooth function
\[
\Phi:\cX\to \bR
,\qquad
x\mapsto \Phi(x)
.\]
In particular, we do not impose conditions on the asymptotic behaviour of $\Phi$ at infinite times. Let $\mathcal{F}_{\text{loc}}$ denote the space of local observables $\ssA\in \mathcal{F}_{\text{loc}}$; these are functionals of field configurations which are obtained by integrating polynomials in $\Phi$ and its derivatives against bump functions on $\cX$. Let $\mathcal{F}$ denote the commutative $\ast$-algebra of microcausal polynomial observables $\emph{\textsf{O}}\in \mathcal{F}$; these are polynomial functionals of field configurations satisfying a \hbox{microlocal-theoretic} condition known as microcausality, with multiplication the pointwise multiplication of functionals, sometimes called the normal-ordered product. Then $\mathcal{F}[[\hbar]]$ is a formal power series $\ast$-algebra in formal Planck's constant $\hbar$, called the (abstract, off-shell) Wick algebra, with multiplication the Moyal star product for the Wightman propagator $\Delta_{\text{H}}$ of the Klein-Gordon field
\[
\mathcal{F}[[\hbar]] \otimes \mathcal{F}[[\hbar]] \to \mathcal{F}[[\hbar]]
,\qquad
\emph{\textsf{O}}_1\otimes \emph{\textsf{O}}_2 \mapsto \emph{\textsf{O}}_1 \star_{\text{H}}\! \emph{\textsf{O}}_2
,\]
sometimes called the operator product.
Perhaps the most fundamental Hopf monoid of Aguiar-Mahajan's theory is the cocommutative Hopf algebra\footnote{\ we say `algebra' and not `monoid' since vector species form a linear category} of compositions $\Sig$, see \autoref{hopfofsetcomp}, which is a Hopf monoid internal to vector species defined with respect to the Day convolution. (More familiar is perhaps a certain decategorification of $\Sig$, which is the graded Hopf algebra of noncommutative symmetric functions $\textbf{NSym}$, see \cite[Section 17.3]{aguiar2010monoidal}.) A composition $F$ of $I$ is a surjective function of the form
\[
F:I\to \{1,\dots,k\}
,\qquad
\text{for some} \quad k\in \bN
.\]
The ordering $1>\dots>k$ is understood, so that $F$ models the $k^{\text{th}}$ ordinal with $I$-marked points. We let $S_j=F^{-1}(j)$, called the lumps of $F$, and write $F=(S_1,\dots, S_k)$. Each component $\Sig[I]$ is the space of formal linear combinations of compositions $F$ of $I$,
\[
\Sig[I] = \Big \{ \mathtt{a}=\sum_{F} c_F \tH_F \ \big | \ c_F\in \bC \Big \}
.\]
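For example, for $I=\{a,b\}$,
\[
\Sig[\{a,b\}]
=
\bC\big\{ \tH_{(\{a,b\})},\ \tH_{(\{a\},\{b\})},\ \tH_{(\{b\},\{a\})} \big\}
\]
so that in general $\dim \Sig[I]$ is the Fubini number (ordered Bell number) $1,1,3,13,75,\dots$ of compositions of $I$.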
The multiplication
\[
\mu_{S,T}: \Sig[S]\otimes \Sig[T]\to \Sig[I]
,\qquad
\tH_F\otimes \tH_G\mapsto \tH_{FG}
\]
is the linearization of concatenating compositions (`gluing' via ordinal sum), and the comultiplication
\[\Delta_{S,T}:\Sig[I] \to\Sig[S]\otimes \Sig[T]
,\qquad
\tH_F \mapsto \tH_{F|_S} \otimes \tH_{F|_T}
\]
is the linearization of restricting compositions to subsets (`forgetting marked points'), where $S\sqcup T=I$.
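For example, for $I=\{a,b,c\}$ with $S=\{a,b\}$ and $T=\{c\}$, we have
\[
\mu_{S,T}\big(\tH_{(\{b\},\{a\})}\otimes \tH_{(\{c\})}\big)=\tH_{(\{b\},\{a\},\{c\})}
,\qquad
\Delta_{S,T}\big(\tH_{(\{b\},\{a,c\})}\big)=\tH_{(\{b\},\{a\})}\otimes \tH_{(\{c\})}
.\]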
Aspects of $\Sig$ have appeared in the physics literature as follows. Firstly, \hbox{Epstein-Glaser-Stora's} algebra of proper sequences \cite[Section 4.1]{epstein1976general} is the action of $\Sig$ on itself by Hopf powers, called the Tits product \cite[Section 13]{aguiar2013hopf}, going back to Tits \cite{Tits74}. Secondly, the primitive part $\Zie=\mathcal{P}(\Sig)$\footnote{\ the name `Zie' comes from \cite{aguiar2017topics}}, which is a Lie algebra internal to species, is essentially the Steinmann algebra from e.g. \cite[Section 6]{Ruelle}, \cite[Section III.1]{bros}. More precisely, the Steinmann algebra is a graded Lie algebra based on the structure map of the adjoint realization of $\Zie$, see \autoref{sec:Ruelle's Identity and the GLZ Relation}. Thirdly and fourthly, though outside the scope of this paper, $\Sig$ also appears in work of Losev-Manin and in the theory of Feynman integrals; see below.
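In the $\tH$-basis, the Tits product admits the following explicit description (following the conventions of \cite[Section 13]{aguiar2013hopf}): for compositions $F=(S_1,\dots,S_k)$ and $G$ of $I$,
\[
\tH_F \triangleright \tH_G = \tH_{G|_{S_1}\cdots\, G|_{S_k}}
\]
the concatenation of the restrictions of $G$ to the lumps of $F$, which refines $F$ by $G$. For example, $\tH_{(\{a,b\},\{c\})}\triangleright \tH_{(\{c\},\{a\},\{b\})}=\tH_{(\{a\},\{b\},\{c\})}$.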
The central idea of this paper is to formalize the construction of a system of interacting \hbox{time-ordered} products in causal perturbation theory as the construction of a homomorphism $\widetilde{\text{T}}$ of algebras internal to species of the form
\[
\widetilde{\text{T}}:\Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}
\to
\textbf{U}_{\mathcal{F}[[\hbar, \formg]]}
.\]
We describe this construction in a clean abstract setting in \autoref{sec:T-Products, Generalized T-Products, and Generalized R-Products}, and then specialize to QFT in \autoref{sec:Time-Ordered Products}. Here, $\otimes$ is the Hadamard monoidal product (=componentwise tensoring), $\textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}$ is the species given by $I\mapsto (\mathcal{F}_{\text{loc}}[[\hbar]])^{\otimes I}$, and $\textbf{U}_{\mathcal{F}[[\hbar, \formg]]}$ is the algebra in species which has the Wick algebra, with formal coupling constant $\formg$ adjoined, in each $I$-component,
\[
\textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I] = (\mathcal{F}_{\text{loc}}[[\hbar]])^{\otimes I} ,\qquad \textbf{U}_{\mathcal{F}[[\hbar, \formg]]}[I] = \mathcal{F}[[\hbar, \formg]]
.\]
It follows that the data of a system of products $\widetilde{\text{T}}$ is equivalently a homomorphism of \hbox{$\bC$-algebras}
\[
\hat{\Sig}(\mathcal{F}_{\text{loc}}[[\hbar]])\to \mathcal{F}[[\hbar, \formg]]
\]
where $\hat{\Sig}(-): \textsf{Vec}\to \textsf{Vec}$ is the analytic endofunctor, or Schur functor, on vector spaces associated to $\Sig$ \cite[Section 19.1.2]{aguiar2010monoidal}.\footnote{\ the hat $\hat{\Sig}$ is meant to suggest a kind of categorified Fourier transform} Decategorified versions of this formalization appear in graded Hopf algebra approaches to pQFT \cite{Brouder10}, \cite[p. 635]{Borcherds10}. In particular, there is an interpretation of the Moyal deformation quantization in terms of Laplace pairings (=coquasitriangular structures) \cite{Fauser01}, \cite[Section 2.4]{Brouder10}.
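Explicitly, on a vector space $V$, this analytic endofunctor may be described as
\[
\hat{\Sig}(V)=\bigoplus_{n\geq 0} \big( \Sig[n]\otimes V^{\otimes n} \big)_{S_n}
\]
where $(-)_{S_n}$ denotes coinvariants for the diagonal action of the symmetric group $S_n$, with multiplication induced by the multiplication of $\Sig$ together with concatenation of tensors.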
Also related is the notion of a Losev-Manin cohomological field theory \cite[Theorem 3.3.1]{losevmanin}, \cite[Definition 1.3]{shadrin2011group}, where finite ordinals are replaced by strings of Riemann spheres glued at the poles, giving a Hopf monoid structure on the toric variety of the permutohedron, and $\Sig$ is replaced by the ordinary homology of this toric variety. The Hopf monoid structure of this toric variety is also central to modern approaches to Feynman integrals \cite[p.6]{MR3713351}, \cite{schultka2018toric}. We shall study this Hopf monoid in future work.
Explicitly, the homomorphism $\widetilde{\text{T}}$ consists of component linear maps
\[
\widetilde{\text{T}}_I:\Sig[I] \otimes (\mathcal{F}_{\text{loc}}[[\hbar]])^{\otimes I}
\to
\mathcal{F}[[\hbar, \formg]]
,\qquad
\tH_F\otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} \mapsto \widetilde{\text{T}}_I(\tH_F\otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n})
\]
for each finite set $I=\{i_1,\dots, i_n\}$. This homomorphism should also satisfy causal factorization, which says
\[
\widetilde{\text{T}}_I( \mathtt{a} \otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} )
=
\widetilde{\text{T}}_I( \! \! \! \! \underbrace{\mathtt{a} \triangleright \tH_{G}}_{\text{Tits product}} \! \! \! \! \otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} )
\qquad \text{for all} \quad
\mathtt{a}\in \Sig[I]
\]
whenever the local observables $\ssA_{i_1}, \dots , \ssA_{i_n}$ respect the ordering of $I$ induced by the composition $G$, see \autoref{prob:causalfac}. Additional properties are often included, such as translation equivariance.
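For instance, take $\mathtt{a}=\tH_{(I)}$ and $G=(S,T)$, so that $\mathtt{a}\triangleright \tH_{G}=\tH_{(S,T)}$. Then causal factorization reads
\[
\widetilde{\text{T}}_I\big( \tH_{(I)} \otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} \big)
=
\widetilde{\text{T}}_I\big( \tH_{(S,T)} \otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} \big)
\]
whenever the observables indexed by $S$ occur later than those indexed by $T$ (roughly, no $\ssA_i$ with $i\in S$ is supported in the causal past of the supports of the $\ssA_i$ with $i\in T$), recovering the usual factorization of a time-ordered product into the operator product of two time-ordered products.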
We can curry $\widetilde{\text{T}}$ with respect to the internal hom $\cH(-,-)$ for the Hadamard product, giving a homomorphism of algebras
\[
\Sig \to \cH( \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]} ,\textbf{U}_{ \mathcal{F}[[\hbar, \formg]]} )
,\qquad
\tH_{F}=\tH_{(S_1,\dots, S_k)} \mapsto \widetilde{\text{T}}(S_1)\dots \widetilde{\text{T}}(S_k)
.\]
The resulting linear maps
\[
\widetilde{\text{T}}(S_1)\dots \widetilde{\text{T}}(S_k): (\mathcal{F}_{\text{loc}}[[\hbar]])^{\otimes I} \to \mathcal{F}[[\hbar, \formg]]
\]
are called interacting generalized time-ordered products. For each choice of a field polynomial, the curried homomorphism is a `representation' of $\Sig$ as $\mathcal{F}[[\hbar, \formg]]$-valued generalized functions on $\cX^I$, called operator-valued distributions since the Wick algebra is often represented on a Hilbert space. The composition of the time-ordered products $\widetilde{\text{T}}(I)$ with the Hadamard vacuum state
\[
\la - \ra_{0} :\mathcal{F}[[\hbar, \formg]] \to \bC[[\hbar, \formg]]
,\qquad
\emph{\textsf{O}} \mapsto \emph{\textsf{O}}(\Phi=0)
\]
are then translation invariant $\bC[[\hbar, \formg]]$-valued generalized functions
\[
\text{G}_I: \cX^I \to \bC[[\hbar, \formg]]
,\qquad
(x_{i_1}, \dots, x_{i_n}) \mapsto \text{G}_I(x_{i_1}, \dots, x_{i_n}) \footnote{\ we have used generalized function notation; $\text{G}_I$ is not a single function, but can be represented by a sequence of functions}
\]
called time-ordered $n$-point correlation functions. After taking the adiabatic limit, and in the presence of vacuum stability, these functions may be interpreted as the probabilistic predictions made by the pQFT of the outcomes of scattering experiments, called scattering amplitudes, see \autoref{sec:scatterung}. However, their values are formal power series in $\hbar$ and $\formg$, and so have to be truncated.
Central to Aguiar-Mahajan's work is the interpretation of $\Sig$ (and other Hopf monoids) in terms of the geometry of the type $A$ reflection hyperplane arrangement, called the (essentialized) braid arrangement
\[
\text{Br}[I]=\big\{ \{ x_{i_1}-x_{i_2}=0 \} \subseteq
\! \! \! \! \! \! \!
\underbrace{\bR^I/\bR \twoheadleftarrow \bR^I}_{\text{quotient by translations}}
\! \! \! \! \! \! \!
: (i_1,i_2)\in I^2, \ i_1 \neq i_2 \big \}
.\]
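For example, for $I=\{1,2,3\}$ the quotient $\bR^I/\bR$ is a plane, and $\text{Br}[I]$ consists of the three lines
\[
\{x_1=x_2\}
,\qquad
\{x_1=x_3\}
,\qquad
\{x_2=x_3\}
.\]
The six chambers $x_{\ell_1}>x_{\ell_2}>x_{\ell_3}$ are indexed by the linear orders on $I$, and in general the faces of $\text{Br}[I]$ are indexed by the compositions of $I$, which underlies the geometric interpretation of $\Sig$.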
In causal perturbation theory, the braid arrangement appears as the space of time components of configurations \hbox{$\cX^I$} modulo translational symmetry \cite[Section 2]{Ruelle}, and the reflection hyperplanes are the coinciding interaction points. Every real hyperplane arrangement $\text{A}$ has a corresponding adjoint hyperplane arrangement $\text{A}^\vee$ \cite[Section 1.9.2]{aguiar2017topics}. The free vector space $\bR I$ on $I$ is naturally $\Hom(\bR^I,\bR)$, and so the adjoint of the braid arrangement is given by
\[
\text{Br}^\vee[I]=\bigg\{ \Big\{ \sum_{i\in S} x_i=\sum_{i\in T} x_i=0\Big \} \subseteq \underbrace{\Hom(\bR^I/\bR,\bR)\hookrightarrow \bR I}_{\text{sum-zero subspace}} : S\sqcup T=I,\ S,T\neq \emptyset \bigg \}
.\]
In causal perturbation theory, the adjoint braid arrangement appears as the space of energy components \cite[Section 2]{Ruelle}, and the hyperplanes correspond to subsets going `on-shell'. The spherical representation of the adjoint braid arrangement is called the Steinmann sphere, or Steinmann planet, e.g. \cite[Figure A.4]{epstein2016}. The chambers of the adjoint braid arrangement are indexed by combinatorial gadgets called cells $\cS$ \cite[Definition 6]{epstein1976general}, also known as maximal unbalanced families \cite{billera2012maximal} and positive sum systems \cite{MR3467341}.
The primitive part Lie algebra $\Zie=\mathcal{P}(\Sig)$ (together with its dual Lie coalgebra $\Zie^\ast$) has a natural geometric realization over the adjoint braid arrangement \cite[Section 6]{Ruelle}, \cite[\href{https://www.youtube.com/watch?v=fUnr0f6mV4c}{Lecture 33}]{oc17}, \cite{lno2019}, \cite{norledge2019hopf}, which results in cells $\cS$ corresponding to certain special primitive elements $\mathtt{D}_\cS\in \Zie[I]$, see \autoref{adjoint}. The special elements were named Dynkin elements by Aguiar-Mahajan \cite[Section 14.1 and 14.9.8]{aguiar2017topics}. It is shown in \cite{norledge2019hopf} that the Dynkin elements span $\Zie$, but they are not linearly independent. The relations which are satisfied by the Dynkin elements are known as the Steinmann relations \cite[Equation 44]{steinmann1960}, see \autoref{stein}, first studied by Steinmann in settings where $\Sig$ is represented as operator-valued distributions. More recently, they have been studied in the context of scattering amplitudes, where they appear to be related to cluster algebras \cite{drummond2018cluster}, \cite{caron2019cosmic}, \cite{Caron-Huot:2020bkp}.
If we restrict a curried system of interacting generalized time-ordered products to the primitive part $\Zie$, then we obtain a Lie algebra homomorphism
\[
\Zie\to\cH(\textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]},\textbf{U}_{\mathcal{F}[[\hbar, \formg]]}), \qquad \mathtt{D}_\cS \mapsto \widetilde{\text{R}}_\cS
.\]
The operator-valued distributions $\widetilde{\text{R}}_\cS$ which are the images of the Dynkin elements $\mathtt{D}_\cS$ are the interacting generalized retarded products of the system, see e.g. \cite{steinmann1960}, \cite{Huz1}, \cite[Equation 79]{ep73roleofloc}. In this paper, we give an exposition of the Steinmann algebra and Steinmann relations in \autoref{sec:Steain}, \autoref{adjoint} and \autoref{stein}.
Let $\textbf{L}\hookrightarrow \Sig$ be the Hopf subalgebra of linear orders (=compositions with singleton lumps), and let $\textbf{E}^\ast\hookrightarrow \Sig$ be the subcoalgebra of compositions with one lump. Then we have the dictionary in \autoref{dic} between products/vacuum expectation values and elements of $\Sig$. In the commutative setting before Moyal deformation quantization, the species $\textbf{X}$ and $\textbf{E}$ are similarly related to the smeared field and polynomial observables, see \autoref{sec:obs}.
\begin{figure}[t]
\begin{tabular}{|c|c|c|c|}
\hline
&
\begin{tabular}{@{}c@{}}spanning set\end{tabular}&
\begin{tabular}{@{}c@{}}operator-valued distributions\end{tabular}&
\begin{tabular}{@{}c@{}}vacuum expectation values\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$\textbf{E}^\ast$\end{tabular}&
\begin{tabular}{@{}c@{}}universal series\\$\mathtt{G}_I$\end{tabular}&
\begin{tabular}{@{}c@{}}time-ordered product\\ $\text{T}(I)$\end{tabular}&
\begin{tabular}{@{}c@{}}time-ordered $n$-point\\ function\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$\textbf{L}$\end{tabular}&
\begin{tabular}{@{}c@{}}$\tH$-basis linear orders\\$\tH_\ell$\end{tabular}&
\begin{tabular}{@{}c@{}}$\text{T}(i_1)\dots \text{T}(i_n)$\end{tabular}&
\begin{tabular}{@{}c@{}}Wightman $n$-point\\ functions \end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$\Sig$\end{tabular}&
\begin{tabular}{@{}c@{}}$\tH$-basis set compositions\\$\tH_F$\end{tabular}&
\begin{tabular}{@{}c@{}}generalized time-ordered products\\$\text{T}(S_1)\dots \text{T}(S_k)$\end{tabular}&
\begin{tabular}{@{}c@{}}generalized time-ordered\\ functions\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$\Zie$\end{tabular}&
\begin{tabular}{@{}c@{}}Dynkin elements\\$\mathtt{D}_\cS$\end{tabular}&
\begin{tabular}{@{}c@{}}generalized retarded products\\$\text{R}_\cS$\end{tabular}&
\begin{tabular}{@{}c@{}}generalized retarded\\ functions\end{tabular}\\ \hline
\end{tabular}
\caption{Dictionary between products/vacuum expectation values and elements of the Hopf algebra $\Sig$.}
\label{dic}
\end{figure}
In \autoref{sec:Perturbation of T-Products by Steinmann Arrows} and \autoref{sec:Interactions}, we formalize the \emph{perturbation} of time-ordered products in causal perturbation theory as follows. Our starting point is a fully normalized system of generalized \hbox{time-ordered} products, that is, a homomorphism of algebras
\[
\text{T}:\Sig\otimes \textbf{E}_{\mathcal{F}_\text{loc} [[\hbar]]}\to\textbf{U}_{\mathcal{F}((\hbar))}
\]
satisfying causal factorization, and such that the singleton components $\text{T}_{\{i\}}$ are the natural inclusion
\[
\mathcal{F}_\text{loc} [[\hbar]]\hookrightarrow \mathcal{F}((\hbar))
,\qquad
\ssA \mapsto \, \, :\! \ssA : .
\]
The corresponding operator-valued distributions are determined everywhere on $\cX^I$ by causal factorization, except on the fat diagonal (=coinciding interaction points). In particular, off the fat diagonal, the time-ordered products $\text{T}(I)$ are given by the Moyal star product $\star_{\text{F}}$ with respect to the Feynman propagator $\Delta_{\text{F}}$ for the Klein-Gordon field. The terms of the product $\star_{\text{F}}$ may be encoded in finite multigraphs, i.e. Feynman graphs. The remaining inherent ambiguity means one has to make choices when extending the $\text{T}(I)$ to the fat diagonal, and these choices form a torsor of the \hbox{St\"uckelberg-Petermann} renormalization group. This is Stora's elaboration \cite{stora16}, \cite{stora1993differential}, \cite{klaus2000micro} on \hbox{St\"uckelberg-Bogoliubov-Epstein-Glaser} normalization \cite{ep73roleofloc}, which constructs the $\text{T}(I)$ inductively in $n=|I|$. We leave species-theoretic aspects of renormalization, and possible connections to \hbox{Connes-Kreimer} theory \cite{MR1845168}, \cite{Bondia00}, \cite{Kreimer05}, \cite{FredHopf14}, to future work.
In the original formulation by Tomonaga, Schwinger, Feynman and Dyson, would-be \hbox{time-ordered} products are obtained by informally multiplying Wick algebra products by step functions, which is in general ill-defined by H\"ormander's criterion. This leads to the divergence of individual terms of the formal power series, called UV-divergences. Then informal methods are used to obtain finite values from these infinite terms \cite[Preface and Section 4.3]{MR1359058}.
The exponential species $\textbf{E}$, given by $\textbf{E}[I]=\bC$ and $1_\bC\in \textbf{E}[I]$ denoted $\tH_I$, has the structure of an algebra in species obtained by linearizing the union of sets,
\[
\mu_{S,T}: \textbf{E}[S]\otimes \textbf{E}[T]\to \textbf{E}[I]
,\qquad
\tH_S\otimes \tH_T\mapsto \tH_{I}
.\]
An $\textbf{E}$-module $\textbf{m}=(\textbf{m},\rho)$ is a species $\textbf{m}$ equipped with an associative and unital action morphism
\[\rho:\textbf{E}\bigcdot\textbf{m}\to \textbf{m}.\]
Moreover, taking the inverse of $\mu_{S,T}$ as the comultiplication turns $\textbf{E}$ into a connected (co)commutative bialgebra, and so the category of $\textbf{E}$-modules $\textsf{Rep}(\textbf{E})$ is a symmetric monoidal category with monoidal product the Cauchy product of $\textbf{E}$-modules. In particular, we may consider Hopf/Lie algebras internal to $\textsf{Rep}(\textbf{E})$, which we call Hopf/Lie \hbox{$\textbf{E}$-algebras}.
The retarded $Y\downarrow(-)$ and advanced $Y\uparrow(-)$ Steinmann arrows, whose precise definition is due to \hbox{Epstein-Glaser-Stora} \cite[p.82-83]{epstein1976general}, are formalized here as raising operators on $\Sig$. They define two $\textbf{E}$-module structures on $\Sig$,
\[
\textbf{E}\bigcdot \Sig \to \Sig
,\quad
\tH_Y \otimes \tH_F \, \mapsto\, Y \downarrow\tH_F
\qquad \text{and} \qquad
\textbf{E}\bigcdot \Sig \to \Sig
,\quad
\tH_Y \otimes \tH_F \, \mapsto\, Y \uparrow \tH_F
.\]
See \autoref{sec:The Steinmann Arrows}. In particular, the retarded arrow is generated by putting $\{\ast\} \downarrow \tH_{(I)}= -\tH_{(\ast, I)} +\tH_{(\ast I)}$.\footnote{\ $(\ast I)$ denotes the composition of $\{ \ast \}\sqcup I$ which has a single lump.} Then
\[
Y\! \downarrow \tH_{(I)}= \underbrace{\sum_{Y_1\sqcup Y_2=Y} \mu_{Y_1, Y_2\sqcup I}\big ( \text{s}(\tH_{(Y_1)}) \otimes \tH_{(Y_2\sqcup I)} \big )}_{\text{denoted $\mathtt{R}_{(Y;I)}$}}
\]
where $\text{s}:\Sig \to \Sig$ is the antipode of $\Sig$. The Steinmann arrows were first studied by Steinmann \cite[Section 3]{steinmann1960}, where $\Sig$ is represented as operator-valued distributions. Here, the \hbox{operator-valued} distribution which corresponds to $\mathtt{R}_{(Y;I)}\in \Sig[Y\sqcup I]$ is called the retarded product $\text{R}(Y;I)$.\footnote{\ note that some authors, e.g. \cite{dutsch2019perturbative}, call $\text{R}(Y;i)$ the retarded product, and then call $\text{R}(Y;I)$ the generalized retarded product}
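For example, for $Y=\{1\}$ and $I=\{2\}$, the above formula for $\mathtt{R}_{(Y;I)}$ gives
\[
\mathtt{R}_{(1;2)}
=
\tH_{(12)} + \mu_{\{1\},\{2\}}\big( \text{s}(\tH_{(1)}) \otimes \tH_{(2)} \big)
=
\tH_{(12)} - \tH_{(1,2)}
,\]
in agreement with the generating relation $\{\ast\} \downarrow \tH_{(I)}= -\tH_{(\ast, I)} +\tH_{(\ast I)}$.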
Since $\{\ast\} \downarrow (-)$ is a commutative biderivation of $\Sig$ (\autoref{steinmannarrowaredercoder}), the retarded Steinmann arrow gives $\Sig$ the structure of a Hopf $\textbf{E}$-algebra, and $\Zie$ the structure of a Lie $\textbf{E}$-algebra (similarly for the advanced arrow). There is an interesting description of these Lie $\textbf{E}$-algebras in terms of the adjoint braid arrangement, see \autoref{sec:The Steinmann Arrows and Dynkin Elements}. The Steinmann arrows are ``two halves'' of the restricted adjoint representation $\textbf{L}\bigcdot \Sig \to \Sig$ of $\Sig$, which is reflected in \cite[Equation 13]{steinmann1960}. This directly corresponds to how the retarded $\Delta_-$ and advanced $\Delta_+$ propagators are two halves of the causal propagator $\Delta_{\text{S}}=\Delta_+ - \Delta_-$.
Let $\cH^{\bigcdot}(-,-)$ denote the internal hom for the Cauchy product of species, and let
\[
(-)^{\textbf{E}}=\cH^{\bigcdot} ( \textbf{E} , -)
.\]
See \autoref{Coalgebras} for a more explicit definition. See also \cite[Section 2]{norledge2020species} for more details regarding the distinction between the $\formj$-colored sets $I$ (physically, the source field) and the $\formg$-colored sets $Y$ (physically, the coupling constant). Then $(-)^{\textbf{E}}$ is an endofunctor on species, which is lax monoidal with respect to the Cauchy product. Therefore $\Sig^{\textbf{E}}$ is naturally an algebra, with multiplication inherited from $\Sig$. Then, by currying the retarded Steinmann action $\textbf{E}\bigcdot \Sig \to \Sig$, we obtain a homomorphism $\Sig \to \Sig^{\textbf{E}}$. Similarly for the setting with decorations, given a choice of adiabatically switched interaction action functional $\ssS_{\text{int}}\in \mathcal{F}_{\text{loc}}[[\hbar]]$, after acting with the retarded Steinmann arrows and currying, we obtain the homomorphism
\begin{align*}
\Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]} &\to (\Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]})^{\textbf{E}}\\[6pt]
\tH_F\otimes \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n} &\mapsto \, \sum_{r=0}^\infty \underbrace{\downarrow\dots \downarrow}_{\text{$r$ times}} \tH_F \otimes \underbrace{\ssS_{\text{int}}\otimes \dots \otimes \ssS_{\text{int}}}_{\text{$r$ times}} \, \otimes\, \ssA_{i_1}\otimes \dots \otimes \ssA_{i_n}.
\end{align*}
Compare this with the formalism for creation-annihilation operators in \cite[Chapter 19]{aguiar2010monoidal}. Then, finally, the corresponding system of perturbed interacting time-ordered products $\widetilde{\text{T}}$ is given by composing this homomorphism with the image of $\text{T}$ under the endofunctor $(-)^{\textbf{E}}$,
\[
\widetilde{\text{T}} : \Sig\otimes \textbf{E}_{\mathcal{F}_\text{loc} [[\hbar]]} \to (\Sig\otimes \textbf{E}_{\mathcal{F}_\text{loc} [[\hbar]]})^{\textbf{E}} \xrightarrow{\text{T}^{\textbf{E}}} (\textbf{U}_{ \mathcal{F}((\hbar))})^{\textbf{E}} \cong \textbf{U}_{\mathcal{F}((\hbar))[[\formg]]}
.\]
See \autoref{sec:Perturbation of T-Products by Steinmann Arrows}. It is a theorem of pAQFT that this does indeed land in $\textbf{U}_{\mathcal{F}[[\hbar,\formg]]}$.
Finally, in \autoref{sec:T-Exponentials} and \autoref{sec:Time-Ordered Products}, we formalize S-matrices, or time-ordered exponentials, as follows. Let $\Hom(-,-)$ denote the external hom for species, which lands in vector spaces $\sfVec$. We let
\[
\mathscr{S}(-)=\Hom(\textbf{E},-)
.\]
This is lax monoidal with respect to the Cauchy product. In the presence of a generic system of products on an algebra $\textbf{a}$,
\[
\varphi:\textbf{a}\otimes \textbf{E}_V\to \textbf{U}_\cA,
\]
series $\mathtt{s}\in \mathscr{S}(\textbf{a})$ of $\textbf{a}$
\[\mathtt{s}:\textbf{E}\to\textbf{a}
,\qquad
\tH_I\mapsto \mathtt{s}_I\]
induce $\mathscr{S}(\textbf{U}_\cA)\cong \cA[[\formj]]$-valued functions $\mathcal{S}_{\mathtt{s}}$ on $V$ as follows,
\[
\mathcal{S}_{\mathtt{s}} : V \to \cA[[\formj]]
,\qquad
\ssA \mapsto \mathcal{S}_{\mathtt{s}}(\formj\! \ssA) := \sum_{n=0}^{\infty} \dfrac{\formj^n}{n!} \varphi_{n} ( \mathtt{s}_{n} \otimes \underbrace{\ssA\otimes \dots \otimes \ssA}_{\text{$n$ times}})
.\]
If $\varphi$ is a homomorphism of algebras, then
\[
\mathcal{S}_{(-)}: \mathscr{S}(\textbf{a})\to \text{Func}(V, \cA[[\formj]])
\]
is a homomorphism of $\bC$-algebras. As a basic example, if we put $\textbf{a}=\textbf{E}$, $\cA=C^\infty(V^\ast)$, and set $\formj=1$ at the end, then one can recover the classical exponential function in this way.
For $c\in \bC$, the so-called (scaled) universal series $\mathtt{G}(c)$ of $\Sig$ is given by sending each finite set to the (scaled) composition with one lump,
\[
\mathtt{G}(c): \textbf{E} \to \Sig
,\qquad
\tH_{I}\mapsto \mathtt{G}(c)_{I}:= c^n\, \tH_{(I)}
.\]
If we set $c=1/\text{i}\hbar$, then the function $\mathcal{S}=\mathcal{S}_{\mathtt{G}(1/\text{i}\hbar)}$ above for a fully normalized system of generalized time-ordered products $\text{T}:\Sig\otimes \textbf{E}_{\mathcal{F}_\text{loc} [[\hbar]]}\to\textbf{U}_{\mathcal{F}((\hbar))}$ recovers the usual perturbative S-matrix scheme of pAQFT,
\[
\mathcal{S}:\mathcal{F}_{\text{loc}}[[\hbar]]\to\mathcal{F}((\hbar))[[\formj]]
,\qquad
\ssA \mapsto \mathcal{S}(\formj\! \ssA) = \sum_{n=0}^{\infty} \bigg( \dfrac{1}{\text{i} \hbar} \bigg)^n \dfrac{\formj^n}{n!} \text{T}_{n} ( \tH_{(n)} \otimes \underbrace{\ssA\otimes \dots \otimes \ssA}_{\text{$n$ times}})
.\]
The image of $\mathcal{S}(\formj\! \ssA)$ after applying perturbation by the retarded Steinmann arrow and a choice of interaction $\ssS_{\text{int}}\in \mathcal{F}_{\text{loc}}[[\hbar]]$ is
\[
\mathcal{Z}_{\formg \ssS_{\text{int}}}(\formj\! \ssA)
=
\sum_{n=0}^\infty \sum_{r=0}^\infty
\bigg(\dfrac{1}{\text{i}\hbar}\bigg)^{\! r+n}
\dfrac{\formg^{r} \formj^n}{r!\, n!}\, \text{R}_{r;n} (\underbrace{\ssS_{\text{int}}\otimes \dots \otimes \ssS_{\text{int}}}_{\text{$r$ times}}\, ;\, \underbrace{\ssA\otimes \dots \otimes \ssA}_{\text{$n$ times}} )
\]
where, by our previous expression for $\mathtt{R}_{(Y;I)}=Y\downarrow \tH_{(I)}$ (and letting $\overline{\text{T}}$ denote the precomposition of $\text{T}$ with the antipode of $\Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}$), we have
\[
\text{R}_{Y;I}(\ssS_{\text{int}}^{\, Y};\ssA^I)
=
\text{T}_{Y\sqcup I}(Y\downarrow \tH_{(I)} \otimes \ssS_{\text{int}}^{\, Y} \otimes \ssA^I)
=
\sum_{Y_1 \sqcup Y_2=Y} \overline{\text{T}}_{Y_1}(\ssS_{\text{int}}^{\, Y_1}) \star_{\text{H}} \text{T}_{Y_2\sqcup I}( \ssS_{\text{int}}^{\, Y_2}\otimes \ssA^I)
.\]
Then, since
\[
\mathcal{S}_{(-)}:\mathscr{S}(\Sig)\to \text{Func}\big (\mathcal{F}_{\text{loc}}[[\hbar]] , \mathcal{F}((\hbar))[[\formg]]\big )
\]
is a homomorphism of $\bC$-algebras, it follows that $\mathcal{Z}_{\formg \ssS_{\text{int}}}$ is given by
\[
\mathcal{Z}_{\formg \ssS_{\text{int}}}(\formj\! \ssA)
=
\mathcal{S}^{-1}( \formg \ssS_{\text{int}})\star_{\text{H}} \mathcal{S}(\formg \ssS_{\text{int}} +\formj\! \ssA )
.\]
This is the generating function, or partition function, for time-ordered products of interacting field observables, see e.g. \cite[Section 8.1]{ep73roleofloc}, \cite[Section 6.2]{dutfred00}, going back to Bogoliubov \cite[Chapter 4]{Bogoliubov59}. In this paper, we arrive at the generating function $\mathcal{Z}_{\formg \ssS_{\text{int}}}$ through purely Hopf-theoretic considerations. However, it was originally motivated by attempts to make sense of the path integral synthetically. For some recent developments, see \cite{collini2016fedosov}, \cite{MR4109798}.
\subsection*{Structure.}
This paper is divided into two parts. In part one, we focus on developing theory for the Hopf algebra of compositions $\Sig$ and its primitive part $\Zie$. In part two, we specialize to pAQFT for the case of a real scalar field on Minkowski spacetime.
\subsection*{Acknowledgments.}
We thank Adrian Ocneanu for his support and useful discussions. This paper would not have been written without Nick Early's discovery that certain relations appearing in Ocneanu's work were known in quantum field theory as the Steinmann relations. We thank Yiannis Loizides and Maria Teresa Chiri for helpful discussions during an early stage of this project. We thank Arthur Jaffe for his support, useful suggestions, and encouragement to pursue this topic. We thank the Penn State mathematics department for their continued support.
\part{Hopf Monoids}
\section{The Algebras}
We recall the Hopf algebra of compositions $\Sig$, together with its Lie algebra of primitive elements $\Zie\hookrightarrow \Sig$. We show that $\Sig$ and $\Zie$ are naturally algebras over the exponential species $\textbf{E}$. This will be a \hbox{species-theoretic} formalization of mathematical structure discovered by Steinmann \cite{steinmann1960} and \hbox{Epstein-Glaser-Stora} \cite{epstein1976general}, which, combined with a certain `perturbation of systems of products' construction using the $\textbf{E}$-action, will recover the perturbative construction of interacting fields in pAQFT, as in \cite[Section 8.1]{ep73roleofloc}, \cite[Section 6.2]{dutfred00}, going back to Bogoliubov \cite[Chapter 4]{Bogoliubov59}.
\subsection{Compositions} \label{comp}
Let $I$ be a finite set of cardinality $n$. We think of $I$ as having `color' $\formj$ (physically, the source field). As a particular example of the set $I$, we have the set of integers $[n]:=\{ 1, \dots, n \}$ (formally, we have picked a section of the decategorification functor $I\mapsto n$). For $k\in \bN$, let
\[
(k):=\{1,\dots,k\}
\]
equipped with the ordering \hbox{$1>\dots> k$}. A \emph{composition} $F$ of $I$ of \emph{length} $l(F)=k$ is a surjective function $F:I\to (k)$. The set of all compositions of $I$ is denoted $\Sigma[I]$,
\[
\Sigma[I]:= \bigsqcup_{k\in \bN} \big \{ \text{surjective functions}\ F:I \to (k) \big\}
.\]
We often denote compositions by $k$-tuples
\[
F= (S_1, \dots, S_k)
\]
where $S_j:= F^{-1}(j)$, $1\leq j \leq k$. The $S_j$ are called the \emph{lumps} of $F$. In particular, we have the length one composition $(I)$ for $I\neq \emptyset$, and the length zero composition $(\, )$ which is the unique composition of the empty set. The \emph{opposite} $\bar{F}$ of $F$ is defined by
\[
\bar{F}:=(S_k,\dots, S_1), \qquad \text{i.e.} \quad \bar{F}^{-1}(j)=F^{-1}(k+1-j)
.\]
Given a decomposition $I\! =S\sqcup T$ of $I$ ($S,T$ can be empty), for $F=(S_1, \dots , S_{k})$ a composition of $S$ and $G=(T_1,\dots, T_{l})$ a composition of $T$, their \emph{concatenation} $FG$ is the composition of $I$ given by
\[
FG: = ( S_1, \dots , S_{k}, T_1,\dots, T_{l} )
.\]
For $S\subseteq I$ and $F=(S_1, \dots , S_{k}) \in \Sigma[I]$, the \emph{restriction} $F|_S$ of $F$ to $S$ is the composition of $S$ given by
\[
F|_S:= ( S_1 \cap S, \dots, S_k\cap S )_+
\]
where $(-)_+$ means we delete any sets from the list which are the empty set.
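For example, concatenating the composition $F=(1,2)$ of $\{1,2\}$ with the composition $G=(34)$ of $\{3,4\}$ gives $FG=(1,2,34)$, while restricting $F'=(14,2,3)$ to $S=\{2,3,4\}$ gives $F'|_S=(4,2,3)$.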
For compositions $F,G\in \Sigma[I]$, we write $G\leq F$ if $G$ can be obtained from $F$ by iteratively merging contiguous lumps. Given compositions $G\leq F$ with $G=(T_1, \dots, T_l)$, we let
\[
l(F/G):=\prod^{l}_{j=1} l( F|_{T_j} )
\qquad \text{and} \qquad
(F/G)!:=\prod^{l}_{j=1} l( F|_{T_j} )!\,
.\]
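For example, if $F=(1,2,3,4)$ and $G=(12,34)$, then $G\leq F$ with $F|_{T_1}=(1,2)$ and $F|_{T_2}=(3,4)$, so that
\[
l(F/G)=2\cdot 2=4
\qquad \text{and} \qquad
(F/G)!=2!\, 2!=4
.\]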
\subsection{The Cocommutative Hopf Monoid of Compositions} \label{hopfofsetcomp}
Let
\[
\Sig[I]
:= \big\{\text{formal $\bC$-linear combinations of compositions of $I$}\big\}
.\]
The vector space $\Sig[I]$ is naturally a right module over the symmetric group on $I$, and these actions extend to a contravariant functor from the category $\textsf{S}$ of finite sets and bijections into the category $\textsf{Vec}$ of vector spaces over $\bC$,
\[
\Sig:\textsf{S}^\text{op} \to \textsf{Vec}
,\qquad
I \mapsto \Sig[I]
.\]
For $F$ a composition of $I$, let $\tH_F\in \Sig[I]$ denote the basis element corresponding to $F$. The sets $\{\tH_F: F\in \Sigma[I]\}$ form the \emph{$\tH$-basis} of $\Sig$.
In general, functors $\textbf{p}:\textsf{S}^\text{op} \to \textsf{Vec}$ are called (complex) \emph{vector species}, going back to Joyal \cite{joyal1981theorie}, \cite{joyal1986foncteurs}. Morphisms of vector species $\eta:\textbf{p}\to \textbf{q}$ are natural transformations; they consist of a linear map $\eta_I: \textbf{p}[I]\to \textbf{q}[I]$ for each finite set $I$ which commutes with the action of the bijections. When $I=[n]:=\{ 1,\dots,n\}$, we abbreviate $\eta_n:= \eta_{[n]}$.
We equip vector species with the tensor product $\textbf{p}\bigcdot \textbf{q}$ known as the \emph{Cauchy product} \cite[Definition 8.5]{aguiar2010monoidal}, given by
\begin{equation}\label{eq:Cauchy}
\textbf{p}\bigcdot \textbf{q} [I] := \bigoplus_{I=S\sqcup T } \textbf{p}[S] \otimes \textbf{q}[T]
.
\end{equation}
This is the Day convolution with respect to disjoint union of sets and tensor product of vector spaces. In this paper, we consider algebraic structures on species which are constructed using this tensor product. In particular, a multiplication on a species $\textbf{p}$ consists of linear maps
\[
\mu_{S,T} : \textbf{p} [S] \otimes \textbf{p}[T] \to \textbf{p}[I]
\]
and a comultiplication on $\textbf{p}$ consists of linear maps
\[
\Delta_{S,T} : \textbf{p}[I] \to \textbf{p} [S] \otimes \textbf{p}[T]
,\]
where we have a map for each choice of decomposition $I=S\sqcup T$ ($S,T$ can be empty). We can then impose conditions like (co)associativity, see e.g. \cite[Section 1.3]{norledge2020species}.
Following \cite[Section 11]{aguiar2013hopf}, $\Sig$ is a connected\footnote{\ a species $\textbf{p}$ is \emph{connected} if $\textbf{p}[\emptyset]=\bC$} bialgebra, meaning it is naturally equipped with an associative, unital multiplication and a coassociative, counital comultiplication, which are compatible in the sense they satisfy the bimonoid axiom. See \cite[Section 8.3.1]{aguiar2010monoidal} for details. The multiplication and comultiplication are given in terms of the $\tH$-basis by
\[
\mu_{S,T}(\tH_F\otimes \tH_G):=\tH_{FG} \qquad \text{and} \qquad \Delta_{S,T} (\tH_F) := \tH_{F|_S} \otimes \tH_{F|_T}
.\]
We sometimes abbreviate $\tH_F \tH_G:= \mu_{S,T}(\tH_F \otimes\tH_G)$. The unit and counit are given by
\[
\mathtt{1}_{\Sig}:=\tH_{(\, )} \qquad \text{and} \qquad \epsilon_\emptyset(\tH_{(\, )}):=1_\bC
.\]
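For example, over $\{1,2,3\}$ we have $\tH_{(1)} \tH_{(23)}=\tH_{(1,23)}$, and $\Delta_{\{1,2\},\{3\}}(\tH_{(13,2)})=\tH_{(1,2)}\otimes \tH_{(3)}$.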
Let
\begin{equation}\label{antipode}
\overline{\tH}_F:= \sum_{G\geq \bar{F}} (-1)^{l(G)}\, \tH_G
.
\end{equation}
Then \cite[Theorem 11.38]{aguiar2010monoidal} (in the case $\textbf{q}=\textbf{E}^\ast_+$ and $q=1$) shows that
\begin{equation}\label{eq:inversion relation for reverse time-ordered products}
\sum_{S\sqcup T=I} \tH_{F|_S} \overline{\tH}_{F|_T}
=0
\qquad \text{and} \qquad
\sum_{S\sqcup T=I}\overline{\tH}_{F|_S} \tH_{F|_T}
=0
.
\end{equation}
In general, connected bialgebras are automatically Hopf algebras, and it follows from \textcolor{blue}{(\refeqq{eq:inversion relation for reverse time-ordered products})} that the antipode $\text{s}:\Sig\to \Sig$ is given by
\[
\text{s}_I(\tH_F)=\overline{\tH}_F
.\]
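For example, over $\{1,2\}$ we have $\overline{\tH}_{(1,2)}=\tH_{(2,1)}$ and $\overline{\tH}_{(12)}=-\tH_{(12)}+\tH_{(1,2)}+\tH_{(2,1)}$, and one checks directly for $F=(12)$ that
\[
\sum_{S\sqcup T=\{1,2\}} \tH_{F|_S} \overline{\tH}_{F|_T}
=
\overline{\tH}_{(12)} - \tH_{(1,2)} - \tH_{(2,1)} + \tH_{(12)}
=0
.\]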
The Hopf algebra $\Sig$ is the free cocommutative Hopf algebra on the positive coalgebra $\textbf{E}^\ast_+$ \cite[Section 11.2.5]{aguiar2010monoidal}, and so $\Sig\cong \textbf{L}\boldsymbol{\circ} \textbf{E}^\ast_+$ where `$\boldsymbol{\circ}$' is plethysm of species and $\textbf{L}\hookrightarrow \Sig$ is the subspecies of singleton lump compositions ($=$linear orders).
There is a second important basis of $\Sig$, called the \emph{$\tQ$-basis}. The $\tQ$-basis is also indexed by compositions, and is given by
\[
\tQ_F:= \sum_{G\geq F} (-1)^{ l(G)-l(F) } \dfrac{1}{ l(G/F) } \tH_G\qquad \text{or equivalently} \qquad \mathtt{H}_F=: \sum_{G\geq F} \dfrac{1}{( G/F )!} \tQ_G
.\]
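For example, over $\{1,2\}$,
\[
\tQ_{(12)}=\tH_{(12)}-\tfrac{1}{2}\, \tH_{(1,2)}-\tfrac{1}{2}\, \tH_{(2,1)}
\qquad \text{and} \qquad
\tQ_{(1,2)}=\tH_{(1,2)}
,\]
and inverting, $\tH_{(12)}=\tQ_{(12)}+\tfrac{1}{2}\tQ_{(1,2)}+\tfrac{1}{2}\tQ_{(2,1)}$.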
For $S\subseteq I$ and $F\in \Sigma[I]$, we have \emph{deshuffling}
\[F\res_S\, :=
\begin{cases}
F|_S &\quad \text{if $S$ is a union of lumps of $F$}\footnote{\ }\\
0\in \Sig[S] &\quad \text{otherwise.}
\end{cases}
\]
\footnotetext{\ not necessarily contiguous}The multiplication and comultiplication of $\Sig$ is given in terms of the $\tQ$-basis by
\[
\mu_{S,T} ( \tQ_F\otimes \tQ_G ) = \tQ_{FG}
\qquad \text{and} \qquad
\Delta_{S,T} (\tQ_F) = \tQ_{F\res_S} \otimes \, \tQ_{F\res_T}
.\]
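For example, $(1,2)\res_{\{1\}}=(1)$ since $\{1\}$ is a lump of $(1,2)$, whereas $(12)\res_{\{1\}}=0$. It follows that
\[
\Delta_{\{1\},\{2\}}(\tQ_{(1,2)})=\tQ_{(1)}\otimes \tQ_{(2)}
\qquad \text{and} \qquad
\Delta_{\{1\},\{2\}}(\tQ_{(12)})=0
.\]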
\subsection{Decorations}\label{Decorations}
Given a complex vector space $V$, we can use $V$ to `decorate' $\Sig$ in order to obtain an enlarged Hopf algebra $\Sig\otimes \textbf{E}_V$. This goes as follows.
We have the species denoted $\textbf{E}_V$, given by
\[
\textbf{E}_V[I] := V^{\otimes I}= \! \! \! \! \! \! \! \! \! \underbrace{V\otimes \dots \otimes V}_{\text{a copy of $V$ for each $i\in I$}}
\! \! \! \! \! \! \! \! \! \! \!
.\]
The action of bijections is given by relabeling tensor factors.
\begin{remark}
Notice that species of the form $\textbf{E}_V$ are exactly the monoidal functors \hbox{$\textbf{E}_V: \textsf{S}^{\text{op}} \to \textsf{Vec}$}.
\end{remark}
We denote vectors by $\ssA,\ssS\in V$, and we denote simple tensors of $V^{\otimes I}$ by
\[
\ssA_I=\ssA_{i_1}\otimes \cdots \otimes \ssA_{i_n} \in V^{\otimes I}
\]
where $I=\{ i_1,\dots, i_n\}$. If $\ssA_i=\ssA$ for all $i\in I$, then we write
\begin{equation}\label{eq:simpletensors}
\ssA^{I}:= \ssA\otimes \cdots \otimes \ssA\in V^{\otimes I}
\qquad \text{and} \qquad
\ssA^{ n }:=\ssA^{[n]}\in V^{\otimes [n]}
\end{equation}
where $[n]=\{1,\dots,n\}$ as usual.
We let `$\otimes$' denote the Hadamard product of species, which is given by componentwise tensoring, see e.g. \cite[Section 1.2]{norledge2020species}. Then the species of $V$-\emph{decorated compositions} $\Sig\otimes \textbf{E}_V$ is given by
\[
\Sig\otimes \textbf{E}_V[I] = \Sig[I]\otimes \textbf{E}_V[I]= \Sig[I] \otimes V^{\otimes I}
.\]
Following \cite[Section 8.13.4]{aguiar2010monoidal}, $\Sig\otimes \textbf{E}_V$ is a connected bialgebra, with multiplication given by
\[
\mu_{S,T}\big((\tH_F\otimes\ssA_S) \otimes (\tH_G \otimes \ssA_T)\big)
:=
\tH_F \tH_G \otimes \ssA_S \otimes \ssA_T
\]
and comultiplication given by
\[
\Delta_{S,T}(\tH_F\otimes \ssA_I)
:=
(\tH_{F|_S}\otimes{\ssA_{I}}|_S)\otimes(\tH_{F|_T} \otimes {\ssA_{I}}|_T)
.\]
The unit and counit are given by
\[
\mathtt{1}_{\Sig\otimes \textbf{E}_V}:=\tH_{(\, )}\otimes 1_\bC\qquad \text{and}\qquad \epsilon_\emptyset(\tH_{(\, )}\otimes 1_\bC):=1_\bC
.\]
For $\tH_F\otimes \ssA_I\in \Sig\otimes \textbf{E}_V[I]$, we have
\[
\sum_{S\sqcup T=I}
\mu_{S,T}\big ((\tH_{F|_S}\otimes {\ssA_{I}}|_S)\otimes (\overline{\tH}_{F|_T}\otimes {\ssA_{I}}|_T)\big)
=
\underbrace{\sum_{S\sqcup T=I} \tH_{F|_S} \overline{\tH}_{F|_T}}_{\text{$=0$ by \textcolor{blue}{(\refeqq{eq:inversion relation for reverse time-ordered products})}}} \otimes\, \ssA_{I}
=0
\]
and
\[
\sum_{S\sqcup T=I}
\mu_{S,T}\big ((\overline{\tH}_{F|_S}\otimes {\ssA_{I}}|_S)\otimes (\tH_{F|_T}\otimes {\ssA_{I}}|_T)\big)
=
\underbrace{\sum_{S\sqcup T=I} \overline{\tH}_{F|_S} \tH_{F|_T}}_{\text{$=0$ by \textcolor{blue}{(\refeqq{eq:inversion relation for reverse time-ordered products})}}}\otimes\, \ssA_{I}
=0
.\]
It follows that the antipode of $\Sig\otimes \textbf{E}_V$ is given by
\begin{equation}\label{eq:antipode}
\text{s}_I(\tH_F\otimes \ssA_I)= \overline{\tH}_F \otimes \ssA_I.
\end{equation}
\subsection{The Steinmann Algebra}\label{sec:Steain}
The Hopf algebra $\Sig$ is connected and cocommutative, and so the CMM Theorem applies, see \cite[Section 1.4]{norledge2020species}. We now describe the positive\footnote{\ a species $\textbf{p}$ is \emph{positive} if $\textbf{p}[\emptyset]=0$} Lie algebra of primitive elements
\[
\mathcal{P}(\Sig)\subset \Sig
.\]
For $I\in \sfS$ a finite set, let a \emph{tree} $\mathcal{T}$ over $I$ be a planar\footnote{\ i.e. a choice of left and right child is made at every node} full binary tree whose leaves are labeled bijectively with the blocks of a partition of $I$ (a \emph{partition} $P$ of $I$ is a set of disjoint nonempty subsets of $I$, called \emph{blocks}, whose union is $I$). The blocks of this partition, called the \emph{lumps} of $\mathcal{T}$, form a composition called the \emph{debracketing} $F_\mathcal{T}$ of $\mathcal{T}$, by listing them in order of appearance from left to right. We denote trees by nested products $[\, \cdot\, ,\, \cdot\, ]$ of subsets or trees, see \autoref{fig:tree}. We make the convention that no trees exist over the empty set $\emptyset$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{tree} \caption{Let $I$ be various subsets of $\{1,2,3,4,5,6,7,8,9\}$. The trees $[4]$, $[1,23]$ ($\neq[23,1]$), $[[2,3],5]$, $[[24,[1,9]],678]$ are shown. The debracketing of $[[24,[1,9]],678]$ is the composition $(24,1,9,678)$. If we put $\mathcal{T}_1=[24,[1,9]]$ and $\mathcal{T}_2=[678]$, then $[\mathcal{T}_1, \mathcal{T}_2]$ would also denote this tree.}
\label{fig:tree}
\end{figure}
\noindent We define the positive species $\textbf{Zie}$ by letting $\textbf{Zie}[I]$ denote the vector space of formal $\bC$-linear combinations of trees over $I$, modulo the relations of antisymmetry and the Jacobi identity as interpreted on trees in the usual way. Explicitly,
\begin{enumerate}
\item(antisymmetry) for all trees of the form $[\dots [\mathcal{T}_1, \mathcal{T}_2 ]\dots]$ (writing a tree in this form is equivalent to picking a node) we have
\[
[\dots [\mathcal{T}_1, \mathcal{T}_2 ]\dots]
+ [\dots [\mathcal{T}_2, \mathcal{T}_1 ]\dots]=
0
.\]
\item(Jacobi Identity) for all trees of the form $[\dots [[\mathcal{T}_1,\mathcal{T}_2],\mathcal{T}_3] \dots ]$ we have
\[
[\dots [[\mathcal{T}_1,\mathcal{T}_2],\mathcal{T}_3]\dots ]+
[\dots [[\mathcal{T}_3,\mathcal{T}_1],\mathcal{T}_2]\dots ]+
[\dots [[\mathcal{T}_2,\mathcal{T}_3],\mathcal{T}_1]\dots ]=
0
.\]
\end{enumerate}
Then $\Zie$ is a positive Lie algebra in species, with Lie bracket $\partial^\ast$ given by
\[
\partial_{S,T}^\ast(\mathcal{T}_1\otimes \mathcal{T}_2):=[\mathcal{T}_1,\mathcal{T}_2]
.\]
\begin{remark}
We have that $\Zie$ is the free Lie algebra on the positive exponential species $\textbf{E}^\ast_+$, and so the species $\Zie$ is also given by
\[
\Zie[I]
=
\Lie \boldsymbol{\circ} \bE^\ast_+[I]= \bigoplus_{P} \Lie[P]
\]
where $\textbf{Lie}$ is the species of the Lie operad, and the direct sum is over all partitions $P$ of $I$.
\end{remark}
The Lie algebra in species $\Zie$ is closely related to the Steinmann algebra from the physics literature \cite[Section III.1]{bros}, \cite[Section 6]{Ruelle}. Precisely, the Steinmann algebra is an ordinary graded Lie algebra based on the structure map for the adjoint braid arrangement realization of $\Zie$. The adjoint braid arrangement realization of $\Zie$ is the topic of \cite{lno2019}, and the fact that the Lie algebra there is indeed $\Zie$ was shown in \cite{norledge2019hopf}.
Via the commutator bracket, $\Sig$ is a Lie algebra in species, given by
\[
[\tH_F,\tH_G] =\tH_F \tH_G -\tH_G \tH_F
.\]
Let
\[
[I; \text{2}]:= \big\{ \text{surjective functions}\ I\to \{1,2 \} \big\}
\]
denote the set of compositions of $I$ with two lumps. Since $\Sig$ is connected, its positive Lie subalgebra of primitive elements $\mathcal{P}(\Sig)\subset \Sig$ is given on nonempty $I$ by
\[
\mathcal{P}(\Sig)[I]
=\bigcap_{(S,T)\in [I; \text{2}]} \text{ker}
\big(
\Delta_{S,T} :\Sig[I]\to \Sig[S]\otimes \Sig[T]
\big)
.\]
In particular, $\tQ_{(I)}\in \mathcal{P}(\Sig)[I]$ for $I$ nonempty. Since $\Zie$ is freely generated by stick trees $[I]$, we can define a homomorphism of Lie algebras by
\[\Zie\to \mathcal{P}(\Sig), \qquad [I]\mapsto \tQ_{(I)}.\]
To describe this explicitly, given a tree $\mathcal{T}$, let $\text{antisym}(\mathcal{T})$ denote the set of $2^{l(F_{\mathcal{T}})-1}$ many trees which are obtained by switching left and right branches at nodes of $\mathcal{T}$. For $\mathcal{T}' \in \text{antisym}(\mathcal{T})$, let $(\mathcal{T}, \mathcal{T}')\in \bZ/2\bZ$ denote the parity of the number of node switches required to bring $\mathcal{T}$ to $\mathcal{T}'$. Then the homomorphism is given in full by
\[
\textbf{Zie} \to \mathcal{P}(\Sig), \qquad \mathcal{T} \mapsto \tQ_\mathcal{T} := \sum_{\mathcal{T}' \in \text{antisym}(\mathcal{T})} (-1)^{ (\mathcal{T},\mathcal{T}') } \tQ_{F_{\mathcal{T}'}}
.\]
By \cite[Corollary 11.46]{aguiar2010monoidal}, this is an isomorphism. From now on, we make the identification
\[
\Zie= \mathcal{P}(\Sig)
\]
and retire the notation $\mathcal{P}(\Sig)$.
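For example, the tree $[1,23]$ over $\{1,2,3\}$ is sent to
\[
\tQ_{[1,23]}=\tQ_{(1,23)}-\tQ_{(23,1)}=[\tQ_{(1)},\tQ_{(23)}]
,\]
since $\text{antisym}([1,23])=\{[1,23],\, [23,1]\}$.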
\subsection{Type $A$ Dynkin Elements}\label{adjoint}
Recall that the set of minuscule weights of (the root datum of) $\text{SL}_I(\bC)$ is in natural bijection with $[I; \text{2}]$. We denote the minuscule weight corresponding to $(S,T)$ by $\lambda_{ST}$. See \cite[Section 3.1]{norledge2019hopf} for more details.
A \emph{cell}\footnote{\ also known as maximal unbalanced families \cite{billera2012maximal} and positive sum systems \cite{MR3467341}} \cite[Definition 6]{epstein1976general} over $I$ is (equivalent to) a subset $\cS\subseteq [I; \text{2}]$ such that for all $(S,T)\in [I; \text{2}]$, exactly one of
\[
(S,T)\in \cS \qquad \text{and} \qquad (T,S)\in \cS
\]
is true, and whose corresponding set of minuscule weights is closed under conical combinations, that is
\[
\lambda_{UV}\in \text{coni}\big \la \lambda_{ST} : (S,T)\in \cS \big \ra
\quad \implies \quad
(U,V)\in \cS
.\]
By dualizing conical spaces generated by minuscule weights, cells are in natural bijection with chambers of the adjoint of the braid arrangement, see \cite[Section 3.3]{norledge2019hopf}, \cite[Definition 2.5]{epstein2016}. Their number is sequence \href{https://oeis.org/A034997}{A034997} in the OEIS. We denote the species of formal $\bC$-linear combinations of cells by $\textbf{L}^\vee$.
Associated to each composition $F$ of $I$ is the subset $\cF_F\subseteq [I; \text{2}]$ consisting of those compositions $(S,T)$ which are obtained by merging contiguous lumps of $F$,
\[
\cF_F:=\big \{ (S,T)\in [I; \text{2}] : (S,T) \leq F\big \}
.\]
More geometrically, $\cF_F$ is the subset corresponding to the set of minuscule weights which are contained in the closed braid arrangement face of $F$. Let us write $F\subseteq \cS$ as abbreviation for $\cF_F \subseteq \cS$.
Consider the morphism of species given by
\begin{equation} \label{eq:hbasisexp}
\textbf{L}^\vee\to \Sig
,\qquad
\cS\mapsto \mathtt{D}_\cS
:=
-\sum_{\bar{F}\subseteq \cS} (-1)^{l(F)} \tH_{F}
.
\end{equation}
The element $\mathtt{D}_\cS$ is called the \emph{Dynkin element} associated to the cell $\cS$. These special elements were defined by Epstein-Glaser-Stora in \cite[Equation 1, p.26]{epstein1976general}, and the name is due to \hbox{Aguiar-Mahajan} \cite[Equation 14.1]{aguiar2017topics} (see \autoref{Rem:dny}). In fact, $\mathtt{D}_\cS$ is a primitive element \cite[Proposition 14.1]{aguiar2017topics}, and so we actually have a morphism $\textbf{L}^\vee\to \Zie$.
For $i\in I$, let $\cS_i$ denote the cell given by
\[ \cS_i:=\big \{ (S,T)\in [I; \text{2}]: i\in S \big \} . \]
This is the cell corresponding to the adjoint braid arrangement chamber which contains the projection of the basis element $e_i\in \bR I$ onto the sum-zero hyperplane. Let the \emph{total retarded} Dynkin element $\mathtt{D}_i$ associated to $i$ be given by
\[
\mathtt{D}_i:= \mathtt{D}_{\cS_i} =-\sum_{\substack{F\in \Sigma[I]\\ i\in S_k}} (-1)^{l(F)} \tH_F
.\]
These Dynkin elements are considered in \cite[Section 14.5]{aguiar2013hopf}. For $i\in I$, let
\[
\bar{\cS}_i
:=
\big \{
(S,T)\in [I; \text{2}]: i\in T
\big \}
.\]
This is the cell corresponding to the adjoint braid arrangement chamber which is opposite to the chamber of $\cS_i$. Let the \emph{total advanced} Dynkin element $\mathtt{D}_{\bar{i}}$ associated to $i$ be given by
\[
\mathtt{D}_{\bar{i}}:= \mathtt{D}_{\bar{\cS}_i}=-\sum_{\substack{F\in \Sigma[I]\\ i\in S_1}} (-1)^{l(F)} \tH_F
.\]
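For example, for $I=\{1,2\}$ and $i=2$, the compositions contributing to $\mathtt{D}_2$ are $(12)$ and $(1,2)$, and those contributing to $\mathtt{D}_{\bar{2}}$ are $(12)$ and $(2,1)$, so that
\[
\mathtt{D}_2=\tH_{(12)}-\tH_{(1,2)}
\qquad \text{and} \qquad
\mathtt{D}_{\bar{2}}=\tH_{(12)}-\tH_{(2,1)}
.\]
In particular, $\mathtt{D}_{\bar{2}}-\mathtt{D}_{2}=\tH_{(1,2)}-\tH_{(2,1)}=[\tH_{(1)},\tH_{(2)}]$.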
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{supp}
\caption{A cell $\cS$ over $\{1,2,3\}$ (on the adjoint braid arrangement) and its Dynkin element $\mathtt{D}_\cS$ (on the tropical geometric realization of $\boldsymbol{\Sigma}$, where the multiplication embeds facets and the comultiplication projects onto facets, see \cite[Introduction]{norledge2019hopf}). In the presence of causal factorization, the time component of the corresponding generalized retarded function $r_\cS$ is a \hbox{$\bC[[\hbar, \formg]]$-valued} generalized function on the braid arrangement with support the gray cone. The Dynkin element shown is $\mathtt{D}_\cS=\mathtt{D}_3=\mathtt{R}_{(12;3)}$. Its support consists of those configurations such that the event labeled by $3$ can be causally influenced by the events labeled by $1$ and $2$.}
\label{fig:supp}
\end{figure}
\begin{remark}\label{Rem:dny}
More generally, Dynkin elements are certain Zie elements of generic real hyperplane arrangements, which are indexed by chambers of the corresponding adjoint arrangement. They were introduced by Aguiar-Mahajan in \cite[Equation 14.1]{aguiar2017topics}. Specializing to the braid arrangement, one recovers the type $A$ Dynkin elements $\mathtt{D}_\cS$.
\end{remark}
In \cite{norledge2019hopf}, the following perspective on the Dynkin elements is given. The Hopf algebra $\Sig^\ast$ which is dual to $\Sig$ is realized as an algebra $\hat{\Sig}^\ast$ of piecewise-constant functions on the braid arrangement. Then its dual, in the sense of polyhedral algebras \cite[Theorem 2.7]{MR1731815}, is an algebra $\check{\Sig}^\ast$ of certain functionals of piecewise-constant functions on the adjoint braid arrangement, i.e. those coming from evaluating on permutohedral cones. We have the morphism of species
\[
\check{\Sig}^\ast\to (\textbf{L}^\vee)^\ast
\]
defined by sending functionals to their restrictions to piecewise-constant functions on the complement of the hyperplanes. Since the multiplication of $\check{\Sig}^\ast$ corresponds to embedding hyperplanes, this morphism is the indecomposable quotient of $\check{\Sig}^\ast$ \cite[Theorem 4.5]{norledge2019hopf}. Then, in \cite[Proposition 5.1]{norledge2019hopf}, we see that taking the linear dual of this morphism recovers the Dynkin elements map,
\[
\textbf{L}^\vee\to \Sig, \qquad \cS\mapsto \mathtt{D}_\cS
.\]
(Here we have identified $\Sig^\ast= \check{\Sig}^\ast$.) Therefore we obtain the following.
\begin{thm}[$\! \! ${\cite{norledge2019hopf}}]
The morphism of species $\textbf{L}^\vee\to \Zie$ is surjective. Therefore the Dynkin elements $\{\mathtt{D}_\cS: \cS\ \text{is a cell over $I$} \}$ span $\Zie$.
\end{thm}
\subsection{The Steinmann Relations}\label{stein}
The Dynkin elements span $\Zie$, but they are not linearly independent. The relations which are satisfied by the Dynkin elements are generated by relations known in physics as the Steinmann relations, introduced in \cite{steinmann1960zusammenhang}, \cite{steinmann1960}.
Let a pair of \emph{overlapping channels} over $I$ be a pair $(S,T),(U,V)\in [I; \text{2}]$ of two-lump compositions of $I$ such that
\[
S\cap U\neq \emptyset, \qquad S\cap V\neq \emptyset, \qquad T\cap U \neq \emptyset, \qquad T\cap V \neq \emptyset
.\]
Let $\cS_1$, $\cS_2$, $\cS_3$, $\cS_4$ be four cells over $I$ with $(S,T),(U,V)\in \cS_1$, and such that $\cS_2$, $\cS_3$, $\cS_4$ are obtained from $\cS_1$ by replacing, respectively,
\[ (S,T), (U,V) \mapsto (T,S), (U,V) \]
\[ (S,T), (U,V) \mapsto (T,S), (V,U) \]
\[ (S,T), (U,V) \mapsto (S,T), (V,U). \]
Then, by inspecting the definition of the Dynkin elements \textcolor{blue}{(\refeqq{eq:hbasisexp})}, we see that\footnote{\ we go through the argument for the basic $4$-point case in \autoref{ex:stein}, which is sufficient to exhibit the general phenomenon}
\[
\mathtt{D}_{\cS_1} -\mathtt{D}_{\cS_2} +\mathtt{D}_{\cS_3} -\mathtt{D}_{\cS_4} =0.
\]
In general, a \emph{Steinmann relation} is any relation between Dynkin elements obtained in this way, i.e. an alternating sum of four Dynkin elements which are obtained from each other by switching overlapping channels only. This definition of the Steinmann relations can be found in \cite[Section 4.3]{epstein1976general} (it is given slightly more generally there for paracells).
An alternative characterization of the Steinmann relations in terms of the Lie cobracket of the dual Lie coalgebra $\Zie^\ast$ is \cite[Definition 4.2]{lno2019}. Here, the Steinmann relations appear in the same way one can arrive at generalized permutohedra, i.e. by insisting on type $A$ `factorization' in the sense of species-theoretic coalgebra structure. See \cite[Theorem 4.2 and Remark 4.2]{norledge2019hopf}.
Thus, the Dynkin elements satisfy the Steinmann relations. Moreover, these relations suffice to generate all relations between them.
\begin{thm}
The relations which are satisfied by the Dynkin elements are generated by the Steinmann relations. That is, if
\[
\textbf{Stein}[I]
:=
\big \la
\mathtt{D}_{\cS_1}-\mathtt{D}_{\cS_2}+\mathtt{D}_{\cS_3}-\mathtt{D}_{\cS_4}
:
\mathtt{D}_{\cS_1}-\mathtt{D}_{\cS_2}+\mathtt{D}_{\cS_3}-\mathtt{D}_{\cS_4}=0 \text{ is a Steinmann relation}
\big \ra\footnote{\ angled brackets denote $\bC$-linear span}
\]
then
\[
\Zie\cong \bigslant{\textbf{L}^\vee}{\textbf{Stein}}
.\]
\end{thm}
\begin{proof}
This follows by combining \cite[Theorem 4.3]{lno2019} with \cite[Theorems 4.2 and 4.5]{norledge2019hopf}.
\end{proof}
\begin{ex}\label{ex:stein}
Let us give the basic $4$-point example $I=\{1,2,3,4\}$, which takes place on a square facet of the type $A$ coroot solid \cite[Figure 1]{lno2019}. Consider the following four cells over $I$ (we have marked where they differ; the names `$s$-channel' and `$u$-channel' are from physics and refer to Mandelstam variables),
\[
\cS_1=\big\{\underbrace{(23,14)}_{u\text{-channel}}, (12,34), (1,234), (13,24), (123,4), (134,2), (3,124) \big \}
\]
\[
\cS_2=\big\{(23,14), \underbrace{(34,12)}_{s\text{-channel}}, (1,234), (13,24), (123,4), (134,2), (3,124) \big \}
\]
\[
\cS_3=\big\{\underbrace{(14,23)}_{u\text{-channel}}, (34,12), (1,234), (13,24), (123,4), (134,2), (3,124) \big \}
\]
\[
\cS_4=\big\{(14,23), \underbrace{(12,34)}_{s\text{-channel}}, (1,234), (13,24), (123,4), (134,2), (3,124) \big \}
.\]
The $s$-channel and the $u$-channel overlap, and so we should now have
\[
\mathtt{D}_{\cS_1}-\mathtt{D}_{\cS_2}+\mathtt{D}_{\cS_3}-\mathtt{D}_{\cS_4}=0
.\]
To see this, let us assume throughout that $\tH_{F}$ appears in the $\tH$-basis expansion \textcolor{blue}{(\refeqq{eq:hbasisexp})} of $\mathtt{D}_{\cS_1}$, i.e. $\bar{F} \subseteq \cS_1$. Then we have
\begin{equation*}
\bar{F} \subseteq \cS_1 \setminus \{ (12,34), (23,14) \} \quad \implies \quad \bar{F}\subseteq\cS_1, \ \cS_2,\ \cS_3,\ \cS_4. \tag{$\spadesuit$} \label{1}
\end{equation*}
If $\bar{F} \nsubseteq \cS_1 \setminus \{ (12,34), (23,14) \}$, then either $(12,34)\in \bar{F}$ or $(23,14)\in \bar{F}$ but not both, since the channels overlap. We then have
\begin{equation*}
(12,34)\in \bar{F} \implies \bar{F}\subseteq\cS_1, \ \bar{F}\nsubseteq\cS_2, \ \bar{F}\nsubseteq\cS_3, \ \bar{F}\subseteq\cS_4. \tag{$\heartsuit$} \label{2}
\end{equation*}
We also have
\begin{equation*}
(23,14)\in \bar{F} \implies \bar{F}\subseteq\cS_1, \ \bar{F}\subseteq\cS_2, \ \bar{F}\nsubseteq\cS_3, \ \bar{F}\nsubseteq\cS_4 . \tag{$\diamondsuit$} \label{3}
\end{equation*}
Notice that in all three cases \textcolor{blue}{(\refeqq{1})}, \textcolor{blue}{(\refeqq{2})}, \textcolor{blue}{(\refeqq{3})}, the prefactors of $\tH_{F}$ sum to zero in the four term alternating sum of the Steinmann relation.
\end{ex}
\begin{remark}
In \cite{norledge2019hopf}, the Steinmann condition is seen to be equivalent to the restriction to generalized permutohedra in a certain local (or spherical) sense. Ocneanu \cite{oc17} and Early \cite{early2019planar} have studied an affine version of the Steinmann condition, in the context of higher structures and matroid subdivisions. Here, one observes that the (translated) hyperplanes of the adjoint braid arrangement for the Mandelstam variables give three subdivisions of the hypersimplex $\Delta(2,4)$ (octahedron).
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{affine}
\label{fig:affine}
\end{figure}
\noindent See \cite{borges2019generalized}, \cite{cachazo2019planar} for the closely related study of generalized Feynman diagrams in generalized biadjoint $\Phi^3$-theory.
\end{remark}
\subsection{Ruelle's Identity} \label{sec:Ruelle's Identity and the GLZ Relation}
Since the Dynkin elements span $\Zie$, we can ask for a description of the Lie bracket of $\Zie$ in terms of the Dynkin elements. The answer is known in the physics literature as Ruelle's identity.
In order to state Ruelle's identity, we need to notice the following. For $S\sqcup T=I$, if $\cS_1$ is a cell over $S$ and $\cS_2$ is a cell over $T$, then $\cS_1 \sqcup \cS_2$ describes a collection of codimension one faces of the adjoint braid arrangement which are supported by the hyperplane orthogonal to $\lambda_{ST}$ (in \cite{lno2019}, such faces were called \emph{Steinmann equivalent}). A cell $\cS^{[S,T]}$ over $I$ which satisfies
\[
\cS^{[S,T]} \supseteq \cS_1\sqcup\cS_2
\qquad \text{and} \qquad
(S,T)\in \cS^{[S,T]}
\]
corresponds to a chamber arrived at by moving (by an arbitrarily small amount) from an interior point of a face of $\cS_1 \sqcup \cS_2$ in the $\lambda_{ST}$ direction. In particular, such cells always exist, but they are not unique (the Steinmann relations exactly quotient out this ambiguity). The chamber obtained by moving in the opposite direction corresponds to the cell obtained by replacing $(S,T)$ with $(T,S)$ in $\cS^{[S,T]}$.
\begin{prop}[Ruelle's Identity {\cite[Equation 6.6]{Ruelle}}] \label{prop:ruelle}
For $S\sqcup T=I$, let $\cS_1$ be a cell over $S$ and let $\cS_2$ be a cell over $T$. Let $\cS^{[S,T]}$ be a cell over $I$ which satisfies
\[
\cS^{[S,T]} \supseteq \cS_1\sqcup\cS_2
\qquad \text{and} \qquad
(S,T)\in \cS^{[S,T]}
.\]
Let $\cS^{[T,S]}$ denote the cell obtained by replacing $(S,T)$ with $(T,S)$ in $\cS^{[S,T]}$. Then the Lie bracket of $\Zie$ is given by
\begin{equation}\label{eq:ruelleiden}
[\mathtt{D}_{\cS_1},\mathtt{D}_{\cS_2}]
=
\mathtt{D}_{\cS^{[S,T]}}-\mathtt{D}_{\cS^{[T,S]}}
.
\end{equation}
\end{prop}
\begin{proof}
This result is clear from \cite[Section 5.2]{lno2019}; the Lie bracket given there to the adjoint braid arrangement realization of $\Zie$ (denoted there by $\Gam$) coincides with \textcolor{blue}{(\refeqq{eq:ruelleiden})}. Alternatively, we can just explicitly check, as in \cite[Section 4.3]{epstein1976general}.
\end{proof}
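For example, let $S=\{1\}$ and $T=\{2\}$, and let $\cS_1$ and $\cS_2$ be the unique (empty) cells over the singletons, so that $\mathtt{D}_{\cS_1}=\tH_{(1)}$ and $\mathtt{D}_{\cS_2}=\tH_{(2)}$. Then $\cS^{[S,T]}=\{(1,2)\}$ and $\cS^{[T,S]}=\{(2,1)\}$, with Dynkin elements
\[
\mathtt{D}_{\{(1,2)\}}=\mathtt{D}_1=\tH_{(12)}-\tH_{(2,1)}
\qquad \text{and} \qquad
\mathtt{D}_{\{(2,1)\}}=\mathtt{D}_{\bar{1}}=\tH_{(12)}-\tH_{(1,2)}
,\]
and indeed
\[
[\tH_{(1)},\tH_{(2)}]
=\tH_{(1,2)}-\tH_{(2,1)}
=\mathtt{D}_{\{(1,2)\}}-\mathtt{D}_{\{(2,1)\}}
.\]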
\section{$\Sig$ as a Hopf $\textbf{E}$-Algebra} \label{Sig as a Hopf E-Algebra}
We now recall the Steinmann arrows, which are (or we interpret as) actions of the exponential species $\textbf{E}$ on $\Sig$. We show that they give $\Sig$ the structure of a Hopf $\textbf{E}$-algebra (=Hopf monoid internal to $\textbf{E}$-modules) in two ways, and thus the primitive part $\Zie=\mathcal{P}(\Sig)$ the structure of a Lie $\textbf{E}$-algebra in two ways.
\subsection{Derivations and Coderivations of $\Sig$} \label{Derivations and Coderivations of Sig}
Let $Y=\{ y_1,\dots, y_r \}$ be a finite set with cardinality $r\in \bN$. We think of $Y$ as having `color' $\formg$ (physically, the coupling constant). Given a species $\textbf{p}$, we have the \hbox{$Y$-\emph{derivative}} $\textbf{p}^{[Y]}$ of $\textbf{p}$, which is the species given by
\[
\textbf{p}^{[Y]}[I] := \textbf{p}[Y \sqcup I] \qquad \text{and} \qquad \textbf{p}^{[Y]}[\sigma] := \textbf{p}[\text{id}_Y \sqcup \sigma]
.\]
A \emph{raising operator} $u$ on $\textbf{p}$ is a morphism of species of the form\footnote{\ for raising operators, we often abbreviate $u(\mathtt{a}):=u_I(\mathtt{a})$}
\[
u: \textbf{p} \to \textbf{p}^{[Y]}, \qquad \ta \mapsto u(\ta)
.\]
\begin{remark}
Moreover, there is an endomorphism algebra of raising operators \cite[Section 2.4]{norledge2020species}, which features when considering modules internal to species, see \cite[Section 5.1]{norledge2020species}.
\end{remark}
As a particular example of the set $Y$, we have the set of formal symbols $[r]:=\{ \ast_1, \dots, \ast_r \}$ (formally, we have picked a section of the decategorification functor $Y\mapsto r$). We often abbreviate $\ast=\ast_1$, also $\ast=\{\ast\}$ and $\ast I = \{ \ast \}\sqcup I$. The \emph{derivative} $\textbf{p}'$ of $\textbf{p}$ is the $Y$-derivative in the singleton case $Y=\{\ast\}$, thus
\[
\textbf{p}'[I]:= \textbf{p}^{[\ast]}[I]= \textbf{p}[\ast I]
.\]
Following \cite[Section 8.12.1]{aguiar2010monoidal}, an \emph{up operator} $u$ on $\textbf{p}$ is a raising operator of the form $u:\textbf{p} \to \textbf{p}'$. Writing $u_\ast(\ta)=u(\ta)$ in order to specify the name of the adjoined singleton, we call an up operator \emph{commutative} if
\[
u_{\ast_2}(u_{\ast_1}(\mathtt{a})) = u_{\ast_1} (u_{\ast_2}(\mathtt{a}))
.
\]
Raising operators can be obtained by iteratively applying commutative up operators, see \cite[Section 5.4]{norledge2020species}. Following \cite[Section 8.12.4]{aguiar2010monoidal}, an up operator on an algebra $\textbf{a}$ is called an \emph{up derivation} if
\begin{equation}\label{eq:upder}
u\big(\mu_{S,T}(\mathtt{a} \otimes \mathtt{b})\big)
=\mu_{\ast S,T}\big (u(\mathtt{a}) \otimes \mathtt{b}\big)+\mu_{S,\ast T}\big(\mathtt{a} \otimes u(\mathtt{b})\big)
\end{equation}
(it follows that $u(\mathtt{1}_{\textbf{a}})=0$ if $\textbf{a}$ is unital) and an up operator on a coalgebra $\textbf{c}$ is called an \emph{up coderivation} if
\begin{equation}\label{eq:upcoder}
\big(
u\otimes \text{id} + \text{id} \otimes u
\big)
\circ
\Delta_{S,T}(\mathtt{a})
=
\Delta_{\ast S,T} \big(u(\mathtt{a})\big)
+
\Delta_{S,\ast T} \big(u(\mathtt{a})\big)
.
\end{equation}
An \emph{up biderivation} on a bialgebra $\textbf{h}$ is an up operator which is both an up derivation and an up coderivation. The data of an up (co/bi)derivation on a connected species $\textbf{h}$ is equivalent to giving $\textbf{h}$ the structure of an \hbox{$\textbf{L}$-(co/Hopf)algebra} (= an (co/Hopf)monoid internal to $\textbf{L}$-modules). The data of a commutative up (co/bi)derivation on $\textbf{h}$ is equivalent to giving $\textbf{h}$ the structure of an \hbox{$\textbf{E}$-(co/Hopf)algebra}. See \cite[Section 5]{norledge2020species} for more details and proofs.
Thus, an up derivation $u$ of $\Sig$ is a morphism of species
\[
u:\Sig\to \Sig',
\qquad
\tH_F\mapsto u(\tH_F)
\qquad \quad \text{such that} \qquad
u(\tH_F \tH_G )=
u(\tH_F)\tH_G+\tH_F u(\tH_G)
.\]
An up derivation of $\Sig$ is determined by its values on the elements $\tH_{(I)}$, $I\in \sfS$, since then
\[
u(\tH_F)= u(\tH_{(S_1)})\tH_{(S_2)}\dots \tH_{(S_k)}
+\ \, \cdots\ \, +
\tH_{(S_1)} \dots \tH_{(S_{k-1})} u(\tH_{(S_k)})
.\]
An up derivation must have $u(\tH_{(\, )})=0$, since $\mathtt{1}_{\Sig}= \tH_{(\, )}$. An up coderivation $u$ of $\Sig$ is a morphism of species
\[
u:\Sig\to \Sig', \qquad \tH_F\mapsto u(\tH_F)
\qquad \quad \text{such that} \qquad
\Delta_{\ast S,T} \big( u( \tH_F )\big)=
u(\tH_{F|_S}) \otimes \tH_{F|_T}
.\]
In particular, an up coderivation must have
\[
\Delta_{\ast S,T}\big(u( \tH_{(I)} )\big)=
u(\tH_{(S)}) \otimes \tH_{(T)}
.\]
Therefore, an up biderivation $u$ of $\Sig$ must have
\[
u(\tH_{(i)})= a_1 \tH_{(\ast,i)}+a_2 \tH_{(\ast i)}+a_3 \tH_{(i, \ast)}
\qquad \quad \text{where} \qquad
a_1+a_2+a_3=0\in \bC
.\]
Motivated by this, given $a,b\in \bC$, we define an up derivation $u_{a,b}$ of $\Sig$ by
\begin{equation}\label{eq:defbider}
u_{a,b}:\Sig\to \Sig'
,\qquad
u_{a,b}(\tH_{(I)}):
= -a \tH_{(\ast,I)}+(a+b) \tH_{(\ast I)}-b \tH_{(I, \ast)}
.
\end{equation}
Towards an explicit description, consider the following example for $I=\{1,2,3\}$,
\begin{align*}
u_{a,b}(\tH_{(12,3)})
&
=\, u_{a,b}(\tH_{(12)})\tH_{(3)}+ \tH_{(12)}u_{a,b}(\tH_{(3)})\\
&
=(-a\tH_{(\ast, 12)}+(a+b)\tH_{(\ast 12)}-b\tH_{(12,\ast)})\tH_{(3)}+
\tH_{(12)}(-a\tH_{(\ast, 3)}+(a+b)\tH_{(\ast 3)}-b\tH_{(3,\ast)})\\
&
=-a\tH_{(\ast, 12,3)}+(a+b)\tH_{(\ast 12,3)}-b\tH_{(12,\ast,3)}
-a\tH_{(12,\ast, 3)}+(a+b)\tH_{(12,\ast 3)}-b\tH_{(12,3,\ast)}
.
\end{align*}
From this, we see that in general
\[
u_{a,b}(\tH_F)
=
\sum_{1\leq m\leq k}
-a\mathtt{H}_{(S_1,\dots ,\ast, S_m,\dots,S_k)}
+(a+b)\mathtt{H}_{(S_1,\dots, \ast S_m,\dots,S_k)}
-b\mathtt{H}_{(S_1,\dots, S_m,\ast,\dots,S_k)}
.\]
\begin{thm}\label{steinmannarrowaredercoder}
Given $a,b\in \bC$, the morphism of species
\[
\Sig\to \Sig',
\qquad
\tH_F \mapsto u_{a,b}(\tH_{F})
\]
is an up biderivation of $\Sig$ (it follows that this gives $\Sig$ the structure of a Hopf $\textbf{L}$-algebra).
\end{thm}
\begin{proof}
In the following, for $F=(S_1,\dots,S_k)$ a composition of $I$ and $S\subseteq I$, we write
\[
(U_1,\dots,U_k):=(S_1\cap S, \dots ,S_k\cap S)
.\]
In general, $(U_1,\dots,U_k)$ is a decomposition of $S$.
First, $u_{a,b}$ defines a derivation of $\Sig$ by construction. To see that $u_{a,b}$ also defines a coderivation, we have
\begin{align*}
\Delta_{\ast S,T} \big( u_{a,b}( \tH_F )\big)\ =\
&
\ \ \ \ \Delta_{\ast S,T} \Bigg( \sum_{1\leq m\leq k}
-a\mathtt{H}_{(S_1,\dots,\ast, S_m,\dots,S_k)}
+(a+b)\mathtt{H}_{(S_1,\dots, \ast S_m,\dots,S_k)}
-b\mathtt{H}_{(S_1,\dots, S_m,\ast,\dots,S_k)}\Bigg )\\[7pt]
=\
&
\ \ \ \
\Bigg(\sum_{1\leq m\leq k}-a\mathtt{H}_{(U_1,\dots,\ast, U_m,\dots,U_k)_+}+
(a+b)\mathtt{H}_{(U_1,\dots, \ast U_m,\dots,U_k)_+}
-b\tH_{(U_1,\dots, U_m,\ast,\dots,U_k)_+}\Bigg)\otimes \tH_{F|_T}\\[7pt]
=\
&
\ \ \ \
\Bigg(\sum_{\substack{1\leq m\leq k \\[2pt] U_m \neq \emptyset}}-a\mathtt{H}_{(U_1,\dots,\ast, U_m,\dots,U_k)_+}+
(a+b)\mathtt{H}_{(U_1,\dots, \ast U_m,\dots,U_k)_+}
-b\tH_{(U_1,\dots, U_m,\ast,\dots,U_k)_+}\Bigg)\otimes \tH_{F|_T}\\
&+\ \underbrace{
\Bigg(\sum_{\substack{1\leq m\leq k\\[2pt]U_m=\emptyset}}\big (-a+(a+b)-b\big)\, \mathtt{H}_{(U_1,\dots,U_{m-1},\ast,U_{m+1},\dots,U_k)_+}\Bigg)}_{=0} \otimes\, \tH_{F|_T}\\[7pt]
=\
&
\ \ \ \ u_{a,b}(\tH_{F|_S})\otimes \tH_{F|_T}.
\end{align*}
Therefore $u_{a,b}$ is an up biderivation of $\Sig$.
\end{proof}
\subsection{The Steinmann Arrows} \label{sec:The Steinmann Arrows}
We now recall the Steinmann arrows for $\Sig$, whose precise definition is due to Epstein-Glaser-Stora \cite[p.82-83]{epstein1976general}. The Steinmann arrows were first considered by Steinmann in settings where $\Sig$ is represented as operator-valued distributions \cite[Section 3]{steinmann1960}.
Let the \emph{retarded Steinmann arrow} be the up biderivation of $\Sig$ given by
\begin{equation}\label{steindown}
\ast\downarrow(-) :\Sig\to \Sig',
\qquad
\ast\downarrow \tH_F:= u_{1,0}(\tH_F)=
\sum_{1\leq m\leq k}
-\mathtt{H}_{(S_1,\dots,\ast, S_m,\dots,S_k)}
+\mathtt{H}_{(S_1,\dots, \ast S_m,\dots, S_k)}
.
\end{equation}
Let the \emph{advanced Steinmann arrow} be the up biderivation of $\Sig$ given by
\begin{equation}\label{steinup}
\ast\uparrow(-) : \Sig\to \Sig' ,
\qquad
\ast\uparrow \tH_F:= u_{0,1}(\tH_F)=
\sum_{1\leq m\leq k}
\mathtt{H}_{(S_1,\dots, \ast S_m,\dots, S_k)}
-\mathtt{H}_{(S_1, \dots, S_m,\ast, \dots,S_k)}
.
\end{equation}
We use this arrow notation from now on instead of `$u$' in order to match the physics literature. In particular
\[
\ast \downarrow \tH_{(I)}=-\tH_{(\ast,I)} + \tH_{(\ast I)}
\qquad \text{and} \qquad
\ast \uparrow \tH_{(I)}=\tH_{(\ast I)} -\tH_{(I,\ast )}
.\]
We have
\[
\ast \uparrow \tH_{F}\, -\, \ast \downarrow \tH_{F} = u_{-1,1}(\tH_F) = [ \tH_{(\ast)} , \tH_{F}]
.\]
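On a generator $\tH_{(I)}$, this identity reads
\[
\ast \uparrow \tH_{(I)}\, -\, \ast \downarrow \tH_{(I)}
=\big(\tH_{(\ast I)} -\tH_{(I,\ast )}\big)-\big(\tH_{(\ast I)}-\tH_{(\ast,I)}\big)
=\tH_{(\ast,I)}-\tH_{(I,\ast)}
=[ \tH_{(\ast)} , \tH_{(I)}]
.\]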
This identity appears often in the physics literature for operator-valued distributions, e.g. \cite[Equation 13]{steinmann1960}, \cite[Equation 83]{ep73roleofloc}. The biderivation $u_{-1,1}$ gives $\Sig$ the structure of a Hopf $\textbf{L}$-algebra. This $\textbf{L}$-action is the restriction of the adjoint representation of $\Sig$. Notice the Steinmann arrows are commutative up operators. By \cite[Proposition 5.4]{norledge2020species}, we can restrict them to obtain up derivations of $\Zie$,
\[
\ast\downarrow(-): \Zie\to \Zie',
\qquad
\mathtt{D}_\cS \mapsto \ast \downarrow \mathtt{D}_\cS
\qquad \text{and} \qquad
\ast\uparrow(-):\Zie\to \Zie',
\qquad
\mathtt{D}_\cS \mapsto \ast \uparrow \mathtt{D}_\cS
.\]
Following \cite[Section 5]{norledge2020species}, the Steinmann arrows equip $\Sig$ with the structure of a Hopf \hbox{$\textbf{E}$-algebra} (and $\Zie$ with the structure of a Lie $\textbf{E}$-algebra) in two ways. The details are as follows. First, $\textbf{E}$ is the \emph{exponential species}, given by
\[
\textbf{E}[I]:=\bC \qquad \text{for all} \quad I\in \textsf{S}
.\]
We denote $\tH_I:=1_\bC\in \textbf{E}[I]$. The exponential species is an algebra in species when equipped with the trivial multiplication
\[
\mu_{S,T}: \textbf{E}[S] \otimes \textbf{E}[T] = \bC \otimes \bC \xrightarrow{\sim} \bC = \textbf{E}[I]
,\qquad
\tH_S \otimes \tH_T \mapsto \tH_I
.\]
We have the following $\textbf{E}$-modules induced by the Steinmann arrows, as defined in \cite[Equation 23]{norledge2020species},
\[
\textbf{E}\bigcdot \Sig \to \Sig,
\qquad
\tH_Y\otimes \mathtt{a}\, \mapsto \,
Y\! \downarrow \mathtt{a}:=
\underbrace{y_r \downarrow
\circ\cdots \circ
y_1 \downarrow}_{\text{independent of the order}}(\mathtt{a})
\]
and
\[
\textbf{E}\bigcdot \Sig \to \Sig,
\qquad
\tH_Y\otimes \mathtt{a}\, \mapsto \,
Y\! \uparrow \mathtt{a}:=
\underbrace{y_r \uparrow
\circ\cdots \circ
y_1 \uparrow}_{\text{independent of the order}}(\mathtt{a})
\]
where $Y=\{y_1,\dots,y_r\}$ as usual. In particular, $Y\downarrow(-)$ and $Y\uparrow(-)$ are the Steinmann arrow raising operators obtained from iterating the Steinmann arrow up operators $\ast \downarrow(-)$ and $\ast\uparrow(-)$, as mentioned in \autoref{Derivations and Coderivations of Sig}. For example, the retarded arrow $Y\downarrow(-)$ consists of a linear map of the form
\[
\Sig[I] \to \Sig[Y\sqcup I]
\]
for each choice of finite set $I$. For $Y=[r]:=\{\ast_1 , \dots ,\ast_r \}$, we abbreviate
\[
\downarrow(-) :=\ast\downarrow(-), \qquad \downarrow\downarrow(-) :=\{ \ast_1, \ast_2 \} \downarrow(-),\quad \dots
\]
and similarly for the advanced arrow. Since the arrows are derivations, they respect the multiplication of $\Sig$, and since the arrows are coderivations, they respect the comultiplication of $\Sig$. It follows that these $\textbf{E}$-actions give $\Sig$ the structure of a Hopf monoid internal to $\textbf{E}$-modules.
By inspecting the definitions, we see that
\begin{equation}\label{eq:retardadvan}
Y\! \downarrow \tH_{(I)}=
\mathtt{R}_{(Y;I)}:=\! \sum_{Y_1\sqcup Y_2=Y}\overline{\tH}_{(Y_1)} \tH_{(Y_2\sqcup I)}
\qquad \text{and} \qquad
Y\! \uparrow \tH_{(I)}=
\mathtt{A}_{(Y;I)}:=\! \sum_{Y_1\sqcup Y_2=Y} \tH_{(Y_1\sqcup I)} \overline{\tH}_{(Y_2)}
.
\end{equation}
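For example, for $Y=\{\ast\}$, recalling that $\overline{\tH}_{(\,)}=\tH_{(\,)}$ and $\overline{\tH}_{(\ast)}=-\tH_{(\ast)}$, this recovers \textcolor{blue}{(\refeqq{steindown})} and \textcolor{blue}{(\refeqq{steinup})} on the generators, e.g.
\[
\mathtt{R}_{(\ast;I)}
=\overline{\tH}_{(\,)}\, \tH_{(\ast I)}+\overline{\tH}_{(\ast)}\, \tH_{(I)}
=\tH_{(\ast I)}-\tH_{(\ast, I)}
=\ast\downarrow \tH_{(I)}
.\]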
It follows that
\[
Y\! \downarrow \tH_F
=
\sum_{Y_1 \sqcup\dots\sqcup Y_k =Y} \mathtt{R}_{(Y_1;S_1)}\dots \mathtt{R}_{(Y_{k};S_{k})}
\qquad \text{and} \qquad
Y\! \uparrow \tH_F
=
\sum_{Y_1 \sqcup\dots\sqcup Y_k =Y} \mathtt{A}_{(Y_1;S_1)}\dots \mathtt{A}_{(Y_{k};S_{k})}
.\]
The sums are over all decompositions $(Y_1,\dots, Y_k)$ of $Y$ of length $l(F)$. We call \hbox{$\mathtt{R}_{(Y;I)}, \mathtt{A}_{(Y;I)}\in \Sig[Y\sqcup I]$} the \emph{retarded} and \emph{advanced} elements respectively. The \emph{total retarded} and \emph{total advanced} elements are given by
\[
Y\! \downarrow \tH_{(i)}=
\mathtt{R}_{(Y;i)} =\sum_{Y_1\sqcup Y_2=Y}\overline{\tH}_{(Y_1)}\, \tH_{(Y_2 i)}
\qquad\text{and}\qquad
Y\! \uparrow \tH_{(i)}=
\mathtt{A}_{(Y;i)}=\sum_{Y_1\sqcup Y_2=Y} \tH_{(Y_2 i )}\, \overline{\tH}_{(Y_1)}
\]
respectively.
\begin{remark}\label{rem:double}
If we put $I=J\sqcup \{i\}$, then we have
\[
\mathtt{R}_{(J;i)}=
\sum_{\substack{S\sqcup T=I\\ i\in T}}\overline{\tH}_{(S)}\, \tH_{(T)}
=-\sum_{\substack{F\in \Sigma[I]\\ i\in S_k}} (-1)^{l(F)} \tH_F
=\mathtt{D}_i
\]
and
\[
\mathtt{A}_{(J;i)}=
\sum_{\substack{S\sqcup T=I\\ i\in T}} \tH_{(T)}\, \overline{\tH}_{(S)}=
-\sum_{\substack{F\in \Sigma[I]\\ i\in S_1}} (-1)^{l(F)} \tH_F=
\mathtt{D}_{\bar{i}}
.\]
\end{remark}
\subsection{Currying the Steinmann Arrows}\label{Coalgebras}
Given a species $\textbf{p}$, we let $\textbf{p}^\textbf{E}$ denote the species given by
\[
\textbf{p}^{\textbf{E}}[I] := \prod_{r=0}^\infty\big (\textbf{p}^{[r]}[I]\big )^{ \sfS_r }
.\]
Here, $\textbf{p}^{[r]}$ is the $Y$-derivative of $\textbf{p}$ for $Y=[r]$, and $(-)^{ \sfS_r }$ denotes the subspace of $\sfS_r$-invariants, where $\sfS_r$ is the symmetric group on $[r]$. We denote elements of $\textbf{p}^{\textbf{E}}[I]$ using formal power series notation
\[
\sum_{r=0}^\infty \mathtt{x}_r, \qquad \mathtt{x}_r\in \textbf{p}^{[r]}[I]
.\]
Explicitly, $\mathtt{x}_r$ is an element of the vector space $\textbf{p} [ \{ \ast_1, \dots, \ast_r \} \sqcup I]$ which is invariant under the action of permuting $\{ \ast_1, \dots, \ast_r \}$ and leaving $I$ fixed.
The mapping $\textbf{p}\mapsto \textbf{p}^{\textbf{E}}$ extends to an endofunctor on species. In particular, given a morphism of species $\eta:\textbf{p} \to \textbf{q}$, we have the morphism $\eta^{\textbf{E}}$ given by
\begin{equation}\label{eq:endo}
\eta^{\textbf{E}} : \textbf{p}^{\textbf{E}} \to \textbf{q}^{\textbf{E}}
,\qquad
\sum_{r=0}^\infty \mathtt{x}_r\mapsto \sum_{r=0}^\infty \eta_{[r]\sqcup I}(\mathtt{x}_r)
.
\end{equation}
A \emph{series} of a species $\textbf{p}$ is a morphism of species of the form $s:\textbf{E}\to \textbf{p}$. Notice the elements of $\textbf{p}^{\textbf{E}}[I]$ are naturally series of the species $Y \mapsto \textbf{p}^{[Y]}[I]$. See \cite[Section 3.2]{norledge2020species} for more details. For the connection between $\textbf{p}^\textbf{E}$ and the internal hom for the Cauchy product, see \cite[Section 2.3]{norledge2020species}.
If $\textbf{a}$ is an algebra in species, then so is $\textbf{a}^\textbf{E}$, see \cite[Equation 12]{norledge2020species}. In particular, $\Sig^\textbf{E}$ is an algebra, with multiplication given by
\[
\sum_{r=0}^\infty \mathtt{x}_r \otimes \sum_{r=0}^\infty \mathtt{y}_r \ \mapsto \
\sum_{r=0}^\infty \sum_{r_1 + r_2 =r}
\dfrac{r!}{r_1 !\, r_2 !}
\mu_{[r_1] \sqcup S, [r_2] \sqcup T}(\mathtt{x}_{r_1} \otimes \mathtt{y}_{r_2})
.\]
\begin{thm} \label{coalgahomo}
We have the following homomorphisms of algebras in species,
\[
\Sig\to \Sig^{\textbf{E}},
\qquad
\tH_F\mapsto
\sum_{r=0}^\infty\, \sum_{Y_1 \sqcup\dots\sqcup Y_k =[r]}
\mathtt{R}_{(Y_1;S_1)} \dots \mathtt{R}_{(Y_{k};S_{k})}
\]
and
\[
\Sig\to \Sig^{\textbf{E}},
\qquad
\tH_F\mapsto
\sum_{r=0}^\infty\, \sum_{Y_1 \sqcup\dots\sqcup Y_k =[r]}
\mathtt{A}_{(Y_1;S_1)} \dots \mathtt{A}_{(Y_{k};S_{k})}
.\]
\end{thm}
\begin{proof}
The Steinmann arrows are commutative up biderivations of $\Sig$, and so give $\Sig$ the structure of a Hopf $\textbf{E}$-algebra. This result is then a special case of \cite[Theorem 5.1]{norledge2020species}.
\end{proof}
The homomorphisms of \autoref{coalgahomo} are the unique extensions of the maps
\[
\tH_{(I)}\mapsto \sum_{r=0}^\infty \mathtt{R}_{([r];I)}
\qquad \text{and} \qquad
\tH_{(I)}\mapsto \sum_{r=0}^\infty \mathtt{A}_{([r];I)}
\]
to homomorphisms. In the application to causal perturbation theory, we shall be interested in the decorated analog of these homomorphisms, see \autoref{sec:Perturbation of T-Products by Steinmann Arrows}.
\begin{remark}
These homomorphisms $\Sig\to \Sig^\textbf{E}$ come from currying the $\textbf{E}$-actions of the Steinmann arrows. See \cite[Section 5.1]{norledge2020species} for details.
\end{remark}
\subsection{The Steinmann Arrows and Dynkin Elements} \label{sec:The Steinmann Arrows and Dynkin Elements}
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{steinarrow}
\caption{Schematic for the action of the retarded Steinmann arrow $\ast \downarrow$ for $I=\{1,2,3\}$ on the Steinmann sphere (left) and the tropical geometric realization of $\boldsymbol{\Sigma}$ (right, see \cite[Introduction]{norledge2019hopf}).}
\label{fig:steinarrow}
\end{figure}
We now show that the restrictions of the Steinmann arrows to $\Zie$, which are derivations of its Lie bracket, have an interesting description in terms of cells, i.e. chambers of the adjoint braid arrangement.
Following \cite[Section 2]{epstein2016}, we define the commutative up operators
\[
\ast \downarrow(-):\textbf{L}^\vee\to {\textbf{L}^\vee}',
\qquad
\ast \downarrow \cS:=\big \{(\ast S,T),(S,\ast T),(I,\ast):(S,T)\in \cS\big\}
\]
and
\[
\ast\uparrow(-):\textbf{L}^\vee\to {\textbf{L}^\vee}',
\qquad
\ast\uparrow \cS:=\big \{(\ast S,T),(S,\ast T),(\ast,I):(S,T)\in \cS\big\}
.\]
These are indeed well-defined; $\ast \downarrow \cS$ corresponds to the adjoint braid arrangement chamber on the $I$ side of the hyperplane $\lambda_{\ast,I}=0$ which has the face of $\cS$ as a facet, and $\ast\uparrow \cS$ corresponds to the chamber on the $\ast$ side of the hyperplane $\lambda_{\ast,I}=0$ which has the face of $\cS$ as a facet. See around \cite[Remark 2.2]{lno2019} for more details. Thus, it follows from \autoref{prop:ruelle} (Ruelle's identity) that
\[
[ \tH_{(\ast)}, \mathtt{D}_\cS]= \mathtt{D}_{\ast \uparrow \cS} - \mathtt{D}_{\ast \downarrow \cS}
.\]
The induced $\textbf{E}$-modules are given by
\[
\textbf{E}\bigcdot \textbf{L}^\vee \to \textbf{L}^\vee
,\qquad
\tH_Y\otimes \cS \mapsto Y\downarrow \cS:=
\big\{
( Y_1\sqcup S, Y_2\sqcup T )\in [Y \sqcup I;\text{2}] : (S,T)\in \cS\ \text{or}\ S=I
\big\}
\]
and
\[
\textbf{E}\bigcdot \textbf{L}^\vee \to \textbf{L}^\vee
,\qquad
\tH_Y\otimes \cS \mapsto Y\uparrow \cS:=
\big\{
( Y_1\sqcup S, Y_2\sqcup T )\in [Y \sqcup I;\text{2}] : (S,T)\in \cS\ \text{or}\ T=I
\big\}
.\]
\begin{prop}\label{adjointinterp}
Given a cell $\cS$ over $I$, we have
\[
Y \downarrow\mathtt{D}_\cS= \mathtt{D}_{Y \downarrow \cS}
\qquad \text{and} \qquad
Y \uparrow\mathtt{D}_\cS= \mathtt{D}_{Y \uparrow \cS}
.\]
\end{prop}
\begin{proof}
We consider the retarded case $Y\downarrow\mathtt{D}_\cS= \mathtt{D}_{Y \downarrow \cS}$ only, since the advanced case then follows similarly. It is sufficient to consider the case $Y=\{\ast\}$. We have
\[
\downarrow\mathtt{D}_\cS
=
-\sum_{ \bar{F} \subseteq \cS } (-1)^{l(F)} \downarrow\tH_{F}
\qquad \text{and} \qquad
\mathtt{D}_{\downarrow \cS}= -\sum_{\bar{F} \subseteq \, \downarrow\cS } (-1)^{l(F)}\, \tH_{ F }
.\]
So, the result follows if we have the following equality
\[
\sum_{ \bar{F} \subseteq \cS } (-1)^{l(F)} \sum_{1\leq m\leq k}
- \mathtt{H}_{(S_1, \dots,\ast, S_m, \dots,S_k)}
+\mathtt{H}_{(S_1,\dots, \ast S_m,\dots, S_k)}
\overset{\mathrm{?}}{=}
\sum_{\bar{G} \subseteq \, \downarrow\cS } (-1)^{l(G)}\, \tH_{G}
.\]
Indeed, notice that the $\tH$-basis elements $\tH_G\in \Sig[\ast I]$ which appear on the LHS are exactly those such that
\[\bar{G}\subseteq \downarrow\cS.\]
Notice also that each $\tH_G$ appears with total sign $(-1)^{l(G)}$, since when $\ast$ is inserted as a singleton lump, thus increasing $l(G)$ by one, it appears also with a negative sign.
\end{proof}
\begin{remark}
This interpretation of the $\textbf{E}$-module structure of $\Sig$ restricted to the primitive part $\Zie=\mathcal{P}(\Sig)$ in terms of the adjoint braid arrangement suggests obvious generalizations of the Steinmann arrows in the direction of \cite{aguiar2017topics}, \cite{aguiar2020bimonoids}, since the generalization of Hopf monoids there is via hyperplane arrangements.
\end{remark}
\begin{cor}
We have the following homomorphisms of Lie algebras in species,
\[
\Zie\to \Zie^{\textbf{E}},
\qquad
\mathtt{D}_\cS\mapsto \mathtt{D}_{ (-) \downarrow \cS }
=
\sum_{r=0}^\infty \mathtt{D}_{ [r] \downarrow \cS }
= \mathtt{D}_\cS+ \mathtt{D}_{\downarrow \cS}+ \mathtt{D}_{ \downarrow\downarrow \cS}+ \cdots
\]
and
\[
\Zie\to \Zie^{\textbf{E}},
\qquad
\mathtt{D}_\cS\mapsto \mathtt{D}_{ (-) \uparrow \cS }
=
\sum_{r=0}^\infty \mathtt{D}_{ [r]\uparrow \cS }
= \mathtt{D}_\cS+ \mathtt{D}_{\uparrow \cS}+ \mathtt{D}_{\uparrow \uparrow \cS}+ \cdots
.\]
\end{cor}
\begin{proof}
The Steinmann arrows are commuting up biderivations of $\Zie$, and so give $\Zie$ the structure of a Lie $\textbf{E}$-algebra. This result is then a special case of \cite[Theorem 5.1]{norledge2020species}.
\end{proof}
\section{Products and Series}
We now recall several basic constructions of causal perturbation theory in the current, clean, abstract setting. We do this without yet imposing causal factorization/causal additivity. We say e.g. `$\text{T}$-product' and `$\text{R}$-product' for now, and then change to `time-ordered product' and `retarded product' in the presence of causal factorization.
\subsection{T-Products, Generalized T-Products, and Generalized R-Products} \label{sec:T-Products, Generalized T-Products, and Generalized R-Products}
Let $V$ be a vector space over $\bC$. Let $\cA$ be a $\bC$-algebra with multiplication denoted by $\star$. Let $\textbf{U}_\cA$ be the algebra in species given by
\[
\textbf{U}_\cA[I]:= \cA
.\]
The action of bijections is trivial, and the multiplication is the multiplication of $\cA$.
The \emph{positive exponential species} $\textbf{E}^\ast_+$ is given by
\[
\textbf{E}^\ast_+[I]:= \bC \qquad \text{if} \quad I\neq \emptyset \qquad \text{and} \qquad \textbf{E}^\ast_+[\emptyset] =0
.\]
Let a \emph{system of} \hbox{$\text{T}$\emph{-products}} $\text{T}$ be a system of products for the positive exponential species $\textbf{E}^\ast_+$, as defined in \cite[Section 6.2]{norledge2020species}. This means $\text{T}$ is a morphism of species of the form\footnote{\ recall the definition and notation for $\textbf{E}_V$ from \autoref{Decorations}}
\[
\text{T}: \textbf{E}^\ast_+ \otimes \textbf{E}_V \to \textbf{U}_\cA
,\qquad
\tH_{(I)} \otimes \ssA_I \mapsto
\text{T}_I(\tH_{(I)} \otimes \ssA_I)
\]
where recall $\textbf{E}^\ast_+ \otimes \textbf{E}_V$ is the Hadamard product of species, given by
\[
\textbf{E}^\ast_+ \otimes \textbf{E}_V [I] := \textbf{E}^\ast_+[I] \otimes \textbf{E}_V[I]
.\]
Thus, if $I\neq \emptyset$, we have
\[
\textbf{E}^\ast_+ \otimes \textbf{E}_V [I] \cong V^{\otimes I}
.\]
We abbreviate
\begin{equation}\label{eq:abb}
\text{T}_I(\ssA_I):=\text{T}_I(\tH_{(I)} \otimes \ssA_I)
.
\end{equation}
Let $\cH(\textbf{E}_V, \textbf{U}_\cA)$ denote the species of linear maps between components, given by
\[
\cH(\textbf{E}_V, \textbf{U}_\cA)[I] := \Hom_{\textsf{Vec}}\! \big ( \textbf{E}_V[I], \textbf{U}_\cA[I] \big ) = \Hom_{\textsf{Vec}}\! \big ( V^{\otimes I}, \cA \big )
.\]
We have that $\cH(-, -)$ is the hom for the Hadamard product. Therefore we can curry $\text{T}$ to give the morphism of species
\[
\textbf{E}^\ast_+\to \cH(\textbf{E}_V, \textbf{U}_\cA), \qquad \tH_{(I)}\mapsto \text{T}(I)
\]
where $\text{T}(I)$ is the linear map
\[
\text{T}(I): V^{\otimes I} \to \cA
,\qquad
\ssA_I \mapsto \text{T}_I(\ssA_I)
.\]
The linear maps $\text{T}(I)$ are called $\text{T}$\emph{-products}. Notice that $\text{T}$-products are commutative in the sense that
\[
\text{T}_I\big( \textbf{E}_V[\sigma](\ssA_I)\big)=\text{T}_I(\ssA_I) \qquad \quad \text{for all bijections}\quad \sigma:I\to I
.\]
This property holds because the system $\text{T}$ is a morphism of species, and bijections act trivially for $\textbf{U}_\cA$. This commutativity exists despite the fact that the algebra $\cA$ is noncommutative in general.
\begin{remark}
In applications to QFT, we shall also have a causal structure on $V$. Then $\text{T}$ is meant to first order the vectors of $\ssA_I$ according to the causal structure, and then multiply in $\cA$, giving rise to this commutativity.
\end{remark}
Let the \emph{system of generalized} $\text{T}$\emph{-products} associated to a system of $\text{T}$-products be the unique extension to a system of products for $\Sig=\textbf{L}\boldsymbol{\circ}\textbf{E}_+^\ast$ which is a homomorphism, as defined in \cite[Section 6.2]{norledge2020species}. Thus
\[
\text{T}: \Sig\otimes \textbf{E}_V \to \textbf{U}_\cA
,\qquad
\tH_F\otimes \ssA_I\mapsto \text{T}_I(\tH_F\otimes \ssA_I):=
\text{T}_{S_1}(\ssA_{S_1}) \star \dots \star \text{T}_{S_k}(\ssA_{S_k})
.\]
The currying of $\text{T}$ is denoted by
\[
\Sig\to \cH(\textbf{E}_V, \textbf{U}_\cA), \qquad \tH_{F}\mapsto \text{T}(S_1)\dots \text{T}(S_k)
.\]
The linear maps
\[
\text{T}(S_1)\dots \text{T}(S_k):V^{\otimes I} \to \cA
,\qquad
\ssA_I\mapsto \text{T}_I(\tH_F\otimes \ssA_I)
\]
are called \emph{generalized} $\text{T}$-\emph{products}. Let the \emph{system of generalized} $\text{R}$\emph{-products} associated to a system of $\text{T}$-products be the restriction to the Lie algebra of primitive elements $\Zie$,
\[
\text{R}: \Zie \otimes \textbf{E}_V \to \textbf{U}_\cA, \qquad
\mathtt{D}_\cS\otimes \ssA_I\mapsto
\text{R}_I(\mathtt{D}_\cS \otimes \ssA_I):=\text{T}_I(\mathtt{D}_\cS \otimes \ssA_I)
.\]
This is a morphism of Lie algebras, where $\textbf{U}_\cA$ is equipped with the commutator bracket. The currying of $\text{R}$ is denoted by
\[
\Zie \to \cH(\textbf{E}_V, \textbf{U}_\cA), \qquad \mathtt{D}_\cS\mapsto \text{R}_\cS
.\]
The linear maps
\[
\text{R}_\cS: \textbf{E}_V[I] \to \cA
,\qquad
\ssA_I\mapsto \text{R}_I(\mathtt{D}_\cS \otimes \ssA_I)
\]
are called \emph{generalized} $\text{R}$-\emph{products}. From the expansion \textcolor{blue}{(\refeqq{eq:hbasisexp})} of Dynkin elements $\mathtt{D}_\cS$ in terms of the $\tH$-basis, we recover \cite[Equation 79]{ep73roleofloc},
\[
\text{R}_\cS=-\sum_{\bar{F}\subseteq \cS} (-1)^{k}\, \text{T}(S_1)\dots \text{T}(S_k)
.\]
\begin{remark}
Consider a system of products of the form
\[
\text{Z}:\textbf{E}^\ast_+\otimes {\textbf{E}_{V}}
\to
\textbf{U}_{V}
,\qquad
\tH_{(I)}\otimes \ssA_I\mapsto \text{Z}_I(\ssA_I)
.\]
Then we obtain a new $\text{T}$-product $\text{T}'$, given by
\[
\text{T}':
\textbf{E}^\ast_+\otimes \textbf{E}_V \to \textbf{U}_\cA
,\qquad
\text{T}'_I(\ssA_I)
:=
\sum_{P}\text{T}_P
\big(
\text{Z}_{S_1} (\ssA_{S_1})\dots \text{Z}_{S_k}(\ssA_{S_k})
\big)
.\]
The sum is over all partitions $P=\{S_1,\dots,S_k\}$ of $I$. This construction underlies renormalization in pAQFT \cite[Section 3.6.2]{dutsch2019perturbative}, which deals with the remaining ambiguity of $\text{T}$-products after imposing causal factorization, and perhaps other renormalization conditions.
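For instance, for $I=\{i,j\}$ the sum runs over the two partitions $\big\{\{i,j\}\big\}$ and $\big\{\{i\},\{j\}\big\}$, so that
\[
\text{T}'_{\{i,j\}}(\ssA_{\{i,j\}})
=
\text{T}_{\{\{i,j\}\}}\big( \text{Z}_{\{i,j\}}(\ssA_{\{i,j\}}) \big)
+
\text{T}_{\{\{i\},\{j\}\}}\big( \text{Z}_{\{i\}}(\ssA_{i})\, \text{Z}_{\{j\}}(\ssA_{j}) \big)
.\]
In particular, modifying $\text{Z}$ at order $n$ only affects $\text{T}'$ at orders $\geq n$.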
\end{remark}
\subsection{Reverse T-Products}\label{sec:rev T-Exponentials}
The system of \emph{reverse generalized} $\text{T}$\emph{-products} $\overline{\text{T}}$ of a system of generalized $\text{T}$-products is given by precomposing $\text{T}$ with the antipode \textcolor{blue}{(\refeqq{eq:antipode})} of $\Sig\otimes \textbf{E}_V$, thus
\[
\overline{\text{T}}:\Sig \otimes {\textbf{E}}_V\to \textbf{U}_{\cA^{\op}},
\qquad
\overline{\text{T}}_I(\tH_{F}\otimes \ssA_I)
:=
{\text{T}}_I\big(\overline{\tH}_{F}\otimes \ssA_I\big)
.\]
Since the antipode is a homomorphism $\Sig\otimes \textbf{E}_V\to (\Sig\otimes \textbf{E}_V)^{\op, \text{cop}}$ \cite[Proposition 1.22 (iii)]{aguiar2010monoidal}, this is a system of generalized $\text{T}$-products into the opposite algebra $\textbf{U}_{\cA^{\op}}$. The image of $\tH_{(I)}$ under the currying of $\overline{\text{T}}$ is called the \emph{reverse} $\text{T}$\emph{-product}
\[
\overline{\text{T}}(I): \textbf{E}_V[I] \to \cA^{\text{op}}
.\]
From \textcolor{blue}{(\refeqq{antipode})}, we obtain
\[
\overline{\text{T}}(I)
=
\sum_{F\in \Sigma[I]} (-1)^{k}\, \text{T}(S_1)\dots \text{T}(S_k)
.\]
Note that reverse $\text{T}$-products in \cite[Equation 11]{ep73roleofloc} are defined to be $(-1)^n\, \overline{\text{T}}(I)$. Our definition agrees with \cite[Definition 15.35]{perturbative_quantum_field_theory}.
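For instance, for $I=\{i\}$ and $I=\{i,j\}$ this expansion reads
\[
\overline{\text{T}}(\{i\})=-\text{T}(\{i\})
\qquad \text{and} \qquad
\overline{\text{T}}(\{i,j\})=\text{T}(\{i\})\, \text{T}(\{j\})+\text{T}(\{j\})\, \text{T}(\{i\})-\text{T}(\{i,j\})
,\]
since $\{i,j\}$ has one composition with a single lump, contributing with sign $(-1)^1$, and two compositions with two lumps, each contributing with sign $(-1)^2$.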
\subsection{T-Exponentials} \label{sec:T-Exponentials}
For details on series in species, see \cite[Section 12]{aguiar2010monoidal}. The (scaled) \emph{universal series} $\mathtt{G}(c)$ is the group-like series of $\Sig$ given by
\[
\mathtt{G}(c): \textbf{E} \to \Sig
,\qquad
\tH_{I}\mapsto \mathtt{G}(c)_{I}:= c^n\, \tH_{(I)}
\qquad \quad
\text{for} \quad
c\in \bC
.\]
The fundamental nature of this series is described in \cite[Section 13.6]{aguiar2013hopf}. The series $\text{s} \circ \mathtt{G}(c)$, which is the composition of $\mathtt{G}(c)$ with the antipode $\text{s}$ of $\Sig$, is given by
\begin{equation}\label{inverseuni}
\text{s} \circ \mathtt{G}(c): \textbf{E} \to \Sig
,\qquad
\tH_I\mapsto \big (\text{s} \circ \mathtt{G}(c)\big )_I= c^n\, \overline{\tH}_{(I)}
.
\end{equation}
Let $\cA[[\formj]]$ denote the $\bC$-algebra of formal power series in the formal symbol $\formj$ with coefficients in $\cA$. Given a system of generalized $\text{T}$-products
\[
\text{T}:\Sig \otimes {\textbf{E}}_V\to \textbf{U}_\cA
\]
let the $\text{T}$-\emph{exponential} $\mathcal{S}:=\mathcal{S}_{\mathtt{G}(c)}$ of this system be the $\cA[[\formj]]$-valued function on the vector space $V$ associated to the series $\mathtt{G}(c)$, as constructed in \cite[Section 6.3]{norledge2020species}. Thus, we have\footnote{\ we use the abbreviations \textcolor{blue}{(\refeqq{eq:simpletensors})} and \textcolor{blue}{(\refeqq{eq:abb})}, and also $\text{T}_n:=\text{T}_{[n]}$}
\begin{equation}\label{eq:tsexp}
\mathcal{S}: V\to \cA[[ \formj ]] ,
\qquad
\ssA\mapsto \mathcal{S}(\formj\! \ssA)
=
\sum^\infty_{n=0} \dfrac{c^n}{n!} \text{T}_n
\underbrace{\big ( \formj\! \ssA\otimes \cdots \otimes \formj\! \ssA \big )}_{ \text{ $n$ times } }
:=
\sum^\infty_{n=0} \dfrac{\formj^n c^n}{n!} \text{T}_n(\ssA^n)
.
\end{equation}
By \cite[Equation 34]{norledge2020species} and \textcolor{blue}{(\refeqq{inverseuni})}, the $\text{T}$-exponential for the system of reverse $\text{T}$-products is the inverse of $\mathcal{S}$ as an element of the $\bC$-algebra of functions $\text{Func}(V,\cA[[\formj]])$, given by
\[
\mathcal{S}^{-1}: V\to \cA[[ \formj ]] ,
\qquad
\ssA\mapsto \mathcal{S}^{-1}(\formj\! \ssA) :=
\sum^\infty_{n=0} \dfrac{\formj^n c^n}{n!} \overline{\text{T}}_n(\ssA^n)
=
\sum^\infty_{n=0} \dfrac{\formj^n c^n}{n!} \text{T}_n(\overline{\tH}_{(n)}\otimes \ssA^n)
.\]
Therefore
\[
\mathcal{S}(\formj\! \ssA)\star \mathcal{S}^{-1}(\formj\! \ssA)=
\mathcal{S}^{-1}(\formj\! \ssA)\star \mathcal{S}(\formj\! \ssA) = 1_\cA
\]
for all $\ssA\in V$. This appears in e.g. \cite[Equation 2]{ep73roleofloc}.
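As a consistency check, expanding both $\text{T}$-exponentials to second order in $\formj$ gives
\[
\mathcal{S}(\formj\! \ssA)\star \mathcal{S}^{-1}(\formj\! \ssA)
=
1_\cA
+ \dfrac{\formj^2 c^2}{2}
\Big(
\text{T}_2(\ssA^2)
- 2\, \text{T}_1(\ssA)\star \text{T}_1(\ssA)
+ \overline{\text{T}}_2(\ssA^2)
\Big)
+\cdots
\]
where the order-$\formj$ terms $c\, \text{T}_1(\ssA)$ and $c\, \overline{\text{T}}_1(\ssA)=-c\, \text{T}_1(\ssA)$ have already canceled. By the expansion of the reverse $\text{T}$-product above, $\overline{\text{T}}_2(\ssA^2)=2\, \text{T}_1(\ssA)\star \text{T}_1(\ssA)-\text{T}_2(\ssA^2)$, and so the order-$\formj^2$ term vanishes as well.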
\section{Perturbation of T-Products}
For the perturbation of $\text{T}$-products by a certain up coderivation of $\textbf{E}$ which gives the S-matrix scheme $\mathcal{S}_{\formg \ssS}(\formj\! \ssA)= \mathcal{S}(\formg \ssS+ \formj\! \ssA)$, see \cite[Section 10.1]{norledge2020species}.
\subsection{Perturbation of T-Products by Steinmann Arrows} \label{sec:Perturbation of T-Products by Steinmann Arrows}
Suppose we have a system of generalized \hbox{$\text{T}$-products}
\[
\text{T}: \Sig \otimes \textbf{E}_V \to \textbf{U}_\cA
,\qquad
\tH_F\otimes \ssA_I \mapsto \text{T}_I(\tH_F\otimes \ssA_I)
.\]
Following \cite[Section 6.4]{norledge2020species}, given a choice of decorations vector $\ssS\in V$, we can use the retarded Steinmann arrow \textcolor{blue}{(\refeqq{steindown})} to perturb $\text{T}$ as follows.
Recall the decorated Hopf algebra $\Sig\otimes\textbf{E}_V$ from \autoref{Decorations}. Recall also the derivative $(\Sig\otimes\textbf{E}_V)'$ of $\Sig\otimes\textbf{E}_V$ from \autoref{Derivations and Coderivations of Sig}, given by
\[
(\Sig\otimes\textbf{E}_V)'[I] = \Sig[\ast I]\otimes V\otimes V^{\otimes I}
.\]
We have the up derivation of $\Sig\otimes\textbf{E}_V$ which is the decorated analog of the retarded Steinmann arrow, given by
\[
\Sig\otimes\textbf{E}_V \to (\Sig\otimes\textbf{E}_V)', \qquad
\tH_F \otimes \ssA_{i_1} \otimes \dots \otimes \ssA_{i_n}
\ \mapsto \
\ast \downarrow \tH_F \otimes \ssS\otimes \ssA_{i_1} \otimes \dots \otimes \ssA_{i_n}
.\]
This is indeed still an up derivation by \cite[Proposition 6.4]{norledge2020species}. Analogous to the setting without decorations, we have the induced raising operators and associated $\textbf{E}$-action by iterating, which, after currying, give us the homomorphism
\[
\Sig\otimes\textbf{E}_V \to (\Sig\otimes\textbf{E}_V)^{\textbf{E}}
\]
\[
\tH_F \otimes \ssA_{i_1} \otimes \dots \otimes \ssA_{i_n}
\ \mapsto \
\sum^{\infty}_{r=0} \underbrace{\downarrow \dots \downarrow}_{\text{$r$ times}} \tH_F \otimes\underbrace{\ssS \otimes \dots \otimes \ssS}_{\text{$r$ times}}\otimes \ssA_{i_1} \otimes \dots \otimes \ssA_{i_n}
.\]
This is a homomorphism by \cite[Theorem 5.1]{norledge2020species}. Then, a new `perturbed' system of generalized \hbox{$\text{T}$-products} is given by composing this homomorphism with $\text{T}^{\textbf{E}}$ (defined in \textcolor{blue}{(\refeqq{eq:endo})}),
\[
\widetilde{\text{T}} : \Sig\otimes\textbf{E}_V \to (\Sig\otimes\textbf{E}_V )^{\textbf{E}} \xrightarrow{\text{T}^\textbf{E}} (\textbf{U}_\cA)^{\textbf{E}} \cong \textbf{U}_{\cA[[\formg]]}
.\]
For the result that $(\textbf{U}_\cA)^{\textbf{E}} \cong \textbf{U}_{\cA[[\formg]]}$, see \cite[Section 4]{norledge2020species}.
\begin{remark}
The fact that $\widetilde{\text{T}}$ is still a homomorphism, and is thus still a system of generalized products, depends crucially on the fact that the Steinmann arrow is a derivation \cite[Theorem 5.1]{norledge2020species}, and on the fact that $(-)^\textbf{E}$ is a monoidal functor \cite[Section 2.5]{norledge2020species}. We can similarly perturb a system of generalized $\text{R}$-products, which uses the fact that the Steinmann arrow is a biderivation.
\end{remark}
We now unpack all this formalism to give a fully explicit description of the new perturbed system of products. Let us abbreviate
\[
\ssS_Y\ssA_I= \ssS_{y_1}\otimes \dots \otimes \ssS_{y_r} \otimes \ssA_{i_1} \otimes \dots \otimes \ssA_{i_n} \in \textbf{E}_V[Y\sqcup I]
.\]
Let
\begin{equation}\label{eq:retardprod}
\text{R}_{Y;I}(\ssS_Y;\ssA_I)
:=
\underbrace{\text{T}_{Y\sqcup I}( \mathtt{R}_{(Y;I)} \otimes \ssS_Y \ssA_I)
=
\sum_{Y_1 \sqcup Y_2=Y} \overline{\text{T}}_{Y_1 \sqcup \emptyset}(\ssS_{Y_1}) \star \text{T}_{Y_2 \sqcup I}( \ssS_{Y_2} \ssA_I)}_{\text{by } \textcolor{blue}{(\refeqq{eq:retardadvan})}}
.
\end{equation}
Then the new perturbed system is given by\footnote{\ we abbreviate $\text{R}_{r;I}(\ssS^{\, r};\ssA_I):=\text{R}_{[r];I}(\ssS^{\, [r]};\ssA_I)=\text{R}_{[r];I}(\underbrace{\ssS\otimes \dots \otimes \ssS}_{\text{$r$ times}} \, ; \ssA_I)$}
\[
\widetilde{\text{T}}:\Sig\otimes\textbf{E}_V \to \textbf{U}_{\cA[[\formg]]}
,\quad
\tH_F\otimes \ssA_I \mapsto \sum_{r=0}^\infty\ \sum_{r_1 +\, \cdots\, + r_k=r } \dfrac{\formg^{r}}{r!} \text{R}_{r_1;S_1}(\ssS^{\, r_1};\ssA_{S_1})\star \cdots\star \text{R}_{r_k;S_k}(\ssS^{\, r_k};\ssA_{S_k})
.\]
In particular, the restriction to $\textbf{E}_+^\ast\otimes\textbf{E}_V$, i.e. the new perturbed $\text{T}$-product, is given by
\begin{align*}
\widetilde{\text{T}}_I(\ssA_I)&= \sum_{r=0}^\infty \dfrac{\formg^{r}}{r!} \text{R}_{r;I}(\ssS^{\, r};\ssA_I) \\
&=\text{T}_I(\ssA_I) + \underbrace{\formg\, \text{T}_{\ast_1 I}(\downarrow \tH_{(I)} \otimes \ssS \ssA_I) + \dfrac{\formg^2}{2!} \text{T}_{\ast_2 \ast_1 I}(\downarrow \downarrow \tH_{(I)} \otimes \ssS \ssS \ssA_I) + \cdots}_{\text{perturbation}}\ .
\end{align*}
Similarly, we can perturb a system of generalized $\text{T}$-products using the advanced Steinmann arrow.
We let $\mathcal{V}_{\! \formg\ssS}$, respectively $\mathcal{W}_{\! \formg\ssS}$, denote the $\text{T}$-exponential (as defined in \textcolor{blue}{(\refeqq{eq:tsexp})}) for the new perturbed system of generalized $\text{T}$-products using the retarded, respectively advanced, Steinmann arrows. Thus
\[
\mathcal{V}_{\! \formg\ssS}: V\to \cA[[\formg,\! \formj]],
\qquad
\mathcal{V}_{\! \formg\ssS}(\formj\! \ssA)
:=
\sum_{n=0}^\infty
\dfrac{\formj^n c^n}{n!}\, \wt{\text{T}}_n (\ssA^n)
=
\sum_{n=0}^\infty \sum_{r=0}^\infty
\dfrac{\formg^{r} \formj^n c^{r+n}}{r!\, n!}\, \text{R}_{r;n} (\ssS^{\, r} ; \ssA^n)
\]
and
\[
\mathcal{W}_{\! \formg\ssS}: V\to \cA[[\formg,\! \formj]],
\qquad
\mathcal{W}_{\! \formg\ssS}(\formj\! \ssA):=\sum_{n=0}^\infty
\dfrac{\formj^n c^n}{n!}\, \wt{\text{T}}_n (\ssA^n)
=
\sum_{n=0}^\infty \sum_{r=0}^\infty
\dfrac{\formg^{r} \formj^n c^{r+n}}{r! \, n!}\, \text{A}_{r;n} (\ssS^{\, r};\ssA^n)
\]
where
\[
\text{A}_{Y;I}(\ssS_Y;\ssA_I)
:=
\underbrace{\text{T}_{Y\sqcup I}( \mathtt{A}_{(Y;I)} \otimes \ssS_Y \ssA_I)
=
\sum_{Y_1 \sqcup Y_2=Y} \text{T}_{Y_1 \sqcup I}( \ssS_{Y_1} \ssA_I) \star \overline{\text{T}}_{Y_2\sqcup \emptyset}(\ssS_{Y_2})}_{\text{by } \textcolor{blue}{(\refeqq{eq:retardadvan})}}
.\]
\begin{thm}
We have
\[
\mathcal{V}_{\formg \ssS}(\formj\! \ssA)=
\mathcal{S}^{-1}( \formg \ssS)\star \mathcal{S}(\formg \ssS +\formj\! \ssA )
\qquad \text{and} \qquad
\mathcal{W}_{\formg \ssS}(\formj\! \ssA)=
\mathcal{S}(\formg \ssS +\formj\! \ssA )\star \mathcal{S}^{-1}( \formg \ssS)
.\]
\end{thm}
\begin{proof}
We have
\[
\text{R}_{r;I}(\ssS^{\, r}; \ssA_I)=
\sum_{Y_1\sqcup Y_2=[r]} \overline{\text{T}}_{Y_1 \sqcup \emptyset}(\ssS^{Y_1})
\star
\text{T}_{Y_2\sqcup I}( \ssS^{Y_2}\ssA_I)
.\]
Then
\begin{align*}
\mathcal{V}_{\formg \ssS}(\formj\! \ssA)
=&\
\sum_{n=0}^\infty \sum_{r=0}^\infty
\dfrac{\formg^{r} \formj^n c^{r+n}}{r!\, n!}\, \text{R}_{r;n} (\ssS^{\, r} ; \ssA^n) \\[6pt]
=&\
\sum_{n=0}^\infty \sum_{r=0}^\infty
\dfrac{\formg^{r} \formj^n c^{r+n}}{r!\, n!}
\sum_{Y_1\sqcup Y_2=[r]} \overline{\text{T}}_{Y_1 \sqcup \emptyset}(\ssS^{Y_1})
\star
\text{T}_{Y_2\sqcup [n]}(\ssS^{Y_2}\ssA^n) \\[6pt]
=& \
\sum^\infty_{r=0} \dfrac{\formg^{r} c^r}{r!} \overline{\text{T}}_{r+0}(\ssS^{\, r})
\star
\sum_{n=0}^\infty \sum_{r=0}^\infty \dfrac{\formg^{r} \formj^n c^{r+n}}{r!\, n!} \text{T}_{r+n} (\ssS^{\, r} \ssA^n)
\\[6pt]
=& \
\mathcal{S}^{-1}( \formg \ssS)\star \mathcal{S}(\formg \ssS +\formj\! \ssA )
\end{align*}
The proof for $\mathcal{W}_{\formg \ssS}(\formj\! \ssA)$ is similar.
\end{proof}
\begin{cor}[Bogoliubov's Formula {\cite[Chapter 4]{Bogoliubov59}}]
We have
\begin{equation} \label{eq:Bog}
\widetilde{\text{T}}_1(\ssA)
=
\dfrac{1}{c}\, \dfrac{d}{d \formj } \Bigr|_{\formj=0} \mathcal{V}_{\formg \ssS}(\formj\! \ssA)
.
\end{equation}
\end{cor}
\begin{proof}
We have
\[
\dfrac{d}{d \formj }\mathcal{V}_{\formg \ssS}(\formj\! \ssA)
=
\dfrac{d}{d \formj } \sum_{n=0}^\infty
\dfrac{\formj^n c^n}{n!}\, \widetilde{\text{T}}_n (\ssA^n)
=
\sum_{n=1}^\infty
\dfrac{\formj^{n-1} c^n}{(n-1)!}\, \widetilde{\text{T}}_n (\ssA^n)
.\]
Then, putting $\formj=0$, we obtain
\[
\dfrac{d}{d \formj } \Bigr|_{\formj=0} \mathcal{V}_{\formg \ssS}(\formj\! \ssA)
=
c\, \widetilde{\text{T}}_1 (\ssA)
.\qedhere \]
\end{proof}
This formula was originally motivated by the path integral heuristic, see e.g. \cite[Remark 15.16]{perturbative_quantum_field_theory}.
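Combining this corollary with the preceding theorem expresses the perturbed $\text{T}$-product directly in terms of $\text{T}$-exponentials,
\[
\widetilde{\text{T}}_1 (\ssA)
=
\dfrac{1}{c}\, \dfrac{d}{d \formj } \Bigr|_{\formj=0}\,
\mathcal{S}^{-1}( \formg \ssS)\star \mathcal{S}(\formg \ssS +\formj\! \ssA )
\]
which is the form in which Bogoliubov's formula usually appears, the factor $\mathcal{S}^{-1}( \formg \ssS)\star \mathcal{S}(\formg \ssS +\formj\! \ssA )$ being the so-called relative S-matrix.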
\subsection{$\text{R}$-Products and $\text{A}$-Products}
The linear maps $\text{R}(Y;I)$ which are given by
\[
\text{R}(Y;I):\textbf{E}_V^{[Y]}[I] \to \cA
,\qquad
\ssS_Y \ssA_I \mapsto \text{R}_{Y;I}(\ssS_Y;\ssA_I)
\]
are called R\emph{-products}. In the case of singletons $I=\{i\}$, the maps $\text{R}(Y;i)$ are called \emph{total} R\emph{-products}. By \textcolor{blue}{(\refeqq{eq:retardadvan})}, $\text{R}$-products are given in terms of $\text{T}$-products and reverse $\text{T}$-products by
\[
\text{R}(Y;I)=\sum_{Y_1 \sqcup Y_2 =Y} \overline{\text{T}}(Y_1) \star \text{T}(Y_2\sqcup I)
.\]
Then
\[
\widetilde{\text{T}}(I)= \sum_{r=0}^\infty \dfrac{c^r}{r!} \text{R}(r;I)
.\]
In a similar way, we can define the A-\emph{products} $\text{A}(Y;I)$, so that
\[
\text{A}(Y;I)=\sum_{Y_1 \sqcup Y_2 =Y} \text{T}(Y_1 \sqcup I) \star \overline{\text{T}}(Y_2)
.\]
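For a singleton $Y=\{y\}$, writing $\overline{\text{T}}(\emptyset)$ for the unit of $\cA$, these expansions give
\[
\text{R}(\{y\};I)=\text{T}(\{y\}\sqcup I)-\text{T}(\{y\})\star \text{T}(I)
\qquad \text{and} \qquad
\text{A}(\{y\};I)=\text{T}(\{y\}\sqcup I)-\text{T}(I)\star \text{T}(\{y\})
\]
using $\overline{\text{T}}(\{y\})=-\text{T}(\{y\})$.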
The total R-products are both R-products and generalized R-products, which is due to the double description appearing in \autoref{rem:double}. A related result is \cite[Proposition 109]{aguiar2013hopf}.
\begin{remark}
In the literature, the total retarded products in our sense are sometimes called retarded products, and the retarded products in our sense are then called generalized retarded products, e.g. \cite{polk58}, \cite[Exercise 3.3.16]{dutsch2019perturbative}.
\end{remark}
\part{Perturbative Algebraic Quantum Field Theory} \label{part 2}
We now apply the theory we have developed to the case of a real scalar quantum field on a Minkowski spacetime, as described by pAQFT.\footnote{\ although pAQFT deals more generally with perturbative \hbox{Yang-Mills} gauge theory on curved spacetimes} Mathematically, the important extra property is a causal structure on the vector space of decorations $V$, which allows one to impose causal factorization. Connections between QFT and species have been previously studied in \cite{MR2036353}, \cite{MR2862982}, \cite{MR3753672}.
Our references for pAQFT are \cite{dutfred00}, \cite{rejzner2016pQFT}, \cite{dutsch2019perturbative}, \cite{perturbative_quantum_field_theory}. We mainly adopt the notation and presentation of \cite{perturbative_quantum_field_theory}. Key features of pAQFT are its local, i.e. \hbox{sheaf-theoretic}, approach, the (closely related) use of adiabatic switching of interaction terms to avoid IR-divergences, and the interpretation of renormalization as the extension of distributions to the fat diagonal to avoid UV-divergences. The Wilsonian cutoff, sometimes called heuristic quantum field theory, may be rigorously formulated within pAQFT \cite{dutfred09}, \cite{dut12}, \cite[Section 3.8]{dutsch2019perturbative}, \cite[Section 16]{perturbative_quantum_field_theory}.
\section{Spacetime and Field Configurations}
Let $\cX\cong \bR^{1,p}$ denote a $(p+1)$-dimensional Minkowski spacetime, for $p\in \bN$. Thus, $\cX$ is a real vector space equipped with a metric tensor which is a symmetric nondegenerate bilinear form $\cX\times \cX\to \bR$ with signature $(1,p)$. The bilinear form gives rise to a volume form on $\cX$, which we denote by $\text{dvol}_\cX\in \Omega^{p+1}(\cX)$. For regions of spacetime $X_1,X_2\subset \cX$, we write
\[
X_1\! \vee\! \! \wedge X_2
\]
if one cannot travel from $X_1$ to $X_2$ on a future-directed timelike or lightlike curve. We have the set-valued species $\cX^{(-)}$ given by
\[
I\ \mapsto \ \cX^I:= \big \{ \text{functions}\ I\to \cX \big\}
.\]
For simplicity, we restrict ourselves to the Klein-Gordon real scalar field on $\cX$. To this end, let $E\to \cX$ be a smooth real vector bundle over $\cX$ with one-dimensional fibers. An (off-shell) \emph{field configuration} $\Phi$ is a smooth section of the bundle $E\to \cX$,
\[
\Phi:\cX\hookrightarrow E
,\qquad
x\mapsto \Phi(x)
.\]
The space of all field configurations, denoted $\Gamma(E)$, has the structure of a Fr\'echet topological (real) vector space.
\begin{remark}
We can always pick an isomorphism $(E\to \cX) \cong (\cX\times \bR\to \cX)$, which induces an isomorphism $\Gamma(E)\cong C^\infty(\cX,\bR)$, so that field configurations are modeled as smooth functions $\cX\to \bR$.
\end{remark}
Let $E^\ast\to \cX$ denote the dual vector bundle of $E$, and let the canonical pairing be denoted by
\[
\la -,- \ra : E^\ast \otimes E \to \bR
.\]
Let a \emph{compactly supported distributional section} $\ga$ be a distribution of field configurations
\[
\ga:\Gamma(E)\to \bR
,\]
i.e. an element of the topological dual vector space of $\Gamma(E)$, which is modeled as a sequence $(\ga_j)_{j\in \bN}$ of smooth compactly supported sections of the dual bundle $E^\ast\to \cX$,
\[
\ga_j:\cX \hookrightarrow E^\ast, \qquad j\in \bN
,\]
where the modeled distribution is recovered as the following limit of integrals,
\[
\Gamma(E)\to \bR, \qquad \Phi \mapsto \underbrace{\int_{x\in \cX} \big \la \ga(x), \Phi(x) \big \ra \text{dvol}_\cX := \lim_{j\to \infty} \int_{x\in \cX} \la \ga_j(x) , \Phi(x) \ra \text{dvol}_\cX}_{\text{sometimes called \emph{generalized function} notation}}
.\]
The space of all compactly supported distributional sections is denoted $\Gamma_{\text{cp}}'(E^\ast)$. By e.g. \cite[Lemma 2.15]{bar15}, \emph{all} distributions $\Gamma(E)\to \bR$ may be obtained as compactly supported distributional sections in this way.
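For example, after fixing a trivialization $\Gamma(E)\cong C^\infty(\cX,\bR)$ as in the remark above, the Dirac delta distribution at an event $x\in \cX$,
\[
\delta_x: \Gamma(E)\to \bR
,\qquad
\Phi\mapsto \Phi(x)
,\]
is a compactly supported distributional section; it is modeled by any sequence $(\ga_j)_{j\in \bN}$ of bump functions concentrating at $x$ and normalized so that $\int_{\cX} \ga_j\, \text{dvol}_\cX=1$, i.e. a mollifier.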
We can pull back the vector bundle $E^\ast$ to $\cX^I$ along each canonical projection
\[
\cX^I\to \cX^{\{i\}}\cong \cX
,\qquad
i\in I
.\]
The tensor product of these $n$ pullback bundles is the exterior tensor product bundle $(E^\ast)^{ \boxtimes I }$. This defines a presheaf of smooth vector bundles on $\sfS$,
\[
\sfS^{\op}\to \textsf{Diff}_{/ \cX}
, \qquad
I\mapsto (E^\ast)^{ \boxtimes I }
.\]
By taking complexified compactly supported distributional sections ${{\Gamma'}^\bC}_{\! \! \! \! \! \! \text{cp}}(-):=\Gamma_{\text{cp}}'(-)\otimes_\bR \bC$, we obtain the complex vector species ${{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)$, given by
\[
{{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)[I]:= {{\Gamma'}^\bC}_{\! \! \! \! \! \! \text{cp}}\Big ((E^\ast)^{ \boxtimes I }\Big)
.\]
Of course, ${{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)$ does not `factorize' in the sense that it is not a monoidal functor,
\begin{equation} \label{fact}
{{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)[I] \ncong {{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)[i_1]\otimes \dots \otimes {{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)[i_n]
\end{equation}
where $I=\{ i_1,\dots,i_n \}$. There are more distributional sections than just those coming from the tensor product.
\section{Observables} \label{sec:obs}
An off-shell \emph{observable} $\emph{\textsf{O}}$ is a smooth functional of field configurations into the complex numbers,
\[
\emph{\textsf{O}}:\Gamma(E)\to \bC
,\qquad
\Phi\mapsto \emph{\textsf{O}}(\Phi)
.\]
The space of all observables is denoted $\text{Obs}$. We can multiply observables pointwise (this multiplication is sometimes called the \emph{normal ordered product}), so that observables form a commutative $\bC$-algebra,
\[
\text{Obs}\otimes \text{Obs} \to \text{Obs}, \qquad \emph{\textsf{O}}_1\otimes \emph{\textsf{O}}_2 \mapsto \emph{\textsf{O}}_1 \cdot \emph{\textsf{O}}_2
\]
where
\[
\emph{\textsf{O}}_1 \cdot \emph{\textsf{O}}_2(\Phi):= \! \! \! \underbrace{\emph{\textsf{O}}_1(\Phi)\emph{\textsf{O}}_2(\Phi)}_{\text{multiplication in $\bC$}} \! \! \!
.\]
Thus, we may form the commutative algebra in species $\textbf{U}_{\text{Obs}}$, given by $\textbf{U}_{\text{Obs}}[I]=\text{Obs}$.
A \emph{linear observable} $\emph{\textsf{O}}\in \text{Obs}$ is an observable which is additionally a linear functional, that is
\[
\emph{\textsf{O}}( \Phi_1 + \Phi_2 ) = \emph{\textsf{O}}(\Phi_1) + \emph{\textsf{O}}(\Phi_2)
\qquad \text{and} \qquad
\emph{\textsf{O}}(c \Phi) = c \emph{\textsf{O}}(\Phi) \qquad \text{for}\quad c\in \bC
.\]
The space of linear observables is denoted $\text{LinObs}\subset \text{Obs}$. In particular, for each spacetime event $x\in \cX$, we have the \emph{field observable} $\boldsymbol{\Phi}(x) \in \text{LinObs}$, given by
\[
\boldsymbol{\Phi}(x): \Gamma(E)\to \bC, \qquad \Phi\mapsto \Phi(x)
.\]
We now show how linear observables and so-called polynomial observables arise species-theoretically, via (generalized) systems of products for the species $\textbf{E}$ and $\textbf{X}=\mathcal{P}(\textbf{E})$.
Let $\textbf{X}$ denote the species given by $\textbf{X}\big [\{i\}\big]:=\bC$ for singletons and $\textbf{X}[I]:=0$ otherwise. We denote $\tH_i:=1\in \textbf{X}\big [\{i\}\big]$. We have the following morphism of species,
\[
\textbf{X} \otimes {{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast) \to \textbf{U}_{\text{Obs}},
\qquad
\tH_i \otimes \ga \mapsto
\bigg( \Phi \mapsto \int_{x\in \cX} \big \la \ga(x), \Phi(x) \big \ra \text{dvol}_\cX \bigg)
.\]
This is like a system of products for $\textbf{X}$, however ${{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)$ does not factorize \textcolor{blue}{(\refeqq{fact})}, and so cannot be written in the form $\textbf{E}_V$. It follows from \cite[Lemma 2.15]{bar15} that the colimit (as defined in \cite[Remark 15.7]{aguiar2010monoidal}) of the species which is the image of this morphism is the space of linear observables $\text{LinObs}$. The currying of this map is given by
\[
\textbf{X}\to \cH \big ( {{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast) , \textbf{U}_{\text{Obs}}\big ),
\qquad
\tH_i\mapsto \boldsymbol{\Phi}_i=\boldsymbol{\Phi} \]
where
\[
\boldsymbol{\Phi}(\ga):=\bigg( \Phi \mapsto \int_{x\in \cX}\big \la \ga(x), \Phi(x) \big \ra \text{dvol}_\cX \bigg)
.\]
If we restrict $\boldsymbol{\Phi}$ to bump functions $b\in \Gamma_{\text{cp}}(E^\ast)\otimes_{\bR} \bC$, also called `smearing functions', then one might call the linear map
\[
\boldsymbol{\Phi}: \Gamma_{\text{cp}}(E^\ast)\otimes_{\bR} \bC \to \text{Obs}
,\qquad
b\mapsto \boldsymbol{\Phi}(b)
\]
an `observable-valued distribution', and this is sometimes referred to as `the (smeared) field'. The field observable $\boldsymbol{\Phi}(x)$ is recovered by evaluating $\boldsymbol{\Phi}$ on the Dirac delta function $\delta_x$ localized at $x$. One views $b$ as the smearing of a Dirac delta function, hence smearing functions and smeared field.
We extend the smeared field by replacing $\textbf{X}$ with $\textbf{E}$ to define the following morphism of species,
\[
\textbf{E} \otimes {{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast) \to \textbf{U}_{\text{Obs}}
,\qquad
\tH_I\otimes \ga_I
\mapsto
\bigg( \Phi \mapsto \int_{\cX^I}
\big \la
\ga_I(x_{i_1}, \dots, x_{i_n}),
\Phi(x_{i_1})\dots \Phi(x_{i_n})
\big \ra \text{dvol}_{\cX^I} \bigg)
.\]
This is like a system of products for $\textbf{E}$, but again without factorization. The colimit of the species which is the image of this morphism is the vector space of \emph{polynomial observables}, as defined in e.g. \cite[Definition 7.13]{perturbative_quantum_field_theory}, denoted
\[
\text{PolyObs}\subset \text{Obs}
.\]
(Alternatively, if we restrict the limit of this map $\mathscr{S}({{\boldsymbol{\Gamma}'}^\bC}_{\! \! \! \! \! \! \text{cp}}(E^\ast)) \to \text{Obs}[[\formj]]$ to finite series and set $\formj=1$, then we recover \cite[Definition 1.2.1]{dutsch2019perturbative}.) The space of \emph{microcausal polynomial observables} $\mathcal{F}$ is the subspace
\[
\mathcal{F} \subset \text{PolyObs}
\]
consisting of those polynomial observables which satisfy a certain microlocal-theoretic condition called \emph{microcausality}, see \cite[Definition 1.2.1 (ii)]{dutsch2019perturbative}. Following \cite[Definition 1.3.4]{dutsch2019perturbative}, the space of \emph{local observables}
\[
\mathcal{F}_{\text{loc}}\subset \text{Obs}
\]
consists of those observables obtained by integrating a polynomial with real coefficients in the field and its derivatives (`field polynomials') against a bump function $b\in \Gamma_{\text{cp}}(E^\ast)\otimes_\bR \bC$. Importantly, we have a natural inclusion
\[
\mathcal{F}_{\text{loc}} \hookrightarrow \mathcal{F}
,\qquad
\ssA \mapsto\ :\ssA:
.\]
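For example, fixing a trivialization $\Gamma(E)\cong C^\infty(\cX,\bR)$, for each bump function $b\in \Gamma_{\text{cp}}(E^\ast)\otimes_\bR \bC$ the smeared squared field
\[
\ssA: \Gamma(E)\to \bC
,\qquad
\Phi\mapsto \int_{x\in \cX} b(x)\, \Phi(x)^2\, \text{dvol}_\cX
\]
is a local observable, the integrand being built from the field polynomial $\Phi^2$; adiabatically switched mass terms are of this form.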
Let $\mathcal{F}_{\text{loc}}[[\hbar]]$ and $\mathcal{F}[[\hbar]]$ denote the spaces of formal power series in $\hbar$ with coefficients in $\mathcal{F}_{\text{loc}}$ and $\mathcal{F}$ respectively, and let $\mathcal{F}((\hbar))$ denote the space of Laurent series in $\hbar$ with coefficients in $\mathcal{F}$.
Applying Moyal deformation quantization with formal Planck's constant $\hbar$, $\mathcal{F}[[\hbar]]$ is a formal power series $\ast$-algebra, called the (abstract, off-shell) \emph{Wick algebra}, with multiplication the Moyal star product \cite[Definition 2.1.1]{dutsch2019perturbative} defined with respect to the Wightman propagator $\Delta_{\text{H}}$ for the Klein-Gordon field \cite[Section 2.2]{dutsch2019perturbative},
\[
\mathcal{F}[[\hbar]] \otimes \mathcal{F}[[\hbar]] \to \mathcal{F}[[\hbar]]
,\qquad
\emph{\textsf{O}}_1 \otimes \emph{\textsf{O}}_2
\mapsto
\emph{\textsf{O}}_1\star_{\text{H}}\emph{\textsf{O}}_2
.\]
We may form the algebra in species $\textbf{U}_{\mathcal{F}[[\hbar]]}$, or, allowing negative powers of $\hbar$, $\textbf{U}_{\mathcal{F}((\hbar))}$.
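To make the star product concrete, here is a minimal sketch in a finite-dimensional toy model: an observable is a polynomial in a single field variable $\phi$, and the role of the Wightman propagator is played by a single number. The one-mode combinatorial formula below (pairings weighted by powers of $\hbar w$) is only an illustration of the structure of $\star_{\text{H}}$, not the paper's construction; all names are ours.

```python
from collections import defaultdict
from math import comb, factorial

# One-mode toy of the star product: an observable is a polynomial in a
# single field variable phi, stored as a dict {degree: coefficient};
# the role of the Wightman propagator is played by a single number w.
# Pairings of k field factors contribute a weight (hbar*w)^k:
#   phi^m * phi^n = sum_k C(m,k) C(n,k) k! (hbar*w)^k phi^(m+n-2k).

def star(p, q, hbar=1.0, w=1.0):
    out = defaultdict(float)
    for m, a in p.items():
        for n, b in q.items():
            for k in range(min(m, n) + 1):
                weight = comb(m, k) * comb(n, k) * factorial(k) * (hbar * w) ** k
                out[m + n - 2 * k] += a * b * weight
    return dict(out)
```

For example, $\phi\star\phi=\phi^2+\hbar w$, the one-mode analogue of $\Phi(x)\star_{\text{H}}\Phi(y)= \, :\!\Phi(x)\Phi(y)\!: \, +\, \hbar\Delta_{\text{H}}(x,y)$, and the product is associative.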
\section{Time-Ordered Products and S-Matrix Schemes} \label{sec:Time-Ordered Products}
For $\ssA\in \mathcal{F}_{\text{loc}}[[\hbar]]$, let $\text{supp}(\ssA)$ denote the spacetime support of $\ssA$. Given a composition $G$ of $I$, we say that $\ssA_I\in \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$ \emph{respects} $G$ if
\[
\text{supp}({\ssA_{i_1}})\vee\! \! \wedge\ \text{supp}(\ssA_{i_2})
\qquad \quad \text{for all}\quad
(i_1,i_2)\quad \text{such that}\quad G|_{\{i_1, i_2\}}= (i_1,i_2)
.\footnote{\ $G|_{\{i_1, i_2\}}= (i_1,i_2)$ means that $i_1$ and $i_2$ are in different lumps, with the lump containing $i_1$ appearing to the left of the lump containing $i_2$.}\]
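As an illustration of the `respects' condition, the following sketch checks it for finite sets of events in $1{+}1$-dimensional Minkowski space, reading $\text{supp}({\ssA_{i_1}})\vee\! \! \wedge\ \text{supp}(\ssA_{i_2})$ as ``no point of the first support lies in the strict causal past of a point of the second''. This reading, and all names below, are our illustrative assumptions.

```python
# Toy check of the `respects' condition for events in 1+1d Minkowski
# space.  Supports are finite sets of events (t, x).  We read the
# relation as: no event of the first support lies in the strict
# causal past of an event of the second.  All names are illustrative.

def in_strict_past(p, q):
    """True if event p = (t, x) is in the strict causal past of q."""
    dt = q[0] - p[0]
    dx = q[1] - p[1]
    return dt > 0 and dt * dt - dx * dx >= 0

def not_earlier(s1, s2):
    return all(not in_strict_past(p, q) for p in s1 for q in s2)

def respects(supports, composition):
    """supports: dict label -> set of events; composition: list of
    lumps (lists of labels), in the left-to-right order of the
    composition.  Check the pairwise condition for labels lying in
    distinct lumps."""
    for a, lump1 in enumerate(composition):
        for lump2 in composition[a + 1:]:
            if not all(not_earlier(supports[i1], supports[i2])
                       for i1 in lump1 for i2 in lump2):
                return False
    return True
```

With this reading, a lump to the left must be causally later than (or spacelike to) every lump to its right, matching the ordering in causal factorization.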
Consider a system of $\text{T}$-products (as defined in \autoref{sec:T-Products, Generalized T-Products, and Generalized R-Products}) of the form
\[
\text{T}: \textbf{E}^\ast_+ \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}\to
\textbf{U}_{\mathcal{F}((\hbar))},
\qquad
\tH_{(I)}\otimes \ssA_I\mapsto \text{T}_I(\tH_{(I)}\otimes \ssA_I)=\text{T}_I(\ssA_I)
.\]
\noindent Since $\Sig$ is the free algebra on $\textbf{E}^\ast_+$, we have the unique extension to a system of generalized \hbox{$\text{T}$-products}
\[
\text{T}: \Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}\to
\textbf{U}_{\mathcal{F}((\hbar))},
\qquad
\text{T}_I(\tH_{F}\otimes \ssA_I):=\text{T}_{S_1}(\ssA_{S_1})\star_{\text{H}} \dots \star_{\text{H}} \text{T}_{S_k}(\ssA_{S_k})
.\]
Then:
\begin{enumerate}[label={\arabic*.}]
\item
(perturbation) we say that $\text{T}$ satisfies \emph{perturbation} if the singleton components $\text{T}_{i}$ are isomorphic to the inclusion $\mathcal{F}_{\text{loc}}[[\hbar]]\hookrightarrow \mathcal{F}((\hbar))$, that is
\[
{\text{T}}_{i}( \ssA )= \, :\! \ssA \! :
\]
\item
(causal factorization) we say that $\text{T}$ satisfies \emph{causal factorization} if for all compositions $(S,T)$ of $I$ with two lumps, if $\ssA_I \in \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$ respects $(S,T)$\footnote{\ explicitly, $\text{supp}({\ssA_{i_1}})\vee\! \! \wedge\ \text{supp}(\ssA_{i_2})$ for all $i_1\in S$ and $i_2\in T$} then
\begin{equation}\label{eq:causalfac}
\text{T}_I(\tH_{(I)}\otimes\ssA_I)
=\text{T}_I(\tH_{(S,T)}\otimes\ssA_I).\footnote{\ or equivalently $\text{T}_I(\ssA_I)=\text{T}_S(\ssA_S)\star_{\text{H}} \text{T}_T(\ssA_T)$}
\end{equation}
\end{enumerate}
Let a (fully normalized) \emph{system of time-ordered products} be a system of $\text{T}$-products which satisfies perturbation and causal factorization. The corresponding unique extension of $\text{T}$ to $\Sig$ is called the associated \emph{system of generalized time-ordered products}. After currying
\[
\Sig \to \cH( \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]} , \textbf{U}_{\mathcal{F}((\hbar))} ),
\qquad
\tH_{F}\mapsto \text{T}(S_1)\dots \text{T}(S_k)
\]
the linear maps
\[
\text{T}(S_1)\dots \text{T}(S_k): \mathcal{F}_{\text{loc}}[[\hbar]]^{\otimes I}
\to
\mathcal{F}((\hbar))
,\qquad
\ssA_I\mapsto \text{T}_I(\tH_{F}\otimes \ssA_I)
\]
are called \emph{generalized time-ordered products}. The linear maps $\text{T}(I)$ are called \emph{time-ordered products}. After fixing a field polynomial, so that each $\ssA_{i_j}$ of $\ssA_I$ is determined by a bump function $b_{i_j}$, they are usually presented in generalized function notation as follows,
\[
\text{T}_I (\ssA_{i_1} \otimes \cdots \otimes \ssA_{i_n} )
=
\int_{\cX^I} \text{T}(x_{i_1}, \dots , x_{i_n}) b_{i_1}( x_{i_1} ) \dots b_{i_n}(x_{i_n})
dx_{i_1} \dots dx_{i_n}
\]
where $(x_{i_1}, \dots , x_{i_n}) \mapsto \text{T}(x_{i_1}, \dots , x_{i_n})$ is an `operator-valued' generalized function. See e.g. \cite[Section 1.2]{ep73roleofloc}.
Given compositions $F=(S_1, \dots , S_{k})$ and $G=(U_1,\dots, U_{l})$ of $I$, let
\[
\tH_{F}\triangleright \tH_{G} : =\tH_{( U_1\cap S_1, \dots, U_{l}\cap S_1, \dots \dots, U_{1}\cap S_{k}, \dots , U_{l}\cap S_{k} )_+}
.\]
This is called the \emph{Tits product}, going back to Tits \cite{Tits74}. See \cite[Section 13]{aguiar2013hopf} for more on the structure of the Tits product, where it is shown that it is given by the action of $\Sig$ on itself by Hopf powers. See also \cite[Section 1.4.6]{brown08} for the context of other Coxeter systems and Dynkin types.
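The Tits product is easy to compute: each lump of $F$ is refined by intersecting it, in order, with the lumps of $G$, and empty intersections are dropped. A sketch (the representation of compositions as lists of disjoint sets is our choice):

```python
# Sketch of the Tits product on compositions of a finite set,
# represented as lists of disjoint sets (lumps) read left to right:
# each lump of F is refined by the lumps of G in order, and empty
# intersections are dropped.

def tits(F, G):
    result = []
    for S in F:
        for U in G:
            lump = S & U
            if lump:
                result.append(lump)
    return result
```

In particular $\tH_{(I)}\triangleright\tH_G=\tH_G$, and refining by the one-lump composition $(I)$ changes nothing.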
\begin{prop}\label{prob:causalfac}
Let
\[
\text{T}: \textbf{E}^\ast_+ \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}\to
\textbf{U}_{\mathcal{F}((\hbar))}
\]
be a system of $\text{T}$-products which satisfies causal factorization. Let \hbox{$G=(U_1,\dots,U_k)$} be a composition of $I$, and suppose $\ssA_I \in \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$ respects $G$. Then
\[
\text{T}_I( \mathtt{a} \otimes\ssA_I)
=
\text{T}_I( \mathtt{a} \triangleright \tH_{G} \otimes\ssA_I)
\qquad \text{for all} \quad
\mathtt{a}\in \Sig[I]
.\]
\end{prop}
\begin{proof}
We have
\[
\text{T}_I(\tH_{G}\otimes\ssA_I)
=
\underbrace{\text{T}_{U_1}(\ssA_{U_1})\star_{\text{H}} \dots \star_{\text{H}} \text{T}_{U_k}(\ssA_{U_k})
=
\text{T}_I(\ssA_I)}_{\text{by repeated applications of causal factorization}}
.\]
Observe that the action $\tH_F \mapsto \tH_F \triangleright \tH_{G}$, for $F\in \Sigma[I]$, replaces the lumps of $F$ with their intersections with $G$. But we just saw that $\text{T}_I(\ssA_I)=\text{T}_I(\tH_{G}\otimes\ssA_I)$, and so it follows that
\[
\text{T}_I( \tH_F \otimes\ssA_I)
=
\text{T}_I( \tH_F \triangleright \tH_{G} \otimes\ssA_I)
.\]
Since the claim is true for the $\tH$-basis, it is true for all $\mathtt{a}\in \Sig[I]$.
\end{proof}
\begin{cor}\label{cor:cor}
If $\mathtt{a} \triangleright \tH_{G} =0$, then
\[
\text{T}_I(\mathtt{a} \otimes\ssA_I)=0
\]
for all $\ssA_I\in \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$ which respect $G$.
\end{cor}
The restriction of $\text{T}$ to the primitive part Lie algebra is called the associated \emph{system of generalized retarded products},
\[
\text{R}:\Zie\otimes\textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]} \to \textbf{U}_{\mathcal{F}((\hbar))}
.\]
The image of the Dynkin elements $\mathtt{D}_\cS$ under the currying of $\text{R}$ are the \emph{generalized retarded products} $\text{R}_\cS$, see e.g. \cite[Equation 79]{ep73roleofloc}. It follows from \autoref{cor:cor} and the structure of Dynkin elements under the Tits product that generalized retarded products have nice support properties. This is described in \cite{epstein1976general}.
Given a system of generalized time-ordered products
\[
\text{T}: \Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}\to
\textbf{U}_{\mathcal{F}((\hbar))}
\]
the $\text{T}$-exponential $\mathcal{S}=\mathcal{S}_{\mathtt{G}(1/\text{i}\hbar)}$ (defined in \textcolor{blue}{(\refeqq{eq:tsexp})}) for the group-like series
\[
\mathtt{G}(1/\text{i}\hbar): \textbf{E} \to \Sig
,\qquad
\tH_{I} \mapsto \dfrac{1}{\text{i}\hbar}\tH_{(I)}
\]
is called the associated perturbative \emph{S-matrix scheme}. Thus, $\mathcal{S}$ is the function
\[
\mathcal{S}: \mathcal{F}_{\text{loc}}[[\hbar]] \to \mathcal{F}((\hbar))[[\formj]]
,\qquad
\ssA\mapsto \mathcal{S}(\formj\! \ssA)
:=
\sum^\infty_{n=0} \bigg(\dfrac{1}{\text{i}\hbar}\bigg)^{\! n}\dfrac{\formj^n}{n!} \text{T}_n(\ssA^n)
.\]
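In a commutative toy model where observables are numbers, every $\text{T}_n(\ssA^n)$ is just $\ssA^n$ and $\star_{\text{H}}$ is ordinary multiplication, so the series collapses to the exponential $\exp(\formj\ssA/\text{i}\hbar)$. The sketch below truncates that series; it is only a sanity check of the combinatorial prefactors, not the pAQFT S-matrix.

```python
import cmath

# Commutative toy model of the S-matrix scheme: observables are
# complex numbers, T_n(A^n) = A**n, and the star product is ordinary
# multiplication, so S(jA) is the truncated exponential of jA/(i*hbar).
# This only sanity-checks the prefactors (1/(i*hbar))^n j^n / n!.

def s_matrix(A, hbar=1.0, j=1.0, order=30):
    total = 0j
    term = 1.0 + 0j
    for n in range(order):
        total += term
        term *= (j * A) / (1j * hbar) / (n + 1)
    return total
```

For small $|\formj\ssA/\hbar|$ the truncation agrees with `cmath.exp` to machine precision, and $\mathcal{S}(0)=1$ as required of an S-matrix scheme.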
\section{Interactions} \label{sec:Interactions}
Given a choice of adiabatically switched \emph{interaction} $\ssS_{\text{int}} \in \mathcal{F}_{\text{loc}}[[\hbar]]$, and a system of fully normalized generalized \hbox{time-ordered} products
\[
\text{T}: \Sig \otimes \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}\to
\textbf{U}_{\mathcal{F}((\hbar))}
,\]
we have the new system of interacting generalized time-ordered products which is obtained by the construction of \autoref{sec:Perturbation of T-Products by Steinmann Arrows},
\[
\wt{\text{T}}
:
\Sig\otimes\textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}
\to
\textbf{U}_{\mathcal{F}((\hbar))[[\formg]]}
.\]
The associated \emph{generating function scheme} $\mathcal{Z}_{\formg \ssS_{\text{int}}}$ for interacting field observables, and more generally for time-ordered products of interacting field observables, is the new $\text{T}$-exponential for the group-like series $\mathtt{G}(1/\text{i}\hbar)$, denoted $\mathcal{V}_{\formg \ssS_{\text{int}}}$ in \autoref{sec:Perturbation of T-Products by Steinmann Arrows}. Thus, $\mathcal{Z}_{\formg \ssS_{\text{int}}}$ is the function
\[
\mathcal{Z}_{\formg \ssS_{\text{int}}}
:
\mathcal{F}_{\text{loc}}[[\hbar]] \to \mathcal{F}((\hbar))[[\formg,\! \formj]]
,\qquad
\ssA \mapsto \mathcal{Z}_{\formg \ssS_{\text{int}}}(\formj\! \ssA)
\]
where
\[
\mathcal{Z}_{\formg \ssS_{\text{int}}}(\formj\! \ssA)
\! :=\!
\sum_{n=0}^\infty \! \bigg(\dfrac{1}{\text{i}\hbar}\bigg)^{\! \! n}\! \dfrac{\formj^n}{n!} \wt{\text{T}}_n(\ssA^n)
=\!
\sum_{n=0}^\infty \sum_{r=0}^\infty \!
\bigg(\dfrac{1}{\text{i}\hbar}\bigg)^{\! \! r+n}\!
\dfrac{\formg^{r} \formj^n}{r!\, n!}\, \! \text{R}_{r;n} (\ssS_{\text{int}}^{\, r} ; \ssA^n)
=
\mathcal{S}^{-1}( \formg \ssS_{\text{int}})\star_{\text{H}} \mathcal{S}(\formg \ssS_{\text{int}} +\formj\! \ssA )
.\]
Then
\[
\ssA_{\text{int}}
:=
\wt{\text{T}}_i(\ssA)
=
\sum_{r=0}^\infty
\bigg(\dfrac{1}{\text{i}\hbar}\bigg)^{\! r}
\dfrac{\formg^{r}}{r!} \text{R}_{r;1} (\ssS^{\, r}_{\text{int}}; \ssA)
\in \mathcal{F}((\hbar))[[\formg]]
\]
is the \emph{local interacting field observable} of $\ssA$. Bogoliubov's formula \textcolor{blue}{(\refeqq{eq:Bog})} now reads
\[
\ssA_{\text{int}}
=
\text{i} \hbar \, \dfrac{d}{d\formj}\Bigr|_{\formj=0} \mathcal{Z}_{\formg \ssS_{\text{int}}}(\formj\! \ssA)
.\]
One views $\ssA_{\text{int}}$ as the deformation of the local observable $\ssA$ due to the interaction $\ssS_{\text{int}}$. One can show that $\wt{\text{T}}$ does indeed land in $\textbf{U}_{\mathcal{F}[[\hbar,\formg]]}$ \cite[Proposition 2 (ii)]{dutfred00}. The perturbative interacting quantum field theory then has a classical limit \cite{collini2016fedosov}, \cite{MR4109798}.
\section{Scattering Amplitudes}\label{sec:scatterung}
We finish with a translation of a standard result in pAQFT (see \cite[Example 15.12]{perturbative_quantum_field_theory}) into our notation. It relates S-matrix schemes, as presented in \autoref{sec:Time-Ordered Products}, to the S-matrices used to compute scattering amplitudes, which are predictions of pAQFT tested in scattering experiments at particle accelerators.
Following \cite[Definition 2.5.2]{dutsch2019perturbative}, the \emph{Hadamard vacuum state} $\la - \ra_0$ is the linear map given by
\[
\la - \ra_0
:
\mathcal{F}[[\hbar,\formg]] \to \bC[[\hbar,\formg]]
,\qquad
\emph{\textsf{O}}\mapsto \la \emph{\textsf{O}}\, \ra_0:= \emph{\textsf{O}}\, (\Phi=0)
.\]
Let $\ssS_{\text{int}} \in \mathcal{F}_{\text{loc}}[[\hbar]]$. We say that the Hadamard vacuum state $\la - \ra_0$ is \emph{stable} with respect to the interaction $\ssS_{\text{int}}$ if for all $\emph{\textsf{O}}\in \mathcal{F}[[\hbar,\formg]]$, we have
\begin{equation}\label{eq:vacstab}
\big\la\emph{\textsf{O}}\star_{\text{H}}\mathcal{S}(\formg \ssS_{\text{int}}) \big \ra_0
=
\big \la\emph{\textsf{O}}\, \big\ra_0
\big\la \mathcal{S}(\formg \ssS_{\text{int}}) \big \ra_0
\qquad \text{and} \qquad
\big\la
\mathcal{S}^{-1}(\formg \ssS_{\text{int}})\star_{\text{H}} \emph{\textsf{O}} \,
\big\ra_0
=
\dfrac{1}{\big \la \mathcal{S}(\formg \ssS_{\text{int}})\big \ra_0 } \big \la \emph{\textsf{O}}\, \big \ra_0
.
\end{equation}
In situations where
\[
\ssS_{\text{int}} \otimes \ssA_I\in \textbf{E}'_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]
\qquad \text{respects the composition} \qquad
(S,\ast,T)
\]
we can interpret free particles/wave packets labeled by $T$ coming in from the far past, interacting in a compact region according to the adiabatically switched interaction $\ssS_{\text{int}}$, and then emerging into the far future, labeled by $S$. For $\ssA_I\in \textbf{E}_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$, let
\[\text{G}_I(\ssA_I):=\big\la
\widetilde{\text{T}}(\ssA_I)
\big\ra_0
.\]
If we fix the field polynomial of local observables to be $\text{P}(\Phi)=\Phi$, then $\ssA_I\mapsto \text{G}_I(\ssA_I)$ is the \hbox{time-ordered} $n$-point correlation function, or Green's function. They are usually presented in generalized function notation as follows,
\[
\text{G}_I (b_{i_1} \otimes \cdots \otimes b_{i_n} )
=
\int_{\cX^I} \Big\la \text{T} \big ( \boldsymbol{\Phi}(x_{i_1}) \dots \boldsymbol{\Phi}(x_{i_n})\big ) \Big \ra_0 b_{i_1}( x_{i_1} ) \dots b_{i_n}(x_{i_n})
dx_{i_1} \dots dx_{i_n}
.\]
Note that to obtain the realistic Green's functions, we still have to take the adiabatic limit.
\begin{prop} \label{prop:scattering}
If the Hadamard vacuum state $\la-\ra_0$ is stable with respect to $\ssS_{\text{int}} \in \mathcal{F}_{\text{loc}}[[\hbar]]$, and if $\ssS_{\text{int}} \otimes \ssA_I\in \textbf{E}'_{\mathcal{F}_{\text{loc}}[[\hbar]]}[I]$ respects the composition $(S, \ast, T)$, then
\[
\text{G}_I(\ssA_I)=
\dfrac{1}
{\Big\la
\mathcal{S}(\formg\ssS_{\text{int}})
\Big\ra_0 }
\Big\la \text{T}_S(\ssA_S)\star_{\text{H}} \mathcal{S}(\formg\ssS_{\text{int}})\star_{\text{H}}
\text{T}_T(\ssA_T)
\Big\ra_0
.\footnote{\ the element $\mathcal{S}(\formg\ssS_{\text{int}})\in \mathcal{F}((\hbar))[[\formg]]$ is called the perturbative S\emph{-matrix}}\]
\end{prop}
\begin{proof}
We have
\begin{align*}
\text{G}_I(\ssA_I)
&=
\big\la
\widetilde{\text{T}}(\ssA_I)
\big\ra_0 \\[6pt]
&=
\bigg\la
\sum_{r=0}^\infty \dfrac{\formg^{r}}{r!}
\text{R}_{r;I}( \ssS^{\, r}_{\text{int}} ; \ssA_{I} )
\bigg\ra_0 \\[6pt]
&=
\bigg\la
\sum_{r=0}^\infty \sum_{r_1 + r_2 =r} \dfrac{\formg^{r}}{r_1! \, r_2!}
\overline{\text{T}}_{[r_1] \sqcup \emptyset}(\ssS^{\, r_1}_{\text{int}}) \star_{\text{H}} \text{T}_{[r_2] \sqcup I}(\ssS^{\, r_2}_{\text{int}} \ssA_{I})
\bigg\ra_0.
\end{align*}
To obtain the final line, we expanded the retarded products according to \textcolor{blue}{(\refeqq{eq:retardprod})}. Then, by causal factorization \textcolor{blue}{(\refeqq{eq:causalfac})}, we have
\[
\text{T}_{[r_2]\sqcup I}(\ssS^{\, r_2}_{\text{int}} \ssA_{I})
=
\text{T}_{S}(\ssA_{S})
\star_{\text{H}}
\text{T}_{[r_2]\sqcup \emptyset}(\ssS^{\, r_2}_{\text{int}})
\star_{\text{H}}
\text{T}_{T}(\ssA_{T})
.\]
Therefore
\begin{align*}
\text{G}_I(\ssA_I)
&=
\bigg\la
\sum_{r=0}^\infty \sum_{r_1 + r_2 =r} \dfrac{\formg^{r}}{r_1! \, r_2!}
\overline{\text{T}}_{[r_1]\sqcup \emptyset}(\ssS^{\, r_1}_{\text{int}}) \star_{\text{H}}
\text{T}_{S}(\ssA_{S})
\star_{\text{H}}
\text{T}_{[r_2] \sqcup \emptyset}(\ssS^{\, r_2}_{\text{int}})
\star_{\text{H}}
\text{T}_{T}(\ssA_{T})
\bigg\ra_0 \\[6pt]
&=
\bigg\la
\sum_{r=0}^\infty \dfrac{\formg^{r}}{r!}
\overline{\text{T}}_{[r]\sqcup \emptyset}(\ssS^{\, r}_{\text{int}}) \star_{\text{H}}
\text{T}_{S}(\ssA_{S})
\star_{\text{H}}
\sum_{r=0}^\infty \dfrac{\formg^{r}}{r!}
\text{T}_{[r]\sqcup \emptyset}(\ssS^{\, r}_{\text{int}})
\star_{\text{H}}
\text{T}_{T}(\ssA_{T})
\bigg\ra_0 \\[6pt]
&=
\Big \la
\mathcal{S}^{-1}(\formg \ssS_{\text{int}})
\star_{\text{H}}
\text{T}_S(\ssA_S)
\star_{\text{H}}
\mathcal{S}(\formg \ssS_{\text{int}})
\star_{\text{H}}
\text{T}_T(\ssA_T)
\Big \ra_0 \\[6pt]
&=
\dfrac{1}
{ \Big \la \mathcal{S}(\formg\ssS_{\text{int}}) \Big \ra_0 }
\Big\la \text{T}_S(\ssA_S)\star_{\text{H}} \mathcal{S}(\formg\ssS_{\text{int}})\star_{\text{H}}
\text{T}_T(\ssA_T)
\Big\ra_0.
\end{align*}
For the final step, we used vacuum stability \textcolor{blue}{(\refeqq{eq:vacstab})}.
\end{proof}
\bibliographystyle{alpha}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
The effect of electron-electron interactions on transport properties of disordered systems has attracted a lot of attention since the early prediction \cite{AndersonMIT} and subsequent renormalisation group (RG) analysis \cite{AALR} of the disorder-driven metal-insulator transition. Weak localization corrections to diffusive transport \cite{AALR,GLKh} are enhanced by the Coulomb interactions, signalling further localization of the system \cite{AAL:81}. Interactions in strongly localized systems lead to a metal-insulator transition at a finite temperature proportional to the interaction strength (many-body localization \cite{Basko2006}), suggesting that the interaction favors delocalization. Experiments on very clean two-dimensional systems show signatures of a metal-insulator transition driven by a change of the interaction strength (for a review, see Ref.~\cite{Kravchenko1}).
A theoretical description of the interaction effects in a generic disordered electron system requires non-perturbative approaches, which are most developed for one-dimensional systems, where interactions can be treated nonperturbatively in terms of the Luttinger liquid (LL) theory \cite{Giamarchi}. This makes it tempting to tackle transport in higher-dimensional anisotropic disordered strongly-correlated systems by making use of the LL model. A promising approach describing rich non-Fermi-liquid physics is to consider an anisotropic system as an array of coupled one-dimensional wires \cite{TK,Po}. Previously, various exotic states were considered in the framework of the sliding Luttinger liquid (sLL) model,\cite{Sondhi2001,*Vishwanath2001,*MKL2001,*Kane2002} where the RG analysis of the impact of a single impurity embedded into a LL,\cite{KaneFis:92a,*KF:92b,*FurusakiNagaosa:93b} or of continuous disorder in a LL,\cite{GS} has been generalized to the multi-channel case. A subsequent analysis \cite{SBT} allowing for the renormalization of the interaction by disorder has shown that the conducting state does not survive at zero temperature for any realistic long-range inter-wire interactions. The only quantum phase transition found in Ref.~\onlinecite{SBT} was a superconductor-insulator one, with the boundary distorted by disorder.
A single Luttinger liquid cannot describe a metal-insulator transition at high temperatures where quantum interference does not manifest itself. The only phase transition that is known to happen is the Berezinskii--Kosterlitz--Thouless (BKT) one,\cite{Ber1,*Ber2,*KT:73} which takes place when the Luttinger parameter $K=3/2$ (see Ref.~\onlinecite{Giamarchi}), whereas repulsive electrons correspond to $K < 1$. The main advantage of the sLL model is that inter-wire repulsive interactions stabilize the conducting phase, bringing the condition for the BKT transition into the realm of repulsive fermions. The main disadvantage is that the sLL is unstable (for some non-universal system parameters) with respect to perturbations like charge-density and Josephson couplings, along with single-particle inter-wire hybridization. These perturbations may become relevant and destroy the sLL phase at zero temperature.
In this paper, we focus on the phases existing in the presence of continuous disorder at \emph{finite temperatures} when only the disorder strength and the electron-electron interaction need to be renormalised, generalizing the recently developed method\cite{IVY:2013,*IVY:2017,*YGYL:2013,*KagLY} based on the scattering matrix formalism.
Since we are not interested in the regime of very low temperatures, where quantum interference governs the transport properties, we may assume that even relevant inter-wire perturbations do not blow up, provided that their bare values are sufficiently small and the temperature infra-red cutoff is relatively high. This is the case for an array of wires that are well separated from each other (weak hybridization means weak single- and two-particle Josephson couplings) and for an interaction potential between the wires that is smooth on the scale of the Fermi wavelength, so that the bare value of the inter-wire charge-density-wave interaction is small.
\section{The multichannel model}
The action describing a multichannel LL is a straightforward generalization of the standard LL action:\cite{Giamarchi}
\begin{align}
S&=\frac{1}{8\pi}\,\int{\rm d}x\,{\rm d}t\,{\bm \Psi}^{\rm T}\left[{\hat\tau}_1\,\partial_t +{\mathsf{V}}\,\partial_x\right]\,\partial_x{\bm\Psi}\notag\\[-6pt]\label{S}\\[-6pt]
&+iD\sum_i\int{\rm d}x\,{\rm d}t\,{\rm d}t'\,\cos \left[\theta_i(x,t)-\theta_i(x,t')\right]\,.\nonumber
\end{align}
Here the composite vector field ${\bm\Psi}^{\rm T}=({\bm\theta}^{\rm T}\,,{\bm\phi}^{\rm T})$
is built on two vector fields, ${\bm\theta}=(\theta_1\,,...\,,\theta_N)$ and ${\bm\phi}=(\phi_1\,,...\,,\phi_N)$, that parametrise density and current excitations in the $i^{{\mathrm{th}}} $ channel \((1\leq i\leq N)\) as \(\rho_i=\frac{1}{2\pi}\partial_x\theta_i\) and \(j_i=\frac{1}{2\pi}\partial_x\phi_i\);
${\hat\tau}_1$ is the Pauli matrix in $\{{\bm\theta}\,,{\bm\phi}\}$-space and ${\mathsf{V}}={\rm diag}[{\mathsf{V}}_+\,,{\mathsf{V}}_-]$ is a block-diagonal (in the same space) matrix describing density-density, ${\mathsf{V}}_+$, and current-current, ${\mathsf{V}}_-$, interactions. In the absence of inter-channel interactions these matrices would become diagonal, $\left[V_{\pm}\right]_{ij}\to\delta_{ij}\,v_i\,K_i^{\mp 1}$, with $v_i$ and $K_i$ being the velocities and Luttinger parameters in the $i^{{\mathrm{th}}} $ channel.
The nonlinear, cosine term in the action results from the standard replica averaging over disorder, as in the single-channel case \cite{Giamarchi}, albeit the replica indices are suppressed in Eq.~\eqref{S}. The averaging has been performed over the standard single-particle disorder potential with random backscattering amplitudes, $\xi_i(x)\,e^{i\theta_i(x)}+{\rm c.c.}$, with the white-noise correlations,
\begin{align}\label{corr}
\left<\xi_i(x)\,\overline{\xi}_j(x')\right>=\delta_{ij}\,D_ {i}\,\delta (x-x') ,
\end{align}
assuming the absence of inter-channel correlations.
\subsection{RG equations}
Following the standard procedure \cite{Giamarchi}, one derives the following RG equations for the disorder strength and interaction matrices (with $l$ being the logarithm of the scaling factor):
\begin{align}\label{RG}
\begin{aligned}
&\partial_{l} {\mathsf{D}}=(3-2\widetilde{\mathsf {K}})\,{\mathsf{D}}\,,\quad&& {\mathsf{D}}(l\!=\!0)=D_0\, {\mathbb{1}}\,;\\
&\partial_{l} {\mathsf{V}}_{-}^{-1}={\mathsf{D}}\,, && {\mathsf{V}}_{-}(l\!=\!0)={\mathsf{V}}_{-}^{(0)}\,.
\end{aligned}
\end{align}
The density-density interaction matrix, $V_+$, does not renormalize: $\partial_{l}{\mathsf{V}}_{+}=0$. In Eq.~\eqref{RG}, we have introduced matrices ${\mathsf{D}}$ and $\widetilde{{\mathsf{K}}}$ which are diagonal in the channel space: ${\mathsf{D}}={\rm diag}\{{ D_1\,,\,...\,,D_N}\} $ and $\widetilde{\mathsf {K}}={\rm diag} \{{K_{11}\,,\,...\,, K_{NN}}\} $. The elements of ${\mathsf{D}}$ describe the disorder strength in the appropriate channel, Eq.~\eqref{corr}, while the elements of $\widetilde{{\mathsf{K}}}$ are the diagonal elements of the \emph{Luttinger matrix}, ${\mathsf{K}}$, which is defined by the equation
\begin{equation}\label{K}
{\mathsf{K}}\,{\mathsf{V}}_{+}\,{\mathsf{K}}={\mathsf{V}}_{-}\,.
\end{equation}
In the $N$-channel RG equations, the Luttinger matrix plays the role similar to that of the Luttinger parameter in the single-channel LL (see Ref.~\onlinecite{IVY:2013} for details).
Equation \eqref{K} closes the set of RG equations \eqref{RG}. Below we build the phase diagrams corresponding to these equations for two particular cases.
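For symmetric positive-definite interaction matrices, Eq.~\eqref{K} has a unique positive solution, the matrix geometric mean ${\mathsf{K}}={\mathsf{V}}_+^{-1/2}\big({\mathsf{V}}_+^{1/2}{\mathsf{V}}_-{\mathsf{V}}_+^{1/2}\big)^{1/2}{\mathsf{V}}_+^{-1/2}$. Below is a numerical sketch for the $2\times 2$ case, where the matrix square root has a closed form; the example matrices are illustrative choices, not taken from the paper.

```python
import math

# Sketch: solve K V+ K = V- for symmetric positive-definite 2x2
# matrices via the matrix geometric mean
#   K = V+^{-1/2} (V+^{1/2} V- V+^{1/2})^{1/2} V+^{-1/2}.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sqrtm(M):
    """Closed-form square root of a 2x2 SPD matrix:
    sqrt(M) = (M + s*I)/t, with s = sqrt(det M), t = sqrt(tr M + 2s)."""
    s = math.sqrt(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    t = math.sqrt(M[0][0] + M[1][1] + 2.0 * s)
    return [[(M[i][j] + (s if i == j else 0.0)) / t for j in range(2)]
            for i in range(2)]

def inv(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def luttinger_matrix(V_plus, V_minus):
    r = sqrtm(V_plus)                       # V+^{1/2}
    r_inv = inv(r)                          # V+^{-1/2}
    inner = sqrtm(mul(r, mul(V_minus, r)))  # (V+^{1/2} V- V+^{1/2})^{1/2}
    return mul(r_inv, mul(inner, r_inv))
```

In the decoupled diagonal case $\left[V_{\pm}\right]_{ij}=\delta_{ij}\,v_i\,K_i^{\mp 1}$ this reduces to $K_{ii}=K_i$, as it must.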
\section{Lattice of Identical Channels}
Here we consider the multi-channel model where identical channels (wires) are packed into a 2D or 3D array in such a way that the cross-section perpendicular to the length of the wires forms a lattice ${\cal L}$. All matrix elements of the interaction matrices ${\mathsf{V}}_{\pm}$ may be labelled by the spatial positions ${\bm R}$ of wires in the perpendicular plane, ${\mathsf{V}}_{\pm}\to V_{\pm}({\bm R},{\bm R}')$, where ${\bm R}\subset {\cal L}$. If the lattice is a Bravais one, the matrix elements of ${\mathsf{V}}_{\pm}$ become scalars, $V_{\pm}({\bm R}-{\bm R}')$ (assuming translation invariance and periodic boundary conditions). For non-Bravais lattices they would become matrices in the space of inequivalent wires, which we do not consider here. Equation \eqref{K} for the Luttinger matrix transforms to
\begin{equation}\label{F}
\sum_{{\bm R}_2,{\bm R}_3\subset{\cal L}}\,K({\bm R}_{12})\,V_{+}({\bm R}_{23})\,K({\bm R}_{34})=V_{-}({\bm R}_{14})\,,
\end{equation}
where ${\bm R}_{ij}={\bm R}_i-{\bm R}_j$. This equation is easily solved with the use of the discrete Fourier transform on the lattice ${\cal L}$,
\begin{align*}
F({\bm R})&=\int \frac{{\rm d}^dq}{(2\pi)^d}\,F({\bm q})\,e^{i{\bm q}{\bm R}}\,,\\
F({\bm q})&=\sum_{{\bm R}\subset{\cal L}}\,F({\bm R})\,e^{-i{\bm q}{\bm R}}\,.
\end{align*}
Here and elsewhere in this section the momentum integration is performed over the Brillouin zone of the wire lattice. The solution to the Eq.~\eqref{F} has the form
\begin{eqnarray}\label{Kr}
K({\bm r})=\int\frac{{\rm d}^dq}{(2\pi)^d}\,\sqrt{\frac{V_{-}({\bm q})}{V_{+}({\bm q})}}\,e^{i{\bm q}{\bm r}}\,,
\end{eqnarray}
with ${\bm r}={\bm R}-{\bm R}'\subset{\cal L}$ and $V_{\pm}({\bm q})$ being the discrete Fourier transform of interaction potentials $V_{\pm}({\bm r})$.
Now the diagonal matrix $\widetilde{{\mathsf{K}}}$ in Eq.~\eqref{RG} is reduced to $\widetilde{K}{\mathbb{1}}$ where the effective Luttinger parameter ${\widetilde{K}}$
is given by $\widetilde{K}\equiv K({\bm r=\bm0})$.
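A numerical sketch of Eq.~\eqref{Kr} at ${\bm r}={\bm 0}$ for a square lattice of wires follows; the model potentials are illustrative choices, not taken from the paper. For a toy nearest-neighbour inter-wire density-density coupling the estimate comes out above the decoupled value $K=1$, in line with the statement below that inter-wire interactions can raise $\widetilde{K}$ above the single-wire Luttinger parameter.

```python
import math

# Numerical sketch of the effective Luttinger parameter
#   K~ = \int_BZ d^2q/(2*pi)^2 sqrt(V_-(q) / V_+(q))
# over the Brillouin zone [-pi, pi]^2 of a square lattice of wires.
# The model potentials below are illustrative, not from the paper.

def effective_K(V_minus, V_plus, n=100):
    """Midpoint-rule estimate of K(r = 0)."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        qx = -math.pi + (i + 0.5) * h
        for k in range(n):
            qy = -math.pi + (k + 0.5) * h
            total += math.sqrt(V_minus(qx, qy) / V_plus(qx, qy))
    return total * h * h / (2.0 * math.pi) ** 2

def V_minus_model(qx, qy):
    return 1.0                      # single-wire v*K with v = K = 1

def V_plus_model(qx, qy):
    # single-wire v/K plus a nearest-neighbour inter-wire coupling g = 0.3
    return 1.0 + 0.3 * (math.cos(qx) + math.cos(qy))
```

For momentum-independent $V_\pm$ the integral reproduces $\sqrt{V_-/V_+}$ exactly, which serves as a check of the quadrature.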
Assuming equal strength of bare disorder in each wire, $D_i=D$, the matrix RG equations \eqref{RG} are reduced to the two RG equations for the disorder strength and the deviation, $c$, of the Fourier transform of the current-current interaction from its bare value
\begin{align}\label{RGq}
\begin{aligned}
&\partial_{l} {{D}}=\left[3-2\widetilde{ {K}}(c)\right]\,{{D}}\,,\quad&& \partial_{l} c={{D}}\,;\\
&{{D}}(l\!=\!0)=D_0\,, && c({l\!=\!0})=0\,,
\end{aligned}
\end{align}
with $c({l})$ defined by
\begin{equation}\label{c}
c(l)\equiv V^{-1}_-({\bm q};l)-V^{-1}_0({{\bm{q}}})\,,
\end{equation}
where $V_0({{\bm{q}}})\equiv V_-({\bm q},l\!=\!0)$ is the bare value of the current-current interaction.
The closure to the RG equations is provided by the explicit dependence of $\widetilde{K}\equiv \widetilde{K}({\bm {r}={\bm{0}}})$ on $c$:
\begin{equation}\label{k}
\widetilde{K}(c)=\int\frac{{\rm d}^dq}{(2\pi)^d}\,V_{+}^{-1/2}({\bm q})\,\left[V_0^{-1}({{\bm{q}}})+c({l})\right]^{-1/2}\!.
\end{equation}
The effective Luttinger parameter $\widetilde{K}(c)$ is a monotonically decreasing function of $c$ and, therefore, if its bare value $\widetilde{K}_0\equiv \widetilde{K}(c\!=\!0)<3/2$, the disorder will always grow under renormalisation and we always end up in the insulating regime. From now on, we will be interested only in the case $\widetilde{K}_0>3/2$.
The BKT transition takes place at $c=c^{*}$ with the critical value $c^*$ found from $
\widetilde{K}(c^{*})=\frac{3}{2}\,.
$
The analysis of the RG flow in these terms is possible only in the vicinity of the BKT critical value $\widetilde{K}=3/2$. {This} means that the bare value $\widetilde{K}_0$ should also be close to $3/2$. In this case $c^{*}\ll 1$ and we may approximate
\begin{equation}
\widetilde{K}(c)\approx \widetilde{K}_0-\kappa\,c\,,
\end{equation}
where
\begin{equation}\label{kappa0}
\kappa=-\left.\frac{{\rm d}\widetilde{K}(c)}{{\rm d}c}\right|_{c\!=\!0}=\frac{1}{2}\,\int\frac{{\rm d}^dq}{(2\pi)^d}\,\frac{V_{0}^{3/2}(\bm q)}{V_{+}^{1/2}({\bm q})}\,.
\end{equation}
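As a consistency check, Eq.~\eqref{kappa0} can be compared against a numerical derivative of Eq.~\eqref{k}. A one-dimensional toy version (the model potentials are illustrative choices, not taken from the paper):

```python
import math

# One-dimensional toy check that the closed form
#   kappa = (1/2) \int dq/(2*pi) V_0(q)^{3/2} / V_+(q)^{1/2}
# equals -dK~/dc at c = 0.

def bz_average(f, n=2000):
    """Midpoint rule over the 1d Brillouin zone [-pi, pi]."""
    h = 2.0 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) for i in range(n)) * h / (2.0 * math.pi)

def V_plus(q):
    return 1.5 + 0.4 * math.cos(q)

def V_0(q):
    return 1.0 + 0.2 * math.cos(q)

def K_eff(c):
    """Effective Luttinger parameter as a function of the shift c."""
    return bz_average(lambda q: V_plus(q) ** -0.5 * (1.0 / V_0(q) + c) ** -0.5)

kappa = 0.5 * bz_average(lambda q: V_0(q) ** 1.5 / V_plus(q) ** 0.5)
```

A central finite difference of `K_eff` at $c=0$ reproduces `kappa` to high accuracy, confirming that the $V_0^{-1}$ (rather than $V_0$) enters Eq.~\eqref{k}.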
The critical value $c^{*}$ is given in terms of the initial detuning from the transition:
\begin{equation}
c^{*}=\frac{\delta}{\kappa}\ll 1\,,\quad \delta\equiv \widetilde{K}_0-\frac{3}{2}\geq 0\,.
\end{equation}
The BKT RG equations acquire the standard form,
\begin{align}\label{x}
\partial_l D&=2x\,D\,, &
\partial_l x&=\kappa\,D\, , & x&\equiv \kappa\,c-\delta
\end{align}
with the initial condition $ x(l\!=\!0)=-\delta$ in terms of $x$.
The RG flows in the $(x\,,D)$-plane obey the equation
\begin{equation}
\kappa\, \frac{{\rm d}D}{{\rm d}x}=2x
\end{equation}
that defines the family of trajectories
$
D= {x^2}{\kappa}^{-1} +E\,,
$
with the constant $E$ being defined by the initial values, $
E=D_0- {\delta^2}{\kappa}^{-1}
$.
The boundary between the insulating and conducting phases corresponds to $E=0$, i.e.\ $
\delta =\sqrt{\kappa\,D_0}\equiv y$:
the system is conducting for $\delta>\sqrt{\kappa\,D_0}$ and insulating for $\delta<\sqrt{\kappa\,D_0}$.
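The flow equations \eqref{x} are easy to integrate numerically. The sketch below (with illustrative parameters and $\kappa=1$) checks that $E=D-x^2/\kappa$ is conserved along the flow, and that $\delta>\sqrt{\kappa D_0}$ flows to $D\to 0$ while $\delta<\sqrt{\kappa D_0}$ runs away to strong disorder:

```python
# Numerical sketch of the BKT flow dD/dl = 2 x D, dx/dl = kappa D,
# starting from D(0) = D0, x(0) = -delta.  Illustrative parameters.

def flow(D0, delta, kappa=1.0, l_max=20.0, dl=1e-3):
    """Integrate the flow with classical RK4; stop on runaway."""
    def rhs(D, x):
        return 2.0 * x * D, kappa * D
    D, x = D0, -delta
    for _ in range(int(l_max / dl)):
        k1D, k1x = rhs(D, x)
        k2D, k2x = rhs(D + 0.5 * dl * k1D, x + 0.5 * dl * k1x)
        k3D, k3x = rhs(D + 0.5 * dl * k2D, x + 0.5 * dl * k2x)
        k4D, k4x = rhs(D + dl * k3D, x + dl * k3x)
        D += dl * (k1D + 2 * k2D + 2 * k3D + k4D) / 6.0
        x += dl * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        if D > 1e3:   # strong-disorder runaway
            break
    return D, x
```

Along each trajectory the `energy' $E=D-x^2/\kappa$ is conserved to numerical accuracy, so the separatrix $E=0$ is reproduced by the integrator.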
These RG flows are illustrated in Fig.~\ref{fig:KTflow}, where the phase boundary is clearly seen. The effects of inter-wire interactions are in the definitions of the parameters, given explicitly in Eqs.~\eqref{k}--\eqref{kappa0}.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{KTflow}
\caption{(color online) BKT flow diagram for the wire lattice, with \(y=\sqrt{\kappa D}\) characterizing the disorder and \(x\) the deviation from the critical value of the Luttinger parameter, \(\widetilde{K}_0=\frac{3}{2}\). The equations have a conserved `energy' \(E\). If \(E>0\), or \(E<0\) and \(x>0\), the system flows to strong disorder; only if \(E<0\) and \(x<0\) does it flow to a conducting state.}
\label{fig:KTflow}
\end{center}
\end{figure}
The position of the phase-separation boundary is mainly dictated by the interaction, $\widetilde{K}_0=\frac{3}{2}$, which also governs renormalisation of the (weak) disorder strength while the feedback from disorder to interaction is negligible. One can show\cite{CKY} that the inter-wire long-range interaction results in $\widetilde{K}_0> K$ (where $K$ is the standard single-wire Luttinger parameter), so that it favors {a} conducting state. The effective Luttinger parameter $\widetilde{K}_0$ depends on both $K$ and inter-wire interaction parameters and can reach {the} value $3/2$ even for $K<1$ corresponding to repulsive fermions. Therefore, one should expect a competition between the weak inter-wire long-range interaction and weak disorder leading to the BKT metal-insulator transition for the wire lattice.
\begin{figure*}
\begin{center}
{\includegraphics[width=0.45\textwidth]{2ChanRGCC.pdf}}\qquad\quad
{\includegraphics[width=0.45\textwidth]{2ChanRGIC.pdf}}
\vspace*{5pt}
\hspace*{11mm}{\large (a)} \hspace*{85mm}{\large (b)}
\caption{\label{fig:CC}{Phase diagram of a two-channel disordered Luttinger liquid. Differently shaded regions I (green online), II (amber) and III (red) correspond to three phases of the uncoupled channels: $cc$ (I), $ic$ ({II}) and $ii$ ({III}). Three pairs of trajectories that show the effect of the inter-channel coupling, $\kappa$, were calculated for \(\kappa =0.4\kappa_{11},\kappa_{22}=0.25\kappa_{11}\) in Eq.~\eqref{kappa1}, and \(y_0=\left(4\kappa_{11}\right)^{-1/2}\) in Eq.~\eqref{Kondo}, with \(\kappa_{11}\) defining the scale. In each pair, the dashed and dotted lines correspond to channels 1 and 2 respectively. The pair started with circles in (a) corresponds to a system with energy \(E\approx2.1\kappa_{11}\) deep inside the (cc) region where the coupling does not lead to qualitative changes. The pair started with squares in (a) corresponds to \(E\approx 0.07\kappa_{11}\) which is ostensibly in the (cc) region for an uncoupled system; the coupling generates a large enough negative energy shift to push the system into the (ii) region. The pair in (b) is for \(E\approx 0.91\kappa_{11}\) when the dashed trajectory is in the insulating region of channel 1. As the disorder there begins to grow, the dotted trajectory for channel 2 that remains in the conducting region for $\kappa=0$ is dragged into insulating region with it. This shows how the mixed (ic) phase is destroyed by inter-channel coupling.}}
\end{center}
\end{figure*}
\section{Two distinct channels}
We limit our analysis of non-identical channels with different (and uncorrelated) disorder strengths to the case of two channels. Then all matrices in the RG equations \eqref{RG} are $2\times 2$.
Similar to the approach used in the previous section for the $N$-channel problem, we now introduce two renormalizable scalars, $c_{1,2}$, describing the deviation of the current-current interaction matrix from its bare value, ${\mathsf{V}}_0\equiv {\mathsf{V}}_{-}({l\!=\!0})$:
\begin{equation}
{\mathsf{V}}^{-1}_-(l)={\mathsf{V}}_0^{-1}+\mathsf{c}\,,\qquad \mathsf{c}=\left(
\begin{array}{cc}
c_1 & 0 \\
0 & c_2 \\
\end{array}
\right)\,.
\end{equation}
The RG equations \eqref{RG} in the new variables become
\begin{align}\label{RGc}
\begin{aligned}
&\partial_{l} {\mathsf{D}}=\left[3-2\widetilde{\mathsf {K}}({\mathsf{c}})\right]\,{\mathsf{D}}\,,\quad&& {\mathsf{D}}(l\!=\!0)=D_0\, {\mathbb{1}}\,;\\
&\partial_{l} \mathsf{c}={\mathsf{D}}\,, && \mathsf{c}(l\!=\!0)=0.
\end{aligned}
\end{align}
Again, ${\widetilde{\mathsf{K}}}={\mathrm{diag}}\{{K_{11}, K_{22} }\} $, with the
two effective Luttinger parameters $K_{ii}(c_1,\,c_2)$ being the diagonal elements of the Luttinger matrix ${\mathsf{K}}$, Eq.~\eqref{K}. This equation can now be rewritten via ${\mathsf{V_0}}$ and ${\mathsf{c}}$ as
\begin{equation}\label{KVK}
\mathsf{K}\,\mathsf{V}_+\,\mathsf{K}\,\left[\mathsf{V}_0^{-1}+\mathsf{c}\right]= {\mathbb{1}}\,.
\end{equation}
Assuming the system to be initially in the vicinity of a generalized BKT transition in each channel, i.e.\ for both $i=1,2$ one has $|K_{ii}^{(0)} -3/2|\ll 1$, one can see that the critical values at which the BKT transition occurs,
\begin{equation}
K_{ii}\left(c^*_1,c^*_2\right)=\frac{3}{2}\,,\qquad i=1,2\,,
\end{equation}
are small, $|{ c}^{*}_i|\ll 1$.
Therefore, in the vicinity of the transition one may use the expansion
\begin{align}\label{kappa}
K_{ii}(c_1,c_2)&\approx K_{ii}^{(0)}-\sum_j\!\kappa_{ij}\, c_j\,,\;
\quad \kappa_{ij}\equiv -\left.\frac{\partial K_{ii}}{\partial c_j}\right|_{c_1=c_2=0}.
\end{align}
The matrix of derivatives, $\{\kappa_{ij}\}$, is a symmetric positive-definite matrix with positive matrix elements,
\begin{align}\label{kappa1}
\begin{aligned}
\kappa_{ii}&=T^{-1}\,\left[(1-k^2)\,V_{ii}^2+{\rm det V}\right]\geq 0\,,\\
\kappa&=T^{-1}\,\left[(1-k^2)\,V_{12}^2+k^2\,{\rm det V}\right]\geq 0\,,
\end{aligned}
\end{align}
where $\kappa\equiv \kappa_{12}=\kappa_{21}$ (see Appendix A for the derivation).
The RG equations \eqref{RGc} turn into
\begin{eqnarray}\label{RGxs}
\begin{aligned}
&\partial_{l} D_i=2x_i\,D_i\,,\qquad && \partial_{l} x_i= \sum_{j=1,2}\,\kappa_{ij}\,D_j\,;\\
&D_i(l\!=\!0)=D_0\,, && x_i(l\!=\!0)=-\delta_i\,,
\end{aligned}
\end{eqnarray}
where we have introduced the notations
\begin{equation}
x_i=\sum_j\kappa_{ij}\,c_j-\delta_i\,,\qquad \delta_i=K_{ii}^{(0)}-\frac{3}{2}\,,
\end{equation}
to present them in the form familiar from the previous section. The substitution $D_i=y_i^2$ reduces Eqs.~\eqref{RGxs} to the pair of coupled BKT equations:
\begin{align}\label{Kondo}
\begin{aligned}
&\partial_{l} y_i = x_i\,y_i\,, &&
\partial_{l} x_i = \sum_j \kappa_{ij}\,y^2_j\, ,\\
&y_i(l\!=\!0)=y_0\equiv\sqrt{D_0}\,, && x_i(l\!=\!0)=-\delta_i\,.
\end{aligned}
\end{align}
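These coupled BKT equations are readily integrated numerically. Below is a minimal sketch (ours, not part of the derivation) that integrates Eqs.~\eqref{Kondo} with a fixed-step Runge-Kutta scheme; the coupling values follow the caption of Fig.~\ref{fig:CC}, while the initial conditions are illustrative. It reproduces the two generic behaviors: for $\delta_i>0$ and weak disorder the $y_i$ flow to zero (conductor), while for $\delta_i<0$ they grow (insulator).

```python
import numpy as np

# Couplings as quoted in the caption of Fig. CC: kappa_11 = 1 sets the scale,
# kappa_12 = kappa_21 = 0.4, kappa_22 = 0.25 (illustrative values).
kappa = np.array([[1.0, 0.4],
                  [0.4, 0.25]])

def rhs(s):
    """RG flow of Eq. (Kondo): dx_i/dl = sum_j kappa_ij y_j^2, dy_i/dl = x_i y_i."""
    x, y = s[:2], s[2:]
    return np.concatenate([kappa @ y**2, x * y])

def flow(x0, y0, l_max, dl=1e-3):
    """Fixed-step RK4 integration, identical initial data in both channels."""
    s = np.array([x0, x0, y0, y0])
    for _ in range(int(round(l_max / dl))):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dl * k1)
        k3 = rhs(s + 0.5 * dl * k2)
        k4 = rhs(s + dl * k3)
        s = s + (dl / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

# delta_i > 0 (x_i(0) = -0.5) with weak disorder: conducting side, y_i decays
cond = flow(x0=-0.5, y0=0.05, l_max=8.0)
# delta_i < 0 (x_i(0) = +0.1): insulating side, y_i grows
insul = flow(x0=0.1, y0=0.5, l_max=1.5)
print(cond[2:], insul[2:])
```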
There is only one integral of motion (see Appendix B for details), given in terms of ${\bm x}{=}(x_1,x_2)$ and ${\bm y}{=}(y_1,y_2)$ by
\begin{equation}\label{E0}
E={\bm x}\,{{\mathsf{m}}}\,{\bm x}-{\bm y}^2\,,\quad{\mathsf{m}}={\hat \kappa}^{-1}=\left(
\begin{array}{cc}
m_1 & -m \\
-m & m_2 \\
\end{array}
\right)
\,,
\end{equation}
where $m_{1,2}$ and $m$ are positive.
In the absence of the inter-channel coupling, $m=0$, Eqs.~\eqref{Kondo} would describe two independent systems undergoing the BKT transition. They are equivalent to two uncoupled Kondo impurities, each having the integral of motion (`energy'),
\begin{equation}\label{E}
E_i=m_i\,x_i^2-y_i^2\,,
\end{equation}
with the exchange constants $J_i^{\perp}\equiv y_i$ and $J_i^{\parallel}\equiv x_i$. Then $\delta_i<0$ in Eq.~\eqref{RGxs} corresponds to the anti-ferromagnetic Kondo impurity ($J_i^{\parallel}>0$), where all the RG flows go towards the strong-coupling fixed point (the unitary limit of Kondo screening, when $J_i^{\perp}\to\infty$ at the Kondo temperature), corresponding to the insulator. The case of $\delta_i>0$ corresponds to the ferromagnetic Kondo impurity, $J_i^{\parallel}<0$, where the flows go to the strong-coupling fixed point for $E_i<0$, but to the weak-coupling fixed point, corresponding to the conductor, for $E_i>0$. For completeness, these well-known results, including explicit expressions for the RG flows, are recapped in Appendix~B.
Let us analyze the impact of a weak inter-channel coupling, $m\ll 1$, when Eqs.~\eqref{Kondo} describe two coupled Kondo impurities. In this case, there is only one integral of motion,
Eq.~\eqref{E0},
while the former integrals of motion, Eq.~\eqref{E}, become `adiabatic invariants', i.e.\ slowly varying functions of the `time' $l$:
\begin{equation}\label{dotE}
\dot{E}_i=2m\,x_i\,\dot{x}_{-i}\,.
\end{equation}
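To illustrate the distinction, one can check numerically (our own sketch, with illustrative weak-coupling parameters) that $E$ of Eq.~\eqref{E0} is conserved along the flow to the accuracy of the integrator, while the per-channel $E_i$ of Eq.~\eqref{E} drift slowly:

```python
import numpy as np

# A weakly coupled example: the off-diagonal element of m = kappa^{-1} is small.
kappa = np.array([[1.0, 0.05],
                  [0.05, 0.9]])
m = np.linalg.inv(kappa)   # matrix m of Eq. (E0); off-diagonal entries are -m

def rhs(s):
    x, y = s[:2], s[2:]
    return np.concatenate([kappa @ y**2, x * y])

def E_full(s):
    """Exact integral of motion, E = x.m.x - y^2, Eq. (E0)."""
    x, y = s[:2], s[2:]
    return x @ m @ x - y @ y

def E_single(s, i):
    """Per-channel 'energy' E_i = m_i x_i^2 - y_i^2, Eq. (E); only adiabatic."""
    return m[i, i] * s[i]**2 - s[2 + i]**2

s = np.array([0.1, 0.1, 0.5, 0.5])
E0, E1_0 = E_full(s), E_single(s, 0)
dl = 1e-3
for _ in range(1500):   # integrate to l = 1.5 with fixed-step RK4
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dl * k1)
    k3 = rhs(s + 0.5 * dl * k2)
    k4 = rhs(s + dl * k3)
    s = s + (dl / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(E_full(s) - E0))         # ~0: E is conserved along the flow
print(abs(E_single(s, 0) - E1_0))  # finite: E_1 drifts, it is only adiabatic
```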
In Appendix C, we show how to use these invariants to construct the RG flows starting from the uncoupled case. Here we summarize the results of this consideration.
If both $\delta_i \leq 0$ in Eq.~\eqref{RGxs}, the system flows to the strong-coupling Kondo regime, i.e.\ an insulator. This means that the $(ii)$ phase, where both channels were insulators for $m=0$, is not qualitatively affected by the coupling between the channels but just expands in the phase space.
If both $\delta_i\geq 0$, the RG flows depend on the bare values of the adiabatic invariants $E_i(0)$. They start with a negative derivative, Eq.~\eqref{dotE}, bending upwards in comparison to those without the coupling. The flows are illustrated in Fig.~\ref{fig:CC}, where the trajectories are numerically calculated for several values of the parameters. Note that the critical initial value of $E_i$ required to stay on a conducting trajectory (leading to $y_i(l=\infty)=0$) increases, so that the $(cc)$ phase shrinks.
The mixed $(ic)$ phase turns out to be totally unstable, as illustrated in Fig.~\ref{fig:CC}. Since the RG trajectories in the insulating channel flow towards the strong-coupling Kondo regime, the negative `energy' shift in the initially conducting channel arising from the coupling becomes sufficient to drag that channel into the negative-energy region (see Appendix C for details), finally making it insulating as well. Thus the intermediate $(ic)$ phase eventually disappears due to the inter-channel coupling, while the BKT transition between the $(ii)$ and $(cc)$ phases is shifted towards the insulator.
\section{Conclusion}
We have constructed a generic description of a `high'-temperature Berezinskii-Kosterlitz-Thouless transition in a multi-channel array of coupled Luttinger liquids. We have focused on two cases: a lattice of identical LL wires, and two distinct LL channels. The inter-channel coupling makes these transitions in principle observable not only in systems with locally repulsive bosons but also in systems with repulsive fermions, where no such transition exists for a single LL channel.
\section*{Acknowledgement} IVY gratefully acknowledges support from the Leverhulme Trust via Grant No.\ RPG-2016-044 and hospitality extended to him at the Center for Theoretical Physics of Complex Systems, Daejeon, South Korea.
\subsection{Batch-size effect}
Here we experiment with the effect of the batch size on the performance of the HSIC-loss. The empirical HSIC value estimated on $m$ samples has a bias of $O(1/m)$. This suggests that training with a small batch size might have a strong negative effect. However, in practice we have found that beyond the obvious computational cost of large batches (which scales as $m^2$), using larger mini-batches leads to worse performance in terms of accuracy. Figure~\ref{batch_size_effect} shows that performance on the synthetic data setup using $m=64$ or $128$ is considerably worse than with $m=32$. We leave investigating the reasons for this interesting behavior, and the dynamics of gradients of HSIC in general, to future work.
\begin{figure}[]
\centering
\includegraphics[scale=0.6]{batch_size_effect.pdf}
\caption{Effect of batch-size on performance on the linear regression experiment.}
\label{batch_size_effect}
\end{figure}
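The $O(1/m)$ bias itself is easy to exhibit. Below is a small sketch (ours; the Gaussian kernels and parameters are illustrative, not the ones used in the experiments) that averages the biased empirical HSIC over independent pairs $(X,Y)$, for which the population HSIC is zero:

```python
import numpy as np

def hsic_emp(x, y, sigma=1.0):
    """Biased empirical HSIC, tr(K H L H)/(m-1)^2, with Gaussian kernels."""
    m = len(x)
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
    L = np.exp(-(y[:, None] - y[None, :])**2 / (2 * sigma**2))
    H = np.eye(m) - np.ones((m, m)) / m   # centering matrix
    return np.trace(K @ H @ L @ H) / (m - 1)**2

rng = np.random.default_rng(0)
# x and y independent, so the population HSIC is zero;
# the estimator's mean is O(1/m) and shrinks with the batch size
ests = {m: np.mean([hsic_emp(rng.standard_normal(m), rng.standard_normal(m))
                    for _ in range(200)])
        for m in (32, 256)}
print(ests)
```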
\subsection{Theorems' implications}
In unsupervised covariate shift scenarios, there is always a tradeoff between how different we allow the target distribution to be, versus the guarantees we can give for the target performance of a model trained on the source. In this work, we focus on specifying this tradeoff in terms of norms of RKHS spaces: the norm controls the complexity of the allowable target distribution set, and also multiplies components of the bounds. In the discussion of the theorems we argued that for $f^\star$ and functions near it, our bounds are much tighter than bounds based on the infinity norm of the density ratio. Indeed, note that plugging $h=f^\star$ into Theorem \ref{robustness} we get:
\begin{align*}
&\sup_{\substack{P_\text{target} \in \mathcal{Q}\\\ell\in\mathcal{G}}} \mathbb{E}_{x\sim P_\text{target}} [\ell(Y-f^\star(X))] - \mathbb{E}_{x\sim P_\text{source}} [\ell(Y-f^\star(X))] \\
&\le 0,
\end{align*}
which is a tight bound. When attempting to give a similar result using the infinity-norm of $\frac{P_\text{target}}{P_\text{source}} - 1$, the bounds become far from tight:
\begin{align*}
&\sup_{\substack{P_\text{target} \in \mathcal{Q}\\\ell\in\mathcal{G}}} \mathbb{E}_{x\sim P_\text{target}} [\ell(Y-f^\star(X))] - \mathbb{E}_{x\sim P_\text{source}} [\ell(Y-f^\star(X))] \\
&=\sup_{\substack{P_\text{target} \in \mathcal{Q}\\\ell\in\mathcal{G}}} \mathbb{E}_{x\sim P_\text{source}} \left[\left(\frac{P_\text{target}}{P_\text{source}}-1\right)\ell(Y-f^\star(X))\right] \\
&\le C \cdot \sup_{\substack{P_\text{target} \in \mathcal{Q}\\\ell\in\mathcal{G}}} \mathbb{E}_{x\sim P_\text{source}} [\ell(Y-f^\star(X))],
\end{align*}
where $C$ bounds the infinity norm of $\frac{P_\text{target}}{P_\text{source}} - 1$. The fact that the HSIC bound is tighter justifies its use as a loss function better suited for dealing with unsupervised covariate shift than simply minimizing a standard loss such as the MSE.
This tighter bound comes at a cost: there are many cases where the infinity norm of $\frac{P_\text{target}}{P_\text{source}} - 1$ is bounded but the Hilbert norm is unbounded, meaning that the density ratio is not within the RKHS. Consider $p(k) = a/k^2$, $q(k) = b/(k+1)^2$ for $k=1,2,\ldots$, where $a$ and $b$ are the normalizing constants and $k$ ranges over the positive integers. In this case $\|q/p-1\|_\infty$ is bounded by roughly $1.55$, but the $\ell_2$ norm diverges to infinity. This means that our set of distributions $\mathcal{Q}$ is a strictly smaller subset of the set of infinity-norm bounded ratios, while our bounds are correspondingly more informative.
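A quick numerical check of this example (our own sketch, with the normalizing constants computed from truncated sums):

```python
# p(k) = a/k^2 and q(k) = b/(k+1)^2 on k = 1, 2, ...; truncate the sums at N
# (the tails are negligible for this illustration).
N = 10**5
a = 1.0 / sum(1.0 / k**2 for k in range(1, N + 1))
b = 1.0 / sum(1.0 / (k + 1)**2 for k in range(1, N + 1))

def ratio(k):
    """q(k)/p(k) - 1 = (b/a) * (k/(k+1))^2 - 1."""
    return (b / a) * (k / (k + 1))**2 - 1.0

sup_est = max(abs(ratio(k)) for k in range(1, N + 1))   # ~1.55: bounded sup norm

# unweighted l2 norm: the terms tend to a nonzero constant, so partial sums diverge
s1 = sum(ratio(k)**2 for k in range(1, 10**4))
s2 = sum(ratio(k)**2 for k in range(1, 10**5))
print(sup_est, s1, s2)
```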
\subsection{Proofs}
Below we give the omitted proofs.
\subsubsection{Optimality proof}
We prove that $Y-h(X) \indep X$ implies that $Y\indep X \mid h(X)$.
\begin{proof}
First, we note that it is equivalent to prove that $Y-h(X)\indep X \mid h(X)$, since conditioning makes $h(X)$ just a constant. Now,
\begin{equation*}
\begin{split}
&P\left(Y-h(X)\mid X,h(X)\right) \\
&= P\left(Y-h(X)\mid X\right) \\
&= P\left(Y-h(X)\right) \\
&= P\left(Y-h(X)\mid h(X)\right),
\end{split}
\end{equation*}
where the last equality is due to the fact that if the residual is independent of $X$, it is also independent of $h(X)$.
\end{proof}
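The implication can also be verified by exact enumeration on a toy discrete model (our own construction, not from the text): take $X$ uniform on $\{0,1,2,3\}$, $h(x)=x \bmod 2$, and $Y=h(X)+\varepsilon$ with $\varepsilon$ an independent fair coin, so that $Y-h(X)\indep X$ by construction:

```python
from itertools import product
from fractions import Fraction

xs = [0, 1, 2, 3]
es = [0, 1]
h = lambda x: x % 2

# joint distribution over (x, y): all (x, eps) pairs are equally likely
joint = {}
for x, e in product(xs, es):
    y = h(x) + e
    joint[(x, y)] = joint.get((x, y), Fraction(0)) + Fraction(1, len(xs) * len(es))

def cond_y_given(pred):
    """P(Y = y | event on X), as a dict y -> probability (exact arithmetic)."""
    mass = sum(p for (x, y), p in joint.items() if pred(x))
    out = {}
    for (x, y), p in joint.items():
        if pred(x):
            out[y] = out.get(y, Fraction(0)) + p / mass
    return out

# Y | X = x matches Y | h(X) = h(x) for every x, i.e. Y is independent of X
# given h(X), exactly as the lemma states.
for x0 in xs:
    assert cond_y_given(lambda x: x == x0) == cond_y_given(lambda x: h(x) == h(x0))
```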
\subsubsection{Proof of lemma \ref{lem}}
\begin{proof}
We prove that $\sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}[s(X),t(Y)] \leq M_\mathcal{F} \cdot M_\mathcal{G}\sup_{s\in\Tilde{\mathcal{F}},t\in\Tilde{\mathcal{G}}}\mathbb{C}\text{ov}[s(X),t(Y)]$. The other direction of the inequality holds by the same argument.
Let $\{s_i\}_{i=1}^\infty \subset \mathcal{F}$ and $\{t_i\}_{i=1}^\infty \subset \mathcal{G}$ be such that
\begin{align*}
\lim_{i \to \infty}\mathbb{C}\text{ov}[s_i(X),t_i(Y)]=\sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}[s(X),t(Y)].
\end{align*}
Since for all $i$, $\frac{1}{M_\mathcal{F}}\cdot s_i \in \Tilde{\mathcal{F}}$ and likewise $\frac{1}{M_\mathcal{G}}\cdot t_i \in \Tilde{\mathcal{G}}$, we have that
\begin{align*}
&\sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}[s(X),t(Y)] \\
&= \lim_{i \to \infty}\mathbb{C}\text{ov}[s_i(X),t_i(Y)] \\
&= M_\mathcal{F}\cdot M_\mathcal{G}\lim_{i \to \infty}\mathbb{C}\text{ov}[\frac{1}{M_\mathcal{F}}s_i(X),\frac{1}{M_\mathcal{G}}t_i(Y)] \\
& \le M_\mathcal{F}\cdot M_\mathcal{G} \sup_{s\in\tilde{\mathcal{F}},t\in\tilde{\mathcal{G}}}\mathbb{C}\text{ov}[s(X),t(Y)].
\end{align*}
\end{proof}
\subsubsection{Proof of theorem \ref{lower bound}}
\begin{proof}
Expanding the HSIC-loss:
\begin{align*}
&\text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}) \\\ge&\sup_{s\in\Tilde{\mathcal{F}},t\in\Tilde{\mathcal{G}}}\mathbb{C}\text{ov}\left(s\left(X\right),t\left(Y-h\left(X\right)\right)\right) \\
=& \frac{1}{M_\mathcal{F} \cdot M_\mathcal{G}} \sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}\left(s\left(X\right),t\left(Y-h\left(X\right)\right)\right) \\
=& \frac{1}{M_\mathcal{F} \cdot M_\mathcal{G}}\sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}\left(s\left(X\right),t\left(f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right)\right),
\end{align*}
where the first inequality is due to Eq. \eqref{eq:hsiccoco}, and the first equality by Lemma \ref{lem}.
Now, by the assumption that $f^\star-h$ is in the closure of $\mathcal{F}$, there exist $s_{n}\in\mathcal{F}$ s.t. $ \left(s_{n}\right)_{n=1}^{\infty}$ converge to $f^{\star}-h$ under the infinity norm.
Taking $t$ to be the identity function, we get that for all $n \in \mathbb{N}$:
\begin{align*}
&\sup_{s\in\mathcal{F}}\mathbb{C}\text{ov}\left(s\left(X\right),f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right) \\ &\ge\mathbb{C}\text{ov}\left(s_{n}\left(X\right),t\left(f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right)\right),
\end{align*}
which implies that
\begin{align*}
&\sup_{s\in\mathcal{F},t\in\mathcal{G}}\mathbb{C}\text{ov}\left(s\left(X\right),t\left(f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right)\right) \\
\ge& \lim_{n\to \infty}\mathbb{C}\text{ov}\left(s_{n}\left(X\right),f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right) \\
=&\mathbb{C}\text{ov}\left(f^{\star}\left(X\right)-h\left(X\right),f^{\star}\left(X\right)-h\left(X\right)+\varepsilon\right) \\
=& \mathbb{V}\text{ar}\left(f^{\star}\left(X\right)-h\left(X\right)\right).
\end{align*}
\end{proof}
\subsubsection{Proof of theorem \ref{subgroup}}
\begin{proof}
First, we note that
\begin{equation*}
\begin{split}
&\left|\mathbb{E}\left[s_{A}\left(x\right)\ell\left(y-h(x)\right)\right]-\mathbb{E}\left[1_{A}\left(x\right)\ell\left(y-h(x)\right)\right]\right| \\ &\le \left|\mathbb{E}\left[\left(s_{A}\left(x\right)-1_{A}\left(x\right)\right)\ell\left(y-h(x)\right)\right]\right| \\&
\le \mathbb{E}\left[\left|s_{A}\left(x\right)-1_{A}\left(x\right)\right|\ell\left(y-h(x)\right)\right] \\&
\le \delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right].
\end{split}
\end{equation*}
And therefore
\begin{equation*}
\begin{split}
&\mathbb{E}\left[1_{A}\left(x\right)\ell\left(y-h(x)\right)\right] \\
&\le \mathbb{E}\left[s_{A}\left(x\right)\ell\left(y-h(x)\right)\right]+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right].
\end{split}
\end{equation*}
Now, by definition,
\begin{equation*}
\begin{split}
&\mathbb{E}\left[s_{A}\left(x\right)\ell\left(y-h(x)\right)\right]\\ &\le \mathbb{E}\left[s_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]\\&+M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right).
\end{split}
\end{equation*}
Similar to before,
\begin{equation*}
\begin{split}
&\left|\mathbb{E}\left[s_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]-\mathbb{E}\left[1_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]\right|\\
&\le\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right],
\end{split}
\end{equation*}
and therefore,
\begin{equation*}
\begin{split}
&\mathbb{E}\left[s_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]\\
&\le\mathbb{E}\left[1_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right].
\end{split}
\end{equation*}
Combining all the above:
\begin{equation*}
\begin{split}
&\frac{\mathbb{E}\left[1_{A}\left(x\right)\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&\le \frac{\mathbb{E}\left[s_{A}\left(x\right)\ell\left(y-h(x)\right)\right]+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&\le \frac{\mathbb{E}\left[s_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&+\frac{M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right)+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]}\\
&\le \frac{\mathbb{E}\left[1_{A}\left(x\right)\right]\mathbb{E}\left[\ell\left(y-h(x)\right)\right]+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&+ \frac{M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right)+\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&= \frac{M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right)+2\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{\mathbb{E}\left[1_{A}\right]} \\
&+\mathbb{E}\left[\ell\left(y-h(x)\right)\right] \\
&\le \frac{M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right)+2\delta\mathbb{E}\left[\ell\left(y-h(x)\right)\right]}{c} \\
&+\mathbb{E}\left[\ell\left(y-h(x)\right)\right] \\
&=\frac{M_{\mathcal{F}}M_{\mathcal{G}}HSIC\left(x,y-h(x);\tilde{\mathcal{F}},\tilde{\mathcal{G}}\right)}{c} \\
&+\left(\frac{2\delta}{c}+1\right)\mathbb{E}\left[\ell\left(y-h(x)\right)\right]
\end{split}
\end{equation*}
\end{proof}
\subsubsection{Proof of theorem \ref{robustness}}
\begin{proof}
We have:
\begin{equation*}
\begin{split}
&\text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}) \\
\ge& \sup_{s\in \Tilde{\mathcal{F}}, \ell\in \Tilde{\mathcal{G}}}\bigg(\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)\ell(Y-h(X))]\\
&-\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)]\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[\ell(Y-h(X))]\bigg)\\
=& \frac{1}{M_\mathcal{F} \cdot M_\mathcal{G}} \sup_{s\in \mathcal{F}, \ell\in \mathcal{G}}\bigg(\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)\ell(Y-h(X))] \\
&-\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)]\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[\ell(Y-h(X))] \bigg)\\
\ge&\frac{1}{M_\mathcal{F} \cdot M_\mathcal{G}}\sup_{s\in \mathcal{S}, \ell\in \mathcal{G}}\bigg(\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)\ell(Y-h(X))]\\
&-\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[s(X)]\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[\ell(Y-h(X))] \bigg)\\
=&\frac{1}{M_\mathcal{F} \cdot M_\mathcal{G}} \sup_{\mathcal{P}_\text{target} \in \mathcal{Q}}
\sup_{\ell\in \mathcal{G}}\bigg(\mathbb{E}_{x\sim\mathcal{P}_\text{target}}[\ell(Y-h(X))]\\
&-\mathbb{E}_{x\sim\mathcal{P}_\text{source}}[\ell(Y-h(X))]\bigg).
\end{split}
\end{equation*}
The first inequality follows from \eqref{eq:hsiccoco} and the identity $\mathbb{C}\text{ov}(A,B) = \mathbb{E}[A B]-\mathbb{E}[A]\mathbb{E}[B]$, and the first equality is by Lemma \ref{lem}. The second inequality is by the restriction from $\mathcal{F}$ to $\mathcal{S}$. The final equality is by the assumption that for all $\mathcal{P}_\text{target} \in \mathcal{Q}$, $\mathbb{E}_{x\sim \mathcal{P}_\text{source}}\left[\frac{P_\text{target}}{P_\text{source}}(x)\right]=1$, i.e.\ that $\mathcal{P}_\text{target}$ is absolutely continuous with respect to $\mathcal{P}_\text{source}$.
\end{proof}
\subsubsection{Proof of theorem \ref{thm:together}}
\begin{proof}
By the lower bound of Theorem \ref{lower bound}, we get $\mathbb{V}\text{ar}(f^\star(X)-h(X)) \le M_\mathcal{F}\cdot M_\mathcal{G} \cdot \delta_{\text{HSIC}}$. By Theorem \ref{robustness} and the assumption we get that:
{\small
\begin{align*}
&M_\mathcal{F}\cdot M_\mathcal{G} \cdot \delta_{\text{HSIC}}(h) \\
\ge& \sup_{\mathcal{P}_\text{target} \in \mathcal{Q}} \mathbb{E}_{x\sim \mathcal{P}_\text{target}} [(Y-h(X))^2] - \mathbb{E}_{x\sim \mathcal{P}_\text{source}} [(Y-h(X))^2] \\
=& \sup_{\mathcal{P}_\text{target} \in \mathcal{Q}} \bigg(\mathbb{E}_{x\sim \mathcal{P}_\text{target}} [(Y-h(X))^2] \\
&- \mathbb{E}_{x\sim \mathcal{P}_\text{source}} [(f^\star(X)+\varepsilon-h(X))^2]\bigg) \\
=& \sup_{\mathcal{P}_\text{target} \in \mathcal{Q}} \bigg(\mathbb{E}_{x\sim \mathcal{P}_\text{target}} [(Y-h(X))^2]- \mathbb{V}\text{ar}[f^\star(X)-h(X)] \\
& - (\mathbb{E}_{x\sim \mathcal{P}_\text{source}} [f^\star(X)-h(X)])^2-\mathbb{E}[\varepsilon^2]\bigg) \\
=& \sup_{\mathcal{P}_\text{target} \in \mathcal{Q}}\bigg(\textit{MSE}_{P_\text{target}}(h) - \mathbb{V}\text{ar}[f^\star(X)-h(X)] \\
&- {\textit{bias}_\text{source}(h)}^2 - \sigma^2\bigg).
\end{align*}}
Together, these inequalities give the result.
\end{proof}
\subsubsection{Proof of theorem \ref{leanability_thm}}
Below we give the proof of Theorem \ref{leanability_thm}, showing uniform convergence of the HSIC-loss. The basic idea is to reduce the problem to a standard learning problem of the form $\sup_{h\in\mathcal{H}}\left|z(h)-\frac{1}{n}\sum_{i=1}^n z_i(h)\right|$, where $z$ is some statistic and the $z_i(h)$ are i.i.d.\ samples. To do so, we follow \citet{gretton2005measuring}, decomposing the HSIC-loss into three terms; after some algebraic manipulations, and under some assumptions about the form of the kernel, we show that there is one term which we need to bound:
$$\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right|.$$
We then show how we can treat this as a learning problem over pairs of instances, where the objective is to predict the difference in $y$, allowing us to use standard tools and concentration bounds.
Recall that the empirical estimation problem we pose is
{
\begin{align}\label{eq:l_problem}
\min_\theta\widehat{\text{HSIC}}\left(\{(x_i,r^\theta_i)\}_{i=1}^n;\mathcal{F},\mathcal{G}\right) = \min_\theta\frac{1}{(n-1)^2} \textbf{tr}\, R^\theta HKH,
\end{align}}
where $R^\theta_{i,j}=k(r_i^\theta,r_j^\theta)$ with $r_l^\theta=y_l-h_\theta (x_l)$, $K_{i,j}=l(x_i,x_j)$, and by the cyclic property of the trace we have switched the positions of $R^\theta$ and $K$.
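For concreteness, the estimator of Eq.~\eqref{eq:l_problem} can be sketched in a few lines (our own illustration; the Gaussian kernels, sample size and linear model are assumptions, not choices made in the text). Residuals of the correct model are independent of the inputs and yield a small value, while residuals that still carry signal do not:

```python
import numpy as np

def hsic_emp(r, x, sigma=1.0):
    """tr(R H K H)/(n-1)^2 with Gaussian kernels for residuals r and inputs x."""
    n = len(r)
    R = np.exp(-(r[:, None] - r[None, :])**2 / (2 * sigma**2))  # residual Gram R
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))  # input Gram K
    H = np.eye(n) - np.ones((n, n)) / n                         # centering matrix
    return np.trace(R @ H @ K @ H) / (n - 1)**2

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = 2.0 * x + 0.1 * rng.standard_normal(300)

good = hsic_emp(y - 2.0 * x, x)   # residual = pure noise, independent of x
bad = hsic_emp(y - 0.0 * x, x)    # residual still carries the signal 2x
print(good, bad)                  # good is much smaller than bad
```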
\begin{lemma}
Let $C_1=\sup_{x,x^\prime}l(x,x^\prime)$, $C_2=\sup_{r,r^\prime}k(r,r^\prime)$. Then the following holds:
{\small
\begin{align*}
&\sup_{h \in \mathcal{H}} \left|\text{HSIC}(X,Y-h(X);\mathcal{F},\mathcal{G}) - \widehat{\text{HSIC}}\left(\{(x_i,r_i)\}_{i=1}^n;\mathcal{F},\mathcal{G}\right)\right|\\
\le &3C_1 \cdot \sup_{h\in \mathcal{H}} \left|\mathbb{E}_{r,r^\prime}[k(r,r^\prime)] - \frac{1}{(n)_2} \sum_{i_2^n}k(r_{i_1},r_{i_2})\right| \\
&+ 3C_2 \cdot \left|\mathbb{E}_{x,x^\prime} [l(x,x^\prime)] - \frac{1}{(n)_2}\sum_{i_2^n}l(x_{i_1},x_{i_2})\right|
\end{align*}
}\label{learnability lemma}
\end{lemma}
\begin{proof}
Following \cite{gretton2005measuring}, HSIC can be decomposed into a three-part sum:
\begin{equation*}
\begin{split}
&HSIC\left(X,Y-h\left(X\right);\mathcal{F},\mathcal{G}\right)\\
=&\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[k\left(r,r^{\prime}\right)l\left(x,x^{\prime}\right)\right]\\
&-2\mathbb{E}_{x,r}\left[\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right]\\
&+\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right].
\end{split}
\end{equation*}
And likewise, the empirical HSIC can be decomposed as follows:
\begin{equation*}
\begin{split}
&\widehat{HSIC}\left(\left\{ \left(x_{i},r_{i}\right)\right\} _{i=1}^{n};\mathcal{F},\mathcal{G}\right)\\
=&\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{1}},x_{i_{2}}\right)\\
&-\frac{2}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{2}},x_{i_{3}}\right)\\
&+\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{3}},x_{i_{4}}\right)+O\left(\frac{1}{n}\right),
\end{split}
\end{equation*}
where $\left(n\right)_{m}=\frac{n!}{\left(n-m\right)!}$ and $i_r^n$ is the set of all $r$-tuples drawn without replacement from $[n]$. From this we can see that it is enough to bound the following three terms:
\begin{align}\label{term_1}
\sup_{h\in\mathcal{H}}& \left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[k\left(r,r^{\prime}\right)l\left(x,x^{\prime}\right)\right]\right. \\
&\left. -\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{1}},x_{i_{2}}\right)\right|,
\end{align}
\begin{equation}
\begin{split} \label{term_2}
\sup_{h\in\mathcal{H}}&\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x,r}\left[\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right] \right.\\
& \left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{2}},x_{i_{3}}\right)\right|,
\end{split}
\end{equation}
\begin{equation} \label{term_3}
\begin{split}
\sup_{h\in\mathcal{H}}&\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{3}},x_{i_{4}}\right)\right|.
\end{split}
\end{equation}
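The combinatorial notation can be made concrete with a tiny enumeration (our own illustration):

```python
from itertools import permutations
from math import factorial

def falling(n, m):
    """(n)_m = n!/(n-m)!: the number of m-tuples drawn without replacement from [n]."""
    return factorial(n) // factorial(n - m)

n = 5
tuples = list(permutations(range(n), 2))   # an explicit realization of the index set i_2^n
print(len(tuples), falling(n, 2))          # both equal 20
```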
Using simple algebra, one can obtain the following bound for \eqref{term_1}:
{\tiny
\begin{equation*}
\begin{split}
&\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[k\left(r,r^{\prime}\right)l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{1}},x_{i_{2}}\right)\right| \\
=&\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[k\left(r,r^{\prime}\right)l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{1}},x_{i_{2}}\right) \right. \\
&\left. +\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]\right| \\
=&\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[\left(k\left(r,r^{\prime}\right)-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right)l\left(x,x^{\prime}\right)\right] \right.\\
&\left. +\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\left(\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{1}},x_{i_{2}}\right)\right)\right|\\
\le&\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[\left(k\left(r,r^{\prime}\right)-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right)l\left(x,x^{\prime}\right)\right]\right|\\
&+\sup_{h\in\mathcal{H}}\left|\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\left(\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{1}},x_{i_{2}}\right)\right)\right| \\
\le&C_1\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime},r,r^{\prime}}\left[k\left(r,r^{\prime}\right)-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right]\right|\\
&+C_2\sup_{h\in\mathcal{H}}\left|\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}l\left(x_{i_{1}},x_{i_{2}}\right)\right| \\
=&C_1\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right|\\
&+C_2\left|\mathbb{E}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}l\left(x_{i_{1}},x_{i_{2}}\right)\right|.
\end{split}
\end{equation*}
}
The first inequality follows from properties of $\sup$, the second inequality follows from the definitions of $C_1$ and $C_2$, and the last equality follows from the fact that the second term does not depend on $h$.
Similarly, \eqref{term_2} can be bounded as follows:
{\tiny
\begin{equation*}
\begin{split}
&\sup_{h\in\mathcal{H}}\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x,r}\left[\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{2}},x_{i_{3}}\right)\right| \\
= &\sup_{h\in\mathcal{H}}\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x,r}\left[\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{2}},x_{i_{3}}\right)\right.\\
&+\left.{} \frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right|\\
= &\sup_{h\in\mathcal{H}}\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x,r}\left[\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right.\right.\\
&\left.\left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right] \right.\\ &\left.+\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\left(\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{2}},x_{i_{3}}\right)\right)\right| \\
= &\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,r}\left[\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}\bigg(\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right.\right.\\
&\left.\left.-k\left(r_{i_{1}},r_{i_{2}}\right)\bigg)\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\vphantom{\sum_{i_{2}^{n}}}\right]\right.\\
&\left.+\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\bigg(\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{2}},x_{i_{3}}\right)\bigg)\right| \\
\le &\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,r}\left[\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}\bigg(\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\right.\right.\\
&\left.\left.\vphantom{\sum_{i_{2}^{n}}}-k\left(r_{i_{1}},r_{i_{2}}\right)\bigg)\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right]\right|\\
&+\sup_{h\in\mathcal{H}}\left|\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\bigg(\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{2}},x_{i_{3}}\right)\bigg)\right| \\
\le &\sup_{x}\mathbb{E}_{x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,r}\left[\mathbb{E}_{r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\vphantom{\sum_{i_{2}^{n}}}\right.\right.\\
&\left.\left.-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right]\right| \\
&+\sup_{r,r^{\prime}}k\left(r,r^{\prime}\right)\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{3}}\sum_{i_{3}^{n}}l\left(x_{i_{2}},x_{i_{3}}\right)\right| \\
\le &C_1\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right|\\
&+C_2\left|\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}l\left(x_{i_{1}},x_{i_{2}}\right)\right|,
\end{split}
\end{equation*}
}
where the first inequality follows from the subadditivity of $\sup$, and the last inequality follows from the definitions of $C_1$, $C_2$, and $(n)_m$.
And finally, the same reasoning can be applied to bound \eqref{term_3}:
{\small
\begin{equation*}
\begin{split}
&\sup_{h\in\mathcal{H}}\left|\vphantom{\sum_{i_{2}^{n}}}\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{3}},x_{i_{4}}\right)\right| \\
= &\sup_{h\in\mathcal{H}}\left|\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)l\left(x_{i_{3}},x_{i_{4}}\right)\right.\\
&+\left.{}\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right| \\
= &\sup_{h\in\mathcal{H}}\left|\left(\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right.\\
&\left.+\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\left(\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{3}},x_{i_{4}}\right)\right)\right| \\
\le &\sup_{h\in\mathcal{H}}\left|\left(\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right)\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]\right|\\
&+\sup_{h\in\mathcal{H}}\left|\frac{1}{\left(n\right)_{4}}\sum_{i_{4}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\left(\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-l\left(x_{i_{3}},x_{i_{4}}\right)\right)\right| \\
\le &C_1\sup_{h\in\mathcal{H}}\left|\left(\mathbb{E}_{r,r^{\prime}}\left[k\left(r,r^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}k\left(r_{i_{1}},r_{i_{2}}\right)\right)\right|\\
&+C_2\left|\mathbb{E}_{x,x^{\prime}}\left[l\left(x,x^{\prime}\right)\right]-\frac{1}{\left(n\right)_{2}}\sum_{i_{2}^{n}}l\left(x_{i_{1}},x_{i_{2}}\right)\right|.
\end{split}
\end{equation*}
}
\end{proof}
Now, the second term of the RHS of the bound in Lemma \ref{learnability lemma} can be bounded using standard techniques such as Hoeffding's inequality. We therefore shift our attention to the first term. This term can be bounded using Rademacher based techniques.
Let us first recall the definition of the Rademacher complexity of a function class.
\begin{definition}
Let $\mathcal{D}$ be a distribution over $Z$, and let $S=\{z_i\}_{i=1}^n$ be $n$ i.i.d. samples from $\mathcal{D}$. The empirical Rademacher complexity of a function class $\mathcal{F}$ is defined as:
\begin{equation*}
\mathcal{R}_n(\mathcal{F}) = \mathbb{E}_\sigma\left[\sup_{f\in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sigma_i f(z_i)\right].
\end{equation*}
\end{definition}
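For intuition, the following sketch (ours, not part of the paper) estimates this quantity by Monte Carlo for the class of bounded-norm linear functions, for which the inner supremum has the closed form $\frac{B}{n}\|\sum_i \sigma_i z_i\|_2$:

```python
import numpy as np

def empirical_rademacher_linear(Z, B=1.0, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    linear class {z -> <w, z> : ||w||_2 <= B} on the sample Z (shape n x d).
    For this class sup_w (1/n) sum_i sigma_i <w, z_i> = (B/n)*||sum_i sigma_i z_i||_2."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher signs
    return float(np.mean(B / n * np.linalg.norm(sigma @ Z, axis=1)))

rng = np.random.default_rng(1)
r_small = empirical_rademacher_linear(rng.standard_normal((50, 5)))
r_large = empirical_rademacher_linear(rng.standard_normal((800, 5)))
# The estimate shrinks roughly like 1/sqrt(n) as the sample grows.
```

The $1/\sqrt{n}$ decay visible here is the behavior the uniform convergence bound below relies on.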
We assume that $k\left(r,r^{\prime}\right)=s\left(r-r^{\prime}\right)$ for some function $s$ with Lipschitz constant $L_{s}$. Next, we define a distribution over $\mathcal{X}\times\mathcal{X}\times\mathcal{Y}$ by $p^{\prime}\left(\boldsymbol{x}\right)= p^{\prime}\left(\left(x,x^{\prime}\right)\right)=p\left(x\right)p\left(x^{\prime}\right)$,
and we let
$y\left(x,x^{\prime},\varepsilon,\varepsilon^{\prime}\right)=f\left(x\right)-f\left(x^{\prime}\right)+\varepsilon-\varepsilon^{\prime}$. Now, we can define a new function class $$\mathcal{H}^{2} = \left\{ \left(x_{1},x_{2}\right)\mapsto h\left(x_{1}\right)-h\left(x_{2}\right)\mid h\in\mathcal{H}\right\},$$
and consider $$\sup_{h\in\mathcal{H}^{2}}\left|\mathbb{E}_{\boldsymbol{x},y}\ell\left(h\left(\boldsymbol{x}\right),y\right) - \frac{1}{n}\sum_{i=1}^n \ell(h(\boldsymbol{x}_i),y_i)\right|$$
where $\ell\left(h\left(\boldsymbol{x}\right),y\right)=s\left(y-h\left(\boldsymbol{x}\right)\right)=s\left(r_{1}-r_{2}\right)$, with $r_{1},r_{2}$ the residuals of the two points in the pair. This is exactly the first term in the bound, which can be bounded using standard generalization bounds. The only missing pieces left are how to relate the Rademacher complexity of $\mathcal{H}^{2}$ to that of $\mathcal{H}$, and how the Lipschitz constant of the residuals' kernel affects it.
\begin{lemma}\label{rademacher_lemma}
$\mathcal{R}_n(\mathcal{H}^2) \le 2\mathcal{R}_n(\mathcal{H})$.
\end{lemma}
\begin{proof}
Let $S=\left\{s_i\right\}_{i=1}^n$ with $s_i=(z_{i_1},z_{i_2})$. Then,
\begin{align*}
\mathcal{R}_n(\mathcal{H}^2) =& \mathbb{E}_\sigma\left[\sup_{h\in\mathcal{H}^2}\frac{1}{n}\sum_{i=1}^n\sigma_ih(s_i)\right] \\
=&\mathbb{E}_\sigma\left[\sup_{h\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^n\sigma_i\left(h(z_{i_1})-h(z_{i_2})\right)\right] \\
\le&\mathbb{E}_\sigma\left[\sup_{h\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^n\sigma_ih(z_{i_1})+\sup_{h\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^n\sigma_ih(z_{i_2})\right] \\
= & 2\mathcal{R}_n(\mathcal{H}),
\end{align*}
where the inequality is due to the subadditivity of $\sup$, and the last equality uses the fact that $-\sigma_i$ is distributed identically to $\sigma_i$.
\end{proof}
As for the Lipschitz constant, a known result relating the Rademacher complexity of a function class to the Rademacher complexity of the class composed with a Lipschitz loss is the following.
\begin{theorem}[\citet{Rademacher_Composition}]\label{lipschitz}
Let $\ell:\,\mathbb{R}\times\mathcal{Y}\to\mathbb{R}$ be s.t. $\ell(\cdot, y)$ is an $L$-Lipschitz function for all $y$. Denote $\ell\circ\mathcal{H}=\left\{(x,y)\mapsto\ell(h(x),y) \mid h\in \mathcal{H}\right\}$. Then,
\begin{equation*}
\mathcal{R}_n(\ell\circ\mathcal{H}) \le L\cdot \mathcal{R}_n(\mathcal{H}).
\end{equation*}
\end{theorem}
As an example, the following lemma proves the Lipschitz condition for RBF kernels.
\begin{lemma} \label{rbf_lemma}
Assume $\ell(z,y)=\exp(-\gamma(z-y)^2)$, as is the case with RBF kernels, and suppose $|y|\le \frac{M}{2}$ and $|z|\le \frac{M}{2}$ for some $M>0$. Then $\ell(\cdot,y)$ is $2\gamma M$-Lipschitz for all such $y$.
\end{lemma}
\begin{proof}
Let $\ell(z,y)=\exp\left(-\gamma\|z-y\|^2\right)$ for some $\gamma>0$, and suppose $\|y\|,\|z_1\|,\|z_2\|\le\frac{M}{2}$. Assume without loss of generality that $\|y-z_1\|\le\|y-z_2\|$. Then,
\begin{align*}
&\exp\left(-\gamma\|y-z_1\|^2\right) - \exp\left(-\gamma\|y-z_2\|^2\right) \\
=&\exp\left(-\gamma\|y-z_1\|^2\right)\left(1-\exp\left(-\gamma\left(\|y-z_2\|^2-\|y-z_1\|^2\right)\right)\right) \\
\le & \exp\left(-\gamma\|y-z_1\|^2\right)\gamma\left(\|y-z_2\|^2-\|y-z_1\|^2\right) \\
\le & \gamma\left(\|y-z_2\|^2-\|y-z_1\|^2\right) \\
= & \gamma\left\langle z_1-z_2,\,2y-z_1-z_2\right\rangle \\
\le & \gamma\|z_1-z_2\|\cdot\|2y-z_1-z_2\| \\
\le & 2\gamma M\|z_1-z_2\|,
\end{align*}
where the first inequality is due to the fact that $1-e^{-c}\le c$, the second is due to the fact that $\exp(-c) \le 1 \, \forall c\ge0$, the third is the Cauchy--Schwarz inequality, and the last uses $\|2y-z_1-z_2\|\le 2\|y\|+\|z_1\|+\|z_2\|\le 2M$.
\end{proof}
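As a numerical sanity check (ours, not part of the proof), the following snippet verifies a Lipschitz bound of this type, with constant $2\gamma M$, on random points satisfying the boundedness assumption:

```python
import numpy as np

def rbf_loss(z, y, gamma):
    # l(z, y) = exp(-gamma * (z - y)^2), the RBF loss on scalars
    return np.exp(-gamma * (z - y) ** 2)

def lipschitz_holds(gamma=1.0, M=2.0, trials=100000, seed=0):
    """Check |l(z1,y) - l(z2,y)| <= 2*gamma*M*|z1 - z2| on random points with
    |y|, |z1|, |z2| <= M/2, matching the boundedness used in the proof."""
    rng = np.random.default_rng(seed)
    y, z1, z2 = rng.uniform(-M / 2, M / 2, size=(3, trials))
    lhs = np.abs(rbf_loss(z1, y, gamma) - rbf_loss(z2, y, gamma))
    return bool(np.all(lhs <= 2 * gamma * M * np.abs(z1, ) * 0 + 2 * gamma * M * np.abs(z1 - z2) + 1e-12))
```

The check passes for any choice of $\gamma, M$, since the derivative of $\exp(-\gamma u^2)$ is bounded by $2\gamma|u|\le 2\gamma M$ on the relevant range.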
Before concluding, we state the uniform convergence result based on the Rademacher complexity of a class.
\begin{theorem}[\citet{mohri2018foundations}, Ch. 3]
Suppose $f(z)\in[0,1]$ for all $f\in \mathcal{F}$, and let $\delta \in (0,1)$. Then, with probability of at least $ 1 - \delta$ over the choice of $S$, the following holds uniformly for all $f \in \mathcal{F}$:
\begin{equation*}
\left|\mathbb{E}_\mathcal{P}[f(z)] - \frac{1}{n}\sum_{i=1}^n f(z_i)\right| \le 2\mathcal{R}_n(\mathcal{F}) + O\left(\sqrt{\frac{\ln(1/\delta)}{n}}\right).
\end{equation*}
\label{rademacher_bound}
\end{theorem}
Equipped with these results, the learnability theorem (Theorem \ref{leanability_thm}) follows immediately.
\begin{proof}[Proof of Theorem \ref{leanability_thm}]
This is a direct application of the previous lemmas and Hoeffding's inequality.
\end{proof}
\subsection{Experiments' details}
\subsubsection{Synthetic data}
In order to find the $l_2$ regularization parameter, we perform cross validation on validation data created from $10\%$ of the training set, where the possible values are in $\{15, 12, 10, 5, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001,0\}$, for absolute- and HSIC-losses, and $\{35,37,...,69\}$ for the squared-loss.
When training with absolute-loss, we used stochastic gradient descent, with initial learning rate determined by cross validation from $\{0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001,0.00005,0.00001\}$, and later decayed using inverse scaling. When training with HSIC-loss, the learning rate was drawn from a uniform distribution over $[0.0001,0.0002]$.
\subsubsection{Rotated MNIST}
\begin{wrapfigure}{r}{0.17\textwidth}
\centering
\includegraphics[scale=0.22]{rotated_images.pdf}
\caption{MNIST \textsc{target} images.}
\label{rotated_mnist_images}
\end{wrapfigure}
Both losses were optimized using Adam \citep{Kingma2015AdamAM}, and the learning rate was drawn each time from a uniform distribution over $[10^{-4}, 4\cdot 10^{-4}]$. Experimenting with different regimes of the learning rate gave similar results.
\subsection{Understanding the HSIC-loss}
Here we provide two additional views on the HSIC-loss, motivating its use in cases which go beyond additive noise.
The first is based on the observation that, up to a constant, the residual $Y-h(X)$ is the gradient of the squared error with respect to $h(X)$. Intuitively, this means that by optimizing for the residual to be independent of $X$, we ask that the direction and magnitude in which we need to update $h(X)$ to improve the loss be the same regardless of $X$. Put differently, the gradient of $h(X)$ would be the same for every subset of $X$. This is also true for classification tasks: consider the outputs of a classification network as logits $o$ which are then transformed by Sigmoid or Softmax operations into a probability vector $h$. The gradient of the standard cross-entropy loss with respect to $o$ is exactly $h(X)-Y$, i.e., the residual $Y-h(X)$ up to sign. Thus, even when not assuming additive noise, requiring that the residual be independent of $X$ encourages learning a model for which the gradients of the loss have no information about the instances $X$.
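This observation is easy to verify numerically. The sketch below (ours; helper names are illustrative) checks via finite differences that the gradient of the softmax cross-entropy with respect to the logits equals $h(X)-Y$:

```python
import numpy as np

def softmax(o):
    e = np.exp(o - np.max(o))   # shift for numerical stability
    return e / e.sum()

def cross_entropy(o, y):
    # o: logits, y: one-hot label vector
    return -float(y @ np.log(softmax(o)))

def numeric_grad(f, o, eps=1e-6):
    # central finite-difference gradient of f at o
    g = np.zeros_like(o)
    for i in range(o.size):
        d = np.zeros_like(o)
        d[i] = eps
        g[i] = (f(o + d) - f(o - d)) / (2 * eps)
    return g

o = np.array([0.3, -1.2, 0.8])
y = np.array([0.0, 1.0, 0.0])
grad_logits = numeric_grad(lambda u: cross_entropy(u, y), o)
# The analytic gradient is h(X) - Y, i.e. minus the residual Y - h(X);
# the same holds for the squared error: d/dh 0.5*(y - h)^2 = -(y - h).
residual = y - softmax(o)
```

The finite-difference gradient agrees with $-$(residual) to numerical precision.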
The second interpretation concerns the question of what does it mean for a model $h(X)$ to be optimal with respect to predicting $Y$ from $X$. One reasonable way to define optimality is when $h(X)$ captures all the available information that $X$ has about the label $Y$. That is, a classifier is optimal when:
\begin{equation} \label{optimal_classifier}
Y \indep X \mid h(X).
\end{equation}
This is also related to the condition implied by recent work on Invariant Risk Minimization \citep{arjovsky2019invariant}. Optimizing for the condition in \eqref{optimal_classifier} is difficult because of the conditioning on $h(X)$. We show in the supplemental that attaining the objective encouraged by the HSIC-loss, namely learning a function $h(X)$ such that $Y-h(X)\indep X$, implies the optimality condition \eqref{optimal_classifier}.
\subsection{RKHS Background}
A reproducing kernel Hilbert space $\mathcal{F}$ is a Hilbert space of functions from $\mathcal{X}$ to $\mathbb{R}$ with the following (reproducing) property: there exists a positive definite kernel $K:\mathcal{X}\times\mathcal{X} \to \mathbb{R}$ and a mapping function $\phi$ from $\mathcal{X}$ to $\mathcal{F}$ s.t. $K(x_1,x_2)=\langle\phi(x_1),\phi(x_2)\rangle_\mathcal{F}$.
Given two separable (having a complete orthonormal basis) RKHSs $\mathcal{F}$ and $\mathcal{G}$ on metric spaces $\mathcal{X}$ and $\mathcal{Y}$, respectively, and a linear operator $C:\mathcal{F}\to\mathcal{G}$, the Hilbert-Schmidt norm of $C$ is defined as follows:
\begin{align*}
\|C\|_{\text{HS}}^2=\sum_{i,j}\langle Cu_i,v_j\rangle_\mathcal{G}^2,
\end{align*}
where $\{u_i\}$ and $\{v_j\}$ are orthonormal bases for $\mathcal{F}$ and $\mathcal{G}$ respectively. Here we consider two probability spaces $\mathcal{X}$ and $\mathcal{Y}$ and their corresponding RKHSs $\mathcal{F}$ and $\mathcal{G}$. The mean elements $\mu_x$ and $\mu_y$ are defined such that $\langle \mu_x,s \rangle_\mathcal{F} := \mathbb{E}[\langle \phi(x),s\rangle_\mathcal{F}] = \mathbb{E}[s(x)]$, and likewise $\langle \mu_y,t \rangle_\mathcal{G} := \mathbb{E}[\langle \psi(y),t\rangle_\mathcal{G}] = \mathbb{E}[t(y)]$, where $\psi$ is the embedding from $\mathcal{Y}$ to $\mathcal{G}$. Notice that we can compute the norms of these elements quite easily: $\|\mu_x\|_\mathcal{F}^2=\mathbb{E}[K(x_1,x_2)]$, where the expectation is taken over an i.i.d. pair $x_1,x_2$ from $\mathcal{X}$.
For $s\in\mathcal{F}$ and $t\in\mathcal{G}$, their tensor product $s\otimes t:\mathcal{G}\to\mathcal{F}$ is defined as follows:
$(s\otimes t)(h)= \langle t,h \rangle_\mathcal{G} \cdot s.$
The Hilbert-Schmidt norm of the tensor product can be shown to be given by $\|s\otimes t\|_\text{HS}^2=\|s\|_\mathcal{F}^2\cdot\|t\|_\mathcal{G}^2$.
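In finite dimensions $s\otimes t$ is just the outer product $st^\top$ and the Hilbert-Schmidt norm reduces to the Frobenius norm, so both the action and the norm identity can be checked directly (a quick illustration, ours):

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, h = rng.standard_normal((3, 6))   # three random vectors in R^6
T = np.outer(s, t)                      # finite-dimensional s ⊗ t

# Action on h: (s ⊗ t)(h) = <t, h> * s
assert np.allclose(T @ h, np.dot(t, h) * s)

# Hilbert-Schmidt norm is the Frobenius norm here
hs_sq = np.linalg.norm(T, 'fro') ** 2
```

The last quantity equals $\|s\|^2\|t\|^2$, matching the identity above.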
Equipped with these definitions, we are ready to define the cross covariance operator $C_{xy}:\mathcal{G}\to\mathcal{F}$:
\begin{align*}
C_{xy}=\mathbb{E}[\phi(x)\otimes\psi(y)]-\mu_x\otimes\mu_y.
\end{align*}
\subsection{HSIC}
Consider two random variables $X$ and $Y$, residing in two metric spaces $\mathcal{X}$ and $\mathcal{Y}$ with a joint distribution on them, and two separable RKHSs $\mathcal{F}$ and $\mathcal{G}$ on $\mathcal{X}$ and $\mathcal{Y}$ respectively. HSIC is defined as the Hilbert Schmidt norm of the cross covariance operator:
\begin{align*}
\text{HSIC}(X,Y;\mathcal{F},\mathcal{G})\equiv \|C_{xy}\|_\text{HS}^2.
\end{align*}
\citet{gretton2005measuring} show that:
\begin{align}\label{eq:hsiccoco}
\text{HSIC}(X,Y;\mathcal{F},\mathcal{G}) \geq \sup_{s\in \mathcal{F}, t\in \mathcal{G}} \mathbb{C}\text{ov}\left[s(x), t(y) \right],
\end{align}
an inequality which we use extensively for our results.
We now state Theorem 4 of \citet{gretton2005measuring}, which shows the properties of HSIC as an independence test:
\begin{theorem}[\citet{gretton2005measuring}, Theorem 4] \label{thm:1}
Denote by $\mathcal{F}$ and $\mathcal{G}$ RKHSs both with universal kernels, $k,l$ respectively on compact domains $\mathcal{X}$ and $\mathcal{Y}$. Assume without loss of generality that $\|s\|_\infty \le 1$ for all $s\in\mathcal{F}$ and likewise $\|t\|_\infty \le 1$ for all $t\in\mathcal{G}$.
Then the following holds: $\|C_{xy}\|_\text{HS}^2 = 0 \Leftrightarrow X\indep Y$.
\end{theorem}
Let $\{(x_i,y_i)\}_{i=1}^n$ be i.i.d. samples from the joint distribution on $\mathcal{X} \times \mathcal{Y}$. The empirical estimate of HSIC is given by:
\begin{align}\label{eq:emphsic}
\widehat{\text{HSIC}}\left(\{(x_i,y_i)\}_{i=1}^n;\mathcal{F},\mathcal{G}\right) = \frac{1}{(n-1)^2} \text{tr}\left(KHLH\right),
\end{align}
where $K_{i,j}=k(x_i,x_j)$, $L_{i,j}=l(y_i,y_j)$ are kernel matrices for the kernels $k$ and $l$ respectively, and $H_{i,j}=\delta_{i,j}-\frac{1}{n}$ is a centering matrix.
The main result of \citet{gretton2005measuring} is that the empirical estimate $\widehat{\text{HSIC}}$ converges to HSIC at a rate of $O\left(\frac{1}{n^{1/2}}\right)$, and its bias is of order $O(\frac{1}{n})$.
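A minimal NumPy implementation of this estimator (ours; function names are illustrative) with RBF kernels, together with a sanity check that dependent data scores higher than independent data:

```python
import numpy as np

def rbf_gram(x, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2) for x of shape (n, d)."""
    sq = np.sum(x ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * x @ x.T, 0.0)
    return np.exp(-gamma * d2)

def hsic_biased(x, y, gamma_x=1.0, gamma_y=1.0):
    """Biased empirical HSIC, (1/(n-1)^2) * tr(K H L H)."""
    n = x.shape[0]
    K, L = rbf_gram(x, gamma_x), rbf_gram(y, gamma_y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 1))
y_dep = np.sin(3.0 * x) + 0.1 * rng.standard_normal((200, 1))  # depends on x
y_ind = rng.standard_normal((200, 1))                          # independent of x
h_dep, h_ind = hsic_biased(x, y_dep), hsic_biased(x, y_ind)
```

Note that $\text{tr}(KHLH)=\text{tr}(HKH\cdot HLH)\ge 0$ as a trace of a product of PSD matrices, so the estimate is always nonnegative; for independent data it is small, of the order of the $O(1/n)$ bias.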
\subsection{Synthetic Data}
\begin{figure*}[!t]
{\includegraphics[scale=0.4]{linear_exp.pdf}}
\caption{Comparison of models trained with squared-loss, absolute-loss and HSIC-loss. Each point on the graph is the MSE averaged over 20 experiments, and the shaded area represents one standard error from the mean. Dashed lines are the MSE evaluated over the source distribution, solid lines are the MSE evaluated over the target distribution.}
\label{linear_regression_results}
\end{figure*}
As a first evaluation of the HSIC-loss, we experiment with fitting a linear model. We focus on small sample sizes as those often lead to difficulties in covariate shift scenarios. The underlying model in the experiments is $y=\beta^\top x + \varepsilon$ where $\beta\in\mathbb{R}^{100}$ is drawn for each experiment from a Gaussian distribution with $\sigma=0.1$. In the training phase, $x$ is drawn from a uniform distribution over $[-1,1]^{100}$. We experimented with $\varepsilon$ drawn from one of three distributions: Gaussian, Laplacian, or a shifted exponential: $\varepsilon=1-e$ where $e$ is drawn from an exponential distribution $\exp(1)$. In all cases, $\varepsilon$ is drawn independently of $x$.
In each experiment, we draw $n\in\{2^i\}_{i=5}^{13}$ training samples and train using either squared-loss, absolute-loss, or HSIC-loss, all with an $l_2$ regularization term.
The \textsc{source} test set is created in the same manner as the training set was created, while the \textsc{target} test set simulates a covariate shift scenario. This is done by changing the marginal distribution of $x$ from a uniform distribution to a Gaussian distribution over $\mathbb{R}^{100}$. In all cases the noise on the \textsc{source} and \textsc{target} is drawn from the same distribution. This process is repeated $20$ times for each $n$. When training the models with HSIC-loss, we used batch-size of 32, and optimized using Adam optimizer \citep{kingma2014adam}.
The kernels we chose were radial basis function kernels, with $\gamma=1$ for both covariates' and residuals' kernels.
Figure \ref{linear_regression_results} presents the results of experiments with Gaussian, Laplacian, and shifted-exponential noise. With Gaussian noise, HSIC-loss performs similarly to squared-loss regression, and with Laplacian noise, HSIC-loss performs similarly to absolute-loss regression; this is expected, as squared-loss is the maximum-likelihood objective under Gaussian noise and absolute-loss is the maximum-likelihood objective under Laplacian noise.
In both cases it is reassuring to see that HSIC-loss is on par with the maximum-likelihood objectives. In all cases, HSIC-loss is better on the \textsc{target} distribution than the objectives which are not the maximum-likelihood one, especially at small sample sizes. We believe this reinforces our result in Theorem \ref{robustness} that the HSIC-loss is useful when we do not know in advance the loss or the exact target distribution.
\subsection{Bike Sharing Dataset}
In the bike sharing dataset by \citet{fanaee2014event} from the UCI repository, the task is to predict the number of hourly bike rentals based on the following covariates: temperature, feeling temperature, wind speed and humidity. Consisting of 17,379 samples, the data was collected over two years, and can be partitioned by year and season. This dataset has been used to examine domain adaptation tasks by \citet{subbaswamy2019preventing} and \citet{rothenhausler2018anchor}. We adopt their setup, where the \textsc{source} distribution used for training is three seasons of a year, the \textsc{target} distribution used for testing is the fourth season of the same year, and the model of choice is linear.
We compare with least squares, anchor regression (AR) by \citet{rothenhausler2018anchor}, and Surgery by \citet{subbaswamy2019preventing}.
We ran 100 experiments, each of them was done by randomly sub-sampling $80\%$ of the \textsc{source} set and $80\%$ of the \textsc{target} set, thus obtaining a standard error estimate of the mean. When training the models with HSIC-loss, we used batch-size of 32, and optimized the loss with Adam \citep{kingma2014adam}, with learning rate drawn from a uniform distribution over $[0.0008,0.001]$. The kernels we chose were radial basis function kernels, with $\gamma=2$ for the covariates' kernel, and $\gamma=1$ for the residuals' kernel.
We present the results in Table \ref{bike_sharing_variance}. Following the discussion in section \ref{Theory}, we report the \textit{variance} of the residuals in the test set. We can see that training with HSIC-loss results in better performance in 6 out of 8 experiments. In addition, unlike AR and Surgery, training with HSIC-loss does not require knowledge of the specific causal graph of the problem, nor does it require the training data to be gathered from different sources as in AR.
\begin{table}[!hbt]
\centering
\caption{Variance results on the bike sharing dataset. Each row corresponds to a training set consisting of three seasons of that year, and the variance of $Y-h(X)$ on the \textsc{target} set consisting of the fourth season is reported. In bold are the best results in each experiment, taking into account one standard error.}
\vspace{10pt}
{\small
\begin{tabular}{l@{\hskip 1mm}l@{\hskip 1mm}l@{\hskip 1mm}l@{\hskip 1mm}l@{\hskip 1mm}l}
\hline
Test data & OLS & AR & Surgery & HSIC \\
\hline
(Y1) Season 1 & \textbf{15.4}$\pm$0.02 & \textbf{15.4}$\pm$0.02 & 15.5$\pm$0.03 & 16.0$\pm$0.04 \\
Season 2 & 23.1$\pm$0.03 & 23.1$\pm$0.03 & 23.7$\pm$0.04 & \textbf{22.9}$\pm$0.03 \\
Season 3 & 28.0$\pm$0.03 & 28.0$\pm$0.03 & 28.1$\pm$0.03 & \textbf{27.9}$\pm$0.03 \\
Season 4 & 23.7$\pm$0.03 & 23.7$\pm$0.03 & 25.6$\pm$0.04 & \textbf{23.6}$\pm$0.04 \\
\hline
(Y2) Season 1 & \textbf{29.8}$\pm$0.05 & \textbf{29.8}$\pm$0.05 & 30.7$\pm$0.06 & 30.7$\pm$0.07 \\
Season 2 & 39.0$\pm$0.05 & 39.1$\pm$0.05 & 39.2$\pm$0.06 & \textbf{38.9}$\pm$0.04 \\
Season 3 & 41.7$\pm$0.05 & 41.5$\pm$0.05 & 41.8$\pm$0.05 & \textbf{40.8}$\pm$0.05 \\
Season 4 & 38.7$\pm$0.04 & \textbf{38.6}$\pm$0.04 & 40.3$\pm$0.06 & \textbf{38.6}$\pm$0.05 \\
\hline
\end{tabular}}
\label{bike_sharing_variance}
\end{table}
\subsection{Rotating MNIST}
In this experiment we test the performance of models trained on the MNIST dataset by \citet{lecun1998gradient} as the \textsc{source} distribution, and digits which are rotated by an angle $\theta$ sampled from a uniform distribution over $[-45,45]$ as the \textsc{target} distribution. Samples of the test data are depicted in the supplementary material. The standard approach to obtain robustness against such perturbations is to augment the training data with images with similar transformations, as in \citet{796} for example. However, in practice it is not always possible to know in advance what kind of perturbations should be expected, and therefore it is valuable to develop methods for learning robust models even without such augmentations. We compared training with HSIC-loss to training with cross-entropy loss, using three types of architectures. The first is a convolutional neural network (CNN): \textit{input $\to$ conv(dim=32) $\to$ conv(dim=64) $\to$ fully-connected(dim=524) $\to$ dropout(p=0.5) $\to$ fully-connected(dim=10)}. The second is a multi-layered perceptron (MLP) with two hidden layers, whose width is one of $256$, $524$, or $1024$:
\textit{input $\to$ fully-connected(dim=\{256,524,1024\}) $\to$ fully-connected(dim=\{256,524,1024\}) $\to$ dropout(p=0.5) $\to$ fully-connected(dim=10)}. The third architecture was also an MLP, except with four hidden layers instead of two.
Each experiment was repeated 20 times, and in every experiment the number of training steps (7 epochs) remained constant for a fair comparison. Each time the training set consisted of 10K randomly chosen samples.
The kernels we chose were radial basis function kernels with $\gamma=1$ for the residuals, and $\gamma=22$ for the images, chosen according to the heuristics suggested by \citet{mooij2009regression}.
The results are depicted in Figure \ref{rotated_mnist_results}. We see that for all models, moving to the \textsc{target} distribution induces a large drop in accuracy. Yet for all architectures we see that using HSIC-loss gives better performance on the \textsc{target} set compared to using the standard cross-entropy loss.
\begin{figure}[]
\centering
\includegraphics[scale=0.28]{new_rotating_mnist_rotated_test.pdf}
\caption{Accuracy on \textsc{source} and \textsc{target} test sets, with models trained with either cross entropy or HSIC-loss. Plotted are the median, 25th and 75th percentiles.}
\label{rotated_mnist_results}
\end{figure}
\subsection{Cell Out of Sample Dataset}
In the last experiment, we test our approach on the cell out of sample dataset introduced by \citet{lu2019cells}. This dataset was collected for the purpose of measuring robustness against covariate shift. It consists of $64\times 64$ microscopy images of mouse cells stained with one of seven possible fluorescent proteins (highlighting distinct parts of the cell), and the task is to predict the type of the fluorescent protein used to stain the cell. Learning systems trained on microscopy data are known to suffer from changes in plates, wells and instruments \cite{caicedo2017data}. Attempting to simulate these conditions, the dataset contains four test sets, with increasing degrees of change in the covariates' distribution, as described in Table \ref{cell_description}, adopted from \citet{lu2019cells}.
Following \cite{lu2019cells}, we trained an 11-layer CNN, DeepLoc, used in \cite{kraus2017automated} for protein subcellular localization. We followed the pre-processing, data augmentation, architecture choice, and training procedures described there, with the exception of using HSIC-loss and different learning rate when using HSIC. When computing HSIC, the kernel width was set to 1 for both kernels. Training was done for 50 epochs on 80\% of the \textsc{source} dataset, and the final model was chosen according to the remaining 20\% used as a validation set. The optimization was done with Adam \cite{kingma2014adam}, with batch size of 128, and exponential decay of the learning rate was used when training with cross-entropy loss. \citet{lu2019cells} used data augmentation during training (random cropping and random flips) and test time (prediction is averaged over 5 crops taken from the corners and center image), as this is a common procedure to encourage robustness. We compared HSIC-based models to cross-entropy based models both with and without test time augmentation. We note that \cite{lu2019cells} examined several deep net models and DeepLoc (with cross-entropy training) had the best results.
\begin{table}
\caption{Description of the source and target distributions in the cell out of sample dataset}
{\small
\begin{tabular}{l@{\hskip 1mm}|p{54mm}@{\hskip 1mm}|l}
\hline
Dataset & Description & Size \\
\hline
Source & Images from 4 independent
plates for each class & 41,456
\\
Target1 & Held out data &10,364 \\
Target2 & Same plates,
but different wells & 17,021\\
Target3 & 2 independent plates for each class,
different days &32,596
\\
Target4 & 1 plate for each class,
different day and microscope & 30,772
\\
\hline
\end{tabular}}
\vspace{-10pt}
\label{cell_description}
\end{table}
\begin{table}
\caption{Class balanced accuracy on each of the four target distributions. The last row depicts the results of training with cross-entropy as reported in \cite{lu2019cells}. HSIC-aug and CE-aug refer to experiments done with test time augmentation.}
{\small
\begin{tabular}{l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l}
\hline
Training loss & Target1 & Target2 & Target3 & Target4 \\
\hline
HSIC & 99.2 & 98.8 & 93.4 & 95.3
\\
CE & 98.4 & 98.1 & 91.7 & 93.8
\\
\hline
HSIC-aug & 99.2 & 98.9 & 93.4 & 95.4
\\
CE-aug-\cite{lu2019cells} &98.8 & 98.5 & 92.6 & 94.6
\\
\end{tabular}}
\vspace{-10pt}
\label{cell_results}
\end{table}
Table \ref{cell_results} depicts the results, showing a clear advantage of the HSIC-based model which is able to achieve new state-of-the-art results in the more difficult \textsc{target} distributions, while preserving the performance in \textsc{target} distributions closer to the \textsc{source}.
\section{Introduction}
\label{Introduction}
\input{introduction}
\section{Background and Setup}
\label{Background}
\input{background}
\section{Proposed Method}
\label{Approach}
\input{approach}
\section{Theoretical Results}
\label{Theory}
\input{theory}
\section{Related Work}
\label{Related}
\input{related_work}
\section{Experimental Results}
\label{Experiments}
\input{experiments}
\section{Conclusion}
\label{Conclusion}
\input{conclusion}
\section{Acknowledgments}
\label{Acknowledgments}
\input{acknowledgments}
\subsection{Lower Bound}
We first relate HSIC-loss to standard notions of model performance: we show that under mild assumptions, the HSIC-loss is an upper bound to the variance of the residual $f^{\star}\left(X\right)-h\left(X\right)$. The additional assumption is that for all $h\in \mathcal{H}$, $f^\star -h$ is in the closure of $\mathcal{F}$, and that $\mathcal{G}$ contains the identity function from $\mathbb{R}$ to $\mathbb{R}$. This means that $M_\mathcal{F}$ acts as a measure of the complexity of the true function $f^\star$ that we are trying to learn. Note, however, that this does not imply $f^\star \in \mathcal{H}$; rather, it is an assumption on the kernel space used to calculate the HSIC term.
\begin{theorem} \label{lower bound}
Under the conditions specified above:
{\small
\begin{align*}
\mathbb{V}\text{ar}(f^\star(X)-h(X)) \le M_\mathcal{F} \cdot M_\mathcal{G} \cdot \text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}).
\end{align*}}
\end{theorem}
Recalling the bias-variance decomposition:
\begin{align*}
&\mathbb{E}\left[\left(Y-h\left(X\right)\right)^{2}\right]=\\
&\mathbb{V}\text{ar}\left(f^{\star}\left(X\right)-h\left(X\right)\right)+\left(\mathbb{E}\left[f^{\star}\left(X\right)-h\left(X\right)\right]\right)^{2}+\mathbb{E}\left[\varepsilon^{2}\right],
\end{align*}
we see that the HSIC-loss minimizes the variance part of the mean squared error (MSE). To minimize the entire MSE, the learned function should be adjusted by adding a constant which can be inferred from the empirical mean of $Y$.
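Both the decomposition and the constant adjustment are easy to verify numerically (sketch ours; the model and noise below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200000
x = rng.uniform(-1.0, 1.0, n)
eps = rng.normal(0.0, 0.5, n)              # zero-mean noise, independent of x
f_star = np.sin(np.pi * x)                 # true regression function
h = 0.8 * np.sin(np.pi * x) + 0.3          # a deliberately miscalibrated model
y = f_star + eps

# MSE vs. variance + squared bias + noise power
mse = np.mean((y - h) ** 2)
decomp = np.var(f_star - h) + np.mean(f_star - h) ** 2 + np.mean(eps ** 2)

# Adjusting h by the constant E[Y - h(X)] removes the squared-bias term
c = np.mean(y - h)
mse_adjusted = np.mean((y - h - c) ** 2)
```

The two sides of the decomposition agree up to the $O(1/\sqrt{n})$ sampling error of the cross term, and the constant shift never increases the empirical MSE.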
\subsubsection{The Realizable Case}
If $h \in \mathcal{H}$ has HSIC-loss equal to zero, then up to a constant term, it is the correct function:
\begin{corollary} \label{right function}
Under the assumptions of Theorem \ref{lower bound}, we have the following:
\begin{align*}
\text{HSIC}\left(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}\right)=0 \Rightarrow h\left(X\right)=f^{\star}\left(X\right)+c,
\end{align*}
almost everywhere.
\end{corollary}
\begin{proof}
From Theorem \ref{lower bound}, we have that
\begin{align*}
&\text{HSIC}\left(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}\right)=0 \implies \\
&\mathbb{V}\text{ar}\left(f^{\star}\left(X\right)-h\left(X\right)\right) =0 ,
\end{align*}
therefore $f^{\star}\left(X\right)-h\left(X\right)$ must be a constant up to a zero-probability set of $X$.
\end{proof}
\subsection{Robustness Against Covariate Shift}
Due to its formulation as a supremum over a large set of functions, the HSIC-loss is an upper bound to a natural notion of robustness. This notion, which will be formalised below, captures the amount by which the performance of a model might change as a result of a covariate shift, where the performance is any that can be measured by some $\ell \in \mathcal{G}$ applied on the residuals $Y-h(X)$. In this subsection we denote the functions in $\mathcal{G}$ as $\ell$ instead of $t$, to emphasize that we now think of $\ell(r)$ as possible loss functions acting on the residuals.
We consider two different ways of describing a target distribution which is different from the source.
The first is by specifying the density ratio between the target and source distributions. This is useful when the support of the distribution does not change but only the way it is distributed. A second type of covariate shift is due to restricting the support of the distribution to a certain subset. This can be described by an indicator function which states which parts of the source domain are included. The following shows how the HSIC-loss is an upper bound to the degradation in model performance in both covariate shift formulations.
We start with the latter case. For a subset $A\subset \mathcal{X}$ of positive measure, the quantity comparing a model's performance in terms of the loss $\ell$ on the source distribution and the same model's performance when the target distribution is restricted to $A$, is as follows:
\begin{equation*}
\frac{1}{\mathbb{E}\left[1_A(x)\right]}\mathbb{E}\left[1_A(x)\ell\left(y-h\left(x\right)\right)\right]- \mathbb{E}\left[\ell\left(y-h\left(x\right)\right)\right].
\end{equation*}
For $\delta, c>0$ let $\mathcal{W}_{c,\delta}$ denote the family of subsets $A$ with source probability at least $c$ s.t. there exists some $s\in \mathcal{F}$ which is $\delta$-close to $1_A$:
\begin{equation*}
\mathcal{W}_{c,\delta}=\{ A \subset \mathcal{X} | \exists s \in \mathcal{F} \text{ s.t. } \|1_A-s\|_{\infty} \le \delta, \, \mathbb{E}\left[1_A(x)\right] \ge c \}.
\end{equation*}
All these subsets can be approximately described by functions from $\mathcal{F}$. The complexity of such subsets is naturally controlled by $M_\mathcal{F}$.
\begin{theorem}
\label{subgroup}
Let $\ell \in \mathcal{G}$ be a non-negative loss function, and let $\delta, c>0$. Then:
\begin{align*}
&\sup_{A\in \mathcal{W}_{c,\delta}} \frac{1}{\mathbb{E}\left[1_A(x)\right]}\mathbb{E}\left[1_A(x)\ell\left(y-h\left(x\right)\right)\right] \le \\ &\frac{M_\mathcal{F} M_\mathcal{G} \text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}})}{c} \\&+ \left(\frac{2\delta}{c}+1\right)\mathbb{E}\left[\ell\left(y-h\left(x\right)\right)\right].
\end{align*}
\end{theorem}
Theorem \ref{subgroup} states that the degradation in performance due to restricting the support of the distribution to some subset is bounded by terms related to the size of the set and the ability to represent it by $\mathcal{F}$. Compare this to the following naive bound:
\begin{equation*}
\sup_{A\in \mathcal{W}_{c,\delta}} \frac{\mathbb{E}\left[1_A(x)\ell\left(y-h\left(x\right)\right)\right]}{\mathbb{E}\left[1_A(x)\right]} \le \frac{\mathbb{E}\left[\ell\left(y-h\left(x\right)\right)\right]}{c}.
\end{equation*}
Unlike the HSIC-loss, the naive bound fails to account for how the loss is distributed across different subsets of $\mathcal{X}$, which leads to poor generalization guarantees. Indeed, the naive bound will not be tight for the original function, i.e. $h=f^\star$, but the HSIC-based bound will be tight whenever $\delta \ll c$.
Returning to the first way of describing covariate shifts, we denote by $P_{\text{source}}(X)$ the density function of the distribution on $\mathcal{X}$ from which the training samples are drawn, and $P_{\text{target}}(X)$ the density of an unknown target distribution over $\mathcal{X}$.
\begin{theorem} \label{robustness}
Let $\mathcal{Q}$ denote the set of density functions on $\mathcal{X}$ which are absolutely continuous w.r.t. $P_{\text{source}}(X)$, and their density ratio is in $\mathcal{F}$:
\begin{align*}
\mathcal{Q} =
\bigg\{& P_\text{target}:\mathcal{X} \rightarrow \mathbb{R}_{\geq 0} \quad \text{s.t. } \mathbb{E}_{x\sim P_\text{target}}\left[1\right]=1,\\
& \mathbb{E}_{x\sim P_\text{source}}\left[\frac{P_\text{target}(x)}{P_\text{source}(x)}\right]=1, \frac{P_\text{target}}{P_\text{source}} \in \mathcal{F} \bigg\}.
\end{align*}
Then,
{\small
\begin{align*}
&\sup_{\substack{P_\text{target} \in \mathcal{Q}\\\ell\in\mathcal{G}}} \mathbb{E}_{x\sim P_\text{target}} [\ell(Y-h(X))] - \mathbb{E}_{x\sim P_\text{source}} [\ell(Y-h(X))] \\
&\le M_\mathcal{F} \cdot M_\mathcal{G} \cdot \text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}).
\end{align*}}
\end{theorem}
Here, HSIC is evaluated on the training distribution $P_\text{source}$.
Combining Theorem \ref{robustness} and the lower bound of Theorem \ref{lower bound}, we obtain the following result:
\begin{corollary}
\label{thm:together}
Under the same assumptions of Theorems \ref{lower bound} and \ref{robustness}, further assume that the square function $x\mapsto x^2$ belongs to $\mathcal{G}$ or its closure.
Denote: $\delta_{\text{HSIC}}(h) = \text{HSIC}(X,Y-h(X);\Tilde{\mathcal{F}},\Tilde{\mathcal{G}}) $, $\text{MSE}_{P_\text{target}}(h) = \mathbb{E}_{P_\text{target}} [(Y-h(X))^2]$, \, $\text{bias}_\text{source}(h) = \mathbb{E}_{ P_\text{source}} [f^\star(x)-h(x)]$, and $\sigma^2 = \mathbb{E}[\varepsilon^2]$.
Then:
\begin{align*}
&\sup_{P_\text{target} \in \mathcal{Q}} \text{MSE}_{{P}_\text{target}}(h) \\
\le &2M_\mathcal{F}\cdot M_\mathcal{G} \cdot\delta_{\text{HSIC}}(h) + \text{bias}_\text{source}(h)^2 + \sigma^2.
\end{align*}
\end{corollary}
Theorem \ref{robustness} and Corollary \ref{thm:together} show that minimizing HSIC minimizes an upper bound on the worst-case loss relative to a class of target distributions whose complexity is determined by the RKHS norm bound $M_\mathcal{F}$. Compared to a naive bound based on the infinity norm of the density ratio, this bound is much tighter when considering $f^\star$, for example, and by continuity for functions near it. Further discussion can be found in the supplementary material.
\subsection{Learnability: Uniform Convergence}
By formulating the HSIC learning problem as a learning problem over pairs of samples from $\mathcal{X} \times \mathcal{X}$ with specially constructed labels, we can reduce the question of HSIC learnability to a standard learning theory problem (\citet{mohri2018foundations}, Ch 3). We use this reduction to prove that it is possible to minimize the HSIC objective on hypothesis classes $\mathcal{H}$ with bounded Rademacher complexity $\mathcal{R}_n(\mathcal{H})$ using a finite sample.
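A minimal sketch of the standard biased empirical HSIC estimator with Gaussian kernels may help make the finite-sample quantity concrete; the bandwidths and toy data below are illustrative assumptions, not choices made in the text:

```python
import numpy as np

def gaussian_gram(v, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) Gram matrix.
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic_biased(x, r, sigma_x=1.0, sigma_r=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n-1)^2,
    with H = I - (1/n) 11^T the centering matrix."""
    n = len(x)
    K = gaussian_gram(x, sigma_x)
    L = gaussian_gram(r, sigma_r)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
eps = rng.normal(scale=0.1, size=200)

# Residuals of a wrong model retain dependence on x; residuals of the
# correct model are (nearly) independent noise.
print(hsic_biased(x, x + eps))   # residuals depend on x: large value
print(hsic_biased(x, eps))       # residuals ~ independent: small value
```

Minimizing this quantity over $h$ drives the residuals toward independence from $X$, which by Corollary \ref{right function} recovers $f^{\star}$ up to a constant.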
\begin{theorem} \label{leanability_thm}
Suppose the residuals' kernel $k$ is bounded in $[0,1]$ and satisfies the following condition: $k(r,r^\prime)=\iota(h(x)-h(x^\prime),y-y^\prime)$, where $\iota:\,\mathbb{R}\times\mathcal{Y}\to\mathbb{R}$ is s.t. $\iota(\cdot, y)$ is an $L_\iota$-Lipschitz function for all $y$. Let $C_1=\sup_{x,x^\prime}l(x,x^\prime)$, $C_2=\sup_{r,r^\prime}k(r,r^\prime)$. Then, with probability at least $1-\delta$, the following holds for all $h\in\mathcal{H}$:
{\small\begin{align*}
&\left|\text{HSIC}\left(X,Y-h(X);\mathcal{F},\mathcal{G}\right) - \widehat{\text{HSIC}}\left(\{(x_i,r_i)\}_{i=1}^n;\mathcal{F},\mathcal{G}\right)\right| \\
\le &3C_1\left( 4L_\iota\mathcal{R}_n(\mathcal{H})+O\left(\sqrt{\frac{\ln(1/\delta)}{n}}\right)\right) + 3C_2C_1\sqrt{\frac{\ln(2/\delta)}{2n}}.
\end{align*}}
\end{theorem}

\section{Introduction}
One of the remarkable phenomena in condensed matter
physics is the universality of properties related to phonon attenuation at
temperatures smaller than $\approx 3$ K in a large class of materials, ranging
from amorphous solids, to disordered lattices, disordered polymers, and
quasi-crystals\cite{ZP71,HR86,PLT02}. The fact that universality is
quantitative, both in the magnitude of phonon attenuation and in the energy
scale dictating the temperature below which the phenomenon is observed,
together with the broad range of systems exhibiting it, attests to the presence
of a mechanism, pertaining to the disordered state itself, that dictates phonon
attenuation in disordered systems. Exceptions to universality are rare, and have been observed
only in two-dimensional films under special conditions; hydrogenated silicon
films\cite{LWP+97} and silicon nitride films under applied stress\cite{SBV+09}.
Theoretically, much of the characteristics of disordered solids can be
understood within the ``standard tunneling model''
(STM)\cite{AHV72,Phi72,Jac72}, which introduces tunneling two-level systems
(TLSs), whose interaction with the phonon field dominates phonon attenuation.
Within the STM all phonon attenuation properties are given in terms of the
``tunneling strength'' $C_0 \approx 0.1 n \gamma^2/(\rho v^2)$, where $n$ is the
density of states (DOS) of the TLSs, $\gamma$ is their interaction constant
with the phonon field (strain), $\rho$ is the mass density, and $v$ is the
acoustic velocity. However, the STM cannot explain the nature of the
TLSs, why phonon attenuation is universal among different systems, or the
origin of the energy scale of $\approx 3$ K dictating the universality regime.
Attempts to understand the nature of the TLSs include ``top-down'' approaches,
trying to construct a theory which relies only on the glassy state of
matter\cite{YL88,Par94,BNOK98,LW01,Kuh03,PSG07}, and ``bottom-up'' approaches,
attempting to identify the relevant TLSs in a given system, and then generalize
to all disordered systems showing universality\cite{SC85,YKMP86,GRS88,SK94}.
Specifically, the KBr:CN system was scrutinized, and it was suggested early
on\cite{SC85,SK94} that it is the flipping of the CN$^-$ impurities (and not,
e.g., their rotations) that makes for the relevant TLSs dictating the universal
properties in this system. To test this idea an experiment was carried out on the
Ar:N$_2$ glass, which does not possess single impurities that can flip, with
and without the addition of CO impurities\cite{NYHC87}. The fact that the
linear term in the specific heat, corresponding to the DOS of the TLSs, had
little if any dependence on the CO concentration, was interpreted as a
refutation of the molecular flips as being the relevant TLSs dictating
universality. Little attention was paid to the fact that Ar:N$_2$ did not
exhibit universal phonon attenuation as observed in its thermal
conductivity\cite{YNH89}.
Recently, the fact that CN$^-$ flips in KBr:CN indeed constitute the TLSs
dictating universal phonon attenuation and the linear specific heat
at $T \lesssim 3$ K has received overwhelming support. Using Density Functional
Theory and ab-initio calculations it was shown that the coupling of CN$^-$
flips to the strain is $\gamma_{\rm f}^{{\rm CN}^-} \approx 0.1$ eV, whereas the
coupling of CN$^-$ rotations to the strain is $\gamma_{\rm r}^{{\rm CN}^-} \approx
3$ eV\cite{GS11}, the former value agreeing with the experimental value
obtained for the relevant TLSs at low temperatures\cite{BDL+85,YKMP86}.
Furthermore, the DOS of CN$^-$ flips and CN$^-$ rotations was calculated
numerically from first principles\cite{CBS14}, where it was shown that CN$^-$
flips are abundant at energies $\lesssim 3$ K, whereas CN$^-$ rotations are
scarce below $\approx 10$ K, as they are gapped by the weakly interacting
flips. All these findings are in excellent agreement with the recently
introduced two-TLS model\cite{SS09,CGBS13}, which derives the universality of
phonon attenuation and the energy scale of $\approx 3$ K for the universal
regime as a consequence of the existence of two classes of TLSs
differentiated by their interaction with the strain, based on the symmetry
of the TLSs under inversion. Yet, the above mentioned results in KBr:CN and
the two-TLS model seem to be at odds with the experimental results in
Ar:N$_2$:CO, as the latter suggest that the CO flips have no significant
contribution to the low energy properties of Ar:N$_2$:CO.
In this paper we reconcile this alleged discrepancy.
Using a discrete atomic model employing the Lennard Jones (LJ) potential, as
well as density function theory (DFT), we calculate the TLS-strain interaction
constant for various TLSs in both Ar:N$_2$ and Ar:N$_2$:CO systems. For the
latter we find that indeed CO flips have a much weaker interaction with the
phonon field compared to all other excitations studied, in agreement with the
two-TLS model, and similar to CN$^-$ flips in KBr:CN. However, because of the
untypical softness of the Ar:N$_2$:CO lattice, and the resulting smallness of
TLS-phonon interactions (both $\gamma_{\rm f}^{\rm CO}$ and $\gamma_{\rm
r}^{\rm CO}$ are smaller than $\gamma_{\rm f}^{{\rm CN}^-}$ and $\gamma_{\rm
r}^{{\rm CN}^-}$, by a factor of $\approx 3$ and by an order of magnitude,
respectively), we expect universality in Ar:N$_2$:CO to appear only below $T
\approx 0.1$ K. For Ar:N$_2$, where single impurity flips are absent, we find
a related absence of a distinct class of TLSs weakly interacting with the
strain. We then study the DOS of TLS bias energies for the different TLS
configurations, and find again no distinct class of TLSs typified by low bias energies.
Since Ar:N$_2$ does not fulfill the necessary conditions for universality as are
suggested by the two-TLS model\cite{SS09}, and in view of its low temperature thermal
conductivity as was obtained experimentally, we argue that Ar:N$_2$ constitutes a
first example of a non-universal strongly disordered glass in three dimensions and
at ambient pressure.
\section{Methods}
\label{Methods}
For most of our calculations, we have employed a model developed expressly for
this purpose. It is based on the iterative solution, using
previously described numerical methods\cite{kwon,burc}, of a mesh of non-linear
springs with a Lennard-Jones ($r^{-6}$--$r^{-12}$) potential\cite{lejo} describing all
the interactions. In particular, we assumed the following Lennard-Jones parameters:
$\epsilon_{Ar-Ar}=3.7936\cdot10^{-4}\,{\rm E_h}$, $\sigma_{Ar-Ar}=6.3302\,r_B$;\cite{hoover}
$\epsilon_{Ar-N}=2.1263\cdot10^{-4}\,{\rm E_h}$, $\sigma_{Ar-N}=6.3306\,r_B$;\cite{nielaba}
$\epsilon_{N-N}=0.3601\,{\rm E_h}$, $\sigma_{N-N}=2.0749\,r_B$;\cite{raman}
$\epsilon_{N_2-N_2}=1.3916\cdot10^{-4}\,{\rm E_h}$, $\sigma_{N_2-N_2}=6.3136\,
r_B$,\cite{johnson} taking measures to deal with the numerical instabilities
that arise from the double N-N potential (intramolecular and intermolecular).
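As an illustration of the pair interactions entering the spring mesh, the following sketch evaluates the standard 12-6 Lennard-Jones form with the Ar-Ar parameters quoted above (atomic units); it is a generic reimplementation, not the authors' actual solver:

```python
# Ar-Ar Lennard-Jones parameters quoted above (Hartree and Bohr radii).
EPS_AR_AR = 3.7936e-4      # epsilon, in E_h
SIGMA_AR_AR = 6.3302       # sigma, in r_B

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair potential, V(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma and has its minimum (depth -eps)
# at r = 2^(1/6) sigma, i.e. near 7.1 r_B for the Ar-Ar pair.
r_min = 2 ** (1.0 / 6.0) * SIGMA_AR_AR
```

The same functional form, with the pair-specific $(\epsilon,\sigma)$ listed above, is used for every bond of the mesh.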
Starting from a pristine Ar lattice (a pure Ar network with crystallographic
positions\cite{nielaba}), we very gradually raise the concentration of N$_2$,
adding them one by one. We do this up to a limit of 20\% Ar : 80\% N$_2$. As
the Ar:N$_2$ ratio is decreased, and to minimize artificial structural stress,
we adjust the lattice intersite distance to match the real density of Ar:N$_2$
mixtures.
In the N$_2$ enrichment process of the Ar lattice the goal is to minimize the
energy and at the same time to obtain a variety of structures so that the end
result is statistically representative. To achieve this, each new N$_2$
impurity added takes a random position and orientation, but every time that we
are about to add a new impurity, we choose up to 50 random positions and
orientations for it, starting from the same equilibrium lattice. For each of
these 50 possibilities, the structure of the lattice is relaxed, allowing the
positions of all atoms and the orientations of all molecules to adjust
slightly. Note that in the very first step, when the previous position was the
pristine Ar lattice, this generates 50 second-step structures, branching out
the procedure. Subsequent steps do not continue with the branching and instead
optimize the energy: for each one of the 50 structures with $n$ impurities we
obtain 50 with $n+1$, but keep only the most stable one.
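The enrichment loop just described is, in essence, a greedy best-of-50 random search. A toy sketch of its control flow (with a made-up scalar energy standing in for the relaxed spring-mesh energy, so all quantities here are purely illustrative):

```python
import random

def toy_energy(impurities):
    """Stand-in for the relaxed mesh energy on a 1-D chain of sites:
    each impurity costs 0.1 and each adjacent impurity pair costs 1.0."""
    s = sorted(impurities)
    return 0.1 * len(s) + sum(1.0 for a, b in zip(s, s[1:]) if b - a == 1)

def enrich(n_sites, n_impurities, n_candidates=50, seed=0):
    """Greedy enrichment: for every new impurity, try n_candidates random
    placements and keep only the lowest-energy resulting configuration."""
    rng = random.Random(seed)
    placed = []
    for _ in range(n_impurities):
        free = [s for s in range(n_sites) if s not in placed]
        trials = [placed + [rng.choice(free)] for _ in range(n_candidates)]
        placed = min(trials, key=toy_energy)
    return placed
```

In the actual calculation each trial additionally carries a random orientation and is structurally relaxed before its energy is compared, and the very first step branches into 50 independent histories instead of keeping a single winner.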
We end up with 50 different low-energy configurations for each Ar:N$_2$ ratio.
For each of these, a Monte-Carlo procedure is used allowing for random
orientation tunneling of each N$_2$ molecule (25 sweeps) to further lower
the energy. After each Monte-Carlo step the structure is relaxed. At the end of
the Monte-Carlo procedure, the system is ready to undergo the different kinds of
excitations described below.
The efficiency of this model allows the use of rather large fragments, a
continuous exploration of the range of Ar:N$_2$ ratios, and the generation of a
statistically significant number of low-energy configurations and all their
relevant excitations. We performed calculations both for 2D and 3D (hcp and
fcc) structures, but focused mainly on 2D, where reaching good low-energy
states is easier because of the smaller number of stable N$_2$ orientations
($6$ in 2D vs. $12$ in 3D), and at the same time the constraints for finding
centrosymmetric TLSs are less stringent than in 3D, see below. More
details about the model can be found in a dedicated article~\cite{vicente}.
\begin{figure} [htb]
(a)\includegraphics[width=0.95\columnwidth]{ArN2_0a.eps}
\begin{tabular}{ccccc}
& & \\
(b)\includegraphics[width=0.27\columnwidth]{ArN2_3.eps} &
(c)\includegraphics[width=0.27\columnwidth]{ArN2_1.eps} &
(d)\includegraphics[width=0.27\columnwidth]{ArN2_2.eps} \\
\end{tabular}
\caption{(a) Initial configuration, highlighting the central hexagon where
excitations take place. (b)-(d) Different excited states starting from the same
initial state configuration. (b) flip-flop (c) Ar-tunneling (d) rotation.}
\label{excitations}
\end{figure}
To avoid border effects, we only study excitations within the seven central
positions of a 9x9 lattice (Fig. \ref{excitations} (a)). Within these seven
positions, we consider all possible N$_2$-N$_2$ (or Ar-N$_2$) pairs and the
three types of excitations depicted in Fig.~\ref{excitations} (b), (c), (d) and
detailed in the next section.
For the purposes of evaluating the density of states, we define the excitation
energy $E_{\rm bias}$, Eq.~\eqref{eq:Edef}, as the difference between the calculated
potential energies of the excited state $V_e$ and its corresponding ground
state $V_g$:
\begin{equation}
\label{eq:Edef}
E_{\rm bias}=V_{e}-V_{g} \, .
\end{equation}
To evaluate the TLS-phonon coupling $\gamma$, Eq.~\eqref{eq:gamdef}, we apply a 0.5\% mesh
contraction to obtain the difference between potential energies
\begin{equation}
\label{eq:gamdef}
\gamma=\left(V^{ph}_{g}-V_{g}\right)-\left(V^{ph}_{e}-V_{e}\right) \, .
\end{equation}
Here the superscript $ph$ indicates the system after
the mesh contraction mimicking the phonon, and the subscripts $g$ and $e$ indicate the ground and excited states, respectively.
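A trivial code transcription of Eqs.~\eqref{eq:Edef} and \eqref{eq:gamdef} makes one identity explicit: $\gamma$ is exactly minus the change of $E_{\rm bias}$ under the contraction. The numerical energies below are made up for illustration:

```python
def e_bias(v_g, v_e):
    """Eq. (1): excitation (bias) energy E_bias = V_e - V_g."""
    return v_e - v_g

def gamma(v_g, v_e, v_g_ph, v_e_ph):
    """Eq. (2): gamma = (V_g^ph - V_g) - (V_e^ph - V_e),
    i.e. minus the change of E_bias under the 0.5% mesh contraction."""
    return (v_g_ph - v_g) - (v_e_ph - v_e)

# Illustrative (made-up) energies, e.g. in eV:
v_g, v_e = -10.000, -9.990          # E_bias = 10 meV
v_g_ph, v_e_ph = -9.700, -9.689     # same TLS in the contracted mesh
```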
We have employed two different protocols: in the first the system is relaxed
after the mesh contraction, and in the second it is not relaxed. Whereas individual $\gamma$ values
differ between these two protocols, the differences between
the two protocols in the statistical distribution of $\gamma$
values, as plotted in Fig.~\ref{gammas}, were found to be negligible.
For a few calculations on small 2D fragments, we employed the same DFT methods
as presented in~\cite{GS11} to extract the values of the TLS-phonon interaction
energy $\gamma$ of the different TLSs as explained above. We also use this
methodology to extract the values of the TLS-phonon interaction and estimate
$\gamma$ for CO head-tail flips (exactly analogous to the CN$^-$ flips
in~\cite{GS11}).
\section{Results}
As an initial exploration, we start from pristine Ar networks, either 2D or 3D,
and substitute N$_2$ for Ar at certain positions and in certain orientations.
Then we minimize the energy allowing only small displacements, and for each
starting configuration we arrive at a unique (local) ground state, in the sense
that we found no off-center displacement in any single N$_2$ molecule or in any
single Ar atom that could give rise to a centrosymmetric TLS.
On the other hand we did find pairs of low-energy configurations that relate to
each other through an inversion center, which we denote $\tau$-TLSs. These involve, in the
simplest case, the exchange of orientation of two non-parallel neighbouring
N$_2$, which we label as {\it flip-flop}. Note that this is actually a
fragile, environment-dependent phenomenon, where the low-energy part of a
complex spectrum of two or more non-centrosymmetric (denoted $S$-type) TLSs happens to
take the form of a $\tau$-TLS. This is a fundamental difference from the
CN$^-$ case, where each molecule has an intrinsic, built-in $\tau$-TLS in the
form of a CN$^-$ flip.
Because of this, the variable influence of an extra neighbouring N$_2$ results
in a wide spectrum for the interaction strength of flip-flop excitations
with the strain, ranging from $\gamma_{\rm f}$ to $\gamma_{\rm r}$.
A different, similarly fragile, $\tau$-TLS can be
defined if we consider a tunneling process where an Ar atom and an N$_2$
molecule exchange positions. This we label as {\it Ar-tunnel}. Finally, any
change in orientation of a single given N$_2$ molecule always constitutes an
S-TLS. Except where otherwise stated, in our calculations we choose an
orientation change such that an N$_2$ adopts the orientation of a neighbouring,
non-parallel N$_2$, and label this as {\it rotation}. An example of each of
these three collective TLSs is shown in Figure~\ref{excitations}.
To evaluate the character of these collective TLSs in realistic situations, we
proceed with a systematic exploration at increasing concentrations of N$_2$ as
detailed in section \ref{Methods} and in reference~\cite{vicente}.
As can be seen in Fig.~\ref{gammas}, we find for all TLSs broad distributions
of coupling constants and bias energies, with typical values of $\gamma\approx 0.3-0.5$ eV and
$E_{\rm bias} \approx 10$ meV, rather than markedly distinct behaviors for different TLS types,
as would be expected within the two-TLS ($\tau$-$S$) model and as was observed for KBr:CN
\cite{GS11}. Of course, for every kind of TLS there are particular TLSs which
are almost uncoupled to phonons coming from a particular direction. Note that
in our model the phonon compression is not randomized, but is instead applied
in one of the special directions of the lattice. Therefore, it is expected that
there is a sub-class within each type of TLSs that presents low values of $\gamma$ to
phonons coming from a special direction. The difference with the KBr:CN case
is the absence of two full (symmetry-defined) classes of TLSs $\tau$, $S$ where
the phonon coupling $\gamma$ is, for any direction, much lower in $\tau$ TLSs
compared with $S$ TLSs. The absence of such a class of TLSs is caused by
the dramatic perturbation of the
centrosymmetric TLSs by the nearest neighbours in the Ar:N$_2$ system. While
the interaction energies are the same in 2D and 3D, there are 12
nearest-neighbours in 3D compared with just 6 in 2D, meaning that the
centrosymmetric nature of collective TLSs will be, if at all, even more fragile
in 3D. We thus expect a similar wide distribution of the coupling
constants also for Ar:N$_2$ in three dimensions.
Notably, we also find that the obtained values for the TLS-strain
interaction strengths
are small compared with the $\gamma_{\rm r}$ determined for KBr:CN in a previous
work~\cite{GS11}. We have to emphasize that, up to a point, this was to be
expected. Indeed, N$_2$ and Ar (and CO, for that matter) are very different
from the vast majority of substances, in that they are formed by molecules that
are very weakly bound to each other, compared with almost anything else. The
main interactions among these molecules are van der Waals rather than
electrostatic. Besides causing their extremely low boiling and melting
temperatures, this also results in the low energy of their crystal defects. As
an example, the energy of a vacancy-type crystal defect in Ar is $< 0.1 {\rm
eV}$, while for iron --which is malleable-- this energy is $> 1{\rm eV}$.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.450\textwidth]{flipflopdos0208.eps} & \includegraphics[width=0.450\textwidth]{flipflopfonon_n} \\
\includegraphics[width=0.450\textwidth]{tunneldos0208.eps} &
\includegraphics[width=0.450\textwidth]{flipfonon_n} \\
\includegraphics[width=0.450\textwidth]{rotationdos0208.eps} & \includegraphics[width=0.450\textwidth]{flipmedioflopfonon_n} \\
\end{tabular}
\caption{Numerical results of 50 independent histories with Ar:N$_2$
ratios ranging from 0.8:0.2 to 0.2:0.8. Left: density of states of bias energies
[Eq.~\eqref{eq:Edef}]. Right: frequency of encountered values of $\gamma$ [Eq.~\eqref{eq:gamdef}]. Excitation
types from top to bottom: flip-flop; Ar-tunnel; rotation [see Fig.~\ref{excitations}].}
\label{gammas}
\end{figure*}
Nevertheless, to further test this smallness of $\gamma$, we construct
smaller lattices which we treat with the DFT methodology that
was originally used for the KBr:CN system \cite{GS11}. We employ the B3LYP functional and
the 6-311G basis set, plus the larger 6-311+G* basis set to verify the
results. We confirm that the obtained $\gamma_{\rm r}$ are an order of magnitude
smaller than those obtained by the same methodology in KBr:CN (see
table~\ref{DFT-gammaS}).
\begin{table}[h]
\caption{$\gamma_{\rm r}$(rotation) in different configurations. Problem A: 3x3
pristine Ar lattice with a central N$_2$; A2 is the same lattice but using a
6-311+G* basis set. Problem B: 5x5 pristine Ar lattice with a central N$_2$.
Problem C: 5x5 pristine Ar lattice with three nearest-neighbour N$_2$ in the
central line, oriented parallel to each other and perpendicular to the line
defined by their centers.}
\begin{tabular}{c|c|c|c|c}
& A & A2 & B & C \\
\hline
$\gamma_{\rm r}$ (DFT,eV) & 0.60 & 0.56 & 0.320 & 0.22 \\
\end{tabular}
\label{DFT-gammaS}
\end{table}
We then used the
same methodology to estimate $\gamma$ for CO flips in Ar:N$_2$:CO samples.
We have found that $\gamma_{\rm f}$ is an order of magnitude smaller than $\gamma_{\rm r}$,
and that, as
in the case of $\gamma_{\rm r}$, the values of $\gamma_{\rm f}$ are at least a
factor of three below the values found for flip excitations in KBr:CN
(Table~\ref{CO}). The tests were done on pristine 5x5 Ar fragments where some
of the Ar atoms in the central hexagon were substituted by N$_2$ molecules at
random orientations. In each configuration, we applied either horizontal or
vertical phonons. For convenience and brevity, we label the N$_2$ impurities
according to their clock positions and orientations.
\begin{table}[h]
\caption{$\gamma_{\rm f}$ for a CO impurity in the center of different 5x5
fragments. The low-state impurity is defined by its clock-orientation (e.g.
3=horizontal). All N$_2$ impurities are in the first hexagon and thus are
uniquely defined by their clock-position and -orientation, in that order.}
\begin{tabular}{c|c|c}
(CO)(N$_2$)(N$_2$)(N$_2$) & $\gamma_{\rm f,h}$ (meV) & $\gamma_{\rm f,v}$ (meV) \\
\hline
(3)(1,3);(5,3);(9,6) & 31.8 & 34.6 \\
(3)(1,3);(3,4);(5,1) & 33.7 & 47.6 \\
(5)(1,3);(3,3);(9,6) & & 6.3 \\
\end{tabular}
\label{CO}
\end{table}
\section{Discussion and conclusions}
KBr:CN is one of many disordered lattices showing low temperature phonon
attenuation properties which are equivalent in both functional form and
magnitude to those observed in all amorphous solids, and are thus dubbed
universal. For KBr:CN it was shown that universality is intimately related to
the existence of bi-modality in the typical values of the TLS-strain
coupling; inversion symmetric (flips, ``$\tau$") TLSs having a typically weak
interaction with the strain and inversion asymmetric (rotations, ``S") TLSs
having a typically strong interaction with the strain. The ratio between the
two interaction constants, $g \equiv \gamma_{\rm f}/\gamma_{\rm r}$, dictates
the ratio between the typical bias energies of $\tau$-TLSs and S-TLSs, and
consequently the universality and smallness of the tunneling strength. The
energy scale related to the temperature below which universality is observed is
given by the typical bias energy of the $\tau$-TLSs: $\gamma_{\rm f}
\gamma_{\rm r}/(\rho v^2 R_0^3)$, where $R_0$ is the typical distance between
impurities. This energy, being $g$ times smaller than the glass temperature, is
typically a few Kelvin.
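To put numbers to this scale: with the KBr:CN couplings $\gamma_{\rm f}\approx 0.1$ eV and $\gamma_{\rm r}\approx 3$ eV quoted earlier, and order-of-magnitude guesses for the remaining parameters (the mass density, sound velocity, and inter-impurity distance below are our assumptions, not values given in the text), $\gamma_{\rm f}\gamma_{\rm r}/(\rho v^2 R_0^3)$ indeed comes out at a few Kelvin:

```python
# Order-of-magnitude estimate of the universality energy scale
# gamma_f * gamma_r / (rho * v^2 * R_0^3), expressed in Kelvin.
EV = 1.602176634e-19      # J per eV
K_B = 1.380649e-23        # J per K

gamma_f = 0.1 * EV        # flip coupling, from the text (KBr:CN)
gamma_r = 3.0 * EV        # rotation coupling, from the text (KBr:CN)
rho = 2.75e3              # kg/m^3, assumed KBr-like mass density
v = 2.0e3                 # m/s, assumed transverse sound velocity
r0 = 2.0e-9               # m, assumed typical inter-impurity distance

t_univ = gamma_f * gamma_r / (rho * v**2 * r0**3) / K_B
print(f"T_univ ~ {t_univ:.1f} K")   # prints: T_univ ~ 6.3 K
```

The estimate is sensitive to the assumed $R_0$ (it scales as $R_0^{-3}$), but for reasonable impurity spacings it lands in the few-Kelvin range quoted above.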
As we found for the KBr:CN system\cite{GS11}, we find here also for the
Ar:N$_2$:CO system a bi-modality in the values of the interaction
constants of TLSs with the strain, where CO flips constitute a distinct
group of weakly interacting TLSs, {\it i.e.} $\gamma_{\rm f}$ has a typical
value much smaller than all other calculated TLS-phonon coupling constants,
including that of CO rotation. Thus, we expect the DOS of TLSs in Ar:N$_2$:CO
to show a similar structure to that found in KBr:CN\cite{CBS14}, where the
$\tau$-TLSs dominate the spectrum at low energies, have a roughly constant DOS, and thus dominate phonon attenuation. Furthermore, this structure of DOS coupled with the above mentioned bi-modality in the typical strengths of the TLSs interaction with the strain necessarily leads to qualitative and quantitative universality in phonon attenuation. We thus expect Ar:N$_2$:CO to show all the universal properties known in glasses.
However, since both $\gamma_{\rm f}$ and $\gamma_{\rm r}$ are untypically small for CO excitations in Ar:N$_2$:CO, we expect universality in this system to be present only at temperatures smaller than $\approx 0.1$ K.
The situation in Ar:N$_2$, in the absence of CO impurities, is very different,
since no single impurity flips exist.
Furthermore, we have shown here that no other excitation can play the role of such
flips in having a small typical interaction with the strain and small typical
bias energies in comparison to
all other excitations in this system; we find no off-center displacements of Ar
atoms or N$_2$ molecules; we then show explicitly that N$_2$ rotations and pair
excitations have strain interactions of typical value $\approx \gamma_{\rm r}$
and typical bias energies $\approx 10$ meV. This is true also for pair excitations
possessing local inversion symmetry, because the
proximity of each of the pair molecules (or Ar atom) to other N$_2$ molecules
in the solid yields large typical values for TLS-strain interactions and TLS
bias energies. Although we have not checked all types of excitations,
including those
involving larger numbers of impurities, we cannot foresee a scenario in which
any such type of excitation would give rise to a systematically weak interaction with
the strain. Indeed, although excess low temperature specific
heat is found in Ar:N$_2$, resulting from an abundance of low energy excitations, its
thermal conductivity was found to have a very different temperature dependence from that
typical for glasses at low temperatures\cite{YNH89}.
We therefore argue that Ar:N$_2$ is a non-universal glass, the first among
strongly disordered systems having tunneling states, whereas the apparent
similarity between the values of the specific heat in Ar:N$_2$ and Ar:N$_2$:CO\cite{NYHC87}
is limited to the relatively high temperatures, in comparison to the energy scale
of the bias energies of the CO flips, studied experimentally.
Our results here are tightly connected to the discussion regarding the
necessary and sufficient conditions to observe universality in disordered
lattices. In addition to strong strain disorder and tunneling
TLSs\cite{Wat95,TTP99}, the existence of a distinct class of TLSs weakly
interacting with the strain is required.
This is in line with the two-TLS model, showing that such a class of low energy
excitations is an outcome of the existence of inversion symmetric TLSs. Indeed,
we find that Ar:N$_2$, which does not show universal properties at low
temperatures, does not possess local inversion symmetric flip excitations, and
does not have a distinct class of weakly interacting TLSs. At the same time, we
refute the notion that CO flips in Ar:N$_2$:CO do not contribute to the low
energy properties of this system. Thereby we refute the central criticism of
the sufficiency of the above conditions,
including the presence of
inversion symmetric (flip) excitations, for the appearance of universality at
low temperatures.
Furthermore, our results suggest that the rather universal temperature
of $\approx 3$ K below which universality is observed is related also to the
fact that typical interactions between near neighbor impurities are rather similar in different
solids, Ar:N$_2$:CO being a marked exception.
Our conclusions here are based on the two-TLS model, and its proven
applicability to disordered lattices showing universality. Were these
conclusions to apply also to amorphous solids, it would suggest the generic
existence in amorphous solids of local excitations with markedly weak
interaction with the phonon field and small bias energies. It would
therefore be of much interest to check the applicability of the two-TLS model
to amorphous solids.
{\it Acknowledgments.---}
The present work has been funded by the EU (Project ELFOS and ERC Advanced
Grant SPINMOL), the Spanish MINECO (grant MAT2011-22785, the CONSOLIDER
project on Molecular Nanoscience), the Generalitat Valenciana (Prometeo
and ISIC Programmes of excellence) and the Israel Science Foundation (Grant No.
982/10). A.G.A. acknowledges funding by the MINECO (Ram\'on y Cajal Program).
\section{Introduction}
Anderson localization (AL), i.e. inhibition of transport, is one of the most famous phenomena in disordered systems \cite{Anderson1958}. It was successfully observed in many experiments, including quantum systems \cite{Roati2008, Billy2008, Chabe2008, Jendrzejewski2012, Manai2015, Semeghini2015}, light \cite{Chabanov2000, Schwartz2007}, and sound waves \cite{Hu2008} among many others. Despite the many years since the first work on AL, a new phenomenon has recently been discovered that is a direct manifestation of localization: the quantum boomerang effect (QBE) \cite{Prat2019}. The new phenomenon involves the dynamics of wave packets with non-zero initial velocity evolving in Anderson localized systems.
In an Anderson localized system, as shown in \cite{Prat2019}, the center of mass (CM) of a quantum wave packet with an initial velocity, on average, returns to its initial position. This behavior is very different from that observed for the classical counterpart: a classical particle will randomize its velocity and, on average, localize after traveling a finite distance (a transport mean free path). The QBE is a genuine quantum phenomenon occurring in one and higher-dimensional Anderson localized systems \cite{Prat2019}, as well as generalized systems including {the} kicked rotor \cite{Tessieri2021}, systems without time-reversal symmetry \cite{Janarek2022}, and non-Hermitian systems \cite{Macri22}. Recently, the QBE was observed in a quantum kicked rotor experiment \cite{Sajjad2021}, where the U-turn of the average momentum was reported.
Consider an Anderson localized one-dimensional system with the Hamiltonian $H = p^2/2m + V(x)$, where $V(x)$ is a disordered potential \cite{Prat2019}. We define the average CM as $\langle x (t)\rangle = \int x\overline{|\psi(x,t)|^2}\diff{x}$, where $\overline{(\ldots)}$ denotes averaging over disorder realizations. The full quantum boomerang effect occurs if and only if the CM of a wave packet with a non-zero initial velocity (e.g. $\psi_0(x) = \mathcal{N}\exp(-x^2/2\sigma^2 + ik_0 x)$) returns to its initial position at long times, $\langle x(t\to\infty)\rangle = 0$.
When interactions are present in the system, this behavior changes, as interactions tend to weaken localization phenomena. In effect, full localization may be replaced by subdiffusive evolution at long times \cite{Pikovsky2008, Skokos2009, Flach2009, Cherroret2014a}. In a previous study \cite{Janarek2020a} we have shown that interactions, treated within the mean-field approach using the Gross-Pitaevskii equation (GPE) \cite{Pitaevskii2016}, lead to a partial destruction of the QBE. After the initial evolution, typical for the full QBE, the CM performs a U-turn but does not fully return to its origin, saturating at some finite value. This final CM position depends on the interaction strength via the interaction energy. Moreover, it was shown that the destruction of the QBE may be described using a single characteristic time scale, dubbed the \emph{break time},
beyond which the QBE is destroyed by interactions \cite{Janarek2020a}.
In the present work, we investigate many-body interactions between particles using quasi-exact numerical methods.
In the first part, we analyze weakly interacting bosons with contact interactions and compare their dynamics to the mean-field approximation results. The many-body interactions lead to a stronger destruction of the QBE. However, it is shown that the effective break time analysis is still valid in the many-body system. In the second part, we show the full QBE for the Tonks-Girardeau gas, where we also demonstrate full localization of the final particle density.
In the last part, we study strongly interacting bosons, which map to weakly interacting fermions with momentum-dependent interactions. Similarly to weakly interacting bosons, we observe only a partial QBE.
We show that, also in this case, the destruction of the QBE can be captured using similar methods. The presented results reveal that the destruction of the full QBE does not depend on the details of the interaction between the particles.
The paper is organized as follows. Section~\ref{sec:model} introduces the model and explains the method used for numerical simulations of the system. It also presents the main parameters of the system. In Sec.~\ref{sec:bosons}, we study the case of weakly interacting bosons, where we also present a comparison with the mean-field model. Section~\ref{sec:tg} presents the observation of the QBE for the Tonks-Girardeau gas, while in Sec.~\ref{sec:fermions} we study strongly interacting bosons and analyze the results from the perspective of weakly interacting fermions. Finally, we conclude the paper in Sec.~\ref{sec:conclusions}.
\section{The model}\label{sec:model}
We study a one-dimensional many-body bosonic Hamiltonian:
\begin{equation}
\label{eq:many_body_hamiltonian}
\hat{H} =\! \int \!\hat{\Psi}^\dagger(x)\!\left( - \frac{\hbar^2}{2m}\Delta + V(x) + \frac{U}{2}\hat{\Psi}^\dagger(x) \hat{\Psi}(x) \right)\! \hat{\Psi}(x)\diff{x},
\end{equation}
where $m$ is the particle mass, $V(x)$ represents the disordered potential, and $U$ is the strength of the two-body contact potential. In our work, we adopt the method introduced in \cite{Schmidt2007} and map the continuous Hamiltonian (\ref{eq:many_body_hamiltonian}) to a discrete model on an equidistant grid with $L$ lattice sites, where the position is given by $x_j = {j \Delta x}$, $j\in \mathbb{Z}$, and $\Delta x$ is the grid spacing. We start by expanding the field operators in the basis of bosonic annihilation operators $\hat{a}_j$ and single-particle wave functions $\psi_j(x)$: step functions localized at position $x_j$. The field-operator decomposition is given by:
\begin{equation}
\hat{\Psi}(x) = \sum_{j=1}^L \psi_j(x)\hat{a}_j.
\end{equation}
The derivative in Hamiltonian~(\ref{eq:many_body_hamiltonian}) is expressed as the three-point stencil, $\partial^2_x\hat{\Psi}(x_j) \to (\hat{\Psi}(x_{j-1}) - 2 \hat{\Psi}(x_j) + \hat{\Psi}(x_{j+1}))/\Delta x^2$. The resulting Hamiltonian has the form of a disordered Bose-Hubbard Hamiltonian:
\begin{equation}
\label{eq:bose_hubbard}
\hat{H} = -J_0\sum_{j=1}^{L-1}\left(\hat{a}_j^\dagger \hat{a}_{j+1} + \text{h.c.}\right) + \sum_{j=1}^L V_j \hat{n}_j + \frac{U_0}{2}\sum_{j=1}^L \hat{n}_j(\hat{n}_j-1),
\end{equation}
where the parameters are directly connected with the lattice spacing $\Delta x$ and the parameters of Hamiltonian (\ref{eq:many_body_hamiltonian}): $J_0 = \hbar^2/(2m\Delta x^2)$, $U_0 = U/\Delta x$, and $V_j = V(x_j)$. Thanks to this discretization technique, we are able to study the many-body QBE in a continuous space with techniques developed for lattice models.
In our work, we are almost exclusively interested in the temporal dynamics of the system. To compute the time evolution under the Bose-Hubbard Hamiltonian (\ref{eq:bose_hubbard}), we use a homemade implementation of the time-evolving block decimation (TEBD) algorithm \cite{Vidal2003, Vidal2004} based on matrix product states (MPS). At each time step, the many-body state is expressed in terms of matrices $\Gamma^{i_l}$ and vectors $\lambda^{[l]}$:
\begin{equation}\label{eq:psi_mps}
\ket{\Psi} = \sum_{\substack{\alpha_1, \ldots,\alpha_{L-1}\\ i_1,\ldots,i_L}} \Gamma^{ i_1}_{1,\alpha_1} \lambda^{[1]}_{\alpha_1}\Gamma^{i_2}_{\alpha_1,\alpha_2}\ldots\Gamma^{i_L}_{\alpha_{L-1},1} \ket{i_1,i_2,\ldots,i_L},
\end{equation}
where matrices $\Gamma^{i_l}$ describe the $l$-th site and vectors $\lambda^{[l]}$ describe bonds between the $l$ and $l+1$ sites. The indices $i_l$ run from 0 to the maximal occupation $i_\text{max}$; the indices $\alpha_l$ run from 1 to $\chi$ (the so-called bond dimension).
The QBE is a phenomenon displayed by wave packets having nonzero initial velocities. For a single-particle wave function, a nonzero initial velocity translates into a phase factor $e^{ik_0x}$ in the wave function, where $k_0$ is related to the velocity by $v_0=\hbar k_0/m$. This means that it is possible to \emph{kick} a wave packet by multiplying its wave function by a proper phase factor.
When the state is in the MPS form, the procedure is quite different: the kick acts on the state in configuration space, whereas the MPS is represented in a space which is a mixture of configuration space and Fock basis. Additionally, we want all of the particles to have the same initial velocity. The total phase imprinting the initial velocity should include factors for all particles:
$\prod_{n=1}^N\exp(i k_0 x_n) = \exp\left(ik_0\sum_{n} x_n\right),$
where $n$ numbers the particles and $N$ is the total number of particles. The sum inside the exponent may be rewritten using particle occupations at each site, i.e.
$\sum_{n} x_n = \sum_{l} i_l x_l.$
This allows us to use the MPS representation: to kick the MPS, we modify the matrices $\Gamma^{i_l}$: $\Gamma^{i_l} \to \Gamma^{i_l}\cdot e^{i k_0 i_l l \Delta x}.$ The vectors $\lambda^{[l]}$ are not changed because the kick does not change the properties of the MPS links. The kick preserves the MPS standard form~(\ref{eq:psi_mps}) of the many-body state.
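For illustration, the kick can be sketched in a few lines (a minimal sketch of our own, not the production code; we assume each $\Gamma$ tensor is stored as a numpy array with index order (left bond, physical occupation $i_l$, right bond)):

```python
import numpy as np

# Minimal sketch of the velocity kick on a Vidal-form MPS (assumed tensor
# layout: Gamma[alpha_left, i_l, alpha_right]). Site l at position x_l = l*dx
# acquires the phase exp(i k0 i_l x_l); the lambda vectors are untouched, so
# the MPS standard form is preserved.
def kick_mps(gammas, k0, dx):
    kicked = []
    for l, g in enumerate(gammas):
        occ = np.arange(g.shape[1])              # occupations i_l = 0 .. i_max
        phases = np.exp(1j * k0 * occ * l * dx)  # site-local diagonal phase
        kicked.append(g * phases[None, :, None])
    return kicked
```

Since the phase is diagonal in the occupation basis and acts on a single site, it changes neither the bond dimension nor the canonical form.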
The discretization method was successfully used in a study of a disordered many-body system, where Anderson localization of solitons was observed \cite{Delande2013}. To observe the QBE, the system has to be Anderson localized \cite{Prat2019}. In our work, we use Gaussian uncorrelated disorder:
\begin{equation}
\overline{V(x)} = 0,\quad \overline{V(x)V(x')} = \gamma\delta(x-x'),
\end{equation}
where $\overline{(\ldots)}$ denotes averaging over disorder realizations and $\gamma$ is the disorder strength. As explained below, the kicked wave packet has an energy close to $\hbar^2k_0^2/2m$. Using the Born approximation, it is easy to compute the values of the mean free time and mean free path at this energy:
\begin{equation}\label{eq:born_tau_ell}
\tau_0 = \frac{\hbar^3 k_0}{2m\gamma},\quad \ell_0 = \frac{\hbar^4 k_0^2}{2m^2\gamma}.
\end{equation}
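A numerical sketch of the disorder and of these scales (our own helper, in units $\hbar=m=1$): on a grid, the delta correlation translates into independent Gaussian site values of variance $\gamma/\Delta x$, so that $\overline{V_jV_{j'}}\,\Delta x \to \gamma\,\delta(x-x')$ in the continuum limit.

```python
import numpy as np

# Sketch (units hbar = m = 1): delta-correlated Gaussian disorder sampled on a
# grid; per-site variance gamma/dx reproduces gamma*delta(x-x') as dx -> 0.
def sample_disorder(n_sites, dx, gamma, rng=None):
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(gamma / dx), size=n_sites)

def born_scales(k0, gamma):
    """Mean free time and mean free path in the Born approximation."""
    tau0 = k0 / (2.0 * gamma)      # hbar^3 k0 / (2 m gamma) with hbar = m = 1
    ell0 = k0**2 / (2.0 * gamma)   # hbar^4 k0^2 / (2 m^2 gamma)
    return tau0, ell0
```

For the disorder strength $\gamma = 0.1$ used below, `born_scales(1.0, 0.1)` gives $k_0\ell_0 = 5$.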
Finally, to observe the QBE, we study the center of mass time evolution: it is evaluated using the average particle density $\overline{n(x,t)}$:
\begin{equation}
\langle x(t)\rangle = \sum_l x_l \overline{n(x_l,t)}.
\end{equation}
The particle density can be fairly easily computed from the MPS representation: the occupation $n(x_l)$ on the $l$-th site depends only on $\lambda^{[l-1]}$, $\lambda^{[l]}$, and $\Gamma^{i_l}$, which are known at each time step.
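Once the site-resolved densities are extracted, the CM reduces to a weighted average (a trivial sketch; we normalize by the particle number so that $\langle x\rangle$ is a position even when the density sums to $N$):

```python
import numpy as np

# Sketch: center of mass from the site-resolved average density.
def center_of_mass(x_sites, density):
    density = np.asarray(density, dtype=float)
    return float(np.sum(np.asarray(x_sites) * density) / np.sum(density))
```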
\section{Weakly interacting bosons}\label{sec:bosons}
\begin{figure}
\centering
\includegraphics{tebd_vs_gpe.pdf}
\caption{Comparison of results obtained in the many-body simulations (solid lines with error bars), the mean-field simulations (orange dashed lines with tiny error bars, not shown), and the single-particle theoretical prediction (green dotted line). Panels correspond to (a) $U = 0.1$, (b) $U=0.15$, and (c) $U=0.2$. While for the lowest interaction strength the curves agree (panel (a)), with increasing interactions the many-body result saturates at a significantly higher value.
Mean-field simulations are averaged over $10^5$ disorder realizations.
}
\label{fig:tebd_vs_gpe}
\end{figure}
Let us commence our study with the weakly interacting bosons case.
This will allow us to compare our ``quasi-exact'' simulations with results obtained within the mean-field approximation, which revealed only a partial QBE in the presence of interactions.
The initial state of the system in the mean-field study was a Gaussian wave packet. To mimic this scenario, we prepare the initial state as the ground state of $N$ non-interacting particles in a harmonic trap. Application of an imaginary-time evolution using the TEBD algorithm allows us to prepare the initial state in the MPS form. The frequency of the trap is chosen to match the desired particle density width $\sigma$. Then, the kick with initial momentum $k_0$ is applied to the initial MPS as explained above. The initial particle density is a Gaussian with width $\sigma$: $n_0(x)=N/\sqrt{\sigma^2 \pi}e^{-x^2/\sigma^2}$. In numerical simulations, we use $\sigma=10/k_0 = 2\ell_0$. Because $k_0\sigma\gg 1$, the wave packet is quasi-monochromatic, with an energy distribution sharply peaked near the energy $\hbar^2k_0^2/2m$.
In the numerical simulations, we use $1/k_0$ as the unit of length. The system size is $L_\text{size} = 400/k_0$, divided into $L=2000$ lattice sites, meaning that $\Delta x = 0.2/k_0$. We use $N=5$ particles in our simulations. Unfortunately, higher numbers of particles would demand too large computer resources. The disorder strength is chosen to be $\gamma = 0.1 \hbar^4 k_0^3/m^2$, meaning $k_0\ell_0 = 5$, so we can assume a weak-disorder case. The maximal time of the simulations was chosen to be $t_\text{max} = 60\tau_0$. For each interaction strength we used 500 disorder realizations (unless otherwise stated in the figure captions).
For comparison, we simultaneously present the full many-body results together with the results obtained within the mean-field approach, using the Gross-Pitaevskii equation \cite{Pethick2008,Pitaevskii2016}:
\begin{equation}\label{eq:gpe}
\begin{split}
&i\hbar\partial_t \psi(x,t) =\\
&\left(-\frac{\hbar^2}{2m}\Delta + V(x) + U (N-1)|\psi(x,t)|^2\right)\psi(x,t).
\end{split}
\end{equation}
Since we consider a very small number of particles, the usual factor $g$ multiplying the density part is taken in its exact form $g=U(N-1)$.
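A standard way to integrate Eq.~(\ref{eq:gpe}) numerically is a split-step Fourier scheme; the following is a minimal sketch in units $\hbar=m=1$ (this is our illustrative choice, not necessarily the scheme of \cite{Janarek2020a}):

```python
import numpy as np

# Minimal split-step Fourier step for the 1D GPE (units hbar = m = 1):
# half kinetic step in k-space, full potential + nonlinear step in real space,
# half kinetic step again. g = U * (N - 1) as in the text.
def gpe_step(psi, V, g, dx, dt):
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    half_kin = np.exp(-0.25j * dt * k**2)        # exp(-i (dt/2) k^2 / 2)
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    psi = psi * np.exp(-1j * dt * (V + g * np.abs(psi)**2))
    return np.fft.ifft(half_kin * np.fft.fft(psi))
```

Each factor is unitary, so the norm of $\psi$ is conserved up to round-off, which is a convenient sanity check of any implementation.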
In Fig.~\ref{fig:tebd_vs_gpe}, we show comparisons of the results for the many-body and mean-field systems for different interaction strengths. To account for differences between the exact scattering mean free time $\tau$ (scattering mean free path $\ell$) and the mean free time $\tau_0$ (mean free path $\ell_0$), Eq.~(\ref{eq:born_tau_ell}), we fit the theoretical prediction for the CM time dependence \cite{Prat2019} to results obtained for non-interacting particles. This yields $\tau = 0.94\tau_0$ and $\ell = 1.07\ell_0$, in full agreement with the expected corrections to the Born approximation, which are of order $1/k_0\ell_0$ \cite{Akkermans2007book}.
Unsurprisingly, we observe that, for nonzero interactions, the boomerang effect is only partial. After the initial ballistic-like motion and the U-turn, typical for the boomerang effect, the CM does not return to the origin but saturates at some finite, interaction strength dependent, position. This closely resembles the behavior observed in the mean-field study \cite{Janarek2020a}.
On the one hand, for the lowest presented value of the interaction strength, $U=0.1$, the many-body and the mean-field solutions agree (within error bars). On the other hand, when the interaction strength is higher, for example in panels (b) and (c) of Fig.~\ref{fig:tebd_vs_gpe}, the curves separate and the many-body CM $\langle x(t)\rangle$ saturates significantly higher than the mean-field one.
The interactions present in the system may be understood as a source of dephasing which destroys Anderson localization, hence the boomerang. From this perspective, it is natural that, when we treat the interactions without approximation, their impact should be larger, destroying the QBE more efficiently. However, the simulations include only a few particles; we expect on general grounds that, in the limit of a large number of particles, the difference between the full quantum and mean-field results vanishes.
To further study this difference, we also analyze the one-body reduced density matrix $\rho(x,x') = \langle \hat{\Psi}^\dagger(x')\hat{\Psi}(x)\rangle$, which may be used to analyze correlations in many-body systems, see \cite{Pethick2008}. For this study, we use the interaction strength $U=0.2$ with an increased disorder strength, so that $k_0\ell_0 = 2.5$. The maximal time of the simulation is set to $t_\text{max} = 120 \tau_0$, which should better reflect the long-time limit.
To quantitatively check the condensate fraction in the final density matrix $\rho_f(x,x')$, we compute its eigenvalues. The largest eigenvalue represents the condensate fraction \cite{Penrose1956, Yang1962}. For a non-interacting, fully condensed system, there is only one eigenvalue, $\lambda_0 = N$. When non-zero interactions are present in the system, this is no longer true: the interactions decrease the value of $\lambda_0$, while the sum of all eigenvalues is fixed by the total number of particles, i.e. $\sum_j\lambda_j = N$. The state may be considered a condensate as long as $\lambda_0\sim N$.
In our study, we compute the four largest eigenvalues of the final one-body density matrix. We find their averages to be $\overline{\lambda_0}/N = 0.147\pm0.028$, $\overline{\lambda_1}/N = 0.110\pm0.016$, $\overline{\lambda_2}/N = 0.083\pm0.011$, and $\overline{\lambda_3}/N = 0.064\pm0.007$. The values for single realizations do not differ much from the averages; the distributions of $\lambda_j$ are narrow. This clearly indicates that the final state of the system is very far from a true condensate. The GPE describes only the condensate part of the system, while our state consists mainly of particles outside the condensate. Thus, the full dynamics of the system cannot be described with the GPE. This fact reinforces the conclusion that the difference between the many-body and the mean-field results comes from truly many-body effects.
\subsection{Break time analysis}
\begin{figure}
\centering
\includegraphics{boson_algebraic_fit.pdf}
\caption{Time evolution of the center of mass (solid lines) in the interval $[30\tau, 64\tau]$ where a fitting of the algebraic decay (dashed lines), Eq.~(\ref{eq:fit}), is performed. When the exponent $\alpha=3$ is used, the resulting fits yield very good results. The $U$ values increase from the bottom to the top as indicated in the legend.}
\label{fig:fit_algebraic_decay_bosons}
\end{figure}
The destruction of the QBE in the mean-field approximation was successfully described using the so-called break time \cite{Janarek2020a}. It is the time $t_b$ at which the CM position in the interaction-free case reaches the long-time limit obtained in the presence of interactions:
\begin{equation}\label{eq:break_time_bosons}
\langle x(t_b) \rangle_{U=0} = \langle x\rangle_\infty.
\end{equation}
To examine the break time, it is necessary to compute the infinite-time CM position: $\langle x \rangle_\infty = \langle x (t\to\infty)\rangle$. For the mean-field approximation, the infinite-time CM position was approximated with the long time average:
\begin{equation}\label{eq:long-time_average}
\langle x \rangle_\infty = \frac{1}{t_2-t_1} \int_{t_1}^{t_2} \langle x(t)\rangle \diff{t}.
\end{equation}
This was reasonable for the large maximal times of the numerical simulations, extending up to $2500\tau_0$. In the present study, where $t_\text{max}\approx 64\tau$, such a long-time average is not available. To overcome this problem, we fit an algebraic decay to the data:
\begin{equation}\label{eq:fit}
\langle x(t)\rangle = \langle x\rangle_\infty + \frac{\beta}{t^\alpha},
\end{equation}
where $\langle x \rangle_\infty$ and $\beta$ are fitting parameters. The fit is performed in the time interval $[30\tau, t_\text{max}\!\approx\! 64\tau]$. Knowing that the long-time dependence in the non-interacting case is $\langle x (t) \rangle \approx 64\ell \log(t/4\tau) \tau^2/t^2$ \cite{Prat2019}, we expect that $\alpha=2$ returns $\langle x\rangle_\infty = 0$ there, which is confirmed by our numerical data. For the interacting cases, we find a slightly faster decay; thus we use $\alpha=3$ as the exponent in Eq.~(\ref{eq:fit}). Figure~\ref{fig:fit_algebraic_decay_bosons} shows a comparison of the numerical data with the fitted functions. The fits show very good agreement with the data. It also turns out that the overall fitting results, i.e. the values of $\langle x \rangle_\infty$, depend only slightly on the exponent $\alpha$ in Eq.~(\ref{eq:fit}) and on the fitting time interval.
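Since the exponent $\alpha$ is held fixed, Eq.~(\ref{eq:fit}) is linear in $u=t^{-\alpha}$, so the fit reduces to ordinary least squares (a minimal sketch of our own):

```python
import numpy as np

# Sketch of the tail fit: with alpha fixed, <x(t)> = x_inf + beta * t**(-alpha)
# is linear in u = t**(-alpha); a least-squares line in u gives x_inf
# (intercept) and beta (slope).
def fit_tail(t, x, alpha=3.0):
    u = np.asarray(t, dtype=float) ** (-alpha)
    beta, x_inf = np.polyfit(u, np.asarray(x, dtype=float), 1)
    return x_inf, beta
```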
\begin{figure}
\centering
\includegraphics{one_over_tb_vs_u_bosons.pdf}
\caption{Inverse of the break time $t_b$ computed for the many-body simulations (blue points with error bars, computed from the fits of the algebraic decay, Eq.~(\ref{eq:fit})) and the mean-field simulations (orange triangles, calculated using long-time averaging, Eq.~(\ref{eq:long-time_average})) versus the interaction strength $U$. Dashed lines present the best linear fits $\tau/t_b=aU$, with slope coefficients $a_\text{many-body}=0.22$ and $a_\text{mean-field}=0.076$. The mean-field data is clearly linear, as expected. For the many-body results, with a small deviation of the point for $U=0.05$, the points strongly suggest a linear dependence. The error bars represent the uncertainty on the break time based on the error bars for the final center of mass position value.}
\label{fig:one_over_tb_vs_v_bosons}
\end{figure}
Having the estimate for $\langle x\rangle_\infty$, we
can find the break time with the help of Eq.~\eqref{eq:break_time_bosons}.
Fitting errors on the infinite time CM position $\langle x \rangle_\infty$ allows us to calculate the error bars on the break times for various interaction strengths, $U$.
In analogy with the mean-field study \cite{Janarek2020a}, we expect the inverse of $t_b$ to be proportional to $U$, a measure of the interaction energy in the system.
The dependence of $1/t_b$ versus $U$ is shown in Fig.~\ref{fig:one_over_tb_vs_v_bosons}, where we present results for the many-body and the mean-field simulations (where $\langle x\rangle_\infty$ is computed from the long-time average, Eq.~(\ref{eq:long-time_average})). While for the mean-field results the dependence is obviously linear, the many-body results also suggest a linear behavior, with a small deviation of the point at $U=0.05$. This point, the lowest value of the interaction strength, requires the longest evolution time to saturate around the true $\langle x \rangle_\infty$ value. The corresponding infinite-time CM position may be overestimated, in turn underestimating the break time. Conversely, for stronger interactions, the linearity is better. This is also related to the fact that the final infinite-time CM positions are higher; hence, a shorter time evolution is sufficient to compute the break time.
The fact that the break time is much shorter for the full many-body calculation than in the mean-field approximation, emphasizes the importance of quantum fluctuations. This is also supported by the analysis of the average one-body density matrix presented above.
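For concreteness, the extraction of $t_b$ from Eq.~(\ref{eq:break_time_bosons}) can be sketched by inverting the non-interacting long-time law quoted above (our own helper; the default bracket assumes times measured in units where $\tau\sim1$, and the asymptotics holds only for $t\gg\tau$, where the curve is monotonically decreasing):

```python
import numpy as np

# Sketch: break time t_b solving <x(t_b)>_{U=0} = <x>_inf, using the long-time
# asymptotics <x(t)>_{U=0} ~ 64 ell log(t/4 tau) tau^2 / t^2, which decreases
# monotonically for t > 4 e^{1/2} tau; solved by bisection in log(t).
def break_time(x_inf, ell, tau, t_lo=10.0, t_hi=1.0e6):
    x_free = lambda t: 64.0 * ell * np.log(t / (4.0 * tau)) * tau**2 / t**2
    for _ in range(200):
        t_mid = np.sqrt(t_lo * t_hi)   # geometric midpoint for a wide bracket
        if x_free(t_mid) > x_inf:
            t_lo = t_mid               # curve still above the saturation value
        else:
            t_hi = t_mid
    return t_mid
```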
Before moving to the next part of our study, we should comment on the many-body localization (MBL) phenomenon, which may be present in many-body interacting disordered systems (for reviews see \cite{Alet2018, Abanin2019}). Although we study a disordered many-body system, we are not in the MBL regime. Typically, MBL is studied in systems with much higher interaction strengths (for examples of bosonic systems, see \cite{Sierant2017, Sierant2018}) than considered in our work, where $U_0/J_0\ll 1$ (translating $\Delta x$ and $U$ to Bose-Hubbard model parameters). The other important factor is the density of particles: the average filling in our system, taking into account only the sites occupied by the initial density profile, is very low, $n\approx0.1$. Together with the small number of particles considered, this does not allow for a comparison with other studies of interacting bosons on a lattice. Finally, the disorder strength used in our study corresponds to Anderson localization in the weak-disorder regime, which should not be sufficient to induce many-body localization effects.
\section{Strongly interacting bosons: the Tonks-Girardeau limit}\label{sec:tg}
Let us now consider an entirely different situation: the case of very strong interactions.
A one-dimensional system of bosons with repulsive contact interactions may be described by the Lieb-Liniger model \cite{Lieb1963}:
\begin{equation}\label{eq:lieb_liniger}
H = \sum_{j=1}^N \left( -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_j^2} + V(x_j)\right) + U\sum_{1\leq j < k\leq N} \delta(x_j - x_k),
\end{equation}
where $U>0$ is the coupling constant and $m$ denotes the atom mass. The model is frequently characterized by the dimensionless parameter $\zeta = mU/\hbar^2n$, where $n=N/L$ is the average density of bosons and $L$ is the system length. When
$\zeta=0$, the model corresponds to free bosons while $\zeta\to\infty$ is called the Tonks-Girardeau limit.
The Tonks-Girardeau (TG) gas describes impenetrable (or \emph{hard-core}) bosons, which can be mapped to non-interacting spinless (spin-polarized) fermions \cite{Girardeau1960, Girardeau2000}. The model can be solved
exactly in the free case $V=0$ (for details see \cite{Cazalilla2011}). Reference~\cite{Olshanii1998} showed that the Tonks-Girardeau gas can be obtained in cold atom experiments, and the experimental observations of hard-core Rubidium bosons were reported shortly after in \cite{Paredes2004,Kinoshita2004}.
Even though the TG gas is highly correlated, Anderson localization is not destroyed by the interactions: since TG particles map to non-interacting fermions, which are fully localized in a one-dimensional system, Anderson localization is present. Anderson localization of the TG gas was discussed in \cite{Radic2010}.
In an Anderson localized system, we expect to observe the full quantum boomerang effect for particles with a non-zero initial velocity. We perform numerical simulations to study the center of mass temporal evolution, using the same methods as in the case of weakly interacting bosons. In order to simulate the TG gas, we use a trick: the MPS representation has a parameter $i_\text{max}$, the maximum number of bosons on a given site (the local Hilbert space thus has dimension $i_\text{max}+1$). For $N$ bosons, it is natural that $i_\text{max}\approx N$, which allows the MPS to faithfully represent states with many particles at one site. This parameter can also be used the other way around: we restrict the number of particles occupying one site by setting $i_\text{max}=1$, effectively realizing the impenetrability of the Tonks particles. Note that the local Hilbert space then has dimension 2, explaining why it can be mapped onto spinless fermions, where the local Hilbert space is spanned by states with 0 or 1 fermion.
On the numerical side, the simulations are performed similarly to the weakly interacting bosons case. The main difference is that, for the Tonks-Girardeau gas, we enlarged the discretization constant, so that $\Delta x = 1/k_0$. By using a larger $\Delta x$, we can decrease the number of lattice sites in the simulations to $L=500$ and reduce the CPU time. The main effect of a larger $\Delta x$ is its influence on the dispersion relation. As we show below, apart from the change of velocity due to the imperfect discretization, the quantum return to the origin can still be analyzed.
As opposed to the above studies of the QBE, here we cannot use a Gaussian wave packet as the initial state of the system: it is very different from the ground state of TG particles in a harmonic trap. Nevertheless, the ground state of TG particles in a trap can be computed. Due to the mapping to fermions, it can easily be found in the absence of the disordered potential. Fermions cannot occupy the same eigenstate of the system; hence the state with the lowest energy has the following structure in the Fock basis (ordered by increasing energy): $\ket{\text{GS}} = \ket{11\ldots10\ldots}$, with $N$ particles occupying the $N$ single-particle states with the lowest energy. Then, the particle density can be calculated in a straightforward way:
\begin{equation}\label{eq:tg_groundstate}
n_\text{TG}(x) = \sum_{n=0}^{N-1}|\psi_n(x)|^2,
\end{equation}
where $\psi_n(x)$ denotes a single-particle eigenstate of the trap. This density is much broader than the single-particle ground state of the harmonic oscillator. On the numerical side, the initial state is prepared using the imaginary-time evolution in the presence of interactions but in the absence of disorder, followed by a velocity kick, similarly to the weakly interacting bosons case.
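Equation~(\ref{eq:tg_groundstate}) is straightforward to evaluate for a harmonic trap (a minimal sketch in units $\hbar=m=1$, trap frequency $w$, using the physicists' Hermite polynomials $H_n$):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Sketch of Eq. (tg_groundstate): the TG ground-state density in a harmonic
# trap is the sum of the N lowest orbital densities, with
# psi_n(x) = (w/pi)^(1/4) H_n(sqrt(w) x) exp(-w x^2 / 2) / sqrt(2^n n!).
def tg_density(x, N, w):
    xi = np.sqrt(w) * np.asarray(x, dtype=float)
    dens = np.zeros_like(xi)
    for n in range(N):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                       # selects H_n in hermval
        psi_n = ((w / np.pi) ** 0.25
                 / math.sqrt(2.0**n * math.factorial(n))
                 * hermval(xi, coef) * np.exp(-xi**2 / 2.0))
        dens += psi_n**2
    return dens
```

By construction the density integrates to $N$, which provides a simple check of the normalization.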
\begin{figure}
\centering
\includegraphics{tg_boomerang.pdf}
\caption{Time evolution of the center of mass $\langle x \rangle$ for the Tonks-Girardeau gas (solid blue line with error bars) compared with the single-particle theoretical prediction (orange dashed line) \cite{Prat2019}. The result is fitted using the theoretical boomerang prediction to adjust the mean scattering time $\tau$ and length $\ell$. The numerical data agree perfectly with the theoretical curve. The results have been averaged over 10000 disorder realizations. Error bars represent statistical average uncertainties.}
\label{fig:tg_boomerang}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{tg_density.pdf}
\caption{Initial (green solid line) and final density profile for kicked hard-core bosons (blue solid line with error bars), compared with the theoretical initial particle density (red dotted line), Eq.~(\ref{eq:tg_groundstate}), and the Tonks-Girardeau-Gogolin profile (orange dashed line), Eq.~(\ref{eq:tgg_profile}). The numerical data for the initial and final times agree fully with the initial density and with the theoretical Tonks-Girardeau-Gogolin profile, respectively. The inset shows the theoretical and numerical final profiles to demonstrate agreement even in the exponentially decaying tails. }
\label{fig:tg_final_density}
\end{figure}
Figure~\ref{fig:tg_boomerang} presents the time evolution of the CM $\langle x(t)\rangle$ for the TG gas.
It faithfully follows the single-particle QBE. To show the agreement between the numerical data and the theoretical prediction \cite{Prat2019}, we perform a fitting procedure which accounts for the difference between the exact mean free time $\tau$ (mean free path $\ell$) and the mean free time $\tau_0$ (mean free path $\ell_0$) computed using the Born approximation, Eq.~(\ref{eq:born_tau_ell}).
After such a fit, the agreement between the TG gas and the theoretical prediction is excellent. The disorder strength used should result in $k_0\ell_0 = 5$. The fitted exact values of the mean free time and path are $\tau=0.97\tau_0$ and $\ell=0.9\ell_0$,
and are thus consistent with the Born approximation.
There is, however, a slight caveat. The particles of the TG gas have slightly different energies, because they correspond to different eigenstates of the harmonic potential. This means that each particle has a different mean free time; hence $\langle x (t)\rangle$ should be a superposition of boomerang curves with different $\tau$.
The energy of the $n$-th eigenstate of the harmonic potential is $(n + \frac{1}{2})\hbar\omega$, where $\omega$ is the frequency of the harmonic oscillator. In our analysis we use kicked states, and the kick adds $\hbar^2k_0^2/2m$ to the total energy. If $\hbar^2k_0^2/2m \gg (n + \frac{1}{2})\hbar\omega$, we may assume that all states have roughly the same scattering mean free time and path. This is the case in our simulations, where $\omega=0.01$.
The small dispersion of energies does not influence the final $\langle x(t)\rangle$, and we observe the universal boomerang curve.
We also study the final particle density. It is symmetric and has exponentially decaying tails. Although \cite{Radic2010} used a slightly different initial state (the ground state of the trap including the disorder), a similar behavior of the tails was reported in their simulations. After our observation that the boomerang effect is described by the single-particle theoretical result, we construct an infinite-time density profile based on the (single-particle) Gogolin profile \cite{Gogolin1976}:
\begin{equation}\label{eq:gogolin}
\begin{split}
&\overline{|\psi^\text{Gogolin}_\ell(x,t=\infty)|^2} = \\
&\int_0^\infty \frac{\diff{\eta} \pi^2}{32\ell} \frac{\eta(1+\eta^2)^2 \sinh(\pi\eta)e^{-(1+\eta^2)|x|/8\ell}}{(1+\cosh(\pi\eta))^2},
\end{split}
\end{equation}
which depends on the mean free path $\ell$.
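For a numerical sanity check (our own illustration, not taken from the simulations above), the Gogolin profile of Eq.~(\ref{eq:gogolin}) can be evaluated by direct quadrature; since it is an infinite-time probability density, it must integrate to unity. The value $\ell=1$ is an arbitrary choice, and the $\eta$ integration is truncated at $\eta=50$, beyond which the integrand is negligible.

```python
import numpy as np
from scipy.integrate import quad

def gogolin_density(x, ell):
    """Infinite-time Gogolin density profile, Eq. (eq:gogolin), by quadrature over eta."""
    def integrand(eta):
        return (np.pi**2 / (32 * ell)
                * eta * (1 + eta**2)**2 * np.sinh(np.pi * eta)
                * np.exp(-(1 + eta**2) * abs(x) / (8 * ell))
                / (1 + np.cosh(np.pi * eta))**2)
    # the integrand decays like eta^5 exp(-pi eta); truncate at eta = 50
    value, _ = quad(integrand, 0.0, 50.0)
    return value

# the profile is symmetric in x, so integrate over x >= 0 and double;
# a normalized probability density must integrate to 1
ell = 1.0
norm = 2 * quad(lambda x: gogolin_density(x, ell), 0.0, np.inf)[0]
```

The same routine can be tabulated on a spatial grid for comparison with numerically obtained densities.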
{As explained in \cite{Gogolin1976}, this density profile is the theoretical prediction at infinite time for a single particle initially located at $x=0$ and evolving in the presence of a disordered potential. In our case, t}he final density should be given by the convolution of the Gogolin profile with the initial particle density $n_\text{TG}(x)${, Eq.~(\ref{eq:tg_groundstate})}:
\begin{equation}\label{eq:tgg_profile}
\overline{n_\text{TG-G}(x)} = \int_{-\infty}^{+\infty}\diff{x'} n_\text{TG}(x-x') \overline{|\psi_\ell^\text{Gogolin}(x')|^2}.
\end{equation}
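The convolution of Eq.~(\ref{eq:tgg_profile}) is a plain one-dimensional convolution on a grid. The sketch below uses placeholder densities (a Gaussian standing in for $n_\text{TG}$ and a simple exponential standing in for the Gogolin profile; both are illustrative assumptions, not the actual profiles) and checks that the discrete convolution of two normalized densities is again normalized.

```python
import numpy as np

# grid; units and widths are arbitrary illustrative choices
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

# placeholder densities, each normalized to 1
n_tg = np.exp(-x**2 / 8.0)           # stand-in for the initial TG density
n_tg /= np.trapz(n_tg, x)
g = np.exp(-np.abs(x) / 4.0)         # stand-in for the Gogolin profile
g /= np.trapz(g, x)

# discrete version of the convolution in Eq. (eq:tgg_profile)
n_final = np.convolve(n_tg, g, mode='same') * dx

# the convolution of two normalized densities is again normalized,
# and the symmetric peak stays at x = 0
norm = np.trapz(n_final, x)
```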
In the analysis of the final density profile, we also fit $\overline{n_\text{TG-G}(x)}$ to numerical data. The numerical calculation of the Tonks-Girardeau-Gogolin profile for $x/\ell \gg 1$ is laborious; thus, we fit the profile only around $x=0$, using several points. The value of the fitted mean free path is $\ell_\text{fit}\approx4.025/k_0$. The mean free path extracted from the center of mass time evolution is $\ell=4.5/k_0$. Taking into account the fact that $\ell_0 k_0=5$, so that corrections to the Born approximation may be visible, the agreement between $\ell_\text{fit}$ and $\ell$ is good.
Figure~\ref{fig:tg_final_density} shows both the numerical and fitted final densities as well as the initial density profile. Our crude fitting method nevertheless gives very good results -- the numerical and theoretical infinite-time densities agree {perfectly}. The inset, presenting the densities on a logarithmic scale, shows almost no difference even in the wings, far from the region of the fit.
\section{Strongly interacting bosons: mapping to weakly interacting fermions}\label{sec:fermions}
For an arbitrary interaction strength in Hamiltonian~(\ref{eq:lieb_liniger}), the bosonic model can be mapped to interacting fermions~\cite{Sen1999, Cheon1999, Sen2003}. The interaction is, however, much more complicated: it is mapped to a momentum-dependent attractive interaction~\cite{Grosse2004}. The fermions are governed by the following Hamiltonian:
\begin{equation}\label{eq:fermionic}
H_F = \sum_{j=1}^N \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_j^2} + V_\text{ext}(x_j)\right) + V_F,
\end{equation}
where $V_F$ denotes the fermionic interaction term:
\begin{equation}\label{eq:fermionic_interaction}
V_F = \frac{\hbar^4}{m^2 U} \!\!\sum_{1\leq j<k\leq N} \!\! \left(\partial_{x_j} - \partial_{x_k}\right) \delta(x_j - x_k) \left(\partial_{x_j} - \partial_{x_k}\right).
\end{equation}
The eigenfunctions of the Lieb-Liniger Hamiltonian~(\ref{eq:lieb_liniger}) coincide with the eigenstates of Hamiltonian~(\ref{eq:fermionic}) when particle coordinates $x_j$ are ordered and their sign is changed upon exchange of the particle coordinates. The models have the same eigenspectra. The fermionic interaction strength is proportional to $U^{-1}$, see Eq.~(\ref{eq:fermionic_interaction}). In order to simplify the notation, in the following sections we use $U_F=U^{-1}$ to represent the interaction strength between the fermions.
The mapping can be used to study systems in different potentials including disordered ones, e.g. the fluid-insulator transition for strongly interacting bosons was studied in~\cite{Michal2016}.
Above, we argued that simulations of disordered many-body systems require large amounts of computational resources. To simulate strongly interacting bosons, we allow at most two particles at one site, $i_\text{max}=2$. In the case of weak interactions, such a constraint would change the results and the simulations would not be faithful. On the other hand, when the interactions are strong, the probability of having more than two particles at one site is small, as such configurations are energetically very costly \cite{footnote1}.
Additionally, we keep $\Delta x =1/k_0$ as in the TG gas case. Altogether, this allows us to save computational resources and calculate the {temporal} evolution for longer times than for weakly interacting bosons. Let us stress that here we cannot be guided by the mean-field analysis.
\begin{figure}
\centering
\includegraphics{tg_com_vs_interactions.pdf}
\caption{{Temporal evolution of the center of mass position} for different values of {interaction strength $U$, decreasing from the bottom to the top, in the strong interaction limit}. Similarly to the mean-field case and weakly interacting bosons, the short time evolution is almost unaffected by interactions. At longer times, the center of mass {saturates} at finite values. Error bars indicate statistical errors and are shown only for one curve to indicate their magnitude. Orange dashed curve shows the theoretical {center of mass} {temporal} evolution (cf. Fig~\ref{fig:tg_boomerang}). }
\label{fig:tg_cmp_interactions}
\end{figure}
At the qualitative level, the effect of interactions on the QBE should not depend on their details. For strongly interacting bosons (weakly interacting fermions), we also expect that interactions will weaken Anderson localization. The interactions, which act as an effective dephasing mechanism, lead to the destruction of coherence between scattering paths, and finally to the destruction of the full QBE.
Figure~\ref{fig:tg_cmp_interactions} presents the result of the CM time evolution. Similarly to the non-interacting case, after the initial ballistic evolution, the CM is reflected towards the origin. Analogously to the mean-field and weakly interacting bosonic cases, the destruction of the boomerang effect is visible in the long time regime. For all situations with finite $U$ (nonzero effective interaction between fermions $U_F$), we observe that the return is not complete: the infinite time CM position saturates at some nonzero value. The figure also shows the statistical error bars. Because the number of disorder realizations is small, the errors are relatively large. Nonetheless, the effect of interactions is clearly visible and can be analyzed taking into account the uncertainties. The limited maximal time of evolution does not allow us to study in detail the mean square displacement of the particle density.
The main observation is that the boomerang effect is only partial, even though the effective interactions between fermions are attractive and fairly complicated. There is no qualitative difference between the results of the mean-field approximation, weakly interacting bosons and weakly interacting fermions. Interactions weaken the QBE.
\begin{figure}
\centering
\includegraphics{tg_algebraic_fit.pdf}
\caption{{Temporal} evolution of the {center of mass position for strongly interacting bosons} (solid lines) and fits of the algebraic decay, Eq.~(\ref{eq:fit}) (dashed lines). As indicated in the figure the values of $U$ decrease from bottom to top curves. Similarly to Fig.~\ref{fig:fit_algebraic_decay_bosons}, with the exponent $\alpha=3$, the resulting fits yield satisfactory results.}
\label{fig:fit_algebraic_decay}
\end{figure}
As for weakly interacting bosons, we analyze the final CM position.
We use, as before, the algebraic fit{, Eq.~(\ref{eq:fit})} to extract the infinite time CM position {from data}
in the time interval $[60\tau,120\tau]$.
As for weakly interacting bosons we assume $\alpha=3$ for the fits. Figure~\ref{fig:fit_algebraic_decay} shows a comparison of the numerical data with fitted functions. The data show high correlation between different interaction strengths because we use the same disorder realizations. Also in this case, we have checked that the overall fitting result is almost independent of the exponent value $\alpha$ in Eq.~(\ref{eq:fit}).
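Since the exact form of Eq.~(\ref{eq:fit}) is not reproduced in this excerpt, the sketch below assumes an algebraic decay of the form $\langle x(t)\rangle = \langle x\rangle_\infty + b\,t^{-\alpha}$ with the exponent fixed to $\alpha=3$, and fits it on synthetic data over the window $[60\tau,120\tau]$ used in the text; all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# assumed (hypothetical) form of the algebraic decay, exponent fixed to alpha = 3
def decay(t, x_inf, b):
    return x_inf + b / t**3

tau = 1.0
t = np.linspace(60 * tau, 120 * tau, 200)     # fitting window from the text

# synthetic data: a known asymptote plus small Gaussian noise
rng = np.random.default_rng(0)
x_inf_true, b_true = 0.35, 5.0e4
data = decay(t, x_inf_true, b_true) + 0.005 * rng.standard_normal(t.size)

# two-parameter fit recovers the infinite-time center of mass position
popt, pcov = curve_fit(decay, t, data, p0=[0.0, 1.0])
x_inf_fit = popt[0]
```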
\begin{figure}
\centering
\includegraphics{cmp_vs_u_tg.pdf}
\caption{Final center of mass position $\langle x \rangle_\infty$ versus $U_F=U^{-1}$. The errors for the points result from the fitting of the decay Eq.~(\ref{eq:fit}). Like in the mean-field study, the dependence of the final center of mass position on the effective interaction strength between fermions $U_F=U^{-1}$ is quadratic.}
\label{fig:final_cmp_vs_u}
\end{figure}
In Fig.~\ref{fig:final_cmp_vs_u}, we present the dependence of $\langle x\rangle_\infty$ on the effective interaction strength $U_F$ between fermions. As in the case of the mean-field approximation \cite{Janarek2020a}, for the smallest values of the interaction strength, the dependence seems to be quadratic. This confirms that the observed breakdown of the QBE does not depend on the details of the interactions present in the system.
\begin{figure}
\centering
\includegraphics{one_over_tb_vs_u_tg.pdf}
\caption{Inverse of the break time $t_b$ versus $U_F=U^{-1}$ calculated for the final center of mass position $\langle x \rangle_\infty$ by fitting an algebraic decay, Eq.~(\ref{eq:fit}). The {error bars} are calculated using the {uncertainty on} $\langle x \rangle_\infty$. The data strongly suggest a linear dependence. The dashed line presents the best linear fit $\tau/t_b = 0.28 U_F = 0.28/U$.}
\label{fig:one_over_tb_vs_u}
\end{figure}
Given the results presented in the previous section, we may ask whether the destruction of the boomerang effect for strongly interacting bosons can be effectively described using the break time, a universal parameter used to capture the influence of the interactions.
\subsection{Break time -- boomerang effect}
For the weakly interacting bosons, the use of break time was a natural extension of the mean-field approximation. In the case of strongly interacting bosons (mapping to weakly interacting fermions), this has to be analyzed anew. Figure~\ref{fig:final_cmp_vs_u} shows the approximately quadratic dependence of the $\langle x \rangle_\infty$ on the effective interaction strength between fermions $U_F$.
Figure~\ref{fig:one_over_tb_vs_u} shows the dependence of the inverse of the break time, $1/t_b$, on the effective interaction $U_F=U^{-1}$ suggesting a linear behavior. Similarly to the weakly interacting bosons case, the point for the weakest interactions slightly deviates from the linear dependence. When the QBE is only moderately affected by the interactions, the time evolution has to be very long to extract the exact value of the infinite time CM position. When $\langle x \rangle_\infty$ is overestimated, the corresponding $t_b$ is smaller than the exact value.
The results are very similar to those obtained for the weakly interacting bosons, see Fig.~\ref{fig:one_over_tb_vs_v_bosons}. This means that the underlying mechanism of the destruction of the QBE is independent of the type of interactions. The destruction of the QBE may be fully characterized by a single parameter, the break time $t_b$, proportional to the interaction strength between the particles.
{It is possible to understand semi-quantitatively the $1/U$ dependence of the break time. At infinite $U,$ the dynamics of the system takes place entirely in the sub-space spanned by occupation numbers $i=0$ and $i=1$ on each site of the Bose-Hubbard Hamiltonian, Eq.~(\ref{eq:bose_hubbard}), and one observes full QBE. When $U$ is large, but finite, the state with occupation number $i=2$ also comes into play. However, due to interaction, its energy is larger by $U,$ while the coupling with $i=0,1$ states is typically of the order of $J$. An example is the coupling between states $|0,2\rangle$ and $|1,1\rangle$ on two neighboring sites. The perturbation brought by $i=2$ states is thus expected, at lowest order, to shift the energy levels in the $i=0,1$ subspace proportionally to $J_0^2/U_0.$ In the absence of this shift, the QBE is full. It is thus reasonable to expect that, for finite $U_0=U/\Delta x,$ it will take a time $\hbar/(J_0^2/U_0)$ before the QBE is affected. In other words, we expect the break time to be roughly $U_0/J_0^2$.}
It turns out that the boomerang break times (expressed in units of the scattering mean free time) agree within several percent with {the rough} estimate:
\begin{equation}
\begin{aligned}
&U_0 = 50\quad \frac{U_0}{J_0^2} &=\,\,& 200 \quad &t_b = 133.2\tau,\\
&U_0 = 25\quad \frac{U_0}{J_0^2} &=\,\,& 100 \quad &t_b = 95.6\tau, \\
&U_0 = 20\quad \frac{U_0}{J_0^2} &=\,\,& 80\quad &t_b = 77.2\tau, \\
&U_0 = 15\quad \frac{U_0}{J_0^2} &=\,\,& 60\quad &t_b = 52.1\tau.
\end{aligned}
\end{equation}
As explained above, the break time for the highest interaction strength $U_0$ is, most probably, underestimated.
\subsection{Break time for the entropy of entanglement}
\begin{figure}
\centering
\includegraphics[scale=1.]{ent_vs_time.pdf}
\caption{{Temporal} evolution of the entropy of entanglement (average{d} over all possible bipartitions) for different values of the interaction strength $U$, decreasing from the bottom to the top. $S_0^\infty$ denotes the final value of the entropy in the Tonks-Girardeau gas. }
\label{fig:ent_vs_time}
\end{figure}
In the simulations, we can also observe another interaction-driven phenomenon, which can be characterized by its own time scale. Due to the interactions, we observe a growth of the entropy of entanglement in the system. Does it increase on the same time scale as $t_b$?
Figure~\ref{fig:ent_vs_time} shows the time evolution of the entropy of entanglement computed as an average over {all} possible bipartitions. For the Tonks-Girardeau gas case, apart from the initial growth, the entropy saturates, which is also confirmed by the analysis of the supremum of the entropy over possible bipartitions {(not shown)}. We denote the final value of the entropy for the Tonks-Girardeau gas by $S_0^\infty$. When the interactions are finite ($U_F=U^{-1}\neq0$), the entropy grows further.
\begin{figure}
\centering
\includegraphics[scale=1.0]{break_time_comparison.pdf}
\caption{Entropy based break time $t^S_b$ plotted versus {the} boomerang break time $t_b$. The values of break times are comparable within a factor 2. The dependence is more or less linear; the slight deviation for the point around $t_b\approx130\tau$ probably originates in the overestimation of $\langle x \rangle_\infty$ due to a too short time evolution.}
\label{fig:break_times_comparison}
\end{figure}
We can define a characteristic time scale, called the entropy break time and denoted by $t_b^S$, at which the entropy of the interacting particles exceeds the final value of the Tonks-Girardeau gas entropy $S_0^\infty$. We calculate its value from the following relation:
\begin{equation}
S(t^S_b)(U) = S_0^\infty,
\end{equation}
where for the left-hand side, we use the data for nonzero interactions. Figure~\ref{fig:break_times_comparison} presents a comparison of the boomerang break time and entropy break time. The relation between the break times is approximately linear{:} $t^S_b/t_b\approx 0.5$.
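Operationally, extracting $t_b^S$ amounts to locating the first crossing of the entropy curve with the TG saturation value; a minimal sketch with a toy (hypothetical) entropy curve:

```python
import numpy as np

def entropy_break_time(t, S, S_target):
    """First time at which S(t) reaches S_target, by linear interpolation."""
    above = np.flatnonzero(S >= S_target)
    i = above[0]
    if i == 0:
        return t[0]
    # interpolate between the two bracketing samples
    return t[i - 1] + (S_target - S[i - 1]) * (t[i] - t[i - 1]) / (S[i] - S[i - 1])

# toy entropy curve: a saturating TG-like part plus slow interaction-driven growth
t = np.linspace(0.0, 120.0, 2001)
S0_inf = 1.0                                      # TG saturation value (placeholder)
S = S0_inf * (1.0 - np.exp(-t / 5.0)) + 0.02 * t  # crosses S0_inf near t ~ 8.7

t_b_S = entropy_break_time(t, S, S0_inf)
```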
\section{Conclusions}\label{sec:conclusions}
In this work, we have discussed the effect of interactions on the quantum boomerang effect using {a} quasi-exact many-body approach. On the numerical side, the simulations have been performed using the time evolving block decimation algorithm based on matrix product states. This has allowed us to study the weakly interacting bosons, the Tonks-Girardeau gas, and strongly interacting bosons which can be mapped to weakly interacting fermions.
The first part of our study has shown that the effect of weak interactions between bosons is qualitatively similar to the behavior in the mean-field approximation \cite{Janarek2020a}. {However, in the present work}, the interactions are not approximated, which strengthens their effect on the destruction of the boomerang effect: the final center of mass positions are higher than in the mean-field approximation. This translates {in}to shorter break times {for} the many-body system. In the simulations, the total number of particles is not very high, so, to support this conclusion, we have also analyzed the features of the average one-body density matrix which have clearly shown that the condensate fraction in our system is very low. Hence, the observed phenomena are necessarily {beyond} the mean-field analysis.
In the second part, we have shown that the particles of the Tonks-Girardeau gas undergo the full boomerang effect. Apart from agreement between the numerical and theoretical results for the center of mass evolution, we have shown that the final particle density is given by {the} convolution of the Gogolin profile and the initial particle density.
Finally, we have shown that, in the case of {finite} {strong} interactions between bosons (that is, effective weak interactions between fermions), the boomerang effect is only partial. To study the destruction of the QBE in detail, we have calculated the break time and shown that it is proportional to the interaction strength between bosons, i.e. inversely proportional to the effective interaction strength between {the} fermions. Moreover, from the analysis of the entropy of entanglement, we have computed another characteristic time and shown that this time is comparable and proportional to the break time.
{Altogether, our results strongly suggest that the breaking of the QBE by interactions is a rather simple and universal phenomenon, which can be described by a single parameter, the break time, independently of the details of the interaction and whether a full many-body or a mean-field description is used.}
Possible future studies of the many-body quantum boomerang could include analysis of the phenomenon for a composite particle, i.e. {a} soliton. In~\cite{Delande2013}, many-body Anderson localization of a bright soliton was shown using very similar numerical tools. It would be very interesting to check whether such a composite object undergoes the quantum boomerang effect.
{While the present work was restricted to rather weak disorder (where analytical predictions for $\langle x(t)\rangle$ are possible), the regime of both strong disorder and strong interactions would be very interesting, especially in the regime of many-body localization.}
\acknowledgments
We thank Nicolas Cherroret for useful discussions.
This research has been supported by the National Science Centre (Poland) under Project No. 2016/21/B/ST2/01086 (J.J.) and 2019/35/B/ST2/00034 (J.Z.). We acknowledge support of the Foundation for Polish Science (FNP) through the first Polish-French Maria Sklodowska-Pierre Curie reward received by D. Delande and J. Zakrzewski. J.~Janarek also acknowledges the support of the French Embassy in Poland through the \emph{Bourse du Gouvernement Fran\c cais} program. We acknowledge the support of the PL-Grid infrastructure which made numerical calculations possible.
\input{boom.bbl}
\end{document}
\section{Introduction}
\label{sec:introduction}
The \emph{magic} of quantum correlations has been often invoked to explain the exotic and
extraordinary phenomena like the superfluidity, superconductivity, and Bose-Einstein condensation,
displayed by, what may be called, the \emph{quantum fluids}\,\cite{London-I, London-II,Landau, Gross, Pitaevskii, Hohenberg, Fischer,Fetter}.
What links all these diverse systems is the absence of vorticity.
The Meissner effect, a complete expulsion of the magnetic field (the electromagnetic vorticity), for instance,
is taken to be the defining attribute of the superconducting state\,\cite{London-I,Landau}
(see also \cite{mahcs} for a characterization of superconductivity as vanishing of the total vorticity, a sum of the fluid and electromagnetic vorticities).
Of course, these highly correlated quantum states are accessible only under very special conditions, in particular at very low temperatures.
It is, perhaps, legitimate to infer that these quantum fluids, when they are not in their \emph{super} phase, may, in fact,
entertain some sort of vorticity.
In this article, we explore how such a vortical state may emerge in a quantum system,
whose basic dynamical equations (like the Schr\"odinger equation)
are not fundamentally suitable for hosting a vortex.
The investigation of the quantum vortex states constitutes a fundamental enquiry,
because such states, like their classical counterparts, ought to be ubiquitous.
And, again like their classical counterparts, the vorticity will lend enormous diversity and complexity
to the behavior and properties of quantum fluids.
Unveiling the mechanisms responsible for creating/sustaining vorticity
will, surely, advance our understanding of the dynamics of the {phase transition}
---from zero vorticity to finite vorticity and vice versa.
We will carry out this deeper enquiry, aimed at bridging the \emph{vorticity gap}, by exploring
a quantum system equivalent to a hot fluid/plasma\,\cite{mahase2014}.
We will show that the thermodynamic forces induce two distinct fundamental changes in the dynamics:
the Hamiltonian becomes 1) nonlinear by the thermal energy, and 2) non-Hermitian by the entropy.
By these new effects, a finite vorticity becomes accessible to the quantum fluid.
Such a vorticity-carrying hot quantum fluid could define a new and interesting state of matter.
Within the framework of a hot Pauli-Schr\"odinger quantum fluid, we will
demonstrate the existence of one such state---the Quantum Spiral.
We believe that it marks the beginning of a new line of research.
To highlight the new aspect of our construction,
we present a short overview of papers on the classical-quantum interplay.
The first set of investigations\,\cite{mad,bohm,tak1,tak2,tak3,tak4,cufaro, Fro,haas,marklund1,marklund2,vortical,asenjorqp}
is devoted to deriving (and studying) the fluid-like systems from the standard equations of quantum mechanics,
while the second set\,\cite{kania,pesci1,pesci2,Andreev, carbonaro, Koide, Kambe}
constructs a quantum mechanics equivalent to a given classical system.
Building from the energy momentum tensor for a perfect isotropic hot fluid,
Mahajan \& Asenjo\,\cite{mahase2014} have recently demonstrated
that the emergent quantum mechanics of an elementary constituent of the Pauli-Schr\"odinger hot fluid (called a \emph{fluidon})
is nonlinear as distinct from the standard linear quantum mechanics; the thermal interactions manifest as the fluidon self-interaction.
Through a deeper reexamination of this {thermal interaction},
we begin our quest for the thermal mechanism of creating quantum vorticity.
In addition to being a source of nonlinearity, the thermal interaction for the quantum fluid
also endows it with an {entropy}.
\section{Circulation law ---the Vortex and Heat}
There exists a deep relationship between the vortex and a heat cycle,
which is mediated by an entropy.
We may consider a \emph{vortex} in general space;
by {vortex} we mean finiteness of a \emph{circulation} (or, non-exact differential).
A {heat cycle} $\oint T d{S} \neq 0$ ($T$ is the temperature and ${S}$ is the entropy)
epitomizes such a circulation.
When the thermodynamic law is realized on a fluid, the heat cycle is related to the circulation of momentum,
i.e., the mechanical vortex.
For a fluid with a density $\rho$, pressure ${P} $, and enthalpy $H$,
obeying the thermodynamic relation $Td{S}=d{H} - \rho^{-1} d{P} $,
a finite heat cycle $\oint T d{S} \neq 0$ is equivalent to the \emph{baroclinic effect} $\oint \rho^{-1} d{P} \neq 0$
(the exact differential $dH$ does not contribute a circulation; $\oint dH\equiv 0$).
Notice that the entropy, a deep independent attribute of thermodynamics,
is the source of the baroclinic effect; such an effect will be encountered whenever a
field has a similar internal degree of freedom represented by some scalar like an entropy.
Kelvin's circulation theorem says that, as long as the specific pressure force $\rho^{-1} d{P} $
(or, the heat $Td{S}$) is an exact differential,
the fluid (so-called barotropic fluid) conserves the circulation $\oint_\Gamma \wp $ of the momentum $\wp$
along an arbitrary co-moving loop $\Gamma$.
Therefore, a vorticity-free flow remains so forever.
As the antithesis, non-exact thermodynamic force $\rho^{-1} d{P} $
(or, equivalently, a heat cycle $\oint_\Gamma Td{S}\neq0$)
violates the conservation of circulation of momentum,
leading to vortex creation.
\section{Vortex in spinor fields}
To formulate a quantum-mechanical baroclinic effect by quantizing a classical fluid model,
the {correspondence principle} is best described by the Madelung representation of wave functions\,\cite{mad,bohm} (see Appendix A).
We must, however, remember that a scalar (zero-spin) Schr\"odinger field falls short of describing a vortex,
because the momentum field is the gradient of the eikonal of the Schr\"odinger field
(or, in the language of classical mechanics, the momentum is the gradient of the action),
which is evidently curl-free.
This simple fact is, indeed, what prevents a conventional Schr\"odinger quantum regime from hosting vorticity.
In a set of classic papers~\cite{tak1,tak2,tak3,tak4}, Takabayasi showed that
the differences in the phases of spinor components could
generate a \emph{spin vorticity} in the `Madelung fluid' equivalent of
the Pauli-Schr\"odinger quantum field.
In this paper, we investigate
an additional source of vorticity provided by a baroclinic mechanism in a thermally
modified nonlinear Pauli-Schr\"odinger system, obtained here by adding a thermal energy $U$
to the Pauli-Schr\"odinger Hamiltonian.
In order to delineate the new effect in the simplest way, we consider a minimum, field-free Hamiltonian
\begin{equation}
\mathscr{H} = \int \left[ \frac{1}{2m} \left(\frac{\hbar}{i}\nabla\Psi\right)^* \cdot
\left(\frac{\hbar}{i}\nabla\Psi\right)
+ {U} \Psi^*\cdot \Psi \right]
\,dx ,
\label{Q-Hamiltonian-spinor}
\end{equation}
where $\Psi=(\psi_1, \psi_2)$ is a two-component spinor field,
and $U$ is a thermal energy.
Notice that the conventional potential energy is replaced by the thermal energy.
The formulation of ${U}$ as a function of $\Psi$ is the most essential element of our construction.
In classical thermo/fluid dynamics, the thermal energy is generally expressed as $U=U(\rho,S)$
with the density $\rho$ and the entropy $S$.
Although $\rho$ is readily expressed as $\Psi^*\Psi$, finding an expression for $S$ is more challenging.
It is the right juncture to inform the reader that for $U=U(\rho)$, the sought-after baroclinic effect is absent; see \cite{mahase2014}.
The Madelung representation of the wave function
\begin{equation}
\psi_j (\bm{x},t) = \sqrt{\rho_j(\bm{x},t)} e^{i\mathscr{S}_j(\bm{x},t)/\hbar}
\quad (j=1,2)
\label{Madelung-2}
\end{equation}
converts the two complex field variables $(\psi_1, \psi_2)$ into four real variables $(\rho_1,\mathscr{S}_1,\rho_2,\mathscr{S}_2)$.
It will be, however, more convenient to work with an equivalent set:
\begin{equation}
\left\{ \begin{array}{l}
\rho = \rho_1+\rho_2, \\%= \Psi^*\Psi\\,
\mu= \rho_1-\rho_2, \\
\varphi = (\mathscr{S}_1+\mathscr{S}_2)/2, \\
\sigma = (\mathscr{S}_1-\mathscr{S}_2)/2.
\end{array} \right.
\label{Clebsch}
\end{equation}
The 4-momentum becomes
\begin{equation}
{p}^\nu = \frac{1}{\rho} \Re \left( \Psi^*\cdot i\hbar \partial^\nu \Psi \right)
= -\left( \partial^\nu \varphi + \frac{\mu}{\rho}\partial^\nu \sigma \right),
\label{momentum}
\end{equation}
where $(x_0,x_1,x_2,x_3)=(t,-x,-y,-z)$,
and $(\partial^0,\partial^1,\partial^2,\partial^3)=(\partial_t,-\nabla)$.
The spatial part of (\ref{momentum}),
\begin{equation}
\bm{p} = \nabla\varphi + \frac{\mu}{\rho} \nabla \sigma
\label{Clebsch2}
\end{equation}
reads as the Clebsch-parameterized momentum field\,\cite{Clebsch,Lin,Jackiw,Yoshida_Clebsch}.
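The equality of the two expressions for the momentum, Eqs.~(\ref{momentum}) and (\ref{Clebsch2}), can be verified symbolically in one dimension; the concrete functions below are arbitrary test choices, not taken from the paper.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)

# arbitrary concrete test functions for the Madelung variables
rho1 = 1 + x**2
rho2 = 2 + sp.cos(x)**2
S1 = sp.sin(x)
S2 = x**3

psi1 = sp.sqrt(rho1) * sp.exp(sp.I * S1 / hbar)
psi2 = sp.sqrt(rho2) * sp.exp(sp.I * S2 / hbar)

rho = rho1 + rho2
mu = rho1 - rho2
phi = (S1 + S2) / 2
sigma = (S1 - S2) / 2

# momentum from the wave function: p = (1/rho) Re(Psi^* (-i hbar d/dx) Psi)
p_qm = sp.re(sp.conjugate(psi1) * (-sp.I * hbar) * sp.diff(psi1, x)
             + sp.conjugate(psi2) * (-sp.I * hbar) * sp.diff(psi2, x)) / rho

# Clebsch-parameterized momentum, Eq. (Clebsch2)
p_clebsch = sp.diff(phi, x) + (mu / rho) * sp.diff(sigma, x)

residual = sp.simplify(p_qm - p_clebsch)   # vanishes identically
```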
The second term of the right-hand side of (\ref{Clebsch2}) yields a
\emph{vorticity}:
\begin{equation}
\nabla\times\bm{p} =\nabla \left( \frac{\mu}{\rho}\right) \times \nabla \sigma .
\label{spin_vorticity2''}
\end{equation}
In (\ref{spin_vorticity2''}), we have assumed that $\varphi$ does not have a phase singularity.
If $\varphi$ is an angular (multi-valued) field,
as in the example of quantum spirals given later,
a circulation, representing a \emph{point vortex}, will be created by the singularity of $\nabla\times(\nabla\varphi)$
(mathematically, a cohomology).
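Equation~(\ref{spin_vorticity2''}) is the standard Clebsch identity $\nabla\times(f\nabla\sigma) = \nabla f\times\nabla\sigma$ with $f=\mu/\rho$; a symbolic check with arbitrary test functions:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)

# arbitrary concrete test functions: f stands for mu/rho, s for sigma
f = x * y + sp.sin(z)
s = sp.cos(x) * z + y**2

def grad(h):
    return [sp.diff(h, v) for v in coords]

def curl(v):
    return [sp.diff(v[2], y) - sp.diff(v[1], z),
            sp.diff(v[0], z) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# vorticity of the Clebsch momentum p = f grad(s)
p = [f * c for c in grad(s)]
lhs = curl(p)
rhs = cross(grad(f), grad(s))
residuals = [sp.simplify(a - b) for a, b in zip(lhs, rhs)]  # all zero
```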
We may regard $\sigma$ as a Lagrangian label of scalar fields co-moving with the fluid
(see Appendix A).
Hence, for an isentropic process,
we parameterize entropy as ${S}=S(\sigma)$ to put $U=U(\rho,\sigma)$, completing
the process of identification of the thermal variables with the wave function.
The enthalpy and temperature are, respectively, given by
\begin{equation}
{{H}} = \frac{\partial(\rho {U} )}{\partial \rho},
\quad
{{T}} = \frac{\partial {U}}{\partial{S}} .
\label{enthalpy}
\end{equation}
Denoting $S'(\sigma)=dS(\sigma)/d\sigma$, we may define an effective temperature
$\tau=\partial U/\partial \sigma = S'(\sigma) T$.
\section{Thermally-modified nonlinear Pauli-Schr\"odinger equation}
We are ready to derive the determining equation.
In terms of $\Psi$, the canonical 1-form reads
$\Theta = \int p^0 \rho \,dx$.
The variation of the action $\int (\Theta - \mathscr{H})\,dt$ by $\Psi$
yields a thermally-modified nonlinear Pauli-Schr\"odinger equation:
\begin{equation}
i\hbar\partial_t \psi_j =
- \frac{\hbar^2}{2m} \nabla^2\psi_j + \left( {{H}} - i{{G}}_j \right) \psi_j
\quad (j=1,2) ,
\label{Schroedinger-baroclinic}
\end{equation}
where
\begin{equation}
{{G}}_j = (-1)^j \hbar \frac{ {{S'(\sigma)T}} \rho}{4 \rho_j} \quad (j=1,2).
\label{baroclinic_term}
\end{equation}
The following results can be readily derived from the new equation (\ref{Schroedinger-baroclinic}):
(i) The terms $-iG_j \psi_j$ ($j=1,2$) on the right-hand side of (\ref{Schroedinger-baroclinic})
represent the {baroclinic effect}, by which the generator of the system is {non-Hermitian}.
However, the particle number ($\int\rho\,dx$) and the energy ($\mathscr{H}$) are preserved
as constants of motion.
(ii) When $S'(\sigma)=0$ (i.e., the fluid is homentropic), the baroclinic terms are zero.
Then, (\ref{Schroedinger-baroclinic}) consists of two
coupled nonlinear Schr\"odinger equations (the nonlinear coupling comes through $H$ being a function of $\rho$).
Of course, we may put $\psi_2\equiv 0$, and then the system reduces to the standard scalar-field nonlinear Schr\"odinger equation
governing $\psi_1$.
It is well known that, in a one-dimensional space, we obtain \emph{solitons}, when ${{H}}=a \rho$ ($a<0$).
The nonlinear coupling of the two components $\psi_1$ and $\psi_2$ induces chaotic behavior.
Interestingly, however, $\rho = |\psi_1|^2 + |\psi_2|^2$ remains ordered.
These features are displayed in Fig.\,\ref{fig:1D}, where a representative solution in the barotropic ($G_j=0$) limit is plotted.
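Dynamics of this kind can be reproduced with a standard split-step Fourier scheme. The sketch below (our own illustration: $\hbar=m=1$, a periodic box, arbitrary Gaussian initial data) evolves the two coupled components with the enthalpy coupling $H=-2(|\psi_1|^2+|\psi_2|^2)$ and verifies that the total particle number is conserved, consistent with point (i).

```python
import numpy as np

# grid and parameters (hbar = m = 1; all values are illustrative)
N, L, dt, steps = 256, 40.0, 1e-3, 1000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# arbitrary initial spinor: two displaced Gaussians
psi1 = np.exp(-(x - 2.0)**2).astype(complex)
psi2 = np.exp(-(x + 2.0)**2).astype(complex)

half_kinetic = np.exp(-1j * k**2 * dt / 4)   # exp(-i (k^2/2) dt/2)

def step(p1, p2):
    # kinetic half-step in Fourier space
    p1 = np.fft.ifft(half_kinetic * np.fft.fft(p1))
    p2 = np.fft.ifft(half_kinetic * np.fft.fft(p2))
    # nonlinear full step: H = -2 rho couples the two components
    rho = np.abs(p1)**2 + np.abs(p2)**2
    phase = np.exp(2j * rho * dt)            # exp(-i H dt) with H = -2 rho
    p1, p2 = p1 * phase, p2 * phase
    # kinetic half-step
    p1 = np.fft.ifft(half_kinetic * np.fft.fft(p1))
    p2 = np.fft.ifft(half_kinetic * np.fft.fft(p2))
    return p1, p2

dx = L / N
n0 = dx * np.sum(np.abs(psi1)**2 + np.abs(psi2)**2)
for _ in range(steps):
    psi1, psi2 = step(psi1, psi2)
n1 = dx * np.sum(np.abs(psi1)**2 + np.abs(psi2)**2)
# every split-step factor is a pure phase, so the particle number is conserved
```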
\begin{figure}[tb]
\raisebox{3.2cm}{\textbf{a}}
\includegraphics[scale=0.6]{schroedinger-1D-psi.eps}
\\ ~ \\
\raisebox{3.2cm}{\textbf{b}}
\includegraphics[scale=0.6]{schroedinger-1D-rho.eps}
\caption{
\label{fig:1D}
A typical nonlinear Pauli-Schr\"odinger field in one-dimensional space (here the eigenvalue $\mu=0$).
(\textbf{a}) The two spinor components
(here, real-valued functions) $\psi_1$ (blue) and $\psi_2$ (red) are coupled through the nonlinear enthalpy coefficient
$H = -2(|\psi_1|^2+|\psi_2|^2)$, exhibiting chaotic oscillations.
(\textbf{b}) The densities $\rho_1 = |\psi_1|^2$ (red) and $\rho_2 = |\psi_2|^2$ (yellow) oscillate irregularly,
while the total density $\rho = \rho_1 + \rho_2$ (blue) remains ordered.
}
\end{figure}
(iii) When the baroclinic terms are finite,
there is no one-dimensional (plane wave) solution.
In fact, upon substitution of $\psi_j=e^{i(k_yy+k_zz-\omega t)/\hbar}\phi_j(x)$ ($j=1,2$) into (\ref{Schroedinger-baroclinic}),
we obtain an eigenvalue problem
\begin{equation}
\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \phi_j = (\lambda + {{H}} -i {{G}}_j ) \phi_j
\quad (j=1,2),
\label{Schroedinger-baroclinic-1D}
\end{equation}
where $\lambda = (k_y^2+k_z^2)/2m -\omega$. Half of the (local, i.e., for each $\psi_j$)
eigenvalues for this operator,
$\pm \sqrt{\lambda+ {{H}} -i {{G}}_j}$, have
positive real parts whenever ${{G}}_j\neq 0$,
and thus, (\ref{Schroedinger-baroclinic-1D}) cannot have a bounded solution for any $\lambda$.
The nonexistence of a one-dimensional (plane-wave) solution in a baroclinic system emphasizes the fact that the
baroclinic effect is absent in a one-dimensional system.
However, we do find interesting solutions in two-dimensional space.
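The nonexistence argument can be checked directly: whenever ${{G}}_j\neq 0$, the quantity $\lambda + {{H}} - i{{G}}_j$ is not a negative real number, so its square roots are never purely imaginary and one member of the pair $\pm\sqrt{\lambda+{{H}}-i{{G}}_j}$ has a strictly positive real part. A short numerical confirmation (random real values of $\lambda+H$ and $G_j$ are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
lam_plus_H = rng.uniform(-10.0, 10.0, size=1000)   # arbitrary real lambda + H
G = rng.uniform(0.1, 10.0, size=1000) * rng.choice([-1.0, 1.0], size=1000)

roots = np.sqrt(lam_plus_H - 1j * G)   # principal branch of sqrt(lambda + H - i G)
# the principal square root of a non-real number has a strictly positive real part,
# so one member of each +/- pair grows exponentially: no bounded solution exists
assert np.all(roots.real > 0.0)
```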
\section{Quantum spirals}
Let us assume a solution of a spiral form:
\begin{equation}
\Psi =
\left(\begin{array}{c}
\psi_1 \\ \psi_2 \end{array} \right)
=
\left(\begin{array}{cc}
e^{i(n\theta + \beta_1(r) -\omega t)} \phi_1(r)
\\
e^{i(n\theta + \beta_2(r) -\omega t)} \phi_2(r)
\end{array} \right) .
\label{spiral-1}
\end{equation}
The azimuthal mode number $n$ gives the number of arms.
The phase factor $\beta_j(r)$ determines their shape;
for example, when $\beta_j(r)$ is a linear function of $r$, we obtain an Archimedean spiral.
The factor $\phi_j(r)$ yields the radial modulation of amplitudes.
We find that the nonlinear terms
\begin{eqnarray}
\rho &=&
\phi_1^* \phi_1+\phi_2^* \phi_2 ,
\label{spiral-2}
\\
\sigma &=&
\frac{\hbar}{2} \left( \beta_1 - \beta_2 + \arg \phi_1 - \arg \phi_2 \right)
\label{spiral-3}
\end{eqnarray}
are functions only of $r$; the azimuthal mode number $n$, therefore, is a good quantum number.
Inserting (\ref{spiral-1}) into (\ref{Schroedinger-baroclinic}), we obtain ($j=1, 2$, $\dot{~}$=$d/dr$),
\begin{eqnarray}
\ddot{\phi}_j + \left(\frac{1}{r} + i\dot{\beta}_j\right) \dot{\phi}_j =
\left[\left( -\omega + \frac{n^2}{r^2} + \dot{\beta}_j^2
+ {{H}} \right)
\right.
\nonumber
\\
\left.
- i\left(\ddot{\beta}_j +\frac{\dot{\beta}_j}{r}- {{G}}_j\right) \right] \phi_j .
\label{Schroedinger-baroclinic-spiral}
\end{eqnarray}
Bounded solutions are obtained when the non-Hermitian term vanishes; one must, then, solve
the system of simultaneous equations
\begin{eqnarray}
\ddot{\phi}_j + \left(\frac{1}{r} + i\dot{\beta}_j\right) \dot{\phi}_j &=&
\left( -\omega + \frac{n^2}{r^2} + \dot{\beta}_j^2 + {{H}} \right) \phi_j ,
\label{Schroedinger-baroclinic-spiral-1}
\\
\ddot{\beta}_j + \frac{\dot{\beta}_j}{r} - {{G}}_j &=& 0.
\label{Schroedinger-baroclinic-spiral-2}
\end{eqnarray}
By (\ref{Schroedinger-baroclinic-spiral-2}), the baroclinic term $G_j$
generates the phase factor $\beta_j(r)$ that determines the shape of the spiral.
Evidently, in a barotropic field ($G_j=0$), spirals do not appear ($\beta_j(r)=0$).
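Equation (\ref{Schroedinger-baroclinic-spiral-2}) is solvable by quadrature: writing it as $(r\beta_j')' = r{{G}}_j$ gives $\beta_j'(r) = r^{-1}\int_0^r s\,{{G}}_j(s)\,ds$. As an illustration (the profile below is assumed for definiteness, not derived from the full system), a baroclinic coefficient ${{G}}_j = \pm g\,r/(r^2+r_c^2)$, which behaves like $\pm g/r$ outside a core of radius $r_c$, yields asymptotically linear phase factors, i.e. approximately Archimedean arms of opposite sense:

```python
import numpy as np

g, rc = 1.0, 3.0                       # hypothetical amplitude and core radius
r = np.linspace(1e-6, 40.0, 40001)
G = g * r / (r**2 + rc**2)             # assumed baroclinic profile, ~ g/r for r >> rc

# (r beta')' = r G  =>  beta'(r) = (1/r) * integral_0^r s G(s) ds
sG = r * G
I = np.concatenate(([0.0], np.cumsum(0.5 * (sG[1:] + sG[:-1]) * np.diff(r))))
dbeta = I / r                          # regular at the origin since I ~ r^3 there
beta = np.concatenate(([0.0], np.cumsum(0.5 * (dbeta[1:] + dbeta[:-1]) * np.diff(r))))

# analytically, beta'(r) = g * (1 - (rc/r) * arctan(r/rc)) -> g as r -> infinity
```

Flipping the sign of $g$ gives the opposite-sense twin; this mirrors the behavior of $\beta_1(r)$ and $\beta_2(r)$ shown in Fig.\,\ref{fig:baroclinic_m2_fields}.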
To construct explicit examples,
let us consider an \emph{ideal gas} that has an internal energy such that
\begin{equation}
{U} = c_v (\rho e^{{S}-{\sigma}_0} )^{1/c_v},
\label{ideal_gas}
\end{equation}
where $c_v$ (the specific heat normalized by the Boltzmann constant) and ${\sigma}_0$ are constants.
For simplicity, we assume $S=\sigma$ (thus, $T=\tau$)
to evaluate
\begin{equation}
{{T}} = (\rho e^{{\sigma}-{\sigma}_0} )^{1/c_v},
\quad
{{H}}=(c_v +1) {{T}}.
\label{ideal_gas-2}
\end{equation}
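These relations follow from the standard per-unit-mass thermodynamic definitions ${{T}} = \partial {U}/\partial {\sigma}$, $P = \rho^2\,\partial {U}/\partial\rho$ and ${{H}} = {U} + P/\rho$ (used implicitly here); a quick symbolic check:

```python
import sympy as sp

rho, sigma, sigma0, cv = sp.symbols('rho sigma sigma_0 c_v', positive=True)

U = cv * (rho * sp.exp(sigma - sigma0))**(1/cv)   # internal energy, Eq. (ideal_gas)
T = sp.diff(U, sigma)                             # temperature T = dU/dsigma
P = rho**2 * sp.diff(U, rho)                      # pressure P = rho^2 dU/drho
H = U + P / rho                                   # enthalpy coefficient

assert sp.simplify(T - (rho * sp.exp(sigma - sigma0))**(1/cv)) == 0
assert sp.simplify(P - rho * T) == 0              # ideal-gas law P = rho*T
assert sp.simplify(H - (cv + 1) * T) == 0         # matches Eq. (ideal_gas-2)
```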
Substituting (\ref{spiral-2}) and (\ref{spiral-3}), we may write
the coefficients $H$ and $G_j$ in terms of $\phi_j$ and $\beta_j$ ($j=1,2$).
As an example ($c_v=1$ and $\sigma_0=0$) of the numerical solutions of this system,
we display in Fig.\,\ref{fig:baroclinic_m2}, a typical $n=2$ solution exhibiting twin spirals (opposite sense) of
the two components of the spinor $\Psi$.
Figure\,\ref{fig:baroclinic_m2_fields}
displays the phase factors $\beta_j(r)$ and the amplitude factors $|\phi_j(r)|^2$
from the spiral spinor fields of Fig.\,\ref{fig:baroclinic_m2}.
\begin{figure}[tb]
\includegraphics[scale=0.3]{baroclinic_m2-1.eps}
~~~
\includegraphics[scale=0.3]{baroclinic_m2-2.eps}
\caption{
\label{fig:baroclinic_m2}
Dual spirals created in a baroclinic Pauli-Schr\"odinger field;
the density plots of $\Re \psi_1$ (left) and $\Re \psi_2$ (right)
show opposite-sense spirals.
Here $\omega=4.5$.
}
\end{figure}
\begin{figure}[tb]
\begin{center}
\raisebox{3.2cm}{\textbf{a}}~
\includegraphics[scale=0.6]{schroedinger-spiral-beta.eps}
\\ ~ \\
\raisebox{3.2cm}{\textbf{b}}~
\includegraphics[scale=0.6]{schroedinger-spiral-rho.eps}
\caption{
\label{fig:baroclinic_m2_fields}
(\textbf{a}) The phase factors $\beta_1(r)$ (blue) and $\beta_2(r)$ (red)
of the quantum spirals shown in Fig.\,\ref{fig:baroclinic_m2}.
Except in the core region ($r<3$), $\beta_j(r)$ is an approximately linear function of $r$,
hence the arms are approximately Archimedean.
The opposite signs of $\beta_1(r)$ and $\beta_2(r)$ give opposite-sense spirals.
(\textbf{b}) The amplitude factors $|\phi_1(r)|^2$ ($=|\phi_2(r)|^2$ by the assumption).
}
\end{center}
\end{figure}
The baroclinic Pauli-Schr\"odinger equation also has axisymmetric ($n=0$) solutions (see Fig.\,\ref{fig:baroclinic_m0}-a).
Figure\,\ref{fig:baroclinic_m0}-b shows that when the baroclinic term is zero
(then, the generator is Hermitian),
no spiral structures, even with a finite $n$, are created.
This is because the phase factors $\beta_j(r)$ become zero when $G_j=0$; see (\ref{Schroedinger-baroclinic-spiral-2}).
\begin{figure}[tb]
\begin{center}
\raisebox{3.5cm}{\textbf{a}}
\includegraphics[scale=0.3]{baroclinic_m0.eps}
~~
\raisebox{3.5cm}{\textbf{b}}
\includegraphics[scale=0.3]{barotropic_m2.eps}
\caption{
\label{fig:baroclinic_m0}
(\textbf{a}) Axisymmetric ($n=0$) solution of the baroclinic equation with $\omega=4.5$;
plot of $\Re \psi_1$.
(\textbf{b}) Barotropic ($G_j=0$) Pauli-Schr\"odinger field ($n=2$) with $\omega=4.5$;
plot of $\Re \psi_1$.
When $G_j=0$, the phase factor $\beta_j(r)=0$, thus no arms are created.
}
\end{center}
\end{figure}
\section{Conclusion}
We have shown that a quantum fluid with internal thermal energy is capable of supporting
entirely new quantum states, like the quantum spirals derived in the present simplified model.
The harnessing of the profound effect of entropy, in a thermal spin quantum system, has
led us to a new mechanism
(whose classical counterpart is the famous baroclinic effect generating,
for example, hurricanes)
that substantially extends the range of vortical states accessible to quantum systems.
We note that although the conventional spin forces
(either spin-magnetic field interactions or spin-spin interaction; cf. Appendix A)
may amplify or sustain vorticity, they cannot generate it from zero.
The baroclinic effect is a creation mechanism that works without a \emph{seed}.
A close analogy is provided by the respective roles of dynamo amplification of a magnetic field\,\cite{Moffatt}
and the Biermann battery mechanism\,\cite{Biermann} in a classical plasma; the former needs a seed magnetic field.
We also note that the baroclinic effect we have studied is an isentropic process,
and differs from dissipation mechanisms like friction.
For example, a finite-temperature Bose gas is modeled by a modified Gross-Pitaevskii equation
including an effective friction term, which is coupled with
a quantum Boltzmann equation describing a thermal cloud\,\cite{ZNG,Jackson,Allen}
(see also tutorial\,\cite{Proukakis}).
The friction term introduces a non-Hermitian Hamiltonian which, in contrast to the presently
formulated isentropic model, destroys the conservation of the particle number and the energy of the condensate component.
One expects to find a variety of new and interesting states when this work is extended
to more encompassing quantum systems: the thermal Pauli-Schr\"odinger system and the relativistic thermal Dirac (Feynman-Gellmann)
equation coupled to the electromagnetic field.
For both of these systems, baroclinic terms can be readily incorporated in
known formalisms \cite{vortical,Koide,mahase2014}.
The Coriolis force, a close cousin of the magnetic force, could also be included as a gauge field\,\cite{Kambe}.
When we consider higher-order spinors
(for example, the spin-1 representation of SU(2) fields may be applied to a Bose gas in a trap\,\cite{spin-1})
the number of Clebsch parameters increases, and the field starts to have a helicity\,\cite{Yoshida_Clebsch}.
\section*{Acknowledgment}
A part of this work was done as a JIFT program of US-Japan Fusion Research Collaboration.
ZY's research was supported by JSPS KAKENHI Grant Number 23224014.
SMM's research was supported by the US DOE
Grant DE-FG02-04ER54742.
\section*{Appendix A. Correspondence principle}
We explain the correspondence principle
relating the classical and quantized fields.
In terms of the real variables (\ref{Clebsch}),
the Hamiltonian (\ref{Q-Hamiltonian-spinor}) reads $\mathscr{H} = \mathscr{H}_c + \mathscr{H}_q $ with
\begin{eqnarray}
\mathscr{H}_c &=& \int \left[ \frac{|\nabla \varphi + \frac{\mu}{\rho}\nabla\sigma|^2}{2m}
+ {U}(\rho,\sigma ) \right] \rho \,dx ,
\label{c-limit}
\\
\mathscr{H}_q &=& \int \frac{\hbar^2}{8m} \left( \frac{|\nabla\rho|^2}{\rho^2} + \sum_\ell |\nabla {S}_\ell|^2 \right) \rho \,dx.
\label{H-quantization}
\end{eqnarray}
Here $\mathscr{H}_c$ is a classical Hamiltonian generating fluid dynamics\,\cite{Lin}.
With the canonical 1-form
$\Theta = \int {p}^0 \rho dx = -\int (\rho \partial_t\varphi + \mu \partial_t\sigma )\,dx $,
the variation of the classical action $\int (\Theta - \mathscr{H}_c)\,dt$
with respect to the canonical variables $(\rho,\varphi,\mu,\sigma)$ yields Hamilton's equation:
\begin{eqnarray}
\partial_t{\rho} &=& \partial_{\varphi} \mathscr{H}_c = -\nabla\cdot(\bm{v}\rho) ,
\label{fluid_canonical-1}
\\
\partial_t{\varphi} &=& -\partial_\rho \mathscr{H}_c = -\bm{v}\cdot\nabla\varphi+m|\bm{v}|^2/2- H,
\label{fluid_canonical-2}
\\
\partial_t{\mu} &=& \partial_{\sigma} \mathscr{H}_c = -\nabla\cdot(\bm{v}\mu) + \rho T,
\label{fluid_canonical-3}
\\
\partial_t{\sigma} &=& -\partial_{\mu} \mathscr{H}_c = -\bm{v}\cdot\nabla\sigma ,
\label{fluid_canonical-4}
\end{eqnarray}
where $\bm{v} =\bm{p}/m$ is the fluid velocity,
(\ref{fluid_canonical-1}) is the mass conservation law,
(\ref{fluid_canonical-4}) is the entropy conservation law
(justifying the parameterization $U(\rho,\sigma)$),
and the combination of all equations with the thermodynamic relation $\nabla {{H}} - {{T}} \nabla{\sigma} = \rho^{-1} \nabla {P} $ ($P$ is the pressure) yields the momentum equation
\begin{equation}
\partial_t\bm{p} + (\bm{v}\cdot\nabla)\bm{p} = - \rho^{-1}\nabla {P}.
\label{fluid_canonical-6}
\end{equation}
The system (\ref{fluid_canonical-1})-(\ref{fluid_canonical-4}) is an infinite-dimensional
Hamiltonian system endowed with a canonical Poisson bracket such that
\begin{equation}
\{ \rho(\bm{x}), \varphi(\bm{y}) \} = \delta(\bm{x}-\bm{y}),
~
\{ \mu(\bm{x}), \sigma(\bm{y}) \} = \delta(\bm{x}-\bm{y});
\label{canonical_bracket-1}
\end{equation}
all other brackets are zero.
By (\ref{Madelung-2}) and (\ref{Clebsch}),
the Poisson algebra (\ref{canonical_bracket-1})
is equivalent to the Lie algebra of the second-quantized Pauli-Schr\"odinger field
acting on the Fock space of either Bosons or Fermions\,\cite{Jackiw}:
\begin{equation}
\begin{array}{l}
{[} \psi_j(\bm{x}), \psi_k^*(\bm{y}) {]}_\pm = (i\hbar)^{-1} \delta_{jk} \delta(\bm{x}-\bm{y}),
\\
{[} \psi_j(\bm{x}), \psi_k(\bm{y}) {]}_\pm =0,
~
{[} \psi_j^*(\bm{x}), \psi_k^*(\bm{y}) {]}_\pm =0.
\end{array}
\label{canonical_bracket-0}
\end{equation}
Based on this correspondence principle, we can quantize the classical field by adding $\mathscr{H}_q$ to $\mathscr{H}_c$;
the action principle with respect to $\Psi$ yields (\ref{Schroedinger-baroclinic}).
The same action principle with respect to the real variables (\ref{Clebsch}) yields the fluid representation:
on the right-hand side of (\ref{fluid_canonical-6}), $\mathscr{H}_q$ adds quantum forces\,\cite{tak1,tak2}
\begin{equation}
\bm{F}_q=
\nabla \left( \hbar^2 \frac{\nabla^2\sqrt{\rho}}{2m\sqrt{\rho}}\right)
-\sum_\ell\frac{\hbar^2}{4m\rho}\nabla\cdot[\rho\nabla {S}_\ell\otimes \nabla {S}_\ell] ,
\label{fluid_canonical-7}
\end{equation}
where
${S}_\ell =\rho^{-1}\Psi^* \bm{\sigma}_\ell \Psi $
($\bm{\sigma}_\ell$ are the Pauli matrices).
\section{Introduction}
\setcounter{equation}{0}
\label{sec1}
The energy and the momentum of the gravitational field cannot be
localized \cite{one,two}. In fact, assuming the equivalence principle in its
stronger formulation, the laws of physics are those of special relativity in
a freely falling (non-rotating) laboratory that occupies a small portion of spacetime.
As long as the coordinates can be transformed to a freely falling frame,
it is possible to eliminate locally the gravitational field. It is then conceptually
difficult to propose a unique and local definition of the gravitational energy density.
As a consequence there are no reasons why the energy-momentum
tensor of the gravitational field itself should be either unique or
covariantly conserved. For the same reason the energy-momentum tensor
of the gravitational waves of cosmological origin is not unique.
A variety of pseudo-tensors can be concocted and they should be
ultimately equivalent at least in the short wavelength limit, i.e. when the
typical frequencies are much larger than the rate of variation of the corresponding geometry.
This equivalence is not conclusive since there are physical situations where the frequencies
of the waves are smaller than the rate of variation of the corresponding
background. For instance, if we consider cosmological backgrounds during an accelerated
stage of expansion, the particle horizon diverges while the event horizon is of the order of the inverse
Hubble rate. The wavelengths of the gravitons become larger than the
Hubble radius but are still shorter than the typical size of causally connected
regions: by a mode being beyond the horizon we only mean
that the physical wavenumber $k/a$ is much less than the expansion rate $H$ and
this does not have anything to do with causality \cite{threea}.
Some of the most notable strategies developed through the years can be
summarized as follows. The Landau and Lifshitz \cite{three} proposal
is rooted in the second-order corrections of the Einstein tensor supplemented
by the observation that Bianchi identities must be valid to all orders in the perturbative expansion.
The Brill-Hartle strategy \cite{four} is instead based on the properties of a specific covariant averaging
scheme aimed at separating the terms that evolve faster than the rate of
variation of the corresponding background. The Brill-Hartle scheme has been used to derive
the Isaacson effective pseudo-tensor providing a sound description
of gravitational radiation in the high-frequency limit \cite{five} (see also \cite{fivea}).
The suggestion of Ford and Parker \cite{six,seven} follows instead from the
effective action of the relic gravitons derived by perturbing the gravitational action
to second order in the tensor amplitude. Other apparently different strategies are related to the
ones mentioned above. For instance the approach of Refs. \cite{eight,nine} is
the Landau-Lifshitz approach adapted to the case of a cosmological background.
Through the years the various suggestions have been tested in different frameworks
either for the solution of the concrete problems of backreaction \cite{nine,ten} or for
the analysis of the implications of the different proposals \cite{eleven,twelve,thirteen,fourteen}
not necessarily in connection with the cosmological problems.
Babak and Grishchuk \cite{fifteen} came up with a possible
definition of a true energy-momentum tensor of the gravitational field.
By treating gravity as a nonlinear tensor field in flat space-time \cite{sixteen},
Ref. \cite{fifteen} claimed a result with all the
necessary properties of a true energy-momentum tensor of the
gravitational field itself (i.e. symmetry, uniqueness, gauge-invariance and covariant conservation).
By taking the results of Ref.\cite{fifteen} at face value
the problem of localizing the energy and momentum of the gravitational field would be
completely solved. The perspective of Ref. \cite{fifteen} has been subsequently questioned by Butcher, Hobson and Lasenby \cite{seventeen,eighteen,nineteen} who suggested that the proposal of Ref. \cite{fifteen}
does not have a definite physical significance. In spite of the reasonable concerns of Refs. \cite{seventeen,eighteen,nineteen},
what matters for the present ends is that the geometrical object most closely related
to the Babak-Grishchuk suggestion is (again) the Landau-Lifshitz pseudo-tensor \cite{three} as explicitly
recognized by the authors of Ref. \cite{fifteen}.
In this paper we intend to clarify the analogies and the important differences characterizing the
various approaches developed so far. After scrutinizing the limitations and the ambiguities
of the diverse proposals some general lessons will be drawn. In the light of a number of reasonable
criteria (i.e. gauge-invariance, frame-invariance, positivity of the energy density) a rather
plausible strategy to assign the energy-momentum tensor of the relic gravitons is rooted in
the original Ford-Parker suggestion \cite{six,seven} where the background metric and the
corresponding perturbations are treated as independent fields; the effective
energy-momentum pseudo-tensor follows by functional derivation
of the effective action with respect to the background metric. In view of the general discussion
it is practical to separate three complementary
aspects of the problem: {\it i)} the strategy for the derivation of the energy-momentum pseudo-tensor; {\it ii)}
the averaging scheme; {\it iii)} the connection to the observables.
This will be the overall logic followed in the present investigation.
The pseudo-tensors explored through the years must implicitly assume an averaging scheme which is often difficult to formulate in general terms.
As long as the relic gravitons potentially populating the present universe did start their evolution from
a quantum mechanical initial state,
the expectation values of their energy density and of their pressure can be computed without
imposing any extrinsic averaging scheme. Indeed during an inflationary stage of expansion the classical
fluctuations are diluted away \cite{twentythreea,twentythreeb,twentythreec,twentythreed}
while the quantum fluctuations reappear continuously so that the relic gravitons are parametrically
amplified thanks to the pumping action of the gravitational field itself, a perspective
invoked in Refs. \cite{twenty,twentyone} (see also \cite{twentythree})
even before the formulation of the conventional inflationary paradigm.
By following the tenets of the quantum theory of parametric amplification
(originally developed in the case of optical photons \cite{twentytwo}) a fair estimate of the mean energy density
and pressure of the relic gravitons is obtained by averaging the various expressions over the same initial state
(e.g. the vacuum). Within each of the various parametrizations
of the energy-momentum pseudo-tensor the quantum averages will then be used to compare the competing proposals.
The other schemes (like the Brill-Hartle average \cite{four} and its descendants \cite{eleven,twelve,fourteen})
reproduce the results of the quantum averaging when the wavelengths are shorter than the Hubble radius
but are not defined in the opposite limit.
The present paper is organized as follows. In section \ref{sec2} the various proposals are presented in a common
perspective and with the purpose of easing their mutual comparison. By focussing the attention on the
case of cosmological backgrounds the explicit expressions for the energy density, for the pressure and for the anisotropic stress are obtained
within the different physical suggestions. The quantum averaging and its basic properties are discussed in section \ref{sec3}.
Two physically relevant examples are presented in section \ref{sec4} with the aim of illustrating
the basic features and the patent ambiguities of the different parametrizations.
The connection of the various proposals with the observables customarily
employed in the analysis of relic graviton backgrounds is discussed in section \ref{sec5}. In the same framework we also
evaluate the spectral density in the Brill-Hartle-Isaacson approach by assuming that the tensor amplitude is just an isotropic
random field varying in time; we connect the related spectral density to the power spectrum. At the end of section \ref{sec5}
we show that the effective energy-momentum pseudo-tensor can be derived in a different (conformally related) frame
even if the obtained results are ultimately frame-invariant. Finally section \ref{sec6} contains the concluding remarks.
\renewcommand{\theequation}{2.\arabic{equation}}
\section{Different answers for a similar question}
\setcounter{equation}{0}
\label{sec2}
Let us consider a conformally
flat background geometry $\overline{g}_{\mu\nu} = a^2(\tau) \eta_{\mu\nu}$,
where $a(\tau)$ is the scale factor and $\tau$ denotes the conformal time coordinate;
$\eta_{\mu\nu}= \mathrm{diag}(1, \, -1, \, -1,\, -1)$ is the Minkowski metric.
The metric fluctuations are introduced as $g_{\mu\nu}(\vec{x}, \tau) = \overline{g}_{\mu\nu} + \delta^{(1)}_{t} g_{\mu\nu}$,
where $\delta_{t}^{(1)} g_{\mu\nu}$ denotes
the (first-order) tensor fluctuation. The same notation will be used
for any tensor in four-dimensions (i.e. the Ricci or Einstein tensor) so that
$\delta_{t}^{(1)} A_{\mu\nu}$ and $\delta_{t}^{(2)}A_{\mu\nu}$ will denote the first- and second-order tensor fluctuations
of the generic tensor $A_{\mu\nu}$. The first- and second-order tensor fluctuations of the metric and of the square root of its determinant
are given by:
\begin{eqnarray}
\delta_{t}^{(1)}\, g_{ij} &=& - a^2\, h_{ij}, \qquad \delta_{t}^{(1)} \, g^{ij} = \frac{h^{ij}}{a^2}, \qquad
\delta_{t}^{(2)} \, g^{ij} = - \frac{h^{i}_{k}\,\, h^{j k}}{a^2},
\label{oneB}\\
\delta_{t}^{(1)} \sqrt{ -g} &=&0, \qquad \delta_{t}^{(2)} \sqrt{ -g} = - \frac{a^4}{4} h_{k\ell}\, h^{k\ell},
\label{oneBB}
\end{eqnarray}
where $h_{i\,j}$ is a rank-two tensor in three dimensions which is divergenceless and traceless, i.e.
$\partial_{i} h^{\,\,i}_{j} =0 = h_{i}^{\,\,\,i}$. The prime will denote a derivative with respect to the
conformal time coordinate. With this notation ${\mathcal H} = a^{\prime}/a$ where ${\mathcal H} = a \, H$ and $H = \dot{a}/a$
is the Hubble rate in the cosmic time parametrization [note that $a(\tau) \, d\tau = d\, t$].
To avoid the proliferation of superscripts we shall sometimes make explicit the derivative with respect to $\tau$
and write $h_{k\ell} \,\partial_{\tau} h^{k\ell}$ instead of $h_{k\ell} \,h^{k\ell\,\,\prime}$. It is relevant to mention
that the tensor amplitude $h_{ij}$ defined in Eqs. (\ref{oneB})-(\ref{oneBB}) is invariant under
infinitesimal coordinate transformations; if the tensor amplitude is defined as in Eqs. (\ref{oneB})--(\ref{oneBB})
its quadratic combinations will be automatically gauge-invariant.
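The transverse-traceless conditions can be enforced mode by mode with the standard $k$-space projector $P_{ij} = \delta_{ij} - \hat{k}_i \hat{k}_j$. A short numerical illustration (the wavevector and the symmetric perturbation below are random, purely for the sake of the check):

```python
import numpy as np

rng = np.random.default_rng(1)
khat = rng.normal(size=3); khat /= np.linalg.norm(khat)
P = np.eye(3) - np.outer(khat, khat)      # transverse projector, P @ khat = 0

h = rng.normal(size=(3, 3)); h = h + h.T  # arbitrary symmetric perturbation
# TT projection: h_ij -> (P_ik P_jl - P_ij P_kl / 2) h_kl
hTT = np.einsum('ik,jl,kl->ij', P, P, h) - 0.5 * P * np.einsum('kl,kl->', P, h)
```

The projected amplitude obeys $h_{i}^{\,\,i}=0$ and $\hat{k}^{i} h_{ij}=0$, i.e. the divergenceless condition $\partial_{i} h^{\,\,i}_{j}=0$ for a single Fourier mode.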
\subsection{The effective action of the relic gravitons}
The effective energy-momentum pseudo-tensor of the relic gravitons follows from the observation
\cite{six,seven} that the tensor fluctuations and the background
metric can be regarded as independent variables. Neglecting, for simplicity, the presence of the sources the
action for the relic gravitons is essentially the Einstein-Hilbert action perturbed to second-order i.e.
\begin{equation}
S_{t} = \delta^{2} S = \frac{1}{2 \ell_{P}^2} \delta^{(2)}_{t} \biggl\{\int d^{4} x \biggl[ \sqrt{- g} \, g^{\alpha\beta} \,\biggl(\Gamma_{\alpha\beta}^{\,\,\,\,\, \rho} \,\,
\Gamma_{\rho\sigma}^{\,\,\,\,\, \sigma} - \Gamma_{\beta\rho}^{\,\,\,\,\, \sigma} \,\,
\Gamma_{\sigma\alpha}^{\,\,\,\,\, \rho} \biggr) \biggr]\biggr\},
\label{twoA}
\end{equation}
where $\ell_{P} = \sqrt{8 \pi G}$ and the quantity appearing inside the curly bracket is the Einstein-Hilbert action where
the total derivatives have been excluded. The second-order fluctuation implicitly indicated in Eq. (\ref{twoA}) can also be expressed as
\begin{eqnarray}
\delta^{(2)}_{t} S &=& \frac{1}{2 \ell_{P}^2}\int d^{4} x \biggl[ \overline{g}^{\alpha\beta} \, \overline{{\mathcal Z}}_{\alpha\beta} \,\,\delta^{(2)}_{t} \sqrt{-g} + \sqrt{ -\overline{g}} \biggl( \delta^{(2)}_{t} g^{\alpha\beta} \,\overline{{\mathcal Z}}_{\alpha\beta}
\nonumber\\
&+&
\delta^{(1)}_{t} g^{\alpha\beta} \, \delta^{(1)}_{t} {\mathcal Z}_{\alpha\beta} + \overline{g}^{\alpha\beta}\,\,\delta^{(2)}_{t} {\mathcal Z}_{\alpha\beta} \biggr)\biggr],
\label{twoB}
\end{eqnarray}
where ${\mathcal Z}_{\alpha\beta} = \Gamma_{\alpha\beta}^{\,\,\,\,\, \rho}
\Gamma_{\rho\sigma}^{\,\,\,\,\, \sigma} - \Gamma_{\beta\rho}^{\,\,\,\,\, \sigma} \Gamma_{\sigma\alpha}^{\,\,\,\,\, \rho}$
and $\overline{{\mathcal Z}}_{\alpha\beta}$ denotes the corresponding background quantity. To zeroth-order we have that $\overline{{\mathcal Z}}_{00} =0$ and $\overline{{\mathcal Z}}_{ij} = 2 {\mathcal H}^2 \delta_{ij}$. To first-order
in the tensor amplitude we have instead $\delta^{(1)} {\mathcal Z}_{00} =0$ while $\delta^{(1)} {\mathcal Z}_{ij} = 2 {\mathcal H}^2 \, h_{ij}$. Finally the explicit second-order contributions are:
\begin{eqnarray}
\delta^{(2)} {\mathcal Z}_{00} &=& - \frac{1}{4} h_{k\ell}^{\,\prime} \, h^{k\ell\,\,\prime} + \frac{{\mathcal H}}{2} h_{k\ell}^{\prime}\, h^{k\ell},
\label{twoBa}\\
\delta^{(2)} {\mathcal Z}_{ij} &=& - \frac{{\mathcal H}}{2} h_{k\ell}^{\prime}\, h^{k\ell} \,\,\delta_{ij} - \frac{1}{4} \biggl[ h_{i}^{\,\,k\,\,\prime}\,\, h_{k\, j}^{\prime} + h_{j}^{\,\,k\,\,\prime}\,\, h_{k\, i}^{\prime}\biggr]
\nonumber\\
&-& \frac{1}{4} \biggl[ \partial_{\ell} \,h_{i}^{\,\, k} + \partial_{i} h_{\ell}^{\,\, k} - \partial^{k} \,h_{\ell \,i} \biggr] \biggl[ \partial_{k} \,\,h_{j}^{\,\, \ell} + \partial_{j} \,\,h_{k}^{\,\, \ell} - \partial^{\ell} \,\,h_{k \,j} \biggr].
\label{twoBb}
\end{eqnarray}
Inserting Eqs. (\ref{twoBa}) and (\ref{twoBb}) into Eq. (\ref{twoB}) the effective action of the relic gravitons is:
\begin{equation}
S_{t} = \frac{1}{8 \ell_{P}^2} \int d^{4} x \sqrt{ - \overline{g}} \,\, \overline{g}^{\alpha\beta} \, \, \partial_{\alpha} h_{ij} \, \partial_{\beta} h^{ij}.
\label{threeB}
\end{equation}
The possible presence of background sources does not change the result of Eq. (\ref{threeB}).
In fact $\delta^{(2)}_{t} S$ must always be evaluated by imposing the validity of the background evolution
and the tensor modes decouple from the matter fields at least if the anisotropic stress of the sources
vanishes. Since the effective action of the relic gravitons in a conformally flat metric is given by Eq. (\ref{threeB}),
their energy-momentum pseudo-tensor can be introduced from the functional derivative
of $S_{t}$ with respect to $\overline{g}_{\mu\nu}$ by considering $h_{ij}$ and $\overline{g}_{\mu\nu}$ as
independent variables:
\begin{equation}
\delta S_{t} = \frac{1}{2} \int d^{4} x\,\, \sqrt{-\overline{g}} \,\, T^{(gw)}_{\mu\nu} \,\,\delta \overline{g}^{\mu\nu}.
\label{fourB}
\end{equation}
From Eq. (\ref{threeB}) the explicit form of Eq. (\ref{fourB}) becomes:
\begin{equation}
{\mathcal F}_{\mu\nu} = \frac{1}{4 \ell_{\mathrm{P}}^2} \biggl[ \partial_{\mu} h_{i j} \,\,\partial_{\nu} h^{i j}
- \frac{1}{2} \overline{g}_{\mu \nu} \,\,\biggl(\overline{g}^{\alpha\beta}\, \partial_{\alpha} h_{ij} \,\,\partial_{\beta} h^{ij} \biggr)\biggr],
\label{fiveB}
\end{equation}
where we used the notation ${\mathcal F}_{\mu\nu} = T_{\mu\nu}^{(gw)}$ to distinguish Eq. (\ref{fiveB}) from the other proposals examined below. The indices of ${\mathcal F}_{\mu\nu}$
are raised and lowered with the help of the background metric (i.e. ${\mathcal F}_{\mu}^{\,\,\nu} = \overline{g}^{\nu\alpha}\, {\mathcal F}_{\mu\alpha}$); the energy density and the pressure are:
\begin{eqnarray}
\rho^{(F)}_{gw} &=& \frac{1}{8 \ell_{\mathrm{P}}^2 a^2} \biggl[ \partial_{\tau} h_{k \ell}\, \partial_{\tau}h^{k \ell} + \partial_{m} h_{k\ell} \partial^{m} h^{k\ell}\biggr],
\label{rhoF}\\
p^{(F)}_{gw} &=& \frac{1}{8 \ell_{\mathrm{P}}^2 a^2} \biggl[ \partial_{\tau} h_{k\ell}\partial_{\tau} h^{\,k\ell} - \frac{1}{3} \partial_{m} h_{k \ell} \,\partial^{m} h^{\,k\ell} \biggr].
\label{pF}
\end{eqnarray}
The associated anisotropic stress is traceless (i.e. $\Pi_{i}^{(F)\,\,i} =0$) and it is:
\begin{equation}
\Pi_{i}^{(F)\,\,j} = \frac{1}{4 \ell_{\mathrm{P}}^2 a^2} \biggl[ - \partial_{i} \,h_{k\ell} \,\,\partial^{j} \,h^{k\ell} + \frac{1}{3} \delta_{i}^{j} \,\,\partial_{m}\, h_{k \ell} \,\,\partial^{m} \,h^{k\ell} \biggr].
\label{anF}
\end{equation}
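As a quick consistency check of Eqs. (\ref{rhoF}) and (\ref{pF}), a single monochromatic mode deep inside the Hubble radius should obey the radiation equation of state $p^{(F)}_{gw} = \rho^{(F)}_{gw}/3$ once averaged over an oscillation, since then $\langle \partial_{\tau} h \,\partial_{\tau} h\rangle = \langle \partial_{m} h\, \partial^{m} h\rangle$. A minimal numerical sketch (the common prefactor $1/(8\ell_{\mathrm{P}}^2 a^2)$ cancels in the ratio):

```python
import numpy as np

A, k = 1.0, 5.0
phase = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)  # k*(tau - x) over one cycle

h_tau = -A * k * np.sin(phase)    # d_tau h for the traveling wave h = A cos(k(tau - x))
h_x = A * k * np.sin(phase)       # d_x h

rho = np.mean(h_tau**2 + h_x**2)            # Eq. (rhoF) up to the common prefactor
p = np.mean(h_tau**2 - h_x**2 / 3.0)        # Eq. (pF) up to the same prefactor
w = p / rho                                 # -> 1/3
```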
In terms of Eqs. (\ref{rhoF}), (\ref{pF}) and (\ref{anF}) the components of ${\mathcal F}_{\mu}^{\nu}$ are
\begin{eqnarray}
{\mathcal F}_{0}^{0} &=& \rho^{(F)}_{gw}, \qquad {\mathcal F}_{i}^{0} = S_{i}^{(F)}=
\frac{1}{4 \ell_{\mathrm{P}}^2 a^2} \partial_{\tau} h_{k\ell} \,\partial_{i} h^{k\ell},
\nonumber\\
{\mathcal F}_{i}^{j} &=& - p_{gw}^{(F)} \,\,\delta_{i}^{j} + \Pi_{i}^{(F)\,\,j},
\label{sixB}
\end{eqnarray}
where $ S_{i}^{(F)}$ denotes the energy flux. The energy density, the pressure and the energy flux
combine in the following identity
\begin{eqnarray}
\partial_{\tau} \rho^{(F)}_{gw} + 3 {\mathcal H} \biggl[\rho^{(F)}_{gw} + p^{(F)}_{gw} \biggr] = \frac{h^{k\ell\,\,\prime}}{4 \ell_{P}^2 a^2}
[ h_{k \ell}^{\prime\prime} + 2 {\mathcal H} h_{k\ell}^{\prime} - \nabla^2 h_{k \ell} ] + \vec{\nabla} \cdot \vec{S}^{(F)}.
\label{sevenB}
\end{eqnarray}
The first term at the right hand side of Eq. (\ref{sevenB}) vanishes because of the evolution of the
tensor amplitude following from the extremization of the action (\ref{threeB}) with respect to $h_{ij}$.
The second term at the right hand side of Eq. (\ref{sevenB}) is not vanishing, in general; but if we regard the energy flux as an operator constructed from the corresponding quantum fields, its expectation value over the vacuum is generally vanishing (see section \ref{sec3}).
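The cancellation in Eq. (\ref{sevenB}) can be verified explicitly for a mode in a radiation-dominated background ($a\propto \tau$), where the wave equation $h^{\prime\prime} + 2{\mathcal H} h^{\prime} - \nabla^2 h =0$ is solved exactly by $h(\tau) = j_{0}(k\tau)$. In the sketch below the energy density and the pressure are the spatially averaged single-mode expressions (overall normalization dropped), $h^{\prime\prime}$ is eliminated through the wave equation, and the identity closes to machine precision:

```python
import numpy as np
from scipy.special import spherical_jn

k = 10.0
tau = np.linspace(1.0, 20.0, 2001)
a, Hc = tau, 1.0 / tau                  # radiation era: a ~ tau, conformal Hubble rate

h = spherical_jn(0, k * tau)            # j0(k tau) solves h'' + 2 Hc h' + k^2 h = 0
hp = -k * spherical_jn(1, k * tau)      # d/dtau j0(k tau) = -k j1(k tau)
hpp = -2.0 * Hc * hp - k**2 * h         # h'' from the wave equation

# spatially averaged single-mode energy density and pressure (normalization dropped)
rho = (hp**2 + k**2 * h**2) / (8.0 * a**2)
p = (hp**2 - k**2 * h**2 / 3.0) / (8.0 * a**2)

drho = (2.0 * hp * hpp + 2.0 * k**2 * h * hp) / (8.0 * a**2) - 2.0 * Hc * rho
residual = drho + 3.0 * Hc * (rho + p)  # left-hand side of Eq. (sevenB), flux averaged out
```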
\subsection{The second-order variation of the Einstein tensor}
The Landau-Lifshitz strategy is based on the
analysis of the nonlinear corrections to the Einstein tensor consisting, to lowest order,
of quadratic combinations of the tensor amplitude $h_{ij}$. While the derivation of Eq. (\ref{fiveB}) does not require the systematic use of
the evolution of the tensor amplitude, the opposite is true in the Landau-Lifshitz framework
where the energy-momentum pseudo-tensor ${\mathcal L}_{\mu}^{\nu}$ can be expressed as:
\begin{equation}
\ell_{\rm P}^2 {\mathcal L}_{\mu}^{\,\,\,\nu} = - \delta^{(2)}_{t} {\mathcal G}_{\mu}^{\,\,\,\nu}.
\label{oneC}
\end{equation}
Furthermore, since the Bianchi identity $\nabla_{\mu} {\cal G}_{\nu}^{\mu}=0$
must be valid to all orders, we must also demand that $\delta_{\rm t}^{(2)} ( \nabla_{\mu} {\cal G}^{\mu}_{\nu}) =0$
ultimately leading to a relation analog to Eq. (\ref{sevenB}). From the second-order fluctuations
of the Einstein tensor the energy density, the pressure and the anisotropic stress are:
\begin{eqnarray}
\rho_{gw}^{(L)} &=& \frac{1}{a^2 \ell_{\rm P}^2} \biggl[ {\mathcal H} \,
(\partial_{\tau}h_{k\ell })\, h^{k\ell} + \frac{1}{8} ( \partial_{m} h_{k\ell}\,\,\partial^{m} h^{k\ell} +
\partial_{\tau} h_{k\ell}\,\, \partial_{\tau} h^{k\ell})\biggr],
\label{rhoL}\\
p_{gw}^{(L)} &=& - \frac{1}{24 a^2 \ell_{\rm P}^2}\biggl[ 5 \,\partial_{\tau}h_{k\ell}\,\partial_{\tau}h^{k\ell} - 7\,\,
\partial_{m} h_{k\ell}\, \partial^{m} h^{k\ell} \biggr],
\label{pL}\\
\Pi_{i}^{(L)\,\,j} &=&
\frac{1}{a^2 \ell_{P}^2} \biggl\{ \frac{1}{6} \biggl[ \partial_{\tau}\,h_{k\ell}\, \partial_{\tau}\,h^{k\ell} -
\frac{1}{2} \partial_{m}\, h_{k\ell} \,\,\partial^{m}\, h^{k\ell} \biggr] \delta_{i}^{j}
+ \frac{1}{2} \partial_{m} \,h_{\ell i} \,\,\partial^{m}\, h^{\ell j}
\nonumber\\
&-& \frac{1}{4} \partial_{i}\, h_{k\ell} \,\,\partial^{j} \,h^{k\ell}
- \frac{1}{2} \partial_{\tau}\,h_{k i}\,\, \partial_{\tau}\, h^{k j} \biggr\},
\label{fourC}
\end{eqnarray}
with $\Pi_{i}^{(L)\,\,i} =0$. In analogy with Eq. (\ref{sixB}) the components of the energy-momentum pseudo-tensor ${\mathcal L}_{\mu}^{\nu}$
in the Landau-Lifshitz approach are:
\begin{eqnarray}
{\mathcal L}_{0}^{0} &=& \rho^{(L)}_{gw}, \qquad {\mathcal L}_{i}^{0} = S_{i}^{(L)} =
\frac{1}{4 \ell_{\mathrm{P}}^2 a^2} \partial_{\tau} \,h_{k\ell} \,\,\partial_{i}\, h^{k\ell},
\nonumber\\
{\mathcal L}_{i}^{j} &=& - p_{gw}^{(L)} \delta_{i}^{j} + \Pi_{i}^{(L)\,\,j}.
\label{fiveC}
\end{eqnarray}
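As a quick algebraic cross-check, the condition $\Pi_{i}^{(L)\,\,i}=0$ quoted after Eq. (\ref{fourC}) holds identically for any symmetric amplitude, as can be verified numerically; in the sketch below the arrays `hp` and `dh` are hypothetical random stand-ins for $\partial_{\tau} h_{k\ell}$ and $\partial_{m} h_{k\ell}$ (they are not solutions of the evolution equations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random data standing in for the tensor amplitude derivatives;
# only the symmetry h_{kl} = h_{lk} is imposed (indices are Euclidean).
hp = rng.normal(size=(3, 3))
hp = 0.5 * (hp + hp.T)                    # ~ partial_tau h_{kl}
dh = rng.normal(size=(3, 3, 3))
dh = 0.5 * (dh + dh.transpose(0, 2, 1))   # ~ partial_m h_{kl}

A = np.einsum('kl,kl->', hp, hp)          # partial_tau h_{kl} partial_tau h^{kl}
B = np.einsum('mkl,mkl->', dh, dh)        # partial_m h_{kl} partial^m h^{kl}

# Anisotropic stress of Eq. (fourC), dropping the overall 1/(a^2 lP^2)
Pi_L = ((A - 0.5 * B) / 6.0) * np.eye(3) \
       + 0.5 * np.einsum('mli,mlj->ij', dh, dh) \
       - 0.25 * np.einsum('ikl,jkl->ij', dh, dh) \
       - 0.5 * np.einsum('ki,kj->ij', hp, hp)
```

The trace cancels among the four contributions, so neither transversality nor tracelessness of $h_{k\ell}$ is needed for this particular identity.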
For a more direct comparison with Eqs. (\ref{rhoF}) and (\ref{pF})
various total derivatives (i.e. three-divergences of a quadratic combination of tensor amplitudes) have been excluded
from Eqs. (\ref{rhoL}) and (\ref{pL}). Consider, for instance, the second-order variations of the Ricci tensor and of the
Ricci scalar contributing to $\delta_{t}^{(2)} {\mathcal G}_{00}$ (and to ${\mathcal L}_{00}$):
\begin{eqnarray}
\delta_{t}^{(2)} R_{00} &=& \frac{1}{4} \partial_{\tau} h_{k\ell} \,\,\partial_{\tau} h^{k \ell} + \frac{1}{2} h^{k \ell} \,\,[ h_{k \ell}^{\prime\prime} + {\mathcal H} h_{k \ell}^{\prime} ],
\label{sixC}\\
\delta_{t}^{(2)} R &=& \frac{1}{a^2}\biggl[ \frac{3}{4} \partial_{\tau} h_{k\ell} \partial_{\tau} {h^{k\ell}} +
{\mathcal H} \partial_{\tau} h_{k\ell} h^{k\ell} - \frac{3}{4} \partial_{i}h^{k\ell} \partial^{i} h_{k\ell}\biggr]
+ \frac{1}{a^2} {\mathcal D}_{R},
\nonumber\\
{\mathcal D}_{R} &=& \partial_{i} \biggl[ h_{k \ell} \partial^{i} h^{k \ell} - \frac{1}{4} h_{k}^{ \ell} \partial_{\ell} h^{k}_{i} \biggr],
\label{sevenC}
\end{eqnarray}
where ${\mathcal D}_{R}$ is the total derivative term. According to the logic of this approach
the term $h_{k \ell}^{\,\prime\prime}$ must be replaced by $ - 2 {\mathcal H} h_{k \ell}^{\prime} + \nabla^2 h_{k \ell}$
which follows from the evolution equation of the tensor amplitude. As a result of this lengthy but straightforward procedure,
when Eqs. (\ref{sixC}) and (\ref{sevenC}) are combined in
${\mathcal L}_{00}$ a further total derivative term emerges so that the final result for the energy density is:
\begin{equation}
{\mathcal L}_{00} = \frac{1}{\ell_{P}^2} \biggl[ ({\mathcal H} \,
\partial_{\tau}h_{k\ell })\, h^{k\ell} + \frac{1}{8} ( \partial_{m} h_{k\ell}\,\partial^{m} h^{k\ell} +
\partial_{\tau} h_{k\ell} \,\partial_{\tau} h^{k\ell})\biggr] - \frac{1}{8 \ell_{P}^2} {\mathcal D}_{00},
\label{eightC}
\end{equation}
where $ {\mathcal D}_{00} = \partial_{i} [ h_{k \ell}\, \partial^{\ell} h^{k \, i} ]$.
All in all Eq. (\ref{eightC}) shows, as anticipated, that Eq. (\ref{rhoL}) is determined up to the
total derivative term (i.e. ${\mathcal D}_{00}$) and the same happens in the case of
the pressure terms whose associated total derivatives are qualitatively similar to ${\mathcal D}_{R}$ and ${\mathcal D}_{00}$
but will not be explicitly reported. Finally the explicit form of the condition $\delta_{t}^{(2)} ( \nabla_{\mu} {\cal G}^{\mu}_{\nu}) =0$
following from the validity of the Bianchi identity is:
\begin{eqnarray}
&&\partial_{\mu} \delta_{t}^{(2)} {\mathcal G}^{\,\,\mu}_{\nu} +
\delta_{t}^{(2)} \Gamma_{\mu\alpha}^{\,\,\,\,\,\mu} \overline{\mathcal G}_{\,\,\nu}^{\alpha} +
\overline{\Gamma}_{\mu\alpha}^{\,\,\,\,\,\,\mu} \delta^{(2)}_{t} {\mathcal G}_{\,\,\nu}^{\alpha} +
\delta_{ t}^{(1)} \Gamma_{\mu\alpha}^{\,\,\,\,\,\mu} \delta^{(1)}_{t} {\mathcal G}_{\nu}^{\,\,\alpha}
- \delta_{t}^{(2)} \Gamma_{\nu\alpha}^{\,\,\,\,\,\beta} \overline{{\mathcal G}}_{\beta}^{\,\,\alpha}
\nonumber\\
&& - \overline{\Gamma}_{\nu\alpha}^{\,\,\,\,\,\,\,\beta} \delta_{t}^{(2)} {\mathcal G}_{\beta}^{\,\,\alpha} -
\delta_{ t}^{(1)} \Gamma_{\nu\alpha}^{\,\,\,\,\,\beta} \delta^{(1)}_{ t} {\mathcal G}_{\beta}^{\,\,\alpha}=0.
\label{nineC}
\end{eqnarray}
Equation (\ref{nineC}) implies some sort of conservation equation similar to Eq. (\ref{sevenB}); indeed, from
the energy density and the pressure defined in Eqs. (\ref{rhoL}) and (\ref{pL}), Eq. (\ref{nineC}) becomes,
after some algebra:
\begin{equation}
\partial_{\tau} \rho_{gw}^{(L)} + 3 {\mathcal H}\biggl[ \rho_{gw}^{(L)} + {\mathcal P}_{gw}^{(L)}\biggr]
= \frac{1}{ 4 \ell_{P}^2 a^2} [h_{k \ell}^{\prime} + 4 {\mathcal H} h_{k \ell} ][ h^{k\ell\,\prime\prime}
+ 2 {\mathcal H} h^{k \ell\,\prime} - \nabla^2 h^{k\ell} ] + \vec{\nabla} \cdot \vec{Q}^{(L)}.
\label{tenC}
\end{equation}
In Eq. (\ref{tenC}) the shifted pressure ${\mathcal P}_{gw}^{(L)}$ does not
coincide with $p_{gw}^{(L)}$ (see Eq. (\ref{pL})); the same comment holds for $\vec{Q}^{(L)}$ which differs from $\vec{S}^{(L)}$ introduced in Eq. (\ref{fiveC}). In explicit terms we have that the shifted pressure and the shifted vector are:
\begin{eqnarray}
{\mathcal P}_{gw}^{(L)} &=& p_{gw}^{(L)} + \frac{({\mathcal H}^2 - {\mathcal H}^{\prime})}{ 3 {\mathcal H} a^2\ell_{P}^2} (\partial_{\tau} h_{k\ell}) h^{k\ell},
\label{elevenC}\\
Q_{i}^{(L)} &=& \frac{1}{4 \ell_{P}^2 a^2} [h_{k\ell}^{\prime} + 4 {\mathcal H} h_{k \ell}] \,\,\partial_{i} h^{k \ell}.
\label{twelveC}
\end{eqnarray}
In comparison with $p_{gw}^{(L)}$ the value of ${\mathcal P}_{gw}^{(L)}$ is shifted
by the second-order fluctuations of the Christoffel connection:
\begin{equation}
{\mathcal P}_{gw}^{(L)} - p_{gw}^{(L)} = - \frac{2}{3 \,a^2 \, {\mathcal H}\, \ell_{P}^2} ({\mathcal H}^2 - {\mathcal H}^{\prime}) \delta_{t}^{(2)} \Gamma_{k0}^{\,\,\,\,\,k}, \qquad \delta_{t}^{(2)} \Gamma_{k0}^{\,\,\,\,\,k} = - \frac{1}{2} h_{k\ell} \partial_{\tau} h^{k \ell}.
\label{thirteenC}
\end{equation}
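The consistency between Eq. (\ref{elevenC}) and Eq. (\ref{thirteenC}) can be checked symbolically; in the sketch below the symbol `X` is a hypothetical placeholder for the scalar $h_{k\ell}\, \partial_{\tau} h^{k\ell}$:

```python
import sympy as sp

X, a, H, Hp, lP = sp.symbols('X a calH calHprime lP')

# delta_t^(2) Gamma_{k0}^k = -(1/2) h_{kl} partial_tau h^{kl}
dGamma = -sp.Rational(1, 2) * X

# Shift of Eq. (thirteenC) versus the shift read off from Eq. (elevenC)
shift_13C = -sp.Rational(2, 3) * (H**2 - Hp) / (a**2 * H * lP**2) * dGamma
shift_11C = (H**2 - Hp) / (3 * H * a**2 * lP**2) * X

diff = sp.simplify(shift_13C - shift_11C)
```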
The shifted pressure entering Eq. (\ref{tenC}) should be regarded as the true physical pressure
as it will emerge from the explicit examples of sections \ref{sec4} and \ref{sec5}.
The first term on the right-hand side of Eq. (\ref{tenC}) vanishes because of the evolution equation of the first-order amplitude.
The second term at the right-hand side of Eq. (\ref{tenC}) vanishes when averaged
over the quantum state of the relic gravitons (see the discussion in section \ref{sec3}).
\subsection{The covariant approach}
The covariant approach, in its original formulation, assumes the Brill-Hartle scheme \cite{four}, which implicitly
selects the frequencies exceeding the Hubble rate. In the covariant approach the metric is decomposed as
\begin{equation}
g_{\mu\nu} = \overline{g}_{\mu\nu} + \widetilde{h}_{\mu\nu}, \qquad u^{\mu} \widetilde{h}_{\mu\nu} =0, \qquad \overline{\nabla}_{\mu} \, \widetilde{h}^{\mu\nu} =0,\qquad \widetilde{h}_{\mu}^{\,\,\,\,\mu} =0,
\label{oneD}
\end{equation}
where $\overline{\nabla}_{\mu}$ denotes the covariant derivative with respect to the background metric
$\overline{g}_{\mu\nu}$; the indices of $\widetilde{h}_{\mu\nu}$ are raised and lowered
with the help of $\overline{g}_{\mu\nu}$. Within the approach of Eq. (\ref{oneD}) the
cosmological fluctuations correspond to
$u^{\mu} \widetilde{h}_{\mu\nu} =0$ where $u_{\mu}$ is the fluid four-velocity. In the case of a conformally flat background
geometry $\overline{g}_{\mu\nu} = a^2(\tau) \eta_{\mu\nu}$ the conditions $u^{\mu} \widetilde{h}_{\mu\nu} =0$ and
$ \overline{\nabla}_{\mu}\,\widetilde{h}^{\mu\nu} =0 $ imply $\widetilde{h}_{\mu} ^{\mu} =0$; if $a(\tau)$ is constant the three conditions
must all be separately imposed\footnote{By projecting the condition $\overline{\nabla}_{\mu} \widetilde{h}^{\,\mu\nu}=0$ along $u_{\nu}$ we obtain, for a cosmological background with flat spatial sections, that $(\overline{\nabla}_{\mu} \widetilde{h}^{\,\mu\nu})u_{\nu}= H (g_{\alpha\beta} - u_{\alpha} u_{\beta}) \widetilde{h}^{\alpha\beta}$ where $H$ is the Hubble rate. If we then impose, according to Eq. (\ref{oneD}) that $u^{\mu} \widetilde{h}_{\mu\nu} =0$, the condition
$(\overline{\nabla}_{\mu} \widetilde{h}^{\,\mu\nu}) u_{\nu} =0$ also demands $\widetilde{h}_{\mu}^{\,\,\mu} =0$ {\em provided} $H\neq 0$ [i.e. $a(\tau)$ must {\em not} be constant].}. The tensor amplitude $\widetilde{h}_{\mu\nu}$ of Eq. (\ref{oneD}) is related to the tensor amplitude $h_{ij}$
of Eqs. (\ref{oneB})--(\ref{oneBB}) as
\begin{equation}
\widetilde{h}_{ij} = - a^2 h_{ij}, \qquad \widetilde{h}_{0\mu} =0, \qquad \partial_{i} \widetilde{h}^{ij} =0, \qquad \widetilde{h}_{i}^{\,\,i} =0.
\label{twoD}
\end{equation}
Equation (\ref{twoD}) implies the conditions of Eq. (\ref{oneD}) but while the indices of $\widetilde{h}_{ij}$ are raised
and lowered with the help of the background metric, the indices of $h_{ij}$ are all Euclidean.
Within the covariant approach the energy-momentum tensor following from the Brill-Hartle
average is given by:
\begin{equation}
{\mathcal B}_{\mu\nu} = \frac{1}{4 \ell_{P}^2} \overline{\nabla}_{\mu} \widetilde{h}_{\alpha\beta} \, \overline{\nabla}_{\nu} \,
\widetilde{h}^{\alpha\beta}.
\label{threeD}
\end{equation}
To compare the covariant approach with the other proposals we insert Eq. (\ref{twoD})
inside Eq. (\ref{threeD}) and the result for the various components of ${\mathcal B}_{\mu}^{\,\,\,\nu}$ is:
\begin{eqnarray}
{\mathcal B}_{0}^{\,\,0} &=& \rho_{gw}^{(B)}, \qquad {\mathcal B}_{i}^{\,\,0} = S_{i}^{(B)} = \frac{1}{4 \ell_{P}^2 a^2} \, h_{k\ell}^{\prime}\,\,\partial_{i} \,h^{k \ell},
\nonumber\\
{\mathcal B}_{i}^{\,\,j} &=& - p_{gw}^{(B)} \delta_{i}^{\,\, j} + \Pi_{i}^{(B)\,\, j},
\label{fourD}
\end{eqnarray}
where the energy density, the pressure and the anisotropic stress are:
\begin{eqnarray}
\rho_{gw}^{(B)} &=& \frac{1}{4 \ell_{P}^2 a^2} \partial_{\tau} h_{k \ell} \, \partial_{\tau} h^{k \ell},
\label{rhoB}\\
p_{gw}^{(B)} &=& \frac{1}{12 \ell_{P}^2 a^2} \partial_{m} h_{k \ell} \, \partial^{m} h^{k \ell},
\label{pB}\\
\Pi_{i}^{(B)\,\,j} &=& \frac{1}{4 \ell_{P}^2 a^2} \biggl[ - \partial_{i} \,h_{k \ell} \,\,\partial^{j} \,h^{k \ell}
+ \frac{1}{3} \delta_{i}^{\,\, j}\, \partial_{m} \, h_{k \ell}\,\, \partial^{m} \,h^{k \ell} \biggr].
\label{anB}
\end{eqnarray}
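The tracelessness of the anisotropic stress in Eq. (\ref{anB}) is again an algebraic identity; a minimal numerical sketch (the array `dh` is a hypothetical stand-in for $\partial_{m} h_{k\ell}$) reads:

```python
import numpy as np

rng = np.random.default_rng(1)
dh = rng.normal(size=(3, 3, 3))
dh = 0.5 * (dh + dh.transpose(0, 2, 1))   # symmetry in (k, l)

B = np.einsum('mkl,mkl->', dh, dh)        # partial_m h_{kl} partial^m h^{kl}

# Eq. (anB) without the prefactor 1/(4 lP^2 a^2), irrelevant for the trace
Pi_B = -np.einsum('ikl,jkl->ij', dh, dh) + (B / 3.0) * np.eye(3)
```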
Equation (\ref{threeD}) is the result of an averaging procedure which excludes by construction the long wavelengths and can therefore be applied only inside the Hubble radius.
If Eq. (\ref{threeD}) is blindly applied
beyond the Hubble radius various ambiguities arise and they will be discussed in sections \ref{sec4} and \ref{sec5}.
The covariant approach can, however, be extended to typical wavelengths larger than the Hubble radius
by employing a different averaging scheme. In this case the results applicable to cosmological background geometries
will coincide exactly with Eq. (\ref{fiveB}) (see the discussion at the end of section \ref{sec3}).
\subsection{Mutual relations between the different prescriptions}
The expressions of the energy density obtained in the cases examined above do not coincide in general terms. To appreciate
the differences it is useful to write their mutual relations:
\begin{eqnarray}
\rho_{gw}^{(F)} &=& \rho_{gw}^{(L)} - \frac{{\mathcal H} h_{k\ell}^{\prime} \, h^{k \ell}}{a^2 \ell_{P}^2},
\label{FtoL}\\
\rho_{gw}^{(L)} &=& \frac{\rho_{gw}^{(B)}}{2} + \frac{{\mathcal H} \, h_{k\ell}^{\prime} \, h^{k \ell}}{a^2 \ell_{P}^2} + \frac{1}{8 \ell_{P}^2 a^2} \partial_{m} \,h_{k\ell} \,\,\partial^{m} \,h^{k\ell},
\label{LtoB}\\
\rho_{gw}^{(B)} &=& 2 \biggl[\rho_{gw}^{(F)} - \frac{1}{8 \ell_{P}^2 a^2} \partial_{m} \,h_{k\ell} \,\,\partial^{m} \,h^{k\ell}\biggr].
\label{BtoF}
\end{eqnarray}
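Since Eqs. (\ref{FtoL})--(\ref{BtoF}) are linear relations among the three quadratic invariants $\partial_{\tau}h_{k\ell}\,\partial_{\tau}h^{k\ell}$, $\partial_{m}h_{k\ell}\,\partial^{m}h^{k\ell}$ and ${\mathcal H}\,\partial_{\tau}h_{k\ell}\, h^{k\ell}$, their mutual consistency can be verified symbolically; the expression for $\rho_{gw}^{(F)}$ used below is the one implied by Eq. (\ref{FtoL}) together with Eq. (\ref{rhoL}):

```python
import sympy as sp

# A ~ h' h', B ~ grad h grad h, C ~ calH h' h (all per 1/(a^2 lP^2))
A, B, C = sp.symbols('A B C')

rhoF = (A + B) / 8           # implied by Eqs. (FtoL) and (rhoL)
rhoL = C + (A + B) / 8       # Eq. (rhoL)
rhoB = A / 4                 # Eq. (rhoB)

ok_FtoL = sp.simplify(rhoF - (rhoL - C)) == 0
ok_LtoB = sp.simplify(rhoL - (rhoB / 2 + C + B / 8)) == 0
ok_BtoF = sp.simplify(rhoB - 2 * (rhoF - B / 8)) == 0
```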
The expressions of $\rho_{gw}^{(F)}$ and
$\rho_{gw}^{(L)}$ are very similar but they differ by a crucial term containing ${\mathcal H}$.
A similar remark holds for the relation between $\rho_{gw}^{(B)}$ and $\rho_{gw}^{(F)}$ since they
differ by terms that are negligible beyond the Hubble radius, i.e. when the frequency of the graviton is smaller than the rate
of variation of the background. The corresponding pressures obey qualitatively similar
relations that can be easily deduced from the results given above.
\renewcommand{\theequation}{3.\arabic{equation}}
\section{The quantum averaging}
\setcounter{equation}{0}
\label{sec3}
The classical and quantum fluctuations of cosmological backgrounds obey the same evolution equations,
but while classical fluctuations are given once and for all (on a given space-like hypersurface) quantum fluctuations
keep on reappearing all the time. If the kinematical and dynamical problems of a decelerated cosmology are fixed
by means of a phase of accelerated expansion lasting (at least) $65$ $e$-folds, the classical fluctuations
are exponentially suppressed during inflation \cite{twentythreea,twentythreeb,twentythreec,twentythreed}
(see also \cite{twentythreee,twentythreef,twentythreeg}). At a purely classical level it is then plausible to conclude that
any finite portion of the event horizon gradually loses the memory of an initially imposed anisotropy
or inhomogeneity so that the metric attains the observed regularity regardless of the initial boundary conditions.
Since in this situation the power spectra of the scalar and tensor modes of the geometry follow
from the quantum mechanical expectation values of two field operators evaluated
at the same time (but at different spatial locations), it is also very reasonable to
apply the quantum averaging for the estimates of the expectation values of the
different components of the pseudo-tensors derived above.
In the quantum description the classical fields and their
derivatives are promoted to the status of quantum mechanical operators i.e.
$h_{ij} \to \hat{h}_{ij}$ and $ h_{ij}^{\,\prime} \to \hat{h}_{ij}^{\, \prime}$:
\begin{eqnarray}
\hat{h}_{ij}(\vec{x}, \tau) &=& \frac{\sqrt{2} \, \ell_{P}}{(2\pi)^{3/2}} \sum_{\lambda} \int d^{3} k \,\, e^{(\lambda)}_{i\, j}(\hat{k})\biggl[ F_{k\,\lambda} \hat{a}_{\vec{k},\, \lambda} e^{- i \vec{k} \cdot\vec{x}}+ F_{k\,\lambda}^{*} \hat{a}_{\vec{k},\, \lambda}^{\dagger} e^{i \vec{k} \cdot\vec{x}} \biggr],
\label{oneE}\\
\hat{h}_{ij}^{\,\,\prime}(\vec{x}, \tau) &=& \frac{\sqrt{2} \, \ell_{P}}{(2\pi)^{3/2}} \sum_{\lambda} \int d^{3} k \,\, e^{(\lambda)}_{i\, j}(\hat{k})\biggl[ G_{k\,\lambda} \hat{a}_{\vec{k},\, \lambda} e^{- i \vec{k} \cdot\vec{x}}+ G_{k\,\lambda}^{*} \hat{a}_{\vec{k},\, \lambda}^{\dagger} e^{ i \vec{k} \cdot\vec{x}} \biggr],
\label{twoE}
\end{eqnarray}
where $[\hat{a}_{\vec{k},\, \lambda}, \, \hat{a}^{\dagger}_{\vec{p},\, \lambda^{\prime}}] = \delta_{\lambda\, \lambda^{\prime}} \delta^{(3)}
(\vec{k} - \vec{p})$;
$e_{ij}^{(\lambda)}(\hat{k})$ (with $\lambda = \oplus, \, \otimes$) accounts for the two tensor polarizations\footnote{If we define a triplet of mutually orthogonal unit vectors
$\hat{m}$, $\hat{n}$ and $\hat{k}$ we can set the direction
of propagation of the wave along $\hat{k}$ and, in this case,
the two tensor polarizations are
$e_{ij}^{\oplus}= (\hat{m}_{i} \, \hat{m}_{j} - \hat{n}_{i} \, \hat{n}_{j})$ and
$e_{ij}^{\otimes}= (\hat{m}_{i} \, \hat{n}_{j} + \hat{n}_{i} \, \hat{m}_{j})$.} and
the mode functions (for each separate polarization) obey
\begin{eqnarray}
&& G_{k} = F_{k}^{\,\,\prime}, \qquad G_{k}^{\,\,\prime} = - k^2 F_{k} - 2 {\mathcal H} G_{k},
\nonumber\\
&& F_{k}(\tau) G_{k}^{*}(\tau) - F_{k}^{*}(\tau) G_{k}(\tau) = \frac{i}{a^2(\tau)},
\label{threeE}
\end{eqnarray}
where $ F_{k, \oplus} = F_{k,\otimes} = F_{k}$ and $G_{k, \oplus} = G_{k,\otimes} = G_{k}$ in the unpolarized case treated here.
The sum over the polarizations is given by
$\sum_{\lambda} e^{(\lambda)}_{i\, j}(\hat{k}) \, e^{(\lambda)}_{m\,n}(\hat{k}) = 4 {\mathcal S}_{i\,j\,m\, n}(\hat{k})$
and ${\mathcal S}_{i\, j\, m\, n}$ is defined as
\begin{equation}
{\mathcal S}_{i\,j\,m\, n} = \frac{1}{4} \biggl[ p_{i\,m}(\hat{k}) p_{j\, n}(\hat{k}) + p_{i\, n}(\hat{k}) p_{j\, m}(\hat{k}) -
p_{i\,j}(\hat{k}) p_{m\, n}(\hat{k}) \biggr],
\label{Sdef}
\end{equation}
where $p_{ij}(\hat{k}) =[ \delta_{ij} - \hat{k}_{i} \hat{k}_{j}]$ is the transverse projector. The field operators
(\ref{oneE}) and (\ref{twoE}) consist of a positive and of a negative frequency part, i.e.
$\hat{h}_{ij}(x) = \hat{h}^{(+)}_{ij}(x) + \hat{h}^{(-)}_{ij}(x)$ with $ \hat{h}^{(+)\,\,\dagger}_{ij}(x) = \hat{h}^{(-)}_{ij}(x)$.
If $| \mathrm{vac} \rangle$ is the state that minimizes the tensor Hamiltonian when all the modes are inside
the effective horizon (for instance at the onset of inflation) the operator $\hat{h}^{(+)}_{ij}(x)$ annihilates
the vacuum [i.e. $ \hat{h}^{(+)}_{ij}(x) \,| \mathrm{vac} \rangle=0$ and $\langle \mathrm{vac} | \,\hat{h}^{(-)}_{ij}(x) =0$].
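The polarization sum quoted above can be verified numerically; the sketch below (an illustration with $\hat{k}$ along the third axis and $\hat{m}$, $\hat{n}$ along the first two) checks component by component, for the standard linear polarizations $e^{\oplus}_{ij} = \hat{m}_{i}\hat{m}_{j} - \hat{n}_{i}\hat{n}_{j}$ and $e^{\otimes}_{ij} = \hat{m}_{i}\hat{n}_{j} + \hat{n}_{i}\hat{m}_{j}$, that $\sum_{\lambda} e^{(\lambda)}_{ij}\, e^{(\lambda)}_{mn} = 4\, {\mathcal S}_{ijmn}$:

```python
import numpy as np

m = np.array([1.0, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])   # direction of propagation

e_plus = np.outer(m, m) - np.outer(n, n)    # e^(+)_{ij}
e_cross = np.outer(m, n) + np.outer(n, m)   # e^(x)_{ij}

p = np.eye(3) - np.outer(k, k)              # transverse projector p_{ij}
S = 0.25 * (np.einsum('im,jn->ijmn', p, p)
            + np.einsum('in,jm->ijmn', p, p)
            - np.einsum('ij,mn->ijmn', p, p))

pol_sum = (np.einsum('ij,mn->ijmn', e_plus, e_plus)
           + np.einsum('ij,mn->ijmn', e_cross, e_cross))
```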
The two-point functions associated with $\hat{h}_{ij}$ and $\hat{h}^{\,\,\prime}_{ij}$ are therefore given by:
\begin{eqnarray}
\langle \mathrm{vac} | \hat{h}_{ij}(\vec{x}, \tau) \hat{h}_{ij}(\vec{x} + \vec{r}, \tau) | \mathrm{vac}\rangle
&=& \int \,d \ln{k} \,\,P_{T}(k,\tau) \,\,j_{0}(k r),
\label{fourEa}\\
\langle \mathrm{vac} | \hat{h}^{\,\,\prime}_{ij}(\vec{x}, \tau) \hat{h}^{\,\,\prime}_{ij}(\vec{x} + \vec{r}, \tau) | \mathrm{vac}\rangle
&=& \int \,d \ln{k} \,\,Q_{T}(k,\tau) \,\,j_{0}(k r),
\label{fourEb}
\end{eqnarray}
where $j_{0}(k r)$ is the spherical Bessel function of zeroth order \cite{twentyfour,twentyfive}; $P_{T}(k,\tau)$ is the standard tensor power spectrum while $Q_{T}(k,\tau)$ is usually not discussed
but its presence is essential in the present context:
\begin{equation}
P_{T}(k,\tau) = \frac{4 \ell_{P}^2}{\pi^2} \,\,k^3 \,\,\bigl| F_{k}(\tau) \bigr|^2, \qquad Q_{T}(k,\tau) = \frac{4 \ell_{P}^2}{\pi^2} \,\,k^3 \,\,\bigl| G_{k}(\tau) \bigr|^2.
\label{fiveE}
\end{equation}
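As a concrete illustration of Eqs. (\ref{threeE}) and (\ref{fiveE}), both the Wronskian normalization and the super-Hubble limit of $P_{T}(k,\tau)$ can be checked on the exact de Sitter mode functions; the choice $a(\tau) = -1/(H\tau)$ with the Bunch-Davies mode $F_{k} = f_{k}/a$, $f_{k} = e^{-ik\tau}(1 - i/(k\tau))/\sqrt{2k}$, is an illustrative assumption:

```python
import numpy as np

def ds_modes(k, tau, H=1.0):
    """De Sitter scale factor and tensor mode functions (tau < 0)."""
    a = -1.0 / (H * tau)
    x = k * tau
    f = np.exp(-1j * x) / np.sqrt(2 * k) * (1 - 1j / x)
    dfdtau = k * np.exp(-1j * x) / np.sqrt(2 * k) * (-1j - 1 / x + 1j / x**2)
    calH = -1.0 / tau
    F = f / a
    G = (dfdtau - calH * f) / a      # G_k = F_k'
    return a, F, G

a, F, G = ds_modes(2.5, -3.7)
wronskian = F * np.conjugate(G) - np.conjugate(F) * G    # expected: i/a^2

# Super-Hubble limit: k^3 |F_k|^2 -> H^2/2, independent of k
spec = [kk**3 * abs(ds_modes(kk, -1.0e-4)[1])**2 for kk in (1.0, 2.0)]
```

Up to the prefactor $4\ell_{P}^2/\pi^2$, the last two numbers reproduce the standard scale-invariant de Sitter limit of the tensor power spectrum.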
In Eqs. (\ref{fourEa})--(\ref{fourEb}) the expectation values have been computed over the vacuum state. The averages of the field operators can also be obtained directly in Fourier space
from the corresponding Fourier transforms; by representing the quantum operators
in Fourier space as
\begin{equation}
\hat{h}_{ij}(\vec{k},\tau) = \frac{1}{(2\pi)^{3/2}} \int d^{3} x\, e^{i \vec{k}\cdot\vec{x}} \, \hat{h}_{i\, j}(\vec{x}, \tau),\qquad
\hat{h}^{\,\,\prime}_{ij}(\vec{k},\tau) = \frac{1}{(2\pi)^{3/2}} \int d^{3} x\, e^{i \vec{k}\cdot\vec{x}} \, \hat{h}_{i\, j}^{\,\,\prime}(\vec{x}, \tau),
\label{eightE}
\end{equation}
the explicit expressions of $\hat{h}_{ij}(\vec{k},\tau)$ and of $\hat{h}^{\,\,\prime}_{ij}(\vec{k},\tau) $ follow
from Eqs. (\ref{oneE}) and (\ref{twoE}) so that the corresponding expectation values are
\begin{eqnarray}
\langle \hat{h}_{i\, j}(\vec{k},\,\tau) \hat{h}_{m\, n}(\vec{p},\,\tau) \rangle &=& \frac{2 \pi^2}{k^3} \,\, P_{T}(k,\tau) \,
\delta^{(3)}(\vec{k} + \vec{p}) \,\,{\mathcal S}_{i\,j\,m\,n},
\label{sixE}\\
\langle \hat{h}_{i\, j}^{\, \prime}(\vec{k},\,\tau) \hat{h}_{m\, n}^{\, \prime}(\vec{p},\,\tau) \rangle &=& \frac{2 \pi^2}{k^3} \,\, Q_{T}(k,\tau) \, \delta^{(3)}(\vec{k} + \vec{p}) \,\,{\mathcal S}_{i\,j\,m\,n}.
\label{sevenE}
\end{eqnarray}
The expressions of Eqs. (\ref{sixE}) and (\ref{sevenE}) hold for quantum mechanical operators but
can be easily viewed as classical expectation values of isotropic random fields, as we shall discuss in section \ref{sec5}.
The expectation values of the energy density and of the pressures
in the different parametrizations examined above will be computed in the remaining part of this section by using the following notations\footnote{In the Landau-Lifshitz parametrization we have to add also the spectral density corresponding to the shifted pressure ${\mathcal P}_{gw}^{(L)}(k,\tau)$ (see Eq. (\ref{sixG})) that
is, in some sense, the true pressure term arising from the second order fluctuation of the Bianchi identity.
The contribution of the shifted pressure has been sometimes interpreted as an effective bulk viscosity of the relic gravitons \cite{ten} but this suggestion shall not be pursued here.}:
\begin{equation}
\overline{\rho}_{gw}^{(X)} = \langle \mathrm{vac} | \hat{\rho}_{gw}^{(X)} | \mathrm{vac} \rangle, \qquad \overline{p}_{gw}^{(X)} = \langle \mathrm{vac} | \hat{p}_{gw}^{(X)} | \mathrm{vac} \rangle,
\label{fourXX}
\end{equation}
where $X =F,\, L,\,B$. From Eq. (\ref{fourXX}) it is also practical to introduce the
spectral energy density and the spectral pressure defined as
\begin{equation}
\rho^{(X)}_{gw}(k,\tau) = \frac{d \overline{\rho}^{(X)}_{gw}}{d \ln{k}}, \qquad p^{(X)}_{gw}(k,\tau) = \frac{d \overline{p}^{(X)}_{gw}}{d \ln{k}}.
\label{oneL}
\end{equation}
The quantum averaging implies a correct ordering of the operators: for instance the quantum version of the classical expression
$2 {\mathcal H} \partial_{\tau} h_{k\ell}\, h^{k\ell}$ reads, as usual,
${\mathcal H}( \partial_{\tau} \hat{h}_{k\ell}\, \hat{h}^{k\ell} + \hat{h}_{k\ell}\, \partial_{\tau}\hat{h}^{k\ell})$.
The choice of $|\mathrm{vac} \rangle$ is not mandatory: if the vacuum state is replaced by some other initial state the present considerations apply in the same way provided the {\em same} initial state is used for all the expectation values in the various descriptions.
\subsection{The effective energy momentum pseudo-tensor}
Irrespective of the specific parametrization of the energy-momentum pseudo-tensor, it is a general
property of the quantum mechanical expectation values that the averages of the anisotropic stresses
and of the total derivatives are all vanishing:
\begin{eqnarray}
&& \langle \, \mathrm{vac} | \hat{\Pi}_{i}^{(F)\, j} | \mathrm{vac} \rangle = \langle \, \mathrm{vac} | \hat{\Pi}_{i}^{(L)\, j} | \mathrm{vac} \rangle = \langle \, \mathrm{vac} | \hat{\Pi}_{i}^{(B)\, j} | \mathrm{vac} \rangle =0,
\label{oneF}\\
&& \langle \partial_{i} \biggl[ \hat{h}_{k\,\ell} \partial^{i} \hat{h}^{k\,\ell}\biggr] \rangle = \langle \partial_{i} \biggl[ \hat{h}_{k\,\ell} \partial^{\ell} \hat{h}^{k\,i}\biggr] \rangle =0.
\label{twoF}
\end{eqnarray}
Similarly the expectation values of the three-divergences of the energy fluxes vanish, i.e.
\begin{equation}
\langle \vec{\nabla} \cdot \vec{S}^{(F)} \rangle =\, \langle \vec{\nabla} \cdot \vec{S}^{(L)} \rangle =\,\langle \vec{\nabla} \cdot \vec{Q}^{(L)} \rangle=\,
\langle \vec{\nabla} \cdot \vec{S}^{(B)} \rangle =\,\,0.
\label{threeF}
\end{equation}
While the results of Eqs. (\ref{oneF}) and (\ref{twoF})--(\ref{threeF}) hold for all the cases examined above,
the spectral energy densities and the spectral pressures of Eq. (\ref{oneL}) are all different.
In the case $X=F$ (see Eq. (\ref{oneL})) the spectral energy density and the spectral pressure become:
\begin{eqnarray}
\rho_{gw}^{(F)}(k,\tau) &=& \frac{d \overline{\rho}_{gw}^{(F)}}{d \ln{k}} = \frac{1}{8 \ell_{P}^2 a^2} \biggl[ Q_{T}(k,\tau) + k^2 P_{T}(k,\tau) \biggr],
\label{sixF}\\
p_{gw}^{(F)}(k,\tau) &=& \frac{d \overline{p}_{gw}^{(F)}}{d \ln{k}} = \frac{1}{8 \ell_{P}^2 a^2} \biggl[ Q_{T}(k,\tau) - \frac{k^2}{3} P_{T}(k,\tau) \biggr].
\label{sevenF}
\end{eqnarray}
Recalling the explicit form of the power spectra given in Eq. (\ref{fiveE}), $\rho_{gw}^{(F)}(k,\tau)$ and $p_{gw}^{(F)}(k,\tau)$
can also be expressed as:
\begin{eqnarray}
\rho_{gw}^{(F)}(k,\tau) &=& \frac{k^3}{2 \pi^2 a^2} \biggl[ k^2 \bigl| F_{k}(\tau)\bigr|^2 + \bigl| G_{k}(\tau)\bigr|^2 \biggr],
\label{eightF}\\
p_{gw}^{(F)}(k,\tau) &=& \frac{k^3}{2 \pi^2 a^2} \biggl[ - \frac{k^2}{3} \bigl| F_{k}(\tau)\bigr|^2 + \bigl| G_{k}(\tau)\bigr|^2 \biggr].
\label{nineF}
\end{eqnarray}
When the typical frequencies of the gravitons
are much larger than the rate of variation of the geometry, Eq. (\ref{threeE}) implies that
$|G_{k}(\tau)|^2 = [ k^2 + {\mathcal H}^2 + {\mathcal O}({\mathcal H}^4/k^2)] |F_{k}(\tau)|^2$;
therefore in the limit $k \gg {\mathcal H}$ the effective barotropic index computed from
Eqs. (\ref{eightF})--(\ref{nineF}) becomes:
\begin{equation}
\lim_{k\gg {\mathcal H}} \frac{p_{gw}^{(F)}(k,\tau)}{\rho_{gw}^{(F)}(k,\tau)} = \frac{1}{3} \biggl( 1 + \frac{{\mathcal H}^2}{k^2} \biggr).
\label{tenF}
\end{equation}
According to Eq. (\ref{tenF}), when the modes are inside the Hubble radius the barotropic index
of the relic gravitons coincides approximately with $1/3$ to leading order in ${\mathcal H}^2/k^2 \ll 1$ (i.e. $k \tau \gg 1$).
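This limit can be checked explicitly on the de Sitter mode functions (an illustrative choice of background): for $k|\tau| \gg 1$ the ratio constructed from Eqs. (\ref{eightF}) and (\ref{nineF}) reproduces $1/3$ up to corrections of relative order ${\mathcal H}^2/k^2$:

```python
import numpy as np

H, k, tau = 1.0, 100.0, -1.0     # k|tau| = 100, deep inside the Hubble radius
a = -1.0 / (H * tau)
x = k * tau
f = np.exp(-1j * x) / np.sqrt(2 * k) * (1 - 1j / x)
dfdtau = k * np.exp(-1j * x) / np.sqrt(2 * k) * (-1j - 1 / x + 1j / x**2)
calH = -1.0 / tau
F = f / a
G = (dfdtau - calH * f) / a      # G_k = F_k'

# Eqs. (eightF)-(nineF) up to the common prefactor k^3/(2 pi^2 a^2)
rho = k**2 * abs(F)**2 + abs(G)**2
p = -(k**2 / 3.0) * abs(F)**2 + abs(G)**2
w = p / rho
```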
In the opposite limit (i.e. $k^2/{\mathcal H}^2\ll 1$) the frequency of the waves is much smaller than the rate of variation
of the geometry and Eq. (\ref{threeE}) can be solved by iteration:
\begin{eqnarray}
F_{k}(\tau) &=& F_{k}(\tau_{ex}) + G_{k}(\tau_{ex}) \int_{\tau_{ex}}^{\tau} \frac{a_{ex}^2}{a^2(\tau_{1})} \, d\tau_{1}
- k^2 \int_{\tau_{ex}}^{\tau} \frac{ d\tau_{2}}{a^2(\tau_{2})} \int_{\tau_{ex}}^{\tau_{2}} F_{k}(\tau_{1}) \, d\tau_{1}
\label{elevenF}\\
G_{k}(\tau) &=& \biggl(\frac{a_{ex}}{a}\biggr)^2 G_{k}(\tau_{ex}) - \frac{k^2}{a^2} \int_{\tau_{ex}}^{\tau} F_{k}(\tau_{1}) d\tau_{1},
\label{twelveF}
\end{eqnarray}
where $\tau_{ex}$ is the conformal time corresponding to the exit of a given wavelength from the Hubble radius (i.e.
$k \tau_{ex} = {\mathcal O}(1)$).
From Eqs. (\ref{elevenF}) and (\ref{twelveF}) we have that in the limit $k \ll {\mathcal H}$
\begin{eqnarray}
\rho_{gw}^{(F)}(k,\tau) &=& \frac{k^3}{2 \pi^2 a^2} \biggl[ k^2 \bigl| F_{k}(\tau_{ex})\bigr|^2 + \biggl(\frac{a_{ex}}{a}\biggr)^4 \bigl| G_{k}(\tau_{ex})\bigr|^2 \biggr],
\label{thirteenF}\\
p_{gw}^{(F)}(k,\tau) &=& \frac{k^3}{2 \pi^2 a^2} \biggl[ - \frac{k^2}{3} \bigl| F_{k}(\tau_{ex})\bigr|^2 + \biggl(\frac{a_{ex}}{a}\biggr)^4 \bigl| G_{k}(\tau_{ex})\bigr|^2 \biggr].
\label{fourteenF}
\end{eqnarray}
If the background expands, the term proportional to $\bigl| G_{k}(\tau_{ex})\bigr|^2$ quickly becomes
negligible and the effective barotropic index goes to $-1/3$; conversely if the background contracts
the first term becomes subleading and the effective barotropic index tends asymptotically towards
$1$:
\begin{eqnarray}
w_{gw} &=& \frac{p_{gw}^{(F)}(k,\tau) }{\rho_{gw}^{(F)}(k,\tau)} \to - \frac{1}{3} , \qquad k^2 \bigl| F_{k}(\tau_{ex})\bigr|^2 \gg \biggl(\frac{a_{ex}}{a}\biggr)^4 \bigl| G_{k}(\tau_{ex})\bigr|^2,
\label{fifteenF}\\
w_{gw} &=& \frac{p_{gw}^{(F)}(k,\tau) }{\rho_{gw}^{(F)}(k,\tau)} \to 1 , \qquad k^2 \bigl| F_{k}(\tau_{ex})\bigr|^2 \ll \biggl(\frac{a_{ex}}{a}\biggr)^4 \bigl| G_{k}(\tau_{ex})\bigr|^2.
\label{sixteenF}
\end{eqnarray}
All in all we can then say that when the typical frequency of the gravitons exceeds the rate of variation of the geometry
(i.e. $k \gg {\mathcal H}$) the high-frequency gravitons behave as a perfect relativistic fluid and the barotropic index is
$1/3$. In the opposite
limit (i.e. $k \ll {\mathcal H}$) the effective barotropic index
becomes $-1/3$ if the background expands while it becomes $1$ if the background contracts.
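The expanding branch of this statement (i.e. $w_{gw} \to -1/3$) can be verified on the exact de Sitter modes used as an illustration; the contracting case of Eq. (\ref{sixteenF}) would require a different background model and is not checked here:

```python
import numpy as np

H, k, tau = 1.0, 0.01, -1.0      # k|tau| = 0.01, far outside the Hubble radius
a = -1.0 / (H * tau)
x = k * tau
f = np.exp(-1j * x) / np.sqrt(2 * k) * (1 - 1j / x)
dfdtau = k * np.exp(-1j * x) / np.sqrt(2 * k) * (-1j - 1 / x + 1j / x**2)
calH = -1.0 / tau
F = f / a
G = (dfdtau - calH * f) / a      # G_k = F_k'

rho = k**2 * abs(F)**2 + abs(G)**2            # Eq. (eightF) up to k^3/(2 pi^2 a^2)
p = -(k**2 / 3.0) * abs(F)**2 + abs(G)**2     # Eq. (nineF), same prefactor
w = p / rho
```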
Equation (\ref{sevenB}) can also be averaged term by term and the result will be an evolution
equation for the expectation values of the energy density and of the pressure, i.e.
\begin{equation}
\partial_{\tau} \overline{\rho}^{(F)}_{gw} + 3 {\mathcal H} [ \overline{\rho}^{(F)}_{gw} + \overline{p}^{(F)}_{gw} ] =0,
\label{seventeenF}
\end{equation}
where the contribution of the energy flux originally present in Eq. (\ref{sevenB})
disappears because of Eq. (\ref{threeF}).
\subsection{The Landau-Lifshitz pseudo-tensor}
The quantum averaging of the Landau-Lifshitz pseudo-tensor leads to the same
results as the effective energy-momentum tensor in the high-frequency limit, but the results
are sharply different when the frequency is smaller than the Hubble rate. Using the same
notations of Eq. (\ref{fourXX}) we have that, in this case, the spectral distributions
are:
\begin{eqnarray}
\rho_{gw}^{(L)}(k,\tau) &=& \frac{k^3}{2 \pi^2 a^2} \biggl[ k^2 \bigl| F_{k}(\tau)\bigr|^2 + \bigl|G_{k}(\tau)\bigr|^2 + 4 {\mathcal H} ( G_{k} \, F_{k}^{*} + G_{k}^{*} F_{k})\biggr],
\label{fourG}\\
p_{gw}^{(L)}(k,\tau) &=& \frac{k^3}{6 \pi^2 a^2} \biggl[ 7 k^2 \bigl| F_{k}(\tau)\bigr|^2 - 5 \bigl| G_{k}(\tau)\bigr|^2 \biggr],
\label{fiveG}\\
{\mathcal P}_{gw}^{(L)}(k,\tau) &=& \frac{k^3}{6 \pi^2 a^2} \biggl[ 7 k^2 \bigl| F_{k}(\tau)\bigr|^2 - 5 \bigl| G_{k}(\tau)\bigr|^2 + 4 \biggl( {\mathcal H} - \frac{{\mathcal H}^{\prime}}{{\mathcal H}}\biggr)(G_{k} F_{k}^{*} + G_{k}^{*} F_{k})\biggr].
\label{sixG}
\end{eqnarray}
Equations (\ref{fiveG}) and (\ref{sixG}) give the pressure and the shifted pressure respectively;
note that ${\mathcal P}_{gw}^{(L)}$ enters the conservation
equation obeyed by the mean values:
\begin{equation}
\partial_{\tau} \overline{\rho}^{(L)}_{gw} + 3 {\mathcal H} [ \overline{\rho}^{(L)}_{gw} + \overline{{\mathcal P}}^{(L)}_{gw} ] =0,
\label{sevenG}
\end{equation}
and it coincides with $p_{gw}^{(L)}$ when ${\mathcal H}^2 = {\mathcal H}^{\prime}$, i.e. in the case of an exact de Sitter expansion.
Inside the Hubble radius (i.e. for $k \gg {\mathcal H}$), Eqs. (\ref{fourG}), (\ref{fiveG}) and (\ref{sixG}) imply:
\begin{equation}
\lim_{k \gg {\mathcal H}} \frac{p_{gw}^{(L)}(k,\tau) }{\rho_{gw}^{(L)}(k,\tau)} = \frac{{\mathcal P}_{gw}^{(L)}(k,\tau) }{\rho_{gw}^{(L)}(k,\tau)} = \frac{1}{3} \biggl( 1 + \frac{{\mathcal H}^2}{k^2} \biggr).
\label{eightG}
\end{equation}
We can then conclude, as expected, that the result (\ref{eightG}) coincides with Eq. (\ref{tenF})
(following, in turn, from Eqs. (\ref{sixF})--(\ref{sevenF})). In the opposite physical regime (i.e. when $k \ll {\mathcal H}$), however,
quantitative conclusions cannot be deduced in general terms and the best strategy will then be (see section \ref{sec4}) to analyze a number of specific examples
by explicitly computing the spectral energies and pressures. This discussion will allow for a fair comparison among the results of
Eqs. (\ref{fourG})--(\ref{sixG}) and Eqs. (\ref{sixF})--(\ref{sevenF}) in the low-frequency domain where
$k \ll {\mathcal H}$.
\subsection{The Brill-Hartle scheme and the quantum averaging}
The quantum average of the Brill-Hartle-Isaacson results of Eqs. (\ref{rhoB}) and (\ref{pB})
leads to the following spectral energy and pressure:
\begin{equation}
\rho_{gw}^{(B)}(k,\tau) = \frac{k^3}{\pi^2 a^2} \bigl|G_{k}(\tau)\bigr|^2 ,\qquad
p_{gw}^{(B)}(k,\tau) = \frac{k^5}{3\pi^2 a^2} \bigl|F_{k}(\tau)\bigr|^2.
\label{fourH}
\end{equation}
As usual in the limit $k \gg {\mathcal H}$ we have that $p_{gw}^{(B)}(k,\tau)/\rho_{gw}^{(B)}(k,\tau) \to 1/3$ since
$\bigl|G_{k}(\tau)\bigr|^2 \simeq (k^2 + {\mathcal H}^2) \bigl|F_{k}(\tau)\bigr|^2$ when the corresponding wavelengths
are shorter than the Hubble radius. In the opposite limit, however, Eq. (\ref{fourH}) leads to a bizarre result:
the energy density is asymptotically vanishing (i.e. $\rho_{gw}^{(B)}(k,\tau) \to 0$) and the spectral pressure
becomes much larger than $\rho_{gw}^{(B)}(k,\tau)$ (i.e. $p_{gw}^{(B)}(k,\tau)/\rho_{gw}^{(B)}(k,\tau) \gg 1$) and is formally divergent.
For contracting backgrounds the opposite is true: the spectral pressure gets progressively more negligible
(i.e. $p_{gw}^{(B)}(k,\tau) \to 0$) so that $p_{gw}^{(B)}(k,\tau)/\rho_{gw}^{(B)}(k,\tau) \ll 1$.
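Both regimes can be made quantitative with the same illustrative de Sitter modes: the ratio of the two spectral densities in Eq. (\ref{fourH}) approaches $1/3$ inside the Hubble radius and grows without bound outside of it:

```python
import numpy as np

def w_brill_hartle(k, tau, H=1.0):
    """p_gw^(B)/rho_gw^(B) of Eq. (fourH) for de Sitter modes (illustrative)."""
    a = -1.0 / (H * tau)
    x = k * tau
    f = np.exp(-1j * x) / np.sqrt(2 * k) * (1 - 1j / x)
    dfdtau = k * np.exp(-1j * x) / np.sqrt(2 * k) * (-1j - 1 / x + 1j / x**2)
    F = f / a
    G = (dfdtau + f / tau) / a    # G = F', with calH = -1/tau
    return (k**2 / 3.0) * abs(F)**2 / abs(G)**2

w_inside = w_brill_hartle(100.0, -1.0)   # sub-Hubble: close to 1/3
w_outside = w_brill_hartle(0.01, -1.0)   # super-Hubble: formally divergent
```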
These apparent inconsistencies (further explored in the concrete examples of sections \ref{sec4} and \ref{sec5})
can be expected since the Brill-Hartle average automatically selects the modes that
are inside the Hubble radius; it is therefore not surprising
that it leads to quantitative ambiguities when the wavelengths exceed the Hubble radius. It is possible to obtain a covariant expression that also applies beyond the Hubble radius. In this case, however,
the energy density and pressure do not follow from the Brill-Hartle scheme. To
prove this statement let us neglect, for simplicity, the potential sources and write the full
second-order action in the covariant case:
\begin{equation}
S_{cov} = \frac{1}{8 \ell_{P}^2}\int \, \, \sqrt{- \overline{g}}\,\, d^{4} x\,\, \biggl[ \overline{g}^{\alpha\beta}\,\,\overline{\nabla}_{\alpha} \widetilde{h}_{\mu\nu} \overline{\nabla}_{\beta} \widetilde{h}^{\mu\nu} + 2 \overline{R}^{\,\,\gamma\,\,\,\,\,\,\,\,\alpha}_{\,\,\,\,\,\mu\nu} \, \widetilde{h}_{\gamma\alpha} \widetilde{h}^{\mu\nu} \biggr],
\label{fiveH}
\end{equation}
where the conditions of Eq. (\ref{oneD}) have been consistently imposed.
By extremizing the action (\ref{fiveH}) with respect to the variation of $ \widetilde{h}^{\mu\nu}$ we obtain the evolution equation for the covariant tensor amplitude
\begin{equation}
\overline{\nabla}_{\alpha} \overline{\nabla}^{\alpha} \widetilde{h}_{\mu\nu}
- 2\,\,\overline{R}^{\,\,\gamma}_{\,\,\,\,\,\mu\nu\alpha} \, \widetilde{h}_{\gamma}^{\,\,\,\,\alpha} =0.
\label{sixH}
\end{equation}
If we now consider the background metric and the fluctuating amplitude
as independent variables the energy-momentum tensor following from Eq. (\ref{fiveH})
reads:
\begin{eqnarray}
T_{\mu\nu}^{(gw)} &=& \frac{1}{4\ell_{P}^2} \biggl\{ \overline{\nabla}_{\mu} \widetilde{h}_{\alpha\beta} \overline{\nabla}_{\nu} \widetilde{h}^{\,\,\alpha\beta} + \overline{\nabla}_{\alpha} \widetilde{h}_{\mu\beta} \,\, \overline{\nabla}^{\alpha} \widetilde{h}^{\,\,\,\,\beta}_{\nu} + \overline{\nabla}_{\alpha} \widetilde{h}_{\nu\beta}\,\, \overline{\nabla}^{\alpha} \widetilde{h}^{\,\,\,\,\beta}_{\mu}
\nonumber\\
&+& 2\,\, \overline{R}^{\gamma}_{\,\,\,\,\,\,\mu \rho \alpha} \, \widetilde{h}_{\gamma}^{\,\,\alpha} \, \widetilde{h}_{\nu}^{\,\,\rho}
+ 2 \,\,\overline{R}^{\,\,\gamma}_{\,\,\,\,\,\, \nu \rho \alpha} \, \widetilde{h}_{\gamma}^{\,\,\alpha} \, \widetilde{h}^{\,\,\,\rho}_{\mu}
\nonumber\\
&-& \frac{1}{2} \overline{g}_{\mu\nu} \biggl[ \overline{\nabla}_{\rho} \widetilde{h}_{\alpha\beta} \overline{\nabla}^{\rho} \widetilde{h}^{\alpha\beta} + 2 \overline{R}^{\,\,\gamma\,\,\,\,\,\,\,\,\rho}_{\,\,\,\,\,\alpha\beta} \, \widetilde{h}_{\gamma\rho} \,\,\,\widetilde{h}^{\alpha\beta} \biggr]\biggr\}.
\label{sevenH}
\end{eqnarray}
If we now apply the tenets of the Brill-Hartle procedure \cite{four} the covariant gradients average out to zero.
Therefore we can flip the covariant derivative from one amplitude to the other. If we do this with the terms inside the squared bracket
of Eq. (\ref{sevenH}) we can obtain terms like $\widetilde{h}^{\alpha\beta} \overline{\nabla}_{\rho} \overline{\nabla}^{\rho} \widetilde{h}_{\alpha\beta}$. Using Eq. (\ref{sixH}) all these terms will produce various Riemann tensors
that will be neglected so that, at the very end, the only term surviving the average will be the first contribution
of Eq. (\ref{sevenH}), i.e.
\begin{equation}
T_{\mu\nu}^{(gw)} = {\mathcal B}_{\mu\nu} = \frac{1}{4 \ell_{P}^2} \langle \overline{\nabla}_{\mu} \widetilde{h}_{\alpha\beta} \overline{\nabla}_{\nu} \widetilde{h}^{\alpha\beta} \rangle_{BH} = \frac{1}{4 \ell_{P}^2} \langle \partial_{\mu} h_{ij} \partial_{\nu} \, h^{i j} \rangle_{BH},
\label{eightH}
\end{equation}
where the Brill-Hartle average has been intentionally indicated to clarify the origin of the term.
The second equality in Eq. (\ref{eightH})
follows by making explicit the covariant derivatives and by appreciating that, within the present
definitions, $\widetilde{h}_{i\, j} = - a^2 h_{ij}$ while the other components of $\widetilde{h}_{\mu\nu}$ vanish.
Equation (\ref{eightH}) coincides with Eq. (\ref{threeD}) and it shows that the Brill-Hartle average
effectively neglects all the terms that are relevant beyond the Hubble radius.
A result applicable beyond the Hubble radius follows from
Eq. (\ref{fiveH}) but without imposing the Brill-Hartle averaging: if we
use Eq. (\ref{fiveH}) and express it in the conformally flat case
(i.e. $\overline{g}_{\mu\nu} = a^2 \eta_{\mu\nu}$ and $ \widetilde{h}_{ij} = - a^2 h_{ij}$)
we obtain, after a lengthy but straightforward calculation, the same expression
of the effective energy-momentum tensor ${\mathcal F}_{\mu\nu}$
given in Eqs. (\ref{rhoF})--(\ref{pF}).
\renewcommand{\theequation}{4.\arabic{equation}}
\section{The spectral energy density}
\setcounter{equation}{0}
\label{sec4}
The salient properties of the different pseudo-tensors in the case of expanding backgrounds are summarized,
for the sake of conciseness, in Tab. \ref{TABLE1}. While the basic features of ${\mathcal F}_{\mu\nu}$ have been deduced
without assuming any specific evolution of the background, the physical properties of ${\mathcal L}_{\mu\nu}$
in the long wavelength limit demand a more concrete analysis of some specific examples. Table \ref{TABLE1} also suggests
that the Brill-Hartle results are only applicable in the high-frequency regime and they must be otherwise completed
by the full expression of the covariant energy-momentum tensor (see Eq. (\ref{sevenH})) which coincides
with ${\mathcal F}_{\mu\nu}$ in the conformally flat case.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{||c|c|c|c|c||}
\hline
\hline
\rule{0pt}{4ex} pseudo-tensor& $w_{gw}$ ($k\tau\gg 1$) & $w_{gw}$ ($k\tau \ll 1$) & $\rho_{gw}^{(X)}$ ($k\tau \gg 1$) & $\rho_{gw}^{(X)}$($k\tau \ll 1$) \\
\hline
\hline
${\mathcal F}_{\mu\nu}$& $1/3$ & $-\,1/3$ & $\rho_{gw}^{(F)} \geq 0$ & $\rho_{gw}^{(F)} \geq 0$ \\
${\mathcal L}_{\mu\nu}$& $1/3$ & undetermined & $\rho_{gw}^{(L)}\geq 0 $ & undetermined \\
${\mathcal B}_{\mu\nu}$ & $1/3$ & not applicable & $\rho_{gw}^{(B)} \geq 0$ & not applicable \\
\hline
\hline
\end{tabular}
\caption{Summary of the salient properties of the different pseudo-tensors in the case of expanding backgrounds;
$w_{gw}$ denotes the ratio of the spectral pressure and of the spectral energy density in the different cases.}
\label{TABLE1}
\end{center}
\end{table}
A table similar to Tab. \ref{TABLE1} can be compiled in the case of contracting
backgrounds\footnote{For instance $-1/3$ must be substituted
by $1$, as the general considerations of Eq. (\ref{sixteenF}) demonstrate (see also Ref. \cite{ten}
for some explicit example).} but in what follows the ambiguities of Tab. \ref{TABLE1} will be addressed
mainly in the case of expanding backgrounds. When the wavelengths are all inside the Hubble radius, the frequencies
of the spectrum roughly range between the aHz and $100$ MHz (i.e. between $10^{-18}$ Hz and $10^{8}$ Hz).
In backreaction problems (see e.g. \cite{nine,ten}) the averaged energy density and pressure
beyond the Hubble radius are determined by integrating the spectral energy density and the spectral pressure
over $d \ln{k}$ between the fixed extrema $k_{ex}$ and $k_{re}$
corresponding to the wavelengths that exit and reenter the Hubble radius.
\subsection{The expanding branch of the de Sitter space-time}
If de Sitter space is exact (i.e. in the absence of slow-roll corrections) the scale
factor is given by $a_{i}(\tau) = (- \tau_{1}/\tau)$ with ${\mathcal H}= - 1/\tau$; the
scalar modes are absent but the propagating tensors are characterized by
the following mode function:
\begin{equation}
F_{k}(\tau) = \frac{1}{\sqrt{2 k} \, a(\tau)} \biggl( 1 - \frac{i }{k \tau} \biggr) \, e^{- i \, k \tau}, \qquad \tau \leq - \tau_{1},
\label{MF}
\end{equation}
where the boundary conditions follow from Eq. (\ref{threeE}). The spectral energy
density is in general a function of $k$ and $\tau$ but
if we introduce $x = | k \tau |$ the spectral energy and pressure are both
functions of the single dimensionless variable $x$:
\begin{equation}
\rho_{gw}^{(F)}(x) = \frac{H_{1}^4}{4 \pi^2} \biggl[ x^2 (2 x^2 +1)\biggr], \qquad
p_{gw}^{(F)}(x) = \frac{H_{1}^4}{12 \pi^2} \biggl[ x^2 (2 x^2 -1)\biggr],
\label{twoL}
\end{equation}
where $H_{1} a_{1} \equiv H_{1} = 1/\tau_{1}$ [recall that $a_{1} = a(- \tau_{1}) =1$]. According to Eq. (\ref{twoL})
the spectral energy is always positive semi-definite while the effective barotropic index interpolates
between $-1/3$ (when $x \ll 1$) and $1/3$ (when $x \gg 1$). Both results have been anticipated
in Tab. \ref{TABLE1} on the basis of the general considerations of section \ref{sec3}. Using then Eq. (\ref{twoL})
we have:
\begin{equation}
\rho_{gw}^{(F)}(x) \geq 0, \qquad \lim_{x \gg 1} \frac{p_{gw}^{(F)}(x)}{\rho_{gw}^{(F)}(x)} = \frac{1}{3},
\qquad \lim_{x \ll 1} \frac{p_{gw}^{(F)}(x)}{\rho_{gw}^{(F)}(x)} = - \frac{1}{3},
\label{threeL}
\end{equation}
so that Eq. (\ref{threeL}) agrees exactly with Eqs. (\ref{tenF}) and (\ref{sixteenF}) since
the limit $ x \ll 1$ corresponds to those frequencies that are smaller than the
rate of variation of the geometry while in the regime $ x \gg 1$ the frequencies exceed
${\mathcal H}$.
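The limits of Eq. (\ref{threeL}) can also be verified symbolically starting from the mode function of Eq. (\ref{MF}). In the following sketch (in Python, with sympy) the normalizations $\rho_{gw}^{(F)} = k^3 (|G_{k}|^2 + k^2 |F_{k}|^2)/(2\pi^2 a^2)$ and $p_{gw}^{(F)} = k^3(|G_{k}|^2 - k^2 |F_{k}|^2/3)/(2\pi^2 a^2)$ (with $G_{k} = F_{k}^{\prime}$) are an assumption of the sketch rather than a definition taken from the text: they are inferred here because they reproduce Eq. (\ref{twoL}) exactly.

```python
# Symbolic check of Eqs. (twoL)-(threeL) in the expanding de Sitter branch.
# Assumed normalizations: rho = k^3(|G|^2 + k^2|F|^2)/(2 pi^2 a^2) and
# p = k^3(|G|^2 - k^2|F|^2/3)/(2 pi^2 a^2); both reproduce Eq. (twoL).
import sympy as sp

k, tau1 = sp.symbols('k tau_1', positive=True)
tau = sp.symbols('tau', negative=True)        # expanding branch: tau <= -tau_1 < 0

a = tau1/(-tau)                               # a_i(tau) = -tau_1/tau, H_1 = 1/tau_1
F = (1 - sp.I/(k*tau))*sp.exp(-sp.I*k*tau)/(sp.sqrt(2*k)*a)
G = sp.diff(F, tau)                           # G_k = F_k'

# f = a F solves the standard mode equation f'' + (k^2 - a''/a) f = 0,
# with a''/a = 2/tau^2 in the pure de Sitter case
f = sp.simplify(a*F)
assert sp.simplify(sp.diff(f, tau, 2) + (k**2 - 2/tau**2)*f) == 0

absF2 = sp.simplify(F*sp.conjugate(F))
absG2 = sp.simplify(G*sp.conjugate(G))
rho = sp.simplify(k**3*(absG2 + k**2*absF2)/(2*sp.pi**2*a**2))
p   = sp.simplify(k**3*(absG2 - k**2*absF2/3)/(2*sp.pi**2*a**2))

# Eq. (twoL) with x^2 = (k tau)^2 and H_1 = 1/tau_1
x2 = k**2*tau**2
assert sp.simplify(rho - x2*(2*x2 + 1)/(4*sp.pi**2*tau1**4)) == 0
assert sp.simplify(p - x2*(2*x2 - 1)/(12*sp.pi**2*tau1**4)) == 0

# Eq. (threeL): the barotropic index interpolates between 1/3 and -1/3
ratio = sp.cancel(p/rho)
assert sp.limit(ratio, tau, -sp.oo) == sp.Rational(1, 3)
assert sp.limit(ratio, tau, 0, '-') == -sp.Rational(1, 3)
```

The same substitutions make the positivity of $\rho_{gw}^{(F)}(x)$ for every $x$ manifest, since the result is a sum of squares.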
In the case of ${\mathcal L}_{\mu\nu}$ the same analysis leading to Eq. (\ref{threeL}) solves some of the ambiguities
listed in Tab. \ref{TABLE1}. In particular,
using Eq. (\ref{MF}) into Eqs. (\ref{fourG})--(\ref{fiveG}) (recall also Eqs. (\ref{rhoL})--(\ref{pL})) the
spectral energy density and the spectral pressure are:
\begin{equation}
\rho_{gw}^{(L)}(x) = \frac{H_{1}^4}{4 \pi^2} \biggl[x^2 (2 x^2 - 7)\biggr], \qquad
p_{gw}^{(L)}(x) = {\mathcal P}_{gw}^{(L)}(x) = \frac{H_{1}^4}{12 \pi^2} \biggl[ x^2 (2 x^2 + 7)\biggr].
\label{fourL}
\end{equation}
According to Eq. (\ref{fourL}) the spectral energy density does not have a definite sign since it is positive
inside the Hubble radius but negative outside:
\begin{equation}
\lim_{x \gg 1} \rho_{gw}^{(L)}(x) = \frac{H_{1}^4}{2 \pi^2} x^4, \qquad \lim_{x \ll 1} \rho_{gw}^{(L)}(x) = - \frac{7 H_{1}^4}{4 \pi^2} x^2,
\label{fiveL}
\end{equation}
in agreement with previous results \cite{eight,nine,ten}. Since in the de Sitter case ${\mathcal H}^2 = {\mathcal H}^{\prime}$ we also have that the pressure and the shifted pressure coincide, i.e. $p_{gw}^{(L)}(x) = {\mathcal P}_{gw}^{(L)}(x) $; the effective barotropic index is then given by
\begin{equation}
\lim_{x \gg 1} \frac{{\mathcal P}_{gw}^{(L)}(x)}{\rho_{gw}^{(L)}(x)} = \frac{1}{3},
\qquad \lim_{x \ll 1} \frac{{\mathcal P}_{gw}^{(L)}(x)}{\rho_{gw}^{(L)}(x)} = - \frac{1}{3},
\label{sixL}
\end{equation}
which is formally the same result as in Eq. (\ref{threeL}) with the difference that the signs are inverted:
the averaged energy density is negative while the corresponding pressure is positive. Taken at face
value the result of Eq. (\ref{sixL}) violates the weak energy condition but it is difficult to attribute a
profound physical meaning to this occurrence as long as there exist other pseudo-tensors (like ${\mathcal F}_{\mu\nu}$)
not violating it.
In the Brill-Hartle case Eqs. (\ref{rhoB})--(\ref{pB}) and (\ref{fourH}) imply that the corresponding spectral energy density
and pressure are
\begin{equation}
\rho_{gw}^{(B)}(x) = \frac{H_{1}^4}{2 \pi^2} x^4, \qquad
p_{gw}^{(B)}(x) = \frac{ H_{1}^4}{6 \pi^2} \biggl[ x^2 ( x^2 + 1)\biggr].
\label{sevenL}
\end{equation}
According to Eq. (\ref{sevenL}) the energy density is positive semidefinite but the effective barotropic
index diverges in the limit $x \to 0 $:
\begin{equation}
\rho_{gw}^{(B)}(x) \geq 0, \qquad \lim_{x \gg 1} \frac{p_{gw}^{(B)}(x)}{\rho_{gw}^{(B)}(x)} = \frac{1}{3},
\qquad \lim_{x \ll 1} \frac{p_{gw}^{(B)}(x)}{\rho_{gw}^{(B)}(x)} \simeq \frac{1}{3 x^2}.
\label{eightL}
\end{equation}
Equation (\ref{eightL}) confirms the conclusion of section \ref{sec3} where it has been
shown, on general grounds, that the Brill-Hartle approach selects a priori only the wavelengths inside
the Hubble radius and gives the same result as all the other strategies only in this physical domain.
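The limits quoted in Eqs. (\ref{fiveL}), (\ref{sixL}) and (\ref{eightL}) follow directly from the explicit expressions of Eqs. (\ref{fourL}) and (\ref{sevenL}); the following sketch (in Python, with sympy) simply automates the corresponding algebra:

```python
# Limits of the Landau-Lifshitz and Brill-Hartle expressions in de Sitter,
# i.e. Eqs. (fourL) and (sevenL) of the text.
import sympy as sp

x, H1 = sp.symbols('x H_1', positive=True)
pref = H1**4/sp.pi**2

rho_L = pref*x**2*(2*x**2 - 7)/4           # Eq. (fourL)
p_L   = pref*x**2*(2*x**2 + 7)/12
rho_B = pref*x**4/2                        # Eq. (sevenL)
p_B   = pref*x**2*(x**2 + 1)/6

# Eq. (fiveL): positive inside the Hubble radius, negative outside
assert sp.simplify(sp.limit(rho_L/x**4, x, sp.oo) - pref/2) == 0
assert sp.simplify(sp.limit(rho_L/x**2, x, 0) + 7*pref/4) == 0

# Eq. (sixL): the barotropic index interpolates between 1/3 and -1/3
assert sp.limit(p_L/rho_L, x, sp.oo) == sp.Rational(1, 3)
assert sp.limit(p_L/rho_L, x, 0) == -sp.Rational(1, 3)

# Eq. (eightL): 1/3 at high frequencies, divergent ratio as x -> 0
ratio_B = sp.cancel(p_B/rho_B)             # reduces to (x^2 + 1)/(3 x^2)
assert sp.limit(ratio_B, x, sp.oo) == sp.Rational(1, 3)
assert sp.limit(ratio_B, x, 0) == sp.oo
```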
\subsection{The expanding de Sitter background matched to radiation}
If the mode function is normalized during the de Sitter phase
but evolves through radiation, the modes not only exit the Hubble radius but can also reenter. The
spectral energy density and pressure can then be expressed in terms of two dimensionless variables\footnote{Note that, unlike the pure de Sitter case, we find it more convenient to define $x = k\tau$ (and not $x = |k\tau|$ as in the previous case).}, i.e. $x = k\tau$ and $y = k \tau_{1}$. The scale factor for $\tau \geq -\tau_{1}$ is linear (as in the case of a radiation-dominated
regime), $a_{r}(\tau) = (\tau + 2 \tau_{1})/\tau_{1}$, and it is
continuously matched to $a_{i}(\tau) = (- \tau_{1}/\tau)$. The scale
factor and its rate of variation are both continuous at $-\tau_{1}$, i.e.
$a_{i}(-\tau_{1}) = a_{r}(-\tau_{1})$ and ${\mathcal H}_{i}(-\tau_{1})=
{\mathcal H}_{r}(-\tau_{1})$. With these specifications we have that for $\tau \geq - \tau_{1}$ the mode functions are
given by
\begin{equation}
F_{k}(\tau) = \frac{1}{ \sqrt{2 k} \,a_{r}(\tau)} \biggl[ c_{+}(k, \tau_{1}) e^{ - i k (\tau+ 2 \tau_{1}) }+ c_{-}(k, \tau_{1}) e^{ i k (\tau+ 2 \tau_{1})}\biggr],
\label{oneM}
\end{equation}
where ${\mathcal H} = 1/(\tau + 2 \tau_{1})$ and $G_{k}= F_{k}^{\prime}$. The complex coefficients $c_{\pm}(k,\tau_{1})$ appearing in Eq. (\ref{oneM})
obey $|c_{+}(k,\tau_{1})|^2 - |c_{-}(k,\tau_{1})|^2 =1$ and are given by:
\begin{equation}
c_{+}( y) = \frac{e^{ 2 \,i\, y} ( 2 y^2 + 2 \, i\, y-1)}{2 y^2}, \qquad c_{-}( y) = \frac{1}{2 y^2},
\label{twoM}
\end{equation}
where we introduced the dimensionless variable $y = k \tau_{1}$. The spectral energy density
and the pressure have an exact expression which is however not so revealing. For instance
in the case of ${\mathcal F}_{\mu\nu}$ we have:
\begin{eqnarray}
\rho_{gw}^{(F)}(x,y) &=& \frac{H_{1}^4\, y^3}{8 \pi ^2 (x+2 y)^6} \biggl[\left(2 y^4+1\right) \left(2 x^2+8 x y+8 y^2+1\right)
\nonumber\\
&+&\left(4 x y^2-2 x+8 y^3-2 y\right)
\sin{2 (x+y)} - \left(4 x y+6 y^2+1\right) \cos{2 (x+y)}\biggr],
\nonumber\\
p_{gw}^{(F)}(x,y) &=& \frac{ H_{1}^4 y^3}{24 \pi ^2 (x+2 y)^6} \biggl[\left(2 y^4+1\right) \left(2 x^2+8 x y+8 y^2+3\right)
\nonumber\\
&-& 2 \left(4 x^2 y+10 x y^2+3 x+4 y^3+3 y\right) \sin{2 (x+y)}
\nonumber\\
&-&\left(x^2 \left(8 y^2-4\right)+4 x \left(8 y^2-1\right) y+32 y^4
+ 2 y^2+3\right)\cos{2 (x+y)} \biggr].
\label{threeM}
\end{eqnarray}
The spectral energy density and pressure appearing in Eq. (\ref{threeM})
depend on the two dimensionless variables $x= k\tau$ and $y = k\tau_{1}$; these expressions
can be usefully compared with the results of Eq. (\ref{twoL}) holding in the pure de Sitter case.
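Before expanding Eq. (\ref{threeM}) in its various limits, a quick consistency check confirms that the coefficients of Eq. (\ref{twoM}) satisfy the normalization condition $|c_{+}(y)|^2 - |c_{-}(y)|^2 =1$ for every $y$; a minimal sketch (in Python, with sympy):

```python
# Unitarity of the mixing coefficients of Eq. (twoM):
# |c_+(y)|^2 - |c_-(y)|^2 = 1 for every y = k tau_1.
import sympy as sp

y = sp.symbols('y', positive=True)
c_plus  = sp.exp(2*sp.I*y)*(2*y**2 + 2*sp.I*y - 1)/(2*y**2)
c_minus = 1/(2*y**2)

norm = sp.simplify(c_plus*sp.conjugate(c_plus) - c_minus*sp.conjugate(c_minus))
assert sp.simplify(norm - 1) == 0

# |c_-|^2 -> 1/(4 y^4) for y << 1: the wavelengths that are larger than
# the Hubble radius at the transition are copiously amplified
assert sp.limit(y**4*sp.Abs(c_minus)**2, y, 0) == sp.Rational(1, 4)
```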
The frequencies amplified in the transition from de Sitter space-time
to the radiation epoch always obey the condition $y \ll 1$. When $x < 1$ the amplified frequencies are still smaller than the rate of variation of the geometry: this means that to make sure that the wavelengths are larger than the Hubble radius during the radiation stage, Eq. (\ref{threeM}) should be expanded for $ y \ll 1$ and for $x\ll1 $ with the condition $ y < x$. The leading order result of this double expansion is then given by:
\begin{eqnarray}
\rho_{gw}^{(F)}(x,y) &=& \frac{H_{1}^4}{4 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}\biggl(\frac{y}{x}\biggr) \biggr],
\label{fourM}\\
p_{gw}^{(F)}(x,y) &=& - \frac{H_{1}^4}{12 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}\biggl(\frac{y}{x}\biggr) \biggr].
\label{fiveM}
\end{eqnarray}
As expected Eqs. (\ref{fourM}) and (\ref{fiveM}) imply that the effective barotropic index is $-1/3$ while the energy density
is positive semi-definite. The wavelengths that exited the Hubble radius during the de Sitter
phase and reentered during the radiation epoch correspond to the limit $y \ll 1$ and $x \gg 1$; as expected, the barotropic index goes in this case to $1/3$ even if the approach is not monotonic but oscillating, as can also be argued from Eq. (\ref{threeM}). Equations (\ref{fourM}) and (\ref{fiveM}) confirm, once more, the summary of Tab. \ref{TABLE1} in the case of ${\mathcal F}_{\mu\nu}$.
The results of Eqs. (\ref{threeM}), (\ref{fourM}) and (\ref{fiveM}) will now be compared with the analog expressions
derived from Eqs. (\ref{rhoL}) and (\ref{pL}) in the Landau-Lifshitz approach; the exact
results in this case are:
\begin{eqnarray}
\rho_{gw}^{(L)}(x,y) &=& \frac{H_{1}^4\, y^3}{8 \pi ^2 (x+2 y)^6} \biggl[\left(2 y^4+1\right) \left(2 x^2+8 x y+8 y^2-7\right)
\nonumber\\
&+& \left(12 x y+10 y^2+7\right) \cos{2 (x+y)}
\nonumber\\
&-& 2 \left(6 x y^2-3 x+12 y^3+y\right) \sin{2 (x+y)}\biggr],
\nonumber\\
p_{gw}^{(L)}(x,y) &=& \frac{H_{1}^4\, y^3}{24 \pi ^2 (x+2 y)^6}\biggl[
\left(2 y^4+1\right) \left(2 x^2+8 x y+8 y^2-5\right)
\nonumber\\
&+& \left(12 x^2 \left(2 y^2-1\right)+4 x \left(24 y^2-7\right) y+96 y^4-18
y^2+5\right) \cos{2 (x+y)}
\nonumber\\
&+& 2 \left(12 x^2 y+38 x y^2+5 x+28 y^3+5 y\right) \sin{2 (x+y)}\biggr].
\label{sixM}
\end{eqnarray}
According to Eq. (\ref{sixM}), for $x \gg 1$ and $y \gg 1$ the effective barotropic index is
always $1/3$; this conclusion is compatible with Eq. (\ref{threeM}) in the same physical limit.
However in the limit $ y \ll 1$, $x\ll 1$ and $y< x$ the results are:
\begin{eqnarray}
\rho_{gw}^{(L)}(x,y) &=& - \frac{5 H_{1}^4}{12 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}\biggl(\frac{y}{x}\biggr) \biggr],
\label{sevenM}\\
p_{gw}^{(L)}(x,y) &=& \frac{7 H_{1}^4}{12 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}\biggl(\frac{y}{x}\biggr) \biggr].
\label{eightM}
\end{eqnarray}
According to Eqs. (\ref{sevenM}) and (\ref{eightM}) the spectral energy density is negative while
the barotropic index is given by $-7/5$. If we consider the shifted pressure
${\mathcal P}_{gw}^{(L)}(k,\tau)$ (see Eqs. (\ref{thirteenC}) and (\ref{sixG})) the result is different
\begin{eqnarray}
{\mathcal P}_{gw}^{(L)}(x,y) &=& \frac{5 H_{1}^4}{36 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}\biggl(\frac{y}{x}\biggr) \biggr],
\label{nineM}
\end{eqnarray}
and it leads, as expected, to the more standard (i.e. $-1/3$) barotropic index. However, while
in the case of Eqs. (\ref{fourM}) and (\ref{fiveM}) the energy density is positive and the pressure is negative
(as it is common when the spatial gradients dominate) in the Landau-Lifshitz case the situation is reversed
since the energy density is negative and the pressure is positive. Finally, in the case of the Brill-Hartle proposal
the energy density is positive semi-definite and the effective barotropic index is $1/3$ when $x \gg 1$ and $y \ll 1$.
However, in the limits $y \ll 1$, $x\ll 1$ and $y < x$ we have instead
\begin{eqnarray}
\rho_{gw}^{(B)}(x,y) &=& \frac{ H_{1}^4}{18\pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}(y x) \biggr],
\label{tenM}\\
p_{gw}^{(B)}(x,y) &=& \frac{H_{1}^4}{6 \pi^2} \frac{y^4}{x^2} \biggl[ 1 + {\mathcal O}(x^2) + {\mathcal O}(y x) \biggr],
\label{elevenM}
\end{eqnarray}
showing that the spectral pressure is larger than the energy density, as already discussed in the pure de Sitter case
and as follows from the general arguments illustrated after Eq. (\ref{fourH}).
All in all the effective energy momentum tensor obtained from the second-order
variation of the action leads to an energy density that is always gauge-invariant and positive semidefinite
exactly as argued in Tab. \ref{TABLE1}. In the Landau-Lifshitz parametrization the
weak energy condition is violated while the ambiguities of the Brill-Hartle approach (when applied for frequencies
smaller than the rate of variation of the geometry) demand a completion of the energy-momentum
pseudo-tensor in the low-frequency limit.
\renewcommand{\theequation}{5.\arabic{equation}}
\section{Observables in the concordance scenario}
\setcounter{equation}{0}
\label{sec5}
The expectation values of the energy density [i.e. $\overline{\rho}^{(X)}_{gw}$ with $X=F,\, L,\, B$] lead to
the corresponding spectral energy densities in critical units
\begin{equation}
\Omega_{gw}^{(X)}(k,\tau) = \frac{1}{\rho_{crit}} \, \frac{d \overline{\rho}_{gw}^{(X)}}{d \ln{k}} \equiv \frac{\rho_{gw}^{(X)}(k,\tau)}{\rho_{crit}},
\label{oneN}
\end{equation}
where $\rho_{crit} = 3 \, H^2 \, \overline{M}_{P}^2$; $\Omega_{gw}^{(X)}(k,\tau)$ together with the
power spectra $P_{T}(k,\tau)$ and $Q_{T}(k,\tau)$ are the pivotal observables customarily employed
in the concordance scenario to assess the energy density
of the relic gravitons. The (less conventional) spectral pressure
in critical units can instead be defined as:
\begin{equation}
\Sigma_{gw}^{(X)}(k,\tau) = \frac{1}{\rho_{crit}} \, \frac{d \overline{p}_{gw}^{(X)}}{d \ln{k}} \equiv \frac{p_{gw}^{(X)}(k,\tau)}{\rho_{crit}}.
\label{twoN}
\end{equation}
$\Omega_{gw}^{(X)}(k,\tau)$ and $\Sigma_{gw}^{(X)}(k,\tau)$ will now be computed in the different parametrizations explored so far and in the realistic situation where the evolution begins with a quasi-de Sitter phase, continues through a radiation-dominated epoch and finally arrives at a matter-dominated stage of expansion.
\subsection{The spectral energy density in critical units}
The properties of $\Omega_{gw}^{(F)}$ and $\Sigma_{gw}^{(F)}$ can be deduced in general terms without a specific knowledge of the evolution of the corresponding mode functions. To illustrate this point, Eqs. (\ref{sixF})--(\ref{sevenF}) can be inserted into Eqs. (\ref{oneN})--(\ref{twoN}) so that the resulting expressions are:
\begin{eqnarray}
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{1}{24 \, H^2 \, a^2 } \biggl[ Q_{T} + k^2 P_{T}\biggr],
\label{threeN}\\
\Sigma_{gw}^{(F)}(k,\tau) &=& \frac{1}{24 \, H^2 \, a^2 } \biggl[ Q_{T} - \frac{k^2}{3} P_{T}\biggr].
\label{fourN}
\end{eqnarray}
To leading order in ${\mathcal H}/k < 1$ (and even without an explicit form of the mode functions) we have that $Q_{T} = k^2\, P_{T} [ 1 + ({\mathcal H}/k)^2+ {\mathcal O}({\mathcal H}^4/k^4)]$; thus the expressions of $\Omega_{gw}^{(F)}(k,\tau)$ and $\Sigma_{gw}^{(F)}(k,\tau)$ inside the Hubble radius become:
\begin{eqnarray}
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{k^2\,\, P_{T}(k,\tau)}{12 \, H^2 \, a^2 } \biggl[ 1 + \frac{{\mathcal H}^2}{2k^2} + {\mathcal O}\biggl(\frac{{\mathcal H}^4}{k^4}\biggr) \biggr],
\label{sixN}\\
\Sigma_{gw}^{(F)}(k,\tau) &=& \frac{k^2\,\, P_{T}(k,\tau)}{36 \, H^2 \, a^2 } \biggl[ 1 + \frac{3{\mathcal H}^2}{2k^2} + {\mathcal O}\biggl(\frac{{\mathcal H}^4}{k^4}\biggr) \biggr].
\label{sevenN}
\end{eqnarray}
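The expansions of Eqs. (\ref{sixN})--(\ref{sevenN}) follow from Eqs. (\ref{threeN})--(\ref{fourN}) by elementary algebra once $Q_{T} = k^2 P_{T}[1 + ({\mathcal H}/k)^2]$ is inserted; to this order the relations are in fact exact, as the following sketch (in Python, with sympy) confirms:

```python
# Sub-Hubble expansions of Eqs. (sixN)-(sevenN) from Eqs. (threeN)-(fourN),
# dropping the O(H^4/k^4) terms in Q_T = k^2 P_T [1 + (H/k)^2 + ...].
import sympy as sp

k, Hc, PT, H, a = sp.symbols('k Hc P_T H a', positive=True)   # Hc = mathcal{H}
QT = k**2*PT*(1 + (Hc/k)**2)

Omega = (QT + k**2*PT)/(24*H**2*a**2)       # Eq. (threeN)
Sigma = (QT - k**2*PT/3)/(24*H**2*a**2)     # Eq. (fourN)

target_Omega = k**2*PT/(12*H**2*a**2)*(1 + Hc**2/(2*k**2))    # Eq. (sixN)
target_Sigma = k**2*PT/(36*H**2*a**2)*(1 + 3*Hc**2/(2*k**2))  # Eq. (sevenN)

assert sp.simplify(Omega - target_Omega) == 0
assert sp.simplify(Sigma - target_Sigma) == 0
```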
The expressions of $\Omega_{gw}^{(F)}(k,\tau)$ and $\Sigma_{gw}^{(F)}(k,\tau)$ for typical wavelengths larger than the Hubble
radius are equally immediate since, in this limit, Eqs. (\ref{elevenF})--(\ref{twelveF}) imply
\begin{eqnarray}
Q_{T}(k,\tau) &=& Q_{T}(k,\tau_{ex}) \biggl(\frac{a_{ex}}{a} \biggr)^4 \biggl [ 1 + {\mathcal O}\biggl(\frac{k^2}{{\mathcal H}^2}\biggr) \biggr],
\label{eightN}\\
P_{T}(k,\tau) &=& P_{T}(k,\tau_{ex}) \biggl(\frac{a_{ex}}{a} \biggr)^4 \biggl [ 1 + {\mathcal O}\biggl(\frac{k^2}{{\mathcal H}^2}\biggr) \biggr],
\label{nineN}
\end{eqnarray}
where $P_{T}(k,\tau_{ex})$ and $Q_{T}(k,\tau_{ex})$ are the (constant)
values of the power spectra for $k \tau_{ex} = {\mathcal O}(1)$. Inserting
Eqs. (\ref{eightN})--(\ref{nineN}) into Eqs. (\ref{threeN})--(\ref{fourN}) the leading-order
expression for $\Omega_{gw}^{(F)}(k,\tau)$ and $\Sigma_{gw}^{(F)}(k,\tau)$ are:
\begin{eqnarray}
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{k^2\,\, P_{T}(k,\tau_{ex})}{24 \, H^2 \, a^2 } \,\,\biggl[ 1 + \frac{Q_{T}(k,\tau_{ex})}{k^2\, P_{T}(k,\tau_{ex})} \biggl(\frac{a}{a_{ex}}\biggr)^4 + {\mathcal O}\biggl(\frac{k^4}{{\mathcal H}^4}\biggr) \biggr],
\label{tenN}\\
\Sigma_{gw}^{(F)}(k,\tau) &=& -\frac{k^2\,\, P_{T}(k,\tau_{ex})}{72 \, H^2 \, a^2 } \,\, \biggl[ 1 - \frac{3 Q_{T}(k,\tau_{ex})}{k^2\, P_{T}(k,\tau_{ex})} \biggl(\frac{a}{a_{ex}}\biggr)^4+ {\mathcal O}\biggl(\frac{k^4}{{\mathcal H}^4}\biggr) \biggr].
\label{elevenN}
\end{eqnarray}
If the background expands the second term inside the squared brackets at the right hand side of Eqs. (\ref{tenN})--(\ref{elevenN})
is always negligible and, approximately, $\Sigma_{gw}^{(F)} \simeq - \Omega_{gw}^{(F)}/3$.
If the background contracts the second term inside the squared brackets at the right hand side of Eqs. (\ref{tenN})--(\ref{elevenN}) may become dominant and, in this case,
$\Omega_{gw}^{(F)} \simeq \Sigma_{gw}^{(F)}$.
For wavelengths shorter than the Hubble radius the general results of Eqs. (\ref{sixN}) and (\ref{sevenN}) are, in practice, the same for all the various prescriptions and the only differences appear in the
first correction to the leading-order result;
for illustration the next-to-leading order correction of the spectral energy density is reported in two
relevant cases:
\begin{eqnarray}
\Omega_{gw}^{(L)}(k,\tau) &=& \frac{k^2\,\, P_{T}(k,\tau)}{12 \, H^2 \, a^2 } \biggl[ 1 - \frac{7{\mathcal H}^2}{2k^2} + {\mathcal O}\biggl(\frac{{\mathcal H}^4}{k^4}\biggr) \biggr],
\label{twelveN}\\
\Omega_{gw}^{(B)}(k,\tau) &=& \frac{k^2\,\, P_{T}(k,\tau)}{12 \, H^2 \, a^2 } \biggl[ 1 + \frac{{\mathcal H}^2}{k^2} + {\mathcal O}\biggl(\frac{{\mathcal H}^4}{k^4}\biggr) \biggr].
\label{thirteenN}
\end{eqnarray}
All in all, as long as we are inside the Hubble radius,
the spectral energy density and pressure are unambiguous and do not crucially change from one
pseudo-tensor to the others. The same is not true when the corresponding wavelengths
are larger than the Hubble radius.
\subsection{Explicit results in the concordance scenario}
To examine more closely the implications of the different proposals we consider a realistic evolution where a quasi-de Sitter stage of expansion ends at a time $- \tau_{r}$ and
is replaced by the radiation-dominated stage:
\begin{equation}
a_{r}(\tau) = \frac{\beta \tau + (\beta+ 1) \tau_{r}}{\tau_{r}}, \qquad x(\tau) = k \biggl[ \tau + \frac{\beta +1}{\beta} \tau_{r} \biggr],
\label{oneO}
\end{equation}
where $\beta = (1 -\epsilon)^{-1}$ is a numerical factor required for the continuity of the scale factors in the quasi-de Sitter
stage and $\epsilon = - \dot{H}/H^2$ is the conventional slow-roll parameter. The inflationary phase ends for $\tau= - \tau_{r}$ and the scale factor is normalized as $a_{r}(- \tau_{r}) =1$. The evolution dictated by Eq. (\ref{oneO})
lasts until $\tau_{m}$ when the matter dominated stage begins:
\begin{equation}
a_{m}(\tau) = \frac{[\beta (\tau + \tau_{m}) + 2 (\beta+ 1) \tau_{r}]^2}{4 \tau_{r}[ \beta \tau_{m} + (\beta+1) \tau_{r}]}, \qquad y(\tau) = k \biggl[ \tau + \tau_{m} + 2 \frac{\beta +1}{\beta} \tau_{r} \biggr],
\label{threeO}
\end{equation}
where $a_{m}(\tau_{m}) = a_{r}(\tau_{m})$ and $a_{m}^{\prime}(\tau_{m}) = a_{r}^{\prime}(\tau_{m})$.
From Eqs. (\ref{oneO}) and (\ref{threeO}) the power
spectra can be derived before (i.e. $\tau < \tau_{m}$) and after (i.e. $\tau>\tau_{m}$) the dominance of matter:
\begin{eqnarray}
P^{(r)}_{T}(k,\tau, \tau_{r}) &=& \overline{P}_{T}(k,\tau_{r}) \frac{\sin^2{x(\tau)}}{|x(\tau)|^2}, \qquad \tau< \tau_{m},
\label{fourO}\\
P^{(m)}_{T}(k, \tau, \tau_{r}, \tau_{m}) &=& 9 \overline{P}_{T}(k,\tau_{r})\biggl[ \frac{\cos{y(\tau)}}{y^2(\tau)} - \frac{\sin{y(\tau)}}{y^3(\tau)}\biggr]^2, \qquad \tau > \tau_{m}.
\label{fiveO}
\end{eqnarray}
When the relevant wavelengths exceed the Hubble radius the general expressions of
Eqs. (\ref{fourO}) and (\ref{fiveO}) coincide i.e.
\begin{equation}
\lim_{|k\tau| \ll 1} P^{(r)}_{T}(k,\tau, \tau_{r}) = \lim_{|k\tau| \ll 1} P^{(m)}_{T}(k, \tau, \tau_{r}, \tau_{m}) = \overline{P}_{T}(k,\tau_{r}),
\label{fiveOa}
\end{equation}
where $\overline{P}_{T}(k,\tau_{r})$ denotes the (constant) inflationary power spectrum:
\begin{equation}
\overline{P}_{T}(k,\tau_{r}) = 2^{ 2 \nu} \frac{\Gamma^2(\nu)}{\pi^3} \biggl(\frac{H_{r}}{\overline{M}_{P}}\biggr)^2
\, |k\tau_{r}|^{ 3 - 2 \nu}, \qquad \nu = \frac{(3 - \epsilon)}{ 2( 1- \epsilon)}.
\label{twoO}
\end{equation}
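The common super-Hubble limit of Eqs. (\ref{fourO}) and (\ref{fiveO}) quoted in Eq. (\ref{fiveOa}) amounts to the statement that both transfer functions tend to $1$ when their arguments vanish; a quick sketch (in Python, with sympy):

```python
# Super-Hubble limits of the transfer functions of Eqs. (fourO)-(fiveO):
# both tend to 1, so that P_T -> Pbar_T(k, tau_r), as stated in Eq. (fiveOa).
import sympy as sp

z = sp.symbols('z', positive=True)
T_radiation = sp.sin(z)**2/z**2                          # Eq. (fourO)
T_matter    = 9*(sp.cos(z)/z**2 - sp.sin(z)/z**3)**2     # Eq. (fiveO)

assert sp.limit(T_radiation, z, 0) == 1
assert sp.limit(T_matter, z, 0) == 1
```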
Inserting Eqs. (\ref{fourO})--(\ref{fiveO}) into the general expressions of Eqs. (\ref{threeN})--(\ref{fourN})
the spectral energy density and pressure inside and beyond the Hubble radius can be obtained and they are:
\begin{eqnarray}
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{\overline{P}_{T}(k,\tau_{r})}{24},\qquad \Sigma_{gw}^{(F)}(k,\tau) = \frac{\overline{P}_{T}(k,\tau_{r})}{72},\qquad |k\tau| \gg 1,
\label{fiveOb}\\
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{\overline{P}_{T}(k,\tau_{r})}{24} |k\tau|^2 ,\qquad \Sigma_{gw}^{(F)}(k,\tau) = - \frac{\overline{P}_{T}(k,\tau_{r})}{72} |k\tau|^2,\qquad |k\tau| \ll 1.
\label{fiveOc}
\end{eqnarray}
Equations (\ref{fiveOb})--(\ref{fiveOc}) arise as limits of concrete expressions (holding for a specific
form of the mode functions) and they agree with the results of Eqs. (\ref{sixN})--(\ref{sevenN}) and (\ref{tenN})--(\ref{elevenN}) that are instead derived as approximate expressions\footnote{It follows from Eq. (\ref{fourO}) that $P_{T}(k,\tau) \to \overline{P}_{T}(k, \tau_{r})$ for $|k\tau| \ll 1$ while $P_{T}(k,\tau) \to \overline{P}_{T}(k, \tau_{r})/(2\, x^2)$ for $|k\tau| \gg 1$ since, in this limit, $ \sin^2{x(\tau)} \to 1/2$. } of the general results (\ref{threeN})--(\ref{fourN}). A similar discussion can be repeated in the matter-dominated stage of expansion (i.e. $\tau > \tau_{m}$) where the limits of the concrete expressions read:
\begin{eqnarray}
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{3}{32\, |k\tau|^2}\overline{P}_{T}(k,\tau_{r}),\qquad \Sigma_{gw}^{(F)}(k,\tau) = \frac{\overline{P}_{T}(k,\tau_{r})}{32 \, |k\tau|^2},\qquad |k\tau| \gg 1,
\label{fiveOd}\\
\Omega_{gw}^{(F)}(k,\tau) &=& \frac{\overline{P}_{T}(k,\tau_{r})}{96} |k\tau|^2 ,\qquad \Sigma_{gw}^{(F)}(k,\tau) = - \frac{\overline{P}_{T}(k,\tau_{r})}{288} |k\tau|^2,\qquad |k\tau| \ll 1.
\label{fiveOe}
\end{eqnarray}
The results of Eqs. (\ref{fiveOb})--(\ref{fiveOc}) and (\ref{fiveOd})--(\ref{fiveOe}) can be compared with
the analog results obtainable in the Landau-Lifshitz parametrization. Consider first the radiation
phase (i.e. $\tau < \tau_{m}$) where
\begin{equation}
\Omega_{gw}^{(L)}(k,\tau)
= - \frac{5}{72} |k\, \tau|^2 \overline{P}_{T}(k,\tau_{r}),\qquad
\Sigma_{gw}^{(L)}(k,\tau) = \frac{5}{216} |k\, \tau|^2 \overline{P}_{T}(k,\tau_{r}),\qquad |k\, \tau| \ll 1.
\label{sevenO}
\end{equation}
Equation (\ref{sevenO}) gives the spectral energy density and the spectral pressure in critical units
during the radiation epoch (i.e.
for $\tau_{m} > \tau > - \tau_{r}$) and when the relevant wavelengths are larger than the Hubble radius (i.e.
$|k \tau| \ll 1$ with $|k \tau_{r}| \ll 1$). Similarly we can also deduce the spectral energy density
during the matter stage:
\begin{equation}
\Omega_{gw}^{(L)}(k,\tau) = - \frac{11}{480} \,|k\tau|^2 \,\overline{P}_{T}(k,\tau_{r}),\qquad
\Sigma_{gw}^{(L)}(k,\tau) = \frac{11}{1440} \, |k\tau|^2 \overline{P}_{T}(k,\tau_{r}), \qquad |k\, \tau| \ll 1.
\label{nineO}
\end{equation}
Equations (\ref{sevenO}) and (\ref{nineO}) imply that the spectral
energy density in critical units is negative even if it is still true that $\Sigma_{gw}^{(L)}(k,\tau) = - \Omega_{gw}^{(L)}(k,\tau)/3$.
If we compare Eqs. (\ref{fiveOc}) and (\ref{fiveOe}) with Eqs. (\ref{sevenO}) and (\ref{nineO}) we see that
the overall signs of the energy density and of the pressure are completely reversed. While Eqs. (\ref{fiveOc}) and (\ref{fiveOe}) could be obtained on general grounds without specifying the details of the mode functions, the overall normalization of Eqs. (\ref{sevenO}) and (\ref{nineO}) does depend on
the details of the mode function and not only on the
relation between the spectral energy density and the power spectrum.
\subsection{Stationary random process inside the Hubble radius}
If the tensor amplitude is not a quantum field operator but describes a stationary random process,
the spatial variation can be approximately neglected by
focussing on the conformal time dependence:
\begin{equation}
h_{ij}(\tau) = \sum_{\lambda} e_{ij}^{(\lambda)} \, h_{\lambda}(\tau), \qquad
h_{\lambda}(\tau) = \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{\infty} e^{i \,\omega\,\tau}\, h_{\lambda}(\omega)\, d\omega.
\label{elevenO}
\end{equation}
In this case the spectral energy density is only determined
by the temporal variation of the tensor amplitude and can be deduced within
the Brill-Hartle scheme with the caveat that the result
will only apply inside the Hubble radius.
If the random process is stationary,
by definition the autocorrelation function only depends on the time difference
between the two amplitudes, i.e. $\langle h_{\lambda}(\tau) h_{\lambda^{\prime}}(\tau^{\prime})\rangle = \delta_{\lambda\, \lambda^{\prime}} \Gamma(\tau - \tau^{\prime})$. The temporal autocorrelation implies that in Fourier space
\begin{equation}
\langle h_{\lambda}(\omega) h_{\lambda^{\prime}}(\omega^{\prime})\rangle = S_{h}(\omega) \delta_{\lambda\, \lambda^{\prime}} \delta(\omega+ \omega^{\prime}),
\label{twelveO}
\end{equation}
where $S_{h}(\omega)$ is the spectral density. Using Eq. (\ref{elevenO}) inside Eq. (\ref{rhoB})
the expectation value of the energy density in the Brill-Hartle-Isaacson approach follows
from the stochastic average of Eq. (\ref{twelveO}):
\begin{equation}
\overline{\rho}_{gw} = \frac{1}{4 \ell_{P}^2 a^2 } \langle \, \partial_{\tau} \, h_{ij}\, \partial_{\tau} \, h^{i\,j} \rangle = \frac{1}{2\, \pi a^2 \ell_{P}^2} \int \frac{d\, k}{k} \, k^3\, S_{h}(k).
\label{thirteenO}
\end{equation}
Since the expression of $\Omega_{gw}(k,\tau)$ inside the Hubble radius is unambiguous,
Eq. (\ref{thirteenO}) implies then a specific relation between
the power spectrum $P_{T}$, the spectral amplitude $S_{h}$ and the spectral
energy density in critical units:
\begin{equation}
\Omega_{gw}(k) = \frac{k^2}{ 12\, H^2\, a^2} P_{T}(k) \equiv \frac{k^{3}}{6 \pi H^2 a^2 } \, S_{h}(k).
\label{fourteenO}
\end{equation}
If we pass from the angular frequencies $\omega$ to the frequencies $\nu$ (and recall that in the natural units
adopted here $\omega = k = 2 \pi \nu$) Eq. (\ref{fourteenO}) can also be phrased as:
\begin{equation}
P_{T}(\nu) = 4 \nu S_{h}(\nu) , \qquad \Omega_{gw}(\nu) = \frac{4 \pi^{2}\,\nu^3}{3 H^2 a^2 } \, S_{h}(\nu).
\label{fifteenO}
\end{equation}
The result of Eq. (\ref{fifteenO}) demonstrates that the tensor amplitudes can be considered as isotropic random fields characterized by stationary autocorrelation functions. In this case Eqs. (\ref{sixE}), (\ref{sevenE}) and (\ref{twelveO})
must be viewed as averages of classical stochastic processes not necessarily related to quantum field
operators.
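As an aside, the mutual consistency of Eqs. (\ref{fourteenO}) and (\ref{fifteenO}) is easily checked numerically; the following Python sketch (purely illustrative, with $H=a=1$) evaluates $\Omega_{gw}$ both ways:

```python
import math

def omega_gw_from_PT(nu, S_h, H=1.0, a=1.0):
    """Spectral energy density from the power spectrum, first equality
    of Eq. (fourteenO), with k = 2*pi*nu and P_T = 4*nu*S_h."""
    k = 2 * math.pi * nu
    P_T = 4 * nu * S_h
    return k**2 / (12 * H**2 * a**2) * P_T

def omega_gw_from_Sh(nu, S_h, H=1.0, a=1.0):
    """Spectral energy density from the spectral amplitude, second
    equality of Eq. (fourteenO)."""
    k = 2 * math.pi * nu
    return k**3 / (6 * math.pi * H**2 * a**2) * S_h

# The two expressions agree and reproduce the frequency form of
# Eq. (fifteenO), 4*pi^2*nu^3/(3*H^2*a^2) * S_h, for arbitrary inputs.
for nu, S_h in [(1e-3, 2.0), (0.5, 1e-4), (7.0, 3.3)]:
    w1 = omega_gw_from_PT(nu, S_h)
    w2 = omega_gw_from_Sh(nu, S_h)
    w3 = 4 * math.pi**2 * nu**3 / 3 * S_h
    assert abs(w1 - w2) <= 1e-12 * abs(w1)
    assert abs(w1 - w3) <= 1e-12 * abs(w1)
```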
\subsection{Frame-invariance of the effective action}
The effective energy-momentum pseudo-tensor shall now be evaluated
in a generalized Jordan frame where the scalar-tensor action reads:
\begin{equation}
S_{J} = \int d^4 x\, \sqrt{ - G}\,\biggl[ - \frac{A(\varphi)}{ 2 \ell_{P}^2}\, R_{J} + \frac{B(\varphi)}{2} G^{\alpha\beta} \partial_{\alpha} \varphi \partial_{\beta} \varphi
- V(\varphi) \biggr],
\label{oneP}
\end{equation}
where $A(\varphi)$ and $B(\varphi)$ are dimensionless and depend on the scalar field $\varphi$. The second-order variation of Eq. (\ref{oneP})
can be easily obtained by repeating the same steps leading to Eqs. (\ref{twoB}) and (\ref{twoBa})--(\ref{twoBb}) and the
result is:
\begin{eqnarray}
\delta^{(2)}_{t} S_{J} &=&\int d^{4} x \, \biggl\{ \frac{1}{2 \ell_{P}^2}\,\biggl[ A(\varphi) \, \overline{G}^{\alpha\beta} \, \overline{{\mathcal Z}}_{\alpha\beta} \,\,\delta^{(2)}_{t} \sqrt{-G} + A(\varphi)\, \sqrt{ -\overline{G}} \biggl( \delta^{(2)}_{t} G^{\alpha\beta} \,\overline{{\mathcal Z}}_{\alpha\beta}
\nonumber\\
&+&
\delta^{(1)}_{t} G^{\alpha\beta} \, \delta^{(1)}_{t} {\mathcal Z}_{\alpha\beta} + \overline{G}^{\alpha\beta}\,\,\delta^{(2)}_{t} {\mathcal Z}_{\alpha\beta} \biggr)
\nonumber\\
&-& \delta^{(2)}_{t}\biggl( \sqrt{-G} \, G^{\alpha\beta} \, \Gamma_{\alpha\lambda}^{\,\,\,\,\,\lambda} \, \partial_{\beta} A\biggr)
+ \delta^{(2)}_{t}\biggl( \sqrt{-G} \, G^{\alpha\beta} \, \Gamma_{\alpha\beta}^{\,\,\,\,\,\lambda} \, \partial_{\lambda} A\biggr) \biggr]
\nonumber\\
&+& \delta_{t}^{(2)} \sqrt{-G} \biggl(\frac{B}{2} \overline{G}^{\alpha\beta} \partial_{\alpha} \varphi \partial_{\beta} \varphi
- V(\varphi)\biggr) + \sqrt{-\overline{G}} \frac{B}{2} \delta_{t}^{(2)} G^{\alpha\beta} \,\partial_{\alpha} \varphi \partial_{\beta}\varphi \biggr\}.
\label{onePa}
\end{eqnarray}
Equation (\ref{onePa}) contains comparatively more terms than the analogous result
valid in the case $A\to 1$ (see e.g. Eq. (\ref{twoB})), where various contributions disappear and are replaced by a pair of total derivatives
that do not affect the final result. After some lengthy but straightforward algebra the explicit form of the second-order action reads:
\begin{eqnarray}
S_{t\, J} &=& \delta^{(2)} S_{J} = \frac{1}{8 \ell_{P}^2} \int d^{4}x \sqrt{-\overline{G}} \, \overline{G}^{\alpha\beta} \, \, A(\varphi) \, \partial_{\alpha} h^{\,\,\,\,(J)}_{i\,j}
\partial_{\beta} h^{(J)\,\,\,i\,j},
\nonumber\\
&-& \frac{1}{8\ell_{P}^2} \int d^{4} x \, a^2_{J} A(\varphi) \,h^{\,\,\,\,(J)}_{k\ell} \,\, h^{(J)\,\,k \ell} \bigg[ 4 {\mathcal H}^{\prime} + 2 {\mathcal M}^{\prime}
+ 2 ({\mathcal H}^2 + {\mathcal H} {\mathcal M} + {\mathcal M}^2)
\nonumber\\
&+& \frac{ 2 \ell_{P}^2}{A} \biggl( \frac{B}{2} \varphi^{\prime \, \,2} - V\, a_{J}^2\biggr)\biggr],
\label{onePb}
\end{eqnarray}
where ${\mathcal M} = A^{\prime}/A$.
The tensor amplitude $h^{(J)}_{ij}$ entering Eq. (\ref{onePb}) is defined directly
in the Jordan frame, i.e. $\delta_{t}^{(1)} G_{ij} = -a_{J}^2 \, h^{\,(J)}_{ij}$; $a_{J}$ is the scale factor appearing in
the $J$-frame, i.e. $\overline{G}_{\alpha\beta} = a^2_{J} \,\,\eta_{\alpha\beta}$.
The expression inside the square bracket of Eq. (\ref{onePb}) vanishes identically since it
corresponds to the $(ij)$ component of the background equations derived from the extremization of the action (\ref{onePa}) with
respect to the variation of the metric. By considering the tensor amplitude $h_{ij}^{\,\,(J)}$ and the background metric as independent variables the effective energy-momentum tensor in the $J$-frame follows from Eq. (\ref{onePb}) and it is:
\begin{equation}
T_{\mu\nu}^{(J)} = \frac{A}{4 \ell_{P}^2} \biggl[ \partial_{\mu} h^{\,\,\,\,(J)}_{k\ell} \partial_{\nu} \overline{h}^{(J)\,\,\,\, k\ell} -
\frac{1}{2} \overline{G}_{\mu\nu} \biggl( \overline{G}^{\alpha\beta} \partial_{\alpha} h^{\,\,\,\,(J)}_{k\ell} \partial_{\beta} \overline{h}^{(J)\,\,\,\, k\ell}\biggr) \biggr],
\label{threeP}
\end{equation}
in full analogy with the result of Eq. (\ref{fourB}). The energy density in the $J$-frame becomes
\begin{equation}
\rho_{gw}^{(J)} = \frac{A}{8 \ell_{P}^2 a^2_{J}} \biggl[ \partial_{\tau} h^{\,\,\,\,(J)}_{k\ell} \partial_{\tau} \overline{h}^{(J)\,\,k\ell}
+ \partial_{m} h^{\,\,(J)}_{k\ell}\partial^{m} \overline{h}^{(J)\,\,k\ell} \biggr].
\label{fourP}
\end{equation}
The conformal rescaling $A \, G_{\alpha\beta} = g_{\alpha\beta}$ brings the action (\ref{onePb}) from the $J$-frame to the Einstein frame:
\begin{equation}
a_{J}^2 \,A = a^2, \qquad A \,a^2_{J} h^{(J)}_{ij} = a^2\, h_{ij},
\label{fiveP}
\end{equation}
where the first equality follows from the conformal rescaling of the
background (i.e. $A \overline{G}_{\alpha\beta} = \overline{g}_{\alpha\beta}$)
while the second equality is implied by the relation between the first-order tensor fluctuations in the two frames
(i.e. $A \delta_{t}^{(1)} G_{ij} = \delta_{t}^{(1)} g_{ij}$). Equation (\ref{fiveP}) also requires that $h_{ij} = h^{\,(J)}_{ij}$ so that the action of Eq. (\ref{twoB}) coincides with the Einstein frame action of Eq. (\ref{threeB}). As a consequence, the energy densities in the two frames are related as:
\begin{equation}
\rho_{gw}^{(J)} = A^2 \, \rho_{gw}^{(E)} \equiv\, \frac{\sqrt{- \overline{g}}}{\sqrt{ - \overline{G}}} \rho_{gw}^{(E)},
\label{sixP}
\end{equation}
where $\rho_{gw}^{(E)}$ coincides with Eq. (\ref{rhoF}). Since the energy density of a radiation
plasma also scales as $\rho_{r}^{(J)} = A^2 \, \rho^{(E)}_{r}$, Eq. (\ref{sixP}) implies that $\rho_{gw}^{(J)}/\rho_{r}^{(J)}= \rho_{gw}^{(E)}/\rho_{r}^{(E)}$. This
observation ultimately implies that the spectral energy density in critical units is the same in the two conformally
related frames (i.e. $\Omega_{gw}^{(J)} = \Omega_{gw}^{(E)}$). Let us remark, as we close, that the class of scalar-tensor theories of Eq. (\ref{oneP}) is purely illustrative and the
effective action of the relic gravitons may also inherit further parity-violating contributions \cite{twentysix,twentyseven}; in this case a more general form
of the effective action has been proposed in \cite{twentyeight} and it is relevant for the
description of the polarized backgrounds of relic gravitons. This development is however
not central to the present considerations.
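As an illustrative aside, the factor $A^2$ appearing in Eq. (\ref{sixP}) is simply the ratio of the metric determinants in four dimensions; a minimal numerical sketch (with hypothetical values for $A$ and $a_{J}$) confirms the scaling:

```python
def det(m):
    """Determinant by Laplace expansion (fine for a 4x4 sketch)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A, a_J = 2.7, 1.3                  # hypothetical illustrative values
eta = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
G = [[a_J**2 * e for e in row] for row in eta]    # J-frame metric a_J^2 * eta
g = [[A * e for e in row] for row in G]           # conformal rescaling g = A * G

# In four dimensions det(A*G) = A^4 det(G), hence sqrt(-g)/sqrt(-G) = A^2,
# the factor relating rho_gw^(J) and rho_gw^(E) in Eq. (sixP).
ratio = (-det(g)) ** 0.5 / (-det(G)) ** 0.5
assert abs(ratio - A**2) < 1e-9
```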
\renewcommand{\theequation}{6.\arabic{equation}}
\section{Concluding remarks}
\setcounter{equation}{0}
\label{sec6}
The energy density of the relic gravitons is not uniquely and unambiguously defined. The various suggestions proposed so far
coincide when the rate of variation of the background geometry is smaller than the frequency
of the corresponding gravitons. However, in cosmological backgrounds the rate of variation
of the space-time curvature can also exceed the typical frequencies
of the gravitons. The energy-momentum pseudo-tensor of the relic gravitons should fulfil
four plausible criteria: it should be frame-invariant
and gauge-invariant, it should not violate the weak energy condition and it should be derived in general terms, i.e.
without explicitly demanding that the rate of variation of the background
geometry is either faster or slower than the frequencies of the corresponding gravitons.
An energy-momentum pseudo-tensor with these features
follows from the effective action of the relic gravitons
by considering the tensor fluctuations and the background metric as independent
variables. In its simplest realization the effective action coincides with
the result of Ford-Parker and it is defined in all the relevant physical regimes.
The spectral energy density in critical units derived within this approach is gauge-invariant and
frame-invariant, since its value in two conformally related frames does not change.
If we assume, a priori, that the typical frequencies of the gravitons must exceed the rate of
variation of the geometry we are implicitly following the logic of the Brill-Hartle-Isaacson pseudo-tensor
whose results are applicable when the wavelengths of the corresponding
gravitons are shorter than the Hubble radius. This proposal can be extended to encompass
wavelengths larger than the Hubble radius; if this is done the Brill-Hartle-Isaacson result coincides
with the effective energy-momentum tensor derived from the second-order variation of the
action. Finally the Landau-Lifshitz pseudo-tensor does not assume that the frequencies must exceed
the rate of variation of the geometry but its expression depends explicitly on the expansion rates.
Hence the actual results for the energy density and the pressure easily
follow from the specific evolution of the mode functions but they are difficult to
assess in general terms. In various realistic and semi-realistic situations the energy density
computed in the Landau-Lifshitz approach always becomes negative when the typical wavelengths
are larger than the Hubble radius. It seems difficult to attribute a profound physical
significance to this occurrence: since we explicitly demonstrated that there exist effective pseudo-tensors
not leading to a negative energy density, there are no reasons
to conclude that relic gravitons must inevitably violate the weak energy condition
as they evolve beyond the Hubble radius.
All in all the effective action of the relic gravitons discussed here
leads to a computable energy-momentum pseudo-tensor
that can be assessed in the asymptotic physical regimes even without a
detailed knowledge of the background evolution. In this context the energy density is positive
semi-definite and the whole description can be easily extended to a conformally related frame.
The other strategies examined in this investigation give reasonable results
only when the relevant wavelengths are shorter than the
Hubble radius. Even if the present conclusions have been reached in the framework
of a quantum mechanical averaging scheme rooted in the properties of the relic gravitons,
we argued that the same conclusions can be obtained by considering the tensor amplitudes
as isotropic random fields characterized by stationary autocorrelation functions.
\section*{Acknowledgements}
The author wishes to thank T. Basaglia, A. Gentil-Beccot and S. Rohr of the CERN Scientific Information Service
for their kind assistance.
\newpage
\section{Introduction}
\subsection{Lines on the cubic surface and quintic threefold}
This article concerns the lines contained in the Dwork pencil of quintic threefolds. These manifolds, which we denote by $\mathcal{M}_\psi$, are realised as hypersurfaces in $\mathbb{P}^4$ by the quintics
\begin{equation}
\sum_{j=1}^5 x_j^5 - 5 \psi\, x_1x_2x_3x_4x_5 ~=~0~.
\label{DworkPencil}\end{equation}
The study of the lines on quintic threefolds has a history going back to Schubert in the 19th century, who calculated that the {\em generic} quintic contains 2875 lines; in fact, Schubert performed the calculation twice, using different methods \cite{Schubert1, Schubert2}. The quintics of the Dwork pencil are, however, far from being generic and are known to contain continuous families of lines.
Before summarizing the history of our understanding of lines on the quintic it might be useful to recall that this study began as a natural extension of the classical study of lines on cubic surfaces. These lines were discovered by Cayley and Salmon. The story is famous: Cayley remarked in a letter that counting constants suggested a finite number, and Salmon gave immediately the number 27 in response to the letter. The results of this correspondence were published in 1849~\cite{CayleyLines, SalmonLines}.
The configuration of the lines and their intricate symmetries have been a topic of fascination to algebraic geometers ever since. A classical source of information is the book of Henderson~\cite{HendersonLines}.
There are differences between the cubic and the quintic; in order to appreciate these let us recall the most elementary facts. The Fermat cubic in $\mathbb{P}^3$ is given by the equation
\begin{equation}
\sum_{r=1}^4 y_r^3~=~0~.
\label{FermatCubic}\end{equation}
This surface contains the lines $y_r = (u, -\o^j u, v, -\o^k v)$, where $\o$ denotes a nontrivial cube root of unity and $1\leq j,k\leq3$. By permuting the coordinates we find 27 lines that lie in the cubic and this is the total number. The beautiful and surprising fact is that if we deform the cubic, the lines deform with the surface so that there are always 27 lines. For a generic cubic it will be hard to see the lines explicitly. In fact
C. Jordan~\cite{Jordan1870} showed that the Galois group on which the determination of the lines depends is in general a simple group of order 25,920 which can be identified with the Weyl group of the lattice $E_6$ modulo its center. A modern reference for these results is~\cite{HarrisGaloisGroups}.
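As an illustrative aside, the membership of these lines in the Fermat cubic, and the count of 27, can be verified with a short numerical sketch (the helper names are, of course, ours):

```python
import cmath
from itertools import permutations

w = cmath.exp(2j * cmath.pi / 3)        # nontrivial cube root of unity

def on_fermat_cubic(y):
    """True if the point y = (y_1,...,y_4) satisfies sum y_r^3 = 0."""
    return abs(sum(z**3 for z in y)) < 1e-9

# Every line y = (u, -w^j u, v, -w^k v), in any coordinate ordering,
# lies on the Fermat cubic: the cubes cancel in pairs since w^3 = 1.
for j in range(3):
    for k in range(3):
        for u, v in [(1.0, 0.4), (0.3, -1.1), (2.0, 1.0)]:
            pt = (u, -w**j * u, v, -w**k * v)
            for perm in permutations(pt):
                assert on_fermat_cubic(perm)

# Counting: a line is specified by an unordered pairing of the four
# coordinates (3 choices) and the two roots of unity (j, k) (9 choices).
n_lines = 3 * 3 * 3
assert n_lines == 27
```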
For the Dwork pencil \eqref{DworkPencil} the situation is already more complicated, even for the case of the Fermat quintic with $\psi=0$. For this case we may write down analogues of the lines that exist for the Fermat cubic
$$
x_j~=~(u, -\zeta^k u, v, -\zeta^\ell v,0)~,
$$
with $\zeta$ a nontrivial fifth root of unity and $1\leq k,\ell\leq5$. By permuting coordinates and taking all values of $k$ and $\ell$ we find 375 such lines, which we will refer to here as the isolated lines\footnote{These lines are often known as the exceptional lines; however, to refer to them as such here would invite confusion with the exceptional lines of the del Pezzo surface $\hbox{dP}_5$, to which we shall make frequent reference. These lines are indeed isolated for $\psi\neq 0$, but, as we shall see, they lie in continuous families of lines for~$\psi=0$.}.
Note that, since one of the coordinates vanishes identically, these lines lie in $\mathcal{M}_\psi$ for all $\psi$.
There are other lines also. Consider those of the form
$$
x_j~=~(u,-\zeta^k u, av, bv, cv)~~~\text{with}~~~a^5 + b^5 + c^5 ~=~ 0~.
$$
For given $k$, these give rise to a cone of lines, that all pass through the point $(1, -\zeta^k,0,0,0)$, and are parametrized by the curve $a^5+b^5+c^5=0$ in $\mathbb{P}^2$. By counting the different values of~$k$ and the inequivalent permutations of the coordinates we see that there are 50 cones of lines. The cones contain the isolated lines. In fact, the isolated lines are the lines in which the cones meet. For example
the cones $(u,-u,av,bv,cv)$ and $(\tilde{a}u,\tilde{b}u,v,-v,\tilde{c}u)$ meet in the isolated line
$(u,-u,v,-v,0)$. Each cone contains 15 isolated lines and meets 15 other cones in these lines. If two cones intersect, they do so in precisely one of the isolated lines.
In~\cite{MR1024767} it is shown that there are no further lines in $\mathcal{M}_0$ beyond the cones and the isolated lines and, furthermore, that, under a sufficiently general deformation, each isolated line splits into 5 lines and each cone breaks up into 20 discrete lines, yielding the correct total of
$50{\times}20 + 5{\times}375 = 2875$ discrete lines.
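As an aside, both the cone lines and the final count can be verified with a few lines of numerical code (an illustrative sketch; the sample point on $a^5+b^5+c^5=0$ is chosen arbitrarily):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)     # nontrivial fifth root of unity

def fermat_quintic(x):
    # psi = 0, so the product term of the Dwork pencil is absent
    return sum(z**5 for z in x)

# An arbitrary point on the plane curve a^5 + b^5 + c^5 = 0:
a, b = 1.0, 1.3
c = (a**5 + b**5) ** 0.2 * cmath.exp(1j * cmath.pi / 5)   # c^5 = -(a^5 + b^5)

# Every line of the cone (u, -zeta^k u, a v, b v, c v) lies on the
# Fermat quintic: u^5 - zeta^(5k) u^5 = 0 and (a^5 + b^5 + c^5) v^5 = 0.
for k in range(5):
    for u, v in [(1.0, 0.7), (0.2, -1.4)]:
        x = (u, -zeta**k * u, a * v, b * v, c * v)
        assert abs(fermat_quintic(x)) < 1e-9

# Under a generic deformation: 50 cones -> 20 lines each, and each of
# the 375 isolated lines splits into 5.
assert 50 * 20 + 5 * 375 == 2875
```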
A quintic threefold deforms with 101 parameters, and for generic values of these parameters there are, as has been observed, 2875 discrete lines. It is known, however, that there are families of quintic threefolds that deform with 100 parameters, for which the configuration of lines is degenerate~\cite{KatzDegenerations}.
Let us return now to the one parameter family $\mathcal{M}_\psi$ for $\psi\neq0$. The manifolds of the Dwork pencil have a large group of automorphisms isomorphic to $\mathcal{S}_5{\rtimes}\mathcal{G}$, where $\mathcal{S}_5$ is the permutation group acting on the five coordinates and $\mathcal{G}\cong (\mathbb{Z}/5\mathbb{Z})^3$ has the action
$$
(x_1,\, x_2,\, x_3,\, x_4,\, x_5)\longrightarrow (\zeta^{n_1}\, x_1,\, \zeta^{n_2}\, x_2,\, \zeta^{n_3}\, x_3,\,
\zeta^{n_4}\, x_4,\, \zeta^{n_5}\, x_5)~~~\text{with}~~~\sum_{j=1}^5 n_j=0\bmod 5~.
$$
In the 1980's one of the present authors (BvG) found special lines that lie in the $\mathcal{M}_\psi$. These eponymous lines are important in what follows so we shall pause, presently, to review their properties. For the moment we simply note that there are 5,000 such lines, so since this number exceeds 2875, there must be, possibly in addition to discrete lines, a continuous family~\cite{MR1085631}. It was subsequently proved by
Anca Musta\c{t}\textbreve{a}\xspace~\cite{Mustata:fk}, using sophisticated methods, that, for $\psi\neq 0$, $\mathcal{M}_\psi$ contains two continuous families of lines, parametrized by isomorphic curves, $\widetilde{C}_\pm$, of genus 626, and the 375 isolated lines as the only lines that do not lie in the continuous families. The genus 626 curves have Euler number $\chi=2-2{\times}626=-1250$. It follows from the theory of the Abel-Jacobi mapping (see some further remarks in \sref{ABcurves}) that under a generic deformation, each of these curves gives rise to 1250 discrete lines, so that, all together, there are again $375 + 2{\times}1250=2875$ lines.
One of our aims here is to parametrize the two families of lines, $\widetilde{C}_\pm$, explicitly. The surprise is that the explicit parametrization is not as complicated as might have been anticipated.
\subsection{The van Geemen lines}
If the $\mathcal{M}_\psi$ were to contain 2875 lines `as expected' we would want to find the $2875\, {-}\, 375 = 2500$ lines that are missing (assuming that the special lines are to be counted with multiplicity one). Now $\mathcal{S}_5$ has subgroups of order three, for example the subgroup that permutes $(x_2,x_4,x_5)$ cyclically (the reason for choosing this particular subgroup is to conform with a choice of parametrization that will come later). The number of missing lines is not divisible by three so some would have to be fixed (as lines but not necessarily pointwise) by the subgroup. This motivates seeking lines that are invariant under the proposed subgroup.
The points that are invariant under the subgroup are of the form
$$
(a,d,b,d,d)~,~~~(0,1,0,\o,\o^2)~, ~~~(0,1,0,\o^2,\o)~.
$$
It is immediate that the plane $(a,d,b,d,d)$ does not contain a line of $\mathcal{M}_\psi$ and that the line passing through
$(0,1,0,\o,\o^2)$ and $(0,1,0,\o^2,\o)$ does not lie in $\mathcal{M}_\psi$. Consider however the line
\begin{equation}
u\,\big(1,d,b,d,d\big) + (v - d u)\,\big(0,1,0,\o,\o^2 \big)~=~\big(u,v,bu,cu+\o v, -\o^2(cu-v)\big)~,
\label{VanGzero}\end{equation}
where $c=(1-\o)d$. This line lies in $\mathcal{M}_\psi$ provided
\begin{equation}
b~=~\frac32\, \psi\gamma^2~,~~~c~=~\frac12\,(1-\o)\psi\,\gamma~,
\label{VanGCondone}\end{equation}
with $\gamma$ a solution of the tenth order equation
\begin{equation}
\gamma^{10}-\frac19\,\gamma^5+\left(\frac{2}{3\psi}\right)^5=0~.
\label{VanGCondtwo}\end{equation}
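As an illustrative numerical check (not part of the derivation), one can solve Eq. \eqref{VanGCondtwo} as a quadratic in $\gamma^5$, build the line \eqref{VanGzero} from the conditions \eqref{VanGCondone}, and verify that the quintic \eqref{DworkPencil} vanishes identically along it:

```python
import cmath

def van_geemen_residual(psi, branch=+1):
    """Largest |quintic| along the line (u, v, b u, c u + w v, -w^2 (c u - v))
    with b, c built from a root gamma of the degree-10 condition."""
    w = cmath.exp(2j * cmath.pi / 3)              # nontrivial cube root of unity
    p = (2 / (3 * psi)) ** 5
    # gamma^10 - gamma^5/9 + p = 0 is quadratic in y = gamma^5:
    y = (1 / 9 + branch * cmath.sqrt(1 / 81 - 4 * p)) / 2
    gamma = y ** (1 / 5)      # any fifth root of y gives a (different) line
    b = 1.5 * psi * gamma**2
    c = 0.5 * (1 - w) * psi * gamma
    res = 0.0
    for u, v in [(1.0, 0.3), (0.7, -1.2), (2.1, 1.0)]:
        x = (u, v, b * u, c * u + w * v, -w**2 * (c * u - v))
        prod = 1.0
        for xi in x:
            prod *= xi
        res = max(res, abs(sum(xi**5 for xi in x) - 5 * psi * prod))
    return res

assert van_geemen_residual(2.0, +1) < 1e-8
assert van_geemen_residual(2.0, -1) < 1e-8
assert van_geemen_residual(0.5 + 1.7j) < 1e-8     # complex psi works as well
```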
Given that the lines \eqref{VanGzero}, subject to \eqref{VanGCondone} and \eqref{VanGCondtwo} lie in
$\mathcal{M}_\psi$ it is clear that so do lines of the form
\begin{equation}
\Big(u,\, v,\, \zeta^{-k-\ell}\, b u,\, \zeta^k (cu+\o v),\, -\zeta^\ell \o^2 (cu - v)\Big) ~,
\label{VanGone}\end{equation}
with $\zeta$ a nontrivial fifth root of unity, $1\leq k,\ell\leq 5$, since these are images of the previous line under the action of $\mathcal{G}$. The van Geemen lines are the lines that are equivalent to this more general form, up to permutation of coordinates. These, more general, lines are no longer invariant under the cyclic permutation of
$(x_2,x_4,x_5)$. However, since they are in the $\mathcal{S}_5{\rtimes}\mathcal{G}$ orbit of \eqref{VanGzero}, which has an
$\mathcal{S}_5$ stabilizer of order three, the more general lines each have a stabilizer of order three.
There are changes of coordinates that preserve the general form of a van~Geemen line. Setting
$u=\zeta^{k+\ell}\tilde{u}/b$ effectively interchanges the $u$ and $bu$ terms by bringing the line \eqref{VanGone} to the form
$$
\Big(\, \zeta^{-k-\ell}\, \tilde{b}\tilde{u},\, v,\, \tilde{u},\, \zeta^k (\tilde{c}\tilde{u}+\o v),\,
-\zeta^\ell \o^2 (\tilde{c}\tilde{u} - v)\,\Big)
$$
where
$$
\tilde{b}=\frac{\zeta^{2(k+\ell)}}{b}=\frac32\psi\,\tilde{\gamma}^2~~~\hbox{and}~~~
\tilde{c}=\zeta^{k+\ell}\,\frac{c}{b} = \frac12(1-\o)\psi\,\tilde{\gamma}~~~\hbox{with}~~~
\tilde{\gamma}=\zeta^{k+\ell}\frac{2}{3\psi\gamma}~,
$$
and in these relations $\tilde{\gamma}$ is another root of equation \eqref{VanGCondtwo}.
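That the transformed parameter is again a root follows because Eq. \eqref{VanGCondtwo}, viewed as a quadratic in $\gamma^5$, has roots whose product is $(2/(3\psi))^5$; a quick numerical sketch (with an arbitrary illustrative value of $\psi$) makes this explicit:

```python
import cmath

psi = 2.0                                  # illustrative value
zeta = cmath.exp(2j * cmath.pi / 5)
p = (2 / (3 * psi)) ** 5

def cond_two(gamma):
    """Left-hand side of the degree-10 condition on gamma."""
    return gamma**10 - gamma**5 / 9 + p

# Pick one root: the condition is a quadratic in y = gamma^5 whose two
# roots multiply to p, so p/y is automatically the other root.
y = (1 / 9 + cmath.sqrt(1 / 81 - 4 * p)) / 2
gamma = y ** (1 / 5)
assert abs(cond_two(gamma)) < 1e-12

# Hence zeta^m * 2/(3 psi gamma) is again a root for every m:
for m in range(5):
    gamma_t = zeta**m * 2 / (3 * psi * gamma)
    assert abs(cond_two(gamma_t)) < 1e-12
```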
If we return to \eqref{VanGone} and write
$$
v_1=v~,~~~v_2= cu+\o v~,~~~v_3=-\o^2(cu- v)
$$
and change coordinates and parameters by setting
$$
\tilde{v} = \zeta^k v_2~,~~~\tilde{b}=\zeta^{2k}b~,~~~\tilde{c}=\zeta^k c
$$
then we have
$$
\tilde{v}_1\buildrel\rm def\over = \tilde{v}=\zeta^k v_2~,~~~\tilde{v}_2\buildrel\rm def\over = \tilde{c}u+\o\tilde{v}=\zeta^k v_3~,~~~
\tilde{v}_3\buildrel\rm def\over = -\o^2(\tilde{c}u-\tilde{v})=\zeta^k v_1
$$
and the effect of the coordinate transformation is
$$
(u,\, v_1,\, \zeta^{-k-\ell}bu,\, \zeta^k v_2,\, \zeta^\ell v_3) =
(u,\, \zeta^{-k}\tilde{v}_3,\, \zeta^{2k-\ell}\tilde{b}u,\, \tilde{v}_1,\, \zeta^{\ell-k}\tilde{v}_2)~.
$$
Note that the change in $b$ and $c$ is consistent with $\gamma\to\tilde{\gamma}=\zeta^k\gamma$ and $\tilde{\gamma}$ is another root of~\eqref{VanGCondtwo}. In this way one may, in effect, rotate the quantities $v_j$ cyclically; however, we are left with two orderings of the $v_j$ that cannot be transformed into each other.
The counting is that, up to coordinate redefinitions, there are 10 ways to choose two positions for the components $u$ and $bu$ and a further two choices in the placing of the components $v_j$. There are two choices for $\o$, five for $\gamma$, given $\gamma^5$, and 25 ways to choose $k$ and $\ell$. Thus there are, in total,
$10{\times}2{\times}2{\times}5{\times}25=5,000$ van Geemen lines. In this accounting we consider
\eqref{VanGCondtwo} to be a quadratic equation for $\gamma^5$ and we do not count the two roots separately since these are interchanged by the coordinate transformation that interchanges $u$ and $bu$. The fact that there are 5,000 van Geemen lines while $\#(\mathcal{S}_5{\rtimes}\mathcal{G})=5!{\times}5^3=15,000$ again implies (though one can also check this directly) that each of these lines has a stabilizer of order exactly~three.
Since the number of lines, if discrete, must be 2875, counted with multiplicity, the fact that 5000 lines have been identified implies that, while there may be discrete lines, there must also be a continuous family of lines.
If we pick a particular value for $\gamma$ and act with an element of $\mathcal{G}$
as above on the line
$$
\big(u,\, v,\, b u,\, cu+\o v,\, - \o^2 (cu - v)\big)$$
and then set $u=\zeta^{-n_1}\tilde{u}$, $v=\zeta^{-n_2}\tilde{v}$, $\gamma=\zeta^{n_1-n_2}\tilde{\gamma}$ and make the corresponding changes $b=\zeta^{2(n_1-n_2)}\tilde{b}$ and $c=\zeta^{n_1-n_2}\tilde{c}$, then we obtain the line
$$
(\tilde{u},\, \tilde{v},\, \zeta^{n_1-2n_2+n_3}\tilde{b}\tilde{u},\,
\zeta^{n_4-n_2}(\tilde{c}\tilde{u}+\o\tilde{v}),\, -\zeta^{n_5-n_2}\o^2(\tilde{c}\tilde{u}-\tilde{v}))~.
$$
In this way we obtain 125 copies of a van Geemen line by acting with $\mathcal{G}$ on a particular line, provided that we understand $\mathcal{G}$ to act on $\gamma$ as indicated.
\subsection{The Wiman pencil}
In 1897 Wiman~\cite{Wiman} noted the existence of a remarkable plane sextic curve $C_0$, with four nodes, that is invariant under the permutation group $\mathcal{S}_5$. These automorphisms appeared the more mysterious owing to the fact that, of the 120 automorphisms, 96 are realised nonlinearly. The story was taken up by
Edge~\cite{Edge} after some eighty years, who noted that $C_0$ is ``only one, though admittedly the most interesting'' of a one parameter family of four-nodal sextics $C_\vph$ on which the group $\mathcal{S}_5$ acts. The action is such that the subgroup $\mathcal{A}_5$, of even permutations, preserves each $C_\vph$ while the odd permutations interchange $C_\vph$ with $C_{-\vph}$. The curve $C_0$ is known as the Wiman curve and the one parameter family $C_\vph$ is known as the Wiman pencil. Edge notes also that it is natural to blow up the plane in the four nodes of the curves. One obtains, in this way, smooth curves which, in this introduction, we will also denote by $C_\vph$. These smooth curves live in the quintic del Pezzo surface\footnote{There is a difference in convention between mathematicians and physicists in writing $\hbox{dP}_n$. A physicist tends to mean $\mathbb{P}^2$ blown up in $n$ points, in general position, while a mathematician often means the del Pezzo surface of degree $n$. In the `mathematician's' convention, which we use here, the surface which results from blowing up $\mathbb{P}^2$ in $n\leq 8$ points, in general position, is $\hbox{dP}_{9-n}$.} $\hbox{dP}_5$.
With our explicit parametrization of the families of lines $\widetilde{C}_\pm$, and benefit of hindsight, we find what should have been suspected from the outset: the curves $\widetilde{C}_\pm$ are 125:1 covers of the curves $C_{\pm\vph}$ of the Wiman pencil, where the parameter $\vph$ is related to the parameter of the quintic by
$$
\vph^2~=~\frac{32}{\psi^5} - \frac34~.
$$
The remarkable action of $\mathcal{S}_5$ on the curves of the Wiman pencil is seen to correspond to the symmetry of the configuration of the lines of the Dwork quintics.
\subsection{Layout of this paper}
In \sref{families} we present the explicit parametrization of the families of lines. This gives rise to curves
$C_{\pm\vph}^0$ whose resolutions have 125:1 covers $\widetilde{C}_\vph$ which parametrize the lines. The curves $C_{\pm\vph}^0$ are first presented as curves in $\mathbb{P}^1{\times}\mathbb{P}^1$ that have three nodes. It is noted that the two curves $C_\vph^0$ and $C_{-\vph}^0$ intersect in the three nodes and in 14 other points. Resolution of the nodes replaces each of the nodes by two points which continue to be points of intersection of the two curves. Thus there are 20 points of intersection and it is noted that each of these corresponds to a van Geemen line. It is natural to blow up $\mathbb{P}^1{\times}\mathbb{P}^1$ in the three points corresponding to the nodes in order to produce smooth curves $C_{\pm\vph}$. While it is not the case that $\mathbb{P}^1{\times}\mathbb{P}^1$ is $\mathbb{P}^2$ blown up in a point, it is the case that $\mathbb{P}^1{\times}\mathbb{P}^1$ blown up in three points is the same as $\mathbb{P}^2$ blown up in four points, which is the del Pezzo
surface $\hbox{dP}_5$. We review the geometry of $\hbox{dP}_5$ in \SS\ref{dp5}. The first fact to note is that the automorphism group of $\hbox{dP}_5$ is the permutation group $\mathcal{S}_5$. There is also an embedding $\hbox{dP}_5\hookrightarrow \mathbb{P}^5$ which is useful owing to the fact that the $\mathcal{S}_5$ transformations become linear, as automorphisms of $\mathbb{P}^5$, in this presentation of the surface. The surface $\hbox{dP}_5$ has 10 exceptional curves. These are the blow ups of the four points of $\mathbb{P}^2$ together with the six lines that pass through the six pairs of points. Three of these exceptional curves resolve the nodes of $C_\vph^0$ and so intersect the resolved curve in two points. These points correspond, as noted previously, to van Geemen lines. The $\mathcal{S}_5$ automorphisms permute the 10 exceptional curves so we expect that each of the 10 exceptional curves of $\hbox{dP}_5$ will intersect $C_\vph$ in two points corresponding to van Geemen lines. Checking that this is indeed so is the subject
of~\SS\ref{secondlook}. In order to properly understand the intersections of the exceptional curves with the $C_\vph$ we consider the Pl\"ucker coordinates of the lines of the quintic and the embedding
$\hbox{dP}_5\hookrightarrow \mathbb{P}^9$. We give also, in this section, a detailed discussion of the 125:1 cover $\widetilde{C}_\vph \to C_\vph$.
In \SS\ref{singularmanifolds} we turn to the form of the curves $C_\vph$ for the cases $\psi^5=0,1,\infty$, for which the manifold $\mathcal{M}_\psi$ either requires special consideration, in the case $\psi=0$, or is singular. For the conifold there are two values $\vph=\pm 5\sqrt{5}/2$ which correspond to $\psi^5=1$. For these, we find that the curve $C_\vph$ develops six nodes and may be resolved to a $\mathbb{P}^1$. Thus $\widetilde{C}_\vph$ is the union of 125
$\mathbb{P}^1$'s. The group $\mathcal{A}_5$ acts on each of these and we describe this action.
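As a trivial numerical aside, the conifold values of $\vph$ quoted above follow directly from the relation $\vph^2 = 32/\psi^5 - 3/4$:

```python
import math

def phi_squared(psi5):
    """phi^2 = 32/psi^5 - 3/4, written in terms of psi5 = psi^5."""
    return 32 / psi5 - 3 / 4

# At the conifold, psi^5 = 1, so phi^2 = 125/4 and phi = +/- 5*sqrt(5)/2.
assert phi_squared(1.0) == 125 / 4
assert abs(math.sqrt(phi_squared(1.0)) - 5 * math.sqrt(5) / 2) < 1e-12
```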
A number of technical points are relegated to appendices.
\subsection{The zeta function and the $\mathcal{A}$ and $\mathcal{B}$ curves}\label{ABcurves}
It is of interest to study the manifolds $\mathcal{M}_\psi$ of the Dwork pencil over the finite field $\IF_p$. The central object of interest, in this situation, is the $\zeta$-function. For general $\psi$, that is $\psi^5\neq 0,1,\infty$, this takes the form~\cite{Candelas:2004sk}
$$
\zeta_\mathcal{M}(T,\psi)~=~
\frac{R_{\bf 1}(T,\psi)\, R_\mathcal{A}(p^\r T^\r,\psi)^\frac{20}{\r}\, R_\mathcal{B}(p^\r T^\r,\psi)^\frac{30}{\r}}{(1-T)(1-pT)(1-p^2T)(1-p^3T)}~.
$$
In this expression the $R$'s are quartic polynomials in their first argument and, here,
$\r$ \hbox{$(=1,2~\text{or}~4)$} is the least integer such that $p^\r{-}1$ is divisible by 5.
The quartic $R_{\bf 1}$, for example, has the structure
$$
R_{\bf 1}(T,\psi)~=~1 + a_{\bf 1}(\psi)\,T + b_{\bf 1}(\psi)\,pT^2 + a_{\bf 1}(\psi)\,p^3T^3 + p^6 \,T^4
$$
with $a_{\bf 1}$ and $b_{\bf 1}$ integers that vary with $\psi\in\IF_p$. The other factors
$R_\mathcal{A}$ and $R_\mathcal{B}$ have a similar structure. The numerator of the $\zeta$-function corresponds to the
Frobenius action on $H^3(\mathcal{M}_\psi)$. It is intriguing that these factors are related to certain genus 4 Riemann curves $\mathcal{A}$ and $\mathcal{B}$. What is meant by this is that there is a genus 4 curve $\mathcal{A}$, that varies with $\psi$, with $\zeta$-function satisfying
$$
\zeta_\mathcal{A}(T,\psi)~=~\frac{R_\mathcal{A}(T,\psi)^2}{(1-T)(1-pT)}~,
$$
and there is an analogous relation for another curve $\mathcal{B}$. The intriguing aspect is that the curves $\mathcal{A}$ and
$\mathcal{B}$ are not directly visible in $\mathcal{M}_\psi$.
The theory of the Abel-Jacobi mapping provides a context for explaining this phenomenon. More precisely, a loop $\gamma \in H_1(\widetilde{C}_{\pm\vph})$ determines a 3-cycle $T(\gamma) \in \mathcal{M}_\psi$ which is the union of the lines corresponding to the points of $\gamma$. By duality one obtains a map
$a: H^3(\mathcal{M}_\psi) \to H^1(\widetilde{C}_{\pm\vph})$, whose
kernel should have dimension 4 and giving rise to the factor $R_{\bf 1}$, whereas its image should correspond
to the other factors of the numerator of the $\zeta$-function. How exactly the geometry of the $\mathcal{A}$ and $\mathcal{B}$ curves are related to $\widetilde{C}_\vph$ will be described elsewhere and will not be pursued in this paper.
We remark further that the map $a$ has as Hodge-component a map
$$
\a: H^1(\O^2_{\mathcal{M}_\psi}) \longrightarrow H^0\big(\O^1_{\widetilde{C}_{\pm\vph}}\big)~.
$$
Now the first space
can be interpreted as the $101$-dimensional space of infinitesimal deformations of the quintic $\mathcal{M}_\psi$, thought
of as the space of degree 5 polynomials $P$ modulo the Jacobian ideal. It follows from the work of H. Clemens that the zeros of the holomorphic 1-form $\alpha(P)$ on $\widetilde{C}_{\pm\vph}$ correspond precisely to the lines that can be infinitesimally lifted over the deformation of $\mathcal{M}_\psi$ determined by $P$. As the
curves~$\widetilde{C}_{\pm\vph}$ both have genus $626$, a holomorphic differential form has $2{\times}626-2=1250$ zeros. Thus we see that $2{\times}1250=2500$ lines will emerge from the curves $\widetilde{C}_{\pm\vph}$, which together with the $375$ isolated lines gives a total of $2875$ lines that we find on a generic~quintic.
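The counting in this paragraph can be reproduced in a few lines (an illustrative sketch of ours, not part of the mathematics of the paper):

```python
# Line count on a generic quintic, assembled from the genus of the
# two curves (illustrative arithmetic only).
genus = 626
zeros_per_form = 2 * genus - 2   # zeros of a holomorphic 1-form on a genus-g curve
assert zeros_per_form == 1250

lines_from_families = 2 * zeros_per_form   # one count for each of the two curves
isolated_lines = 375
total = lines_from_families + isolated_lines
assert total == 2875             # the classical number of lines on a generic quintic
```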
\newpage
\section{The families of lines}\label{families}
\subsection{Explicit parametrization}\label{Explpar}
Suppose now that, for a line, no coordinate is identically zero. Each $x_i$ is a linear combination of the coordinates $(u,v)$ on the line. At least two of the coordinates must be linearly independent as functions of $u$ and $v$. Let us take these independent coordinates to be $x_1$ and $x_2$; then we may take the line to be of the form
\begin{equation}
x~=~(u,\, v,\, bu + rv,\, cu+sv,\, du+tv)~.
\label{genline}\end{equation}
The condition that such a line lies in the quintic imposes the following conditions on the six coefficients:
\begin{equation}\begin{split}
b^5+c^5+d^5+1 ~&=~0\\[3pt]
b^4 r+c^4 s+d^4 t - b c d\, \psi ~&=~0\\[3pt]
2\,(b^3 r^2 + c^3 s^2 + d^3 t^2) - (c d r+b d s+b c t)\,\psi ~&=~0\\[3pt]
2\,(b^2 r^3 + c^2 s^3 + d^2 t^3) - (d r s+b s t+c r t)\,\psi ~&=~0\\[3pt]
b r^4+c s^4+d t^4 - r s t\, \psi ~&=~0\\[3pt]
r^5+s^5+t^5+1 ~&=~0~.\\
\end{split}\label{sixeqs}\end{equation}
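These conditions can be verified mechanically. The sketch below (ours, not from the text; it assumes the quintic is $x_1^5+\cdots+x_5^5-5\psi\,x_1x_2x_3x_4x_5=0$) expands the quintic along the line \eqref{genline} with exact integer arithmetic and confirms that the coefficient of $u^{5-k}v^k$ reproduces the $k$-th condition of \eqref{sixeqs}, up to an overall binomial factor:

```python
import random

def conv(f, g):
    """Multiply two homogeneous polynomials in (u,v), listed by power of v."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, c in enumerate(g):
            h[i + j] += a * c
    return h

def power(f, n):
    out = [1]
    for _ in range(n):
        out = conv(out, f)
    return out

random.seed(1)
b, c, d, r, s, t, psi = [random.randint(-9, 9) for _ in range(7)]

# the five coordinates of the line, each a linear form  a*u + e*v  ->  [a, e]
forms = [[1, 0], [0, 1], [b, r], [c, s], [d, t]]

S = [0] * 6                       # sum of fifth powers of the coordinates
for f in forms:
    S = [x + y for x, y in zip(S, power(f, 5))]
P = [1]                           # product of the five coordinates
for f in forms:
    P = conv(P, f)
Q = [si - 5 * psi * pi for si, pi in zip(S, P)]   # the quintic along the line

# the six stated conditions, and the factors they pick up on expansion
E = [b**5 + c**5 + d**5 + 1,
     b**4*r + c**4*s + d**4*t - b*c*d*psi,
     2*(b**3*r**2 + c**3*s**2 + d**3*t**2) - (c*d*r + b*d*s + b*c*t)*psi,
     2*(b**2*r**3 + c**2*s**3 + d**2*t**3) - (d*r*s + b*s*t + c*r*t)*psi,
     b*r**4 + c*s**4 + d*t**4 - r*s*t*psi,
     r**5 + s**5 + t**5 + 1]
factors = [1, 5, 5, 5, 5, 1]
assert all(q == f * e for q, f, e in zip(Q, factors, E))
```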
Although there are six equations, we will see that there is a one dimensional family of solutions for the coefficients. However, before coming to this, consider the special case that the coordinates $x_j$ are not all linearly independent as functions of $u$ and $v$. Such a case is equivalent to taking $r=0$, say, in \eqref{sixeqs}. With this simplification it is straightforward to solve the equations and we find that this case corresponds precisely to the van Geemen~lines.
If we now seek lines that are neither the isolated lines nor the van Geemen lines then we can take all the parameters $b,c,d,r,s,t$ to be nonzero and we also know that all the coordinates are linearly independent as functions of $u$ and $v$. It follows that for a general line, one that is not an isolated line or a van Geemen line, \eqref{genline} is, in fact, a general form. The first two coordinates of a general line are linearly independent so we choose coordinates so that $x_1=u$ and $x_2=v$, and then the remaining coordinates are linear forms as indicated. Note that we do not have to take separate account of permutations.
In order to simplify \eqref{sixeqs} it is useful to start by scaling the coefficients and the parameter
$$
b~=~c b'~,\quad d~=~cd'~,\quad r~=~sr'~,\quad t~=~st'~,\quad \psi~=~cs\psi'~.
$$
This removes $c$ and $s$ from the four central relations. Further scalings lead to additional simplification. This process leads to the following transformation of the variables and parameter
$$
r~=~s\k~,\quad b~=~c\k\t~,\quad d~=~c\k\t\d~,\quad t~=~s\k\t\d\sigma~,\quad
\psi~=~\frac{cs}{\d\k^2\t}\,\tilde{\psi}~.
$$
This has the advantage that, after cancellation, the equations become
\begin{equation}\begin{split}
1 + c^5\big[ 1 + \k^5\t^5 (1+\d^5)\big]~&=~0\\[12pt]
1 + \k^5\t^4\, (1\,+\,\d^5\,\sigma\t\,) ~&=~\tilde{\psi}\,\t\\[7pt]
1 + \k^5\t^3 (1+\d^5\sigma^2\t^2)~&=~\frac12\,\tilde{\psi}\,(1+\t+\sigma\t)\\[7pt]
1 + \k^5\t^2 (1+\d^5\sigma^3\t^3)~&=~\frac12\,\tilde{\psi}\,(1+\sigma+\sigma\t)\\[7pt]
1 + \k^5\t\, (1 + \d^5\,\sigma^4\t^4)~&=~\tilde{\psi}\,\sigma\\[10pt]
1 + s^5\big[1 + \k^5(1+\d^5\sigma^5\t^5)\big]~&=~0~.\\
\end{split}\label{sixeqstransf}\end{equation}
These equations depend on $\d$ and $\k$ only through $\d^5$ and $\k^5$. Combining the second, third, fourth and fifth relations with multiples $(1,-2,2,-1)$ results in the cancellation of both the constant and the $\tilde{\psi}$-dependent terms. In this way we find
\begin{equation}
\d^5~=~\frac{(1-\t)(1-\t+\t^2)}{\sigma\t^4 (1-\sigma)(1-\sigma+\sigma^2)}~.
\label{delta5}\end{equation}
Solving the central four relations also for $\k^5$ and $\tilde{\psi}$, we find
\begin{equation}
\k^5~=-\frac{(1-\sigma)(1-\sigma+\sigma^2)}{\t (1 - \sigma\t)(1 - \sigma\t + \sigma^2\t^2)}~~~\text{and}~~~
\tilde{\psi}~=~2\,\frac{(1-\sigma)(1-\t)}{1-\sigma\t+\sigma^2\t^2}~.
\label{psitilde}\end{equation}
Moreover the three relations in \eqref{delta5} and \eqref{psitilde} exhaust the content of the four central equations in \eqref{sixeqstransf}.
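This assertion is easy to test: substituting the closed forms \eqref{delta5} and \eqref{psitilde} back into the four central equations of \eqref{sixeqstransf}, all four hold identically in $\sigma$ and $\t$. A verification sketch of ours, with exact rational arithmetic:

```python
from fractions import Fraction as Fr

# Exact check (ours): the closed forms for delta^5, kappa^5 and psi-tilde
# satisfy all four central equations of (sixeqstransf) at generic points.
for sg, tu in [(Fr(2), Fr(3)), (Fr(5, 7), Fr(-3, 2)), (Fr(-4), Fr(9, 5))]:
    d5 = (1 - tu) * (1 - tu + tu**2) / (sg * tu**4 * (1 - sg) * (1 - sg + sg**2))
    k5 = -(1 - sg) * (1 - sg + sg**2) / (tu * (1 - sg*tu) * (1 - sg*tu + sg**2*tu**2))
    pt = 2 * (1 - sg) * (1 - tu) / (1 - sg*tu + sg**2*tu**2)
    assert 1 + k5 * tu**4 * (1 + d5 * sg * tu)       == pt * tu
    assert 1 + k5 * tu**3 * (1 + d5 * sg**2 * tu**2) == pt * (1 + tu + sg*tu) / 2
    assert 1 + k5 * tu**2 * (1 + d5 * sg**3 * tu**3) == pt * (1 + sg + sg*tu) / 2
    assert 1 + k5 * tu    * (1 + d5 * sg**4 * tu**4) == pt * sg
```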
The first and last relations in \eqref{sixeqstransf} now give $c$ and $s$ in terms of $\sigma$ and $\t$. Finally, on substituting what we know into the relation
$$
\psi^5~=~\frac{c^5s^5}{\d^5\k^{10}\t^5}\,\tilde{\psi}^5 ~,
$$
\vskip5pt
we obtain a constraint $F(\sigma,\t)=0$ where
\begin{equation}\begin{split}
&F(\sigma,\t) ~=~32\,\sigma^2\t^2\, (1{-}\sigma)^2 (1{-}\t)^2 (1{-}\sigma\t)^2 \,-\\[5pt]
&(1{-}\sigma{+}\sigma^2)(1{-}\t{+}\t^2)(1{-}\sigma\t{+}\sigma^2\t^2)\!
\Big[1{-}\t(1{+}\sigma){+}\t^2(1{-}\sigma{+}\sigma^2)\Big]\!\!
\Big[1{-}\sigma(1{+}\t){+}\sigma^2(1{-}\t{+}\t^2)\Big]\psi^5. \\[7pt]
\end{split}\label{F}\end{equation}
We are now able to give the lines in terms of $\sigma$ and $\t$. Let $\a(\sigma,\t)$ and $\b(\sigma)$ be given by the~relations
\begin{equation}\begin{split}
\a(\sigma,\t)^5~&=~\sigma^4\, (1-\sigma)(1-\t)(1-\sigma\t)\Big[1-\t(1+\sigma)+\t^2(1-\sigma+\sigma^2)\Big] \\[5pt]
\b(\sigma)^5~&=~(1-\sigma)(1-\sigma+\sigma^2)~.
\end{split}\notag\end{equation}
Then we have
\begin{equation}\begin{split}
x_1~&=~\a(\sigma,\t)\, u \\[5pt]
x_2~&=~\a(\t,\sigma)\, v \\[5pt]
x_3~&= -\t^\frac45\,\b(\sigma)\, \left(\sigma\, u + v\right) \\[5pt]
x_4~&=~~~\b(\sigma\t)\, \left(\sigma\, u + \t\, v\right) \\[3pt]
x_5~&= -\sigma^\frac45\,\b(\t)\, \left(u +\t\, v\right)~. \\
\end{split}\label{fam}\end{equation}
Musta\c{t}\textbreve{a}\xspace has shown that the family of lines has two irreducible components that are isomorphic. This requires $F$ to factorise and this is indeed the case. Setting
\begin{equation}
\vph^2~=~\frac{32}{\psi^5} - \frac34
\label{phirelation}\end{equation}
and
\begin{equation}\begin{split}
G~&=~3\sigma^2\t^2 - \frac12\sigma\t(1{+}\sigma)(1{+}\t)(1{+}\sigma\t) +(1{-}\sigma{+}\sigma^2)(1{-}\t{+}\t^2)(1{-}\sigma\t{+}\sigma^2\t^2) \\[5pt]
H~&=~\sigma\t(1{-}\sigma)(1{-}\t)(1{-}\sigma\t)\\
\end{split}\label{GandH}\end{equation}
we have
$$
F~=-\psi^5\, F_{+}F_{-}~~~\text{with}~~~F_{\pm}~=~G\pm \vph\, H~.
$$
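The factorisation can be confirmed exactly: since $F_{+}F_{-}=G^2-\vph^2H^2$ and $\vph^2$ is rational in $\psi^5$, the identity $F=-\psi^5\,(G^2-\vph^2H^2)$ involves only $\psi^5$ and can be tested with rational arithmetic. A verification sketch of ours:

```python
from fractions import Fraction as Fr

# Exact check (ours) of F = -psi^5 F_+ F_-, using F_+ F_- = G^2 - phi^2 H^2
# and phi^2 = 32/psi^5 - 3/4, so that only psi^5 enters.
def F(s, t, p5):
    return (32 * s**2 * t**2 * (1-s)**2 * (1-t)**2 * (1-s*t)**2
            - (1-s+s**2) * (1-t+t**2) * (1-s*t+s**2*t**2)
              * (1 - t*(1+s) + t**2*(1-s+s**2))
              * (1 - s*(1+t) + s**2*(1-t+t**2)) * p5)

def G(s, t):
    return (3*s**2*t**2 - Fr(1, 2)*s*t*(1+s)*(1+t)*(1+s*t)
            + (1-s+s**2)*(1-t+t**2)*(1-s*t+s**2*t**2))

def H(s, t):
    return s*t*(1-s)*(1-t)*(1-s*t)

p5 = Fr(7, 3)                  # an arbitrary rational value of psi^5
ph2 = 32 / p5 - Fr(3, 4)       # phi^2 from (phirelation)
for s, t in [(Fr(2), Fr(3)), (Fr(-1, 2), Fr(5, 4))]:
    assert F(s, t, p5) == -p5 * (G(s, t)**2 - ph2 * H(s, t)**2)
```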
The curves defined by the vanishing of $F_{\pm}(\sigma,\t)$ are smooth, apart from singularities at the point $(\sigma,\t)=(1,1)$. Near $(1,1)$ we have the asymptotic form
\begin{equation}
F_{\pm}(1+\epsilon_1,1+\epsilon_2,\psi)\,\sim\, \epsilon_1^2+\epsilon_1\epsilon_2+\epsilon_2^2~=~(\epsilon_1 - \o\epsilon_2)(\epsilon_1 - \o^2\epsilon_2)~,
\label{odp}\end{equation}
so these singularities are ordinary double points. The finite singularities of $F$ are therefore $(1,1)$ together with the solutions of $G=H=0$.
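The quadratic approximation can be checked numerically: with $\sigma=1+\epsilon_1$, $\t=1+\epsilon_2$, the quadratic part of $G$ is $\epsilon_1^2+\epsilon_1\epsilon_2+\epsilon_2^2$ while $H$ is of third order, so both $F_\pm=G\pm\vph H$ acquire the stated local form. A sketch of ours:

```python
from fractions import Fraction as Fr

# Numeric check (ours) of the local form of G and H near (1,1).
def G(s, t):
    return (3*s**2*t**2 - Fr(1, 2)*s*t*(1+s)*(1+t)*(1+s*t)
            + (1-s+s**2)*(1-t+t**2)*(1-s*t+s**2*t**2))

def H(s, t):
    return s*t*(1-s)*(1-t)*(1-s*t)

h = Fr(1, 10**4)
a20 = G(1 + h, Fr(1)) / h**2                         # coefficient of e1^2
a02 = G(Fr(1), 1 + h) / h**2                         # coefficient of e2^2
a11 = (G(1 + h, 1 + h) - G(1 + h, Fr(1)) - G(Fr(1), 1 + h)) / h**2
for val, target in [(a20, 1), (a02, 1), (a11, 1)]:
    assert abs(float(val) - target) < 1e-2
assert abs(float(H(1 + h, 1 + h) / h**2)) < 1e-2     # H has no quadratic part
```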
The statement that \eqref{fam} describes all general lines has the following consequence. Clearly, if a line can be expressed in the form \eqref{fam}, then any permutation of the coordinates $x_k$ yields another line; so if the parametrization is general then there must be a reparametrization of $(\sigma,\t)$ and
$(u,v)$ that yields this same effect. This is indeed so, and the following table gives four such transformations that suffice to generate the permutation group on the~$x_k$.
The table gives the action of the $\mathcal{S}_5$ generators on $G$ and $H$. We see that the odd elements of the group interchange $F_{+}$ with $F_{-}$, so each of $F_{\pm}$ is preserved by the alternating
subgroup~$\mathcal{A}_5$. Since the odd group elements exchange $F_{+}$ with $F_{-}$, the lines are parametrised by isomorphic curves.
Among the permutations of the $x_k$ there is a cyclic permutation of three coordinates which is of importance. The composition of the exchanges $x_3\leftrightarrow x_5$ and $x_4\leftrightarrow x_5$ generates a cyclic permutation of $(x_3,x_4,x_5)$. As an action on $(\sigma}\renewcommand{\S}{\Sigma}\newcommand{\vs}{\varsigma,\t)$ we have
$$
g_3(\sigma,\t)~=~\left(\t,\,\frac{1}{\sigma\t}\right)~.
$$
The action of $g_3$ is expressed most symmetrically by setting $\r=1/\sigma\t$, so that $\r\sigma\t=1$; then $g_3$ permutes
$(\sigma,\t,\r)$ cyclically. We may rewrite the polynomials $G$ and $H$ so as to make the symmetry under $g_3$ manifest. We have
\begin{equation}\begin{split}
\frac{G}{(\sigma\t)^2}~&=~3 - \frac12\,(1+\sigma)(1+\t)(1+\r) + (1-\sigma+\sigma^2)(1-\t+\t^2)(1-\r+\r^2)\\[5pt]
\frac{H}{(\sigma\t)^2}~&=~ - (1-\sigma)(1-\t)(1-\r)~.\\
\end{split}\label{P1cubed}\end{equation}
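That this rewriting agrees with \eqref{GandH} on the surface $\r\sigma\t=1$ can be checked with exact rational arithmetic. A verification sketch of ours:

```python
from fractions import Fraction as Fr

# Exact check (ours): with rho = 1/(sigma tau), G/(sigma tau)^2 and
# H/(sigma tau)^2 take the cyclically symmetric forms of (P1cubed).
def G(s, t):
    return (3*s**2*t**2 - Fr(1, 2)*s*t*(1+s)*(1+t)*(1+s*t)
            + (1-s+s**2)*(1-t+t**2)*(1-s*t+s**2*t**2))

def H(s, t):
    return s*t*(1-s)*(1-t)*(1-s*t)

for s, t in [(Fr(2), Fr(3)), (Fr(-5, 3), Fr(7, 4))]:
    r = 1 / (s * t)
    assert G(s, t) / (s*t)**2 == (3 - Fr(1, 2)*(1+s)*(1+t)*(1+r)
                                  + (1-s+s**2)*(1-t+t**2)*(1-r+r**2))
    assert H(s, t) / (s*t)**2 == -(1-s)*(1-t)*(1-r)
```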
\subsection{The curves in $\mathbb{P}^1{\times}\mathbb{P}^1$ defined by $F_{\pm}$\label{cp1p1}}
We have found curves in $\mathbb{C}^2$ defined by $F_{\pm}=0$ whose coverings parametrize lines on
$\mathcal{M}_\psi$, with parameters related by \eqref{phirelation}. Let us denote the locus $F_{+}{=}0$ by $C^0_\vph$; the locus $F_{-}{=}0$ is then $C^0_{-\vph}$.
Compactifying $\mathbb{C}^2$ to $\mathbb{P}^1{\times}\mathbb{P}^1$, we obtain a (singular) projective curve of bidegree $(4,4)$. To be explicit, this singular curve is the subset
$$
\left\{\Big((\sigma_1:\sigma_2),(\t_1:\t_2)\Big)\,\in\,\mathbb{P}^1\times\mathbb{P}^1:\quad
\sigma_2^4\t_2^4\, F_{\pm}\left(\frac{\sigma_1}{\sigma_2}, \frac{\t_1}{\t_2}\right)~=~0\,\right\}.
$$
The points at infinity are on the lines $\{\infty\}{\times} \mathbb{P}^1$ and $\mathbb{P}^1{\times}\{\infty\}$ (we write $\infty$ for $(1:0)\in\mathbb{P}^1$):
$$
\begin{array}{ccc}
(\infty,-\o)~,\quad & (\infty,-\o^2)~,\quad & (\infty,0)~,\\[3pt]
(-\o,\infty)~,\quad & (-\o^2,\infty)~,\quad & (0,\infty)~.\\
\end{array}
$$
By means of a Gr\"obner basis calculation one finds that, for the case that $\mathcal{M}_\psi$ is smooth, that is
$\psi^5\neq 1,\infty$, the curves each have three singular points,
$(\sigma,\t) = (1,1)$, $(0,\infty)$, $(\infty, 0)$.
The genus of a smooth bidegree $(d,d')$ curve is $(d-1)(d'-1)$, so if the curve were smooth it would have genus $3{\times} 3=9$. Owing to the singular points, its desingularization has genus at most $6$. The singular points are all related by the operations of \tref{S5transfs} and \eqref{odp} shows the singular points to be ordinary double points, hence the genus of the desingularization is~$9-3=6$.
Consider now the following list of the 17 points in which the curves $C^0_{\pm\vph}$ intersect (we abuse notation by not distinguishing between the curve in $\mathbb{C}^2$ and its compactification in $\mathbb{P}^1{\times}\mathbb{P}^1$).
$$
\begin{array}{>{}r<{~} >{}r<{~} >{}r<{~} >{}r<{~} >{}r<{~} >{}r<{~} >{}r<{~}}
&&(0,\infty)~, & (\infty,0)~, & (1,\,1)~, \\[3pt]
(0,-\o)~, & (0,-\o^2)~, & (1,-\o)~, & (1,-\o^2)~, & (-\o,-\o^2)~, & (-\o,\infty)~, & (-\o^2,\infty)~,\\[3pt]
(-\o, 0)~, & (-\o^2,0)~, & (-\o,1)~, & (-\o^2,1)~, & (-\o^2,-\o)~, & (\infty,-\o)~, & (\infty,-\o^2)~.\\
\end{array}
$$
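Since $C^0_{\vph}$ and $C^0_{-\vph}$ are given by $G+\vph H=0$ and $G-\vph H=0$, their intersection (for $\vph\neq 0$) is the locus $G=H=0$, independently of $\vph$. The finite points of the list are easily checked to lie on this locus; a numerical sketch of ours, with $\o=e^{2\pi \text{i}/3}$:

```python
import cmath

# Numeric check (ours): the finite intersection points listed above all
# satisfy G = H = 0, with w a primitive cube root of unity.
w = cmath.exp(2j * cmath.pi / 3)

def G(s, t):
    return (3*s**2*t**2 - 0.5*s*t*(1+s)*(1+t)*(1+s*t)
            + (1-s+s**2)*(1-t+t**2)*(1-s*t+s**2*t**2))

def H(s, t):
    return s*t*(1-s)*(1-t)*(1-s*t)

finite_pts = [(0, -w), (0, -w**2), (1, -w), (1, -w**2), (-w, -w**2),
              (-w, 0), (-w**2, 0), (-w, 1), (-w**2, 1), (-w**2, -w)]
for s, t in finite_pts:
    assert abs(G(s, t)) < 1e-9 and abs(H(s, t)) < 1e-9
```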
\begin{table}
\begin{center}
\begin{tabular}[H]{|>{$\displaystyle}c<{$} |>{$\displaystyle}c<{$} |>{$\displaystyle}c<{$}
|>{$\displaystyle}c<{$}|}
\hline
\multispan4{\vrule height20pt depth14pt width0pt\vrule\hfil\large $\mathcal{S}_5$ \, generators\hfil\vrule}\\
\hline
\vrule height20pt depth14pt width0pt (\sigma,\,\t)~~\hbox{transf.} & (u,\,v)~~\hbox{transf.}& \parbox{1.7cm}{\centering effect on coords.}&
\hbox{effect on $(G,H)$}\\
\hline\hline
\vrule height15pt depth10pt width0pt (\t,\, \sigma) & (v,\, u) &\parbox[c]{1.5cm}{$x_1\leftrightarrow x_2$\\$x_3\leftrightarrow x_5$}
& (G, H)\\ \hline
\vrule height15pt depth10pt width0pt \left(\frac{1}{\sigma},\,\frac{1}{\t}\right) & (-1)^\frac{1}{5}(\sigma\t)^\frac{8}{5}\,(v,\, u)
& x_1\leftrightarrow x_2
&\frac{1}{\sigma^4\t^4}\,(G, -H)\\ \hline
\vrule height15pt depth10pt width0pt \left(\frac{1}{\sigma},\, \sigma\t\right) & (-\sigma^\frac{9}{5}\, u ,\, -\sigma^{-\frac{1}{5}}\, v) & x_4\leftrightarrow x_5
&\phantom{\t^2} \frac{1}{\sigma^2}\, (G, -H)\\ \hline
\vrule height15pt depth10pt width0pt \left(\frac{1-\sigma\t}{1-\t},\, 1-\t\right)
& \left(\frac{(1-\t)\,(\sigma u+v)}{(\sigma\t)^\frac{1}{5}(1-\sigma\t)^\frac{4}{5}},\;
-\frac{(1-\sigma\t)^\frac{1}{5}\,v}{(\sigma\t)^\frac{1}{5}}\right) & x_1\leftrightarrow x_3
& \left(\frac{\t}{1-\t}\right)^2\!(G,-H)\\ \hline
\end{tabular}
\capt{6.2in}{S5transfs}{The action of four operations, on the coordinates and on the $F_\pm$, that generate
$\mathcal{S}_5$.}
\end{center}
\end{table}
\vfill
\begin{table}
\begin{center}
\resizebox{6.5in}{!}{
\begin{tabular}[H]{|>{$\displaystyle\hskip-5pt}c<{\hskip-5pt$} |>{$\displaystyle\hskip-5pt}c<{\hskip-5pt$}
|>{$\displaystyle\hskip-5pt}c<{\hskip-5pt$} |>{$\displaystyle\hskip-5pt}c<{\hskip-5pt$}|}
\hline
\multispan4{\vrule height20pt depth14pt width0pt\vrule\hfil\large Van Geemen Lines\hfil\vrule}\\
\hline
\vrule height20pt depth14pt width0pt (\sigma_*,\,\t_*) & \t~~\hbox{for}~~\sigma=\sigma_* + \ve & (u, v) & \hbox{Line}\\
\hline\hline
\vrule height15pt depth10pt width0pt (0,\,-\o) & {-}\o {+} 9\gamma^5\ve
& \left(\frac{c \tilde{u}}{\ve}, -\tilde{v}\right)
& \big(\tilde{u}, \tilde{v}, -\o^2(c\tilde{u}{-}\tilde{v}), c\tilde{u}{+}\o \tilde{v}, b\tilde{u}\big)\\
\hline
\vrule height15pt depth10pt width0pt (1,\,-\o) & - \o {+}9\o\gamma^5\ve
& \left(-\frac{\o^2\tilde{v}}{\ve^\frac{1}{5}}, - \frac{c\tilde{u}{+}\o\tilde{v}}{\ve^\frac{1}{5}}\right)
&\big(\tilde{v}, c\tilde{u}{+}\o\tilde{v},-\o^2(c\tilde{u}{-}\tilde{v}), b\tilde{u}, \tilde{u}\big)\\
\hline
\vrule height15pt depth10pt width0pt (-\o,-\o^2) & {-}\o^2 {+} \o \left(\frac{2}{3\psi}\right)^5\!\frac{\ve}{\gamma^{10}}
& \left(\frac{c\tilde{u}{+}\o\tilde{v}}{(1{-}\o^2)^\frac{1}{5}\ve^\frac{1}{5}},
- \frac{\o^2(c\tilde{u}{-}\tilde{v})}{(1{-}\o^2)^\frac{1}{5}\ve^\frac{1}{5}}\right)
&\big(c\tilde{u}{+}\o\tilde{v},-\o^2(c\tilde{u}{-}\tilde{v}), b\tilde{u},\tilde{v}, \tilde{u}\big)\\
\hline
&\vrule height15pt depth10pt width0pt 1{+}\o^2\ve{+}(\o{+}9\gamma^5) \ve^2
&\left(\frac{\o c\tilde{u}}{\ve^\frac{6}{5}}{-}\frac{(\o c\tilde{u}{-}\tilde{v})}{2\ve^\frac{1}{5}},
-\frac{\o c\tilde{u}}{\ve^\frac{6}{5}}{-}\frac{(\o c\tilde{u}{-}\tilde{v})}{2\ve^\frac{1}{5}}\right)
&\big(b\tilde{u}, \tilde{u}, \tilde{v}, -\o^2(c\tilde{u}{-}\tilde{v}), c\tilde{u}{+}\o\tilde{v}\big)\\
\smash{\raise27pt\hbox{$(1,\, 1)$}}
&\vrule height15pt depth10pt width0pt 1{+}\o\ve{-}(\o{+}9\gamma^5)\ve^2
&\left(-\frac{\o c\tilde{u}}{\ve^\frac{6}{5}}{+}\frac{(\o c\tilde{u}{+}\tilde{v})}{2\ve^\frac{1}{5}},
\frac{\o c\tilde{u}}{\ve^\frac{6}{5}}{+}\frac{(\o c\tilde{u}{+}\tilde{v})}{2\ve^\frac{1}{5}}\right)
&\big(\tilde{u}, b\tilde{u}, \tilde{v}, c\tilde{u}{+}\o\tilde{v}, -\o^2(c\tilde{u}{-}\tilde{v})\big)\\
\hline
\end{tabular}
}
\capt{5in}{vanGlines}{The limiting process that gives rise to the van Geemen lines.}
\end{center}
\end{table}
\newpage
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.3in]{plottab4.pdf}
\vskip0pt
\place{0.2}{3.45}{$\infty$}
\place{1.09}{3.47}{0}
\place{2.03}{3.47}{1}
\place{-0.05}{3.63}{$\infty$}
\place{0.05}{4.55}{0}
\place{0.05}{5.5}{1}
\place{3.55}{3.45}{$\infty$}
\place{4.41}{3.47}{0}
\place{5.35}{3.47}{1}
\place{3.28}{3.63}{$\infty$}
\place{3.37}{4.55}{0}
\place{3.37}{5.5}{1}
\place{0.2}{0.13}{$\infty$}
\place{1.09}{0.13}{0}
\place{2.03}{0.13}{1}
\place{-0.05}{0.3}{$\infty$}
\place{0.05}{1.22}{0}
\place{0.05}{2.17}{1}
\place{3.55}{0.13}{$\infty$}
\place{4.41}{0.13}{0}
\place{5.35}{0.13}{1}
\place{3.28}{0.3}{$\infty$}
\place{3.37}{1.22}{0}
\place{3.37}{2.17}{1}
\capt{6.25in}{animation}{These are plots of the curves $F_{+}=0$, in red, and $F_{-}=0$, in blue, for real
$(\sigma,\t)$ as $\psi^5$ ranges from 0 to 1. The diagram is misleading with respect to the points $(1,1)$, $(0,\infty)$ and $(\infty,0)$ which lie on the curve for all $\psi$ but for $\psi\neq 0$ the neighborhoods of the curve on which they lie intersect the plane on which $(\sigma,\t)$ are both real only in points. The figures show also the images of the 10 exceptional curves of $\hbox{dP}_5$. These are the 3 points $(1,1)$, $(0,\infty)$ and
$(\infty,0)$ together with the 7 lines $\sigma=0,1,\infty$, $\t=0,1,\infty$ and $\sigma\t=1$. After resolution, the exceptional curves corresponding to the points $(1,1)$, $(0,\infty)$ and $(\infty,0)$ intersect each of the curves $F_\pm =0$ in two points. So too do the other exceptional curves, though the intersections are in complex points not visible in the~figure. The resolved curves are smooth apart from the cases $\psi^5=0,1,\infty$. As
$\psi\to 0$ the curves tend to the exceptional lines of $\hbox{dP}_5$ and, as $\psi\to 1$, the curves
$F_\pm =0$ each develop 6 nodes corresponding to the limiting points shown in the final~figure.}
\end{center}
\end{figure}
The list of the points of intersection includes the three points, just discussed, in which the curves are both singular. Note that these points do not depend on $\vph$.
We know that at least some of the van Geemen lines must lie in the continuous families. Indeed, Musta\c{t}\textbreve{a}\xspace has shown that they all lie in the continuous families, since the only isolated lines are the 375 lines that we have identified as such.
The van Geemen lines are, however, not easy to see from the parametrisation \eqref{fam}. It is a surprising fact that these lines appear precisely as limits, as we approach the points in which the curves $C^0_{\pm\vph}$ intersect. For the points $(0,-\o)$, $(1,-\o)$, $(-\o,-\o^2)$ and the singular point $(1,1)$, this resolution is given in \tref{vanGlines}. All the other resolutions may be obtained from these by acting with the $\mathcal{S}_5$ operations of \tref{S5transfs}. Each of the nonsingular points of intersection $(\sigma_*,\t_*)$ gives rise to two van Geemen lines, one in each of the families. The two possible values
$$
\gamma^5~=~\frac19\left( \frac12\mp\frac{\text{i}\vph}{\sqrt{3}} \right)
$$
correspond, respectively, to the two curves $C^0_{\pm\vph}$. For the three singular points each curve has a self-intersection, so the resolution produces two lines for each curve; again the two choices for $\gamma^5$, as above, correspond, respectively, to the two curves $C^0_{\pm\vph}$. In this way we find
$14{\times}2{+}3{\times}4=40$ lines which become $40{\times}125=5000$ lines under the action of $\mathcal{G}$. Thus we have found all the van Geemen lines as resolutions of intersection of the curves $C^0_{\pm\vph}$.
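The bookkeeping here is elementary (arithmetic only, a sketch of ours):

```python
# Counting the van Geemen lines arising from the intersections of the curves.
nonsingular_intersections = 14      # of the 17 listed points, 3 are singular
singular_intersections = 3
lines_per_pair = 2 * nonsingular_intersections + 4 * singular_intersections
assert lines_per_pair == 40
assert 125 * lines_per_pair == 5000   # the orbit under the order-125 group G
```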
The appearance of fifth roots in \eqref{fam} indicates that we have to allow for different branches and the effect of fifth roots of unity. In \eqref{fam} we have to choose a fifth root of unity for each of $\sigma$, $\t$, $\a(\sigma,\t)$ and $\b(\sigma)$. This might suggest a $\mathbb{Z}_5^4$ covering; however, multiplying all the coordinates $x_j$ by a common factor is of no consequence, so there is in fact a $\mathbb{Z}_5^3$ covering and we
can allow for different branches of solutions by acting with $\mathcal{G}$ on a given branch. Somewhat surprisingly, monodromy around the singularities of $F_{\pm}$ does not generate~$\mathcal{G}$. Instead the monodromy simply multiplies all the components $x_j$ of a line by a common factor of $\zeta^k$ for some $k$. Thus there is no local ramification of the solution. We will give a better description of the 125:1 cover in \SS\ref{cover125}.
\subsection{A partial resolution of the singularities of $C^0_\vph$}
We have seen that the curves $C^0_\vph$ have three singular points. We wish to resolve these singularities. It is interesting to note that two of these singularities can be resolved very naturally. It was remarked previously that by introducing a new parameter $\r$, subject to the constraint $\r\sigma\t=1$, the equations $F_\pm=0$ can be written, as in \eqref{P1cubed}, so as to be manifestly symmetric under an
$\mathcal{S}_3$ subgroup of the permutation symmetry. Once we introduce $\r$, we are dealing with the nonsingular surface $\r\sigma\t=1$ embedded in $(\mathbb{P}^1)^3$. If written in homogeneous coordinates, this surface is given by the trilinear equation
\begin{equation}
\sigma_1\t_1\r_1~=~\sigma_2\t_2\r_2~.
\label{TrilinearEq}\end{equation}
The vanishing locus of a nonsingular trilinear polynomial in $(\mathbb{P}^1)^3$ is isomorphic to the del~Pezzo surface
$\hbox{dP}_6$, which we may think of as $\mathbb{P}^2$ blown up in three points. Two of these blow ups resolve the singularities at $(\sigma,\t)=(0,\infty)$ and $(\sigma,\t)=(\infty,0)$. Consider the first of these singularities. In homogeneous coordinates the location of the singularity is
$$
\Big((\sigma_1,\sigma_2),\,(\t_1,\t_2)\Big)~=~\Big((0,1),\,(1,0)\Big)~.
$$
For these values \eqref{TrilinearEq} is satisfied for all values of $(\r_1,\r_2)$, so the singular point has been replaced by an entire $\mathbb{P}^1$. A Gr\"obner basis calculation shows that the curves defined by $F_\pm=0$ are now only singular at the point $(\sigma,\t,\r)=(1,1,1)$.
The surfaces $\hbox{dP}_6$, $\mathbb{P}^1{\times}\mathbb{P}^1$ and $\mathbb{P}^2$ are all toric and it is clear from their respective fans that $\hbox{dP}_6$ is obtained from $\mathbb{P}^2$ by blowing up three points and may also be obtained from
$\mathbb{P}^1{\times}\mathbb{P}^1$ by blowing up two points (for the relation between the blow ups of $\mathbb{P}^2$ and
$\mathbb{P}^1{\times}\mathbb{P}^1$ see \SS\ref{P2blowup}). Since we wish to resolve the remaining singularity of the curves $F_\pm =0$, it is natural to blow up one further point. This brings us to a consideration of $\hbox{dP}_5$.
\newpage
\section{The quintic del Pezzo surface $\hbox{dP}_5$}\label{dp5}
\subsection{Blowing up three points in $\mathbb{P}^1{\times}\mathbb{P}^1$}
The curves $C^0_\vph$ in $\mathbb{C}^2$ define singular curves of bidegree $(4,4)$ in $\mathbb{P}^1{\times}\mathbb{P}^1$ which in general have three ordinary double points in $(\sigma,\tau)=(1,1),(0,\infty),(\infty,0)$. The blow up of $\mathbb{P}^1{\times}\mathbb{P}^1$ in these three points is the quintic del Pezzo surface $\hbox{dP}_5$.
The blow up is given by the polynomials of bidegree $(2,2)$ which are zero in these three points (see Section \ref{picdp5}).
The polynomials of bidegree $(2,2)$ form a $9=3^2$-dimensional vector space with basis $\sigma_1^{a}\sigma_2^b\tau_1^c\tau_2^d$, $a+b=2=c+d$.
The blow up map can thus be given by
$$
\Psi:\,\mathbb{P}^1\times\mathbb{P}^1\,\dashrightarrow\, \hbox{dP}_5\quad(\subset\mathbb{P}^5)~,\qquad
(\sigma,\tau)\,\longmapsto\,(z_0,\ldots,z_5)~,
$$
with the $6$ functions (written inhomogeneously for simplicity):
$$ z_0 := \sigma^2\tau^2 - 1,\quad
z_1 := \sigma\tau^2 - 1,\quad
z_2 := \sigma^2\tau - 1,\quad
z_3 := \sigma\tau - 1,\quad
z_4 := \tau - 1,\quad
z_5 := \sigma - 1.
$$
The image of $\mathbb{P}^1{\times}\mathbb{P}^1$ is $\hbox{dP}_5$, in its anti-canonical embedding into $\mathbb{P}^5$.
To find the inverse, notice that $(z_3-z_5,z_4)=(\sigma\tau-\sigma,\tau-1)=(\sigma,1)$ in $\mathbb{P}^1$.
Thus the inverse map $\Phi$, which is everywhere defined, is given by
$$
\Phi:\,\hbox{dP}_5\,\longrightarrow\,\mathbb{P}^1\times\mathbb{P}^1~,\qquad z:=(z_0,\ldots,z_5)\,\longmapsto\,
\Big((z_3-z_5,z_4),\,(z_3-z_4,z_5)\Big)
$$
(one should notice, however, that this formula for $\Phi$ only works on an open subset of $\hbox{dP}_5$; using certain quadratic relations between the $z_i$ which are satisfied on $\hbox{dP}_5$, one can extend $\Phi$ to all of $\hbox{dP}_5$).
The surface $\hbox{dP}_5\subset\mathbb{P}^5$ is defined by 5 quadratic equations.
An example of such an equation is
$$
q_0\,=\,0~,\qquad\mbox{with}\quad q_0\,:=\,(z_1-z_3)z_5\,-\,(z_2-z_3)z_4~,
$$
in fact $\Big((\sigma\t^2-1)-(\sigma\t-1)\Big)(\sigma-1)=\Big((\sigma^2\t-1)-(\sigma\t-1)\Big)(\t-1)$.
The image of the curve $C^0_\vph$ is defined by an additional quadratic equation,
which we will discuss in section \ref{eqndp}.
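The maps $\Psi$ and $\Phi$ and the quadric $q_0$ are easy to check by machine. The following Python sketch (the function names \texttt{Psi}, \texttt{Phi}, \texttt{q0} are ours, introduced only for this check) verifies, in exact rational arithmetic, that the displayed formula for $\Phi$ inverts $\Psi$ and that $q_0$ vanishes on the image of $\Psi$:

```python
from fractions import Fraction

def Psi(s, t):
    """The blow-up map, written in the inhomogeneous coordinates above."""
    return (s*s*t*t - 1, s*t*t - 1, s*s*t - 1, s*t - 1, t - 1, s - 1)

def Phi(z):
    """The displayed formula for the inverse, valid on an open subset."""
    z0, z1, z2, z3, z4, z5 = z
    return ((z3 - z5, z4), (z3 - z4, z5))

def q0(z):
    z0, z1, z2, z3, z4, z5 = z
    return (z1 - z3)*z5 - (z2 - z3)*z4

for s, t in [(Fraction(2), Fraction(3)), (Fraction(-1, 2), Fraction(5))]:
    z = Psi(s, t)
    (a1, b1), (a2, b2) = Phi(z)
    assert a1 == s*b1 and a2 == t*b2   # Phi recovers (sigma, tau) projectively
    assert q0(z) == 0                  # the quadric q0 vanishes on the image
```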
\subsection{Automorphisms of $\hbox{dP}_5$}\label{autdp}
As we will recall below, the group of automorphisms of the algebraic surface $\hbox{dP}_5$ is $\mathcal{S}_5$.
The action of $\mathcal{S}_5$ on $\mathbb{P}^1\times\mathbb{P}^1$, given by the birational transformations of the previous section, induces these automorphisms on $\hbox{dP}_5$.
The automorphisms of $\hbox{dP}_5$ act linearly on the $z_i$'s
(since they are the sections of the anti-canonical bundle of $\hbox{dP}_5$). Thus we get a much simpler description of the $\mathcal{S}_5$ action.
We will now determine the matrices of the four elements in $\mathcal{S}_5$ given in \tref{S5transfs}.
One should notice that, for example, $(z_0,\ldots,z_5)$ and $(-z_0,\ldots,-z_5)$ define the same point in $\mathbb{P}^5$, but they are distinct as points in $\mathbb{C}^6$. To obtain a linear representation of $\mathcal{S}_5$ on $\mathbb{C}^6$ one has to make the choices we give below.
The element $(12)(35)$ acts as $(\sigma,\tau)\mapsto(\tau,\sigma)$ on $\mathbb{P}^1{\times}\mathbb{P}^1$
and as
\begin{align*}
(12)(35):\quad z\,&\longmapsto\,(-z_0,\,-z_2,\,-z_1,\,-z_3,\,-z_5,\,-z_4)\\
\intertext{on $\mathbb{C}^6$. Notice that the trace of $(12)(35)$ on $\mathbb{C}^6$ is $-2$.
The second permutation is $(12)$ which acts as $(\sigma,\tau)\mapsto(\sigma^{-1},\tau^{-1})$, so as
$\big((\sigma_1,\sigma_2),\,(\tau_1,\tau_2)\big)\mapsto \big((\sigma_2,\sigma_1),\,(\tau_2,\tau_1)\big)$ in homogeneous coordinates. This gives the map, with trace zero,}
(12):\quad z\,&\longmapsto\, (-z_0,\,-z_0+z_5,\,-z_0+z_4,\,-z_0+z_3,\,-z_0+z_2,\,-z_0+z_1)~.\\
\intertext{The permutation $(45)$ acts non-linearly, $(\sigma,\tau)\mapsto(\sigma^{-1},\sigma\tau)$,
substituting this in the polynomials $z_i$ and multiplying by $-\sigma$ gives the action on $\mathbb{C}^6$:}
(45):\quad z\,&\longmapsto\, (-z_1+z_5,\,-z_0+z_5,\,-z_4+z_5,\,-z_3+z_5,\,-z_2+z_5,\,z_5)~.\\
\intertext{Finally we have $(13)$ acting as $(\sigma,\tau)\mapsto \big((1-\sigma\tau)/(1-\tau),\,1-\tau\big)$, substituting and multiplying by $(1-\tau)/\tau$ gives the linear map:}
(13):\quad z\,&\longmapsto \,
(-z_0+z_2+2z_3-2z_5,\, -z_1+2z_3+z_4-z_5,\,z_2-2z_5,\,z_3-z_5,\,z_4,-z_5)~.\\[-15pt]
\end{align*}
We have verified that this indeed gives a linear representation of $\mathcal{S}_5$ on $\mathbb{C}^6$. Computing the traces and comparing with the character table of $\mathcal{S}_5$ (see section \ref{eqndp}), we find that this representation is the unique irreducible $6$-dimensional representation of $\mathcal{S}_5$.
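This verification can be reproduced by machine. The sketch below (a verification aid, not part of the derivation) encodes the four linear maps as $6\times 6$ integer matrices, with rows giving the images of the components in terms of $(z_0,\ldots,z_5)$, and tests a few of the $\mathcal{S}_5$ relations together with the traces:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I6 = [[int(i == j) for j in range(6)] for i in range(6)]

# matrices of (12)(35), (12), (45), (13) read off from the formulas above
M1235 = [[-1,0,0,0,0,0],[0,0,-1,0,0,0],[0,-1,0,0,0,0],
         [0,0,0,-1,0,0],[0,0,0,0,0,-1],[0,0,0,0,-1,0]]
M12   = [[-1,0,0,0,0,0],[-1,0,0,0,0,1],[-1,0,0,0,1,0],
         [-1,0,0,1,0,0],[-1,0,1,0,0,0],[-1,1,0,0,0,0]]
M45   = [[0,-1,0,0,0,1],[-1,0,0,0,0,1],[0,0,0,0,-1,1],
         [0,0,0,-1,0,1],[0,0,-1,0,0,1],[0,0,0,0,0,1]]
M13   = [[-1,0,1,2,0,-2],[0,-1,0,2,1,-1],[0,0,1,0,0,-2],
         [0,0,0,1,0,-1],[0,0,0,0,1,0],[0,0,0,0,0,-1]]

for M in (M1235, M12, M45, M13):
    assert matmul(M, M) == I6          # all four elements have order two
assert matmul(M12, M45) == matmul(M45, M12)   # disjoint transpositions commute
P = matmul(M12, M13)                   # (12)(13) is a 3-cycle
assert matmul(P, matmul(P, P)) == I6
# traces agree with the character of the 6-dimensional representation
assert [sum(M[i][i] for i in range(6))
        for M in (M1235, M12, M45, M13)] == [-2, 0, 0, 0]
```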
\subsection{Exceptional curves in $\hbox{dP}_5$}
We obtained $\hbox{dP}_5$ as the blow up of $\mathbb{P}^1\times\mathbb{P}^1$ in the three points $(1,1),(0,\infty),(\infty,0)$. Thus on $\hbox{dP}_5$ we have three $\mathbb{P}^1$'s, the exceptional curves over these points. These are lines in $\mathbb{P}^5$ lying on $\hbox{dP}_5$. To find them, it suffices to find just one and then apply suitable elements of $\mathcal{S}_5$ to find the others.
The points $(a:b)$ on the exceptional curve over $(1,1)$ are the limit points of the image of $(\sigma,\tau)=(1+\epsilon a,1+\epsilon b)$ for $\epsilon\rightarrow 0$ under the blow up map. One finds the line
$$
E_{12}\,:\quad (2a+2b,\,a+2b,\,2a+b,\,a+b,\,b,\,a),\qquad (a,\,b)\,\in\,\mathbb{P}^1~.
$$
In fact, $\Phi(E_{12})=\big((a+b-a,b),\,(a+b-b,a)\big)=\big((b,b),\,(a,a)\big)=\big((1,1),(1,1)\big)$ which is indeed $(1,1)$ in inhomogeneous coordinates.
From equation \eqref{odp} we infer that the (strict transforms of the) curves $C^0_\vph$ intersect $E_{12}$ in two points, independent of $\vph$, which correspond to
$(a,b)=(\o,1)$ and $(\o^2,1)$. In the following we shall give parametrisations of the other exceptional curves. In each case, the parameters $(a,b)$ will be understood as the coordinates of a $\mathbb{P}^1$.
\begin{table}[t]
\begin{center}
\begin{tabular}[H]{| >{$}c<{$} | >{$}c<{$} | >{$}c<{$} | >{$}c<{$} |}
\hline
\multispan4{\vrule height18pt depth12pt width0pt\vrule\hfil\large Exceptional curves in $\hbox{dP}_5$ \hfil\vrule}\\
\hline
\vrule height15pt depth10pt width0pt \hbox{Name} & \hbox{Parametrization} & \hbox{Image in}~\mathbb{P}^1\times \mathbb{P}^1 & \hbox{Special points}\\
\hline\hline
\vrule height15pt depth10pt width0pt E_{12} &(2a+2b,a+2b,2a+b,a+b,b,a)
& (1,1)
& \hbox{singular point}\\
\hline
\vrule height15pt depth10pt width0pt E_{13} & (a,b,0,0,0,0)
& \t\,=\,\infty
&(-\o,\infty),\,(-\o^2,\infty) \\
\hline
\vrule height15pt depth10pt width0pt E_{14} & (0,0,a,0,0,b)
& (\infty,0)
& \hbox{singular point}\\
\hline
\vrule height15pt depth10pt width0pt E_{15} & (a,a,a,a,b,a)
& \sigma\,=\,0
& (0,-\o),\,(0,-\o^2)\\
\hline
\vrule height15pt depth10pt width0pt E_{23} & (a,a,a,a,a,b)
& \t\,=\,0
& (-\o,0),\,(-\o^2,0) \\
\hline
\vrule height15pt depth10pt width0pt E_{24} & (0,a,0,0,b,0)
& (0,\infty)
&\hbox{singular point} \\
\hline
\vrule height15pt depth10pt width0pt E_{25} & (a,0,b,0,0,0)
& \sigma\,=\,\infty
& (\infty,-\o),\,(\infty,-\o^2) \\
\hline
\vrule height15pt depth10pt width0pt E_{34} & (a,\,b,\,a,\,b,\,0,\,b)
& \tau\,-\,1\,=\,0
&(-\o,1),\,(-\o^2,1)\\
\hline
\vrule height15pt depth10pt width0pt E_{35} & (0,\,b,\,a,\,0,\,b,\,a)
& \sigma\t\,-\,1\,=\,0
& (-\o,-\o^2),\,(-\o^2,-\o)\\
\hline
\vrule height15pt depth10pt width0pt E_{45} & (a,a,b,b,b,0)
& \sigma\,-\,1\,=\,0
& (1,-\o),\,(1,-\o^2)\\
\hline
\end{tabular}
\capt{5.7in}{ExcCurves}{The ten exceptional curves in $\hbox{dP}_5$, showing their images in $\mathbb{P}^5$ and in $\mathbb{P}^1{\times}\mathbb{P}^1$. The table also gives the points in which the divisors meet the curve $C_\vph^0$.}
\end{center}
\end{table}
%
One verifies that the line $E_{12}$ is mapped into itself under the action of $(12),(34),(45)\in \mathcal{S}_5$, which generate a subgroup of order $2{\times}6=12$ in $\mathcal{S}_5$. Acting with elements of $\mathcal{S}_5$ on $E_{12}$ produces $9$ other lines, which are denoted by $E_{ij}=E_{ji}$, $1\leq i,j\leq 5$ and $i\neq j$, compatibly with the action of $\mathcal{S}_5$.
We now discuss some of these lines in $\hbox{dP}_5$ and their source in $\mathbb{P}^1\times \mathbb{P}^1$.
The line in $\hbox{dP}_5$ which is the exceptional curve over $(0,\infty)$ can be found with a limit as above and it is
\begin{align*}
E_{24}\,&:\quad (0,\,a,\,0,\,0,\,b,\,0)~,\\
\intertext{again one verifies easily that $\Phi(E_{24})=\big((0-0,b),\,(0-b,0)\big)=\big((0,1),\,(1,0)\big)$ which is~$(0,\infty)$. As $(12)(35)$ permutes $\sigma$ and $\tau$, and thus $(0,\infty)$ and $(\infty,0)$, the exceptional curve over $(\infty,0)$~is}
E_{14}\,&:\quad (0,\,0,\,a,\,0,\,0,\,b)~.\\
\end{align*}
\vskip-10pt
The rulings $(1,1){\times}\mathbb{P}^1$ and $\mathbb{P}^1{\times}(1,1)$ passing through $(1,1)$ are also mapped to lines, for example, in inhomogeneous coordinates:
$$
\Psi(1,\,a)\,=\,(a^2-1,\,a^2-1,\,a-1,\,a-1,\,a-1,\,0)\,=\,(a+1,\,a+1,\,1,\,1,\,1,\,0)~,
$$
which shows that the curve defined by $\sigma=1$ maps to a line, which is $E_{45}$, on $\hbox{dP}_5$:
\begin{align*}
E_{45}\,&:\quad (a+b,\,a+b,\,b,\,b,\,b,\,0)~.\\
\intertext{Similarly, the curve $\tau=1$ (obtained from the first by $(12)(35)\in\mathcal{S}_5$) maps to the line}
E_{34}\,&:\quad (a+b,\,b,\,a+b,\,b,\,0,\,b)~.\\
\intertext{In this way each of the three points $(1,1),(0,\infty),(\infty,0)$ provides us with three lines on
$\hbox{dP}_5$, so we already have $9$ lines. For example, the curve $\tau=\infty$ maps to the line}
E_{13}\,&:\quad (a,b,0,0,0,0)~.\\
\intertext{A final $10$th line is given by the image of the unique curve of bidegree $(1,1)$ passing through these three points. Its equation is $\sigma_1\t_1- \sigma_2\t_2=0$, i.e.\ $\sigma\tau=1$,
so it can be parametrized by $(a,a^{-1})$ and its image under $\Psi$ is
$$
\Psi(a,a^{-1})\,=\,(0,\,a^{-1}-1,\,a-1,\,0,\,a^{-1}-1,\,a-1)\,=\,
(0,\,1,\,-a,\,0,\,1,\,-a)~.
$$
Thus we have found the line}
E_{35}\,&:\quad (0,\,b,\,-a,\,0,\,b,\,-a)~.\\
\end{align*}
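Each parametrization in \tref{ExcCurves} can be tested against the quadric $q_0$ found above. The following sketch (a check only; the names are ours) verifies in exact arithmetic that every exceptional line lies on $q_0=0$:

```python
from fractions import Fraction

def q0(z):
    z0, z1, z2, z3, z4, z5 = z
    return (z1 - z3)*z5 - (z2 - z3)*z4

lines = {  # parametrizations from the table of exceptional curves
    'E12': lambda a, b: (2*a+2*b, a+2*b, 2*a+b, a+b, b, a),
    'E13': lambda a, b: (a, b, 0, 0, 0, 0),
    'E14': lambda a, b: (0, 0, a, 0, 0, b),
    'E15': lambda a, b: (a, a, a, a, b, a),
    'E23': lambda a, b: (a, a, a, a, a, b),
    'E24': lambda a, b: (0, a, 0, 0, b, 0),
    'E25': lambda a, b: (a, 0, b, 0, 0, 0),
    'E34': lambda a, b: (a, b, a, b, 0, b),
    'E35': lambda a, b: (0, b, a, 0, b, a),
    'E45': lambda a, b: (a, a, b, b, b, 0),
}
for name, line in lines.items():
    for a, b in [(Fraction(1), Fraction(2)), (Fraction(-3), Fraction(5))]:
        assert q0(line(a, b)) == 0   # every exceptional line lies on q0 = 0
```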
\subsection{The curves $C_\vph$ and the Wiman pencil}\label{eqndp}
We will use representation theory of $\mathcal{S}_5$ to find the equations of the curves $C_\vph$.
The coordinates on $\mathbb{P}^5$ are $z_0,\ldots,z_5$ and the action of $\mathcal{S}_5$ on these coordinates was given in section \ref{autdp}. Comparing the traces with \tref{CharTab}, we find that the linear functions are in the unique $6$-dimensional irreducible representation of $\mathcal{S}_5$.
\begin{table}
\begin{center}
\begin{tabular}[H]{|>{\hskip5pt $\bf}l <{$}|>{\hskip5pt}c<{\hskip5pt}|>{\hskip3pt} r<{\hskip8pt}|r<{\hskip15pt}
| >{\hskip5pt}r<{\hskip13pt}|r<{\hskip12pt}| r<{\hskip15pt}| r<{\hskip20pt}|}
\hline
\multispan8{\vrule height18pt depth12pt width0pt\vrule\hfil\large Characters of $\mathcal{S}_5$\hfil\vrule}\\ \hline
\vrule height15pt depth10pt width0pt&$e$&(12)\hspace*{-5pt}&(12)(34)\hspace*{-15pt}&(123)\hspace*{-10pt}&(1234)\hspace*{-12pt}&(12345)\hspace*{-15pt}&(123)(45)\hspace*{-20pt}\\
\hline\hline
\vrule height15pt depth10pt width0pt 1 &1&1&1&1&1&1&1\\ \hline
\vrule height15pt depth10pt width0pt 1_b &1&-1&1&1&-1&1&-1\\ \hline
\vrule height15pt depth10pt width0pt 4 &4&2&0&1&0&-1&-1\\ \hline
\vrule height15pt depth10pt width0pt 4_b &4&-2&0&1&0&-1&1\\ \hline
\vrule height15pt depth10pt width0pt 5 &5 &1 &1&-1& -1& 0& 1\\ \hline
\vrule height15pt depth10pt width0pt 5_b &5&-1 &1&-1 &1 & 0& -1\\ \hline
\vrule height15pt depth10pt width0pt 6 &6&0&-2&0&0&1&0\\ \hline
\end{tabular}
\vskip8pt
\capt{4.5in}{CharTab}{The character table of $\mathcal{S}_5$. This proves useful in identifying the image of $C_\vph$ in $\mathbb{P}^5$.}
\end{center}
\end{table}
The 21-dimensional representation $S_2$ of $\mathcal{S}_5$ on the quadratic functions $z_iz_j$
has character $\chi_2$ given by $\chi_2(g)=(\chi(g)^2+\chi(g^2))/2$,
where $\chi$ is the character of $\mathcal{S}_5$ on the linear functions.
Decomposing it into irreducible characters one finds:
$$
S_2\,=\,{\bf 1}\oplus{\bf 1_b}\oplus{\bf 4}\oplus2\cdot{\bf 5}\oplus{\bf 5_b}~.
$$
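This decomposition can be reproduced from \tref{CharTab} by computing inner products of characters. The sketch below (a verification aid) uses the conjugacy class sizes $1,10,15,20,30,24,20$ of $\mathcal{S}_5$, ordered as in the table, and the symmetric-square character $\chi_2(g)=(\chi(g)^2+\chi(g^2))/2$:

```python
from fractions import Fraction

# class representatives: e, (12), (12)(34), (123), (1234), (12345), (123)(45)
sizes = [1, 10, 15, 20, 30, 24, 20]          # conjugacy class sizes, sum 120
chars = {                                    # character table of S5
    '1':  [1,  1,  1,  1,  1,  1,  1],
    '1b': [1, -1,  1,  1, -1,  1, -1],
    '4':  [4,  2,  0,  1,  0, -1, -1],
    '4b': [4, -2,  0,  1,  0, -1,  1],
    '5':  [5,  1,  1, -1, -1,  0,  1],
    '5b': [5, -1,  1, -1,  1,  0, -1],
    '6':  [6,  0, -2,  0,  0,  1,  0],
}
# class indices of the squares g^2: e, e, e, (123), (12)(34), (12345), (123)
sq = [0, 0, 0, 3, 2, 5, 3]
chi = chars['6']
chi2 = [Fraction(chi[i]**2 + chi[sq[i]], 2) for i in range(7)]

def mult(name):                              # multiplicity <chi2, chi_name>
    return sum(sizes[i]*chi2[i]*chars[name][i] for i in range(7)) / 120

assert chi2 == [21, 3, 5, 0, -1, 1, 0]
assert [mult(n) for n in ['1', '1b', '4', '4b', '5', '5b']] == [1, 1, 1, 0, 2, 1]
```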
Let $G_z,H_z\in I_2$ be polynomials which span $\bf 1$ and $\bf 1_b$ respectively.
Thus $G_z$ is $\mathcal{S}_5$-invariant and hence $G_z=0$ defines an $\mathcal{S}_5$-invariant curve in $\hbox{dP}_5$.
Similarly, as $gH_z=\epsilon(g) H_z$,
where $\epsilon(g)$ is the sign of the permutation $g$,
the curve $H_z=0$ is $\mathcal{S}_5$-invariant.
Such polynomials can be found as $\sum_g g(z_0z_1)$ and
$\sum_g \epsilon(g)g(z_0z_1)$, where the sum is over all $g\in \mathcal{S}_5$.
To relate these polynomials in the $z_i$ to those in $\sigma}\renewcommand{\S}{\Sigma}\newcommand{\vs}{\varsigma,\t$, recall that
the $z_i$ correspond to a basis of the polynomials of bidegree $(2,2)$
which vanish in $(1,1),(0,\infty),(\infty,0)$. More precisely, using the map $\Psi$,
we have
$$
\Psi^*(z_0)\,=\,\sigma^2\t^2-1,\qquad\ldots,\quad \Psi^*(z_5)\,=\,\sigma-1~.
$$
The unique $\mathcal{S}_5$ invariant quadratic polynomial $G_z$ is:
\begin{equation}\begin{split}
G_z~:=~2z_0^2 &- 2z_0z_1 - 2z_0z_2 - 2z_0z_3 + z_0z_4 + z_0z_5 + 2z_1^2 + z_1z_2 - 2z_1z_3 \\
& - 2z_1z_4 + 2z_2^2 - 2z_2z_3 - 2z_2z_5 + 6z_3^2 - 2z_3z_4 - 2z_3z_5 + 2z_4^2 + z_4z_5 + 2z_5^2~,\\
\end{split}\notag\end{equation}
and we have verified that
$$
\Psi^*G_z\,:=\,G_z(\sigma^2\t^2-1,\ldots,\sigma-1)\,=\,G(\sigma,\t)~.
$$
Similarly, the unique quadratic polynomial $H_z$ invariant under $\mathcal{S}_5$, up to a sign, is
$$
H_z~:=~\frac13(-2z_0z_3 + z_0z_4 + z_0z_5 - z_1z_2 + 2z_1z_3 + 2z_2z_3 - 2z_3z_4 - 2z_3z_5 + z_4z_5)
~,
$$
and one finds that
$$
\Psi^*H_z\,=\,H(\sigma,\t)~,
$$
where, in the above, $G(\sigma,\t)$ and $H(\sigma,\t)$ are the polynomials \eqref{GandH}.
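One can check that $\Psi^*H_z$ factors as $\sigma\t(1-\sigma)(1-\t)(1-\sigma\t)$, in agreement with the factorization of $H$ given at the end of this section. The following sketch (a spot check in exact arithmetic; the names are ours) verifies this identity at a few rational points:

```python
from fractions import Fraction

def Psi(s, t):
    # the blow-up map in inhomogeneous coordinates
    return (s*s*t*t - 1, s*t*t - 1, s*s*t - 1, s*t - 1, t - 1, s - 1)

def Hz(z):
    # the quadratic polynomial H_z, including the overall factor 1/3
    z0, z1, z2, z3, z4, z5 = z
    return (-2*z0*z3 + z0*z4 + z0*z5 - z1*z2 + 2*z1*z3 + 2*z2*z3
            - 2*z3*z4 - 2*z3*z5 + z4*z5) / 3

for s, t in [(Fraction(2), Fraction(3)), (Fraction(-1, 2), Fraction(7)),
             (Fraction(4), Fraction(-2, 3))]:
    # H_z pulled back along Psi equals s t (1-s)(1-t)(1-st)
    assert Hz(Psi(s, t)) == s*t*(1 - s)*(1 - t)*(1 - s*t)
```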
The curves $C^0_\vph$ in $\mathbb{P}^1\times\mathbb{P}^1$ which have equation
$F_+=G+\vph H=0$
are thus the images under the blow down
$\Phi:\hbox{dP}_5\rightarrow \mathbb{P}^1\times\mathbb{P}^1$ of the curves
$C_\vph$ in $\hbox{dP}_5$ defined by $G_z+\vph H_z=0$.
This pencil of curves $\{C_\vph\}_{\vph\in\mathbb{P}^1}$ is known as the Wiman pencil.
The curve $C_0$, defined by $G_z=0$, is smooth and has automorphism group $\mathcal{S}_5$,
and is known as the Wiman curve.
The curves $C_\vph$ have a 125:1 cover $\widetilde{C}_\vph$ which parametrizes lines on the quintic threefolds $\mathcal{M}_\psi$ of the
Dwork pencil, where $\vph$ and $\psi$ are related as in Section \ref{Explpar}.
We will turn to this covering in section \ref{cover125}.
We conclude with one final remark on the
curve $C_\infty$ defined by $H_z=0$ in $\hbox{dP}_5$. The homogeneous polynomial defined by $H$ must be of bidegree $(4,4)$,
so besides the 5 factors in the dehomogenized equation, we should take into account two more:
$$
H(\sigma_1,\sigma_2,\t_1,\t_2)\,=\,\sigma_1\t_1(\t_2-\t_1)(\sigma_2-\sigma_1)(\sigma_2\t_2-\sigma_1\t_1)\sigma_2\t_2~.
$$
Thus the curve $H=0$ actually has $7$ irreducible components which all map to lines in $\hbox{dP}_5$ as we observed earlier. Moreover, $H=0$ passes through the three points which get blown up.
In fact one can check (see also \sref{picdp5}) that the curve $H_z=0$ in $\hbox{dP}_5$ has 10 irreducible components, which are the 10 lines in $\hbox{dP}_5$, each with multiplicity one.
On each of the $10$ lines in $\hbox{dP}_5$ there are two points which correspond to van Geemen
lines. Each line is invariant under a subgroup of order $12$ of $\mathcal{S}_5$ and these two points are the fixed point set of any of the two elements of order three in the subgroup.
\newpage
\section{A second look at the curves parametrizing the lines}\label{secondlook}
\subsection{The Pl\"ucker map}\label{plmap}
The explicit parametrization of the lines in the Dwork pencil was given in Eq~\eqref{fam}.
We will now study their Pl\"ucker coordinates, which will be the key to understanding the 125:1 cover $\widetilde{C}_\vph\rightarrow C_\vph$.
Given a line $l$ in $\mathbb{P}^4$ spanned by two points
$x=(x_1,\ldots,x_5)$ and $y=(y_1,\ldots,y_5)$,
its Pl\"ucker coordinates $\pi_{ij}(l)=-\pi_{ji}(l)$ are defined as:
$$
\pi_{ij}(l)\,:=\,x_iy_j\,-\,x_jy_i,\qquad l=\langle x,y\rangle\quad\subset \mathbb{P}^4.
$$
The $10$ Pl\"ucker coordinates $\pi_{ij}(l)$ with $1\leq i<j\leq 5$,
viewed as projective coordinates on $\mathbb{P}^9$, determine $l$ uniquely.
The van Geemen lines given in equation (\ref{VanGone}) are spanned by the rows of the matrix (the first corresponding to the point with $(u,v)=(1,0)$ and the second to $(u,v)=(0,1)$)
$$
\left(
\begin{array}{ l<{\hskip15pt} l >{\hskip5pt}c<{\hskip5pt} l<{\hskip5pt} l }
1 & 0 &\zeta^{-k-l}b &\zeta^k c &-\zeta^l \o^2 c \\[5pt]
0 & 1 & 0 &\zeta^k\o &\phantom{-}\zeta^l\o^2
\end{array}
\right)\!\raisebox{-14pt}{.}
$$
One notices that $\pi_{13}=0$, and that the other $\pi_{ij}$ are non-zero.
Recall that these lines are invariant under the cyclic permutation of
$(x_2,x_4,x_5)$.
As the other van~Geemen lines are obtained from this one by the action of the group $\mathcal{S}_5{\rtimes}\mathcal{G}$, we conclude that a van~Geemen line has exactly one Pl\"ucker coordinate $\pi_{ij}$ which is zero, the indices $ij$ are such that the stabilizer of the line is conjugated in $\mathcal{S}_5{\rtimes}\mathcal{G}$ to the cyclic subgroup generated by $(klm)\in\mathcal{S}_5$ where $\{i,j,k,l,m\}=\{1,\ldots,5\}$.
These indices $i,j$ can also be obtained as follows.
The point on $C_\vph$ determined by such a line lies on the intersection of this curve with the line $E_{pq}$ on $\hbox{dP}_5$, and the sets of indices $\{i,j\}$ and $\{p,q\}$ are the same. We will now see that, conversely, a line in the Dwork pencil for which one of the $\pi_{ij}$ is zero is a van Geemen line.
The elements of the group $\mathcal{G}$ act by multiplying the coordinates
$x_1,\ldots,x_5$ of $\mathbb{P}^4$ by fifth roots of unity. Hence the induced action
of $\mathcal{G}$ on the Pl\"ucker coordinates is also by multiplication by fifth roots of unity.
The fifth powers $\pi_{ij}^5$ of the Pl\"ucker coordinates are thus invariant under $\mathcal{G}$ and hence define functions on $C_\vph$; more precisely, the quotients $\pi_{ij}^5/\pi^5_{pq}$ define meromorphic functions on $C_\vph$.
These functions are easy to find.
The Pl\"ucker coordinates of the lines parametrized by the 125:1 cover of $C_\vph$ in \eqref{fam} are the determinants of the $2\times 2$ minors of the following matrix:
$$
\left(
\begin{array}{ c c >{\hskip3pt} l >{\hskip7pt}l >{\hskip3pt}l}
\alpha(\sigma,\t) & 0 & -\t^{4/5}\beta(\sigma)\sigma &\beta(\sigma\t)\sigma &-\sigma^{4/5}\beta(\t)\\[12pt]
0 &\;\alpha(\t,\sigma)\;&-\t^{4/5}\beta(\sigma) &\beta(\sigma\t)\t &-\sigma^{4/5}\beta(\t)\t \\
\end{array}
\right)\!\raisebox{-18pt}{.}
$$
From this we compute for example, with $\beta(\sigma)^5=(1-\sigma)(1-\sigma+\sigma^2)$:
$$
\pi_{35}^5\,=\,\Big(\t^{4/5}\beta(\sigma)\sigma{\cdot}\sigma^{4/5}\beta(\t)\t -
\sigma^{4/5}\beta(\t){\cdot}\t^{4/5}\beta(\sigma)\Big)^5
\,=\,\sigma^4\t^4\beta(\sigma)^5\beta(\t)^5(\sigma\t-1)^5~.
$$
In this way we get $10$ polynomials in $\sigma,\t$ of rather high degree,
but they do have a common factor, which is:
$$
p_c\,:=\,\sigma^4\t^4(\sigma-1)(\t-1)(\sigma\t-1)~.
$$
The quotients $\pi_{ij}^5/p_c$ can be homogenized to polynomials
of bidegree $(6,6)$ in $\sigma_1,\sigma_2$ and $\t_1,\t_2$:
$$
p_{ij}(\sigma_1,\sigma_2,\t_1,\t_2)\,:=\,(\sigma_2\t_2)^6(\pi_{ij}^5/p_c)(\sigma_1/\sigma_2,\t_1/\t_2)~.
$$
These $p_{ij}$ are reducible. Their irreducible components can be used to define meromorphic functions on $C_\vph$
with quite interesting zeroes and poles as we will see in the next sections and in the Appendix.
We introduce some notation for the irreducible components of the $p_{ij}$.
The polynomial defining the curve in $\mathbb{P}^1{\times}\mathbb{P}^1$ which maps to the line
$E_{ij}$ on $\hbox{dP}_5$ is denoted by $m_{ij}$, and we give them in \tref{Divs}.
We have two polynomials, of bidegree $(2,0)$ and $(0,2)$ respectively,
$$
l_1\,:=\,\sigma_1^2 - \sigma_1\sigma_2 + \sigma_2^2,\qquad
l_2\,:=\,\t_1^2 - \t_1\t_2 +\t_2^2
$$
which are reducible (\,$l_1=(\sigma_1+\o\sigma_2)(\sigma_1+\o^2\sigma_2)$\,) and $l_i=0$ intersects the curve $C_\vph^0$ in special points corresponding to van Geemen lines.
The intersection of $C_\vph$ with another curve is written as a divisor $\sum_P n_PP$, which is a formal finite sum with $P\in C_\vph$ and $n_P\in\mathbb{Z}$ the intersection multiplicity. We write $D_{ij}=P_{ij}+Q_{ij}$
for the divisor given by the pair of special points on $C_\vph$ which correspond to the van Geemen lines indexed by $ij$. Thus if $ij=45$, we can take $P_{ij}=(1,-\o)$ and $Q_{ij}=(1,-\o^2)$, viewed as points on the smooth model $C_\vph$ of $C^0_\vph$.
In case $ij=12$ we take the two points of $C_\vph$ which map to the singular point $(1,1)$ of $C^0_\vph$. On $\hbox{dP}_5$, these divisors are the intersection divisors of $C_\vph$ and the lines $E_{ij}$:
$$
D_{ij}\,:=\,C_\vph\,\cap\,E_{ij}~.
$$
However, pulling back the divisors $m_{ij}=0$, we do not get just the divisors $E_{ij}$: we also get contributions from the singular points.
In Table \ref{Divs} we give the precise results.
With this notation, table \ref{ExcCurves} shows that
$$
(l_1=0)\,\cap\,C_\vph \,=\,D_{13}\,+\,D_{23}\,+\,D_{34}\,+\,D_{35}~,
$$
applying $(12)(35)\in\mathcal{S}_5$ one obtains $(l_2=0)\cap\,C_\vph$.
Finally there are three polynomials, of bidegree $(2,2)$, which turn out to be reducible.
The first is
{\renewcommand{\arraystretch}{1.5}
$$
\begin{array}{rcl}
k_{14}&:=&\sigma_1^2\t_1^2 - \sigma_1^2\t_1\t_2 + \sigma_1^2\t_2^2 - \sigma_1\sigma_2\t_1\t_2 - \sigma_1\sigma_2\t_2^2 + \sigma_2^2\t_2^2\\
&=&
(\sigma_1\t_1 + \o^2\sigma_1\t_2 + \o\sigma_2\t_2)
(\sigma_1\t_1 + \o\sigma_1\t_2 + \o^2\sigma_2\t_2)~.
\end{array}
$$
}
The first factor defines a curve in $\mathbb{P}^1{\times}\mathbb{P}^1$ which can be parametrized by
$$
(s,t)\,\longmapsto\,\Big((\sigma_1,\sigma_2),\,(\t_1,\t_2)\Big)
\,=\, \Big((-\o t ,\o^2t+s),\,(s,t)\Big)~,\qquad(s,t)\,\in\,\mathbb{P}^1~.
$$
The intersection of this curve with $C^0_\vph$, which is defined by $F_+=0$, is obtained from
$$
F_+\big((-\o t ,\o^2t+s),\,(s,t)\big)\,=\,(2\vph - 2\o - 1)st^3(s-t)^3(s +\o^2t)~.
$$
One finds the points $(-\o^2,0)$ and $(\infty,-\o^2)$, which lie in the divisors $D_{23}$ and $D_{25}$ respectively, with multiplicity one, and the singular points
$(0,\infty)$ and $(1,1)$ with multiplicity three. So the curve must be tangent to one branch of $C^0_\vph$ in these points. The equation of the other factor is the complex conjugate, so we conclude that
$$
(k_{14}=0)\,\cap\,C_\vph \,=\,D_{23}\,+\,D_{25}\,+\,3D_{24}+3D_{12}~.
$$
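Both the factorization of $k_{14}$ and the statement that the parametrized curve lies on its first factor can be spot-checked in floating point. In the sketch below (a check only), \texttt{w} stands for the cube root of unity $\o$, and the sample points and tolerance are arbitrary:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)          # primitive cube root of unity

def k14(s1, s2, t1, t2):
    return (s1**2*t1**2 - s1**2*t1*t2 + s1**2*t2**2
            - s1*s2*t1*t2 - s1*s2*t2**2 + s2**2*t2**2)

def factor1(s1, s2, t1, t2):              # first factor of k14
    return s1*t1 + w**2*s1*t2 + w*s2*t2

def factor2(s1, s2, t1, t2):              # second (conjugate) factor
    return s1*t1 + w*s1*t2 + w**2*s2*t2

# k14 agrees with the product of its two factors at sample points
for s1, s2, t1, t2 in [(2+1j, -3+0.5j, 0.7-2j, 1.1+0.3j),
                       (1.5, -0.4+2j, 3j, 2-1j)]:
    prod = factor1(s1, s2, t1, t2) * factor2(s1, s2, t1, t2)
    assert abs(k14(s1, s2, t1, t2) - prod) < 1e-8

# the curve ((-w t, w^2 t + s), (s, t)) lies on the first factor
for s, t in [(2+1j, -3+0.5j), (0.7-2j, 1.1+0.3j)]:
    assert abs(factor1(-w*t, w**2*t + s, s, t)) < 1e-9
```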
\begin{table}
\begin{center}
\begin{tabular}[H]{|>{$}c<{$} | >{$~}l<{~$} |>{$}l<{$} |}
\hline
%
\multispan3{\vrule height18pt depth12pt width0pt\vrule\hfil\large Curves and intersection divisors with $C_\vph$ \hfil\vrule}\\
\hline
\vrule height15pt depth10pt width0pt \hbox{Name} & \hfil\hbox{Defining polynomial} &\hfil \hbox{Intersection with $C_\vph$}\\
\hline\hline
\vrule height15pt depth10pt width0pt k_{12} &\sigma_1^2\t_1^2 - \sigma_1\sigma_2\t_1\t_2 + \sigma_2^2\t_2^2
& D_{34}\,+\,D_{45}\,+\,3D_{14}\,+\,3D_{24}\\
\hline
\vrule height15pt depth10pt width0pt m_{13} & \t_2
& D_{13}\,+\,D_{24}\\
\hline
\vrule height15pt depth10pt width0pt k_{14} & (\sigma_1\t_1 + \o^2\sigma_1\t_2 + \o\sigma_2\t_2)
(\sigma_1\t_1 + \o\sigma_1\t_2 + \o^2\sigma_2\t_2)
& D_{23}\,+\,D_{25}\,+\,3D_{12}+3D_{24}\\
\hline
\vrule height15pt depth10pt width0pt m_{15} & \sigma_1
& D_{15}\,+\,D_{24}\\
\hline
\vrule height15pt depth10pt width0pt m_{23} & \t_1
& D_{23}\,+\,D_{14}\\
\hline
\vrule height15pt depth10pt width0pt k_{24} & (\sigma_1\t_1 + \o^2\sigma_2\t_1 + \o\sigma_2\t_2)
(\sigma_1\t_1 + \o\sigma_2\t_1 + \o^2\sigma_2\t_2)
& D_{13}\,+\,D_{15}\,+\,3D_{12}+3D_{14}\\
\hline
\vrule height15pt depth10pt width0pt m_{25} & \sigma_2
& D_{14}\,+\,D_{25}\\
\hline
\vrule height15pt depth10pt width0pt m_{34} & \t_1-\t_2
& D_{12}\,+\,D_{34}\\
\hline
\vrule height15pt depth10pt width0pt m_{35} & \sigma_1\t_1\,-\,\sigma_2\t_2
& D_{12}\,+\,D_{14}\,+\,D_{24}\,+\,D_{35}\\
\hline
\vrule height15pt depth10pt width0pt m_{45} & \sigma_1\,-\,\sigma_2
& D_{12}\,+\,D_{45}\\
\hline
\vrule height15pt depth10pt width0pt l_1 & \sigma_1^2 - \sigma_1\sigma_2 + \sigma_2^2
& D_{13}\,+\,D_{23}\,+\,D_{34}\,+\,D_{35}\\
\hline
\vrule height15pt depth10pt width0pt l_2 & \t_1^2 - \t_1\t_2 + \t_2^2
& D_{15}\,+\,D_{25}\,+\,D_{35}\,+\,D_{45}\\
\hline
\end{tabular}
\capt{4.0in}{Divs}{The meromorphic functions on $C_\vph$ that arise as irreducible factors of the quantities
$\pi_{ij}^5/p_c$ discussed in \sref{plmap}.}
\end{center}
\end{table}
%
\goodbreak
The other two polynomials are
$$
k_{24}\,:=\,\sigma_1^2\t_1^2 - \sigma_1\sigma_2\t_1^2 + \sigma_2^2\t_1^2 - \sigma_1\sigma_2\t_1\t_2 - \sigma_2^2\t_1\t_2 + \sigma_2^2\t_2^2
$$
which is obtained from $k_{14}$ by $(\sigma_1,\sigma_2)\leftrightarrow (\t_1,\t_2)$, i.e.\ by applying $(12)(35)$ in $\mathcal{S}_5$, and
$$
k_{12}\,:=\,\sigma_1^2\t_1^2 - \sigma_1\sigma_2\t_1\t_2 + \sigma_2^2\t_2^2\,=\,
(\sigma_1\t_1+\o \sigma_2\t_2)(\sigma_1\t_1+\o^2 \sigma_2\t_2)~.
$$
A computation, similar to the one above, shows that
$$
(k_{12}=0)\,\cap\,C_\vph \,=\,D_{34}\,+\,D_{45}\,+\,3D_{14}+3D_{24}~.
$$
\tref{Divs} gives the zero divisors of these $12$ polynomials.
\begin{table}
\begin{center}
\begin{tabular}[H]{|>{$}c<{$} |>{$~}l<{~$} || >{$}c<{$} | >{$~}l<{~$}|}
\hline
%
\multispan4{\vrule height18pt depth12pt width0pt\vrule\hfil\large Factorization of the $p_{ij}:=\pi_{ij}^5/p_c$ \hfil\vrule}\\
\hline
\vrule height15pt depth10pt width0pt \hbox{Name} & \hfil\hbox{Factorization} & \hbox{Name} & \hfil\hbox{Factorization} \\
\hline\hline
\vrule height15pt depth10pt width0pt p_{12} &m_{34}\,m_{35}\,m_{45}\,k_{14}\,k_{24}\quad
& p_{24} &m_{13}\,m_{15}\,m_{35}\,k_{12}\,k_{14} \\
\hline
\vrule height15pt depth10pt width0pt p_{13} & m_{13}^4\,m_{25}\,m_{45}\,k_{24}\,l_1
& p_{25} & m_{13}\,m_{34}\,m_{35}^4\,k_{14}\,l_2
\\
\hline
\vrule height15pt depth10pt width0pt p_{14} & m_{23}\,m_{25}\,m_{35}\,k_{12}\,k_{24}
&p_{34} & m_{15}\,m_{25}\,m_{34}^4\,k_{12}\,l_1
\\
\hline
\vrule height15pt depth10pt width0pt p_{15} & m_{15}^4\,m_{23}\,m_{34}\,k_{24}\, l_2
& p_{35} & m_{35}^4\,l_1\,l_2
\\
\hline
\vrule height15pt depth10pt width0pt p_{23} & m_{15}\,m_{23}^4\,m_{45}\,k_{14}\,l_1
& p_{45} & m_{13}\,m_{23}\,m_{45}^4\,k_{12}\,l_2
\\
\hline
\end{tabular}
\capt{5.5in}{divpl}{Factorizations of the $p_{ij}$ in terms of the functions of the previous table.}
\end{center}
\end{table}
%
In \tref{divpl} we list the factorizations of the bidegree $(6,6)$ polynomials $p_{ij}=\pi_{ij}^5/p_c$ on $\mathbb{P}^1{\times}\mathbb{P}^1$.
Using Table \ref{Divs} one can compute the divisors
$(p_{ij}=0)\cap C^0_\vph$.
For example, one finds that
\begin{equation}\begin{split}
\frac{\pi_{35}^5}{p_c}~&=~\frac{(\sigma\t)^4(1-\sigma)(1-\sigma+\sigma^2)(1-\t)(1-\t+\t^2)(\sigma\t-1)^5}
{\sigma^4\t^4(\sigma-1)(\t-1)(\sigma\t-1)}\\[10pt]
&=~(\sigma\t-1)^4(1-\sigma+\sigma^2)(1-\t+\t^2)~,
\end{split}\notag\end{equation}
so, after homogenizing and using the notation from \tref{Divs}, we get:
$$
p_{35}\,=\,m_{35}^4l_1l_2~.
$$
Thus we get:
$$(p_{35}=0)\cap C_\vph\,=\,
4(D_{12}+D_{14}+D_{24}+D_{35})+
D_{13}+D_{15}+D_{23}+D_{25}+D_{34}+2D_{35}+D_{45}~.
$$
In this way one can determine the divisor $(p_{ij}=0)\cap C_\vph$ for all $ij$.
One finds, quite remarkably, that they can be written as:
$$
(p_{ij}=0)\cap C_\vph\,=\,D_b\,+\,5D_{ij}~,
$$
where the divisor $D_b$ does not depend on ${ij}$ and is given by:
$$
D_b\,=\,
4(D_{12}+D_{14}+D_{24})+
D_{13}+D_{15}+D_{23}+D_{25}+D_{34}+D_{35}+D_{45}~,
$$
so it is the sum of the $10$ divisors $D_{ij}$, but the ones corresponding to the singular points of $C^0_\vph$ have multiplicity four.
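Since each divisor $D_{ij}$ consists of two points, we record for later use the degree of $D_b$:
$$
\deg D_b\,=\,4\cdot(3\cdot 2)\,+\,7\cdot 2\,=\,38~.
$$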
Given $l\in \widetilde{C}_\vph$, at least one of the $\pi_{ij}(l)$ is obviously non-zero.
Thus the zeroes of the common factor $p_c$ as well as the contribution
coming from the common zeroes of all $p_{ij}$'s are artefacts of the parametrization.
These common zeroes are the $10\cdot 2=20$ points in $D_b$ and these correspond to van Geemen lines. To find the fifth powers of the Pl\"ucker coordinates of these points, one must take a limit on the curve $C_\vph$ (alternatively, one can use
the explicit parametrizations of these lines given in equation (\ref{VanGone})).
The surface parametrizing the families of lines in all $\mathcal{M}_\psi$ is thus a
$(\mathbb{Z}/5\mathbb{Z})^3$-covering of the blow up of $\hbox{dP}_5$ in the 20 points~of~$D_b$.
An important consequence of our computations is that
the meromorphic functions $\pi_{ij}^5/\pi_{pq}^5$ have zeroes of order five in the two points of $D_{ij}$ and poles of order five in the two points of $D_{pq}$, since the (apparent) common zeroes of numerator and denominator cancel. These zeroes and poles correspond to van Geemen lines.
To be precise,
if $l$ is a line which has $\pi_{ij}(l)=0$, then it also has $\pi_{pq}(l)\neq0$ for some $pq$ and thus $\pi_{ij}^5(l)/\pi_{pq}^5(l)=0$, which shows that $l$
corresponds to a point in $D_{ij}$ on $C_\vph$,
hence $l$ is a van Geemen line.
\subsection{The 125:1 cover $\widetilde{C}_\vph$ of $C_\vph$}\label{cover125}
We will now describe the Riemann surface $\widetilde{C}_\vph$ more precisely.
We will use the construction of Riemann surfaces by means of polynomials
$g_nT^n+g_{n-1}T^{n-1}+\ldots+g_0$ where the $g_i$ are meromorphic functions on a given Riemann surface. For example, the Fermat curve defined by $X^n+Y^n+Z^n=0$ in $\mathbb{P}^2$ is the Riemann surface of the polynomial
$T^n+(x^n+1)$ where $x$ is the meromorphic function on $\mathbb{P}^1$
which gives the projective coordinate.
As we showed in section \ref{plmap},
the meromorphic function $\pi_{ij}^5/\pi_{pq}^5$,
viewed as function on $C_\vph$,
has zeroes of order 5 in the two points in $D_{ij}$
and it has poles of order five in the two points in $D_{pq}$ and is holomorphic, with no zeroes, on the rest of $C_\vph$.
We define the following meromorphic functions on $\widetilde{C}_\vph$ and $C_\vph$ respectively:
$$
f_{ij}\,:=\,\pi_{ij}/\pi_{45},\qquad
g_{ij}\,:=\,(\pi_{ij}/\pi_{45})^5\,=\,p_{ij}/p_{45}~.
$$
Notice that $f_{ij}/f_{pq}=\pi_{ij}/\pi_{pq}$, so we get all quotients of the Pl\"ucker coordinates from these $f_{ij}$. Obviously, $f_{ij}$ is a root of the polynomial
$T^5-g_{ij}$. The other roots of the polynomial are the $\zeta^af_{ij}$ with $a=1,\ldots,4$.
The Riemann surface of this polynomial can be described as follows. Choose a coordinate neighbourhood $U_x$ which is biholomorphic to a disc $\Delta\subset\mathbb{C}$, with $0\in \Delta$, and a local complex chart $z_x:U_x\rightarrow\Delta$ with $z_x(x)=0$.
If $x\in C_\vph$ and $g_{ij}$ has no poles or zeroes on $U_x$, this Riemann surface is locally the disjoint union of $5$ copies of $U_x$.
If $x$ is a zero of $g_{ij}$, we can write
$$
g_{ij}\,=\,z_x^5(1\,+\,a_1z_x\,+\,a_2z_x^2\,+\,\ldots )~.
$$
After shrinking $U_x$, we may assume that $1+a_1z_x+\ldots=h^5$ for a holomorphic function $h$ on $U_x$ without zeroes.
On $U_x$ the polynomial is $T^5-(z_xh)^5=\prod_a(T-\zeta^az_xh)$, showing that the Riemann surface is still a disjoint union of $5$ copies of $U_x$. Another way to argue is that the subset $\{(z,t)\in\Delta^2:t^5=z^5\}$ is a local model of the Riemann surface. This local model must be blown up in $(0,0)$ in order to get a smooth model.
For the poles of $g_{ij}$, which also have multiplicity five, one finds similarly that
the Riemann surface is a disjoint union of $5$ copies of $U_x$. We refer to \cite{ForsterRiemannSurfaces} for these constructions of Riemann surfaces.
Thus the fact that each zero and pole of $g_{ij}$ has multiplicity five guarantees that the Riemann surface $\mathcal{X}_{ij}$ of the polynomial $T^5-g_{ij}$ is an unramified covering of $C_\vph$. Since the $f_{ij}$ are meromorphic on $\widetilde{C}_\vph$, there must exist holomorphic maps
$$
\widetilde{C}_\vph\,\longrightarrow\,\mathcal{X}_{ij}\,\longrightarrow\,C_\vph
$$
with the first map of degree $25$. The second map, of degree $5$, is obtained from the polynomial
$T^5-g_{ij}$. By Musta\c{t}\u{a}'s results, $\widetilde{C}_\vph$ is a connected Riemann surface, hence also $\mathcal{X}_{ij}$ is connected. (Another way to see this is to notice that otherwise the polynomial
$T^5-g_{ij}$ would be reducible. As its roots are the $\zeta^af_{ij}$,
this would imply that there would be a meromorphic function $h_{ij}$ on $C_\vph$ with $h_{ij}^5=g_{ij}$. Then $h_{ij}$ would have poles, with multiplicity one, in only two points. Thus $C_\vph$ would be hyperelliptic. This is not the case, as the
map to $\mathbb{P}^5$ induced by $\Phi$ is the canonical embedding of $C_\vph$.)
This construction can be iterated by considering the polynomial $T^5-g_{pq}$ on $\mathcal{X}_{ij}$, for example, or by considering the fiber product of the Riemann surfaces $\mathcal{X}_{ij}$ and
$\mathcal{X}_{pq}$ over $C_\vph$. The main result is that $\widetilde{C}_\vph$ can be obtained with this construction from three suitably chosen $g_{ij}$, for example, the $g_{i5}$, $i=1,2,3$. We have already remarked that the covering is unramified over points in the $D_{ij}$. Over each such point, we have found $125$ van Geemen lines.
Unramified covers with group $\mathcal{G}\cong (\mathbb{Z}/5\mathbb{Z})^3$
correspond to normal subgroups $K\subset \pi_1(C_\vph)$
of the fundamental group of $C_\vph$ with quotient
$\pi_1(C_\vph)/K\cong \mathcal{G}$.
We will discuss an algebro-geometric approach to the covers with line bundles in \sref{restrictionmap}.
\newpage
\section{Special members of the Dwork pencil}\label{singularmanifolds}
\subsection{The case $\psi=0$, $\vph=\infty$}
Due to the relation defining $\vph^2$ in terms of $\psi$, the case $\psi=0$ corresponds to $\vph=\infty$. The quintic threefold $\mathcal{M}_\psi$ is the Fermat quintic, and we already discussed the lines on this threefold in the introduction. The curve $C_\infty$ is the union of the $10$ lines on $\hbox{dP}_5$. Now we would like to describe $\widetilde{C}_\infty$ in more detail, using the description of the general $\widetilde{C}_\vph$ as a 125:1 covering of $C_\vph$ defined by the polynomials $T^5-g_{ij}$, with $g_{ij}=p_{ij}/p_{45}$.
First we restrict our attention to the line $E_{15}\subset C_\infty$, which corresponds to the curve $\sigma=0$ in $\mathbb{P}^1{\times}\mathbb{P}^1$. Putting $\sigma=0$ in all the
$p_{ij}$, and noting that $m_{15}=\sigma$, we get:
$$
p_{15}\,=\,p_{23}\,=\,p_{24}\,=\,p_{34}\,=\,0\qquad\mbox{on}\quad E_{15}~.
$$
The restrictions of the other $p_{ij}$ are easy to compute; one finds:
$$
p_{12}\,=\,p_{25}\,=\,(\t-1)(\t^2-\t+1),\quad
p_{13}\,=\,p_{35}\,=\,\t^2-\t+1,\quad
p_{45}\,=\,-p_{14}\,=\,(\t+1)(\t^2-\t+1)~.
$$
Thus we get a 25:1 covering of $E_{15}\cong\mathbb{P}^1$, with coordinate $\t$, given by the two polynomials
$$
T^5\,+\,\t\,-\,1,\qquad U^5\,+\,\t\,+\,1~.
$$
The first polynomial has a zero, of order one, in $\t=1$ and a pole, of order one, in $\t=\infty$. The Riemann surface we get is the 5:1 cyclic cover of $\mathbb{P}^1$
totally branched over these points. In particular it is a $\mathbb{P}^1$ with coordinate $t$ satisfying $t^5=\tau-1$. Substituting this in the other polynomial,
we get the polynomial
$U^5+t^5+2$, which defines (up to rescaling) the degree 5 Fermat curve.
Using the $\mathcal{S}_5$ action, we find that $\widetilde{C}_\infty$
is the union of 10 Fermat curves of degree 5, as expected.
Now we consider the lines parametrized by the component of
$\widetilde{C}_\infty$ lying over $E_{15}$.
We already observed that for a line $l$ in this component we have $\pi_{ij}(l)=0$
for $ij=23,24,34$ since these $p_{ij}$ are zero. So if $l$ is spanned by $x,y$ then
$(x_2,x_3,x_4)$ and $(y_2,y_3,y_4)$ are linearly dependent, hence we may assume that $y=(y_1,0,0,0,y_5)$. As $l\subset\mathcal{M}_0$ we get $y_1^5+y_5^5=0$, so we can put $y_1=1$, $y_5=-\zeta^a$.
As also $p_{15}=0$, the vectors $(x_1,x_5)$ and $(y_1,y_5)$ are dependent and subtracting a suitable multiple of $y$ from $x$ we see that $x=(0,x_2,x_3,x_4,0)$, and still $l=\langle x,y\rangle$.
As $l\subset \mathcal{M}_0$ we get $x_2^5+x_3^5+x_4^5=0$, which, after permuting the coordinates 2 and 5, gives the family of lines parametrized by the quintic Fermat curve described in the introduction. Thus we succeeded in recovering the lines on the Fermat quintic threefold with our description of $\widetilde{C}_\vph$.
\subsection{The cases $\psi^5=1$, $\vph^2=125/4$}
In case $\psi^5=1$, the threefold $\mathcal{M}_\psi$ has 125 ordinary double points
and has been studied extensively in \cite{SchoenMumfordHorrocksBundle}.
For convenience we will take $\psi=1$, $\vph=5\sqrt{5}/2$;
the other cases are similar.
A computation shows that the corresponding curves $C_\vph^0$
acquire 6 more ordinary double points. Since each double point lowers the genus by one, the desingularizations, which we denote by $\widehat{C}_\vph$ to distinguish them from the (singular) curves $C_\vph$ in $\hbox{dP}_5$, are thus isomorphic to $\mathbb{P}^1$.
The $\pi_{ij}^5$, which are functions on $C_\vph^0$, are now fifth powers
of functions on $C_\vph^0$.
Hence the 125:1 cover $\widetilde{C}_\vph$ of $C_\vph$, given by the polynomials
$T^5-(\pi_{ij}/\pi_{45})^5$,
is the union of 125 copies of $\widehat{C}_\vph\cong \mathbb{P}^1$.
This corresponds to the fact that $\mathcal{M}_1$ contains 125 quadrics, each isomorphic to $\mathbb{P}^1{\times}\mathbb{P}^1$. Each quadric has two families of lines, given by the $\{x\}\times\mathbb{P}^1$ and $\mathbb{P}^1\times\{x\}$ where $x$ runs over~$\mathbb{P}^1$. Thus we get $2\cdot 125$ families of lines parametrized by $\mathbb{P}^1$ in $\mathcal{M}_1$.
These correspond to the components of the coverings $\widetilde{C}_\vph$
of the $C_\vph$ with $\vph^2=125/4$.
We will first discuss the lines on one of the quadrics, denoted by $Z$, in $\mathcal{M}_1$.
We also give explicitly a (complicated) map from
$\mathbb{P}^1$ to $C_\vph^0$
which is a birational isomorphism.
One can then check that the fifth powers of the Pl\"ucker coordinates
are now indeed fifth powers on $C_\vph^0$, hence the cover $\widetilde{C}_\vph\rightarrow C_\vph$ becomes reducible.
The threefold $\mathcal{M}_1$ has $125$ ordinary double points,
they are the orbit of the point $q:=(1,1,1,1,1)$ under the action of $\mathcal{G}$.
In the paper \cite{SchoenMumfordHorrocksBundle} it is shown that there are $125$ hyperplanes
(i.e.\ linear subspaces $\mathbb{P}^3\subset\mathbb{P}^4$), which form one $\mathcal{G}$-orbit,
each of which cuts $\mathcal{M}_1$ in a smooth quadratic surface and a cubic surface.
To see such a hyperplane, one writes the
equation (\ref{DworkPencil}) for $\mathcal{M}_1$ as a polynomial in
the elementary symmetric functions in $x_1,\ldots,x_5$:
$$
s_1\,:=\,\sum_{i=1}^5x_i~,\qquad s_2\,:=\,\sum_{i<j}x_ix_j~,\quad
\ldots~,\quad s_5\,:=\,x_1x_2x_3x_4x_5~.
$$
The equation is then
$$
\mathcal{M}_1\,:\qquad
s_2s_3\,+\,s_1\left(s_4-s_2^2-s_1s_3+s_1^2s_2- \smallfrac{1}{5}s_1^4\right)\,=\,0~.
$$
Thus the hyperplane $H$ defined by $s_1=0$ cuts $\mathcal{M}_1$ in the surface defined by
$s_2s_3=0$. One verifies that the quadric $Z\subset \mathcal{M}_1$ defined by
$s_1=s_2=0$ is a smooth quadric in $H\cong\mathbb{P}^3$.
In $H$ we have $x_5=-(x_1+\ldots+x_4)$, hence $2s_2=(\sum x_i)^2-\sum x_i^2$
restricts to
$-2\sum_{i\leq j} x_ix_j$.
Hence
$$
Z\,\cong\,\left\{(x_1,\ldots,x_4)\in\mathbb{P}^3:\;\sum_{i\leq j} x_ix_j\,=\,0\,\right\}~.
$$
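Explicitly, the middle step uses the identity obtained by substituting $x_5=-(x_1+\ldots+x_4)$:
$$
\sum_{i=1}^5x_i^2\,=\,\sum_{i=1}^4x_i^2\,+\,\Big(\sum_{i=1}^4x_i\Big)^2\,=\,2\sum_{1\leq i\leq j\leq 4}x_ix_j~.
$$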
First of all, we are going to find the van Geemen lines in $Z$.
Recall that these are the lines which are fixed under an element of order three
in $\mathcal{S}_5$. Taking this element to be $(123)$,
we thus try to find a constant $b$ such that the line, in $H$, parametrized by
$$
u\,\big(1,\o,\o^2,0,0 \big)\,+\,v\,\big(1,1,1,b,-(b+3)\big) ~=~
\big(u+v,\o u+ v,\o^2u+v,bv,-(b+3)v\big)~,
$$
lies in $Z\subset \mathcal{M}_1$. Next we impose $s_2=0$ and find the condition:
$$
b^2\,+\,3b\,+\,6\,=\,0,\qquad\mbox{hence}\quad
b_{\pm}\,=\,\frac{-3\pm\sqrt{-15}}{2}~,
$$
and we get two lines $l_\pm$ in $Z$ which meet in the point $(1,\o,\o^2,0,0)$.
From this one easily finds the other van Geemen lines on $Z$.
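The condition on $b$ can also be verified directly: since $1+\o+\o^2=0$ one finds $s_1=0$ identically, and the first three coordinates contribute $\sum_{k=0}^2(\o^ku+v)^2=3v^2$ to
$$
\sum_{i=1}^5x_i^2\,=\,3v^2\,+\,b^2v^2\,+\,(b+3)^2v^2\,=\,2(b^2+3b+6)\,v^2~,
$$
hence $s_2=\smallfrac12\big(s_1^2-\sum_ix_i^2\big)$ vanishes identically precisely when $b^2+3b+6=0$.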
The surface $Z$ is a nonsingular quadric in $\mathbb{P}^3$, hence is isomorphic to $\mathbb{P}^1{\times}\mathbb{P}^1$. We wish to parametrize $Z$, and this parametrization is simplified by making an appropriate choice of coordinate on the first $\mathbb{P}^1$, which we regard as the curve $C_\vph$ that parametrizes the lines. The group $\mathcal{A}_5$, which is isomorphic to the icosahedral group, acts on $C_\vph$ and it is convenient to choose a coordinate $z$ adapted to this action.

In the standard discussions of the automorphic functions of the icosahedral group~\cite{HigherTranscendentalFunctions, Forsyth} one considers the projection of an icosahedron onto the circumscribing sphere and then the further projection of the image onto the equatorial plane, taking the south pole as the point of projection. There are thus two natural choices of coordinate, depending on whether the orientation of the icosahedron is chosen such that the south pole coincides with a vertex or with the image of the center of a face. The standard treatments place a vertex at the south pole. We shall refer to this choice of coordinate, $w$, as the icosahedral coordinate.

It can be checked that the 20 van Geemen lines of $C_\vph$ correspond to projection onto the circumscribing sphere of the centers of the faces of the icosahedron, or equivalently, to the vertices of the dual dodecahedron. For our purposes, it is therefore natural to work with a `dodecahedral coordinate' $z$ that corresponds to aligning the icosahedron such that the south pole of the circumscribing sphere corresponds to a vertex of the dual dodecahedron. The two coordinates may be chosen such that the relation between them is
$$
\o z~=~\frac{w_\infty\, w +1}{w\, -\, w_\infty}~,
$$
where $w_\infty$ denotes the $w$-coordinate of the dodecahedral vertex at the north pole of the circumscribing sphere. This can be chosen to be
$$
w_\infty~=~\frac14\left( 3+\sqrt{5} + \sqrt{6(5+\sqrt{5})}\right)~.
$$
It is convenient to fix a primitive $15$-th root of unity $\eta=e^{2\pi i/15}$. Then we also have a fifth and a third root of unity, $\zeta,\o$ respectively, and expressions for $\sqrt{5}$ and $w_\infty$:
$$
\zeta\,:=\,\eta^3,\qquad \o\,:=\,\eta^5,\qquad
\sqrt{5}\,=\,1+\zeta+\zeta^{-1},\qquad w_\infty\,=\, -2\eta^7 + \eta^5 - \eta^4 + \eta^3 - \eta + 2~.
$$
Using the van Geemen lines, it is easy to find the following parametrization $\Upsilon:\mathbb{P}^1{\times}\mathbb{P}^1\rightarrow Z$,
where we reinstate the $x_5$ coordinate for symmetry reasons,
{\renewcommand{\arraystretch}{1.3}
$$
\Upsilon:\,(z,u)\,\longmapsto
\left(\begin{array}{c}
x_1\\ x_2\\ x_3\\ x_4\\ x_5
\end{array}\right)
\,=\,
\left(\begin{array}{r >{\hskip-7pt}l >{\hskip-7pt}l >{\hskip-7pt}l}
-(b+3)\,c u z &&& +\,5b\\
c u z& +\, 5 u & +\, \o dz &+\, 5\\
c u z&+\, 5 \o u &+\, d z& +\, 5\\
b\,c u z&&&-\,5(b+3)\\
c u z& +\,5\o^2 u& +\,\o^2 d z& +\, 5
\end{array}\right)~,
$$
}
where we make use of the following coefficients:
$$
\begin{array}{rcl}
b&:=&-\eta^7 +\eta^5-2 \eta^4+ \eta^3 - \eta^2 -2 \eta~,\\[3pt]
c&:=&-2 \eta^7 + \eta^5 - 2 \eta^4 + 2 \eta^3 - 2 \eta^2 - 2 \eta + 2~,\\[3pt]
d&:=&-10 \eta^7 + 10 \eta^3 - 10 \eta^2 + 5~,
\end{array}
$$
in particular, $b^2+3b+6=0$.
For fixed $z$, we have a map $\mathbb{P}^1\rightarrow Z$
whose image is a line $l_z$ in $Z$ parametrized by $u$.
One can check that the action of $\mathcal{A}_5$, which has generators of order 2, 3 and 5, on the coordinates
$x_1,\ldots,x_5$ corresponds to the action of the following
M\"obius transformations:
$$
M_2(z)\,:=\,-1/z,\qquad
M_3(z)\,:=\,\o z,\qquad
M_5(z)\,:=\,
\frac{(\zeta w_\infty^2+1)\, z + (\zeta-1)\, \o^2 w_\infty}{(\zeta-1)\, \o\, w_\infty\,z + (\zeta+w_\infty^2)}~,
$$
where the order 5 transformation $M_5$ is simply the transformation $w\to \zeta w$, when written in terms of the icosahedral coordinate.
The polynomial whose roots, together with $z=0$ and $z=\infty$, correspond to the dodecahedral vertices is
$$
8\, z^{18} - 57 \sqrt{5}\, z^{15} - 228\, z^{12} - 494 \sqrt{5}\, z^9 + 228\, z^6 - 57 \sqrt{5}\, z^3 - 8~.
$$
The van Geemen lines correspond to the nine pairs of roots $\{z_{*},-1/z_{*}\}$ together with $\{0,\infty\}$.
For the M\"obius transformation $M_k$ one has
$$
l_{M_k(z)}\,=\,g_k(l_z)~,\qquad\mbox{with}\quad
l_z\,:=\,\{\Upsilon(z,u)\,\in\,\mathbb{P}^4:\,u\in\,\mathbb{P}^1\}~,
$$
where, in this context,
$$
g_k\,=\,(14)(25),\;(253),\;(54321)~~~\text{for}~~~k\,=\,2,3,5~.
$$
The orbit of the line $l_z$, with $z=0$, which is a van Geemen line fixed by
$(253)$, consists of $20$ van Geemen lines.
On $\mathcal{M}_1$ there are also lines fixed by an element of order five in $\mathcal{A}_5$.
These are the lines that cause the extra double points on $C_\vph$.
The element of order five $(12345)\in\mathcal{A}_5$ has five isolated fixed points in $\mathbb{P}^4$, four of which lie on $Z=H\cap \mathcal{M}_1$, in fact they are singular points of $\mathcal{M}_1$.
They are, for $i=1,\ldots,4$:
$$
q_i\,:=\,(\, \zeta^{ij} \,)_{1\leq j\leq 5}~,\qquad
\big(\{q_1,q_2,q_3,q_4\}\subset \mbox{Sing}(\mathcal{M}_1)\big)~.
$$
One easily checks that $\lambda q_1+\mu q_2+\nu q_3$ lies on $Z$
only if $\mu=0$ or $\nu=0$ and thus the lines $\lambda q_1+\mu q_2$ and
$\lambda q_1+\nu q_3$ do lie in $Z$.
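Indeed, writing a point of the plane $\langle q_1,q_2,q_3\rangle$ as $x_j=\lambda\zeta^j+\mu\zeta^{2j}+\nu\zeta^{3j}$ and using $\sum_{j=1}^5\zeta^{kj}=0$ for $k\not\equiv 0\bmod 5$, one finds $s_1=0$ and
$$
s_2\,=\,-\smallfrac12\sum_{j=1}^5x_j^2\,=\,-\smallfrac12\Big(2\mu\nu\sum_{j=1}^5\zeta^{5j}\Big)\,=\,-5\mu\nu~,
$$
which vanishes precisely when $\mu=0$ or $\nu=0$.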
So the intersection of the $\mathbb{P}^2$ spanned by $q_1,q_2,q_3$ with the quadric $Z$
consists of two lines, each of which is spanned by two nodes:
$$
\langle q_1,\,q_2,\,q_3\,\rangle\,\cap\,Z\,=\,
\langle q_1,\,q_2\rangle\,\cup\,\langle q_1,\,q_3\rangle~.
$$
Both of these lines are invariant under the $5$-cycle $(12345)\in\mathcal{S}_5$,
and similarly we get lines $\langle q_2,\,q_4\rangle$ and $\langle q_3,\,q_4\rangle$.
The two lines $\langle q_1,\,q_2\rangle$, $\langle q_3,\,q_4\rangle$ on $Z$
are fixed by the 5-cycle and they do not intersect,
hence they are from the same ruling.
Applying $\mathcal{A}_5$, we get 12 lines, actually six pairs, with a stabilizer of order five
in each ruling. These create the 6 double points in~$C_\vph$.
We now briefly discuss the curve $C_\vph^0$ and a parametrization.
The curve $C_{\vph}^0$ has 6 more ordinary double points where $(\sigma,\t)$
take the values
$$
\left( \pm\smallfrac12 (1+\sqrt{5}),\, \pm \smallfrac12 (1+\sqrt{5}) \right)\, ,~
\left( \pm \smallfrac12 (1+\sqrt{5}),\, \smallfrac12 (3-\sqrt{5}) \right)~~\text{and}~~
\left(\smallfrac12 (3-\sqrt{5}) ,\, \pm \smallfrac12 (1+\sqrt{5})\right)\, ,
$$
where, in the first expression, the same sign is chosen for each component.
A parametrization of $C_\vph^0$ is given by
$$
\mathbb{P}^1\,\longrightarrow\,C_\vph^0,\qquad
z\,\longmapsto\,(\sigma,\t)\,=\,\big(R_1(z),\,R_2(z)\big)
$$
with rational functions
\begin{equation}\begin{split}
R_1(z,\vph)~&=~\frac{20z^4 + (15 - 2\vph) z^3 + 3(5 + 2\vph)z^2 - (15 - 2\vph)z + 20}
{6z(5z^2 + 2\vph z - 5)}~,\\[10pt]
R_2(z,\vph)~&=~\frac{1}{R_1(-\o^2 z,-\vph)}~.
\end{split}\notag\end{equation}
In particular, one has $F_+\big(R_1(z),R_2(z)\big)=0$ for all $z$.
The $\mathcal{A}_5$-action on $C^0_\vph$ lifts to an action of $\mathcal{A}_5$ by M\"obius transformations on $\mathbb{P}^1$.
Generating transformations are the same $M_k$ as given earlier
and one has
$$
\Big( R_1\big(M_k(z)\big),\,R_2\big(M_k(z)\big)\Big)\,=\,g_k\big(R_1(z),\,R_2(z)\big)~,
$$
where again,
$$
g_k\,=\,(14)(25),\;(253),\;(54321)~~~\text{for}~~~k\,=\,2,3,5~.
$$
This parametrization can be found using the $\mathcal{A}_5$-action.
The coordinate function $\sigma$ is known to be the quotient map by the subgroup generated by $(12)(34)$ and $(13)(24)$.
Since we require the map to be equivariant, the points $z\in\mathbb{P}^1$ which are zeroes or poles of $R_1$ must correspond to van Geemen lines, and these are fixed points of elements of order three. As also the fiber $R_i^{-1}(1)$ must consist of such fixed points, the $R_i$ are easily found.
The points $z=0$, $z=\infty$ in $\mathbb{P}^1$ are both mapped to the singular point
$(\infty,0)\in C^0_\vph\subset\mathbb{P}^1{\times}\mathbb{P}^1$. Using Table \ref{ExcCurves}
we find that these two points correspond to the divisor $D_{14}$.
The M\"obius transformations $m_2$ and $m_3$ on $\mathbb{P}^1$ fix the set $\{0,\infty\}$
and this is indeed consistent with the fact that the permutations $(14)(25)$ and $(253)$ fix the index set $\{1,4\}$ and thus fix the divisor $D_{14}$ on $C_\vph$.
As $\mathcal{A}_5$ acts transitively on the set of $20$ points which are in $\cup D_{ij}$,
the $\mathcal{A}_5$-orbit of $0\in\mathbb{P}^1$ consists of the $20$ points which map bijectively to this set.
The two fixed points of $M_5$ in $\mathbb{P}^1$ map to the singular point
$\big(\!-\smallfrac12 (1+\sqrt{5}),\, \smallfrac12 (3-\sqrt{5})\big)$ in~$C^0_\vph$.
The $\mathcal{A}_5$-orbit of any of these points is a set of 12 points, each a fixed point of an order 5 element in $\mathcal{A}_5$, which maps to one of the other six `extra' singular points of $C_\vph^0$.
Finally we consider the fifth powers of Pl\"ucker coordinates of the lines
parametrized by $C_\vph$. These functions are given by the polynomials
$p_{ij}(\sigma_1,\sigma_2,\t_1,\t_2)$ listed in Table \ref{divpl}.
We pull them back to $\mathbb{P}^1$ along the parametrization, so we
take homogeneous coordinates $(z,w)$ on $\mathbb{P}^1$ and we
consider the $10$ polynomials
$$
\tilde{p}_{ij}(z,w)\,:=\,
p_{ij}(R_{1,n}(z,w),\,R_{1,d}(z,w),\,R_{2,n}(z,w),\,R_{2,d}(z,w))
$$
where $R_{i,n}(z,w)$ and $R_{i,d}(z,w)$ are homogeneous polynomials of degree four such that
$$
R_i(z/w)\,=\,R_{i,n}(z,w)/R_{i,d}(z,w)~.
$$
As $p_{ij}$ is homogeneous of bidegree $(6,6)$,
the polynomials $\tilde{p}_{ij}$ are homogeneous of degree
$6\cdot 4+6\cdot 4=48$.
Using the results from Section \ref{plmap} and the fact that we are now on a $\mathbb{P}^1$,
the divisor $D_b$, which has degree $38$, is now defined by a homogeneous polynomial $\tilde{p}_b(z,w)$ of degree $38$.
Each of the $\tilde{p}_{ij}$ is divisible by $\tilde{p}_b$ with quotient $\tilde{q}_{ij}$ which is homogeneous of degree $10$.
We know that each $\tilde{q}_{ij}$ has two zeroes, with multiplicity $5$,
in the two points of $D_{ij}$. We checked that this is indeed the case.
In fact, we verified that the point $(\ldots,\tilde{q}_{ij}(z),\ldots)\in \mathbb{P}^9$ is the point
$(\ldots,\pi_{ij}(l_z)^5,\ldots)$, the point whose coordinates are the fifth powers of the Pl\"ucker coordinates of the line $l_z\subset \mathcal{M}_1$. Each $\pi_{ij}(l_z)$ is easily seen to be a quadratic polynomial in $z$.
Thus the parametrizations of $Z$ and
$C^0_\vph$ are compatible and the $2\cdot 125$ families of
lines on the 125 quadrics in $\mathcal{M}_1$
are the limits of the two families of lines on the general $\mathcal{M}_\psi$.
In particular, the curve $\widetilde{C}_\vph$ is now reducible, having $125$ components, the desingularization of each component is a $\mathbb{P}^1$.
Moreover, the Pl\"ucker map from $\widetilde{C}_\vph$ to the Grassmannian in $\mathbb{P}^9$
is given by $10$ degree two polynomials on each of the components, the fifth power of these polynomials are the $\tilde{q}_{ij}$.
We checked that the $125$ components of $\widetilde{C}_\vph$ intersect as follows. On each component there are the 12 fixed points of certain elements of order five in $\mathcal{S}_5{\rtimes}\mathcal{G}$. In each such point, exactly two components meet and
moreover, distinct components only meet in such fixed points.
Thus $\widetilde{C}_\vph$ has $(125\cdot 12)/2=750$ ordinary double points.
This allows us to compute the arithmetic genus $p_a(\widetilde{C}_\vph)$ of the curve $\widetilde{C}_\vph$, since
$1-p_a(\widetilde{C}_\vph)$ is the Euler characteristic of the intersection graph of the components of $\widetilde{C}_\vph$. This graph has $125$ vertices and
$750$ edges, hence it has Euler characteristic $125-750=-625$. Thus indeed $p_a(\widetilde{C}_\vph)=626$, as expected.
\subsection{The case $\psi=\infty$, $\vph^2=-3/4$}\label{psiinfty}
In case $\psi=\infty$ the threefold $\mathcal{M}_\psi$ is defined by $x_1\cdots x_5=0$,
so it is the union of $5$ hyperplanes.
The corresponding curves $C_\vph^0$ become reducible,
in fact the polynomial $F_+$ has 5 factors for these values of $\vph$:
$$
F_+\,=\,(\sigma +\o^2)(\t+\o^2)(\sigma\t+\o)(\sigma\t + \o\sigma + \o^2)(\sigma\t +\o\t + \o^2)~,
$$
and $F_-$ is obtained by $\o\leftrightarrow\o^2$.
These factors are also factors of $l_1$, $l_2$, $k_{12}$, $k_{14}$ and $k_{24}$ respectively (see Table \ref{Divs}).
The components of these $C_\vph$, and their classes in $\text{Pic}(\hbox{dP}_5)$,
are discussed at the end of Section \ref{picdp5}.
Each component of $C_\vph$ parametrizes lines in one of the hyperplanes $x_i=0$ in $\mathcal{M}_\psi$; these $x_i$ are $x_3$, $x_5$, $x_4$, $x_2$ and $x_1$ respectively.
The cover $\widetilde{C}_\vph\rightarrow C_\vph$ is non-trivial in this case and we will not analyze it any further here.
For example, assume that we are on the component where $\sigma=-\o^2$.
Recall that the $p_{ij}(-\o^2,\t)$ are, up to a common factor, the
$\pi_{ij}^5(-\o^2,\t)$. These polynomials are listed in Table \ref{divpl} and
one finds that
$$
p_{ij}(-\o^2,\t)\,=\,0\qquad\mbox{for}\quad ij\in\{13,23,34,35\}~.
$$
Thus this component of $C_\vph$ parametrizes lines $l$
which have $\pi_{i3}(l)=0$ for all $i$.
Such a line $l$ lies in the hyperplane $x_3=0$:
otherwise we may assume that $l=\langle x,y\rangle$ with $x_3\neq 0$
and with $y_3=0$; moreover some $y_j$ with $j\neq 3$ must be non-zero, but then $\pi_{j3}(l)\neq 0$.
Moreover, after dividing the six non-zero polynomials $p_{ij}(-\o^2,\t)$
by a common factor of degree $3$,
the quotients $q_{ij}(\t)$ are degree two polynomials in $\t$.
Define
$$
n_1\,:=\,\t+(\o-1)/3,\quad n_2\,:=\,\t+\o-1,\quad
n_4\,:=\,\t+\o+1,\quad n_5\,:=\,\t-\o-1~.
$$
Then, for $i,j\in \{1,2,4,5\}$ (setting $q_{ij}=0$ otherwise), there are constants $c_{ij}\in\mathbb{C}$ such that
$$
q_{ij}\,=\,c_{ij}n_in_j,\qquad\mbox{and}\quad
(\ldots,p_{ij}(-\o^2,\t),\ldots)\,=\,(\ldots,q_{ij}(\t),\ldots)\quad(\subset\mathbb{P}^9)~.
$$
\newpage
\section{Appendix}
\subsection{The Picard group of $\hbox{dP}_5$}\label{picdp5}
We recall the basic facts on the geometry of the quintic del Pezzo surface $\hbox{dP}_5$.
We will use some more advanced algebraic geometry in this section to put the results we found in a more general perspective.
It is most convenient to view $\hbox{dP}_5$ as the blow up of $\mathbb{P}^2$ in four distinct points, no three on a line.
One can then choose coordinates such that these four points are
$$
p_1:=(1,0,0),\quad p_2:=(0,1,0), \quad
p_3:=(0,0,1),\quad p_4:=(1,1,1)~.
$$
The birational map $\mathbb{P}^2\dashrightarrow \hbox{dP}_5$, inverse to the blow down, is given by the cubic polynomials which are zero in the $p_i$.
There is an obvious action of $\mathcal{S}_3$ by automorphisms of $\hbox{dP}_5$ induced by permutation of the coordinates.
The action of $\mathcal{S}_3$ extends to a linear action of $\mathcal{S}_4$,
the subgroup of $\text{PGL}(3,\mathbb{C})=\text{Aut}(\mathbb{P}^2)$ which permutes the four points.
In fact, the map
$$
\sigma_{34}:\,\mathbb{P}^2\,\longrightarrow\,\mathbb{P}^2,\qquad (x:y:z)\,\longmapsto\,
(x-z,y-z,-z)
$$
fixes the first two points and exchanges the last two.
Finally the standard (birational) Cremona transformation
$$
\sigma_{45}:\,\mathbb{P}^2\,\longrightarrow\,\mathbb{P}^2,
\qquad (x,y,z)\,\longmapsto\,(x^{-1},y^{-1},z^{-1})\,=\,(yz,\,xz,\,xy)
$$
induces another automorphism of $\hbox{dP}_5$, which together with
$\mathcal{S}_4$ generates a group isomorphic to $\mathcal{S}_5$; in fact $\mathcal{S}_5=\text{Aut}(\hbox{dP}_5)$.
The quintic Del Pezzo surface $\hbox{dP}_5$ has $10$ exceptional divisors,
which we denote by $E_{ij}=E_{ji}$ with $1\leq i<j\leq 5$. The divisors $E_{i5}$ are the exceptional divisors over the points $p_i$, and the $E_{ij}$, with $1\leq i<j\leq 4$, are (somewhat perversely, but this helps in understanding
the intersection numbers) the strict transforms of the lines
$l_{ij}$ spanned by $p_k$ and $p_l$, with $\{i,j,k,l\}=\{1,2,3,4\}$.
So the pull-back of the line $l_{ij}$ in $\mathbb{P}^2$ to $\hbox{dP}_5$ has divisor $E_{ij}+E_{k5}+E_{l5}$ and, for example, $l_{12}$ is defined by $x-y=0$,
$l_{24}$ is defined by $y=0$. In particular we have
$$
E_{ij}\,=\,l\,-\,E_{k5}\,-\,E_{l5}\qquad(\in\,\text{Pic}(\hbox{dP}_5))~.
$$
With these conventions, the intersection numbers are
$$
E_{ij}^2\,=\,-1~,\qquad E_{ij}E_{ik}\,=\,0\quad
\mbox{if}\quad\sharp\{i,j,k\}=3~,\qquad
E_{ij}E_{kl}\,=1\quad\mbox{if}\quad\sharp\{i,j,k,l\}=4~.
$$
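These numbers follow from the relation $E_{ij}=l-E_{k5}-E_{l5}$ together with $l^2=1$, $l\cdot E_{i5}=0$, $E_{i5}^2=-1$ and $E_{i5}\cdot E_{j5}=0$ for $i\neq j$. For instance,
$$
E_{12}\cdot E_{13}\,=\,(l-E_{35}-E_{45})\cdot(l-E_{25}-E_{45})\,=\,1\,+\,E_{45}^2\,=\,0~,\qquad
E_{12}\cdot E_{34}\,=\,(l-E_{35}-E_{45})\cdot(l-E_{15}-E_{25})\,=\,l^2\,=\,1~.
$$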
The intersection graph of the $E_{ij}$ has $10$ vertices and $15$ edges, each vertex is on three edges. This graph is known as the Petersen graph and is presented in \fref{petersengraph}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=2.5in]{PetersenGraph.pdf}
\vskip10pt
\capt{5.5in}{petersengraph}{The Petersen graph, which summarizes the combinatorics of the intersections of the exceptional divisors $E_{ij}$. In the figure, the exceptional divisors correspond to vertices and their intersections correspond to the edges.}
\end{center}
\end{figure}
Let $l\in \text{Pic}(\hbox{dP}_5)$ be the class of the pull-back of a line in $\mathbb{P}^2$.
One has $l^2=+1$.
Then the canonical class $K_{\hbox{dP}_5}$ of $\hbox{dP}_5$ is determined by
$$
-K_{\hbox{dP}_5}\,=\,3l\,-\,E_{15}\,-\,E_{25}\,-\,E_{35}\,-\,E_{45}
\qquad (\in\,\text{Pic}(\hbox{dP}_5))~,
$$
we have $(-K_{\hbox{dP}_5})^2=9-4\cdot 1=5$.
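This uses the standard intersection relations on the blow up, $l^2=1$, $E_{i5}^2=-1$ and $l\cdot E_{i5}=E_{i5}\cdot E_{j5}=0$ for $i\neq j$, so that
$$
(-K_{\hbox{dP}_5})^2\,=\,\Big(3l\,-\,\sum_{i=1}^4E_{i5}\Big)^2\,=\,9l^2\,+\,\sum_{i=1}^4E_{i5}^2\,=\,9\,-\,4\,=\,5~.
$$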
In particular, the anti-canonical map of $\hbox{dP}_5$ is induced by the cubics on the four nodes of $C_\vph$. One also has
$$
\text{Pic}(\hbox{dP}_5)\,=\, \mathbb{Z} l\,\oplus\,\mathbb{Z} E_{15}\,\oplus\,\mathbb{Z} E_{25}\,\oplus\mathbb{Z} E_{35}\,\oplus\,\mathbb{Z} E_{45}~.
$$
The action of $\mathcal{S}_5$ on $\text{Pic}(\hbox{dP}_5)$ is as follows.
The permutations which fix $5$ are induced by linear maps on $\mathbb{P}^2$
and thus act by fixing $l$ and permuting the indices of the $E_{ij}$.
The transposition $(45)$ is induced by the Cremona transformation. The pull-back of a line is a conic on $p_1,p_2,p_3$, thus
$\sigma_{45}^*l=2l-E_{15}-E_{25}-E_{35}$ and the image of the line on $p_i,p_j$ is the point $p_k$, with $\{i,j,k\}=\{1,2,3\}$, so
$\sigma_{45}^*E_{k5}=l-E_{i5}-E_{j5}$. The point $p_4=(1,1,1)$ is mapped to itself, so $\sigma_{45}^*E_{45}=E_{45}$.
The Picard group of $\mathbb{P}^1{\times}\mathbb{P}^1$ is generated by the classes of the divisors $\sigma=0$ and $\t=0$. The holomorphic map
$\Phi:\hbox{dP}_5\rightarrow \mathbb{P}^1{\times}\mathbb{P}^1$ induces the pull-back homomorphism
$$
\Phi^*:\,\text{Pic}(\mathbb{P}^1{\times}\mathbb{P}^1)\,\longrightarrow \, \text{Pic}(\hbox{dP}_5)~,
\qquad
\left\{\begin{array}{rcl}
(\sigma=0)&\longmapsto&E_{15}\,+\,E_{24}~,\\
(\t=0)&\longmapsto&E_{23}\,+\,E_{14}~.
\end{array}\right.
$$
Notice that the curve $E_{15}$ maps to $\sigma=0$ under $\Phi$,
but the point $(0,\infty)$ on $\sigma=0$ is blown up, so its exceptional divisor $E_{24}$ contributes to $\Phi^*(\sigma=0)$.
In the standard basis of $\text{Pic}(\hbox{dP}_5)$ we have
$$
\begin{array}{rcl}
(\sigma=0)&\longmapsto&E_{15}\,+\,E_{24}\,=\,l\,-\,E_{35}~,\\
(\t=0)&\longmapsto&E_{23}\,+\,E_{14}\,=
\,2l\,-\,E_{15}\,-\,E_{25}\,-\,E_{35}\,-\,E_{45}~,
\end{array}
$$
showing that $\sigma,\t$ are related to lines through the point $p_3$ and conics on the four points $p_1,\ldots,p_4$ in $\mathbb{P}^2$.
The curves of bidegree $(n,m)$ in $\mathbb{P}^1{\times}\mathbb{P}^1$
pull-back along $\Phi^*$ to curves with class
$n(l-E_{35})+m(2l-\sum_iE_{i5})$ in $\text{Pic}(\hbox{dP}_5)$.
In case a point $p$ which gets blown up is a point with multiplicity $r$
on a curve, its pull-back to $\hbox{dP}_5$ is reducible. One component is the exceptional divisor over $p$, which has multiplicity $r$, and the other is called the strict transform of the curve. In particular, the pull-back of a curve of type $(2,2)$ which passes with multiplicity one through all three points which get blown up has four components, and the strict transform has class
$$
2(l-E_{35})\,+\,2(2l-\sum_iE_{i5})\,-\,E_{12}\,-\,E_{14}\,-\,E_{24}~.
$$
Using that $E_{ij}=l-E_{k5}-E_{l5}$ we find this class is equal to:
$$
3l\,-\,E_{15}\,-\,E_{25}\,-\,E_{35}\,-\,E_{45}\,=\,-K_{\hbox{dP}_5}~.
$$
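In detail, $E_{12}+E_{14}+E_{24}=(l-E_{35}-E_{45})+(l-E_{25}-E_{35})+(l-E_{15}-E_{35})$, so the class of the strict transform is
$$
\big(6l-2E_{15}-2E_{25}-4E_{35}-2E_{45}\big)\,-\,\big(3l-E_{15}-E_{25}-3E_{35}-E_{45}\big)\,=\,
3l-E_{15}-E_{25}-E_{35}-E_{45}~.
$$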
Thus we see that the rational map $\Psi$ from Section \ref{dp5}
induces the anti-canonical map on $\hbox{dP}_5$ and it is known that this map embeds $\hbox{dP}_5$ in $\mathbb{P}^5$.
The curves $C_\vph^0$ have bidegree $(4,4)$ and they have multiplicity $2$
in each of the three points which get blown up, hence their strict transform
$C_\vph$ in $\hbox{dP}_5$ has class $-2K_{\hbox{dP}_5}$.
It is easy to verify that the sum of the 10 exceptional divisors also has this class:
$$
\sum_{i<j}E_{ij}\,=\,6l\,-\,2E_{15}\,-\,2E_{25}\,-\,2E_{35}\,-\,2E_{45}\,=\,
-2K_{\hbox{dP}_5}~.
$$
As the (reducible, singular) curve $\cup E_{ij}$ is also $\mathcal{A}_5$-invariant, it should be in the Wiman pencil, and in fact it is $C_\infty$.
As each $C_\vph$ has class $-2K_{\hbox{dP}_5}$, the intersection number of two such curves
is $(-2K_{\hbox{dP}_5})^2=4\cdot 5=20$. Thus the Wiman pencil has 20 basepoints. We already
found $2\cdot 10=20$ points in the divisors $D_{ij}$ in the base locus,
so the base locus is the union of the ten $D_{ij}$.
\subsection{The case $\psi=\infty$, $\vph^2=-3/4$ again}
We now give a more intrinsic description of the curves $C_\vph$ with $\vph^2=-3/4$ from Section \ref{psiinfty}.
In Section \ref{psiinfty}
we found that the curves $C_\vph$ with $\vph^2=-3/4$
have $5$ irreducible components,
the first two of bidegree $(1,0)$, $(0,1)$ respectively, the last three of bidegree $(1,1)$.
Only the last three curves pass through the base points $(1,1)$, $(0,\infty)$ and $(\infty,0)$: each contains two base points and each base point is on exactly two components.
The class of the component of bidegree $(1,1)$ passing through
$(1,1)$ and $(0,\infty)$ is
$$
(l\,-\,E_{35})\,+\,(2l\,-\,E_{15}\,-\,E_{25}\,-\,E_{35}\,-\,E_{45})\,-E_{12}\,-\,E_{24}\,=\,
l\,-\,E_{25}~,
$$
and similarly, the components passing through
$(\infty,0)$, $(0,\infty)$ and $(1,1)$, $(\infty,0)$ are $l-E_{45}$
and $l-E_{15}$ respectively.
Thus the classes
of the strict transforms of these components are:
$$
l\,-\,E_{15},\quad l\,-\,E_{25},\,\quad\,l\,-\,E_{35},\quad l\,-\,E_{45},\,\quad
2l\,-\,(E_{15}+\ldots+E_{45})~.
$$
These five classes are in one orbit under $\mathcal{A}_5$ (in fact under $\mathcal{S}_5$), and they correspond to the 5 coordinates on $\mathbb{P}^4$, as we also found in
Section \ref{psiinfty}.
The $5$ components are not uniquely determined by their classes.
In fact, each class determines a pencil of curves.
The first four correspond to the pencil of lines through the point $p_i$
and the last to the pencil of conics on $p_1,\ldots,p_4$.
We will now use the action of $\mathcal{S}_5$ on $\hbox{dP}_5$ to find two specific curves in these pencils.
Each pencil is fixed by the subgroup, isomorphic to $\mathcal{S}_4$,
in $\mathcal{S}_5$ which fixes the class. Thus an element $g$ of the $\mathcal{S}_4$ corresponding to the
pencil maps a curve in the pencil to another curve in the pencil. For example,
the pencil of conics on $p_1,\ldots,p_4$ is fixed by the standard $\mathcal{S}_4\subset \mathcal{S}_5$.
A conic $C_{(\lambda,\mu)}$, with $(\lambda,\mu)\in\mathbb{P}^1$, from this pencil has equation
$$
\lambda (x-z)y\,+\,\mu(y-x)z\,=\,0~.
$$
There are three reducible conics in the pencil, $(x-z)y=0$, $(y-x)z=0$ and
$(z-y)x=0$ (note that $(z-y)x=-(x-z)y-(y-x)z$); these consist of pairs of exceptional curves.
If $g\in \mathcal{S}_4$ lies in the Klein subgroup $\mathcal{K}:=\langle (12)(34),(13)(24)\rangle$, then
one verifies easily that $g$ maps $C_{(\lambda,\mu)}$ into itself.
Thus the action of $\mathcal{S}_4$ on $\mathbb{P}^1$ factors over the quotient $\mathcal{S}_4/\mathcal{K}\cong\mathcal{S}_3$.
The element $(123)\in \mathcal{S}_5$ maps onto a generator of the subgroup $\mathcal{A}_3$ of the quotient group $\mathcal{S}_3$. It acts on the pencil as
$$
\lambda (x-z)y+\mu(y-x)z\,\longmapsto\,
\lambda (y-x)z+\mu(z-y)x\,=\,-\mu(x-z)y+(\lambda-\mu) (y-x)z,
$$
in particular, it has two fixed points $(\lambda,\mu)=(1,-\o), (1,-\o^2)$.
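Indeed, in the coordinates $(\lambda,\mu)$ the element $(123)$ acts as $(\lambda,\mu)\mapsto(-\mu,\lambda-\mu)$, so a fixed point satisfies $\lambda(\lambda-\mu)=-\mu^2$, that is,
$$
\lambda^2\,-\,\lambda\mu\,+\,\mu^2\,=\,0~,
$$
and for $\lambda=1$ the solutions are $\mu=-\o$ and $\mu=-\o^2$, the roots of $\mu^2-\mu+1=0$.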
Thus the 5 classes above give $2\cdot 5=10$
curves in $\hbox{dP}_5$ which we denote by $C_{ia}$, $C_{ib}$, $i=1,\ldots,5$.
Up to a permutation of $a,b$, the two curves $\cup_i C_{ia}$ and $\cup_i C_{ib}$
are invariant under the action of $\mathcal{A}_5$ and they are the
$C_\vph$, for $\vph^2=-3/4$.
It is interesting to notice that the action of $\mathcal{S}_4$ on the five pencils shows that the
five maps from $C_\vph$ to $\mathbb{P}^1$ they define are actually
$(\mathbb{Z}/2\mathbb{Z})^2$-quotient maps. For example,
from Table \ref{S5transfs}
one finds that $(12)(45), (14)(25)\in\mathcal{S}_5$ act as
$$
(\sigma,\t)\,\longmapsto\, \left(\,\sigma,\,\frac{1}{\sigma\t}\right),\qquad
(\sigma,\t)\,\longmapsto\, \left(\,\sigma,\,\frac{\sigma\t-1}{\sigma(\t-1)}\right)
$$
on $\mathbb{P}^1{\times}\mathbb{P}^1$. Thus they
fix $\sigma$, hence they act on the fibers of the projection map $(\sigma,\t)\mapsto \sigma$.
So this projection map is invariant under the Klein subgroup $\langle (12)(45),(14)(25)\rangle$ of $\mathcal{S}_5$.
As the map has degree four, it follows that
the quotient of $C_\vph^0$ by this Klein subgroup is $\mathbb{P}^1$,
with quotient map $\sigma$.
\subsection{From $\mathbb{P}^2$ to $\mathbb{P}^1\times\mathbb{P}^1$ and back}\label{P2blowup}
In Section \ref{dp5} we obtained $\hbox{dP}_5$ as the blow up of $\mathbb{P}^1{\times}\mathbb{P}^1$
in three points. Blowing down the four exceptional curves $E_{15}$, $\ldots$, $E_{45}$ on $\hbox{dP}_5$, we get $\mathbb{P}^2$. The composition of these maps is a
birational map between $\mathbb{P}^1{\times}\mathbb{P}^1$ and $\mathbb{P}^2$.
To find it, we observe that $\Phi^*$ acts on the following divisors as:
$$
\begin{array}{cclcccl}
(\sigma=0)&\longmapsto&E_{15}\,+\,E_{24}~,&\qquad\qquad&
(\t=0)&\longmapsto&E_{23}\,+\,E_{14}~,\\
(\sigma=\infty)&\longmapsto&E_{14}\,+\,E_{25}~,&&
(\t=\infty)&\longmapsto&E_{13}\,+\,E_{24}~,\\
(\sigma}\renewcommand{\S}{\Sigma}\newcommand{\vs}{\varsigma=1)&\longmapsto&E_{12}\,+\,E_{45}~,&&
(\t=1)&\longmapsto&E_{12}\,+\,E_{34}~.
\end{array}
$$
Thus the function $\sigma$ on $\mathbb{P}^1{\times}\mathbb{P}^1$
corresponds to the pencil of lines in $\mathbb{P}^2$ passing through the point
$p_3=(0,0,1)$ and in fact the meromorphic function $y/x$ on $\mathbb{P}^2$
gives the same divisors on $\hbox{dP}_5$.
Similarly $\t$ corresponds to the pencil of conics in $\mathbb{P}^2$
passing through all four $p_i$ and its divisors match those of
the meromorphic function $x(y-z)/y(x-z)$ on $\mathbb{P}^2$.
Therefore the birational map from $\mathbb{P}^2$ to $\mathbb{P}^1{\times}\mathbb{P}^1$
is given by
$$
\sigma\,=\,\frac{y}{x}~,\qquad \t\,=\,\frac{x(y-z)}{y(x-z)}~.
$$
As then $y=\sigma x$, one finds upon substitution in $\t=x(y-z)/y(x-z)$ and some manipulations that
$$
x\,:=\,\sigma\t - 1~,\qquad y\,:=\,\sigma(\sigma\t - 1)~,\qquad z\,:=\,\sigma(\t - 1)~
$$
gives the inverse birational map.
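Indeed, with these $x,y,z$ one has $y-z=\sigma\t(\sigma-1)$ and $x-z=\sigma-1$, so that
$$
\frac{y}{x}\,=\,\frac{\sigma(\sigma\t-1)}{\sigma\t-1}\,=\,\sigma~,\qquad
\frac{x(y-z)}{y(x-z)}\,=\,\frac{(\sigma\t-1)\,\sigma\t\,(\sigma-1)}{\sigma(\sigma\t-1)(\sigma-1)}\,=\,\t~.
$$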
These three polynomials are linear combinations of the polynomials $z_0,\ldots,z_5$ from Section \ref{dp5}, thus this map indeed factors through $\hbox{dP}_5$.
It is amusing to verify that this works as advertised:
take for example the curve defined by $\sigma=0$ on $\mathbb{P}^1{\times}\mathbb{P}^1$,
it maps to the exceptional divisor $E_{15}$ in $\hbox{dP}_5$ according to Table
\ref{ExcCurves}, and thus it should map to the point $p_1=(1,0,0)\in\mathbb{P}^2$,
which it does: $(x,y,z)=(-1,0,0)=(1,0,0)$.
Conversely, the line $l_{24}$ spanned by $p_1,p_3$ maps to $E_{24}$ in $\hbox{dP}_5$
and next $E_{24}$ is mapped, according to the same table,
to the point
$(0,\infty)$ in $\mathbb{P}^1{\times}\mathbb{P}^1$.
Indeed, $l_{24}$ is parametrized by $(a:0:b)$ and
thus $\sigma=y/x=0$ and $\t=x(y-z)/y(x-z)=-ab/0=\infty$.
The curve $C_\vph^0$ in $\mathbb{P}^1{\times}\mathbb{P}^1$ is defined by $F_+=0$.
We found polynomials $f_e,f_o$ in $x,y,z$ such that
$$
(xy(x-z))^4F_+\left(\frac{y}{x},\frac{x(y-z)}{y(x-z)}\right)\,=\,
\big(xy(x-y)\big)^2\big(f_e(x,y,z)\,-\,\vph f_o(x,y,z)\big)~.
$$
Thus the equation for the image of $C^0_\vph$ in $\mathbb{P}^2$ is:
$$
f_e(x,y,z)\,-\,\vph f_o(x,y,z)\,=\,0~.
$$
This equation is homogeneous of degree six; it has an even and an odd part (under the action of the $\mathcal{S}_3$ which permutes the variables), where
$$
f_e\,=\,2s_1^2s_2^2\,-\,6s_1^3s_3\,-\,6s_2^3\,+\,19s_1s_2s_3\,-\,9s_3^2~,
$$
and the $s_i$ are the elementary symmetric functions in $x,y,z$:
$$
s_1\,:=\,x\,+\,y\,+\,z~,\qquad
s_2\,:=\,xy\,+\,xz\,+\,yz~,\qquad
s_3\,:=\,xyz~.
$$
The odd part is
$$
f_o\,:=\,2xyz(x-y)(x-z)(y-z)~.
$$
In particular, any odd element in $\mathcal{S}_5$ maps $C_\vph$ to $C_{-\vph}$,
as we have already seen. The singular locus of the curve defined by $f_e-\vph f_o=0$ in $\mathbb{P}^2$
consists of four ordinary double points in $p_1,\ldots,p_4$.
We refer to \cite{DolgachevClassAlgGeom} and \cite{ShepherdBarronInvariantTheory} for more on the intimate relations between $\hbox{dP}_5$ and genus six curves.
\subsection{The restriction map $\text{Pic}(\hbox{dP}_5)\rightarrow \text{Pic}(C_\vph)$}\label{restrictionmap}
Let $C$ be a compact Riemann surface of genus $g$, and let $\mbox{Div}(C)$ be the group of divisors on $C$.
The Picard group of $C$ is the group of divisors on the surface modulo linear equivalence. So if $P(C)$ denotes the group of divisors of meromorphic functions, then
$$
\text{Pic}(C)\,=\,\mbox{Div}(C)/P(C)~.
$$
Since a divisor $D$ is a finite sum of points, with multiplicities, it has a well defined degree:
$$
\mbox{deg}\,:\;\mbox{Div}(C)\,\longrightarrow\,\mathbb{Z}\,,\qquad
D\,=\,\sum_{p}n_pp\;\longmapsto\; \sum_p n_p~.
$$
As a meromorphic function has the same number of poles as zeroes (counted with multiplicity), the subgroup $\mbox{Div}^0(C):=\ker(\mbox{deg})$ of degree zero divisors contains $P(C)$, and one can define a subgroup $\text{Pic}^0(C)$ of $\text{Pic}(C)$ by:
$$
\text{Pic}^0(C)\,:=\,\mbox{Div}^0(C)/P(C)~.
$$
By Abel's theorem, $\text{Pic}^0(C)=\mbox{Jac}(C)$, the Jacobian of $C$, which is the $g$-dimensional complex torus defined as the quotient of $\mathbb{C}^g$ by the period lattice,
that is, fixing a basis $\o_1,\ldots,\o_g$ of the vector space of holomorphic 1-forms on
$C$, the period lattice consists of the vectors $(\int_\gamma \o_1,\ldots,\int_\gamma\o_g)$ where $\gamma $ runs over all closed loops on $C$.
These groups fit together in an exact sequence:
$$
0\,\longrightarrow\,\text{Pic}^0(C)\,\longrightarrow\,\text{Pic}(C)\,
\stackrel{\mbox{deg}}{\longrightarrow}\, \mathbb{Z}
\,\longrightarrow\,0~.
$$
As we have seen in Section \ref{cover125}, a divisor $D$ whose class has order $n$
in $\text{Pic}^0(C)$, so that $nD$ is the divisor of a meromorphic function $f$, defines an unramified $n$:1 cover of $C$.
As $\text{Pic}^0(C)$ is a complex torus, it is isomorphic, as a group,
to $(\mathbb{R}/\mathbb{Z})^{2g}$. The classes $D$ with $nD=0$ thus form a subgroup isomorphic to $(\mathbb{Z}/n\mathbb{Z})^{2g}$. In particular, for $C=C_\vph$, so that $g=6$, and $n=5$ we get a subgroup $(\mathbb{Z}/5\mathbb{Z})^{12}$ of five-torsion classes, whereas the subgroup of $\text{Pic}^0(C_\vph)$ generated by the
$D_{ij}-D_{kl}$ is a $(\mathbb{Z}/5\mathbb{Z})^{3}$,
since the covering $\widetilde{C}_\vph\rightarrow C_\vph$ defined by the $\sqrt[5]{g_{ij}}$ has degree 125 (here $g_{ij}$ has divisor $5(D_{ij}-D_{45})$ as in
Section \ref{cover125}).
We will now identify the specific $(\mathbb{Z}/5\mathbb{Z})^3\subset \text{Pic}^0(C)$
which creates the covering $\widetilde{C}_\vph\rightarrow C_\vph$.
It turns out that there is a quite naturally defined subgroup of $\text{Pic}(C_\vph)$,
which is a priori unrelated to the Dwork pencil,
but which arises as a consequence of the special position of the curves
$C_\vph$ in $\hbox{dP}_5$.
The inclusion of $C_\vph$ in the del Pezzo surface $\hbox{dP}_5$
induces the restriction map (a homomorphism of groups)
$$
i^*:\,\text{Pic}(\hbox{dP}_5)\,\longrightarrow\,\text{Pic}(C_\vph)~,\qquad\
\mbox{with}\quad i:\,C_\vph\,\hookrightarrow\,\hbox{dP}_5~.
$$
Applying the adjunction formula, we find the canonical class on $C_\vph$:
$$
K_{C_\vph}\,=\,i^*\left(C_\vph\,+\,K_{\hbox{dP}_5}\right)\,=\,
i^*\left(-K_{\hbox{dP}_5}\right)
$$
where we used that the curve $C_\vph$ in $\hbox{dP}_5$ has class $-2K_{\hbox{dP}_5}$.
In particular, the composition $C_\vph\hookrightarrow\hbox{dP}_5\hookrightarrow\mathbb{P}^5$
is the canonical map. As it is an isomorphism on its image (by definition of $C_\vph$),
the curves $C_\vph$ are not hyperelliptic.
The degree two divisor $D_{ij}$ was defined as the intersection divisor of the line
$E_{ij}\subset \hbox{dP}_5$ with the curve $C_\vph\subset\hbox{dP}_5$, hence
$$
D_{ij}\,=\,i^*(E_{ij})~.
$$
The group $\text{Pic}(\hbox{dP}_5)\cong\mathbb{Z}^5$ has $\mathbb{Z}$-basis
$l,E_{15},\ldots,E_{45}$.
As $l=E_{ij}+E_{k5}+E_{l5}$, where $\{i,j,k,l\}=\{1,\ldots,4\}$,
we see that the divisor $i^*l$ has degree $6$ and the
$i^*E_{pq}=D_{pq}$ have degree two.
Thus the image of the composition of $i^*$ with
$\mbox{deg}:\text{Pic}(C)\rightarrow\mathbb{Z}$ is the subgroup $2\mathbb{Z}$
and the kernel of this composition is isomorphic to $\mathbb{Z}^4$.
We denote this kernel by $\text{Pic}(\hbox{dP}_5)^0$:
$$
\text{Pic}(\hbox{dP}_5)^0\,:=\,
\ker(\mbox{deg}\circ i^*)\,=\oplus_{i=1}^4\mathbb{Z}\alpha_i~,
$$
where the $\mathbb{Z}$-basis $\alpha_i$ of $\text{Pic}(\hbox{dP}_5)^0$ is defined by
$$
\alpha_1\,=\,E_{15}-E_{25},\quad \alpha_2\,=\,E_{25}-E_{35},\quad
\alpha_3\,=\,E_{35}-E_{45},\quad \alpha_4\,=\,l-E_{15}-E_{25}-E_{35}~.
$$
As we have $C_\vph= -2K_{\hbox{dP}_5}$ in $\text{Pic}(\hbox{dP}_5)$, the divisors on $\hbox{dP}_5$ which intersect $C_\vph$ in a divisor of degree $0$ form the subgroup
$K_{\hbox{dP}_5}^\perp$, so
$$
\text{Pic}(\hbox{dP}_5)^0\,=\,K_{\hbox{dP}_5}^\perp\,=\,
\{\,D\,\in\,\text{Pic}(\hbox{dP}_5)\,:\,D\cdot K_{\hbox{dP}_5}\,=\,0\,\}~.
$$
We recall the well-known fact that the intersection matrix of the $\alpha_i$ is the Cartan matrix of the root system $A_4$, up to sign:
$$
(\alpha_i,\alpha_j)\,=\,\left\{\begin{array}{rcl} -2&\mbox{if}&i=j~,\\
1&\mbox{if}&|i-j|=1~,\\ 0&\mbox{else}~.&\end{array}\right.
$$
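For instance, using $l^2=1$, $E_{i5}^2=-1$ and $l\cdot E_{i5}=E_{i5}\cdot E_{j5}=0$ for $i\neq j$:
$$
\alpha_1^2\,=\,(E_{15}-E_{25})^2\,=\,-2~,\qquad
\alpha_3\cdot\alpha_4\,=\,(E_{35}-E_{45})\cdot(l-E_{15}-E_{25}-E_{35})\,=\,-E_{35}^2\,=\,1~,\qquad
\alpha_1\cdot\alpha_3\,=\,0~.
$$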
We now have a homomorphism, induced by $i^*$,
but denoted by the same symbol,
$$
i^*:\,\text{Pic}(\hbox{dP}_5)^0\,\longrightarrow \,\text{Pic}^0(C)~.
$$
As the $\alpha_i$, $i=1,2,3$, are of the form $E_{ij}-E_{pq}$,
their images $i^*(\alpha_i)$ are of the form $D_{ij}-D_{pq}$
which are elements of order five in $\text{Pic}^0(C)$.
Finally, using that $l=E_{34}+E_{15}+E_{25}$, we see that
$$
\alpha_4\,=\,l\,-\,E_{15}\,-\,E_{25}\,-\,E_{35}\,=\,
E_{34}\,+\,E_{15}\,+\,E_{25}\,-\,(E_{15}\,+\,E_{25}\,+\,E_{35})\,=\,
E_{34}\,-\,E_{35}~,
$$
hence $i^*(\alpha_4)=D_{34}-D_{45}$ is also 5-torsion.
The image of $i^*$ is generated by the classes $i^*(\alpha_j)$:
{\renewcommand{\arraystretch}{1.5}
$$
\begin{array}{rcl}
\mbox{im}(i^*)&=&
\langle D_{15}\,-\,D_{25},\, D_{25}\,-\,D_{35},\,D_{35}-D_{45},\,
D_{34}\,-\,D_{35}\,\rangle\\
&=&\langle D_{15}\,-\,D_{45},\, D_{25}\,-\,D_{45},\,D_{35}-D_{45},\,
D_{34}\,-\,D_{45}\,\rangle~.
\end{array}
$$
}
Thus $\mbox{im}(i^*)\cong(\mathbb{Z}/5\mathbb{Z})^n$ for some $n\leq 4$.
There is a further relation between these classes, given by the divisor of the function $k_{14}/l_1l_2$
(notice that we take the quotient of two polynomials of bidegree $(2,2)$, so this quotient is a well-defined meromorphic function on $C^0_\vph$). We use Table \ref{Divs}
to find the divisor $(k_{14})$ of $k_{14}$ in $\mbox{Div}(C_\vph)$ (and we simply write $l$ for $i^*(l)$):
{\renewcommand{\arraystretch}{1.5}
$$
\begin{array}{rcl}
(k_{14})&:=&
D_{23}+D_{25}+3D_{24}+3D_{12}\\
&=&(l-D_{15}-D_{45})+D_{25}+3(l-D_{15}-D_{35})+3(l-D_{35}-D_{45})\\
&=&7l-4D_{15}+D_{25}-6D_{35}-4D_{45}\\
&=&7(D_{34}+D_{15}+D_{25})-4D_{15}+D_{25}-6D_{35}-4D_{45}\\
&=&7D_{34}+3D_{15}+8D_{25}-6D_{35}-4D_{45}~.
\end{array}
$$
}
Similarly, the divisor of $l_1$ is:
{\renewcommand{\arraystretch}{1.5}
$$
\begin{array}{rcl}
(l_1)&=&D_{13}+D_{23}+D_{34}+D_{35}\\
&=&(l-D_{25}-D_{45})+(l-D_{15}-D_{45})+D_{34}+D_{35}\\
&=&2l-D_{15}-D_{25}+D_{35}-2D_{45}+D_{34}\\
&=&2(D_{34}+D_{15}+D_{25})-D_{15}-D_{25}+D_{35}-2D_{45}+D_{34}\\
&=&3D_{34}+D_{15}+D_{25}+D_{35}-2D_{45}~.
\end{array}
$$
}
As $(l_2)=D_{15}+D_{25}+D_{35}+D_{45}$, the linear equivalence $(k_{14})=(l_1)+(l_2)$ gives the following relation
in $\text{Pic}(C_\vph)$:
$$
7D_{34}+3D_{15}+8D_{25}-6D_{35}-4D_{45}
\,=\,
3D_{34}+2D_{15}+2D_{25}+2D_{35}-D_{45}~.
$$
Using $5D_{25}=5D_{35}$ in $\text{Pic}(C_\vph)$ we get:
$$
4D_{34}\,=\,-D_{15}-6D_{25}+8D_{35}+3D_{45}\,=\,
-D_{15}-D_{25}+3D_{35}+3D_{45}~\qquad(\in \text{Pic}(C_\vph))~.
$$
Now we write $4D_{34}=-D_{34}+5D_{45}$ and use $5D_{34}=5D_{45}$ in $\text{Pic}(C_\vph)$ to obtain
$$
D_{34}\,=\,D_{15}+D_{25}-3D_{35}+2D_{45}~\qquad(\in \text{Pic}(C_\vph))~.
$$
This gives the following relation in $\text{Pic}^0(C_\vph)$:
$$
D_{34}-D_{45}\,=\,(D_{15}-D_{45})\,+\,(D_{25}-D_{45})\,+\,2(D_{35}-D_{45})~.
$$
Therefore $\mbox{im}(i^*)\subset \text{Pic}^0(C_\vph)$
can be generated by three elements and thus $n\leq 3$.
We give a table which gives the $(a_1,a_2,a_3)\in(\mathbb{Z}/5\mathbb{Z})^3$ such that
the classes $e_{ij}:=D_{ij}-D_{45}=i^*(E_{ij}-E_{45})$
can be written as $a_1e_{15}+a_2e_{25}+a_3e_{35}$.
$$
\begin{array}{rclrclrcl}
e_{15}&\mapsto&(1,0,0), & e_{25}&\mapsto&(0,1,0), &
e_{35}&\mapsto&(0,0,1),\\
e_{12}&\mapsto&(2,2,1), &
e_{13}&\mapsto&(2,1,2), &e_{23}&\mapsto&(1,2,2), \\
e_{14}&\mapsto&(2,1,1),&\quad e_{24}&\mapsto&(1,2,1),&\quad e_{34}&\mapsto&(1,1,2)~.\\
\end{array}
$$
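For example, $D_{12}=l-D_{35}-D_{45}$ and $l=D_{34}+D_{15}+D_{25}$ in $\text{Pic}(C_\vph)$, so
$$
e_{12}\,=\,D_{12}-D_{45}\,=\,e_{34}+e_{15}+e_{25}-e_{35}\,=\,(1,1,2)+(1,0,0)+(0,1,0)-(0,0,1)\,=\,(2,2,1)~.
$$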
As we have seen, $5D_{ij}=5D_{pq}$ for any indices.
Thus we have a rather peculiar divisor class of degree $5\cdot 2=10$ in $\text{Pic}(C_\vph)$. This is actually the canonical class on $C_\vph$. In fact,
{\renewcommand{\arraystretch}{1.5}
$$
\begin{array}{rcl}
K_{C_\vph}&=&i^*(-K_{\hbox{dP}_5}) \\
&=&i^*(3l-E_{15}-E_{25}-E_{35}-E_{45})\\
&=& 3(D_{34}+D_{15}+D_{25})-D_{15}-D_{25}-D_{35}-D_{45}\\
&=&3(2D_{15}+2D_{25}-3D_{35}+2D_{45})-D_{15}-D_{25}-D_{35}-D_{45}\\
&=&5D_{15}+5D_{25}-10D_{35}+5D_{45}\\
&=&5D_{15}
\end{array}
$$
}
where in the last step we used that $5D_{ij}=5D_{15}$.
To show that $n\geq 3$ and thus $n=3$, we use the $\mathcal{A}_5$-action on the
subgroup $\mbox{im}(i^*)\cong(\mathbb{Z}/5\mathbb{Z})^n$ of $\text{Pic}^0(C_\vph)$.
As $\mathcal{A}_5$ is a simple group, the image of $\mathcal{A}_5$ in $\text{Aut}(\mbox{im}(i^*))$
is either the identity or isomorphic to $\mathcal{A}_5$.
In the first case, by applying $(23)(45)\in \mathcal{A}_5$ to $D_{12}-D_{45}$
we would get that $D_{12}-D_{45}=D_{13}-D_{45}$ in $\text{Pic}^0(C_\vph)$
and hence that $D_{12}-D_{13}$ is the divisor of a meromorphic function, which is not the case as $C_\vph$ is not hyperelliptic.
Thus we obtain an injective homomorphism $\mathcal{A}_5\rightarrow GL(n,\mathbb{Z}/5\mathbb{Z})$.
If $n=1$, this is impossible as $\sharp \mathcal{A}_5 > \sharp GL(1,\mathbb{Z}/5\mathbb{Z})=4$.
If $n=2$, we consider the action of the subgroup
$\{e,(12)(34),(13)(24),(14)(23)\}$, which is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$.
The eigenvalues of $(12)(34)$ on $(\mathbb{Z}/5\mathbb{Z})^2$ are $1,1$; $1,-1$ or $-1,-1$.
The first case is impossible since the homomorphism is injective and an automorphism with eigenvalues $1,1$ which is not the identity has order $5$.
The last case is also impossible since then either $(12)(34)$ should commute with all other elements of $\mathcal{A}_5$ or the element would have order five again.
Thus the eigenvalues are $1,-1$ and we can diagonalize the automorphism. The same is true for the other two non-trivial elements in the subgroup. Since the subgroup is commutative, these three automorphisms can be diagonalized on the same basis. But then one of the automorphisms must be $-I$, which commutes with any other automorphism, again a contradiction.
Therefore $n\geq 3$.
We conclude that
$\mbox{im}(i^*)\subset \text{Pic}^0(C_\vph)$ consists of the divisor classes which create the unramified 125:1 covering $\widetilde{C}_\vph\rightarrow C_\vph$.
So for any non-zero
$D\in \mbox{im}(i^*)$ there is a meromorphic function $f_D$ on $C_\vph$
with divisor $5D$ and the Riemann surface $C_D$ defined by the polynomial
$T^5-f_D$ is a 5:1 unramified cover of $C_\vph$ which fits in a diagram
$\widetilde{C}_\vph\rightarrow C_D\rightarrow C_\vph$. The fiber product over $C_\vph$ of three suitably chosen $C_D$ will be isomorphic to $\widetilde{C}_\vph$.
Since the image of the curve $C_\vph$ in the canonical embedding lies in a unique del Pezzo surface of degree 5 (see \cite{ShepherdBarronInvariantTheory}), we have the remarkable fact that the curves $\widetilde{C}_\vph$ which parametrize the lines in the Dwork pencil are
intrinsically determined by the curves $C_\vph$ in the Wiman pencil.
In this section we have shown that $i^*$ maps $\text{Pic}(\hbox{dP}_5)^0\cong Q(A_4)$
onto $(\mathbb{Z}/5\mathbb{Z})^3$. In terms of root systems, this map is well-known.
The root lattice $Q(A_4)$ is a sublattice, of index $5$, of the weight lattice $P(A_4)$. Thus $5P(A_4)$ is a sublattice, of index $5^4/5=125$, of $Q(A_4)$,
and thus $Q(A_4)/5P(A_4)\cong (\mathbb{Z}/5\mathbb{Z})^3$.
The map $i^*$ can be identified with the quotient map
$Q(A_4)\rightarrow Q(A_4)/5P(A_4)$.
\vskip2in
{\bf\large Acknowledgements}
\vskip10pt
We wish to thank S. Katz for discussions. PC and XD also wish to thank the Perimeter Institute and ICTP, Trieste, for support and hospitality while they were engaged on this~project.
\newpage
\raggedright
\bibliographystyle{utphys}
\section{Introduction}
Fix a closed, convex set $\Omega \subseteq \mathbf{R}^d$. For $p \ge 1$, let $\mathcal{P}_p(\Omega)$ denote the set of nonnegative Borel probability measures $\mu$ supported in $\Omega$ with finite $p^{th}$ moment $\int_\Omega |x|^p \d\mu < \infty$. Equip this space with the $p$-Wasserstein distance $W_p$, i.e.
$$
W_p^p(\mu,\nu):=\inf\left\{\int_{\Om\times\Om}|x-y|^p\d\g(x,y):\ \g\in\Pi(\mu,\nu) \right\},
$$
where $\Pi(\mu,\nu):=\left\{\g\in\cP_p(\Om\times\Om):\ (\pi^x)_\sharp\g=\mu,\ (\pi^y)_\sharp\g=\nu\right\}$ denotes the set of transportation plans between $\mu$ and $\nu$ and $\pi^x,\pi^y:\Om\times\Om\to\Om$ stand for the canonical projections $\pi^x(a,b)=a$, $\pi^y(a,b)=b$. We denote by $\Pi_o(\mu,\nu)\subseteq\Pi(\mu,\nu)$ the set of optimal plans that realize the value $W_p(\mu,\nu)$. It is well-known (see for instance \cite{AmbGigSav}) that $(\cP_p(\Om),W_p)$ defines a geodesic metric space. Moreover, if $\Om$ is compact then $W_p$ metrizes the weak-$\star$ convergence of probability measures in $\cP_p(\Om).$
In this paper, we are interested in properties of projection operators ${\rm{P}}_{\Omega}^p:\cP_p(\Om)\to \cK$, where $\cK\subseteq\cP_p(\Om)$ is a given closed and geodesically convex proper subset of $\cP_p(\Om)$. In particular, the main question we are interested in is the so-called {\it nonexpansiveness property} that reads as
\begin{align}\label{question:nonexpansive?}
{\rm Is\ it\ true\ that\ \ }W_p\left({\rm{P}}_{\Omega}^p[\mu], {\rm{P}}_\Om^p[\nu]\right)\le W_p(\mu,\nu),\ \forall\ \mu,\nu\in\cP_p(\Om)\ ?\tag{Q}
\end{align}
For $\mu\in\cP_p(\Om)$, the projection ${\rm{P}}_\Om^p[\mu]$ is defined as the solution of the variational problem
\begin{equation}\label{def:proj}
{\rm{P}}_{\Omega}^p[\mu]:={\rm{argmin}}\left\{\frac{1}{p}W_p^p(\rho,\mu):\ \rho\in\cK\right\}.
\end{equation}
A few comments on the definition of this operator are necessary. The existence of a solution in this minimization problem is an easy consequence of the direct method of calculus of variations. Indeed, for $\mu\in\cP_p(\Om)$ and $C>0$, the set $\{\rho\in\cP_p(\Om): \ W_p(\rho,\mu)\le C\}$ is tight and the objective functional is weakly lower semicontinuous with respect to the narrow convergence of probability measures. However, for ${\rm{P}}_{\Omega}^p[\mu]$ to be well-defined, we would need to have the uniqueness of a minimizer in \eqref{def:proj}. This turns out to be a subtle question and it is linked to the strict convexity of $\rho\mapsto W_p^p(\rho,\mu)$ and/or the curvature properties of $(\cP_p(\Om),W_p)$.
While $\rho\mapsto W_p^p(\rho,\mu)$ is known to be convex with respect to the `flat' convex combination of probability measures, i.e. along $[0,1]\ni t\mapsto (1-t)\rho_0+ t\rho_1$, its strict convexity typically fails, unless additional conditions are imposed on $\mu$ (for instance absolute continuity with respect to $\cL^d\mres\Om$; see \cite[Proposition 7.17-7.19]{San}). From the geometric viewpoint however, when studying properties of projection operators, it is more natural to consider the notion of geodesic convexity (which is also referred to as {\it displacement convexity} in the case of $(\cP_p(\Om),W_p)$; see \cite{McC, AmbGigSav}). This notion is intimately linked to the curvature properties of the space. By \cite[Section 7.3]{AmbGigSav} we know that when $d\ge 2$, $(\cP_2(\Om),W_2)$ is a positively curved space in the sense of Alexandrov, and so the mapping $\rho\mapsto W_2^2(\rho,\mu)$ in general is not geodesically $\lambda$-convex, for any $\lambda\in\R$. Similarly, it could be expected that $(\cP_p(\Om),W_p)$ is also non-negatively curved for $p\neq 2$. However, to the best of our knowledge, a precise result in this direction is not available in the literature at this point.
These considerations let us conclude that the uniqueness of the projection onto closed and geodesically convex sets $\cK$ fails in general. To illustrate this fact, let us consider the following example. Let $\Om=\R^2$ and let $\cK:=\left\{\frac12\delta_{(-x,1)}+\frac12\delta_{(x,-1)}:\ x\in[-1,1]\right\}$. Then, $\cK$ is a closed geodesically convex set in $(\cP_2(\Om),W_2)$. Let $\mu:=\frac12\delta_{(-1,0)}+\frac12\delta_{(1,0)}$. Clearly, both measures $\rho_0:=\frac12\delta_{(-1,1)}+\frac12\delta_{(1,-1)}$ and $\rho_1:=\frac12\delta_{(1,1)}+\frac12\delta_{(-1,-1)}$ belong to $\cK$ and have the same minimal $W_2$ distance from $\mu$. So the projection of $\mu$ onto $\cK$ cannot be defined in a unique way in this case.
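Indeed, writing $\rho_x:=\frac12\delta_{(-x,1)}+\frac12\delta_{(x,-1)}$ for $x\in[-1,1]$ and comparing the two possible matchings of the atoms, one finds
$$
W_2^2(\mu,\rho_x)\,=\,\min\left\{(1-x)^2,(1+x)^2\right\}\,+\,1~,
$$
which attains its minimum value $1$ precisely at $x=\pm1$, that is, at $\rho_0$ and $\rho_1$.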
Because of this reason, in our study we will focus on some particular geodesically convex closed subsets $\cK\subset\cP_p(\Om)$ onto which we can guarantee the uniqueness of the projected measure in \eqref{def:proj}.
For a given $\lambda>0$, we consider
\begin{align}\label{def:Kp}
\cK^\lambda_p(\Omega) := \left\{\rho \in \mathcal{P}_p(\Omega) \cap L^1(\Omega):\ 0 \leq \rho \leq \lambda {\rm{\ a.e.}}\right\},
\end{align}
that is, the subset of absolutely continuous probability measures having densities uniformly bounded above by $\lambda$. Since the value of $\lambda$ will not play any role in our analysis, in the rest of the paper, for simplicity of exposition, we set $\lambda=1$ and use the notation $\cK_p(\Omega)$ for $\cK^1_p(\Omega)$.
As we show in Lemma \ref{lem:admissibles_convex}, $\cK_p(\Om)$ is a closed geodesically convex subset of $(\cP_p(\Om),W_p)$. More importantly, arguments verbatim to the ones in \cite[Proposition 5.2]{DePMesSanVel} (that considered only the case $p=2$) let us conclude that the projection problem \eqref{def:proj} onto $\cK_p(\Om)$ has a unique solution for any $p>1$. A secondary motivation behind the consideration of these particular subsets is the following: in recent years the set $\cK_2(\Om)$ received some special attention in applications of optimal transport techniques to study the well-posedness and further properties of PDEs arising in crowd motion models under {\it density constraints}. For a non-exhaustive list of references on this subject we refer to \cite{MauRouSan,MesSan, DePMesSanVel,Mes}.
Coming back to the original motivation of our study, on the next pages of this note we investigate the question of nonexpansiveness of the projection operator ${\rm{P}}_\Om^p$ onto the set $\cK_p(\Om)$. When $d=1$, it is well-known that $\cP_2(\R)$ is isometrically isomorphic to a closed convex subset of a Hilbert space (the space of nondecreasing functions belonging to $L^2([0,1];\R)$, see \cite[Section 9.1]{AmbGigSav}). Therefore, $(\cP_2(\R),W_2)$ can be regarded as a flat space and so it is expected that ${\rm{P}}_\R^2$ is nonexpansive onto closed geodesically convex subsets $\cK\subset\cP_2(\R)$. Indeed, every closed geodesically convex subset $\cK\subset\cP_2(\R)$ corresponds to a closed convex subset of $L^2([0,1];\R)$. For instance, the space $\cK_2(\R)$ defined in \eqref{def:Kp} corresponds to $\{X\in L^2([0,1];\R):\ X'\ge 1\ {\rm{a.e.}}\}$. Therefore, the projection problem from $(\cP_2(\R),W_2)$ onto $\cK_2(\R)$ can be transferred to a projection problem in a Hilbertian setting, which has the nonexpansive property. Returning to the original setting via the isometric isomorphism, it follows that ${\rm{P}}_\R^2$ is nonexpansive. When $p\neq 2$, the nonexpansiveness property of projection operators on $L^p$ spaces is a more subtle question (see for instance \cite{Bru}) and therefore a conclusion similar to the one when $p=2$ seems to be nontrivial. In the case when $d>1$, even for $p=2$, it is not possible to identify $(\cP_2(\R^d),W_2)$ with a subset of a Hilbert space (and in particular, as discussed before, this space will not be flat). Therefore, in those cases the question of nonexpansiveness seems to be highly nontrivial.
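The quantile-function isometry underlying the one-dimensional case can be illustrated numerically: for example $W_2^2(\mathcal{U}[0,1],\mathcal{U}[0,2])=\int_0^1|t-2t|^2\,dt=1/3$, since the quantile functions are $t$ and $2t$. The snippet below (an illustration under our own naming, not part of the argument) recovers this by a midpoint-rule discretization:

```python
# W_2 on the real line via quantile functions: discretize
# W_2^2(mu, nu) = int_0^1 |F_mu^{-1}(t) - F_nu^{-1}(t)|^2 dt by the midpoint rule.
n = 10_000
ts = [(i + 0.5) / n for i in range(n)]
q_mu = lambda t: t          # quantile function of the uniform measure on [0, 1]
q_nu = lambda t: 2 * t      # quantile function of the uniform measure on [0, 2]
w2_sq = sum((q_mu(t) - q_nu(t)) ** 2 for t in ts) / n
print(w2_sq)  # close to 1/3, the exact value
```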
When $p=2$, Theorem \ref{thm:almost_lipschitz} presents a sort of {\it weak nonexpansiveness property} of the projection in arbitrary dimensions. Here we show that the left hand side of the inequality in \eqref{question:nonexpansive?} is always bounded above by the transportation cost of a certain suboptimal plan between the original measures. This suboptimal plan becomes optimal in two borderline scenarios: either when $d=1$ or when one of the original measures $\mu,\nu$ is a Dirac mass. So, this yields the nonexpansiveness of the projection operator when $d=1$ (see Corollary \ref{cor:1d}) or when one of the measures is a Dirac mass (see Corollary \ref{cor:dirac_nonexpansive}).
By \cite[Theorem 2.3]{KriRep} and \cite[Proposition 2.4 of Chapter II.2]{BriHae} we know that in the case of smooth Riemannian manifolds with non-positive sectional curvature or in the case of Alexandrov spaces having non-positive curvature the projection operator onto closed geodesically convex sets is nonexpansive. To the best of our knowledge, it is unclear whether the non-positive curvature condition of these spaces in general (beyond Riemannian manifolds) is also a necessary condition to ensure the nonexpansiveness of the projection operator in general. When $d\ge 2$, as we previously discussed, $(\cP_p(\Om),W_p)$ is expected to be non-negatively curved (even for $p\neq 2$). Therefore, there is a good reason to anticipate that there exist closed geodesically convex subsets of $\cP_p(\Om)$ such that the projection operator ${\rm{P}}_\Om^p$ onto these sets (whenever it is well-defined) fails to be nonexpansive.
This is precisely what we show in Proposition \ref{prop:small_p_counterexample}. Here, we will show in particular that there exists $p(d)>1$ small such that for any $p\in (1,p(d))$, the projection operator ${\rm{P}}_{\R^d}^p$ onto $\cK_p(\R^d)$ fails to be nonexpansive. Our proof is constructive, i.e. we construct a counterexample to the nonexpansiveness property. In our construction, we provide a quantitative asymptotic description of $p(d)$ as a function of $d$, for $d$ large. Interestingly, relying again on Corollary \ref{cor:dirac_nonexpansive}, our construction does not provide a counterexample for $p=2$. Heuristically, our result would provide an argument (in combination with \cite[Proposition 2.4 of Chapter II.2]{BriHae}) for the fact that $(\cP_p(\Om),W_p)$ is positively curved, when $p>1$ is close to $1$.
The structure of the rest of the paper is simple: in Section \ref{sec:prelim} we recall some preliminary results from optimal transport and we study some geometric properties of the projection operator ${\rm{P}}_\Om^2$. These properties seem to be interesting in their own right: we show that ${\rm{P}}_{\R^d}^2$ preserves barycenters of measures (see Proposition \ref{prop:barycenters}), and it satisfies a certain translation invariance with respect to distances between measures (see Proposition \ref{prop:translation_invariance}). Section \ref{sec:results} contains the proofs of our main results: in Theorem \ref{thm:almost_lipschitz} we show the `weak nonexpansiveness' property of ${\rm{P}}_\Om^2$, and deduce the full nonexpansiveness in the two cases mentioned above (see Corollaries \ref{cor:1d} and \ref{cor:dirac_nonexpansive}). Finally, Proposition \ref{prop:small_p_counterexample} constructs the counterexample to the nonexpansiveness of ${\rm{P}}_{\R^d}^p$ onto $\cK_p(\R^d)$ when $d\ge 2$ and $p\in (1,p(d))$, and studies the asymptotic behavior of $p(d)$ as the dimension becomes large.
\section{Preliminary results and some geometric properties of ${\rm{P}}_\Om^2$}\label{sec:prelim}
\subsection{Preliminary results from optimal transport} Some properties of the projection operator ${\rm{P}}_\Om^2$ onto the set $\cK_2(\Om)$ were studied in \cite{DePMesSanVel}. In particular, arguments verbatim to the ones presented there (see in particular Proposition 5.2) yield the following lemma.
\begin{lem}
Let $p\in(1,+\infty)$ and let $\Om\subseteq\R^d$ be closed and convex with $\cL^d(\Om)\ge 1$. Then for $\cK=\cK_p(\Om)$ and for any $\mu\in\cP_p(\Om)$, the problem \eqref{def:proj} has a unique solution ${\rm{P}}_\Om^p[\mu]$. Moreover, there exists $B\subseteq\Om$ Borel measurable such that
$${\rm{P}}_\Om^p[\mu]=\mu^{\rm{ac}}{\mathbbm{1}}_B+ \mathbbm{1}_{\Om\setminus B}.$$
\end{lem}
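As a quick illustration of this structure, take $\mu=\delta_0$ on $\R$: its projection onto $\cK_2(\R)$ is the indicator of the centered unit interval $[-1/2,1/2]$ (the ball of unit volume centered at the origin). The snippet below scans only the one-parameter family of indicator densities $\mathbbm{1}_{[a,a+1]}$, a restricted family chosen for illustration rather than a proof over all of $\cK_2(\R)$, using $W_2^2(\delta_0,\mathbbm{1}_{[a,a+1]}\,dx)=\int_0^1(a+s)^2\,ds=a^2+a+\tfrac13$:

```python
def w2_sq_to_interval(a):
    # W_2^2(delta_0, 1_{[a, a+1]} dx) = int_0^1 (a + s)^2 ds = a^2 + a + 1/3
    return a * a + a + 1 / 3

grid = [-1 + i / 1000 for i in range(2001)]   # offsets a in [-1, 1]
best_a = min(grid, key=w2_sq_to_interval)
print(best_a)  # -0.5: the centered unit interval wins, with cost 1/12
```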
\begin{rem}
In the case of $p=2$, the projection ${\rm{P}}_{\Omega}^2$ behaves well with respect to interpolation along {\it generalized geodesics}. Let $\mu,\nu_0,\nu_1 \in \mathcal{P}_2(\Omega)$ with $\mu$ absolutely continuous. Then there are optimal maps $T_0,T_1$ which send $\mu$ to $\nu_0,\nu_1$ respectively. For $t \in [0,1]$, define the generalized geodesic connecting $\nu_0$ and $\nu_1$ with respect to $\mu$ by $\nu_t = (T_t)_{\sharp}\mu$, where $T_t = (1-t)T_0 + tT_1$. In \cite{MauRouSan}, using the displacement 1-convexity of $W_2^2(\mu,\cdot)$ along generalized geodesics, i.e. for all $t\in[0,1]$
\begin{align*}
W_2^2(\mu,\nu_t) \leq (1-t)W_2^2(\mu,\nu_0) + tW_2^2(\mu,\nu_1) - t(1-t)W_2^2(\nu_0,\nu_1),
\end{align*}
it was shown that ${\rm{P}}_\Om^2$ is locally $\frac12$-H\"older continuous. Since this argument relies on the ``Hilbertian-like'' behavior of $W_2$, it is unclear to us whether such reasoning could be carried out for $p\neq 2$.
\end{rem}
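In the one-dimensional monotone setting the displacement $1$-convexity inequality above can be checked directly on uniform atomic measures, using sorted matchings as the optimal maps; in fact it then holds with equality, since this setting is Hilbertian. The following snippet is a numerical sanity check of this kind (our own construction, not part of the proof):

```python
import random

random.seed(0)

def w2_sq_1d(xs, ys):
    # In 1D, the sorted (monotone) matching is optimal for uniform atom measures.
    return sum((a - b) ** 2 for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

n = 200
mu = sorted(random.gauss(0, 1) for _ in range(n))
t0 = sorted(random.gauss(2, 1) for _ in range(n))    # monotone map T_0: mu[i] -> t0[i]
t1 = sorted(random.gauss(-1, 2) for _ in range(n))   # monotone map T_1: mu[i] -> t1[i]

ok = True
for t in (0.25, 0.5, 0.75):
    nu_t = [(1 - t) * a + t * b for a, b in zip(t0, t1)]   # generalized geodesic
    lhs = w2_sq_1d(mu, nu_t)
    rhs = ((1 - t) * w2_sq_1d(mu, t0) + t * w2_sq_1d(mu, t1)
           - t * (1 - t) * w2_sq_1d(t0, t1))
    ok = ok and lhs <= rhs + 1e-8   # equality here: the 1D setting is Hilbertian
print(ok)  # True
```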
\begin{lem}\label{lem:admissibles_convex}
Let $\Om\subseteq\R^d$ be closed and convex such that $\cL^d(\Om)\ge 1$ and $p\in(1,+\infty)$. The subspace $\cK_p(\Omega)$ defined in \eqref{def:Kp} is closed and geodesically convex in $(\cP_p(\Om),W_p)$.
\end{lem}
\begin{proof}
The closedness of $\cK_p(\Om)$ is straightforward.
Let $\mu_0,\mu_1\in\cK_p(\Om)$. To show that $\cK_p(\Om)$ is geodesically convex, we will show that the constant speed geodesic $[0,1]\ni t\mapsto\mu_t$ connecting $\mu_0$ to $\mu_1$ is absolutely continuous with a density bounded above by $1$. This is a consequence of \cite[Theorem 7.28]{San}, however for completeness we provide a direct proof of this result.
First, by \cite[Lemma 3.14]{Kel}, we have that $\mu_t\ll\cL^d\mres\Om$ for all $t\in[0,1]$.
To show the upper bound on $\mu_t$, we rely on the interpolation inequality for the Jacobian determinants of optimal transport maps provided in \cite[Theorem 3.13]{Kel} (see also \cite{CorMcCSch} for $p=2$). Let $\phi:\Om\to\R$ be the unique $c$-concave Kantorovich potential in the transport of $\mu_0$ onto $\mu_1$. Then, by \cite[Theorem 3.4]{Kel}, $T(x):=x-|\nabla\phi(x)|^{q-2}\nabla\phi(x)$ (where $1/p+1/q=1$) is the unique optimal transport map between $\mu_0$ and $\mu_1$. Moreover, $T_t(x):=x-t|\nabla\phi(x)|^{q-2}\nabla\phi(x)$ is the unique optimal transport map between $\mu_0$ and $\mu_t$.
Let us denote by $\Om_{\rm{id}}\subset\Om$ the set where $\phi$ is differentiable and $\nabla\phi=0$. Then, reasoning as in \cite{Kel}, we know that there exists a set $B\subseteq \Om\setminus\Om_{\rm{id}}$ of full measure such that $\phi $ is twice differentiable on $B$ with $\det(DT(x))>0$ if $x\in B$. Then by \cite[Theorem 3.13]{Kel} we have
\begin{equation}\label{eq:Jac}
\det(DT_t(x))^{\frac{1}{d}}\ge (1-t) + t \det(DT(x))^{\frac1d}.
\end{equation}
We remark that because our underlying space $\Om$ is flat, the volume distortion coefficients present in the previous inequality (stated in \cite{Kel} for general Finslerian manifolds) become 1.
By \eqref{eq:Jac}, if $\det(DT(x))\ge 1$, we conclude that $\det(DT_t(x))\ge 1$, while if $\det(DT(x))\le 1$, then $\det(DT_t(x))\ge \det(DT(x))$. In conclusion,
$$
\det(DT_t(x))\ge \min\{1,\det(DT(x))\}.
$$
Now, since $\mu_t=(T_t)_\sharp\mu_0$, when restricted to the set $B$, the change of variable formula yields
\begin{align*}
\mu_t(T_t(x)) = \frac{\mu_0(x)}{\det(DT_t(x))}\le \frac{\mu_0(x)}{\min\{1,\det(DT(x))\}}\le\max\{\mu_0(x),\mu_1(T(x))\}\le 1.
\end{align*}
When restricted to the relative complement of $B$, $T_t$ is essentially the identity map, so the upper bound is also clearly preserved there. The result follows.
\end{proof}
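At the level of matrices, the flat-space inequality \eqref{eq:Jac} is Minkowski's determinant inequality $\det(A+B)^{1/d}\ge\det(A)^{1/d}+\det(B)^{1/d}$ for positive semidefinite $A,B$, applied to $A=(1-t)I$ and $B=tDT$ (when $DT$ is, say, symmetric positive semidefinite). The snippet below checks this matrix inequality numerically for random $2\times2$ symmetric positive semidefinite matrices; it is a sanity check of the algebra, not part of the proof:

```python
import random

random.seed(1)

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

ok = True
for _ in range(1000):
    # random symmetric positive semidefinite M = B^T B with entries of B in [-1, 1]
    b = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    m = [[b[0][i] * b[0][j] + b[1][i] * b[1][j] for j in range(2)] for i in range(2)]
    t = random.random()
    mt = [[(1 - t) * (1 if i == j else 0) + t * m[i][j] for j in range(2)]
          for i in range(2)]              # (1 - t) Id + t M, i.e. DT_t
    lhs = det2(mt) ** 0.5                 # det(DT_t)^{1/d} with d = 2
    rhs = (1 - t) + t * det2(m) ** 0.5    # (1 - t) + t det(DT)^{1/d}
    ok = ok and lhs >= rhs - 1e-9
print(ok)  # True
```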
\subsection{Barycenters and translation invariance of ${\rm{P}}_\Om^2[\mu]$}
Suppose for now that $\Omega = \mathbf{R}^d$, so we do not have to worry about boundary issues. Then there are a few symmetries which one can exploit in Question \ref{question:nonexpansive?}. First, the projection operator ${\rm{P}}_{\Omega}^p$ commutes with translations. When $p=2$, the projection also preserves barycenters:
\begin{prop}\label{prop:barycenters}
Let $\mu \in \mathcal{P}_2(\mathbf{R}^d)$ and $\rho:= {\rm{P}}_{\mathbf{R}^d}^2[\mu]$. Then
\begin{align*}
\int_ {\mathbf{R}^d} x \d\rho = \int_{ \mathbf{R}^d} x \d\mu.
\end{align*}
\end{prop}
\begin{proof}
For $h \in \mathbf{R}^d$, let $\tau \colon x \mapsto x+h$ denote the translation map by $h$. Then $\tau_{\sharp}\rho \in \cK_2( \mathbf{R}^d)$. Thus if $\gamma$ is an optimal plan between $\mu$ and $\rho$, then by Lemma \ref{lem:plan_translate} $(\id,\tau)_{\sharp}\gamma$ is optimal for $W_2(\mu,\tau_{\sharp}\rho)$. Thus, by the optimality of both $\rho$ and $\gamma$,
\begin{align*}
W^2_2(\mu,\rho)&=\int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y|^2 \d\gamma \leq W_2^2(\mu,\tau_{\sharp}\rho)= \int_{ \mathbf{R}^d\times\mathbf{R}^d} |x-y|^2 \d[(\id,\tau)_{\sharp}\gamma]\\
& = \int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y-h|^2 \d\gamma = \int_{ \mathbf{R}^d\times \mathbf{R}^d}|x-y|^2\d\gamma - 2h \cdot \int_{ \mathbf{R}^d\times \mathbf{R}^d} (x-y)\d\gamma + |h|^2.
\end{align*}
We conclude that $\displaystyle\int_{ \mathbf{R}^d\times \mathbf{R}^d}(x-y)\d\gamma = 0$, since otherwise one could set $\displaystyle h:=\lambda\int_{ \mathbf{R}^d\times \mathbf{R}^d}(x-y)\d\gamma$ and by sending $\lambda\downarrow 0$, the previous inequality would yield a contradiction. The result follows.
\end{proof}
\begin{lem}\label{lem:plan_translate}
Let $\mu,\nu\in\mathcal{P}_2(\mathbf{R}^d)$ and let $\nu'\in\mathcal{P}_2(\mathbf{R}^d)$ be a translation of $\nu$, i.e. $\nu'=\tau_\sharp\nu$, where $\tau:x\mapsto x+h$ (for some given $h\in\mathbf{R}^d$). If $\gamma\in\mathcal{P}_2(\mathbf{R}^d\times\mathbf{R}^d)$ is optimal for $W_2(\mu,\nu)$, then $(\id,\tau)_\sharp\gamma$ is optimal for $W_2(\mu,\nu').$
\end{lem}
\begin{proof}
It is immediate to check that $\tilde\gamma:=(\id,\tau)_\sharp\gamma$ is an admissible plan for $W_2(\mu,\nu').$
By \cite[Theorem 5.10]{Vil} (see also \cite[Section 1.6.2]{San}) it is enough to show that $\tilde\gamma$ has cyclically monotone support. Let $n\in\mathbb{N}$. We notice that a collection of $n$ points from $\spt(\tilde\gamma)$ has the form $(x_i,y_i+h)_{i=1}^n$, where $(x_i,y_i)\in\spt(\gamma)$, $i\in\{1,\dots,n\}$. Let $\sigma:\{1,\dots,n\}\to\{1,\dots,n\}$ be a permutation of $n$ letters. Then we have
\begin{align*}
\sum_{i=1}^n |x_i-y_i-h|^2 & = \sum_{i=1}^n |x_i-y_i|^2-2\sum_{i=1}^n(x_i-y_i)\cdot h + n|h|^2\\
&\le \sum_{i=1}^n |x_i-y_{\sigma(i)}|^2-2\sum_{i=1}^n(x_i-y_i)\cdot h + n|h|^2 = \sum_{i=1}^n |x_i-y_{\sigma(i)}-h|^2,
\end{align*}
where in the inequality we have used the cyclic monotonicity of $\spt(\gamma)$. The result follows.
\end{proof}
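Lemma \ref{lem:plan_translate} can also be illustrated by brute force on atomic measures: translating the optimal permutation coupling between two uniform atomic measures yields exactly the optimal cost for the translated target. A small numerical check (function names are ours):

```python
from itertools import permutations
import random

random.seed(2)

def w2_sq(xs, ys):
    # brute-force squared W_2 between uniform atomic measures (Birkhoff)
    n = len(xs)
    def cost(p):
        return sum((xs[i][0] - ys[p[i]][0]) ** 2 +
                   (xs[i][1] - ys[p[i]][1]) ** 2 for i in range(n)) / n
    return min(cost(p) for p in permutations(range(n)))

n, h = 5, (0.7, -0.3)
mu = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
nu = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
nu_h = [(y0 + h[0], y1 + h[1]) for (y0, y1) in nu]   # nu translated by h

# optimal permutation for (mu, nu) ...
best = min(permutations(range(n)),
           key=lambda p: sum((mu[i][0] - nu[p[i]][0]) ** 2 +
                             (mu[i][1] - nu[p[i]][1]) ** 2 for i in range(n)))
# ... remains optimal after translating the second marginal:
translated_cost = sum((mu[i][0] - nu[best[i]][0] - h[0]) ** 2 +
                      (mu[i][1] - nu[best[i]][1] - h[1]) ** 2 for i in range(n)) / n
print(abs(translated_cost - w2_sq(mu, nu_h)) < 1e-9)  # True
```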
From these observations one obtains the following ``translation invariance" when $p = 2$ and the ambient space is $ \mathbf{R}^d$.
\begin{prop}\label{prop:translation_invariance}
Let $\mu,\nu \in \mathcal{P}_2(\mathbf{R}^d)$ and $\nu'$ a translate of $\nu$. Then
\begin{align*}
W_2^2(\mu,\nu) - W_2^2(\textnormal{P}_{ \mathbf{R}^d}^2[\mu],\textnormal{P}_{ \mathbf{R}^d}^2[\nu]) = W_2^2(\mu,\nu') - W_2^2(\textnormal{P}_{ \mathbf{R}^d}^2[\mu],\textnormal{P}_{ \mathbf{R}^d}^2[\nu']).
\end{align*}
\end{prop}
\begin{proof}
Denote $\rho := {\rm{P}}_{ \mathbf{R}^d}^2[\mu]$, $\sigma := {\rm{P}}_{ \mathbf{R}^d}^2[\nu]$, and $\sigma' := {\rm{P}}_{ \mathbf{R}^d}^2[\nu']$. Let $\gamma\in\Pi_o(\mu,\nu)$ and $\eta\in\Pi_o(\rho,\sigma)$. If $\tau \colon x \mapsto x+h$ is the translation map which pushes forward $\nu$ onto $\nu'$, then we can construct optimal plans $\gamma',\eta'$ from $\mu,\rho$ to $\nu',\sigma'$, respectively, by $\gamma' = (\id,\tau)_{\sharp}\gamma$ and $\eta' = (\id,\tau)_{\sharp}\eta$ (see Lemma \ref{lem:plan_translate}; here we have also used the fact that the projection of the translate of a measure is the translate of the projection).
Thus
\begin{align*}
W_2^2(\mu,\nu') - W_2^2(\rho,\sigma')
&= \int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y-h|^2 \d\gamma - \int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y-h|^2 \d\eta
\\&= \int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y|^2 \d\gamma - \int_{ \mathbf{R}^d\times \mathbf{R}^d} |x-y|^2 \d\eta\\
& - 2h \cdot \int_{ \mathbf{R}^d\times \mathbf{R}^d} (x-y) \d\gamma + 2h \cdot \int_{ \mathbf{R}^d\times \mathbf{R}^d} (x-y) \d\eta
\\&= W_2^2(\mu,\nu) - W_2^2(\rho,\sigma) + 2h \cdot \paren{\int_{ \mathbf{R}^d} x \d\rho - \int_{ \mathbf{R}^d} x \d\mu} + 2h \cdot \paren{\int_{ \mathbf{R}^d} y \d\nu - \int_{ \mathbf{R}^d} y \d\sigma},
\end{align*}
and the last two terms vanish by Proposition \ref{prop:barycenters}.
\end{proof}
In particular, any counterexample $\mu,\nu$ to nonexpansiveness must remain a counterexample when $\mu,\nu$ are replaced by translates of themselves. This already eliminates several candidates $\mu,\nu$ that may seem like potential counterexamples at first sight.
\section{Main Results}\label{sec:results}
Throughout this section, let $\mu,\nu \in \mathcal{P}_p(\Omega)$, and set $\rho = {\rm{P}}_{\Omega}^p(\mu)$ and $\sigma = {\rm{P}}_{\Omega}^p(\nu)$. Denote the optimal transport plan from $\rho$ to $\sigma$ by $\eta$. Note that since $\rho,\sigma$ are absolutely continuous, $\eta$ is induced by a map.
\subsection{Weak nonexpansiveness of the projection when $p = 2$.}
In the theorem below one bounds the distance squared between $\mu$ and $\nu$ by the transportation cost of a slightly suboptimal transport plan. This is a sort of ``weak nonexpansiveness.''
\begin{thm}\label{thm:almost_lipschitz}
Let $\Omega\subseteq \mathbf{R}^d$ be a closed convex set. Let $T,U:\Omega\to\Omega$ stand for the optimal maps from $\rho,\sigma$ to $\mu,\nu$, respectively. Take $p = 2$ and $\gamma := (T,U)_{\sharp}\eta\in\Pi(\mu,\nu)$. Then
\begin{align*}
W_2^2(\rho,\sigma) \leq \int_{\Omega\times\Omega} |x-y|^2 \d\gamma(x,y).
\end{align*}
\end{thm}
\begin{proof}
One can write
\begin{align*}
\int_{\Omega\times\Omega} |x-y|^2 \d\gamma
&= \int_{\Omega\times\Omega} |T(x)-U(y)|^2 \d\eta
= \int_{\Omega\times\Omega} |x-y + T(x)-x + y-U(y)|^2 \d\eta
\\&= \int_{\Omega\times\Omega} |x-y|^2 \d\eta + 2\int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x+y-U(y)) \d\eta\\
& + \int_{\Omega\times\Omega} |T(x)-x+y-U(y)|^2 \d\eta
\\&\geq \int_{\Omega\times\Omega} |x-y|^2 \d\eta + 2\int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta + 2\int_{\Omega\times\Omega} (y-x) \cdot (U(y)-y) \d\eta.
\end{align*}
Thus it suffices to show that
\begin{align*}
\int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta \geq 0
\qquad \text{and (by symmetry)} \qquad
\int_{\Omega\times\Omega} (y-x) \cdot (U(y)-y) \d\eta \geq 0.
\end{align*}
For $t \in (0,1)$, let $\pi_t(x,y) = (1-t)x+ty$. Then $\rho_t:=(\pi_t)_{\sharp}\eta \in \cK_2(\Omega)$ by the geodesic convexity of $\mathcal{K}_2(\Omega)$. The optimality of $\rho$ in the definition of $\textnormal{P}_{\Omega}^2(\mu)$, together with the fact that $\tilde\eta:=(T,\pi_t)_\sharp\eta\in \Pi(\mu,\rho_t)$, implies
\begin{align*}
W_2^2(\mu,\rho)
&\le W_2^2(\mu,\rho_t)\leq \int_{\Omega\times\Omega}|x-y|^2\d\tilde\eta= \int_{\Omega\times\Omega} |T(x)-\pi_t(x,y)|^2 \d\eta
= \int_{\Omega\times\Omega} |T(x)-x + t(x-y)|^2 \d\eta\\
&= \int_{\Omega\times\Omega} |T(x)-x|^2 \d\eta + 2t \int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta + t^2 \int_{\Omega\times\Omega} |x-y|^2 \d\eta\\
&=W_2^2(\mu,\rho)+ 2t \int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta+ t^2W_2^2(\rho,\sigma).
\end{align*}
Thus, we have obtained
\begin{align*}
-tW_2^2(\rho,\sigma)\le 2 \int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta.
\end{align*}
Letting $t \rightarrow 0$, we conclude that
\begin{align*}
\int_{\Omega\times\Omega} (x-y) \cdot (T(x)-x) \d\eta \geq 0,
\end{align*}
as desired.
\end{proof}
When $\Omega \subseteq \mathbf{R}$, this theorem is enough to deduce that the answer to Question \ref{question:nonexpansive?} is \emph{yes}.
\begin{cor}\label{cor:1d}
Suppose $\Omega \subseteq \mathbf{R}$. Then ${\rm{P}}_{\Omega}^2$ is nonexpansive.
\end{cor}
\begin{proof}
The plan $\gamma$ defined in Theorem \ref{thm:almost_lipschitz} has monotone support, since it is obtained from the monotone plan $\eta$ through the nondecreasing optimal maps $T,U$; hence it is optimal.
\end{proof}
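The optimality of monotone plans used here can be verified by brute force on small samples: the sorted matching attains the minimum of the quadratic cost over all permutations. A quick numerical illustration (not part of the proof):

```python
from itertools import permutations
import random

random.seed(3)
xs = [random.uniform(-1, 1) for _ in range(6)]
ys = [random.uniform(-1, 1) for _ in range(6)]

sorted_cost = sum((a - b) ** 2 for a, b in zip(sorted(xs), sorted(ys)))
brute_cost = min(sum((xs[i] - ys[p[i]]) ** 2 for i in range(6))
                 for p in permutations(range(6)))
print(abs(sorted_cost - brute_cost) < 1e-9)  # True: the monotone matching is optimal
```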
Theorem \ref{thm:almost_lipschitz} also implies nonexpansiveness when $\Pi(\mu,\nu)$ is a singleton. This is for instance the case when one of the measures $\mu,\nu$ is a Dirac mass.
\begin{cor}\label{cor:dirac_nonexpansive}
Suppose that $\mu,\nu$ are such that $\Pi(\mu,\nu)$ is a singleton. Then
\begin{align*}
W_2(\rho,\sigma) \leq W_2(\mu,\nu).
\end{align*}
\end{cor}
\begin{proof}
There is only one transport plan between $\mu$ and $\nu$, so using the notation of Theorem \ref{thm:almost_lipschitz}, $\gamma\in\Pi(\mu,\nu)$ must be this plan. The result follows.
\end{proof}
\begin{rem}
In general, $\gamma\in\Pi(\mu,\nu)$ in the statement of Theorem \ref{thm:almost_lipschitz} does not need to be optimal. In the case of $\Omega = \mathbf{R}^2$, consider for instance $\mu = \frac{1}{2}\delta_{(R,0)} + \frac{1}{2}\delta_{(-R,0)}$ and $\nu = \frac{1}{2}\delta_{(t,1)} + \frac{1}{2}\delta_{(-t,-1)}$, where $R$ is large and $t$ is small. For $t > 0$, the optimal map from $\mu$ to $\nu$ sends all the mass from $(R,0)$ to $(t,1)$, and all the mass from $(-R,0)$ to $(-t,-1)$. On the other hand, for $t < 0$, the optimal map sends all the mass from $(R,0)$ to $(-t,-1)$, and all the mass from $(-R,0)$ to $(t,1)$. This means that the optimal plan from $\mu$ to $\nu$ does not vary continuously with $t$ (it is discontinuous at $t = 0$). However, one can see that the plan $\gamma$ does depend continuously on $t$, so it cannot be optimal.
\end{rem}
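The discontinuity described in the remark can be seen by comparing the costs of the two candidate pairings directly. The small script below (our own naming) determines the optimal pairing for $R=10$ and $t=\pm0.1$:

```python
def best_pairing(R, t):
    # mu = (1/2) delta_{(R,0)} + (1/2) delta_{(-R,0)},
    # nu = (1/2) delta_{(t,1)} + (1/2) delta_{(-t,-1)}
    direct = ((R - t) ** 2 + 1) + ((-R + t) ** 2 + 1)   # (R,0)->(t,1), (-R,0)->(-t,-1)
    crossed = ((R + t) ** 2 + 1) + ((-R - t) ** 2 + 1)  # (R,0)->(-t,-1), (-R,0)->(t,1)
    return "direct" if direct < crossed else "crossed"

print(best_pairing(10, 0.1), best_pairing(10, -0.1))  # direct crossed
```

The optimal pairing flips as $t$ crosses $0$, while the plan $\gamma$ of Theorem \ref{thm:almost_lipschitz} varies continuously, so $\gamma$ cannot be optimal for all $t$.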
\subsection{Failure of nonexpansiveness of the projection for $p=1 + o(1)$}
Some restriction on $p$ in the statement of Proposition \ref{prop:small_p_counterexample} is necessary: for $p$ very close to $1$, its proof constructs an explicit counterexample to the nonexpansive property of ${\rm{P}}_{\mathbf{R}^d}^p$ onto $\cK_p(\mathbf{R}^d)$.
\begin{prop}\label{prop:small_p_counterexample}
Let $\Omega = \mathbf{R}^d$ with $d > 1$. Then there exists $p(d) > 1$ such that ${\rm{P}}_{\Omega}^p$ is not nonexpansive with respect to the $p$-Wasserstein distance for $1 < p < p(d)$. In fact, one can take
\begin{align*}
p(d) = 1 + \frac{1}{O(d^2 \log d)}.
\end{align*}
\end{prop}
In the proof below we use the following conventions: for positive quantities $A,B$ possibly depending on various parameters, we write $A \lesssim B$ if $A \leq CB$ with $C$ an absolute constant, $A \gtrsim B$ if $B \lesssim A$, and $A \sim B$ if $A \lesssim B \lesssim A$.
\begin{proof}[Proof of Proposition \ref{prop:small_p_counterexample}]
Let $R>0$ be the radius of the ball in $\mathbf{R}^d$ of volume $\frac{1}{2}$. Let $\mu = \frac{1}{2}\delta_{(0,\dots,0)} + \frac{1}{2}\delta_{(2R,0,\dots,0)}$ and $\nu = \delta_{(0,\dots,0)}$. Then $\rho = {\rm{P}}_{\Omega}^p(\mu)$ and $\sigma = {\rm{P}}_{\Omega}^p(\nu)$ are the restrictions of Lebesgue measure to
\begin{align*}
\{x \in \mathbf{R}^d : |x| \leq R \text{ or } |x-(2R,0,\dots,0)| \leq R\}
\qquad \text{and} \qquad
\{x \in \mathbf{R}^d : |x| \leq 2^{1/d}R\}
\end{align*}
respectively. Let $\gamma\in\Pi(\mu,\nu)$ and $\eta\in\Pi(\rho,\sigma)$ be arbitrary transport plans.
We will show that
\begin{align}\label{eqn:W1_ctrexample}
\int_{\mathbf{R}^d\times\mathbf{R}^d} |x-y| \d\eta > \int_{\mathbf{R}^d\times\mathbf{R}^d} |x-y| \d\gamma
\end{align}
with an explicit lower bound on the difference; then we obtain the desired inequality
\begin{align*}
\int_{\mathbf{R}^d\times\mathbf{R}^d} |x-y|^p \d\eta > \int_{\mathbf{R}^d\times\mathbf{R}^d} |x-y|^p \d\gamma
\end{align*}
for $p \in [1,p(d))$ by continuity in $p$.
The right hand side of \eqref{eqn:W1_ctrexample} is necessarily equal to
\begin{align}\label{eqn:W1_mu_nu}
\int_{\mathbf{R}^d\times\mathbf{R}^d}|x-y|\d\gamma = R = \int_{\R^d} x_1 \d\rho - \int_{\R^d} y_1 \d\sigma = \int_{\mathbf{R}^d\times\mathbf{R}^d} (x_1-y_1) \d\eta.
\end{align}
Thus to get a quantitative form of \eqref{eqn:W1_ctrexample}, it is enough to estimate
\begin{align}\label{eqn:to_lbd}
\int_{\mathbf{R}^d\times\mathbf{R}^d} (|x-y|-|x_1-y_1|) \d\eta
\end{align}
from below. Given $x \in \mathbf{R}^d$, denote $x' = (x_2,\dots,x_d) \in \mathbf{R}^{d-1}$. Let
\begin{align*}
E = \{y \in \mathbf{R}^d \colon 1.1^{1/d}R \leq |y'| \leq 1.9^{1/d}R \text{ and } |y_1| \leq \sqrt{2^{2/d}-1.9^{2/d}}R\} \subseteq \{y \in \mathbf{R}^d \colon |y| \leq 2^{1/d}R\} = \spt\sigma.
\end{align*}
Suppose $(x,y) \in \spt\eta$ with $y \in E$. Then $|x'| \leq R$, so
\begin{align*}
|x'-y'| \geq (1.1^{1/d}-1)R
= R \int_{0}^{1/d} 1.1^t \log 1.1 \d t
\gtrsim R/d.
\end{align*}
On the other hand,
\begin{align*}
|x-y| \leq \diam\{\spt\rho \cup \spt\sigma\} \lesssim R.
\end{align*}
Combining these two facts yields
\begin{align*}
|x-y| - |x_1-y_1| \gtrsim \Big(\sqrt{1+\frac{1}{d^2}} - 1\Big)R \gtrsim \frac{R}{d^2}.
\end{align*}
Plugging this into \eqref{eqn:to_lbd} and recalling \eqref{eqn:W1_mu_nu}, we deduce that
\begin{align}\label{eqn:abstract_lbd}
\int_{\R^d\times\R^d} |x-y| \d\eta - \int_{\R^d\times\R^d} |x-y| \d\gamma &\geq \int_{\R^d\times\R^d} (|x-y| - |x_1-y_1|) \d\eta\\
&\nonumber \geq \int_{\R^d \times E} (|x-y|-|x_1-y_1|) \d\eta
\gtrsim \sigma(E) \frac{R}{d^2}.
\end{align}
In computing $\sigma(E)$, it will be convenient to write $\Vol_n(r)$ for the volume of the $n$-dimensional ball of radius $r$, and $\Rad_n(v)$ for the radius of the $n$-dimensional ball of volume $v$. Then $R = \Rad_d(1/2)$ by definition, and by classical formulas,
\begin{align*}
\Vol_n(r) = \frac{\pi^{n/2}}{\Gamma(n/2+1)} r^n
\qquad \text{ and } \qquad
\Rad_n(v) = \frac{\Gamma(n/2+1)^{1/n}}{\sqrt{\pi}} v^{1/n} \sim \sqrt{n} v^{1/n}.
\end{align*}
Thus $R \sim \sqrt{d}$, and
\begin{align*}
\sigma(E) &= 2\sqrt{2^{2/d} - 1.9^{2/d}} R [\Vol_{d-1}(1.9^{1/d}R) - \Vol_{d-1}(1.1^{1/d}R)]
\\&\sim \frac{R}{\sqrt{d}} \Vol_{d-1}(R)
= \frac{1}{\sqrt{d}} \frac{R\Vol_{d-1}(R)}{\Vol_d(R)} \Vol_d(R)
\sim \frac{1}{\sqrt{d}} \frac{\Gamma(d/2+1)}{\Gamma((d-1)/2+1)}
\sim 1,
\end{align*}
where the final estimate follows from Stirling's asymptotic for the Gamma function and from the fact that by the choice of $R$, $\Vol_d(R)=\frac12$.
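The estimate $\sigma(E)\sim 1$ can be verified numerically from the closed formulas for $\Vol_n$ and $\Rad_n$; for the values of $d$ tested below, $\sigma(E)$ is numerically about $0.1$, bounded away from $0$ uniformly in $d$. A sanity check using only Python's standard library:

```python
import math

def rad(n, v):
    # radius of the n-dimensional ball of volume v
    return (v * math.gamma(n / 2 + 1)) ** (1 / n) / math.sqrt(math.pi)

def vol(n, r):
    # volume of the n-dimensional ball of radius r
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

def sigma_E(d):
    # sigma(E) = 2 sqrt(2^{2/d} - 1.9^{2/d}) R [Vol_{d-1}(1.9^{1/d} R) - Vol_{d-1}(1.1^{1/d} R)]
    R = rad(d, 0.5)
    return (2 * math.sqrt(2 ** (2 / d) - 1.9 ** (2 / d)) * R
            * (vol(d - 1, 1.9 ** (1 / d) * R) - vol(d - 1, 1.1 ** (1 / d) * R)))

# sigma(E) stays of order 1 (numerically about 0.1) as d grows:
print([round(sigma_E(d), 3) for d in (20, 60, 150)])
```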
From \eqref{eqn:abstract_lbd} we therefore conclude
\begin{align}\label{eqn:W1_case}
\int_{\R^d\times\R^d} |x-y| \d\eta - \int_{\R^d\times\R^d} |x-y| \d\gamma \gtrsim \frac{1}{d^{3/2}}.
\end{align}
The inequality we need to prove is
\begin{align}\label{eqn:goal}
\int_{\R^d\times\R^d} |x-y|^p \d\gamma < \int_{\R^d\times\R^d} |x-y|^p \d\eta
\end{align}
for $p \in (1,p(d))$. If $c > 0$ is the implied constant in \eqref{eqn:W1_case}, then \eqref{eqn:goal} will follow from \eqref{eqn:W1_case} as long as $p$ is small enough that
\begin{align*}
\bracket{\int_{\R^d\times\R^d}|x-y|^p\d\gamma - \int_{\R^d\times\R^d}|x-y|\d\gamma} + \bracket{\int_{\R^d\times\R^d}|x-y|\d\eta - \int_{\R^d\times\R^d}|x-y|^p\d\eta} < \frac{c}{d^{3/2}}.
\end{align*}
The first term in brackets is simply
\begin{align*}
\int_{\R^d\times\R^d} |x-y|^p \d\gamma - \int_{\R^d\times\R^d} |x-y| \d\gamma = 2^{p-1}R^p - R.
\end{align*}
Because of the general inequality $t-t^p \leq p-1$ for all $t \geq 0$ and $p\ge 1$, the second term in brackets must be at most $p-1$. Thus \eqref{eqn:goal} holds whenever
\begin{align*}
2^{p-1}R^p - R + p - 1 < \frac{c}{d^{3/2}}.
\end{align*}
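The elementary inequality $t-t^p\le p-1$ used above can be confirmed on a grid; for $p>1$ its left hand side is maximized at $t=p^{-1/(p-1)}$ with maximal value $p^{-1/(p-1)}(1-1/p)\le p-1$, and for $p=1$ it vanishes identically. A quick check:

```python
# Grid check of t - t**p <= p - 1 for t >= 0 and p >= 1; for p > 1 the maximum
# over t is p^{-1/(p-1)} * (1 - 1/p) <= p - 1, and for p = 1 the left side is 0.
ok = all(t - t ** p <= p - 1 + 1e-12
         for p in (1.0, 1.001, 1.01, 1.1, 1.5, 2.0)
         for t in (i / 100 for i in range(501)))
print(ok)  # True
```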
One can estimate
\begin{align*}
2^{p-1}R^p - R \lesssim (p-1) R^p \log R
\lesssim (p-1) d^{p/2} \log d
\end{align*}
for $p$ bounded, so it is enough if
\begin{align*}
(p-1)(1+d^{p/2}\log d) < \frac{c'}{d^{3/2}}
\end{align*}
for some smaller absolute constant $c' > 0$. This is true for
\begin{align*}
p < 1 + \frac{1}{O(d^2 \log d)},
\end{align*}
as long as the implied constant is sufficiently large.
\end{proof}
\begin{rem}
Let us note as a consequence of Corollary \ref{cor:dirac_nonexpansive} that this construction is not a counterexample when $p =2$.
\end{rem}
\medskip
\noindent
{\sc Acknowledgements}\ \ We thank Hugo Lavenant for his feedback on an earlier version of the manuscript.
\section{Introduction}
Large scale historically significant phenomena can trigger societal transformations by shifting citizens' preferences and beliefs on important metrics, such as trust in institutions and the role of government in the economy. How such change happens, however, has been difficult to examine. Previous studies have relied mostly on survey data that is collected with large time gaps, and on cross-sectional samples of respondents, making it difficult to understand how citizens process new information while events unfold, and what factors might change or reinforce their prior beliefs. In this study, we implement a seven-wave longitudinal survey on a representative sample of Americans that we track from April to October 2020, an eventful period characterized by a public health and economic crisis that occurred during a Presidential election campaign.\footnote{One concern with using subjective response scales to measure preferences or beliefs is that respondents may have different opinions about what some scales or questions might mean. This is less problematic in a longitudinal study such as ours.} Across survey waves, we track respondents' preferences for welfare and temporary relief policies, their trust in institutions, and how they process information about the crisis. In addition to a rich set of socio-economic and demographic controls, we record respondents' direct and indirect exposure to the crisis as well as their media diet.
\par In line with previous studies documenting the impact of economic crises, we find that during the COVID-19 pandemic Americans reduced their trust in most institutions while increasing their support for a greater role of government in the economy \citep{margalit2019, giuliano2014, garand2010, cogley2008, piketty1995}. Our methodology allows us to see that such preferences and beliefs are more likely to change when citizens are directly affected by the crisis, rather than through mere exposure. Losing a large portion of income or having a family member or close friend hospitalized with the virus are associated with an increased support for welfare policies such as universal basic income, assistance to the elderly, financial assistance to low income students, and keep prices under control. Income loss was also associated with higher support for temporary relief spending on policies such as cash transfers to families and businesses, and protection of essential workers, while at the same time it decreased support for other economic policies, such as helping industry grow. This suggests that citizens did not necessarily increase their support for greater public spending overall, but rather differentiated across types of government intervention.
\par We further support these findings by running a series of checks to control for endogeneity and alternative assumptions. In Section \ref{robust} and in the Appendix, we replicate the analysis using an alternative measure of direct economic shock: whether a respondent lost at least 20\% of their income between the first and last wave of the survey - that is, whether they incurred a more permanent income shock. We find that the effects remain almost identical. As we track multiple outcomes across welfare and institution trust areas, it is also possible that a higher number of outcomes increases the probability of detecting significant results. To mitigate this risk, we undertake multiple hypothesis testing following \citet{anderson2008multiple}, and then replicate the analysis using Average Effect Sizes by bundling outcome measures, as in \citet{giuliano2014}. Further, since some measures of shock might be correlated with some of our controls, such as income and age, we reduce risks of endogeneity by replicating our analysis using entropy balancing weights, thus reducing differences with respect to a set of balance conditions between respondents that incurred and those who did not incur a shock. Lastly, we replicate the regressions using alternative outcomes, alternative regression models (e.g. logit and fixed effects), and voting intentions instead of party affiliation. All the results remain similar and consistent.
\par The COVID-19 crisis in the U.S. also occurred at a time of a divisive Presidential election campaign. Previous studies suggest that citizens might make sense of an ongoing crisis by engaging in motivated reasoning that confirms their priors, thus potentially cancelling out the effects of direct shocks on preferences \citep{alesina2020, kranton2016, cook2016, lewandowsky2012, nyhan2010}. To understand the mitigating or reinforcing role of political identity on preferences, we first measure the partisan gap between Democrats and Republicans on institutional trust and policy preferences. We then show that this gap increased during the crisis, an effect that is largely driven by respondents who consumed predominantly partisan leaning news. Teasing out the mechanisms behind this trend, we see that respondents who consumed partisan news were more likely to misunderstand the gravity of the crisis. By May 2020, Republicans who consumed predominantly Republican leaning news were more likely to underestimate the COVID-19 death rate, while Democrats who consumed Democratically leaning news were more likely to overestimate it. To study whether erroneous beliefs were a function of motivated reasoning or simply lack of exposure to similar information sources \citep{alesina2020, kranton2016, cook2016, lewandowsky2012, nyhan2010}, we implemented an experiment where half of the respondents were suggested, without any incentive, to check the COVID-19 death rate from the U.S. Centers for Disease Control and Prevention (C.D.C.). We find that this light-touch intervention significantly re-aligns respondents' beliefs with those of the C.D.C. and has a directional positive effect in changing their judgment of how public authorities handled the crisis. In the last wave of the survey, around four months later, we find that the treatment had a persistent effect. 
Using a causal forest methodology to estimate heterogeneous treatment effects \citep{athey2019estimating}, we find that the experiment was most effective on more educated respondents who consumed Democratic leaning news. Conversely, direct and indirect economic and health shocks caused by the crisis played a comparatively less important role. This suggests that a direct experience with a crisis might not necessarily make citizens more responsive to information campaigns aimed at re-calibrating their (mis)perceptions.
\par Our study makes several contributions. First, we are one of the first to investigate the role of lived experiences and media consumption during a crisis on the same group of respondents, thus contributing to bridging two streams of studies on the role of crises on welfare preferences and political divisions. Second, our methodology allows us to demonstrate that changes in institutional trust and preferences for public policies can occur very rapidly during a crisis, in some cases in as little as a few weeks, as opposed to extended periods as previously suggested. Third, we show that changes in political polarization on policies and institutional trust are more easily explained by the consumption of politically leaning news rather than by direct experiences. Lastly, we contribute to the growing literature on survey experiments that use information treatments to reduce biased perceptions and demonstrate that these low-cost interventions can have long-term effects regardless of a person's experience with a crisis.
\par The paper is structured as follows. Section 1 describes the methodology and the outcomes we tracked over time - namely, support for welfare and temporary relief policies, trust in institutions, and understanding of the gravity of the crisis. In section 2, we explain how we define the different types of shocks and how we calculated respondents' biased media consumption. In the third section, we disentangle how shocks and media shaped Americans' beliefs during the crisis. We then zoom in on the effects of biased media consumption on the understanding of the gravity of the crisis, and present the results of the survey experiment to correct for misinformation. Section 4 reports a set of robustness checks, showing that our results are consistent across several changes in assumptions and model specifications. We conclude with a summary of our findings in the final section.\\
\section{Methodology}
We partnered with NORC at the University of Chicago, the organization responsible for administering the General Social Survey (GSS), to field a survey among a sub-sample of their AmeriSpeak® Panel.\footnote{Funded and operated by NORC at the University of Chicago, AmeriSpeak® is a probability-based multi-client household panel sample designed to be representative of the US household population. Randomly selected US households are sampled using area probability and address-based sampling, with a known, non-zero probability of selection from the NORC National Sample Frame. These selected households are then contacted by US mail, telephone, and field interviewers (face to face). The panel provides sample coverage of approximately 97\% of the U.S. household population. Those excluded from the sample include people with P.O. Box only addresses, some addresses not listed in the USPS Delivery Sequence File, and some addresses in newly constructed dwellings. While most AmeriSpeak households participate in surveys by web, non-internet households can participate in AmeriSpeak surveys by telephone. Households without conventional internet access but with web access via smartphones are allowed to participate in AmeriSpeak surveys by web.} We recruited about 1,440 U.S. citizens (see Table \ref{tab:summary_stats} in the Appendix for a summary of demographic and socio-economic characteristics), whom we surveyed seven times between April and October 2020.\footnote{The use of a longitudinal multi-wave panel survey has several advantages. First, we are able to choose the timing and frequency of our survey waves in a way that best allows us to answer our research questions, without the need to wait for two years or more in between data collection periods. Second, we can ask the same set of questions more than twice, thus reducing any possible volatility or inconsistency in respondents' answers.
Third, we minimize the risk of recollection bias, as events that occurred in a person's life are more salient, giving respondents a better opportunity to provide more accurate answers about their economic and health situation during a crisis. At the same time, this methodology does not force us to ask questions about preferences and shocks within the same survey wave, which might bias respondents' answers. Fourth, because we follow the same panel of respondents over time, we have baseline data that we can compare against when evaluating changes to their views, accounting for their point of departure. This is particularly important when analyzing whether crises lead to convergence (e.g. increasing support for welfare policies among those who were previously not supporting them) or polarization (e.g. decreasing or increasing support for a policy among those who did not have a strong opinion).}
\par In the first wave of the survey we collected baseline data on the main outcomes of interest (e.g. policy preferences and trust in institutions) as well as media consumption and beliefs (e.g. political ideology). The subsequent weekly waves allowed us to track respondents' lived experiences during the most dramatic first month of the pandemic. The next two waves were administered on a monthly basis, on the weeks commencing May 18 and June 22, 2020, respectively, and recorded respondents' perception of the gravity of the crisis. These waves focused on how Americans were coping in the weeks immediately after a possible health or economic shock, while the event was still vivid in their minds.\footnote{In order to minimize possible priming bias, we always left the shock questions at the end of the survey. Further, while we collected information on economic and health shocks in every wave, in the last wave we again asked respondents to report these shocks on a monthly and more detailed basis.} Lastly, we implemented a seventh and last wave of the survey on the week commencing October 19, 2020. We purposely timed the last wave to track any changes to respondents' beliefs and preferences immediately prior to the Presidential elections. The summary of the questions asked in each wave is presented in \autoref{tab:questions}.
\subsection{Outcomes}
Across survey waves, we collected participants' responses to the following set of outcomes: (i) preferences for welfare policies, (ii) preferences for temporary relief policies, (iii) trust in institutions, and (iv) how respondents perceived the gravity of the crisis.
\textbf{Preferences for welfare policies}. To study how the crisis affected preferences for welfare policies, we administered a module of questions based on the GSS questionnaire, which asks respondents whether they think it should be the government's responsibility to intervene in a series of policy areas. Respondents can provide an answer for each of these policies on a 4-point scale from ``\textit{Definitely should not be}'' to ``\textit{Definitely should be}.'' The policy areas are the following: (1) provide mental health care for persons with mental illnesses, (2) help individuals affected by natural disasters, (3) keep prices under control, (4) provide a decent standard of living for the old, (5) provide a decent standard of living for the unemployed, (6) provide everyone with a guaranteed basic income, (7) provide industry with the help it needs to grow, (8) reduce income differences between the rich and the poor, (9) give financial help to university students from low-income families.\footnote{In our survey we replicate the exact wording of the GSS survey. Later we compare our baseline findings to previous GSS waves.} We asked these questions in waves 1, 4, and 7. In addition, we also asked respondents a question about universal healthcare. The question read as follows: ``\textit{Do you favor or oppose a universal health care system covered by the government so that every American can have equal access to health care, even if this means that you will have to pay higher taxes?}"\footnote{Response options were on a 5-point scale from ``\textit{Strongly oppose}'' to ``\textit{Strongly favor}''.}\footnote{We purposely asked this question in a way that encouraged respondents to think carefully about the costs and benefits of a universal healthcare system, and limited saliency bias that might arise from the ongoing crisis on universal health care.} We also asked this question in waves 1, 4, and 7 of our survey. \\
\indent \textbf{Preferences for temporary relief policies}. In addition to tracking Americans' preferences for government intervention in the economy, we also tracked their support for the temporary relief policies that federal and state governments adopted to respond to the crisis. The objective was to see whether government interventions in times of crisis that do not consist of permanent changes to the welfare system were less polarizing \citep{druckman2021, druckman2013}.\footnote{Indeed, recent surveys suggest that, despite deepening partisan divisions, Americans tend to agree on several policy areas. See for instance: \url{https://cgoap.net/} and \url{https://vop.org/wp-content/uploads/2020/08/Common_Ground_Brochure.pdf}.} These temporary policy questions, which we asked in waves 4 and 7, asked respondents to what extent they agreed with the following statements: (1) "\textit{the government should transfer money directly to families and businesses for as long as lockdown measures are kept in place}", (2) "\textit{the government should do more to protect essential workers from contracting the virus}", (3) "\textit{the government should spend more on public healthcare to reduce the number of preventable deaths}". \\
\indent \textbf{Trust in institutions}. Previous studies have documented that economic crises result in loss of trust in institutions \citep{algan2017, dotti2016, giuliano2014}. To measure how trust in institutions might have changed during the crisis, we asked our respondents the following set of questions, which, again, replicates the wording of the GSS: "\textit{How much confidence do you have in the people running the following institutions?}"\footnote{Like the GSS questions, response options were on a 5-point scale, from "\textit{Complete confidence}" to "\textit{No confidence at all}"}. The list of institutions was the following: (1) U.S. Congress and Senate, (2) White House, (3) scientific community, (4) banks and financial institutions, (5) private sector, (6) hospitals and healthcare professionals, (7) health insurance companies. We asked all of these trust questions in waves 1, 4, and 7. \\
\indent \textbf{Information processing and interpretation of reality}. Americans experienced the ongoing crisis not solely through direct experiences, but also through the news they consumed. Since news sources portrayed the gravity of the crisis in different ways, depending on their political alignment, we were interested in understanding how news consumption might also have shaped citizens' evolving views during the crisis. Respondents were asked in the fifth wave of the survey (i.e. week commencing May 18th) to forecast the expected additional death rate by the end of the year, and to judge the work done by the authorities in containing the pandemic. In the sixth wave of the survey, about a month later, we asked respondents to report the current COVID-19 death rate.\footnote{As part of this question, we also randomly exposed half of the respondents to a link to the official figure on the C.D.C. website, as we will explain in more detail in the dedicated section of this paper.} Lastly, in the seventh wave of the survey (i.e. in the third week of October), we asked respondents to reflect on how they believed the U.S. coronavirus death rate compared to the rest of the world. \par
\subsection{Shocks}
One of the objectives of our study is to disentangle the channels that might lead citizens to update their beliefs across the previously listed set of outcomes. In our study, we focus on four types of shocks: direct and indirect, economic and health. Direct shocks refer to major life events that affected the respondents personally, while indirect shocks refer to exposure to a crisis because of the historical time or geographic location. \\
\indent \textbf{Direct economic shocks}. To measure direct economic shocks, we asked all respondents in the last wave of the survey to report their (and their spouse's, if present) monthly gross income between February and October.\footnote{In addition to asking in most waves whether respondents incurred any economic or health shock, in the last wave we asked them to report the exact amount of household income in every month, as well as whether they knew anyone hospitalized each month. This allows us to have a more granular and quantifiable measure of economic shock beyond the timing of our survey waves.} Further, we also asked respondents' (and their spouse's, if present) monthly additional sources of income, the monthly number of hours worked, and whether they received any financial support from the government or non-government organizations at any time during the crisis. This data allows us to estimate both the timing and the magnitude of the economic shocks incurred by respondents' households between waves. We measure direct income shocks in two different ways, and we show that they provide comparable results. In our main specification, we consider whether respondents lost more than 20\% of their income (combining both income from work and other sources) between any two months from February (or the baseline month) to October (or the outcome month) 2020, to capture the effects of a sudden large drop in income. In the Appendix, we show that the results remain unchanged when adopting a less stringent measure of 10\% income loss between any two months.
$$shock_1 = \begin{cases} 1, & \mbox{if } \frac{income_{t}-income_{t-1}}{income_{t-1}}\leq -0.20 \\ 0, & \mbox{otherwise } \end{cases} $$
In our sample of respondents who participated in the first and the last survey waves (i.e. \textit{n}=1,076), we find that about 38\% of respondents lost at least 20\% of their household income between any two months from February to October 2020.\footnote{As reported in Table \ref{tab:balancetable_shock} in the Appendix, respondents who lost at least 20\% of their household income between any two months from March to October 2020 are more likely to be young, to have a low baseline income, and to belong to a racial minority group. Furthermore, women, Democrats, and those who live in a metropolitan area incurred such a negative income shock with marginally significantly higher probability, while co-habitation (or marriage) seems to smooth the financial impact of the pandemic. We control for all these characteristics in our analysis and show that using different specifications does not change our main results.} \\
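The shock definition above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function and variable names are our own, and we assume that ``any two months'' refers to consecutive months, as the $t$ and $t-1$ indices in the formula suggest:

```python
def income_shock(monthly_income, threshold=-0.20):
    """Return 1 if household income dropped by at least |threshold|
    between any two consecutive months, 0 otherwise (a sketch of the
    paper's shock_1 definition; names are illustrative)."""
    drops = [
        (curr - prev) / prev
        for prev, curr in zip(monthly_income, monthly_income[1:])
        if prev > 0  # guard against division by zero
    ]
    return int(any(d <= threshold for d in drops))

# A fall from $4,000 to $3,000 (-25%) in one month triggers the flag
income_shock([4000, 4000, 3000, 3200])   # -> 1
# A slow decline that never loses 20% month-on-month does not
income_shock([4000, 3900, 3800, 3700])   # -> 0
```

The less stringent 10\% robustness measure mentioned above would correspond to calling the same helper with `threshold=-0.10`.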
\indent \textbf{Direct health shocks}. Our main measure of direct health shock is whether respondents had a family member, a friend, or an acquaintance\footnote{We consider this combined measure, as 2.4\% of respondents have a family member who has been hospitalized, 9.8\% a relative, 14.1\% a friend, and 14.9\% an acquaintance.} who was hospitalized with COVID-19.\footnote{To control for additional direct health shocks, we also asked respondents their type of health insurance (e.g. public or private), whether they have caring responsibilities towards an elderly person or someone with disabilities, who are at greater risk of complications from contracting the virus, and whether they knew a healthcare professional who had been working closely with COVID-19 patients.} In our longitudinal sample, we find that by October 2020, about 30\% of our respondents knew someone (among family, friends or acquaintances) who was hospitalized with COVID-19, while 69\% knew someone who tested positive. About 33\% tested positive for COVID-19 themselves. \\
\indent \textbf{Indirect economic shocks}. In addition to the direct shock measures, we also control for indirect shocks. It is possible that many Americans changed their preferences and beliefs through mere exposure to the crisis, such as by knowing someone who was affected economically by the crisis or by living in an area that suffered relatively higher economic distress compared to the rest of the country. In the months our data covers, the pandemic crisis affected some communities more than others \citep{dyer2020covid, wright2020poverty}. Measuring economic variations between two months of the same year, however, is a challenge. Many macroeconomic indicators, such as the unemployment rate or business closures, are rarely available at the county level and are often only released at an aggregate level or on a frequency that is less regular than the timing of our survey waves, making any meaningful comparison difficult. Therefore, to measure indirect economic shocks we use data collected and updated in real time by the Harvard Opportunity Insights team on the weekly percentage variations in consumer expenditures with respect to the first week of January 2020 \citep{chetty2020}. This variable is seasonally adjusted and is available at the county level, which we match with the respondents' residential information.\footnote{The Opportunity Insights team uses anonymized data from several private companies to construct public indices of consumer spending, employment, and other outcomes. See \citet{chetty2020} for further information on series construction.}\\
\indent \textbf{Indirect health shocks}. Collecting information at a zip code or county level on indirect health shocks, such as COVID-19 cases and deaths, is also not an easy task. In the early stages of the pandemic, many States followed different guidelines in recording COVID-19 deaths, and they all implemented different strategies for testing. While our survey questions were detailed enough to account for possible exposure to the virus (i.e. by asking whether respondents knew a family member, friend, or acquaintance who tested positive), we complement them with data on the number of COVID-19 cases in the respondents' county of residence. While this measure might be subject to different protocols depending on the State, these figures were likely to be the same ones reported by the local media. We consider COVID-19 cases\footnote{We exploited the data collected by the New York Times from state and local governments and health departments, available here: \url{https://github.com/nytimes/covid-19-data}.} at the county level reported by the middle of each week. We then consider the population size at the county level in 2019 and construct the following measure\footnote{We multiply this measure by 100 to ease the interpretation of the coefficients in our regressions.} of the increase in cases between week \textit{t} and \textit{t-1} in county \textit{c}: $\frac{cases_{ct} - cases_{ct-1}}{population_c}$. When, instead, we consider an outcome that is not in changes, we focus on the logarithm of the cumulative number of cases weighted by the county population: $\ln \left( \frac{cases_{ct}}{population_c} \times 100{,}000 \right)$.
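As a sketch, the two county-level case measures can be computed as follows. The helper names are illustrative; only the formulas above are taken from the text:

```python
import math

def weekly_case_increase(cases_t, cases_prev, population):
    """New cases between weeks t-1 and t per county resident,
    multiplied by 100 as in the paper's regression rescaling."""
    return (cases_t - cases_prev) / population * 100

def log_case_rate(cases_t, population):
    """Log of cumulative cases per 100,000 county residents."""
    return math.log(cases_t / population * 100_000)

# 50 new weekly cases in a county of 50,000 residents -> 0.1
weekly_case_increase(150, 100, 50_000)   # -> 0.1
```

In practice these would be evaluated on the NYT county-level case series merged with county population counts.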
\subsection{Politically leaning news} To understand how the media might have shaped Americans' views, we collected information on respondents' preferred news sources (including international news and social media) and the number of hours of news they consumed.\footnote{The question asked: ``Do you get your news from any of these sources (either on television or on the internet)?", and the multiple option answers were: ``ABC, CBS, NBC, CNN, Fox News, MSNBC, and 'other, please specify'" (e.g. some respondents added The NY Times, The Washington Post, BBC, NPR, and PBS). The objective of these questions was to assess whether individuals were exposed to news that might have been politically polarizing. While there is no exact methodology to measure and rate the partisan bias of news sources \citep{budak2016fair, groseclose2005}, and within each source, different programs might cover the same news in different tones \citep{bursztyn2020misinformation}, we were more interested in evaluating whether respondents were exposed to different points of view during the crisis.} Based on the news sources indicated by our respondents, we constructed a ``bias score" using the ``\textit{AllSides.com}" platform, one of the most commonly used sources of partisan media bias analysis.\footnote{\url{https://www.allsides.com/unbiased-balanced-news}} The website assigns a score from 1 (Extremely Left) to 5 (Extremely Right) to major sources of news by analyzing their written content on a regular basis.
\par Matching the scores by AllSides\footnote{We use the scores of the first week of April 2020, our baseline wave.} to the respondents' choices, we create an index by summing the scores of each source consulted by an individual and dividing by the number of sources consulted.
\begin{center}
\textit{Media slant index, for an individual consuming N sources of news} = $ \frac{\sum_{n =1}^{N} score_{n}}{N} $
\end{center}
This variable measures how politically homogeneous the news sources that respondents consume are, taking any value between 1 and 5: the closer a respondent is to 1, the more homogeneous (i.e. less politically diversified) and left-leaning their media consumption is, while the closer they are to 5, the more homogeneous and right-leaning it is. A score towards the middle indicates either that respondents consume unbiased news, or that they consume news biased in both directions, so that they are exposed to both sides. In other words, we created a measure of the echo-chamber effect that might naturally arise from a heavy consumption of politically biased media. Based on this specification, we see that 51\% of Republicans consume Republican leaning news and 46\% of Democrats consume Democratic leaning news. Among independents and non-voters, around 25\% (24\%) consume Republican leaning news (Democratic leaning news).
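A toy illustration of the index may help fix ideas. The bias scores below are hypothetical stand-ins; the paper uses the actual AllSides ratings from April 2020:

```python
# Hypothetical 1-5 bias scores (1 = far left, 5 = far right); stand-ins
# for the AllSides.com ratings used in the paper.
BIAS_SCORE = {"MSNBC": 1, "CNN": 2, "ABC": 3, "Fox News": 5}

def media_slant(sources, bias=BIAS_SCORE):
    """Average bias score of the news sources a respondent consumes."""
    scores = [bias[s] for s in sources]
    return sum(scores) / len(scores)

media_slant(["MSNBC", "CNN"])      # -> 1.5, a homogeneous left-leaning diet
media_slant(["CNN", "Fox News"])   # -> 3.5, exposure to both sides
```

Note how the averaging deliberately cannot distinguish a centrist diet from a balanced mix of opposing outlets; both land near the middle of the scale, which is exactly the property the text describes.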
\subsection{Estimation}
To estimate changes in the main outcomes (i.e. preferences for welfare and economic policies, preferences for temporary relief policies, and trust in institutions), we rely on the same estimation approach. For brevity, we present the approach referring to trust in institutions as an example.\footnote{For the outcomes on information processing and the interpretation of reality, we use mostly OLS regressions, which we explain in greater detail in the relevant sections of the paper.} Since most of our outcomes are measured on a Likert scale, we construct a variable equal to one if the respondent decreased (increased) their confidence in a given institution (their support for a government policy) between the first and the last wave.\footnote{In the Online Appendix we replicate the same analyses with the inverted binary variables, i.e. decreased support for policies and increased trust in institutions, and show that the results are unchanged.} This approach helps us to overcome some of the limitations of survey-based measures previously highlighted by \citet{bond2019sad} and \citet{zanella2019}. We flag respondents who could not have further decreased (increased) their trust (policy preference), since they had already given the minimum (maximum) value on the given scale in the baseline (i.e. wave 1). We then estimate the following OLS regression, considering only the respondents who participated in both waves:
$$Y_{ic}= \alpha + X_i \beta + S_i\theta_1 + Z_c \theta_2 + Yb_{i}\gamma + \epsilon_{ic} $$
\noindent with $Y_{ic}$ being a dummy variable equal to 1 if the respondent decreased (increased) their level of trust in a certain institution (or support for a policy) between the first and the seventh wave (and between the fourth and seventh wave for temporary relief policies). $X_i$ is a vector of time-invariant demographic characteristics; $S_i$ is a vector including the direct health and economic shocks that affected respondents between the survey waves in which we collected outcome measures; $Z_c$ is a vector of indirect health or wealth shocks at the county (or zip code) level, reported in variation between the first and the last wave (and between the fourth and last wave for the temporary policies). $Yb_{i}$ is a dummy variable equal to 1 if the respondent was at the bound in wave 1, i.e. they had already given the highest or lowest score and could not possibly further increase (decrease) it. In all our regressions we apply survey weights, making our sample representative of the U.S. population, and we adjust the standard errors considering the primary sampling units (PSUs) and strata that the population was divided into. Survey weights are recalculated in every wave to keep the sample representative of the population. In the Appendix, we present the analyses on survey attrition and show that attrition is not correlated with the outcomes. Lastly, we flag respondents who completed the surveys in a time equal to or shorter than the first percentile of sample duration, which we consider a proxy of limited attention during the survey. As we consider multiple outcomes, we replicate our analyses using Average Effect Sizes (AES), as in \citet{kling2004moving, clingingsmith2009estimating, giuliano2014, heller2017thinking}. Further, we perform a series of multiple hypothesis tests, which we show in the Appendix.
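The point-estimation step of the weighted regression above can be sketched with plain numpy. This is a minimal sketch of survey-weighted least squares only; the paper's standard errors additionally account for PSUs and strata, which dedicated survey software handles:

```python
import numpy as np

def weighted_ols(y, X, w):
    """Solve the weighted normal equations (X'WX) b = X'Wy, with an
    intercept prepended. Illustrates the point-estimation step only;
    clustered/stratified standard errors are not computed here."""
    X = np.column_stack([np.ones(len(y)), np.asarray(X)])
    XtW = X.T * np.asarray(w)          # row-scale X' by the survey weights
    return np.linalg.solve(XtW @ X, XtW @ y)

# Recovers intercept 2 and slope 3 from noiseless data
x = np.array([0.0, 1.0, 2.0, 3.0])
beta = weighted_ols(2 + 3 * x, x, np.ones(4))   # -> approximately [2.0, 3.0]
```

With unit weights this reduces to ordinary OLS; the survey weights simply upweight observations from under-represented strata so that the coefficients target the U.S. population rather than the realized sample.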
In the Appendix, we also repeat our main analyses adopting other estimation techniques: we perform a logistic regression on the same specification presented above, we run a fixed effect model using our data in a panel format, and we vary the way in which we measure shocks. We show that the main results remain unchanged.
\section{Results}
We begin our analysis by looking at the overall support for policies and institutional trust across survey waves. The first notable result is that, while our first wave was implemented shortly after the number of COVID-19 cases started soaring in the country, our baseline levels of policy support and institutional trust are comparable to previous GSS waves, as shown in the Appendix in Tables \ref{tab:GSS_policies}, \ref{tab:GSS_trust1} and \ref{tab:GSS_trust2}. When comparing our baseline wave (April) to our last wave (October), we see that the share of Democrats supporting public spending overall remained around an average of 87\%, while the share among Republicans decreased to 66\%, though still higher than in previous years. We see a similar trend for the three temporary relief policies, which had an average support of 80\% among Democrats in April and 76\% in October, compared to an average of 42\% among Republicans in April that decreased to 32\%, and from 66\% to 51\% among Independents and non-voters.\footnote{All the percentages reported in this section account for survey weights.} However, these differences mask important heterogeneity.
\par To provide a more granular measure of the gap, we calculate the partisan gap on welfare and temporary relief policies as the difference between the average scores of Democrats and Republicans who do not consume politically homogeneous media, and then repeat the same calculation for the distance in average scores between Democrats who consumed Democratic biased media and Republicans who consumed Republican biased media. We then replicate this approach for shocks. The summary plots are shown in Figures \ref{fig:policies_1} to \ref{fig:confid_3} in Appendix \ref{A3}. Among Democrats and Republicans who did not consume politically homogeneous media, the partisan distance decreased on seven out of the 10 policies we tracked, while among consumers of politically biased media from both parties the distance, which was already large in April, increased further for seven out of the 10 policies. Similarly, on the temporary relief policies, we find that the partisan gap increased comparatively more for politically biased media consumers. When we replicate the same calculations for the partisan distance that might arise as a result of direct economic or health shocks, we see that the results are more mixed. A direct economic shock reduced political distance on six out of the 10 policies we track, and sometimes significantly so\footnote{For example, Republicans who lived through a direct personal shock during the crisis - that is, they either lost at least 20\% of their income or knew someone hospitalized with COVID-19 - were marginally more likely to increase support for a guaranteed basic income (27\% compared to 17\% of Republicans who did not incur either shock; F(1,171)=3.0282, \textit{p}=0.0836).
Among Democrats, where support for the policy is already at a higher 67\% baseline level, those who incurred a shock were not significantly different from those who did not incur either direct shock (22\% vs. 25\%; F(1,169)=0.301, \textit{p}=0.583).}.
\par Moving to respondents' trust in institutions over time, we see that overall the partisan gap widens compared to previous years, particularly on the scientific community, where the Democrat-Republican gap doubles by the end of October 2020. This is mostly due to a significant drop among Republicans, where 61\% trusted the scientific community in 2018, 51\% in April 2020, and 36\% in October 2020 (compared to 59\%, 68\% and 70\% of Democrats), and specifically among Republicans consuming Republican leaning news (see Figure \ref{fig:confid}). This is in line with \cite{hmielowski2014attack}, who show that consuming more conservative media decreases trust in scientists, which, in turn, negatively affects support for climate change related policies. More recently and related to the COVID-19 pandemic, \cite{bursztyn2020misinformation}, \cite{ajzenman2020more}, and \cite{mariani2020words} find a similar result also in other countries. We also record a large increase in the partisan gap on trust in the institutions that played a major role in managing the crisis, namely hospitals, Congress and Senate, and the White House, as a result of direct economic or health shocks.
\par In the next sections we disentangle the effects of shocks, party, and media, controlling for a rich set of socio-economic and demographic characteristics. We first show the results of the regressions estimating changes in preferences for welfare policies, temporary relief policies, and trust in institutions. We then show a separate set of regressions on how respondents processed information depending on their direct experiences and media diet, which we complement with a randomized information experiment.
\subsection{How shocks and media changed support for policies}
Figure \ref{fig:welfare_preferences} reports the coefficients and confidence intervals of the regressions estimating the role of shocks, party, and media in changing respondents' preferences for policies between April and October 2020 (for the full specification see Tables \ref{tab:media_policies_A}, \ref{tab:media_policies_B}, and \ref{tab:media_policies_C} in the Appendix). Having lost at least 20\% of income is associated with a marginal increase in support for the introduction of a guaranteed basic income and assistance for the elderly, while it decreases the belief that the government should help industry grow. The income shock coefficient is even larger on the increase in support for greater intervention by the government in all the temporary relief policies, as shown in Figure \ref{fig:covid_preferences}. Similarly, knowing someone who was hospitalized with COVID-19 led to an increase in support for greater government intervention to assist the elderly, presumably because most hospitalizations concerned older Americans who were more vulnerable to the virus, as well as a marginal increase in support for helping low-income students and keeping prices under control. An indirect economic shock, namely living in a county that recovered its consumer expenditure faster, is associated with stronger support for reducing income inequality, helping citizens affected by natural disasters, and keeping prices under control. This measure of indirect shock is also correlated with stronger support for all temporary relief policies. Our interpretation of these correlations is that whether a shock affected a person directly or indirectly changes the type of policies they support.
A person who incurred a direct shock might now be more appreciative of welfare policies that are targeted at the individual level and can improve the livelihood of their own families, while respondents who have not been directly affected but lived in areas that witnessed a faster economic recovery will be more appreciative of economic policies that can boost internal demand and restart the economy. This interpretation is in line with the analysis by \citet{chetty2020}, who noted that economic policies during a pandemic have different effects on households based on their income level. Thus, it is possible that families who lost part of their income during the crisis would now favor more social insurance policies that help mitigate the economic hardship they lived through, while higher income households might be more likely to assume that more traditional macroeconomic policies aimed at stimulating internal demand would still be effective at reducing the unemployment rate.
Across all outcomes, we also note important differences between Democrats and Republicans. As reported in Tables \ref{tab:media_policies_A}, \ref{tab:media_policies_B}, and \ref{tab:media_policies_C}, the sign of the Republican party dummy variable is almost always negative while the opposite is true for the Democratic party variable. In the second column of each outcome, we see that this polarizing effect can be mostly explained by respondents who consumed politically biased media, in line with other studies \citep{gentzkow2011newspapers, dellavigna2007, allcott2020polarization, grossman2020political, simonov2020persuasive}.
\begin{figure}[H]
\caption{The effect of shocks and media on welfare policy preferences}
\label{fig:welfare_preferences}
\begin{center}
\includegraphics[height=19cm]{welfare_preferences_grid.jpeg}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent increased their belief that it should be a government's responsibility to provide the following policies. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.}
\end{minipage}
\end{figure}
\begin{figure}[H]
\caption{The effect of shocks and media on temporary relief policies}
\label{fig:covid_preferences}
\begin{center}
\includegraphics[height=6cm]{covid_preferences_grid.jpeg}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent increased their belief that it should be a government's responsibility to provide the following policies. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.}
\end{minipage}
\end{figure}
\subsection{How shocks and media changed trust in institutions}
The impact of crises on institutional trust has been explored by previous studies. \citet{algan2017} and \citet{dotti2016} find that Europeans decreased their trust in the European Parliament and national parliaments after the 2008 global financial crisis (GFC), and \citet{knell2015} finds similar negative trends when it comes to banks, particularly among people who were directly affected by the crash. Similarly, \citet{aksoy2020political} and \citet{aassve2021epidemics} show that exposure to previous epidemics negatively affects trust in the government and public health systems. In Figure \ref{fig:institution_preferences} we see that while demand for government spending increased among those who have been affected by this crisis, respondents were also more likely to reduce their confidence in the people running most institutions (see Tables \ref{tab:trust_a} and \ref{tab:trust_b} in Appendix A.4 for full specifications). Complementing the findings by \citet{giuliano2014}, we show that such effects can occur very rapidly and regardless of a person's age. In particular, we find that losing at least 20\% of the household income in any two months during the crisis significantly decreased trust in financial institutions and in the private sector - two closely related entities - as well as in the Congress and Senate, and in hospitals. As shown in the Appendix, some of these effects are even stronger among respondents whose income in October was at least 20\% lower than in April - that is, those who did not recover from the economic shock by the last wave of our survey. Looking at our measures of indirect shocks, we do not see large effects, apart from the fact that an increase in consumer expenditures between April and October is positively correlated with a decrease in confidence in the White House. We explain this with the fact that this measure is sensitive to its baseline: indeed, the larger the initial drop, the larger the possible subsequent increase in consumer expenditures.
Conversely, we see that respondents who lived in counties that recovered more quickly from the initial drop in consumer spending were less likely to have reduced their confidence in health insurance companies and hospitals, presumably as they associated the economic recovery with better crisis response by institutions.
\begin{figure}[H]
\caption{The effect of shocks and media on trust in institutions}
\label{fig:institution_preferences}
\begin{center}
\includegraphics[height=16cm]{institution_preferences_grid.jpeg}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent decreased their confidence in the people running the following institutions. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.}
\end{minipage}
\end{figure}
\par We note again substantial differences across parties. Compared to the Independents and non-voters, Republicans were less likely to have decreased trust in the U.S. Congress and Senate and in the White House, while the exact opposite is true for Democrats (by October, only about 3\% of Democrats had a lot of confidence in the White House, compared to 52\% of Republicans and 18\% of Independents and non-voters). Democrats were also less likely to have decreased their trust in the scientific community and in hospitals, regardless of whether they incurred any shock. In early April, 67\% of the Democrats, 50\% of the Republicans and 51\% of the other respondents declared having a ``great deal'' or ``complete'' confidence in the scientific community, whereas by the end of October, the percentage of respondents reporting the same trust had increased to 69\% for the Democrats, but it had dropped to 44\% for the Independents and to 36\% for the Republicans. These opposite directional effects seem to support the claim that the crisis might have further polarized Americans' trust in institutions due more to their political party and media consumption than to direct negative experiences.
\par Overall, these results suggest that direct negative experiences during a crisis play an important role in increasing support for welfare policies and greater government spending, as well as in reducing trust in institutions. We have also shown that these effects can occur very rapidly, sometimes over a period of one to six months, and rarely return to pre-crisis levels in an equally short time. Further, these effects are robust to several specifications and a rich set of controls, as shown in greater detail in Section \ref{robust}. We also find that political party affiliation per se does not fully explain the polarizing trends, and that Democrats and Republicans who lived through similar negative experiences might tend to react in similar ways when it comes to policy support and confidence in institutions. We find, instead, that consuming mostly politically biased media is associated with a stronger polarizing trend. This raises the question of whether citizens might be more likely to converge in their views on several issues in the absence of polarizing media outlets. In the next section, we study more closely the mechanisms through which a biased media consumption might have increased polarization, and we show that most of the ``distance'' between Democrats and Republicans can be explained by different understandings of the gravity of the crisis.
\subsection{Information processing and the interpretation of reality}
Previous studies have documented circumstances in which individuals might either update their beliefs or engage in motivated reasoning when exposed to new information \citep{gentzkow2006media, barabas2004deliberation, gaines2007, taber2006motivated}. Evidence on either behavior as a consequence of a direct experience during a crisis is more scant. In this section we aim to tease out the role of direct shocks \textit{and} exposure to new information in updating beliefs. To do this, we focus on respondents' grasp of the COVID-19 death rate, arguably the most salient indicator of the gravity of the crisis. While cases were soaring across the country, it was the rapidly increasing number of hospitalizations and deaths that prompted some states and cities to introduce stringent non-pharmaceutical measures to contain the spread of the virus.
\par We first wanted to document whether there existed a partisan gap in expectations about the gravity of the pandemic based on this metric. In our fifth wave of the survey (week commencing May 18), we showed all respondents the COVID-19 death rate according to the CDC up to the week before - i.e. 90,000 deaths by May 17. We then asked them to forecast how many more Americans they thought would die by the end of the year due to COVID-19.\footnote{The question asked: \textit{By May 17, the U.S. Centers for Disease Control and Prevention (CDC) stated that about 90,000 Americans have so far died from COVID-19 (coronavirus). In addition to this, how many more Americans do you think will die by the end of this year due to coronavirus?}} After they answered the question, all respondents saw their expected total death rate by the end of the year (i.e. the sum of the CDC figure and their additional estimate), and were prompted to look at this figure again and judge how public authorities had been managing the crisis.\footnote{The question asked: \textit{Looking again at your estimated number of total coronavirus deaths in the U.S. by the end of the year, and considering how public authorities in the country have been managing the pandemic crisis, do you think the estimate you expect can be defined as a: Great success / Success / Failure / Great Failure}. We specifically chose the wording ``public authorities'' to partly reduce political priming effects.} The objective of these questions was twofold. Firstly, we wanted to study whether respondents held different beliefs and expectations about the danger of the virus at the first peak of the crisis; by making the latest figure by the CDC common knowledge to all respondents, we also aimed to partially control for differences in knowledge.
Secondly, we were interested in understanding how partisanship affected their interpretation of reality, and whether respondents engaged in a rationalization process in line with their political views.\footnote{\citet{gaines2007} studies a similar setting, showing results of a survey where Americans were asked to state the need and support for the Iraqi war in 2003: while the majority of all respondents thought it was unlikely that the U.S. would ever find weapons of mass destruction, Democrats were more likely to conclude that they simply did not exist, while Republicans were more likely to state that they believed the Iraqi government moved or destroyed the weapons.}
\par We find that, among Republicans, 24\% believed the rate would be 10,000 deaths or fewer (the lowest available option) compared to just 9\% of Democrats. The trend is reversed for the high bound estimates, where 10\% of Republicans believed there were going to be an additional 100,000 deaths or more, compared to 31\% of Democrats. A Kruskal-Wallis equality-of-populations rank test confirms that these differences are statistically significant ($\chi^2$= 93.25, p$<$0.001). Among Independents and non-voters, we find a less polarized and more equally distributed estimate of additional deaths by the end of the year: about 19\% expected the number to be 10,000 or fewer, about 21\% between 20,000 and 30,000, another 19\% around 50,000, and about 18\% 100,000 or more.
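A Kruskal-Wallis test of this kind can be illustrated as follows. The group probabilities below are loosely modeled on the shares reported above, but the data are simulated, not the actual survey responses.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Simulated additional-death forecasts (in thousands) by party; the
# choice probabilities roughly mimic the reported partisan gap.
options = [10, 20, 30, 50, 100]
rep = rng.choice(options, size=300, p=[0.24, 0.30, 0.20, 0.16, 0.10])
dem = rng.choice(options, size=300, p=[0.09, 0.15, 0.20, 0.25, 0.31])
ind = rng.choice(options, size=300, p=[0.19, 0.21, 0.21, 0.19, 0.20])

# Rank-based test that the three samples come from the same distribution.
stat, p = kruskal(rep, dem, ind)
print(f"H = {stat:.2f}, p = {p:.4g}")
```

With group distributions this far apart, the test comfortably rejects the null of equal populations, as in the paper.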
\par We further investigate correlates of these differences by performing a regression, reported in Table \ref{tab:expected_death}. We find not only that the discrepancy in forecasts is significantly different across party lines, but also that it is further exacerbated by respondents' source of information. Democrats consuming Democratic leaning news estimated, on average, about 11,500 more deaths than those consuming unbiased sources, while the opposite is true for Republicans consuming Republican leaning media, who reported about 11,000 fewer deaths. We then look at whether the additional death rate that they estimated could be considered a success or a failure. Also in this case, we observe a strong partisan effect: 15\% of Democrats would consider their own estimate a success, while the percentage increases to 45\% for the Independents and the non-voters, and reaches 73\% for the Republicans (F(2, 393.90) = 76.3895, $p<$ 0.0001). Here too, consuming politically leaning news further exacerbates this difference: Democratic leaning news are associated with a decrease of 11.5 percentage points in the probability of considering the death rate a success, whereas Republican leaning ones with an increase of 18 percentage points. The effects of party and media are mostly robust to the inclusion of the expected number of deaths as a control, as shown in Column (3) of Table \ref{tab:expected_death}. This suggests that political polarization might keep playing a role also after controlling for expectations - that is, Democrats and Republicans seem to judge the gravity of the crisis differently even when holding similar beliefs. At the same time, however, we also see that the closer Democrats and Republicans are in their beliefs, the lower is their distance in assessing the gravity of the crisis. In Figure \ref{fig:forecast_death}, we plot the correlation between the expected additional deaths and whether respondents considered this figure a success.
Following \citet{chetty2014measuring}, we report a binscatter, controlling for a set of variables, and using a restricted model in which each covariate has the same coefficient in each by-value sample.\footnote{A binscatter is a binned scatterplot, in which the x-axis variable (estimated deaths) is grouped into equal-sized bins and the means of the x- and y-axis variables within each bin are computed. This allows us to visualize the expected share of respondents considering the estimated death rate a success, conditional on the value that they had assigned.}\footnote{We also repeated the same exercise by plotting the residuals of a regression with a dummy variable indicating whether the additional expected deaths were a success, as the dependent variable, and a set of controls as explanatory variables. This way, we control for the demographic characteristics that might be correlated with both our outcome (success) and our explanatory variable (forecast deaths). Results are robust also to this specification.}
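The binning step behind a binscatter can be sketched as follows (simulated data and hypothetical variable names; the covariate adjustment used in the paper is omitted here):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1200

# Hypothetical data: forecast deaths (x) and a success judgment dummy (y),
# with the probability of "success" declining in the forecast.
deaths = rng.uniform(0, 200_000, n)
success = (rng.random(n) < np.clip(0.7 - deaths / 400_000, 0, 1)).astype(int)

df = pd.DataFrame({"deaths": deaths, "success": success})
# Group x into 20 equal-sized bins, then average x and y within each bin;
# plotting these bin means yields the binscatter.
df["bin"] = pd.qcut(df["deaths"], q=20, labels=False)
binned = df.groupby("bin")[["deaths", "success"]].mean()
print(binned.head())
```

Each point of the resulting plot is one row of `binned`, so the downward slope in the figure corresponds to the success share falling across bins of increasing forecast deaths.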
\begin{figure}[H]
\caption{Share of respondents believing that the annual COVID-19 death rate in 2020 could be considered a success, by political party and expected death rate.}
\label{fig:forecast_death}
\begin{center}
\includegraphics[height=8cm]{US_death_rate_forcast_success_binscat_controls.png}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize The figure shows a binned scatterplot in which the x-axis variable (estimated deaths) is grouped into equal-sized bins and the means of the x- and y-axis variables within each bin are computed. The plot controls for a set of variables.}
\end{minipage}
\end{figure}
These results suggest that while Americans assessed the gravity of the situation through political lenses, there might have been a slight convergence in views as the distance in (mis)perceptions about the death rate got smaller. An important caveat here is that media consumption might be endogenous. As such, citizens might have preferences for media sources that are less diversified and more aligned with their views, or might consider a media source more reliable if it confirms their prior beliefs \citep{gentzkow2006media}. This, in turn, influences how they perceive the gravity of the crisis and their support for response policies. To disentangle this effect we implement an experiment where we randomize exposure to the same information source. \\
\textbf{Survey experiment}. In the sixth wave of the survey (week commencing June 22), we asked every respondent the same two questions: \textit{How many people have died in your state because of coronavirus from the first death until today?} and \textit{How many people have died in the U.S. because of coronavirus from the first death until today?}\footnote{To avoid any survey fatigue effects, we asked these questions within the first block of ten questions of the survey.} Immediately prior to seeing these questions, half of the respondents - the treatment group\footnote{See Table \ref{tab:balancetable_deathexp} in the Appendix for the balance tables.} - were shown a blank page with only the following text: \textit{Please answer the following questions carefully. If you wish to do so, you can look up the answer on the official CDC website at the following link: https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html}. The link was spelled out, as opposed to a hyperlink, so respondents could see the full URL of the webpage and could choose not to click on it and move to the next question. If they clicked on the link, a separate internet browser window would open, redirecting them to the webpage of the CDC that clearly displayed the total number of cases and deaths in the country, and a map that showed the same statistics in each state by simply hovering the mouse over the interested area (see Figure \ref{fig:wte} in the Appendix).
\par We find that the treatment significantly increased the time respondents took in answering the question, particularly among Republicans, suggesting that respondents did not avoid this new information even though they were not incentivized to consult the link.\footnote{Due to privacy regulation, we could not check whether respondents clicked on the link, but we are able to track the time they spent answering the questions, which we use as a proxy for engagement with the website. We see that treated respondents spent an average of 40 seconds to answer the first question on the total number of deaths in their state, compared to a lower 26.5 seconds in the control group (Adj. Wald test with survey weights: F(1,189)=14.49; \textit{p}$<$0.001), but about the same time to answer the second question on the number of deaths in the U.S. (25.5 seconds in the control group and 25.6 in the treatment group; Adj. Wald test with survey weights: F(1,189)=0.00; \textit{p}=0.973). These estimates are confirmed in a linear regression. We also find differences across political lines, with the treatment being effective at increasing the time Republicans (50.8 seconds in the control group and 89.1 in the treated one, Adj. Wald test F(1,189)=5.59; \textit{p}=0.015) and Independents (42.2 seconds in the control group and 52.4 in the treated one, Adj. Wald test F(1,189)=4.72; \textit{p}=0.033) spent answering the questions, but we do not notice significant effects between Democrats in the control and treatment groups. In other words, Republicans did not discard or avoid the new information, even if they might have anticipated the objective of the question asked \citep{saccardo2020}.}
The treatment also significantly increased the share of respondents who reported the state death rate according to the CDC, more so among Republicans (from 41\% to 60.4\% in the treated group, F(1,189)=6.319, \textit{p}=0.013) than among Democrats (from 51.5\% to 61.2\% in the treated group, F(1,189)=2.903, \textit{p}=0.09).\footnote{We analyzed whether the treatment had a stronger impact on respondents who expected a low number (i.e. below the median) of additional deaths in wave 5. Results show a positive but not significant effect.} These effects are confirmed in a series of regressions showing that the treatment significantly increased the likelihood of reporting the correct death rate both at the state and the country level.
\par After answering the questions on the actual number of deaths in their state and in the U.S., all respondents were asked: \textit{Looking again at your estimated number of total coronavirus deaths in your state and in the US so far, and considering how public authorities in the country have been managing the pandemic crisis, do you think the current death rate can be defined as a: Great success; success; failure; or great failure}. Among Democrats, already 88\% considered the outcome a failure or a great failure, but having answered the death rate questions according to CDC figures further increased the likelihood of stating so (from 85\% among those who did not answer them correctly to 92\%, F(1,190)=3.187, \textit{p}=0.076). Among Republicans, a lower 40\% overall considered the death rate a failure or great failure of how public authorities managed the crisis, and here too answering the death rate questions as per the CDC official figures reduced this likelihood, although not significantly (from 40\% among those who did not answer them correctly to 29\%, F(1,189)=20.026, \textit{p}=0.156). Importantly, we do not observe a backfiring effect of information exposure among Republicans, suggesting that respondents might not have engaged in motivated reasoning \citep{nyhan2021}.
\begin{figure}[H]
\caption{Judgment as a function of accurate information }
\label{f:deaths__success_f}
\begin{center}
\includegraphics[height=11cm]{death_success_treat.png}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize The figure on top shows the share of respondents who correctly estimated the number of COVID-19 deaths in both their state and the U.S., by party and treatment group. The figure at the bottom shows the share of respondents who believed the COVID-19 death rate could be considered a success, by party and by whether they were in the treatment or the control group. Error bars are 95\% confidence intervals.}
\end{minipage}
\end{figure}
\par As estimating the number of deaths according to the CDC might be endogenous to a person's political beliefs, we exploit the exogenous variation in the likelihood of correctly estimating the number of deaths caused by our treatment, which was randomly assigned. Hence, we study whether the stated number of deaths affected respondents' judgment, controlling for a set of demographic characteristics, media consumption, and shocks:
$$ Pr(Success_{ic}) = \alpha + \beta\, Shock_{i} + \gamma\, Shock_{c} + \theta_1 Rep_i + \theta_2 Dem_i + \phi\, Treat_i + \delta X_{ic} + \epsilon_{ic} $$
The dependent variable in our regression is the probability of considering the current deaths a ``success''; $Shock_i$ and $Shock_c$ indicate whether the respondent incurred a direct or an indirect shock,\footnote{The indirect economic shock in this regression is the variation in consumer spending between the time of the survey wave and the baseline of January 2020.} and $X_{ic}$ captures a set of demographic variables. In Table \ref{tab:death_exp_main}, we show the results of the OLS regressions described above. In the first two columns, we show that the treatment succeeded in increasing the chances of stating the death rate as per CDC figures, both at the state and the national level, while in the remaining columns we report the effect of the treatment on the likelihood of declaring the number of deaths a success. In Table \ref{tab:death_tr_effect} in the Appendix, we show that the treatment was effective in increasing the time respondents spent answering the questions. In columns (3)-(6), we further break down the outcomes of the experiment, separating those who under-, over-, or correctly estimated the number of deaths at the state or the US level. We see that Republicans were significantly more likely to underestimate the number of state and US deaths, while Democrats were less so. In the control group, which was not shown the link to the CDC website, 35\% of the Republicans under-reported both the number of US and state deaths, while the Democrats doing so were 18\% and 26\%, respectively. Similarly, 35\% of the Democrats overestimated the number of deaths in the US compared to 27\% of the Republicans. These results suggest that exposure to the same information can correct for the partisan gap in estimating the gravity of a crisis, in line with recent studies \citep{haaland2019}.
We also find a directional, although not significant, change in the way respondents judged the gravity of the crisis and the success of the response by public authorities as a result of this intervention.
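The estimating equation above can be sketched in code as follows. The data are simulated, the variable names are illustrative, and the controls $X_{ic}$ are reduced to a single covariate, so this is only a sketch of the specification, not the authors' estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1100

df = pd.DataFrame({
    "shock_i": rng.binomial(1, 0.3, n),   # direct shock (e.g. income loss)
    "shock_c": rng.normal(0, 0.1, n),     # county-level spending change
    "rep": rng.binomial(1, 0.35, n),
    "treat": rng.binomial(1, 0.5, n),     # randomized CDC-link treatment
    "age": rng.integers(18, 80, n),
})
df["dem"] = rng.binomial(1, 0.5, n) * (1 - df["rep"])
# Latent probability of judging the death rate a "success"; the party
# effects roughly mirror the signs reported in the table.
p = np.clip(0.35 + 0.23 * df["rep"] - 0.13 * df["dem"] - 0.04 * df["treat"], 0, 1)
df["success"] = rng.binomial(1, p)

ols = smf.ols("success ~ shock_i + shock_c + rep + dem + treat + age",
              data=df).fit(cov_type="HC1")
print(ols.params[["rep", "dem", "treat"]])
```

As in the paper, the Republican dummy loads positively and the Democrat dummy negatively on the probability of calling the death rate a success.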
\begin{table}[H]
\centering
\caption{The effect of providing factual information on misperceptions and on the assessment of the gravity of the crisis.}
\label{tab:death_exp_main}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccccc} \hline \hline
& (1) & (2) & (3) & (4) & (5) & (6) \\
& \begin{tabular}[c]{@{}c@{}}Correctly \\ estimated\\ US \& State\\ deaths\end{tabular} & \begin{tabular}[c]{@{}c@{}}Correctly\\ estimated\\ US \& State\\ deaths\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}Correctly \\ stated\\ the US deaths\\ VS the world\end{tabular} \\ \hline
& & & & & & \\
CDC Tx & 0.118*** & 0.149*** & -0.0415 & -0.0198 & -0.0404 & 0.0125 \\
& (0.0305) & (0.0341) & (0.0313) & (0.0370) & (0.0368) & (0.0423) \\
CDC Tx*Rep news & & -0.0370 & & -0.0996 & -0.0613 & 0.236** \\
& & (0.0643) & & (0.0719) & (0.0729) & (0.0922) \\
CDC Tx*Dem news & & -0.0905 & & -0.000831 & -0.00802 & -0.0417 \\
& & (0.0624) & & (0.0617) & (0.0571) & (0.0671) \\
Democrat & 0.0615 & 0.0596 & -0.130*** & -0.130*** & -0.0533* & -0.111*** \\
& (0.0402) & (0.0404) & (0.0307) & (0.0306) & (0.0291) & (0.0416) \\
Republican & -0.0330 & -0.0331 & 0.230*** & 0.230*** & 0.143*** & -0.0336 \\
& (0.0369) & (0.0366) & (0.0431) & (0.0432) & (0.0425) & (0.0386) \\
Lost 20\% income & -0.0307 & -0.0313 & -0.0109 & -0.0108 & 0.00956 & -0.0441 \\
& (0.0395) & (0.0398) & (0.0383) & (0.0380) & (0.0351) & (0.0429) \\
Knows hospitalized & -0.0730* & -0.0746* & -0.0135 & -0.0145 & -0.0164 & -0.0166 \\
& (0.0422) & (0.0418) & (0.0329) & (0.0333) & (0.0324) & (0.0389) \\
ln COVID-19 cases & -0.0178 & -0.0187 & -0.00475 & -0.00522 & -0.0150 & 0.0191 \\
& (0.0178) & (0.0179) & (0.0191) & (0.0195) & (0.0199) & (0.0251) \\
Consumer exp - June & 0.158 & 0.153 & -0.204* & -0.228** & -0.179 & 0.0111 \\
& (0.124) & (0.123) & (0.115) & (0.114) & (0.111) & (0.119) \\
Dem leaning news & 0.0188 & 0.0635 & -0.0419 & -0.0420 & -0.0175 & 0.0280 \\
& (0.0369) & (0.0478) & (0.0380) & (0.0526) & (0.0503) & (0.0577) \\
Rep leaning news & -0.0267 & -0.00890 & 0.267*** & 0.311*** & 0.214*** & -0.284*** \\
& (0.0514) & (0.0565) & (0.0420) & (0.0508) & (0.0562) & (0.0726) \\
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Expected additional death\\ rate is a success (w5)\end{tabular}} & & & & & 0.390*** & \\
& & & & & (0.0390) & \\
Constant & 0.300** & 0.297** & 0.396*** & 0.395*** & 0.174 & 0.954*** \\
& (0.140) & (0.140) & (0.139) & (0.140) & (0.134) & (0.249) \\
& & & & & & \\
Controls & Yes & Yes & Yes & Yes & Yes & Yes \\
Observations & 1,141 & 1,141 & 1,137 & 1,137 & 1,137 & 948 \\
R-squared & 0.158 & 0.160 & 0.285 & 0.287 & 0.390 & 0.102 \\
Mean dep. var. & 0.330 & 0.330 & 0.335 & 0.335 & 0.335 & 0.552 \\ \hline
\multicolumn{7}{l}{%
\begin{minipage}{1.25\columnwidth}%
\small \textit{Notes}: Standard errors in parentheses. *** p\textless{}0.01, ** p\textless{}0.05, * p\textless{}0.1. The dep. var. in Cols. (1) and (2) is a dummy=1 if the respondent provided the correct death rate, while in Cols. (3)-(5) it is a dummy=1 if the respondent believed the COVID-19 death rate at the national and state level was a success. Col. (6) reports a regression predicting whether the respondent correctly stated that the US death rate was higher than in most countries in the world, in wave 7. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, and the population density at the zip code. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile. Finally, we consider social media usage and the amount of international news consumed.
\end{minipage}%
}
\end{tabular}%
}
\end{table}
Lastly, we consider whether respondents have a preference for consistency in their (motivated) responses \citep{falk2018information}. To do this, we replicate the same regressions as above, this time adding a dummy for whether the respondent stated in the previous wave (i.e. wave 5) that the expected additional death rate could be considered a success. We find that this dummy significantly increases the probability that respondents considered the actual death rate a success.\footnote{We also replicate the same analysis by looking at whether the treatment had heterogeneous effects depending on the size of the gap between the forecast in wave 5 and the actual measure in wave 6. We find that the treatment had a similar effect regardless of how ``far'' a person's forecast was.} In other words, those who considered in May that the forecast death rate was a success were more likely to consider the death rate in the subsequent wave a success. However, the inclusion of this control does not change the statistical significance of the treatment effect in the first stage regression, nor the significance of party identity and biased media in the second stage. As an additional check, we instrument ``correctly estimated the state and country number of deaths'' with the treatment assignment, which provides an exogenous variation. The results of the first and the second stage of a Two Stage Least Squares regression (2SLS) are presented in Table \ref{tab:covid_iv} in the Appendix.
\par Since the treatment effects are large and significant, we are interested in studying whether such a simple and non-incentivized suggestion to improve one's beliefs had persistent effects. To do this, in wave 7 (end of October), more than 3 months after the survey experiment wave, we asked respondents how the U.S. COVID-19 death rate compared to the rest of the world. The possible answers ranged from ``\textit{The highest in the world}'' to ``\textit{The lowest in the world}'', on a four-point scale. In column 6 of table \ref{tab:death_tr_effect} we see that the treatment had a persistent, significant, and large effect in changing respondents' beliefs about the gravity of the crisis in the long run. Further, we see from the interaction terms that the treatment helped to counterbalance the role played by the consumption of biased media (see Appendix for the full regression on this question alone). To show this graphically, in figure \ref{f:death_rate_pol} we report the percentage of individuals selecting each option by political party\footnote{When this survey wave was administered, the U.S. cumulative death rate was the 10th highest in the world, with 685 deaths per million inhabitants. We consider the cumulative death rate per million inhabitants reported by the website ``Our World In Data'' on October 26, 2020 (url: \url{https://ourworldindata.org/covid-deaths}).}. While about half of the respondents correctly selected that the death rate was higher than in most countries, answers varied significantly across parties, with Democrats overestimating the U.S. ranking (40\% believe it to be the highest in the world, compared to 20\% of Independents and 12\% of Republicans) and Republicans underestimating it (40\% reporting it lower than in most countries, compared to 20\% of Independents and 7\% of Democrats). 
Here too, a person's news consumption mattered: consuming Republican-leaning news is associated with a higher probability of incorrectly ranking the U.S. death rate relative to the rest of the world. \\
\begin{figure}[H]
\caption{The long-term effect of information treatment on beliefs.} \label{f:death_rate_pol}
\begin{center}
\includegraphics[height=11cm]{death_experiment_us_pol_party3_tc.png}
\end{center}
\begin{minipage}{\linewidth}\setstretch{0.75}
{\scriptsize{\textit{Notes}:}
\scriptsize The figure shows the share of respondents who correctly estimated the U.S. death rate relative to the rest of the world, by political party and by whether they were in the treatment group of the similar question asked more than 3 months earlier. Error bars show 95\% confidence intervals.}
\end{minipage}
\end{figure}
In sum, we see that Democrats and Republicans who consumed media that leaned politically towards their own ideology were more likely to hold erroneous beliefs about the gravity of a crisis, potentially mitigating the convergent effect that shocks played. However, supplying individuals with the same information had a significant and long-term effect in closing this partisan gap\footnote{A debated issue with survey-based measures is whether some answers are biased by a cheerleading effect - that is, survey respondents' inclination to respond to questions so as to signal support for their political party rather than what they actually believe. Recent studies, however, show that cheerleading effects might be less of a concern, and that respondents do engage in motivated reasoning even in financially incentivized contexts \citep{peterson2021partisan}.}. We complement our experimental analysis by estimating the treatment effect across sub-groups of respondents. We do this to understand whether some individuals are more responsive to information interventions and, if so, what the characteristics of these individuals are. Further, this methodology also allows us to see the effects of shocks and media on respondents' responsiveness to information treatments. \\
\par \textbf{Heterogeneous treatment effects}. We employ a causal forest methodology, as in \citet{athey2019estimating}, to predict conditional average treatment effects (CATE) for each respondent. This approach allows us to construct non-biased partitions of the data ex ante, from which valid CATE estimates may be obtained. To improve precision, we first train a pilot random forest on all baseline characteristics included in the OLS regression to identify relative variable importance to the model. We then train a second forest using only the subset of covariates which score above mean importance, to eliminate possible confounding effects \citep{basu2018iterative}. We run tests to detect any heterogeneity in our primary outcomes of interest: (1) correctly identifying state and national COVID-19 death rates and (2) evaluating these rates as a success. Additionally, we test for heterogeneity in sustained informational effects, which are measured through a question in the next wave evaluating whether respondents correctly identify the U.S. death rate relative to other countries. Should our causal forest identify treatment heterogeneity in a given outcome, we explore which characteristics may be correlated with a higher estimated treatment response. We begin by testing whether the causal forest detects treatment heterogeneity through an ensemble approach. While there is not a clear consensus on causal forest validation, one approach suggested by \citet{athey2019estimating} is the use of a ``best linear predictor'' method, which fits the CATE as a linear function of out-of-bag causal forest estimates. 
This approach provides strong evidence of a useful association between causal forest estimates and heterogeneity in the treatment effect for outcome (1) (correctly estimating state and national COVID-19 death rates) but not for the others - this is consistent with the non-significance of our OLS estimate for outcome (2), but suggests we are not powered to detect variation in sustained informational effects. We also employ a series of tests suggested by \citet{davis2020rethinking} to verify that out-of-bag predictions and actual treatment effects are related, and find that the results for outcome (1) are consistent with our calibration test (see Appendix Table \ref{tab:bestlinearfit}). Together, these tests suggest there is a meaningful source of heterogeneity in treatment effectiveness that is worthy of further examination.
We use a quartile breakout by predicted treatment effects for our primary outcome of interest (i.e., correct estimation of state and national U.S. death rates). Appendix Table \ref{tab:causalforestquartiles} provides summary statistics by quartile for our baseline characteristics, as well as the mean CATE prediction. Results show that direct shocks are not correlated with higher treatment efficacy. Further, directly affected individuals do not have a higher average correct reporting rate, suggesting this is not due to already answering the question correctly regardless of treatment. Another key finding is that higher levels of educational attainment positively mediate informational treatment efficacy. Specifically, a subset of respondents with a bachelor's degree or higher displays significantly higher treatment effects, representing over 60\% of the highest quartile. In contrast, respondents with a high school education show steadily diminishing representation in each subsequent quartile. This suggests that some highly educated respondents may be particularly responsive to informational treatments. Lastly, Democratic respondents who consumed more Democratic-leaning news were also more responsive to the treatment compared to other political sub-groups, suggesting that responsiveness to a certain source of information, in this case the C.D.C., might be correlated with political ideology.
\section{Robustness checks} \label{robust}
\textbf{Alternative measures of shocks}. In the results presented in the main section of the paper, we considered as a direct economic shock whether respondents lost at least 20\% of their household income between any two months between the baseline survey wave and the last survey wave. We replicate the same model specifications using two alternative definitions of a direct economic shock: (a) whether respondents lost at least 20\% of their household income between February and October - that is, they incurred a more permanent loss in income, thus excluding those who eventually recovered from their loss by our last wave; and (b) the percentage decrease in income between the baseline and the outcome month, to account for possible differences in the magnitude of the shock. The two measures are respectively:
$$shock_2 = \begin{cases} 1, & \mbox{if } \frac{income_{final}-income_{baseline}}{income_{baseline}} \leq -0.20 \\ 0, & \mbox{otherwise } \end{cases} $$
and
$$shock_3 = \begin{cases} \frac{income_{final}-income_{baseline}}{income_{baseline}}, & \mbox{if } \frac{income_{final}-income_{baseline}}{income_{baseline}} < 0 \\ 0, & \mbox{otherwise} \end{cases} $$ \\
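In code, the two definitions amount to the following (a sketch; the variable names are ours):

```python
def shock2(income_baseline, income_final, threshold=-0.20):
    """Dummy = 1 if household income fell by at least 20% between the
    baseline and the final wave (a more permanent loss)."""
    change = (income_final - income_baseline) / income_baseline
    return 1 if change <= threshold else 0

def shock3(income_baseline, income_final):
    """Continuous measure: the percentage change in income if it fell,
    and 0 otherwise."""
    change = (income_final - income_baseline) / income_baseline
    return change if change < 0 else 0.0
```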
Among respondents in our sample who participated in the first and the last survey waves (i.e. \textit{n}=1,076), about 27\% lost at least 20\% of their income in a permanent way between February and October, compared to 38\% who lost it between any two months but potentially recovered. When looking at the continuous measure of shock, we find that between February and October about 4\% of respondents reported having lost all of their household income, while about 17\% lost up to half of their household income. In tables \ref{tab:policy_inc2}, \ref{tab:policy_inc3} and \ref{tab:covid_policy_inc23} in the Appendix, we report the results of the regressions on policy preferences and trust in institutions using these two alternative measures of shocks. The magnitudes and the coefficient signs are consistent with our main specification: direct income shocks increased support for most government interventions, with the exception of providing mental health care and universal healthcare, whose associated coefficients are not significant, and of helping industry grow. Support for the latter significantly decreased among respondents who incurred a shock, regardless of how it is measured, in line with our main results. Regarding temporary relief policies, we see even stronger support, both in magnitude and in significance, among respondents who incurred an income shock and had not recovered by October, suggesting that support for welfare policies increased with the severity of a person's income loss. We also report results related to institutional trust in tables \ref{tab:trust_inc2} and \ref{tab:trust_inc3}. Here too, the coefficients are consistent with our main specification: incurring an economic shock is associated with an increase in the likelihood of having lost confidence in institutions, particularly in the U.S. Congress and Senate and in the private sector.
\textbf{Multiple hypothesis testing}. In our analyses we consider multiple sets of outcomes. For a given $\alpha$ level of tolerance for type I error, as the number of tested outcomes increases, the probability of falsely rejecting at least one null hypothesis also increases. To take this into account, we adjust the p-values of the shocks controlling for the False Discovery Rate (FDR). Given a set of $H$ hypotheses, if $F$ is the number of false rejections and $C$ the number of correct rejections, then $T = F + C$ is the total number of rejections. The FDR is the expected proportion of all rejections that are type I errors, or $E[\frac{F}{T}]$. In the Online Appendix, we report the p-values associated with economic and health direct shocks, corrected for the FDR, following the algorithm proposed by \cite{anderson2008multiple}. From these tests, we see that most of our results hold and remain at least marginally significant.
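The step-up logic behind FDR adjustment can be sketched as follows. This is the plain Benjamini-Hochberg procedure, a simpler variant of the sharpened two-stage algorithm of \cite{anderson2008multiple}:

```python
def fdr_bh(pvals):
    """Benjamini-Hochberg step-up FDR adjustment: returns q-values
    in the original order of the input p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of q-values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj

# q-values come out (up to rounding) as 0.02, 0.04, 0.04, 0.8
q_example = fdr_bh([0.005, 0.02, 0.03, 0.8])
```

A hypothesis is retained at FDR level $\alpha$ exactly when its q-value is below $\alpha$, which is how the corrected p-values in the Online Appendix are read.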
\textbf{Alternative measures of outcomes and regression models}. In line with \citet{giuliano2014}, we focused on analysing an increase in support for policies and government interventions, and a decrease in institutional trust. However, we also considered the opposite direction - that is, a decrease in support for welfare and an increase in institutional trust. We report these results in an Online Appendix and show that they are in line with those presented above: Democrats are significantly less likely to have decreased their support for most government interventions, while Republicans are more likely to have done so, and a biased media diet further amplified this trend. Conversely, Democrats are less likely to have increased their trust in President Trump and in the U.S. Congress and Senate, but have significantly increased their confidence in people running the scientific community, whereas the opposite is true for respondents supporting the Republican party.
\par Lastly, since most of our outcomes are binary variables, for completeness we also show that our results hold when using a logistic regression instead of OLS, as reported in the Online Appendix.
\textbf{Average effect sizes.} Another robustness check we perform is testing whether our results hold when considered as a bundle, which allows us to make more general claims. To do so, we replicate the analyses using Average Effect Sizes (AES), as in \cite{kling2004moving, clingingsmith2009estimating, giuliano2014, heller2017thinking}. To perform such an analysis, one needs to make several assumptions about the nature of the outcomes being studied, since an AES estimation requires stacking multiple outcomes. As we have seen in the main specifications of our results, support for policies and trust in institutions change in different directions according to a person's party, and depending on the nature of the shock they incurred. As such, this requires grouping dependent variables in sub-groups using a more subjective judgment. In the Appendix, we propose one plausible stacking approach and show that the results remain qualitatively similar to those presented in the previous sections. We group the variables according to the type of institutions or policies considered. When analyzing policies, we separate questions related to whether it is a government responsibility to provide a set of services from those concerning coronavirus relief. Within the first group, we further split the variables in two: one set considering traditional macroeconomic policies (keep prices under control and help industry grow), and one focused on welfare issues (reduce inequality, provide for the unemployed, provide help to university students from a disadvantaged background, provide a basic income, provide universal healthcare, provide mental health care services to people with mental illnesses, provide for the elderly, and help those affected by natural disasters). Regarding institutional trust, we separate government-related institutions (the U.S. Congress and Senate and the White House), science-related ones (the scientific community, hospitals and health care professionals, and health insurance companies), and those related to the economy (banks and financial institutions, and the private sector). Again, we see that our results remain qualitatively identical to the main specifications presented in the body of the paper.
\textbf{Entropy weights}. The COVID-19 pandemic affected communities and citizens differently, depending in part on their income levels. As such, some shocks, such as incurring an income loss, are correlated with several demographic characteristics, including income, gender and race. Even though we consider variations at the individual level, which reduces concerns related to endogeneity, we cannot entirely exclude that those who were affected by a shock were systematically different from those who were not, and that their preferences and opinions would have varied in a different way. In order to minimize this potential source of endogeneity, we repeat our analyses with entropy balancing weights. The entropy balancing technique re-weights the observations in order to reduce the differences with respect to a set of balance conditions across treated and control units (in our case, those who incurred a shock vs. those who did not)\footnote{See \cite{hainmueller2013ebalance} for the Stata package and \cite{hainmueller2012entropy} for the theory behind this approach. We opt for applying entropy balancing weights, instead of performing any matching technique, in order to avoid excluding any observation.}. These weights also incorporate the population weights, so the re-weighted sample still reflects the whole population. In the Online Appendix, we report the regression results using entropy balancing weights. Coefficients do not vary in a substantial way in magnitude or sign, suggesting that the level of endogeneity is not of particular concern in the interpretation of our results.
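The core idea can be sketched in a few lines. This is our own toy implementation of the dual problem, matching first moments only (the package of \cite{hainmueller2013ebalance} is far more general): control-unit weights are kept as close to uniform as possible subject to the weighted covariate means matching the treated group's means.

```python
import numpy as np

def entropy_balance(X_control, target_means, lr=0.1, steps=2000):
    """Entropy balancing, dual form: gradient descent on the convex
    log-sum-exp objective. The returned weights sum to one and make the
    weighted control covariate means hit `target_means`."""
    d = X_control - target_means          # covariates centred at the target
    lam = np.zeros(d.shape[1])
    w = np.full(len(d), 1.0 / len(d))
    for _ in range(steps):
        a = d @ lam
        w = np.exp(a - a.max())           # shift for numerical stability
        w /= w.sum()
        lam -= lr * (w @ d)               # gradient = weighted mean - target
    return w

# Simulated illustration: 500 control units, two covariates, and a
# hypothetical treated-group mean vector to be matched.
rng = np.random.default_rng(0)
Xc = rng.normal(size=(500, 2))
target = np.array([0.3, -0.2])
w = entropy_balance(Xc, target)
```

Because the dual objective is convex, plain gradient descent suffices here; the resulting weights can then multiply the usual survey weights, as in the robustness check above.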
\textbf{Voting intentions}. The COVID-19 crisis occurred at a time of great political polarization in the U.S., due in part to the Presidential elections. The months just before the elections of November 2020 saw greater division among the public, with some voters identifying not with one of the two main parties but rather with the Presidential nominees. To account for different political identity effects, we replicate our analysis considering voting intentions, which we collected from our respondents in the middle of May. Results are presented in the Online Appendix. Again, the sign and the magnitude of the coefficients associated with the political parties are consistent across specifications. The only marginal differences we note are that Trump voters are significantly less likely to have increased their belief that it is a government responsibility to provide for the unemployed, to provide a basic income or to reduce inequality, while Republicans, in general, were not, and that Biden voters, unlike Democrats, have not significantly increased their support for coronavirus-related policies or for other government interventions. Such differences are minor, however, and the coefficient signs are consistent with our main specifications.
\textbf{Fixed effects}. We also perform similar analyses to those presented above, but considering a model with longitudinal data and controlling for fixed effects at the individual level.
$$ y_{ict}= \alpha_i + wave_t + \beta_1\, shock_{it} + \beta_2\, shock_{ct} + \epsilon_{ict}$$
with $y_{ict}$ being one outcome of interest for individual $i$, in county $c$, at time $t$; $shock_{it}$ and $shock_{ct}$ being a shock for individual $i$ or county $c$ at time $t$; $\alpha_i$ the individual fixed effects, and $wave_t$ the survey wave effects. Variables referring to direct shocks are dummy variables flagging whether the respondent incurred a shock at any time preceding the current wave, so if the event occurred in a certain month, the shock variable is equal to one for all subsequent observations. In this way, we track the impact of having had an income loss or of knowing someone hospitalized at least once in our time frame, similarly to what is measured in the regression in differences.
Since the individual effects absorb all time-invariant variables, in the main specification we cannot assess whether respondents' political views affected their opinions and preferences over time. Thus, we repeat the same analysis by subgroups, considering a sample of Republicans and one of Democrats. The results of the analysis concerning institutional trust are presented in the Online Appendix. Again, we can see that the results do not change in any dramatic way. The fixed effects model allows us to assess how support for government interventions and institutional trust have varied over time. Since the beginning of April, respondents have decreased their belief that the government should keep prices under control, and this seems to be driven by the Republicans; we observe a similar pattern for two other welfare policies: support for the unemployed and for the elderly. Regarding trust, the Democrats increased their confidence in the U.S. Senate and Congress between the first and the last week of April, but by mid-May the level of trust had dropped back to baseline levels. On the contrary, confidence in President Trump dropped significantly both in May and in October, and the coefficients remain negative for both sub-samples of Democrats and Republicans, although they are not significant for the latter. Trust in financial institutions and in the private sector has oscillated over time, while confidence in scientific institutions has dropped over time across all parties, reaching its lowest point in June.
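For a balanced panel, the two-way fixed-effects specification above can be estimated by OLS after demeaning by individual and by wave; a minimal sketch on simulated data (the names and numbers are ours, not the paper's):

```python
import numpy as np

def demean(v, groups):
    """Subtract the group mean within each group."""
    out = v.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        out[m] -= v[m].mean()
    return out

def within_estimator(y, x, ids, waves):
    """Two-way within estimator: for a BALANCED panel, demeaning first by
    individual and then by wave equals the two-way transform
    y_it - y_i. - y_.t + y.. ; then run OLS on the transformed data."""
    y_t = demean(demean(y, ids), waves)
    x_t = demean(demean(x, ids), waves)
    return (x_t @ y_t) / (x_t @ x_t)

# Simulated balanced panel: individual effects, wave effects, and a
# true shock coefficient of 0.7.
rng = np.random.default_rng(1)
N, T = 50, 4
ids = np.repeat(np.arange(N), T)
waves = np.tile(np.arange(T), N)
x = rng.normal(size=N * T)
y = (rng.normal(size=N)[ids] + rng.normal(size=T)[waves]
     + 0.7 * x + 0.1 * rng.normal(size=N * T))
beta_hat = within_estimator(y, x, ids, waves)   # close to the true 0.7
```

The demeaning makes clear why time-invariant characteristics such as party identity drop out, which is why the subgroup analysis above is needed to recover their role.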
\section{Conclusions}
Large scale crises can lead to significant and persistent changes in citizens' beliefs and preferences. Previous studies suggested that such changes occur slowly, over long periods of exposure to a crisis or regime change. Using a longitudinal multi-wave survey, we show that such changes can actually occur very rapidly. We also find that most of the changes in preferences for policies and institutional trust occur after a direct negative experience with the crisis, rather than from exposure per se. A direct economic or health shock during a crisis increases citizens' preferences for greater government spending, in particular on welfare assistance, and decreases trust in institutions. Changes in the political polarization on the same set of policies and institutions can instead be largely explained by whether a person consumes mostly news sources that are aligned with their political views. We show that the main channel through which news sources influence polarization is by creating misperceptions about the gravity of the crisis. Throughout the crisis, Democrats were more likely to overestimate the COVID-19 death rate while Republicans were more likely to underestimate it. In a non-incentivized, light-touch experiment, we find that exposing respondents to the same information from an official government source reduces the partisan misinformation gap, with this effect persisting for several months, potentially counteracting media-led biases. Our results contribute to a growing literature on how crises transform societies, pointing to the importance of tracking preferences frequently and disentangling the mechanisms behind such changes.
\newpage
\subsection*{GMO}
\begin{table}[]
\centering
\caption{Treatment effects on the belief that GMOs are safe}
\label{tab:gmo}
\begin{tabular}{lcccc}\hline
& 1 & 2 & 3 & 4 \\
VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} \\\hline
& & & & \\
Treatment match & -0.0660 & -0.0415 & -0.0125 & -0.0688 \\
& (0.213) & (0.196) & (0.220) & (0.254) \\
Treatment mismatch & 0.0478 & 0.0457 & 0.208 & 0.289 \\
& (0.193) & (0.184) & (0.194) & (0.227) \\
Treatment match party*Rep & 0.579** & 0.498** & 0.516* & 0.670** \\
& (0.234) & (0.220) & (0.291) & (0.282) \\
Treatment mismatch party*Rep & 0.229 & 0.177 & 0.335 & -0.0426 \\
& (0.284) & (0.286) & (0.300) & (0.264) \\
Republican & -0.370*** & -0.260** & -0.253** & -0.262** \\
& (0.124) & (0.1000) & (0.101) & (0.101) \\
Democrat & 0.184 & 0.188 & 0.185 & 0.178 \\
& (0.165) & (0.167) & (0.174) & (0.164) \\
Republican leaning news & & 0.0911 & 0.174 & 0.113 \\
& & (0.0965) & (0.106) & (0.0906) \\
Democratic leaning news & & 0.140 & 0.245* & 0.153 \\
& & (0.106) & (0.137) & (0.102) \\
Confid. sci. com. in w1 & & 0.407*** & 0.416*** & 0.464*** \\
& & (0.0834) & (0.0840) & (0.101) \\
Confid. fed. gov. in w1 & & 0.0148 & 0.00546 & 0.00609 \\
& & (0.157) & (0.153) & (0.160) \\
Treatment match party*Biased news & & & -0.0708 & \\
& & & (0.247) & \\
Treatment mismatch party*Biased news & & & -0.358 & \\
& & & (0.282) & \\
Treatment match party*Biased news*Rep & & & -0.0348 & \\
& & & (0.318) & \\
Treatment mismatch party*Biased news*Rep & & & -0.156 & \\
& & & (0.462) & \\
Treatment match party*Confid. sci. com. & & & & 0.0491 \\
& & & & (0.307) \\
Treatment mismatch party*Confid. sci. com. & & & & -0.361 \\
& & & & (0.281) \\
Treatment match party*Confid. sci. com.*Rep & & & & -0.317 \\
& & & & (0.356) \\
Treatment mismatch party*Confid. sci. com.*Rep & & & & 0.303 \\
& & & & (0.343) \\
Constant & 3.256*** & 3.118*** & 3.051*** & 3.078*** \\
& (0.271) & (0.300) & (0.308) & (0.307) \\
& & & & \\ \hline
Demographic controls & Yes & Yes & Yes & Yes \\
Controls for network and information & No & Yes & Yes & Yes \\
Observations & 1,089 & 1,089 & 1,089 & 1,089 \\
\multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \hline
\end{tabular}
\end{table}
\subsection*{Mask experiment}
\begin{table}[H]
\centering
\caption{Treatment effects on support for mandatory mask wearing}
\label{tab:mask}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccc} \hline
& 1 & 2 & 3 & 4 \\
VARIABLES & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline
& & & & \\
Treatment match & 0.177 & 0.279 & 0.0582 & 0.448 \\
& (0.394) & (0.382) & (0.450) & (0.439) \\
Treatment mismatch & -0.371 & -0.284 & -0.308 & -0.111 \\
& (0.380) & (0.357) & (0.394) & (0.574) \\
Treatment match party*Rep & -0.126 & -0.283 & -0.0877 & 0.279 \\
& (0.530) & (0.570) & (0.729) & (0.630) \\
Treatment mismatch party*Rep & -0.0619 & -0.0833 & 0.496 & -0.0168 \\
& (0.594) & (0.510) & (0.583) & (0.719) \\
Republican & -0.716** & -0.402 & -0.426 & -0.375 \\
& (0.286) & (0.321) & (0.318) & (0.316) \\
Democrat & 1.665*** & 1.119*** & 1.134*** & 1.121*** \\
& (0.312) & (0.346) & (0.339) & (0.351) \\
Direct financial shock w4 & 0.256 & 0.269 & 0.279 & 0.284 \\
& (0.206) & (0.194) & (0.194) & (0.195) \\
Knows someone hospitalized for COVID-19 & -0.0609 & -0.200 & -0.200 & -0.245 \\
& (0.220) & (0.197) & (0.198) & (0.190) \\
log county deaths/100,000 by w5 & & 0.147** & 0.145** & 0.140** \\
& & (0.0593) & (0.0590) & (0.0595) \\
Family/friends have health pre-condition & & 0.00288 & 0.00110 & 0.0218 \\
& & (0.170) & (0.166) & (0.168) \\
Republican leaning news & & -0.532*** & -0.469** & -0.506** \\
& & (0.204) & (0.228) & (0.204) \\
Democratic leaning news & & -0.0795 & -0.116 & -0.0931 \\
& & (0.256) & (0.275) & (0.255) \\
Confid. in sci. comm in w1 & & 1.165*** & 1.170*** & 1.378*** \\
& & (0.175) & (0.168) & (0.221) \\
Confid. in fed. gov. in w1 & & -0.0597 & -0.0518 & -0.108 \\
& & (0.297) & (0.301) & (0.299) \\
Treatment match party*Biased news & & & 0.499 & \\
& & & (0.583) & \\
Treatment mismatch party*Biased news & & & 0.0462 & \\
& & & (0.550) & \\
Treatment match party*Biased news*Rep & & & -0.432 & \\
& & & (0.928) & \\
Treatment mismatch party*Biased news*Rep & & & -1.035 & \\
& & & (0.749) & \\
Treatment match party*Confid. sci. com. & & & & -0.265 \\
& & & & (0.461) \\
Treatment mismatch party*Confid. sci. com. & & & & -0.274 \\
& & & & (0.540) \\
Treatment match party*Confid. sci. com.*Rep & & & & -1.124* \\
& & & & (0.659) \\
Treatment mismatch party*Confid. sci. com.*Rep & & & & -0.193 \\
& & & & (0.677) \\
Constant & 3.918*** & 2.416 & 2.551* & 2.326 \\
& (0.765) & (1.510) & (1.483) & (1.511) \\ \hline
& & & & \\
Demographic controls & Yes & Yes & Yes & Yes \\
Controls for network and information & No & Yes & Yes & Yes \\
Observations & 1,198 & 1,197 & 1,197 & 1,197 \\
\multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\ \hline
\end{tabular}%
}
\end{table}
\subsection*{GMO \& mask}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccc} \hline
& (1) & (2) & (3) & (4) \\
VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline
& & & & \\
Treatment match & -0.0401 & -0.0415 & 0.0337 & 0.279 \\
& (0.234) & (0.196) & (0.433) & (0.382) \\
Treatment mismatch & 0.109 & 0.0457 & -0.351 & -0.284 \\
& (0.217) & (0.184) & (0.407) & (0.357) \\
Treatment match party*Rep & 0.710** & 0.498** & 0.0181 & -0.283 \\
& (0.284) & (0.220) & (0.621) & (0.570) \\
Treatment mismatch party*Rep & 0.206 & 0.177 & -0.0849 & -0.0833 \\
& (0.300) & (0.286) & (0.644) & (0.510) \\
Republican & -0.473*** & -0.260** & -0.671** & -0.402 \\
& (0.119) & (0.1000) & (0.301) & (0.321) \\
Democrat & 0.0699 & 0.188 & 1.957*** & 1.119*** \\
& (0.172) & (0.167) & (0.368) & (0.346) \\
Direct financial shock w4 & & & & 0.269 \\
& & & & (0.194) \\
log county deaths/100,000 by w5 & & & & 0.147** \\
& & & & (0.0593) \\
Knows hospitalized for COVID-19 & & & & -0.200 \\
& & & & (0.197) \\
Family/friends health pre-condition & & & & 0.00288 \\
& & & & (0.170) \\
Republican leaning news & & 0.0911 & & -0.532*** \\
& & (0.0965) & & (0.204) \\
Democratic leaning news & & 0.140 & & -0.0795 \\
& & (0.106) & & (0.256) \\
Confid. sci. com. in w1 & & 0.407*** & & 1.165*** \\
& & (0.0834) & & (0.175) \\
Confid. fed. gov. in w1 & & 0.0148 & & -0.0597 \\
& & (0.157) & & (0.297) \\
Constant & 2.836*** & 3.118*** & 4.002*** & 2.416 \\
& (0.0608) & (0.300) & (0.109) & (1.510) \\ \hline
Controls & No & Yes & No & Yes \\
Observations & 1,089 & 1,089 & 1,198 & 1,197\\ \hline
\multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 }\\
\end{tabular}%
}
\end{table}
\subsection*{GMO \& mask - interactions with biased news and trust in the scientific community}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccc}
\hline
& 1 & 2 & 3 & 4 \\
VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline
& & & & \\
Treatment match & -0.0125 & -0.0688 & 0.0582 & 0.448 \\
& (0.220) & (0.254) & (0.450) & (0.439) \\
Treatment mismatch & 0.208 & 0.289 & -0.308 & -0.111 \\
& (0.194) & (0.227) & (0.394) & (0.574) \\
Treatment match party*Rep & 0.516* & 0.670** & -0.0877 & 0.279 \\
& (0.291) & (0.282) & (0.729) & (0.630) \\
Treatment mismatch party*Rep & 0.335 & -0.0426 & 0.496 & -0.0168 \\
& (0.300) & (0.264) & (0.583) & (0.719) \\
Republican & -0.253** & -0.262** & -0.426 & -0.375 \\
& (0.101) & (0.101) & (0.318) & (0.316) \\
Democrat & 0.185 & 0.178 & 1.134*** & 1.121*** \\
& (0.174) & (0.164) & (0.339) & (0.351) \\
Direct financial shock w4 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.279 & 0.284 \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.194) & (0.195) \\
Knows someone hospitalized for COVID-19 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & -0.200 & -0.245 \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.198) & (0.190) \\
log county deaths/100,000 by w5 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.145** & 0.140** \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.0590) & (0.0595) \\
Family/friends have health pre-condition & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.00110 & 0.0218 \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.166) & (0.168) \\
Republican leaning news & 0.174 & 0.113 & -0.469** & -0.506** \\
& (0.106) & (0.0906) & (0.228) & (0.204) \\
Democratic leaning news & 0.245* & 0.153 & -0.116 & -0.0931 \\
& (0.137) & (0.102) & (0.275) & (0.255) \\
Confid. in sci. comm in w1 & 0.416*** & 0.464*** & 1.170*** & 1.378*** \\
& (0.0840) & (0.101) & (0.168) & (0.221) \\
Confid. in fed. gov. in w1 & 0.00546 & 0.00609 & -0.0518 & -0.108 \\
& (0.153) & (0.160) & (0.301) & (0.299) \\
Treatment match party*Biased news & -0.0708 & & 0.499 & \\
& (0.247) & & (0.583) & \\
Treatment mismatch party*Biased news & -0.358 & & 0.0462 & \\
& (0.282) & & (0.550) & \\
Treatment match party*Biased news*Rep & -0.0348 & & -0.432 & \\
& (0.318) & & (0.928) & \\
Treatment mismatch party*Biased news*Rep & -0.156 & & -1.035 & \\
& (0.462) & & (0.749) & \\
Treatment match party*Confid. sci. com. & & 0.0491 & & -0.265 \\
& & (0.307) & & (0.461) \\
Treatment mismatch party*Confid. sci. com. & & -0.361 & & -0.274 \\
& & (0.281) & & (0.540) \\
Treatment match party*Confid. sci. com.*Rep & & -0.317 & & -1.124* \\
& & (0.356) & & (0.659) \\
Treatment mismatch party*Confid. sci. com.*Rep & & 0.303 & & -0.193 \\
& & (0.343) & & (0.677) \\
Constant & 3.051*** & 3.078*** & 2.551* & 2.326 \\
& (0.308) & (0.307) & (1.483) & (1.511) \\ \hline
& & & & \\
Controls & Yes & Yes & Yes & Yes \\
Observations & 1,089 & 1,089 & 1,197 & 1,197 \\\hline
\multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 }\\
\end{tabular}%
}
\end{table}
\end{document}
\section{Introduction}
Cosmological observations have indicated that about $95\%$ of the energy content of the universe is of unknown origin. About $25\%$ of this unknown energy, known as {\it dark matter \cite{Bertone:2016nfn} } behaves like a perfect fluid with equation of state
$w\simeq 0$ which is similar to that of matter with velocity much smaller than the velocity of light $c$ that interacts only gravitationally. The other $70\%$ usually called {\it dark energy \cite{Frieman:2008sn, Copeland:2006wr}} behaves like a perfect fluid with equation of state $w\simeq -1$ which is similar to that of a cosmological constant $\Lambda$ \cite{Padmanabhan:2002ji, Peebles:2002gy, Carroll:2000fy, Sahni:1999gb}. The {\it standard $\Lambda CDM$ model \cite{Planck:2018vyg} } assumption is that dark matter consists of a particle which can be discovered in accelerator experiments while dark energy is actually the cosmological constant. This interpretation however is being challenged by three facts:
\begin{itemize}
\item Despite long and persistent efforts of a few decades it has not been possible to identify the dark matter particle in Earth bound experiments \cite{Rogers:2020ltq,XENON:2019gfn}.
\item The required cosmological constant value is too low to be consistent with any particle physics theory (the fine tuning problem) \cite{Padilla:2015aaa}.
\item The internal observational consistency of $\Lambda CDM$ has been challenged recently by conflicting best fit values of parameters (tensions \cite{Perivolaropoulos:2021jda}) of the standard model. The most prominent and persistent of these tensions is the {\it Hubble tension \cite{DiValentino:2021izs}}: The Hubble parameter $H_0$ as measured from the CMB sound horizon \cite{Planck:2018vyg} standard ruler under the assumption of $\Lambda CDM$ is in $5\sigma$ conflict with the best fit value obtained using the local distance ladder method with Type Ia Supernovae (SnIa) \cite{Riess:2021jrx}.
\end{itemize}
It is therefore becoming increasingly likely that the assumed properties of the two main fluids of the universe may deviate from the standard model assumptions.
One of the most efficient probes of the detailed properties of cosmological fluids is gravitational lensing \cite{Zwicky:1937zzb,Dyson:1920cwa,Walsh:1979nx,Bozza:2010xqn,Bartelmann:2010fz,Cunha:2018acu,He:2017alg,Piattella:2016nzt,Ali:2017ofu,Lake:2001fj,Rindler:2007zz,Takizawa:2021jxa,Virbhadra:2008ws,Virbhadra:1999nm,Wambsganss:1998gg}. Gravitational lensing can probe directly the local metric parameters in a generic model independent manner and therefore is a useful tool for the detection of signatures of either exotic fluids \cite{Finelli:2006iz} or modified gravity \cite{Mannheim:2005bfa,Wheeler:2013ora,Kiefer:2017nmo,Li:2011ur}. In the presence of such effects the General Relativistic (GR) vacuum metric would get modified \cite{Mannheim:1988dj,Edery:1997hu,Cutajar:2014gfa,Grumiller:2010bz} at both the solar system \cite{Ozer:2017oik,Edery:1997hu,Kagramanova:2006ax,Sereno:2007rm,Iorio:2007ub} and the galactic and cluster scales \cite{Varieschi:2008va,Chang:2011bp,Pizzuti:2017diz}. Such modifications would need to be distinguished from other effects like non-spherically symmetric matter near a gravitational lens galaxy/cluster or projected along the line of sight \cite{McCully:2016yfe}. Despite these effects, upper bounds on the spherical metric parameters can still be obtained by assuming that any deviation from the Schwarzschild metric is due to dark fluids and not to other effects. In this context, any order of magnitude estimate of the extended metric parameters would be considered as an upper bound.
The deflection angle of the light emitted by a background source deflected by a foreground lens (eg cluster) is an observable quantity \cite{DES:2017tby} that depends on the metric parameters \cite{Lim:2016lqv}. Even though such strong lensing systems are difficult to identify \cite{Metcalf:2018elz,Jacobs:2017xhn,Petrillo:2017njm}, in the context of an approximately spherically symmetric lens, the measured deflection angle can lead to direct measurement of the metric parameters provided that the metric is modeled in a general enough context \cite{Rindler:2007zz,Jha:2023qqz,Ishak:2007ea,Ishak:2008zc,Sultana:2012zz,He:2017alg,Azreg-Ainou:2017obt,Younas:2015sva,Lim:2016lqv}. The simplest modeling of the metric around a lens system is the Schwarzschild vacuum metric which has been extensively used for the search of unseen matter associated with the lens. In the context of this metric, the deflection angle to lowest order is \cite{Weinberg:1972kfs}
\begin{equation}
{\hat \alpha}=\frac{4m}{r_0}
\label{hatasch1}
\end{equation}
where $m$ is the mass of the lens and $r_0$ is the distance of closest approach of the light-ray to the lens (the impact parameter). Thus, measurement of $\hat \alpha $ can lead to estimates and constraints on the mass $m$ of the lens, while comparison with the visible part of the mass can lead to an estimate of the dark matter content of the lens.
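As a quick consistency check of eq. \eqref{hatasch1}, the classic value for a light ray grazing the solar limb can be recovered numerically. The sketch below is a minimal illustration in plain Python; the physical constants are standard approximate values, not quantities taken from this analysis:

```python
import math

# Standard approximate constants (SI units)
G = 6.674e-11          # gravitational constant
c = 2.998e8            # speed of light
M_sun = 1.989e30       # solar mass in kg
R_sun = 6.957e8        # solar radius in m (grazing ray: r_0 = R_sun)

m = G * M_sun / c**2               # geometrized mass, ~1.48 km
alpha = 4 * m / R_sun              # deflection angle in radians
alpha_arcsec = alpha * (180 / math.pi) * 3600
print(f"alpha ~ {alpha_arcsec:.2f} arcsec")   # ~1.75 arcsec
```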
In the context of generalized metrics, the additional parameters may also be constrained by the measurement of $\hat \alpha$. For example in the presence of vacuum energy (a cosmological constant $\Lambda$ with a term $\frac{\Lambda}{3} r^2$ in the spherically symmetric metric) the predicted deflection angle becomes \cite{Rindler:2007zz}
\begin{equation}
{\hat \alpha_{SdS}}\simeq \frac{4m}{r_0} (1-\frac{2m^2}{r_0^2}-\frac{\Lambda r_0^4}{24m^2})
\label{hatasds0}
\end{equation}
Using cluster lensing data, this form of generalized $\hat \alpha$ has led to constraints on the value of $\Lambda$ ($\Lambda \lesssim 10^{-54} cm^{-2}$ \cite{Ishak:2007ea}) which approaches the precision of corresponding cosmological constraints obtained from measurements of the expansion rate of the Universe at various redshifts $\Lambda \simeq 10^{-56} cm^{-2}$. Similarly a generalized spherically symmetric metric with a Rindler term $\sim b\; r$ has led to constraints on the Rindler acceleration term $b\lesssim 10^{-2} m/sec^2$ from solar system quasar lensing data \cite{Carloni:2011ha}. It is therefore interesting to consider other spherically symmetric generalizations of the Schwarzschild metric and impose constraints on their parameters using gravitational lensing data. Previous studies have investigated the effects of special cases of spherical generalized metrics \cite{Zhang:2021ygh} on gravitational lensing \cite{Azreg-Ainou:2017obt,Younas:2015sva} and other observables \cite{Sheykhi:2010yq} like galactic clustering \cite{Khanday:2021kjy}.
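To see why cluster-scale lensing can approach cosmological precision on $\Lambda$, it is instructive to compare the magnitudes of the two correction terms in eq. \eqref{hatasds0} for round, illustrative cluster parameters. The numbers below (mass, impact parameter, value of $\Lambda$) are assumed order-of-magnitude inputs for the sketch, not fits from the data discussed here:

```python
# Assumed, illustrative inputs (SI units): a 10^14 solar-mass cluster lens,
# impact parameter 100 kpc, and Lambda ~ 1e-56 cm^-2 = 1e-52 m^-2.
G, c = 6.674e-11, 2.998e8
m = G * (1e14 * 1.989e30) / c**2     # geometrized cluster mass, ~1.5e17 m
r0 = 100 * 3.086e19                  # 100 kpc in meters
Lam = 1e-52                          # cosmological constant in m^-2

frac_m = 2 * m**2 / r0**2            # relativistic correction term of eq. (hatasds0)
frac_L = Lam * r0**4 / (24 * m**2)   # cosmological-constant correction term
print(frac_m, frac_L)                # ~5e-9 vs ~2e-2
```

With these inputs the $\Lambda$ term is a percent-level correction while the relativistic $m^2/r_0^2$ term is negligible, which is why Einstein radius measurements around clusters are sensitive probes of $\Lambda$.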
In the present analysis, we derive general analytical expressions of the predicted deflection angle in the context of a wide range of spherically symmetric metrics. This class of metrics includes as special cases the Schwarzschild deSitter (SdS) metric \cite{Rindler:2007zz}, the Reissner-Nordstrom metric \cite{Eiroa:2002mk}, the nonlinear electrodynamics charged black hole \cite{Gurtug:2020wpi,Gurtug:2019yzu}, Yukawa black holes \cite{Benisty:2022txp}, the global monopole metric \cite{Barriola:1989hx, Platis:2014sna}, the Rindler-Grumiller metric \cite{Carloni:2011ha,Grumiller:2010bz,Perivolaropoulos:2019vgl,Lim:2019akv,Sultana:2012qp,Carloni:2011ha,Gregoris:2021plc}, the Weyl gravity vacuum metric \cite{Mannheim:1988dj,Edery:1997hu}, the $SdS_w$ metric \cite{Zhang:2021ygh,Fernando:2012ue,Uniyal:2014paa}, the Kiselev black hole \cite{Kiselev:2002dx,Younas:2015sva,Liu:2022hbp,Alfaia:2021cnk,Shchigolev:2016gro,Abbas:2019olp}, the Kiselev charged black hole \cite{Azreg-Ainou:2017obt,Atamurotov:2022knb} and the interior Kottler metric \cite{Schucker:2010rkd, Antoniou:2016obw} (for a good review of such spherical inhomogeneous solutions see \cite{Faraoni:2021nhi}). Then we compare these expressions with the measured values of the deflection angle in the context of cluster scale systems, thus imposing constraints on the metric parameters that appear in the analytic expressions of the deflection angle $\hat \alpha$. In this context, after deriving the analytical expressions for $\hat \alpha$, we use observations of Einstein radii around distant galaxies and clusters of galaxies to derive the measured lensing deflection angle. Using observational data of a selected list
of Einstein radii around clusters and galaxies, we derive upper-bound, order-of-magnitude constraints on the new metric parameters in the context of a wide range of models. These results provide an
improvement of several orders of magnitude on previous upper bounds on these parameters from planetary or stellar systems \cite{Sereno:2007rm,Kagramanova:2006ax}.
The structure of this paper is the following: In the next section we consider a generalized spherically symmetric metric and connect its parameters with a possible exotic fluid energy-momentum tensor that could give rise to it. In section \ref{III} we derive general analytic expressions for the deflection angle of such metrics. In section \ref{IV} we apply these analytic expressions to derive the deflection angle in a Schwarzschild metric perturbed by a general power-law term (Kiselev metric) which may represent either an exotic fluid or a modification of GR in the vacuum. Special cases of such a term include a vacuum energy term, a Rindler acceleration term, a global monopole scalar field gravity, the electric field of Reissner-Nordstrom metric or other more general terms. In section \ref{V} we compare the deflection angle of the perturbed Schwarzschild metric of section \ref{IV} with the measured Einstein radii and deflection angles around clusters and derive order of magnitude constraints on the new metric parameters of the perturbing power-law terms. Finally in section \ref{VI}, we conclude, summarize and discuss possible future prospects of the present analysis. In what follows we set Newton's constant $G$ and the speed of light $c$ to unity unless otherwise mentioned.
\section{General Class of Spherically Symmetric Metrics and their fluid background}
\label{II}
We focus on the following class of spherically symmetric metrics
\begin{equation}
ds^2= f(r) dt^2 - f(r)^{-1} dr^2 - r^2 (d\theta^2 +\sin^2\theta d\phi^2)
\label{sphmetric}
\end{equation}
The energy momentum tensor that can give rise to this metric may be obtained from the Einstein tensor $G_\nu^\mu$. For
\begin{equation}
f(r)=1-\frac{2m}{r} - g(r)
\label{fgen1}
\end{equation}
where $g(r)$ is an arbitrary function, it is easy to show that
\begin{equation}
G^{\mu}_{\nu}=
\begin{bmatrix}
\frac{g(r)}{r^2}+\frac{g'(r)}{r} & 0 & 0 & 0 \\
0 & \frac{g(r)}{r^2}+\frac{g'(r)}{r} & 0 & 0 \\
0 & 0 & \frac{g'(r)}{r}+\frac{g''(r)}{2} & 0 \\
0 & 0 & 0 & \frac{g'(r)}{r}+\frac{g''(r)}{2}
\end{bmatrix}
=\kappa T^{\mu}_{\nu}
\end{equation}
where $T^{\mu}_{\nu}$ is the energy momentum tensor that gives rise to the metric \eqref{fgen1}, \eqref{sphmetric} and $\kappa=8\pi G$. Clearly, the parameter $m$ does not appear in $T^{\mu}_{\nu}$ because it corresponds to the vacuum solution. If $g(r)$ is a superposition of power law terms
\begin{equation} g(r)=\sum_i b_i r^{-q_i}
\label{grser}
\end{equation}
where $q_i$ are arbitrary real constants, then the energy momentum of the fluid that supports the above metric may be written as \cite{Alestas:2019wtw}
\begin{equation}
T^{\mu}_{\nu}=\frac{1}{\kappa}\sum_{i} b_i (1-q_i) r^{-(q_i+2)}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -\frac{1}{2}q_i & 0 \\
0 & 0 & 0 & -\frac{1}{2}q_i
\end{bmatrix}=
\begin{bmatrix}
\rho & 0 & 0 & 0 \\
0 & -p_r & 0 & 0 \\
0 & 0 & -p_\theta & 0 \\
0 & 0 & 0 & -p_\phi
\end{bmatrix}
\label{tmunu}
\end{equation}
where in the last equation we have denoted the fluid density $\rho$, the radial pressure $p_r$ and the tangential pressures $p_\theta=p_\phi$. This is a generalization of the well known Kiselev black hole \cite{Kiselev:2002dx}. Notice that since the fluid tangential and radial pressures are not equal, this energy momentum tensor does not correspond to a perfect fluid due to the inhomogeneity and local anisotropy of the pressure. Thus it is not directly related to quintessence as has been stated in previous studies. This issue is clarified in detail in Ref. \cite{Visser:2019brz}. However, in the combined presence of the terms $q_1=1$ (point mass), $q_2\simeq -2$ (cosmological constant) and $q_i>-2$ (dark fluids), the energy momentum tensor \eqref{tmunu} is consistent with dark energy because at large distances the cosmological constant $q_2\simeq -2$ term of the metric function $f(r)$ dominates and corresponds to a homogeneous and isotropic fluid with equation of state $w\simeq -1$.
As expected, the term $q_i=1$ corresponds to a zero energy momentum term (vacuum solution) while for $q_i=-2$ we obtain the cosmological constant term (constant energy density-pressure) and for $q_i=0$ we have the case of a global monopole \cite{Barriola:1989hx} (zero angular pressure components while the energy density, radial pressure drop as $\sim r^{-2}$). The electric field of a Reissner-Nordstrom black hole corresponds to $q_i=2$. Other similar solitonic field configurations corresponding to different values of $q$ could in principle be constructed.
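The component relations above can be spot-checked numerically. The following sketch (plain Python, central finite differences; the values of $b$, $q$ and $r$ are arbitrary test numbers) verifies that for a single power-law term $g(r)=b\,r^{-q}$ the Einstein tensor components reproduce $\kappa T^t_t=b(1-q)r^{-(q+2)}$ and $\kappa T^\theta_\theta=-\frac{q}{2}\,b(1-q)r^{-(q+2)}$:

```python
# Arbitrary test values for the power-law term g(r) = b r^(-q)
b, q, r, h = 0.1, 1.5, 2.0, 1e-4

def g(x):
    return b * x**(-q)

gp = (g(r + h) - g(r - h)) / (2 * h)            # central difference g'(r)
gpp = (g(r + h) - 2 * g(r) + g(r - h)) / h**2   # central difference g''(r)

G_tt = g(r) / r**2 + gp / r          # = G^t_t = G^r_r
G_thth = gp / r + gpp / 2            # = G^theta_theta = G^phi_phi
kappa_rho = b * (1 - q) * r**(-(q + 2))

assert abs(G_tt - kappa_rho) < 1e-6
assert abs(G_thth + (q / 2) * kappa_rho) < 1e-6  # G^theta_theta = -(q/2) * kappa * rho
print("component relations verified")
```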
The question that we address in this analysis is the following: {\it 'What would be the signature of such general and exotic fluids in gravitational lensing?'} Such fluids would give rise to the metric \eqref{sphmetric} in the context of GR. A similar metric may also emerge in the context of modified GR gravity theories even as a vacuum solution \cite{Grumiller:2010bz,Ren:2021uqb,Shaikh:2017zfl}. Therefore the detection of signatures of such a generalized spherically symmetric metric could be interpreted either as a signature of an exotic fluid in the context of GR or as presence of modifications of GR. This prospect is investigated in the following sections.
\section{Photon Geodesics and Lensing in a general fluid}
\label{III}
The geodesic equations are derived with respect to the Lagrangian $L=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}$, where $\dot{x}^{\mu}(\tau)$ is the time-like or null particle's trajectory and the dots denote derivatives with respect to the proper time affine parameter $\tau$. The Euler-Lagrange equation $\frac{d}{d\tau}\frac{\partial L}{\partial\dot{x}^{\mu}}=\frac{\partial L}{\partial x^{\mu}}$ leads to the equations of motion
\begin{equation}
\dot{t} = \frac{E}{f} \label{tgeod}
\end{equation}
\begin{equation}
\dot{\phi} = \frac{h}{r^2 \sin^{2}\theta} \label{phigeod}
\end{equation}
\begin{equation}
\ddot{r} = \frac{f'}{2f^2}\dot{r}^2 + rf\dot{\theta}^2 -\frac{f'E^2}{2f} + \frac{f\; h ^2}{r^2 \sin^{2}\theta}
\end{equation}
\begin{equation}
\ddot{\theta} = -\frac{2\dot{r}\dot{\theta}}{r} + \frac{\cos\theta \; h^{2}}{r^{4} \sin^{3}\theta}
\end{equation}
where $E$, $h$ are the energy and the angular momentum of the particle respectively. The prime denotes derivative with respect to $r$. Use of these equations in the photon geodesic constraint
\begin{equation}
ds^2=g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=0
\label{photgeod}
\end{equation}
leads to
\begin{equation}
\dot{r}^2 + r^2 f\dot{\theta}^2 = E^2 -\frac{h^2 \; f}{r^2 \sin^{2}\theta}
\label{rdot}
\end{equation}
Fixing $\theta = \pi/2$ by spherical symmetry in (\ref{rdot}) leads to the radial equation
\begin{equation}
\dot{r}^2 = E^2 - \frac{h^2}{r^2}\;f(r)
\label{rdot1}
\end{equation}
At the distance of closest approach $r_0$ (see Fig. \ref{fig:lens}) where $\dot{r}=0$ we have
\begin{equation}
\frac{f(r_0)}{r_0^2}=\frac{E^2}{h^2}
\label{r0def}
\end{equation}
From eqs. (\ref{phigeod}), (\ref{rdot1}) and (\ref{r0def}) for $\theta=\pi/2$ we obtain the null geodesic trajectory equation
\begin{equation}
\left(\frac{dr}{d\phi}\right)^2 = r^4\left(\frac{f(r_0)}{r_0^{2}}-\frac{f(r)}{r^{2}}\right)
\label{drdphi2}
\end{equation}
where $r_0$ is the distance of closest approach or {\it impact parameter}. Setting $u \equiv 1/r$ in \eqref{drdphi2} leads to
\begin{equation}
\left(\frac{du}{d\phi}\right)^2= u_{0}^2 f\left(\frac{1}{u_{0}}\right) -u^2 f\left(\frac{1}{u}\right)\label{dudphi}
\end{equation}
where $u_0=1/r_0$. Eq. \eqref{dudphi} can be integrated as
\begin{equation}
\int_{\phi_{0}}^{\phi} d\phi ' = \pm \int_{u_{0}}^{u}\frac{du'}{\sqrt{u^2_{0}f(1/u_0)-u'^2 f(1/u')}} \label{intgeod}
\end{equation}
where $\phi_0$ is the angle corresponding to the closest approach (Fig. \ref{fig:lens}). For the right part of the symmetric photon geodesic (shown in Fig. \ref{fig:lens}) $u$ decreases as $\phi$ decreases and thus we use the $+$ sign in eq. \eqref{intgeod} with $\phi_0=\frac{\pi}{2}$.
\begin{figure}[h]
\centering
\includegraphics[width = 0.9\textwidth]{fig1.pdf}
\caption{The trajectory of light from source (left side) to observer (right side), passing at a distance of closest approach $r_0$ to the lens. We assume the trajectory does not cross either the cosmological or event horizons of the spacetime. Here we have drawn the angles $\phi_0$ and $\phi\equiv\phi_{obs}$ to be relative to the horizontal dashed line, implying that this horizontal line is the $\phi=0$
angle.}
\label{fig:lens}
\end{figure}
The half deflection angle $\hat \alpha/2$ at $\phi$ with respect to the straight Newtonian trajectory is the angle between the velocity vector of the photon in GR and the velocity of the photon in Newtonian theory (straight horizontal line) at each angle $\phi$. The full deflection angle requires an additional factor of 2 because the full trajectory may be thought of as composed of two symmetric parts (left and right of Fig. \ref{fig:lens}). Thus we have
\begin{equation}
\hat \alpha=2(\psi(\phi)-\phi)
\label{hata1}
\end{equation}
where
\begin{equation}
\cos\psi=\frac{\vec r \cdot \vec v}{\vert \vec r\vert \cdot \vert \vec v \vert}
\label{cospsi}
\end{equation}
where
the vectors $\vec r$ and $\vec v$ live in the $2d$ space $r-\phi$ with metric $g_{ij}={\rm diag}(f^{-1},r^2)$, where $\vec r=(r,0)$, $\vec v=(dr/d\tau,d\phi /d\tau)$. Thus
\begin{equation}
\vec r\cdot\vec v = g_{ij}r^i v^j=f^{-1} r \frac{dr}{d\tau}\label{rv}
\end{equation}
\begin{equation}
\vert \vec r \vert = (g_{ij} r^i r^j)^{1/2}=f^{-1/2}r \label{mr}
\end{equation}
\begin{equation}
\vert \vec v \vert = (g_{ij} v^i v^j)^{1/2}=\left(f^{-1}\left(\frac{dr}{d\tau}\right)^2 +r^2 \left(\frac{d\phi}{d\tau}\right)^2 \right)^{1/2} \label{mv}
\end{equation}
Using \eqref{rv}-\eqref{mv} in \eqref{cospsi} leads to
\begin{equation}
\sin\psi=\frac{f^{1/2} r}{\left(f r^2+ \left(\frac{dr}{d\phi}\right)^2\right)^{1/2}}
\label{sinpsi}
\end{equation}
Using now \eqref{drdphi2} in \eqref{sinpsi} we find
\begin{equation}
\sin\psi=\frac{u(\phi)}{u_0}\frac{f(u)^{1/2}}{f(u_0)^{1/2}}
\label{sinpsiu}
\end{equation}
In what follows we use $u_0$ as a unit for $u$ and thus we set $u_0=r_0=1$ in \eqref{sinpsiu} while $u\rightarrow u/u_0$ and thus $u$ and $r$ along with all the metric parameters become dimensionless. Thus using \eqref{hata1} for the full deflection $\hat \alpha$ at angle $\phi$ and \eqref{sinpsiu} we have
\begin{equation}
\hat \alpha (\phi)=2\left[\sin^{-1}\left(u(\phi)\frac{f(u)^{1/2}}{f(1)^{1/2}}\right)-\phi\right]
\label{hata2}
\end{equation}
This is a general equation for the deflection angle. Note that the effects of the metric expressed through the function $f(u)$ may be significant in metrics that are not asymptotically flat. These effects were not taken into account in some early studies \cite{Islam:1983rxp} in the estimate of $\hat \alpha$ for a Schwarzschild-deSitter (SdS) spacetime which is not asymptotically flat. This led to the conclusion that a cosmological constant has no effect on the gravitational lensing deflection angle. The debate on the issue of the effects of the cosmological constant on lensing, however, is not over \cite{Khriplovich:2008ij,Hu:2021yzn,Rindler:2007zz}.
For a source behind the lens with respect to the observer ($\phi_{obs}\simeq 0$), an observer far from the lens ($u<<1$), in an asymptotically flat space ($f(u)\simeq 1$) and assuming weak gravity at $r_0$ ($f(1)\simeq 1$), this equation takes the usual form \cite{Weinberg:1972kfs}
\begin{equation}
\hat \alpha \simeq 2\; u_{obs}
\label{hata3}
\end{equation}
In general however, the above assumptions may not be applicable and therefore the use of \eqref{hata3} instead of \eqref{hata2} should be implemented with extreme care. This point was stressed for the first time by Rindler-Ishak (RI) \cite{Rindler:2007zz} in the context of estimating $\hat \alpha$ in a SdS spacetime which is not asymptotically flat. In the present analysis we follow RI and impose the above assumptions $u=u_{obs}<<1$, $f(1)\simeq 1$, $\phi=\phi_{obs}\simeq 0$ but we do not assume asymptotic flatness ($f(u)\simeq 1$) unless it is clearly applicable for the considered metric. Thus as shown below, our more general analysis reproduces the result of RI in the special case of SdS spacetime which is not asymptotically flat.
In order to calculate $\hat \alpha$ under the above assumptions we thus need to obtain $u(\phi\simeq 0)\equiv u_{obs}$ as a function of the metric parameters by integrating \eqref{intgeod} with $\phi_0=\pi/2$, $\phi=\phi_{obs}\simeq 0$, $u_0=1$, $u=u_{obs}<<1$ and with the $+$ sign and substitute it in \eqref{hata2} under the above assumptions but not assuming asymptotic flatness. Thus, the integral that needs to be evaluated is
\begin{equation}
\int_{\pi/2}^{0} d\phi ' = + \int_{1}^{u_{obs}}\frac{du'}{\sqrt{1-u'^2 f(1/u')}} \label{intgeod1}
\end{equation}
Using \eqref{intgeod1}, $u_{obs}$ is expressed in terms of the metric parameters. Then, from \eqref{hata2}, $\hat \alpha$ may be obtained using
\begin{equation}
\hat \alpha \simeq 2 u_{obs}\; f(u_{obs})^{1/2}
\label{hata4}
\end{equation}
for any spherically symmetric metric of the form \eqref{sphmetric}.
We now focus on a spherically symmetric metric with
\begin{equation}
f(r)=1-b\; r^{-q}=1-b\; u^q
\label{frt1}
\end{equation}
where $b$ and $q$ are real dimensionless parameters (rescaling with $r_0$ (or $u_0$) is assumed).
In the context of this metric, eq. \eqref{intgeod1} becomes
\begin{equation}
\int_{\pi/2}^{0} d\phi' = \int_1^{u_{obs}}\frac{du}{\sqrt{1-u^{2}+b(u^{q+2}-1)}} \label{int1}
\end{equation}
For $b<<1$ we approximate the integral of the above equation and obtain
\begin{equation}
-\frac{\pi}{2} = \int_1^{u_{obs}}\frac{du}{\sqrt{1-u^2}}-\frac{b}{2}\int_1^{u_{obs}} du\frac{u^{q+2}-1}{(1-u^2)^{3/2}}\label{int2}
\end{equation}
which yields
\begin{equation}
\arcsin(u_{obs})-\frac{b}{2}\; I_q(u_{obs})\simeq u_{obs}-\frac{b}{2}\; I_q(u_{obs})=0
\label{int3}
\end{equation}
or
\begin{equation}
u_{obs}\simeq \frac{b}{2}\; I_q(u_{obs})
\label{uobsgen}
\end{equation}
where
\begin{equation}
I_q(u_{obs})=\int_1^{u_{obs}} du\frac{u^{q+2}-1}{(1-u^2)^{3/2}}
\label{iquobs}
\end{equation}
The integral $I_q(u_{obs})$ can be calculated analytically for any real $q$ as
\begin{eqnarray}
I_q(u_{obs})&\equiv&\int_1^{u_{obs}} du'\frac{u'^{q+2}-1}{(1-u'^2)^{3/2}}=-\frac{u_{obs}}{\sqrt{1-u_{obs}^2}}-\frac{i \sqrt{\pi}\Gamma\left(-\frac{q}{2}\right)}{\Gamma\left(-\frac{1}{2}-
\frac{q}{2}\right)}-\frac{e^{-\frac{1}{2}i (3+q) \pi } \Gamma\left(-\frac{q}{2}\right) \Gamma\left(\frac{3+q}{2}\right)}{\sqrt{\pi}}\nonumber\\
&+&\frac{e^{-\frac{1}{2}i(3+q)\pi}u^{3+q}_{obs}\ _2F_1\left(\frac{3}{2},\frac{3+q}{2},\frac{5+q}{2},u^2_{obs}\right)\left(-i\cos\left(\frac{q\pi}{2}\right)+\sin\left(\frac{q\pi}{2}\right)\right)}{3+q}
\label{iqint}
\end{eqnarray}
The asymptotic form of this integral for $u<<1$ is
\begin{equation}
I_q(u_{obs})=\xi(q)+O(max[u_{obs},u_{obs}^{q+3}])
\label{iqser}
\end{equation}
where
\begin{equation}
\xi(q)=-\frac{i \sqrt{\pi}\Gamma\left(-\frac{q}{2}\right)}{\Gamma\left(-\frac{1}{2}-
\frac{q}{2}\right)}-\frac{e^{-\frac{1}{2}i (3+q) \pi } \Gamma\left(-\frac{q}{2}\right) \Gamma\left(\frac{3+q}{2}\right)}{\sqrt{\pi}}
\label{xiq}
\end{equation}
is a real function of $q$. In Table \ref{tabiq} we show the values of $I_q(u)$ and of $\xi(q)$ for a few integer values of $q$ obtained from the analytical expressions \eqref{iquobs}, \eqref{iqser}, \eqref{xiq}. The corresponding forms of the deflection angle $\hat \alpha$ are also shown for each $q$ using eq. \eqref{hatamq} which is derived below.
\begingroup
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\begin{tabular}{c c c c}
\hline
$I_q$ & \textit{Analytic form} & \textit{$u<<1$ limit} & $\hat{a}(m,q) $\\[1ex] \hline\hline
\shortstack{$I_{-4}$} & $-\frac{\sqrt{1-u^2}}{u}$ & $-1/u$ & $(4m-\frac{b}{2m})(1-2m^2-\frac{b}{32m^4})$\\[1ex]
\shortstack{$I_{-3}$} & $\sqrt{\frac{2}{1+u}-1}-\tanh^{-1}(\sqrt{1-u^2})$ & $\ln{u}$ & $(4m+b\ln(2m))(1-2m^2-\frac{b}{16m^3})$ \\[1ex]
\shortstack{$I_{-2}$} & $0$ & $0$ & $4m(1-2m^2-\frac{b}{8m^2})$\\ [1ex]
\shortstack{$I_{-1}$} & $\sqrt{\frac{2}{1+u}-1}$ & $1-u$ & $(4m+b)(1-2m^2-\frac{b}{4m})$\\[1ex]
\shortstack{$I_0$} & $\cos^{-1}u$ & $\pi/2-u$ & $4m+\frac{\pi}{2}b$\\ [1ex]
\shortstack{$I_{1}$} & $\big(u+2\big)\sqrt{\frac{2}{1+u}-1}$ & $2-u$ & $4m+2b$ \\ [1ex]
\shortstack{$I_{2}$} & $\frac{1}{2}\bigg(3\cos^{-1}u+u\sqrt{1-u^2}\bigg)$ & $\frac{3\pi}{4}-u$ & $4m+\frac{3\pi}{4}b$\\ [1ex]
\shortstack{$I_{3}$} & $\frac{8-u\big(3+4u+u^3\big)}{3\sqrt{1-u^2}}$ & $\frac{8}{3}-u$ & $4m+\frac{8}{3}b$\\ [1ex]
\shortstack{$I_{4}$} & $\frac{1}{8}\bigg(u\sqrt{1-u^2}(7+2u^2)+15\cos^{-1}u\bigg)$ & $\frac{15\pi}{16}-u$ & $4m+\frac{15\pi}{16}b$\\[1ex] \hline
\end{tabular}
\caption{\label{tabiq} Values of $I_q(u)$ and its asymptotic form for $u<<1$ which reveals the value of $\xi(q)$ for $q\geq -2$. The corresponding forms of the deflection angle $\hat \alpha$ are also shown for each $q$ using eq. \eqref{hatamq}. The value of $q$ in each row corresponds to the index of $I_q$. For $q\geq 0$ we have set $f(u_{obs})\simeq 1$ since $u_{obs}\ll 1$. For $q<0$ we have set $u_{obs}\simeq2m$ assuming $b\ll m$ and included the $f(u_{obs})^{1/2}$ term of eq. \eqref{hata4}. All the parameters appear in their dimensionless form where we have set the impact parameter $r_0=1$. }
\end{table}
\endgroup
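The asymptotic values $\xi(q)$ listed in Table \ref{tabiq} can be cross-checked by direct numerical integration of eq. \eqref{iquobs} in the limit $u_{obs}\rightarrow 0$. A minimal sketch in plain Python follows; the substitution $u=1-s^2$ regularizes the integrable $1/\sqrt{1-u}$ endpoint singularity at $u=1$, after which the midpoint rule suffices (valid for $q>-2$):

```python
import math

def xi(q, n=20000):
    # xi(q) = int_0^1 (1 - u^(q+2)) / (1 - u^2)^(3/2) du   (q > -2);
    # with u = 1 - s^2 the integrand becomes regular on (0, 1).
    total = 0.0
    for i in range(n):
        s = (i + 0.5) / n                    # midpoint rule in s
        u = 1.0 - s * s
        total += 2.0 * (1.0 - u**(q + 2)) / (s * s * (2.0 - s * s)**1.5) / n
    return total

print(xi(1))   # ~2, the Schwarzschild case (b = 2m)
print(xi(2))   # ~3*pi/4
print(xi(0))   # ~pi/2, the global monopole case
```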
For a general asymptotically flat metric ($q>0$, $f(u_{obs})\simeq 1$) we have from eqs. \eqref{hata4}, \eqref{uobsgen}
\begin{equation}
\hat \alpha_q = b\; \xi(q)
\label{hataq}
\end{equation}
For example for $q=1$ corresponding to the Schwarzschild metric we have $\xi(1)=2$ and setting $b=2m$ in \eqref{hataq} we find the well known result
\begin{equation}
\hat \alpha_{q=1}\simeq \frac{4m}{r_0}
\label{hatasch}
\end{equation}
where we have reintroduced the impact parameter $r_0$. Similarly for $q=2$, using $I_2(u_{obs})\simeq \frac{3\pi}{4}-u_{obs}+O(u_{obs}^2) \simeq \frac{3\pi}{4}$, we find
\begin{equation}
\hat \alpha_{q=2}\simeq \frac{3\pi b}{4r_0^2}
\label{hataq2}
\end{equation}
For $q<0$ the metric \eqref{sphmetric} is not asymptotically flat and therefore the solution of \eqref{uobsgen} in general is not consistent with the assumption of $u_{obs}<<1$ while $f^{1/2}(u_{obs})$ and thus $\hat \alpha$ become imaginary leading to nonphysical results. For $q\geq -2$, $I_q(u_{obs})$ remains finite as $u_{obs}\rightarrow 0$ and is of the form $I_q(u_{obs})=\xi(q)-u_{obs}\simeq \xi(q)$ but the imaginary nature of $\hat \alpha$ remains. For example, for $q=-1$ we have $I_{-1}(u_{obs})=1-u_{obs}\simeq 1$ which leads to $u_{obs}=\frac{b}{2}$. However, from \eqref{hata4} and \eqref{uobsgen} (see also Table \ref{tabiq}) we find $\hat \alpha_{q=-1} = 2 u_{obs}\left(1-b u_{obs}^{-1}\right)^{1/2}=i \; b$ which is nonphysical. Thus even though lensing can be defined in such a spacetime, it is not consistent with the assumptions imposed above ($u_{obs}<<1$, $\phi_{obs}\simeq 0$) and thus it is beyond the scope of the present analysis. For example, in the context of our assumptions, in a deSitter spacetime ($q=-2$) there can be no lensing effect. However this conclusion is not applicable if a point mass is also present as is the case in the SdS spacetime which is not asymptotically flat. In that case even in the absence of asymptotic flatness, lensing can be well defined provided that $m>>b$\footnote{If $r_0$ gets restored this means $\frac{m}{r_0}>>b\; r_0^{-q}$.}. We will investigate this case in the next section.
\section{Lensing deflection angle in the presence of a point mass and a general fluid}
\label{IV}
The results of the previous section can be easily generalized in the case of simultaneous presence of a point mass and multiple fluids. In this case the metric function $f(r)$ takes the form
\begin{equation}
f(r)=1-\sum_{i} b_i\; r^{-q_i}=1-\sum_{i} b_i\; u^{q_i}
\label{frt2}
\end{equation}
where the sum runs over all the possible power law terms of $f(r)$ corresponding to various fluids or modified gravity. In the possible presence of a point mass in GR we can set $b_1=2m$ and $q_1=1$ allowing also for other terms. In this case, for $q_i\geq 0$, it is easy to show that the total deflection angle is obtained as a superposition of the individual deflection angles obtained from each power law term of $f(r)$ as
\begin{equation}
\hat \alpha_{tot} =\sum_i b_i \; \xi(q_i)\; f(u_{obs,i})^{1/2}\simeq \sum_i b_i\; \xi(q_i)
\label{hatasum}
\end{equation}
In the special case where one of the terms is due to a point mass ($b_1=2m$, $q_1=1$) in the context of GR and in the presence of a single additional power law term (Kiselev metric), the above equation becomes
\begin{equation}
\hat \alpha_{tot} =4m/r_0 + b\; r_0^{-q} \; \xi(q)
\label{hatamass}
\end{equation}
where $q\geq 0$ is assumed and we have temporarily restored the unit impact parameter $r_0$ to make contact with the well known Schwarzschild deflection angle.
The case $q<0$ can also be studied provided that the additional assumption $b\ll m$ is imposed. In this case \eqref{uobsgen} gets generalized as
\begin{equation}
u_{obs} \simeq m I_1(u_{obs}) + \frac{b}{2} I_q(u_{obs})\simeq 2m + \frac{b}{2} I_q(2m)
\label{uobsmass}
\end{equation}
since $m\gg b$. Using now \eqref{uobsmass} in \eqref{hata4} we have the general result
\begin{equation}
\hat \alpha (m,q)=(4m+b\; I_q(2m))\; f(2m)^{1/2}
\simeq (4m+b\; I_q(2m))(1-2m^2-\frac{b}{2}(2m)^{q})
\label{hatamq}
\end{equation}
This is a central result of our analysis and provides the lensing deflection angle in a general perturbed Schwarzschild metric. In the special case $q=-2$ and $b=\frac{\Lambda}{3}$ corresponding to an SdS spacetime we get from eq. \eqref{hatamq}
\begin{equation}
{\hat \alpha_{SdS}}(m,-2)\simeq \frac{4m}{r_0} (1-\frac{2m^2}{r_0^2}-\frac{\Lambda r_0^4}{24m^2})
\label{hatasds}
\end{equation}
where we have restored the unit $r_0$ for comparison with previous results. Clearly eq. \eqref{hatasds} is identical with the well known result of RI \cite{Rindler:2007zz}.
Similarly, for $q=2$ \eqref{hatamq} reduces to
\begin{equation}
\hat \alpha(m,2)=(4m+b\; I_2(2m))(1-2m^2-\frac{b}{2}(2m)^2)\simeq 4m +b\; \xi(2)= \frac{4m}{r_0}+\frac{3\pi b}{4r_0^2}
\label{hatamq2}
\end{equation}
where in the last equality we have restored the unit impact parameter $r_0$. This result is also consistent with the corresponding result of the previous section \eqref{hataq2}. It is therefore clear that eq. \eqref{hatamq} provides a general result that generalizes the corresponding result of RI \cite{Rindler:2007zz} applicable in a wide range of spherically symmetric metrics.
\begin{figure}[h]
\centering
\includegraphics[width = 0.7 \textwidth]{fig3.pdf}
\caption{The photon geodesic trajectories for $m=0.01$ and $b=+0.02$, $b=0$ and $b=-0.02$ with $q=2$. As expected from eq. \eqref{hatamq2}, the deflection angle decreases for $b<0$. }
\label{fig3}
\end{figure}
In order to test the validity of the above analytical results we have compared them with the exact numerical solution for the deflection angle, obtained by solving numerically eq. \eqref{intgeod1} for $u_{obs}$ and then using \eqref{hata2} with $\phi=0$ to obtain $\hat \alpha$ for fixed values of $m$ and $q$. The comparison of the approximate analytic result of \eqref{hataq2} for $m=0.01$, $q=2$ with the corresponding exact numerical result is shown in Fig. \ref{fig2}. The corresponding photon geodesic trajectories for $m=0.01$ and $b=+0.02$, $b=0$ and $b=-0.02$ with $q=2$ are shown in Fig. \ref{fig3}. As expected from eq. \eqref{hatamq2}, the deflection angle decreases for $b<0$.
Eq. \eqref{hatamq} can be used to obtain observational constraints on the parameters $m,b,q$ from cluster Einstein radius lensing data thus constraining the possible presence of exotic fluids and/or modified gravity in cluster dynamics. A method for obtaining such constraints is illustrated in the next section.
\begin{figure}[h]
\centering
\includegraphics[width = 0.75\textwidth]{fig2.pdf}
\caption{A comparison of the exact numerical result for the deflection angle with the approximate analytic result of \eqref{hataq2} for $m=0.01$, $q=2$. The numerical result was obtained by solving numerically eq. \eqref{intgeod1} for $u_{obs}$ and then using \eqref{hata2} with $\phi=0$. For small $b$ and $\hat \alpha$ the agreement is very good. Note that the assumption $m\gg b$ is not needed for $q>0$.}
\label{fig2}
\end{figure}
\section{Observational Constraints on the Metric Parameters}
\label{V}
In order to obtain an order of magnitude estimate of the upper bound of the parameter $b$ of eq. \eqref{hatamass} we consider a sample of clusters \cite{Cutajar:2014gfa} which act as lenses to background galactic sources. In Fig. \ref{fig:figlens} we show the typical lensing diagram where in our analysis we have assumed $\beta \simeq 0$.
Here, $D_L$ is the distance to the lens (cluster) at redshift $z_L$ and $D_S$ is the distance to the background lensed galaxy at redshift $z_{arc}$, while the observed Einstein ring of the source appears at angle $\theta$. The distances to the lens and the source may be approximately obtained from the corresponding redshifts using a cosmographic expansion of the form
\begin{equation}
D(z)=\frac{c\; z}{H_0}+\frac{c(1-q_0)}{2H_0} z^2 +O(z^3)
\label{distz}
\end{equation}
where we have restored the speed of light $c$ (set to 1 in previous sections). In what follows we assume $H_0=70\; km/(sec\cdot Mpc)$ and $q_0 = -0.5$. Therefore, $z_L$ (and thus $D_L$), $z_{arc}$ (and thus $D_S$) and the Einstein angle $\theta$ are measurable. Then, the deflection angle $\hat \alpha$ can be obtained using the measured quantities. In particular, from Fig. \ref{fig:figlens} we have
\begin{equation}
r_0 \simeq D_L \; \theta \label{r0dist}
\end{equation}
\begin{equation}
\hat \alpha \; (D_S-D_L) \simeq \theta \; D_S \label{hatadist}
\end{equation}
Eq. \eqref{hatadist} can be used for the measurement of the deflection angle induced by cluster lenses. Through such a measurement constraints on the metric parameters can be imposed.
For example if the deflection is assumed to be induced by the cluster mass only, eq. \eqref{hatadist} can lead to constraints on the cluster mass $M_{cl}$. In particular, for the cluster A2218, using the entries of Table \ref{tabdata} and restoring $G$ and $c$ in eq. \eqref{hatasch} we have
\begin{equation}
\hat \alpha = \frac{4\; G \; M_{cl}}{c^2 r_0}=\frac{4\; G\; M_{cl}}{c^2 \theta D_L}=\theta \frac{D_S}{D_S-D_L}\simeq 10^{-4}
\label{mfrhata}
\end{equation}
which leads to $M_{cl}\lesssim 5 \times 10^{13} M_\odot$.
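The chain of estimates in Eqs. \eqref{distz}--\eqref{mfrhata} can be sketched numerically. The following Python snippet is our own illustration (constants and rounding are ours, and the cosmographic expansion is only approximate at $z_{arc}\simeq 0.7$), so its output agrees with Table \ref{tabdata} only at the order-of-magnitude level. It uses the first A2218 entry of the table:

```python
import math

# Cosmographic parameters assumed in the text
H0 = 70.0              # km s^-1 Mpc^-1
q0 = -0.5
C_KMS = 299792.458     # speed of light in km/s

def cosmographic_distance(z):
    """D(z) of Eq. (distz), truncated at O(z^2), in Mpc."""
    return (C_KMS / H0) * z * (1.0 + 0.5 * (1.0 - q0) * z)

ARCSEC = math.pi / (180.0 * 3600.0)     # arcsec -> rad

# First A2218 entry of the lensing table: theta, z_L, z_arc
theta = 20.8 * ARCSEC
D_L = cosmographic_distance(0.175)
D_S = cosmographic_distance(0.702)

r0 = theta * D_L                        # impact parameter (Mpc)
alpha_hat = theta * D_S / (D_S - D_L)   # deflection angle (rad)

# Pure-Schwarzschild interpretation: alpha_hat = 4 G M / (c^2 r0)
G = 6.674e-11                           # m^3 kg^-1 s^-2
C_MS = 2.998e8                          # m/s
MPC = 3.086e22                          # m
MSUN = 1.989e30                         # kg
M_cl = alpha_hat * C_MS**2 * (r0 * MPC) / (4.0 * G) / MSUN
print(f"r0 = {r0:.3f} Mpc, alpha_hat = {alpha_hat / ARCSEC:.1f} arcsec, "
      f"M_cl = {M_cl:.1e} Msun")
```

Running the sketch gives $M_{cl}$ of order $5\times 10^{13} M_\odot$, consistent with the bound quoted above.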
\begin{figure}[h]
\centering
\includegraphics[width = 0.65 \textwidth]{fig4.pdf}
\caption{The geometry of a lensing event and the definition of the relevant angles and distances.}
\label{fig:figlens}
\end{figure}
Similarly, if we assume that a $q$-exotic fluid is solely responsible for the lensing we have
\begin{equation}
\hat \alpha = b_q I_q r_0^{-q} \simeq \theta \frac{D_S}{D_S-D_L}\simeq 10^{-4}
\label{besthata}
\end{equation}
which leads to an order of magnitude estimate $b_q r_0^{-q} \lesssim 10^{-4}$ or $b_q \lesssim 10^{-4}\, (\theta D_L)^q\simeq 10^{-4-q}\; Mpc^q$.
In a similar way we may obtain order of magnitude constraints from all clusters shown in Table \ref{tabdata}. Such constraints are shown in the last column of Table \ref{tabdata}.
\begingroup
\setlength{\tabcolsep}{8pt}
\begin{table}[ht]
\begin{tabular}{c c c c c c c}
\hline
$\text{Cluster}$ & \shortstack{$\theta (arcsec)$} & \shortstack{$z_L$} & \shortstack{$z_{arc}$} & \shortstack{$\hat{a}(arcsec)$} & \shortstack{$r_0\simeq\theta D_L(Mpc)$} & \shortstack{$b_q r_0^{-q}(\cdot 10^{-4})$} \\ [1ex] \hline\hline
$\text{A}370$ & $39.0$ & $0.375$ & $0.725$ & $80.4$ & $0.304$ & $3.9$\\[1ex]
$\text{A}370$ & $45.2$ & $0.373$ & $1.3$ & $63.9$ & $0.350$ & $3.1$\\ [1ex]
$\text{A}963$ & $18.7$ & $0.206$ & $0.711$ & $26.8$ & $0.080$ & $1.3$ \\[1ex]
$\text{A}1689$ & $45.0$ & $0.183$ & $1.000$ & $55.7$ &$0.171$ & $2.7$ \\[1ex]
$\text{A}2163$ & $15.6$ & $0.201$ & $0.728$ & $22.7$ & $0.065$ & $1.1$ \\[1ex]
$\text{A}2218$ & $20.8$ & $0.175$ & $0.702$ & $26.7$ & $0.076$ & $1.3$ \\[1ex]
$\text{A}2218$ & $23.7$ & $0.171$ & $0.515$ & $35.1$ & $0.084$ & $1.7$ \\[1ex]
$\text{A}2218$ & $73.2$ & $0.171$ & $1.034$ & $88.7$ & $0.260$ & $4.3$ \\[1ex]
$\text{A}2390$ & $37.4$ & $0.228$ & $0.913$ & $49.5$ & $0.177$ & $2.4$ \\[1ex]
$\text{MS}0440$ & $21.7$ & $0.197$ & $0.530$ & $35.1$ & $0.089$ & $1.7$ \\ [1ex]
$\text{MS}1358$ & $17.7$ & $0.329$ & $4.92$ & $18.6$ & $0.121$ & $0.9$ \\[1ex]
$\text{PKS}0745$ & $21.4$ & $0.103$ & $0.433$ & $28.9$ & $0.046$ & $1.4$ \\ [1ex] \hline
\end{tabular}
\caption{\label{tabdata} Order of magnitude constraints on the metric parameters $b_q$ from cluster lensing data.}
\end{table}
\endgroup
\section{Conclusion-Discussion}
\label{VI}
We have derived an analytic expression that provides the lensing deflection angle in a generalized spherically symmetric metric. This result extends previous special cases of spherically symmetric metrics including the Schwarzschild, SdS and spherical Rindler metrics. Our results have been tested using exact numerical solutions and reduce to previously known results in special cases of perturbed Schwarzschild metrics. Using the Einstein radii around clusters we have imposed order of magnitude constraints on the new parameters of the metric. Our results are valid to first post-Newtonian order but the method may be generalized to higher nonlinear orders by including more terms in the expansion of the integral \eqref{int1}.
The exotic fluids we have considered could be manifestations of either dark matter or dark energy in clusters of galaxies. Therefore, an interesting extension of the present analysis would be to compare the derived constraints with dark energy constraints obtained using probes of the cosmic expansion rate like standard rulers (e.g. the CMB sound horizon) or standard candles (e.g. SnIa).
The generalized spherically symmetric metric considered here could emerge as vacuum solutions in modified gravity models like the Grumiller metric \cite{Grumiller:2010bz} or Weyl vacuum \cite{Mannheim:1988dj}. In that case the imposed constraints on the metric parameters may be translated to constraints of the corresponding modified gravity Lagrangian parameters. This would also be an interesting extension of the present analysis. Similarly, a generalization of the axisymmetric Kerr metric with dark fluid power law terms and the derivation of the corresponding lensing deflection angle would constitute an interesting extension of the present analysis \cite{Ghosh:2022mka,Molla:2021sgw,Bozza:2002af}.
Finally, the consideration of other general spherically symmetric spacetimes \cite{Mantica:2022phm,Gurtug:2020kwd} and the derivation of the corresponding deflection angle in terms of the metric parameters could also lead to a generalization of the present analysis.
\section*{Acknowledgments}
This article is based upon work from COST Action CA21136 - Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse), supported by COST (European Cooperation in Science and Technology). This project was also supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.), under the "First call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research
equipment Grant" (Project Number: 789).
\section*{Numerical analysis files for Figs \ref{fig3}, \ref{fig2} }
The Mathematica (v12) files used for the production of the figures and for derivation of the main results of the analysis can be found at \href{https://github.com/leandros11/lensing1}{this Github repository under the MIT license.}
\section{Introduction}\label{introduction}
\IEEEPARstart{I}{mage} super-resolution (SR), especially single-image super-resolution (SISR), is one kind of image transformation task and has received increasing attention in academia and industry. As shown in Fig.~\ref{SISR}, SISR aims to reconstruct a super-resolution (SR) image from its degraded low-resolution (LR) one. It is widely used in various computer vision applications, including security and surveillance imaging, medical image reconstruction, video enhancement, and image segmentation.
Many SISR methods have long been studied, such as bicubic interpolation and Lanczos resampling~\cite{duchon1979lan}, which are based on interpolation. However, SISR is an inherently ill-posed problem, and there always exist multiple HR images corresponding to one original LR image. To solve this issue, some numerical methods utilize prior information to restrict the solution space of the reconstruction, such as edge-based methods~\cite{sun2008grad} and image statistics-based methods~\cite{kim2010sparse}. Meanwhile, there are some widely used learning methods, such as neighbor embedding methods~\cite{chang2004super} and sparse coding methods~\cite{yang2010image}, which assume that there exists a transformation between LR and HR patches.
\begin{figure}
\begin{center}
\includegraphics[width=0.85\linewidth]{images/SISR.png}
\end{center}
\caption{SISR aims to reconstruct a super-resolution (SR) image from its degraded low-resolution (LR) one.}
\label{SISR}
\end{figure}
Recently, deep learning (DL)~\cite{lecun2015deep} has demonstrated better performance than traditional machine learning models in many artificial intelligence fields, including computer vision~\cite{krizhevsky2012imagenet} and natural language processing~\cite{collobert2008unified}. With the rapid development of DL techniques, numerous DL-based methods have been proposed for SISR, continuously pushing the State-Of-The-Art (SOTA) forward. Like other image transformation tasks, the SISR task can generally be divided into three steps: feature extraction and representation, non-linear mapping, and image reconstruction~\cite{dong2015image}. In traditional numerical models, it is time-consuming and inefficient to design an algorithm satisfying all these processes. On the contrary, DL can transfer the SISR task to an almost end-to-end framework incorporating all three processes, which can greatly reduce manual effort and computational expense~\cite{dong2011image}. Additionally, given the ill-posed nature of SISR, which can lead to unstable and slow convergence of the results, DL can alleviate this issue through efficient network architecture and loss function design. Moreover, modern GPUs enable deeper and more complex DL models to train fast, which show greater representation power than traditional numerical models.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.88\linewidth]{images/Framework.png}
\end{center}
\caption{The content and taxonomy of this survey. In this survey, we divide the DL-based SISR methods into four categories, which are classified according to their specific targets. Among them, the dark gray blocks are the focus methods in this survey.}
\label{Survey}
\end{figure*}
It is well known that DL-based methods can be divided into supervised and unsupervised methods. This is the simplest classification criterion, but its scope is too broad and not clear. As a result, many technically unrelated methods may be classified into the same type, while methods with similar strategies may be classified into completely different types. Different from previous SISR surveys~\cite{yang2019deep,wang2020deep} that use supervision as the classification criterion or introduce the methods in a pure literature way, in this survey, we attempt to give a comprehensive overview of DL-based SISR methods and categorize them according to their specific targets. In Fig.~\ref{Survey}, we show the content and taxonomy of this survey. Specifically, we divide the DL-based SISR methods into four categories: reconstruction efficiency methods, reconstruction accuracy methods, perceptual quality methods, and further improvement methods. This target-based survey has a clear context, making it convenient for readers to consult. In this survey, we first introduce the problem definition, research background, and significance of SISR. Then, we introduce some related works, including benchmark datasets, upsample methods, optimization objectives, and assessment methods. After that, we provide a detailed investigation of SISR methods and provide their reconstruction results. Finally, we discuss some issues that still exist in SISR and provide some new trends and future directions. Overall, the main contributions of this survey are as follows:
\par (1). We give a thorough overview of DL-based SISR methods according to their targets. This is a new perspective that gives the survey a clear context, making it convenient for readers to consult.
\par (2). This survey covers more than 100 SR methods and introduces a series of new tasks and domain-specific applications extended by SISR in recent years.
\par (3). We provide a detailed comparison of reconstruction results, including classic, latest, and SOTA SISR methods, to help readers intuitively know their performance.
\par (4). We discuss some issues that still exist in SISR and summarize some new trends and future directions.
\section{Problem Setting and Related Works}
\subsection{Problem Definition}
Image super-resolution is a classic technique to improve the resolution of an imaging system, which can be classified into single-image super-resolution (SISR) and multi-image super-resolution (MISR) according to the number of input LR images. Among them, MISR has gradually developed into video super-resolution (VSR). Compared with MISR/VSR, SISR is much more challenging, since MISR/VSR has extra information for reference while SISR only has the information of a single input image for reconstructing the missing image features.
Define the low-resolution image as $I_x\in \mathbb{R}^{h \times w}$ and the ground-truth high-resolution image as $I_y\in \mathbb{R}^{H \times W}$, where $H>h$ and $W>w$.
Typically, in a SISR framework, the LR image $I_x$ is modeled as $I_x=\mathcal{D}(I_y;\theta_\mathcal{D})$, where $\emph{D}$ is a degradation map $\mathbb{R}^{H \times W}\to \mathbb{R}^{h \times w}$ and $\theta_D$ denotes the degradation factor. In most cases, the degradation process is unknown. Therefore, researchers are trying to model it. The most popular degradation mode is:
\begin{equation} \label{degration}
\mathcal{D}(I_y;\theta_\mathcal{D})=(I_y\otimes \kappa)\downarrow_s + n,
\end{equation}
where $I_y\otimes \kappa$ represents the convolution between the blur kernel $\kappa$ and the HR image $I_y$, $\downarrow_s$ is a subsequent downsampling operation with scale factor $s$, and $n$ is usually the additive white Gaussian noise (AWGN) with standard deviation $\sigma$. In the SISR task, we need to recover a SR image $I_{SR}$ from the LR image $I_x$. Therefore, the task can be formulated as $I_{SR}=\mathcal{F}(I_x;\theta_\mathcal{F})$, where $\mathcal{F}$ is the SR algorithm and $\theta_\mathcal{F}$ is the parameter set of the SR process.
Recently, researchers have converted SISR into an end-to-end learning task, relying on massive training data and effective loss functions. Meanwhile, more and more DL-based models have been proposed due to the powerful representation ability of CNNs and their convenience in both forward and backward computing. Therefore, the SISR task can be transformed into the following optimization goal:
\begin{equation}
\hat{\theta}_\mathcal{F}=\mathop{\arg\min}_{\theta_\mathcal{F}} \mathcal{L}(I_{SR},I_y)+\lambda\Phi(\theta),
\end{equation}
where $\mathcal{L}$ denotes the loss function between the generated SR image $I_{SR}$ and the HR image $I_y$, $\Phi(\theta)$ denotes the regularization term, and $\lambda$ is the trade-off parameter that is used to control the percentage of the regularization term.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/Train.png}
\end{center}
\caption{The training process of data-driven based deep neural networks.}
\label{Train}
\end{figure}
\subsection{Benchmarks Datasets}
Data is always essential for data-driven models, especially the DL-based SISR models, to achieve promising reconstruction performance (Fig.~\ref{Train}). Nowadays, industry and academia have launched several available datasets for SISR.
\begin{table*}[t]
\centering
\setlength{\tabcolsep}{4.5mm}
\renewcommand\arraystretch{1.1}
\caption{Benchmarks datasets for single-image super-resolution (SISR).}
\begin{tabular}{|c|c|c|c|c|}
\hline
\rowcolor{gray!70}\textbf{Name} & \textbf{Usage} & \textbf{Amount} & \textbf{Format} & \textbf{Description} \\
\hline
General-100~\cite{dong2016accelerating} & Train & 100 & BMP & Common images with clear edges but fewer smooth regions \\
\hline
\rowcolor{gray!30}T91~\cite{yang2010image} & Train & 91 & PNG & Common Images \\
\hline
WED~\cite{ma2016waterloo} & Train & 4744 & MAT & Common images \\
\hline
\rowcolor{gray!30}Flickr2K~\cite{timofte2017ntire} & Train & 2650 & PNG & 2K images from Flickr \\
\hline
DIV2K~\cite{agustsson2017ntire} & Train/Val & 1000 & PNG & High-quality dataset for CVPR NTIRE competition \\
\hline
\rowcolor{gray!30}BSDS300~\cite{martin2001database} & Train/Val & 300 & JPG & Common images \\
\hline
BSDS500~\cite{arbelaez2011} & Train/Val & 500 & JPG & Common images \\
\hline
\rowcolor{gray!30}RealSR~\cite{cai2019toward} & Train/Val & 100 & PNG & 100 real world low and high resolution image pairs \\
\hline
OutdoorScene~\cite{wang2018recovering} & Train/Val & 10624 & PNG & Images of outdoor scences \\
\hline
\rowcolor{gray!30}City100~\cite{chen2019camera} & Train/Test & 100 & RAW & Common images \\
\hline
Flickr1024~\cite{wang2019flickr1024} & Train/Test & 1024 & PNG & Stereo images used for Stereo SR \\
\hline
\rowcolor{gray!30}SR-RAW~\cite{zhang2019image} & Train/Test & 7*500 & JPG/ARW & Raw images produced by real world computational zoom \\
\hline
PIPAL~\cite{jinjin2020pipal} & Test & 200 & PNG & Perceptual image quality assessment dataset \\
\hline
\rowcolor{gray!30}Set5~\cite{bevilacqua2012low} & Test & 5 & PNG & Common images, only 5 images \\
\hline
Set14~\cite{zeyde2010single} & Test & 14 & PNG & Common images, only 14 images \\
\hline
\rowcolor{gray!30}BSD100~\cite{martin2001database} & Test & 100 & JPG & A subset of BSDS500 for testing \\
\hline
Urban100~\cite{huang2015single} & Test & 100 & PNG & Images of real world structures \\
\hline
\rowcolor{gray!30}Manga109~\cite{fujimoto2016manga109} & Test & 109 & PNG & Japanese manga\\
\hline
L20~\cite{timofte2016seven} & Test & 20 & PNG & Common images, very high-resolution \\
\hline
\rowcolor{gray!30}PIRM~\cite{blau20182018} & Test & 200 & PNG & Common images, datasets for ECCV PIRM competition \\
\hline
\end{tabular}
\label{dataset}
\end{table*}
\subsubsection{\textbf{Training and Test Datasets}}
Recently, many datasets for the SISR task have been proposed, including BSDS300~\cite{martin2001database}, DIV2K~\cite{agustsson2017ntire}, and Flickr2K~\cite{timofte2017ntire}. Meanwhile, there are also many test datasets that can be used to effectively evaluate the performance of the models, such as Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2010single}, Urban100~\cite{huang2015single}, and Manga109~\cite{fujimoto2016manga109}. In Table~\ref{dataset}, we list a series of commonly used datasets and indicate their detailed attributes.
Among these datasets, DIV2K~\cite{agustsson2017ntire} is the most widely used dataset for model training, which is a high-quality dataset that contains 800 training images, 100 validation images, and 100 test images. Flickr2K is a large extended dataset, which contains 2650 2K images from Flickr. RealSR~\cite{cai2019toward} is the first truly collected SISR dataset with paired LR and HR images. In addition to the listed datasets, some datasets widely used in other computer vision tasks are also used as supplementary training datasets for SISR, such as ImageNet~\cite{deng2009imagenet} and CelebA~\cite{liu2015deep}. In addition, combining multiple datasets (e.g., DF2K) for training to further improve the model performance has also been widely adopted.
\subsubsection{\textbf{Degradation Mode}}\label{DD}
Due to the particularity of the SISR task, it is difficult to construct a large-scale paired real SR dataset. Therefore, researchers often apply degradation patterns on the aforementioned datasets to obtain corresponding degraded images to construct paired datasets. However, images in the real world are easily disturbed by various factors (e.g., sensor noise, motion blur, and compression artifacts), resulting in the captured images being more complex than the simulated images. To alleviate these problems and train more effective and general SISR models, some works model the degradation mode as a combination of several operations (Eq.~\ref{degration}). Based on this degradation formula, the three most widely used degradation modes have been proposed: BI, BD, and DN. Among them, \textbf{BI} is the most widely used degradation mode to simulate LR images, which is essentially a bicubic downsampling operation. For \textbf{BD}, the HR images are blurred by a Gaussian kernel of size $7 \times 7$ with standard deviation $1.6$ and then downsampled with the scaling factor $\times 3$. To obtain \textbf{DN} mode LR images, bicubic downsampling is performed on the HR image with scaling factor $\times 3$, and then Gaussian noise with noise level $=30$ is added to the image.
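As a concrete sketch of the degradation model of Eq.~\eqref{degration}, the following minimal NumPy implementation (our own illustration, not taken from any specific SISR codebase; it uses a `valid'-mode convolution for simplicity) applies a Gaussian blur, strided downsampling, and AWGN. A BD-like setting corresponds to a $7\times 7$ kernel with standard deviation $1.6$ and $s=3$:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(hr, kernel, s, noise_sigma, rng):
    """D(I_y) = (I_y conv kappa) downsampled by s, plus AWGN."""
    kh, kw = kernel.shape
    H, W = hr.shape
    blurred = np.empty((H - kh + 1, W - kw + 1))
    for i in range(blurred.shape[0]):
        for j in range(blurred.shape[1]):
            blurred[i, j] = np.sum(hr[i:i + kh, j:j + kw] * kernel)
    lr = blurred[::s, ::s]                 # subsampling with scale factor s
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

rng = np.random.default_rng(0)
hr = np.ones((33, 33))                                       # flat test image
lr_bd = degrade(hr, gaussian_kernel(7, 1.6), 3, 0.0, rng)    # BD-like mode
```

With a flat input image the normalized kernel leaves the intensities unchanged, which provides a quick sanity check of the blur step.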
\subsection{Upsampling Methods}
The purpose of SISR is to enlarge a smaller size image into a larger one and to keep it as accurate as possible. Therefore, enlargement operation, also called upsampling, is an important step in SISR. The current upsampling mechanisms can be divided into four types: pre-upsampling SR, post-upsampling SR, progressive upsampling SR, and iterative up-and-down sampling SR. In this section, we will talk about several kinds of upsampling methods that support these upsampling mechanisms.
\subsubsection{\textbf{Interpolation Methods}}
Interpolation is the most widely used upsampling method. The current mainstream interpolation methods include Nearest-neighbor Interpolation, Bilinear Interpolation, and Bicubic Interpolation. Being highly interpretable and easy to implement, these methods are still widely used today. Among them, \textbf{Nearest-neighbor Interpolation} is a simple and intuitive algorithm that selects the nearest pixel value for each position to be interpolated, which has a fast execution time but has difficulty producing high-quality results. \textbf{Bilinear Interpolation} sequentially performs linear interpolation operations on the two axes of the image. This method can obtain better results than nearest-neighbor interpolation while maintaining a relatively fast speed. \textbf{Bicubic Interpolation} performs cubic interpolation on each of the two axes. Compared with Bilinear Interpolation, the results of Bicubic Interpolation are smoother with fewer artifacts, but slower than the other interpolation methods. Interpolation is also the mainstream method for constructing SISR paired datasets, and is widely used in the data pre-processing of CNN-based SISR models.
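As a minimal illustration of the simplest of these methods, nearest-neighbor upsampling can be written in a few lines of NumPy (our own sketch; we use the floor-index convention for mapping output positions back to source pixels):

```python
import numpy as np

def nearest_upsample(img, scale):
    """Nearest-neighbor interpolation: each output pixel copies the
    (floor-mapped) nearest input pixel."""
    h, w = img.shape[:2]
    rows = np.arange(h * scale) // scale   # output row -> source row
    cols = np.arange(w * scale) // scale   # output col -> source col
    return img[rows][:, cols]

x = np.array([[1, 2],
              [3, 4]])
y = nearest_upsample(x, 2)   # each input pixel becomes a 2x2 block
```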
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{images/Dconv.png}
\end{center}
\caption{Two kinds of transposed convolutional layers.}
\label{Dconv}
\end{figure}
\subsubsection{\textbf{Transposed Convolutional Layers}}
As shown in Fig.~\ref{Dconv}, researchers usually consider two kinds of transposed convolution operations: one adds padding around the input matrix and then applies the convolution operation, while the other adds padding between the values of the input matrix followed by the direct convolution operation. The latter is also called fractionally strided convolution, since it works like doing convolution with a stride less than one. In the transposed convolutional layer, the upsampling level is controlled by the size of the padding, and it is essentially the opposite of the operation of the normal convolutional layer. The transposed convolutional layer was first introduced to SISR in FSRCNN~\cite{dong2016accelerating} and is widely used in DL-based SISR models.
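The second variant in Fig.~\ref{Dconv} (fractionally strided convolution) can be sketched in plain NumPy as zero insertion followed by an ordinary convolution; this is our own illustration of the principle, not an optimized framework implementation:

```python
import numpy as np

def transposed_conv2d(x, k, stride):
    """Fractionally strided convolution: insert (stride-1) zeros between
    input values, then run a 'full' correlation with the kernel."""
    h, w = x.shape
    z = np.zeros((h + (h - 1) * (stride - 1), w + (w - 1) * (stride - 1)))
    z[::stride, ::stride] = x                           # zero insertion
    kh, kw = k.shape
    zp = np.pad(z, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    out = np.empty((zp.shape[0] - kh + 1, zp.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(zp[i:i + kh, j:j + kw] * k)
    return out

x = np.ones((2, 2))
y = transposed_conv2d(x, np.ones((3, 3)), stride=2)
```

Padding by $k-1$ before the plain correlation makes the output size $(h-1)s+k$ (here $5\times 5$), which matches a stride-$s$ transposed convolution with zero padding.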
\subsubsection{\textbf{Sub-pixel Convolutional Layer}}
In ESPCN~\cite{shi2016real}, Shi \emph{et al.} proposed an efficient sub-pixel convolutional layer. Instead of directly increasing the resolution of the LR feature maps, the sub-pixel layer first increases their dimension, i.e., the number of LR feature maps, and then a periodic shuffling operator is used to rearrange the points in the expanded feature maps to obtain the HR output (Fig.~\ref{SBPixel}). In detail, the formulation of the sub-pixel convolutional layer can be defined as follows:
\begin{equation}
I_{SR}=f^L(I_x)=\mathcal{PS}(W_L*f^{L-1}(I_x)+b_L),
\end{equation}
where $\mathcal{PS}$ denotes the periodic shuffling operator, which transforms a $h\times w\times C\cdot r^2$ tensor into a tensor of shape $rh\times rw\times C$, where $rh \times rw$ is explicitly the size of the HR image and $C$ is the number of operating channels. In addition, the convolutional filter $W_L$ has the shape $n_{L-1}\times r^2C\times K_L\times K_L$, where $n_{L-1}$ is the number of feature maps in layer $L-1$. Compared with the transposed convolutional layer, the sub-pixel convolutional layer shows better efficiency, so it is also widely used in DL-based SISR models.
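A minimal NumPy sketch of the periodic shuffling operator $\mathcal{PS}$ is given below (our own illustration; the channel ordering, grouping the $r^2$ factors as $(r, r, C)$, is one common convention and may differ from a specific framework's layout):

```python
import numpy as np

def pixel_shuffle(x, r):
    """PS: rearrange an (h, w, C*r^2) tensor into (r*h, r*w, C)."""
    h, w, cr2 = x.shape
    C = cr2 // (r * r)
    x = x.reshape(h, w, r, r, C)     # split the channel axis into (r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)   # interleave: (h, r, w, r, C)
    return x.reshape(h * r, w * r, C)

x = np.arange(4).reshape(1, 1, 4)    # h = w = 1, C = 1, r = 2
y = pixel_shuffle(x, 2)              # the 4 channels land in one 2x2 block
```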
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/SBPixel.png}
\end{center}
\caption{Principle of the sub-pixel convolutional layer.}
\label{SBPixel}
\end{figure}
\subsection{Optimization Objective}
Evaluation and parameter up-gradation are the important steps in all DL-based models. In this section, we will introduce the necessary procedures during the model training.
\subsubsection{\textbf{Learning Strategy}}\label{Content}
According to different strategies, the DL-based SISR models can be mainly divided into supervised learning methods and unsupervised learning methods.
\textbf{Supervised Learning}: In supervised learning SISR, researchers compute the reconstruction error between the ground-truth image $I_y$ and the reconstructed image $I_{SR}$:
\begin{equation}
\hat{\theta}_\mathcal{F}=\mathop{\arg\min}_{\theta_\mathcal{F}} \mathcal{L}(I_{SR},I_y).
\end{equation}
Alternatively, researchers may sometimes search for a mapping $\Phi$, such as a pre-trained neural network, to transform the images or image feature maps to some other space and then compute the error:
\begin{equation}
\hat{\theta}_\mathcal{F}=\mathop{\arg\min}_{\theta_\mathcal{F}} \mathcal{L}(\Phi(I_{SR}),\Phi(I_y)).
\end{equation}
Among them, $\mathcal{L}$ is the loss function which is used to minimize the gap between the reconstructed image and ground-truth image. According to different loss functions, the model can achieve different performances. Therefore, an effective loss function is also crucial for SISR.
\textbf{Unsupervised Learning}: In unsupervised learning SISR, the way of evaluation and parameter up-gradation changes with different unsupervised learning algorithms. For example, ZSSR~\cite{shocher2018zero} uses the test image and its downscaled versions, together with data augmentation methods, to build the ``training dataset'' and then applies the loss function to optimize the model. In CinCGAN~\cite{yuan2018unsupervised}, a model consisting of two CycleGANs~\cite{zhu2017unpaired} is proposed, where parameters are upgraded by jointly optimizing the generator-adversarial loss, the cycle consistency loss, the identity loss, and the total variation loss in each cycle.
\subsubsection{\textbf{Loss Function}}
In the SISR task, the loss function is used to guide the iterative optimization process of the model by computing some kind of error. Meanwhile, compared with a single loss function, researchers have found that a combination of multiple loss functions can better capture the various aspects of image restoration. In this section, we will briefly introduce several commonly used loss functions.
\textbf{Pixel Loss}: Pixel loss is the simplest and most popular type of loss function in SISR, which aims to measure the difference between two images on a per-pixel basis so that the two images can converge as closely as possible. It mainly includes the L1 loss, the Mean Square Error (MSE) loss, and the Charbonnier loss (a differentiable variant of the L1 loss):
\begin{equation}
\mathcal{L}_{L1}(I_{SR}, I_{y}) = \frac{1}{hwc} \sum_{i,j,k} \left | I_{SR}^{i,j,k} - I_{y}^{i,j,k} \right |,
\end{equation}
\begin{equation}
\mathcal{L}_{MSE}(I_{SR}, I_{y}) = \frac{1}{hwc} \sum_{i,j,k} (I_{SR}^{i,j,k} - I_{y}^{i,j,k})^{2},
\end{equation}
\begin{equation}
\mathcal{L}_{Char}(I_{SR}, I_{y}) = \frac{1}{hwc} \sum_{i,j,k} \sqrt{(I_{SR}^{i,j,k} - I_{y}^{i,j,k})^{2} + \epsilon^{2}},
\end{equation}
where $h$, $w$, and $c$ are the height, width, and number of channels of the image, and $\epsilon$ is a numerical stability constant, usually set to $10^{-3}$. Since most mainstream image evaluation indicators are highly correlated with pixel-by-pixel differences, pixel loss is still widely used. However, the images reconstructed by this type of loss function usually lack high-frequency details, so it is difficult to obtain excellent visual effects.
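The three pixel losses above map directly to a few lines of NumPy (our own minimal sketch, operating on arbitrary $(h, w, c)$ arrays):

```python
import numpy as np

def l1_loss(sr, hr):
    # mean absolute per-pixel difference
    return np.abs(sr - hr).mean()

def mse_loss(sr, hr):
    # mean squared per-pixel difference
    return ((sr - hr) ** 2).mean()

def charbonnier_loss(sr, hr, eps=1e-3):
    # differentiable variant of L1; eps is the stability constant
    return np.sqrt((sr - hr) ** 2 + eps ** 2).mean()

sr = np.zeros((4, 4, 3))
hr = np.full((4, 4, 3), 0.5)
```

For a uniform difference of $0.5$ the three losses give $0.5$, $0.25$, and a value just above $0.5$ respectively, illustrating that Charbonnier behaves like a smoothed L1.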
\textbf{Content Loss}: Content loss is also called perceptual loss, which uses a pre-trained classification network to measure the semantic difference between images, and can be further expressed as the Euclidean distance between the high-level representations of these two images:
\begin{equation}
\mathcal{L}_{Cont}(I_{SR}, I_{y}, \phi) = \frac{1}{h_{l}w_{l}c_{l}} \sqrt{\sum_{i,j,k} (\phi^{i,j,k}_{(l)} (I_{SR}) - \phi^{i,j,k}_{(l)} (I_{y}))^{2}},
\end{equation}
where $\phi$ represents the pre-trained classification network and $\phi_{(l)}(\cdot)$ represents the high-level representation extracted from the $l$th layer of the network. $h_{l}$, $w_{l}$, and $c_{l}$ are the height, width, and number of channels of the feature map in the $l$th layer, respectively. With this method, the visual effects of these two images can be kept as consistent as possible. Among them, VGG~\cite{simonyan2014very} and ResNet~\cite{ledig2017photo} are the most commonly used pre-trained classification networks.
\textbf{Adversarial Loss}: In order to make the reconstructed SR image more realistic, Generative Adversarial Networks (GANs~\cite{goodfellow2014generative}) have been proposed and introduced into various computer vision tasks. Specifically, GAN is composed of a generator and a discriminator. The generator is responsible for generating fake samples, and the discriminator is used to determine the authenticity of the generated samples. For example, the discriminative loss function based on cross-entropy is proposed by SRGAN~\cite{ledig2017photo}:
\begin{equation}
\mathcal{L}_{Adversarial}(I_{x}, G, D) = \sum_{n=1}^{N} -\log D(G(I_{x})),
\end{equation}
where $G(I_{x})$ is the reconstructed SR image, and $G$ and $D$ represent the Generator and the Discriminator, respectively.
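A minimal sketch of this generator-side adversarial loss, assuming the discriminator outputs probabilities in $(0, 1)$ for each generated image (the clipping constant is our addition for numerical safety; `d_scores` is a hypothetical stand-in for $D(G(I_x))$ over a batch):

```python
import numpy as np

def generator_adversarial_loss(d_scores):
    """Non-saturating generator loss: sum of -log D(G(x)) over the batch.

    d_scores: array of discriminator probabilities that each generated
    SR image is real. The loss is zero when the discriminator is fully
    fooled (all probabilities equal 1) and grows as D rejects the fakes.
    """
    d_scores = np.clip(d_scores, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(np.log(d_scores))
```

Note that the loss decreases as the generator gets better at fooling the discriminator, which is what drives $G$ toward producing realistic outputs.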
\textbf{Prior Loss}: Apart from the above loss functions, some prior knowledge can also be introduced into SISR models to participate in high-quality image reconstruction, such as the sparse prior, gradient prior, and edge prior. Among them, the gradient prior loss and edge prior loss are the most widely used prior loss functions, which are defined as follows:
\begin{equation}
\small
\mathcal{L}_{TV}(I_{SR}) = \frac{1}{hwc} \sum_{i,j,k}\sqrt{(I_{SR}^{i,j+1,k}-I_{SR}^{i,j,k})^{2} + (I_{SR}^{i+1,j,k}-I_{SR}^{i,j,k})^{2}},
\end{equation}
\begin{equation}
\mathcal{L}_{Edge}(I_{SR}, I_{y}, E) = \frac{1}{hwc} \sum_{i,j,k} \left | E(I_{SR}^{i,j,k}) - E(I_{y}^{i,j,k}) \right |,
\end{equation}
where $E$ is the image edge detector, and $E(I_{SR}^{i,j,k})$ and $E(I_{y}^{i,j,k})$ are the image edges extracted by the detector. The purpose of the prior loss is to optimize specific information of the image toward the expected target so that the model can converge faster and the reconstructed image contains more texture details.
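The two prior losses can be sketched as follows. The edge detector is passed in as a hypothetical callable (e.g., a Sobel filter), and the total-variation term is computed on the valid overlap of the horizontal and vertical difference maps, matching the gradient prior loss above:

```python
import numpy as np

def tv_loss(sr):
    """Total-variation (gradient) prior on an (h, w, c) SR image."""
    dh = sr[:, 1:, :] - sr[:, :-1, :]   # horizontal differences
    dv = sr[1:, :, :] - sr[:-1, :, :]   # vertical differences
    # crop both maps to a common region so the two terms align
    return np.mean(np.sqrt(dh[:-1, :, :] ** 2 + dv[:, :-1, :] ** 2))

def edge_loss(sr, hr, detector):
    """L1 distance between edge maps extracted by `detector`."""
    return np.mean(np.abs(detector(sr) - detector(hr)))
```

A perfectly flat image has zero total variation, so minimizing this term alone favors smoothness; in practice it is combined with a fidelity loss.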
\subsection{Assessment Methods}
Image quality assessment (IQA) methods can be generally divided into objective methods and subjective methods. Objective methods commonly use a specific formulation to compute the results; they are simple and fair and have thus become the mainstream assessment methods in SISR. However, they can only reflect the recovery of image pixels from a numerical point of view and struggle to accurately measure the true visual quality of the image. In contrast, subjective methods are based on human judgments and are better suited to evaluating the perceptual quality of the image. Based on the pros and cons of these two types of methods, several assessment methods are briefly introduced in the following with respect to image reconstruction accuracy, image perceptual quality, and reconstruction efficiency.
\subsubsection{\textbf{Image Reconstruction Accuracy}}
The assessment methods applied to evaluate image reconstruction accuracy are also called \emph{Distortion measures}, which are full-reference. Specifically, given a distorted image $\hat{x}$ and a ground-truth reference image $x$, full-reference distortion quantifies the quality of $\hat{x}$ by measuring its discrepancy to $x$~\cite{Blau_2018_CVPR} using different algorithms.
\textbf{Peak Signal-to-Noise Ratio (PSNR)}: PSNR is the most widely used IQA method in the SISR field, which can be easily defined via the mean squared error (MSE) between the ground truth image $I_y\in \mathbb{R}^{H \times W}$ and the reconstructed image $I_{SR}\in \mathbb{R}^{H \times W}$:
\begin{equation}
MSE=\frac{1}{HW}\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}(I_y(i,j)-I_{SR}(i,j))^2,
\end{equation}
\begin{equation}
PSNR=10 \cdot \log_{10}(\frac{MAX^2}{MSE}),
\end{equation}
where MAX is the maximum possible pixel value of the image. Since PSNR is highly related to MSE, a model trained with the MSE loss is expected to have high PSNR scores. Although a higher PSNR generally indicates that the reconstruction is of higher quality, it only considers the per-pixel MSE, which makes it fail to capture perceptual differences~\cite{wang2009mean}.
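The PSNR computation above is straightforward to implement; a NumPy sketch, with MAX assumed to be 255 for 8-bit images:

```python
import numpy as np

def psnr(sr, hr, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-shape images."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of one gray level on an 8-bit image gives an MSE of 1 and a PSNR of $20\log_{10}(255) \approx 48.13$ dB.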
\textbf{Structural Similarity Index Measure (SSIM)}: SSIM~\cite{wang2004image} is another popular assessment method that measures the similarity between two images on a perceptual basis, including structures, luminance, and contrast. Different from PSNR, which calculates absolute errors at the pixel level, SSIM assumes that there exist strong inter-dependencies among pixels that are spatially close, and these dependencies carry important information about the perceptual structures. Thus SSIM can be expressed as a weighted combination of three comparative measures:
\begin{equation}
\begin{split}
SSIM(I_{SR},I_y) &=(l(I_{SR},I_y)^\alpha \cdot c(I_{SR},I_y)^\beta \cdot s(I_{SR},I_y)^\gamma) \\
&=\frac{(2\mu_{I_{SR}}\mu_{I_y}+c_1)(2\sigma_{I_{SR}I_y}+c_2)}{(\mu_{I_{SR}}^2+\mu_{I_y}^2+c_1)(\sigma_{I_{SR}}^2+\sigma_{I_y}^2+c_2)},
\end{split}
\end{equation}
where $l$, $c$, and $s$ represent the luminance, contrast, and structure comparisons between $I_{SR}$ and $I_y$, respectively, and the second equality holds when $\alpha=\beta=\gamma=1$. $\mu_{I_{SR}}$, $\mu_{I_{y}}$, $\sigma_{I_{SR}}^2$, $\sigma_{I_y}^2$, and $\sigma_{I_{SR}I_y}$ are the means ($\mu$), variances ($\sigma^2$), and covariance ($\sigma_{I_{SR}I_y}$) of the corresponding items.
A higher SSIM indicates higher similarity between two images, which has been widely used due to its convenience and stable performance on evaluating the perceptual quality. In addition, there are also some variants of SSIM, such as Multi-Scale SSIM, which is conducted over multiple scales by a process of multiple stages of subsampling.
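For illustration, here is a simplified single-window SSIM that evaluates the formula on global image statistics. The reference implementation slides a Gaussian window over the image and averages the local scores; this global variant only illustrates the luminance/contrast/structure formula, with $c_1$, $c_2$ built from the conventional constants $K_1=0.01$, $K_2=0.03$:

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM computed from global image statistics."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images score exactly 1, the maximum of the measure.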
\subsubsection{\textbf{Image Perceptual Quality}}
Since the human visual system is complex and considers many aspects when judging the differences between two images, e.g., the textures and flow inside the images, methods that purely pursue pixel-wise similarity (PSNR/SSIM) do not always perform well. Although distortion measures have been widely used, an improvement in reconstruction accuracy is not always accompanied by an improvement in visual quality. In fact, researchers have shown that distortion and perceptual quality are at odds with each other in some cases~\cite{Blau_2018_CVPR}. The perceptual quality of an image $\hat{x}$ is defined as the degree to which it looks like a natural image, which has nothing to do with its similarity to any reference image.
\textbf{Mean Opinion Score (MOS)}: MOS is a subjective method that can straightforwardly evaluate perceptual quality. Specifically, a number of viewers rate their opinions on the quality of a set of images by the Double-Stimulus method~\cite{mittal2012no}, i.e., every viewer is shown both the source and test images. After all viewers finish rating, the results are mapped onto numerical values and the average score is taken as the final MOS. MOS is a time-consuming and expensive method as it requires manual participation. Meanwhile, MOS is also considered unstable, since differences between MOS values may not be noticeable to users. Moreover, this method is too subjective to guarantee fairness.
\textbf{Natural Image Quality Evaluator (NIQE)}: NIQE~\cite{mittal2012making} is a completely blind image quality assessment method. Without the requirement of knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores, NIQE only makes use of measurable deviations from statistical regularities observed in natural images. It extracts a set of local (quality-aware) features from images based on a natural scene statistic (NSS) model, then fits the feature vectors to a multivariate Gaussian (MVG) model. The quality of a test image is then predicted by the distance between its MVG model and the MVG model learned from a natural image:
\begin{equation}
\begin{small}
D(\nu_1,\nu_2,\Sigma_1,\Sigma_2)=\sqrt{((\nu_1-\nu_2)^T(\frac{\Sigma_1+\Sigma_2}{2})^{-1}(\nu_1-\nu_2))},
\end{small}
\end{equation}
where $\nu_1$, $\nu_2$ and $\Sigma_1$, $\Sigma_2$ are the mean vectors and covariance matrices of the HR and SR images' MVG models. Notice that a higher NIQE index indicates lower image perceptual quality. Compared with MOS, NIQE is a more convenient perceptual-evaluation method.
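The distance between the two MVG models follows directly from the formula above; a NumPy sketch (inputs are the fitted mean vectors and covariance matrices):

```python
import numpy as np

def mvg_distance(nu1, nu2, sigma1, sigma2):
    """NIQE-style distance between two multivariate Gaussian models."""
    diff = nu1 - nu2
    pooled = (sigma1 + sigma2) / 2.0          # averaged covariance
    return float(np.sqrt(diff @ np.linalg.inv(pooled) @ diff))
```

With identity covariances this reduces to the ordinary Euclidean distance between the mean vectors.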
\textbf{Ma}: Ma \emph{et al.}~\cite{ma2017learning} proposed a learning-based no-reference image quality assessment method. It is designed to focus on SR images, while other learning-based methods are applied to images degraded by noise, compression, or fast fading rather than SR. It learns from perceptual scores based on human subject studies involving a large number of SR images. It then quantifies SR artifacts through three types of statistical properties, i.e., local/global frequency variations and spatial discontinuity. These features are modeled by three independent learnable regression forests to fit the perceptual scores of SR images, $\hat{y}_n (n=1,2,3)$. The final predicted quality score is $\hat{y}=\sum_n \lambda_n\cdot\hat{y}_n$, where the weight $\lambda$ is learned by minimizing $\lambda^*=\mathop{\arg\min}_{\lambda} (\sum_n \lambda_n\cdot\hat{y}_n-y)^2$.
Ma performs well in matching the perceptual scores of SR images, but like other learning-based no-reference methods, it can only assess the quality degradation arising from the distortion types on which it has been trained.
\textbf{PI}: In the 2018 PIRM Challenge on Perceptual Image Super-Resolution~\cite{blau20182018}, the perception index (PI) was first proposed to evaluate perceptual quality. It is a combination of the no-reference image quality measures Ma and NIQE:
\begin{equation}
PI=\frac{1}{2}((10-Ma)+NIQE).
\end{equation}
A lower PI indicates better perceptual quality. This new image quality evaluation standard has been widely promoted and used in recent years.
Apart from the aforementioned evaluation methods, some new methods have also been proposed in recent years. For example, Zhang \emph{et al.}~\cite{zhang2019ranksrgan} proposed $Ranker$ to learn the ranking orders of NR-IQA methods (i.e., NIQE) on the results of some perceptual SR models. Zhang \emph{et al.}~\cite{zhang2018unreasonable} introduced a new dataset of human perceptual similarity judgments. Meanwhile, a perceptual evaluation metric, Learned Perceptual Image Patch Similarity (LPIPS), was constructed by learning the perceptual judgments in this dataset. In summary, how to measure the perceptual quality of SR images more accurately and efficiently is an important issue that remains to be explored.
\subsubsection{\textbf{Reconstruction Efficiency}}
Although designing deeper networks is the easiest way to obtain better reconstruction performance, it cannot be ignored that these models will also bring more parameters, execution time, and computational costs. In order to broaden the practical application of SISR, we need to consider the trade-off between the model performance and model complexity. Therefore, it is important to evaluate the reconstruction efficiency by the following basic assessments.
\textbf{Model Size}: The model size is related to the storage required by devices to store the model. A model containing more parameters is harder to run on devices with limited hardware. Therefore, building lightweight models is conducive to the promotion and application of the algorithm. Among all the indicators, the number of model parameters is the most intuitive indicator of the model size.
\textbf{Execution Time}: Usually, a lightweight model tends to require a short execution time, but the emergence of complex strategies such as the attention mechanism has broken this balance. In other words, when some complex operations are introduced into the model, a lightweight network may also require a long execution time. Therefore, it is critically important to evaluate the execution time of the model.
\textbf{Mult-Adds}: The number of multiply-accumulate operations, or Mult-Adds, is always used to measure the model computation since operations in the CNN model are mainly multiplications and additions. The value of Mult-Adds is related to the speed or the time needed to run the model.
In summary, the trade-off between model performance and model complexity still needs to be carefully considered.
\section{Single-Image Super-Resolution}
\subsection{Benchmark framework for DL-based SISR}
In 2014, Dong \emph{et al.}~\cite{dong2015image} proposed the Super-Resolution Convolutional Neural Network (SRCNN). SRCNN is the first CNN-based SISR model. It shows that a deep CNN model is equivalent to the sparse-coding-based method, which is an example-based method for SISR. Recently, more and more SISR models treat it as an end-to-end learning task. Therefore, building a deep neural network to directly learn the mapping between LR and HR images has become the mainstream method in SISR. Motivated by SRCNN, CNN-based SISR methods are blooming and constantly refreshing the best results.
According to different targets, we divide the DL-based SISR models into four categories: reconstruction efficiency methods, reconstruction accuracy methods, perceptual quality methods, and further improvement methods.
\subsection{Reconstruction Efficiency Methods}
Hardware limitations in real-world applications raise the demand for research on efficient SISR models. Therefore, designing lightweight SISR models that can achieve the same or even better performance than their cumbersome counterparts is urgently needed. In this section, we will discuss some methods that contribute to efficient network structure design.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/RL.png}
\caption{Sketch of residual learning architecture / residual block.}
\label{RL}
\end{figure}
\subsubsection{\textbf{Residual Learning}}
In SRCNN, researchers found that better reconstruction performance can be obtained by adding more convolutional layers to increase the receptive field. However, directly stacking layers causes vanishing/exploding gradients and the degradation problem~\cite{he2015convolutional}. Meanwhile, adding more layers leads to a higher training error and a higher computational cost.
In ResNet~\cite{he2016deep}, He \emph{et al.} proposed a residual learning framework, where a residual mapping is learned instead of fitting the whole underlying mapping (Fig.~\ref{RL}). In SISR, since the LR image and HR image share most of the same information, it is easy to explicitly model the residual image between them. Residual learning enables deeper networks and mitigates the problems of gradient vanishing and degradation. With the help of residual learning, Kim \emph{et al.}~\cite{kim2016accurate} proposed a very deep super-resolution network, also known as VDSR. For the convenience of network design, the residual block~\cite{he2016deep} has gradually become the basic unit in network structures. In its convolutional branch, it usually has two $3 \times 3$ convolutional layers, two batch normalization layers, and one ReLU activation function in between. It is worth noting that the batch normalization layer is often removed in the SISR task, since EDSR~\cite{lim2017enhanced} points out that it consumes more memory without improving model performance.
\textbf{Global and Local Residual Learning}: Global residual learning is a skip-connection from the input to the final reconstruction layer, which helps improve the transmission of information from input to output and reduces the loss of information to a certain extent. However, as the network becomes deeper, a significant amount of image detail is inevitably lost after passing through so many layers. Therefore, local residual learning was proposed, which is performed in every few stacked layers instead of from input to output. In this approach, a multi-path mode is formed that carries rich image details and also helps gradient flow. Furthermore, many new feature extraction modules have introduced local residual learning to reinforce strong learning capabilities~\cite{Li_2018_ECCV,Zhang_2018_ECCV}. Of course, combining local and global residual learning is also highly popular now~\cite{ledig2017photo,lim2017enhanced,Zhang_2018_ECCV}.
\textbf{Residual Scaling}: In EDSR~\cite{lim2017enhanced}, Lim \emph{et al.} found that increasing the number of feature maps, i.e., the channel dimension, above a certain level makes the training procedure numerically unstable. To solve this issue, they adopted residual scaling~\cite{szegedy2017inception}, where the residuals are scaled down by multiplying a constant between 0 and 1 before adding them to the main path. With this residual scaling method, the model performance can be further improved.
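The combined idea can be summarized in one line: the block output is the identity branch plus a scaled residual branch. A schematic sketch, with the convolutional branch (conv-ReLU-conv in a typical residual block) abstracted as a callable `body`:

```python
import numpy as np

def residual_block(x, body, scale=0.1):
    """EDSR-style residual block: y = x + scale * body(x).

    `body` stands in for the convolutional branch; the residual scaling
    constant (0 < scale <= 1) stabilizes training when the channel
    dimension is large. With scale = 1 this is plain residual learning.
    """
    return x + scale * body(x)
```

Because the identity path is preserved exactly, gradients can flow through the skip connection unchanged, which is what mitigates vanishing gradients in deep stacks of such blocks.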
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/RNN.png}
\caption{The model structure of DRRN, where the shaded part denotes the recursive block and the parameters in the dashed box are sharing.}
\label{RNN}
\end{figure}
\subsubsection{\textbf{Recursive Learning}}
In order to obtain a large receptive field without increasing model parameters, recursive learning was proposed for SISR, where the same sub-modules are repeatedly applied in the network and share the same parameters. In other words, a recursive block is a collection of recursive units, where the corresponding structures among these recursive units share the same parameters. For instance, the same convolutional layer is applied 16 times in DRCN~\cite{kim2016deeply}, resulting in a 41 $\times$ 41 receptive field. However, too many stacked layers in a recursive-learning-based model will still cause the problem of vanishing/exploding gradients. Therefore, in DRRN~\cite{tai2017image}, the recursive block is built on residual learning (Fig.~\ref{RNN}). Recently, more and more models have introduced the residual learning strategy in their recursive units, such as MemNet~\cite{tai2017memnet}, CARN~\cite{ahn2018fast}, and SRRFN~\cite{Li_2019_ICCV}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/HFDB.png}
\caption{The structure of the hierarchical feature distillation block (HFDB).}
\label{HFDB}
\end{figure}
\subsubsection{\textbf{Gating Mechanism}}
The skip connections in the above residual learning tend to make the channel dimension of the output features extremely high. If such a high channel dimension remains the same in the following layers, the computational cost will be extremely large, which in turn affects the reconstruction efficiency and performance. Intuitively, the output features after a skip connection should be efficiently re-fused instead of simply concatenated.
To solve this issue, researchers recommend using the gating mechanism to adaptively extract and learn more efficient information. Most of the time, a $1\times1$ convolutional layer is adopted to accomplish the gating mechanism, which can reduce the channel dimension and keep the more effective information. In SRDenseNet~\cite{tong2017image} and MSRN~\cite{Li_2018_ECCV}, such a $1\times1$ convolutional layer acts as a bottleneck layer before the reconstruction module. In MemNet~\cite{tai2017memnet}, it is a gate unit at the end of each memory block that controls the weights of the long-term and short-term memory. Note that the gate can not only serve as a bottleneck placed at the end of the network, but can also be continuously applied within the network. For example, in MemNet~\cite{tai2017memnet}, IDN~\cite{hui2018fast}, and CARN~\cite{ahn2018image}, the gating mechanism is used in both global and local regions. Sometimes, it can be combined with other operations, such as the attention mechanism, to construct a more effective gate module that achieves feature distillation. For instance, Li \emph{et al.} proposed a hierarchical feature distillation block (Fig.~\ref{HFDB}) by combining $1 \times 1$ convolutional layers and the attention mechanism in MDCN~\cite{li2020mdcn}.
\subsubsection{\textbf{Curriculum Learning}}
Curriculum learning refers to gradually increasing the difficulty of the learning task. For some sequence prediction tasks or sequential decision-making problems, curriculum learning is used to reduce the training time and improve the generalization performance. Since SISR is an ill-posed problem that always faces great learning difficulty under adverse conditions such as large scaling factors, unknown degradation kernels, and noise, it is suitable to utilize curriculum learning to simplify the learning process and improve the reconstruction efficiency.
In LapSRN~\cite{lai2017deep}, curriculum learning is applied to progressively reconstruct the sub-band residuals of high-resolution images. In ProSR~\cite{wang2018fully}, each level of the pyramid is gradually blended in to reduce the impact on the previously trained layers and the training pairs of each scale are incrementally added. In SRFBN~\cite{li2019feedback}, the curriculum learning strategy is applied to solve the complex degradation tasks, where targets of different difficulties are ordered to learn it progressively. With the help of curriculum learning, complex problems can be decomposed into multiple simple tasks, hence accelerating model convergence and obtaining better reconstruction results.
\subsection{Reconstruction Accuracy Methods}
The quality of the reconstructed SR image is always the main concern in SISR. In this section, we will introduce some classic methods and strategies that can help improve the reconstruction accuracy of SISR models.
\subsubsection{\textbf{Multi-scale Learning}}
As we all know, rich and accurate image features are essential for SR image reconstruction. Meanwhile, plenty of research works~\cite{szegedy2016rethinking, chollet2017xception, lai2017deep} have pointed out that images may exhibit different characteristics at different scales and thus making full use of these features can further improve model performance. Inspired by the inception module~\cite{chollet2017xception}, Li \emph{et al.}~\cite{Li_2018_ECCV} proposed a multi-scale residual block (MSRB, Fig.~\ref{MSRB}) for feature extraction. MSRB integrates different convolution kernels in a block to adaptively extract image features at different scales. After that, Li \emph{et al.}~\cite{li2020mdcn} further optimized the structure and proposed a more efficient multi-scale dense cross block (MDCB) for feature extraction. MDCB is essentially a dual-path dense network that can effectively detect local and multi-scale features.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/MSRB.png}
\caption{The structure of multi-scale residual block (MSRB~\cite{Li_2018_ECCV}).}
\label{MSRB}
\end{figure}
Recently, more and more multi-scale SISR models have been proposed. For instance, Qin \emph{et al.}~\cite{qin2020multi} proposed a multi-scale feature fusion residual network (MSFFRN) to fully exploit image features for SISR. Chang \emph{et al.}~\cite{chang2019multi} proposed a multi-scale dense network (MSDN) by combining multi-scale learning with dense connection. Cao \emph{et al.}~\cite{cao2019single} developed a new SR approach called multi-scale residual channel attention network (MSRCAN), which introduced the channel attention mechanism into the MSRB. All the above examples indicate that the extraction and utilization of multi-scale image features are of increasing importance to further improve the quality of the reconstructed images.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/DC.png}
\caption{The structure of a simple dense connection module.}
\label{DC}
\end{figure}
\subsubsection{\textbf{Dense Connection}}
Dense connection mechanism was proposed in DenseNet~\cite{huang2017densely}, which is widely used in the computer vision tasks in recent years. Different from the structure that only sends the hierarchical features to the final reconstruction layer, each layer in the dense block receives the features of all preceding layers (Fig.~\ref{DC}). Short paths created between most of the layers can help alleviate the problem of vanishing/exploding gradients and strengthen the deep information flow through layers, thereby further improving the reconstruction accuracy.
Motivated by the dense connection mechanism, Tong \emph{et al.} introduced it into SISR and proposed SRDenseNet~\cite{tong2017image}. SRDenseNet uses not only layer-level dense connections but also block-level ones, where the output of each dense block is connected by dense connections. In this way, the low-level and high-level features are combined and fully used for reconstruction. In RDN~\cite{zhang2018residual}, dense connections are combined with residual learning to form the residual dense block (RDB), which allows low-frequency features to be bypassed through multiple skip connections, making the main branch focus on learning high-frequency information. Apart from the aforementioned models, dense connections are also applied in MemNet~\cite{tai2017memnet}, RPMNet~\cite{mei2019deep}, MFNet~\cite{shen2019multipath}, etc. With the help of the dense connection mechanism, the information flow among different depths of the network can be fully used, thus providing better reconstruction results.
\subsubsection{\textbf{Attention Mechanism}}
Attention mechanism can be viewed as a tool that can allocate available resources to the most informative part of the input. In order to improve the efficiency during the learning procedure, some works are proposed to guide the network to pay more attention to the regions of interest. For instance, Hu \emph{et al.}~\cite{hu2018squeeze} proposed a squeeze-and-excitation (SE) block to model channel-wise relationships in the image classification task. Wang \emph{et al.}~\cite{wang2018non} proposed a non-local attention neural network for video classification by incorporating non-local operations. Motivated by these methods, attention mechanism has also been introduced into SISR.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{images/CAM.png}
\caption{The principle of channel attention mechanism (CAM).}
\label{CAM}
\end{figure}
\textbf{Channel Attention}: In SISR, we mainly want to recover as much valuable high-frequency information as possible. However, common CNN-based methods treat channel-wise features equally, which lacks flexibility in dealing with different types of information. To solve this problem, many methods~\cite{Zhang_2018_ECCV,mei2018effective} introduce the SE mechanism in the SISR model. For example, Zhang \emph{et al.}~\cite{Zhang_2018_ECCV} proposed a new module based on the SE mechanism, named residual channel attention block (RCAB). As shown in Fig.~\ref{CAM}, a global average pooling layer followed by a Sigmoid function is used to rescale each feature channel, allowing the network to concentrate on more useful channels and enhancing discriminative learning ability. In SAN~\cite{dai2019second}, second-order statistics of features are explored to conduct the attention mechanism based on covariance normalization. A great number of experiments have shown that the second-order channel attention can help the network obtain more discriminative representations, leading to higher reconstruction accuracy.
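A schematic NumPy sketch of this channel attention operation on an $(h, w, c)$ feature map: global average pooling squeezes each channel to a scalar, a two-layer bottleneck with a Sigmoid produces per-channel weights in $(0, 1)$, and the feature map is rescaled channel-wise. The weight matrices `w_down` of shape $(c, c/r)$ and `w_up` of shape $(c/r, c)$ are hypothetical stand-ins for the two fully connected layers of the squeeze-and-excitation bottleneck:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w_down, w_up):
    """SE-style channel attention on an (h, w, c) feature map."""
    squeeze = feat.mean(axis=(0, 1))                             # (c,) channel statistics
    excite = sigmoid(np.maximum(squeeze @ w_down, 0.0) @ w_up)   # (c,) weights in (0, 1)
    return feat * excite                                         # rescale each channel
```

With zero weights the gate outputs 0.5 for every channel, i.e., a uniform rescaling; training drives the weights so that informative channels receive larger gates.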
\textbf{Non-Local Attention}: When CNN-based methods conduct convolution in a local receptive field, the contextual information outside this field is ignored, while the features in distant regions may have a high correlation and can provide effective information. Given this issue, non-local attention has been proposed as a filtering algorithm to compute a weighted mean of all pixels of an image. In this way, distant pixels can also contribute to the response of a position in concern. For example, the non-local operation is conducted in a limited neighborhood to improve the robustness in NLRN~\cite{liu2018non}. A non-local attention block is proposed in RNAN~\cite{zhang2019residual}, where the attention mechanisms in both channel- and spatial-wise are used simultaneously in its mask branch to better guide feature extraction in the trunk branch. Meanwhile, a holistic attention network is proposed in HAN~\cite{niu2020single}, which consists of a layer attention module and a channel-spatial attention module, to model the holistic interdependence among layers, channels, and positions. In CSNLN~\cite{mei2020image}, a cross-scale non-local attention module is proposed to mine long-range dependencies between LR features and large-scale HR patches within the same feature map. All these methods show the effectiveness of the non-local attention, which can further improve the model performance.
\subsubsection{\textbf{Feedback Mechanism}}
The feedback mechanism refers to carrying a notion of the output back to previous states, allowing the model to have a self-correcting procedure. It is worth noting that the feedback mechanism is different from recursive learning, since in the feedback mechanism the model parameters keep self-correcting and are not shared. Recently, the feedback mechanism has been widely used in many computer vision tasks~\cite{carreira2016human,cao2015look}, and it is also beneficial for SR image reconstruction. Specifically, the feedback mechanism allows the network to carry high-level information back to previous layers and refine low-level information, thus fully guiding the LR image to recover high-quality SR images.
In DBPN~\cite{haris2019deep}, iterative up- and down-sampling layers are provided to achieve an error feedback mechanism for projection errors at each stage. In DSRN~\cite{han2018image}, a dual-state recurrent network is proposed, where recurrent signals are exchanged between these states in both directions via delayed feedback. In SRFBN~\cite{li2019feedback}, a feedback block is proposed, in which the input of each iteration is the output of the previous one as the feedback information. Followed by several projection groups sequentially with dense skip connections, low-level representations are refined and become more powerful high-level representations.
\subsubsection{\textbf{Additional Prior}}\label{CNNPrior}
Most methods tend to build end-to-end CNN models to achieve SISR since this is simple and easy to implement. However, it is rather difficult for them to reconstruct realistic high-frequency details because plenty of useful features have been lost or damaged. To solve this issue, the prior-guided SISR framework has been proposed. Extensive experiments have shown that with the help of image priors, the model can converge faster and achieve better reconstruction accuracy. Many image priors have been proposed, such as the total variation prior, sparse prior, and edge prior.
Motivated by this, Yang \emph{et al.} integrated the edge prior with recursive networks and proposed a Deep Edge Guided Recurrent Residual Network (DEGREE~\cite{yang2017deep}) for SISR. After that, Fang \emph{et al.} proposed an efficient and accurate Soft-edge Assisted Network (SeaNet~\cite{fang2020soft}). Different from DEGREE, which directly applies the off-the-shelf edge detectors to detect image edges, SeaNet automatically learns more accurate image edges from the constructed Edge-Net. Meanwhile, the authors pointed out that the more accurate priors introduced, the greater improvement in performance.
\subsection{Perceptual Quality Methods}
Most methods simply seek to reconstruct SR images with high PSNR and SSIM. However, an improvement in reconstruction accuracy is not always accompanied by an improvement in visual quality. Blau \emph{et al.}~\cite{blau2018perception} pointed out that there is a perception-distortion trade-off: improving either perceptual quality or distortion typically comes at the expense of the other. Hence, in this section, we introduce methods that ease this trade-off, aiming to provide less distortion while maintaining good perceptual quality.
\subsubsection{\textbf{Perceptual Loss}}
Although pixel-wise losses, i.e., the L1 and MSE losses, have been widely used to achieve high image quality, they do not capture the perceptual differences between the SR and HR images. In order to address this problem and allow loss functions to better measure the perceptual and semantic differences between images, content loss, texture loss, and targeted perceptual loss have been proposed. Among them, content loss has been widely used to obtain more perceptual and natural images~\cite{sajjadi2017enhancenet,ledig2017photo,wang2018recovering}, which has been introduced in Sec.~\ref{Content}. Apart from obtaining more similar content, the same style, such as colors, textures, common patterns, and semantic information, is also needed. Therefore, other perceptual losses need to be considered.
\textbf{Texture Loss}: Texture loss, also called style reconstruction loss, was proposed by Gatys \emph{et al.}~\cite{gatys2015texture,gatys2015neural} and enables the model to reconstruct high-quality textures. The texture loss is defined as the squared Frobenius norm of the difference between the Gram matrices $G_j^{\phi}(x)$ of the output and the ground truth images:
\begin{equation}
\mathcal{L}^{\phi,j}_{texture}(I_{SR},I_y)=||G_j^{\phi}(I_{SR})-G_j^{\phi}(I_y)||^2_F.
\end{equation}
With the help of the texture loss, the model tends to produce images that have the same local textures as the HR images during training~\cite{johnson2016perceptual}.
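As a concrete illustration, the Gram-matrix computation behind the texture loss can be sketched in a few lines of NumPy. This is only a toy version: in practice the feature maps $\phi_j(\cdot)$ come from a pretrained VGG network, whereas here they are plain arrays, and the $1/(CHW)$ normalization is one common choice rather than the only one.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a feature map.

    feat: array of shape (C, H, W), standing in for VGG activations phi_j(x).
    Returns a (C, C) matrix, normalized by C*H*W.
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def texture_loss(feat_sr, feat_hr):
    """Squared Frobenius norm of the difference of the two Gram matrices."""
    diff = gram_matrix(feat_sr) - gram_matrix(feat_hr)
    return float(np.sum(diff ** 2))
```

Because the Gram matrix discards spatial layout and keeps only channel-wise correlations, minimizing this loss matches texture statistics rather than exact pixel positions.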
\textbf{Targeted Perceptual Loss}: The conventional perceptual loss estimates the reconstruction error for an entire image without considering semantic information, resulting in limited capability. Rad \emph{et al.}~\cite{rad2019srobb} proposed a targeted perceptual loss that penalized images at different semantic levels on the basis of the labels of object, background, and boundary. Therefore, more realistic textures and sharper edges can be obtained to reconstruct realistic SR images.
\subsubsection{\textbf{Adversarial Training}}
In 2014, the Generative Adversarial Network (GAN) was proposed by Goodfellow \emph{et al.}~\cite{goodfellow2014generative}, and it has been widely used in computer vision tasks such as style transfer and image inpainting. A GAN consists of a generator and a discriminator. While the discriminator is trained to judge whether an image is real or fake, the generator aims at fooling the discriminator rather than minimizing the distance to a specific image; hence it tends to generate outputs that have the same statistics as the training set.
Inspired by GAN, Ledig \emph{et al.} proposed the Super-Resolution Generative Adversarial Network (SRGAN~\cite{ledig2017photo}). In SRGAN, the generator $G$ is essentially an SR model trained to fool the discriminator $D$, and $D$ is trained to distinguish SR images from HR images. Therefore, the generator learns to produce outputs that are highly similar to HR images, and thus reconstructs more perceptual and natural SR images. Following this approach, the generative loss $\mathcal{L}_{Gen}$ can be defined as:
\begin{equation}
\mathcal{L}_{Gen}=-\log D_{\theta_D}(G_{\theta_G}(I_x)),
\end{equation}
and the loss in terms of discriminator is:
\begin{equation}
\mathcal{L}_{Dis}=-\log (D_{\theta_D}(I_y))-\log (1-D_{\theta_D}(G_{\theta_G}(I_x))).
\end{equation}
Therefore, we need to solve the following problem:
\begin{equation}
\begin{split}
\min_{\theta_G}\ \max_{\theta_D}\ &\mathbb{E}_{I_y \sim p_{data}(I_y)}(\log D_{\theta_D}(I_y))\ + \\
&\mathbb{E}_{I_x \sim p_{G}(I_x)}(\log(1-D_{\theta_D}(G_{\theta_G}(I_x)))).
\end{split}
\end{equation}
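The two objectives above can be sketched directly from the formulas. The NumPy toy below works on raw discriminator logits (using the identity $-\log\sigma(z)=\log(1+e^{-z})$ for numerical stability) and is only meant to show the loss terms, not a full training loop.

```python
import numpy as np

def generator_loss(d_logits_fake):
    """Generator loss -log D(G(x)) on raw discriminator logits.

    -log(sigmoid(z)) == log(1 + exp(-z)) == logaddexp(0, -z).
    """
    return float(np.mean(np.logaddexp(0.0, -d_logits_fake)))

def discriminator_loss(d_logits_real, d_logits_fake):
    """Discriminator loss -log D(y) - log(1 - D(G(x))) on raw logits."""
    real_term = np.logaddexp(0.0, -d_logits_real)  # -log sigmoid(z_real)
    fake_term = np.logaddexp(0.0, d_logits_fake)   # -log(1 - sigmoid(z_fake))
    return float(np.mean(real_term) + np.mean(fake_term))
```

When the discriminator is fooled (large positive logits on fake images), the generator loss is near zero; when it separates real from fake correctly, the discriminator loss is near zero.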
In SRGAN~\cite{ledig2017photo}, the generator is the SRResNet and the discriminator uses the architecture proposed by Radford \emph{et al.}~\cite{radford2015unsupervised}. In ESRGAN~\cite{wang2018esrgan}, Wang \emph{et al.} made two modifications to the SRResNet: (1) replacing the original residual block with the residual-in-residual dense block; (2) removing the BN layers to improve the generalization ability of the model. In SRFeat~\cite{park2018srfeat}, Park \emph{et al.} indicated that GAN-based SISR methods tend to produce meaningless high-frequency noise in reconstructed images. Therefore, they adopted two discriminators: an image discriminator and a feature discriminator, where the latter is trained to distinguish SR images from HR images based on the intermediate feature maps extracted from a VGG network. Moreover, in ESRGAN~\cite{wang2018esrgan}, Wang \emph{et al.} adopted the Relativistic GAN~\cite{jolicoeur2018relativistic}, where the standard discriminator is replaced with the relativistic average discriminator to learn the relative realness between two images. This modification helps the generator to learn sharper edges and more detailed textures.
\subsubsection{\textbf{Additional Prior (Perceptual)}}
In Sec.~\ref{CNNPrior}, we have introduced the applications of prior knowledge in the CNN-based SISR models. In this section, we will show the benefits of using additional priors in GAN-based models. The target of all the introduced additional priors is to improve the perceptual quality of the reconstructed SR images.
For example, the semantic categorical prior is used to generate richer and more realistic textures with the help of spatial feature transform (SFT) in SFTGAN\cite{wang2018recovering}. With this information from high-level tasks, similar LR patches can be easily distinguished and more natural textural details can be generated. In SPSR~\cite{ma2020structure}, the authors utilized gradient maps to guide image recovery and thus alleviate the structural distortions of GAN-based methods. The gradient maps are obtained from a gradient branch and integrated into the SR branch to provide a structure prior. With the help of gradient maps, the model knows which regions deserve more attention, which guides image generation and reduces geometric distortions.
\subsubsection{\textbf{Cycle Consistency}}
Cycle consistency assumes that there exist some underlying relationships between the source and target domains, and tries to make supervision at the domain level. To be precise, we want to capture some special characteristics of one image collection and figure out how to translate these characteristics into the other image collection. To achieve this, Zhu \emph{et al.}~\cite{zhu2017unpaired} proposed the cycle consistency mechanism, where not only the mapping from the source domain to the target domain is learned, but also the backward mapping is combined. Specifically, given a source domain $X$ and a target domain $Y$, we have a translator $G: X \rightarrow Y$ and another translator $F: Y \rightarrow X$ that trained simultaneously to guarantee both an $adversarial\ loss$ that encourages $G(X)\approx Y$ and $F(Y)\approx X$ and a $cycle\ consistency\ loss$ that encourages $F(G(X))\approx X$ and $G(F(Y))\approx Y$.
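A minimal sketch of the cycle consistency loss follows, with the two translators passed in as plain callables; an L1 penalty is used for the reconstruction terms, as in CycleGAN, and the adversarial terms are omitted for brevity.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle loss: |F(G(x)) - x|_1 + |G(F(y)) - y|_1.

    G: translator X -> Y, F: translator Y -> X (any callables here).
    In a real model these are networks trained jointly with the
    adversarial losses encouraging G(X) ~ Y and F(Y) ~ X.
    """
    loss_x = np.mean(np.abs(F(G(x)) - x))  # forward cycle X -> Y -> X
    loss_y = np.mean(np.abs(G(F(y)) - y))  # backward cycle Y -> X -> Y
    return float(loss_x + loss_y)
```

The loss vanishes exactly when the two translators are mutual inverses, which is what the domain-level supervision tries to enforce.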
In SISR, the idea of cycle consistency has also been widely discussed. Given the LR images domain $X$ and the HR images domain $Y$, we not only learn the mapping from LR to HR but also the backward process. Researchers have shown that learning how to do image degradation first without paired data can help generate more realistic images~\cite{bulat2018learn}. In CinCGAN~\cite{yuan2018unsupervised}, a cycle in cycle network is proposed, where the noisy and blurry input is mapped to a noise-free LR domain firstly and then upsampled with a pre-trained model and finally mapped to the HR domain. In DRN~\cite{guo2020closed}, the mapping from HR to LR images is learned to estimate the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. DRN also gives us a novel approach in unsupervised learning SR, where the deep network is trained with both paired and unpaired data.
\subsection{Further Improvement Methods}
In the preceding parts, we have introduced ways to design efficient SISR models, as well as to obtain high reconstruction accuracy and high perceptual quality for SR images. Although current SISR models have made significant breakthroughs in balancing reconstruction accuracy and perceptual quality, exploring more effective models remains a hot topic.
\subsubsection{\textbf{Internal Statistics}}\label{IS}
In~\cite{zontak2011internal}, Zontak \emph{et al.} found that some patches exist only in a specific image and cannot be found in any external database of examples. Therefore, SR methods trained on external images cannot work well on such images due to the lack of patch information, while methods based on internal statistics may perform well. Meanwhile, Zontak \emph{et al.} pointed out that the internal entropy of patches inside a single image is much smaller than the external entropy of patches in a general collection of natural images. Therefore, using internal image statistics to further improve model performance is a good choice.
In ZSSR~\cite{shocher2018zero}, the property of internal image statistics is used to train an image-specific CNN, where the training examples are extracted from the test image itself. In the training phase, several LR-HR pairs are generated by using data augmentation, and a CNN is trained with these pairs. At test time, the LR image $I_{LR}$ is fed to the trained CNN as input to get the reconstructed image. In this process, the model makes full use of the internal statistics of the image itself for self-learning. In SinGAN~\cite{shaham2019singan}, an unconditional generative model with a pyramid of fully convolutional GANs is proposed to learn the internal patch distribution at different scales of the image. To make use of the recurrence of internal information, they upsampled the LR image several times (depending on the final scale) to obtain the final SR output.
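The self-supervised pair construction behind ZSSR-style training can be sketched as follows. This is a simplification under stated assumptions: a plain box filter stands in for the estimated downscaling kernel, and rotations stand in for the paper's richer augmentation set.

```python
import numpy as np

def downscale(img, s):
    """Downscale a 2-D image by an integer factor s with box (average)
    filtering -- a toy stand-in for the true downscaling kernel."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    img = img[:h, :w]
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def zssr_pairs(test_lr, s, n_augment=4):
    """Build (son, father) training pairs from the test image itself.

    The test LR image acts as the HR 'father'; its downscaled version is
    the LR 'son'. Rotations provide simple augmentation. A CNN trained on
    these pairs is then applied to the original test image at test time.
    """
    pairs = []
    for k in range(n_augment):
        father = np.rot90(test_lr, k)
        pairs.append((downscale(father, s), father))
    return pairs
```

No external dataset is needed: all supervision comes from the single test image, which is exactly why such methods can exploit image-specific internal statistics.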
\subsubsection{\textbf{Multi-factors Learning}}
Typically, in SISR, we need to train specific models for different upsampling factors, and it is difficult for a single model to handle multiple upsampling factors. To solve this issue, some models have been proposed for multiple upsampling factors, such as LapSRN~\cite{lai2017fast}, MDSR~\cite{lim2017enhanced}, and MDCN~\cite{li2020mdcn}.
In LapSRN~\cite{lai2017fast}, LR images are progressively reconstructed in the pyramid networks to obtain the large-scale results, where the intermediate results can be taken directly as the corresponding multiple factors results. In~\cite{lim2017enhanced}, Lim \emph{et al.} found the inter-related phenomenon among multiple scales tasks, i.e., initializing the high-scale model parameters with the pre-trained low-scale network can accelerate the training process and improve the performance. Therefore, they proposed the scale-specific processing modules at the head and tail of the model to handle different upsampling factors. To further exploit the inter-scale correlation between different upsampling factors, Li \emph{et al.} further optimized the strategy in MDCN~\cite{li2020mdcn}. Different from MDSR which introduces the scale-specific processing strategy both at the head and tail of the model, MDCN can maximize the reuse of model parameters and learn the inter-scale correlation.
\subsubsection{\textbf{Knowledge Distillation}}
Knowledge distillation refers to a technique that transfers the representation ability of a large (teacher) model to a small (student) model to enhance the performance of the student. Hence, it has been widely used for network compression or to further improve the performance of the student model, and has shown its effectiveness in many computer vision tasks. There are mainly two kinds of knowledge distillation: soft label distillation and feature distillation. In soft label distillation, the softmax outputs of a teacher model are regarded as soft labels to provide informative dark knowledge to the student model~\cite{hinton2015distilling}. In feature distillation, the intermediate feature maps are transferred to the student model~\cite{ahn2019variational,romero2014fitnets}.
Inspired by this, some works introduce the knowledge distillation technique to SISR to further improve the performance of lightweight models. For instance, in SRKD~\cite{gao2018image}, a small but efficient student network is guided by a deep and powerful teacher network to achieve feature distributions similar to those of the teacher. In~\cite{lee2020learning}, the teacher network leverages the HR images as privileged information, and the intermediate features of the decoder of the teacher network are transferred to the student network via feature distillation, so that the student can learn high-frequency details from the teacher, which is trained with the HR images.
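Soft label distillation in the sense of~\cite{hinton2015distilling} can be sketched as a KL divergence between temperature-softened teacher and student distributions; the $T^2$ scaling used below is the paper's convention for keeping gradient magnitudes comparable across temperatures. The function names are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def soft_label_distillation(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2.

    The softened teacher distribution carries 'dark knowledge' about
    relative similarities between classes that hard labels discard.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
```

The loss is zero when the student exactly matches the teacher's softened distribution and grows as the two disagree.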
\subsubsection{\textbf{Reference-based SISR}}
In contrast to SISR where only a single LR image is used as input, reference-based SISR (RefSR) takes a reference image to assist the SR process. The reference images can be obtained from various sources like photo albums, video frames, and web image searches. Meanwhile, there are several approaches proposed to enhance image textures, such as image aligning and patch matching. Recently, some RefSR methods~\cite{yue2013landmark,zheng2018crossnet} choose to align the LR and reference images with the assumption that the reference image possesses similar content as the LR image. For instance, Yue \emph{et al.}~\cite{yue2013landmark} conducted global registration and local matching between the reference and LR images to solve an energy minimization problem. In CrossNet~\cite{zheng2018crossnet}, optical flow is proposed to align the reference and LR images at different scales, which are later concatenated into the corresponding layers of the decoder. However, these methods assume that the reference image has a good alignment with the LR image. Otherwise, their performance will be significantly influenced. Different from these methods, Zhang \emph{et al.}~\cite{zhang2019image} applied patch matching between VGG features of the LR and reference images to adaptively transfer textures from the reference images to the LR images. In TTSR~\cite{yang2020learning}, Yang \emph{et al.} proposed a texture transformer network to search and transfer relevant textures from the reference images to the LR images based on the attention mechanisms.
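The patch matching step underlying reference-based methods can be illustrated with a brute-force normalized cross-correlation search. This is only a toy stand-in: methods such as~\cite{zhang2019image} match VGG features rather than raw pixels, and use far more efficient search schemes.

```python
import numpy as np

def best_match(patch, ref, stride=1):
    """Find the reference window most similar to `patch` by normalized
    cross-correlation over raw pixels (toy stand-in for feature matching).

    Returns the top-left (row, col) of the best window and its score.
    """
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    best, best_pos = -np.inf, (0, 0)
    for i in range(0, ref.shape[0] - ph + 1, stride):
        for j in range(0, ref.shape[1] - pw + 1, stride):
            cand = ref[i:i + ph, j:j + pw]
            c = (cand - cand.mean()) / (cand.std() + 1e-8)
            score = float(np.mean(p * c))
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

Once the best-matching reference window is found, its (high-resolution) textures can be transferred to the corresponding LR location, which is the core idea behind texture transfer in RefSR.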
\subsubsection{\textbf{Transformer-based SISR}}
The key idea of Transformer is the “self-attention” mechanism, which can capture long-term dependencies between sequence elements. Recently, Transformer~\cite{vaswani2017attention} has achieved brilliant results in NLP tasks. For example, pre-trained deep learning models (e.g., BERT~\cite{devlin2018bert}, GPT~\cite{radford2019language}) have shown effectiveness over conventional methods. Inspired by this, more and more researchers have begun to explore the application of Transformer in computer vision and have achieved breakthrough results in many tasks.
Nowadays, some researchers try to introduce Transformer to image restoration tasks. For example, Chen \emph{et al.} proposed the Image Processing Transformer (IPT~\cite{chen2021pre}), which is pre-trained on large-scale datasets. In addition, contrastive learning is introduced for different image processing tasks, so the pre-trained model can be efficiently employed on the desired task after fine-tuning. However, IPT~\cite{chen2021pre} relies on large-scale datasets and has a large number of parameters (over 115.5M), which greatly limits its application scenarios. To solve this issue, Liang \emph{et al.} proposed SwinIR~\cite{liang2021swinir} for image restoration based on the Swin Transformer~\cite{liu2021swin}. Specifically, residual Swin Transformer blocks (RSTB) are proposed for feature extraction, and DIV2K+Flickr2K are used for training. Moreover, Lu \emph{et al.}~\cite{lu2021efficient} proposed an Efficient Super-Resolution Transformer (ESRT) for fast and accurate SISR. It is worth noting that ESRT is a lightweight model that achieves competitive results with fewer parameters and low computational cost. Transformer is a powerful technique, but how to train such models effectively with fewer parameters and less data is still worth exploring.
\section{Domain-Specific Applications}\label{DSA}
\subsection{Real-World SISR}
The degradation modes are complex and unknown in real-world scenarios, where downsampling is usually performed after anisotropic blurring and sometimes signal-dependent noise is added. It is also affected by the in-camera signal processing (ISP) pipeline. Therefore, SISR models trained on bicubic degradation exhibit poor performance when handling real-world images. Moreover, all the aforementioned models can only be applied to some specific integral upsampling factors, but it is essential to develop scale arbitrary SISR models for different practical applications.
Recently, some datasets and new technologies have been proposed for real SISR. In~\cite{cai2019toward}, the RealSR dataset is proposed, where paired LR-HR images of the same scene are captured by adjusting the focal length of a digital camera. Meanwhile, a Laplacian Pyramid based Kernel Prediction Network (LP-KPN) is trained on this dataset to learn per-pixel kernels to recover SR images. After that, a series of methods based on real image pairs~\cite{shi2020ddet,wei2020component,sun2021learning} are proposed. However, this dataset is post-processed and difficult to collect in large quantities, which still limits model performance. Meanwhile, some new technologies have been proposed, such as unsupervised learning~\cite{prajapati2020unsupervised,kim2020unsupervised}, self-supervised learning~\cite{shocher2018zero,kim2020dual}, zero-shot learning~\cite{shocher2018zero,emad2021dualsr}, meta-learning~\cite{soh2020meta,park2020fast}, blind SISR, and scale arbitrary SISR~\cite{hu2019meta,wanglearning}. In this part, we introduce the latter three methods due to their impressive foresight and versatility.
\subsubsection{\textbf{Blind SISR}}
Blind SISR has attracted increasing attention due to its significance in real-world applications, which aims to super-resolve LR images with unknown degradation. According to the way of degradation modeling, these methods can be divided into two categories: explicit degradation modeling methods and implicit degradation modeling methods. Among them, explicit degradation modeling methods can be further divided into two categories according to whether they use kernel estimation technology. For instance, Zhang \emph{et al.} proposed a simple and scalable deep CNN framework for multiple degradation (SRMD~\cite{zhang2018learning}) learning. In SRMD, the concatenated LR image and degradation maps are taken as input after the dimensionality stretching strategy. In DPSR~\cite{zhang2019deep}, a deep super-resolver is used as a prior within a new degradation model, in order to handle LR images with arbitrary blur kernels. After that, UDVD~\cite{xu2020unified}, AMNet~\cite{hui2021learning}, USRNet\cite{zhang2020deep}, and a series of blind SISR methods are proposed that use the degradation map as an additional input for SR image reconstruction. In contrast, some blind SISR methods pay attention to kernel estimation along with the SR process~\cite{gu2019blind,luo2020unfolding,kim2021koalanet,yamac2021kernelnet}. For example, in IKC~\cite{gu2019blind}, an iterative kernel correction procedure is proposed to help the blind SISR task find more accurate blur kernels. In DAN~\cite{luo2020unfolding}, Luo \emph{et al.} adopted an alternating optimization algorithm to estimate the blur kernel and restore the SR image in a single network, which makes the restorer and estimator well compatible with each other, and thus achieves good results in kernel estimation. However, the reconstruction accuracy of the above methods greatly depends on the accuracy of the degradation mode estimation.
To address this issue, more implicit degradation modeling methods are proposed~\cite{yuan2018unsupervised,wang2021unsupervised,wang2021real}, which aim to implicitly learn the potential degradation modes by the external datasets.
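The dimensionality stretching strategy mentioned above for SRMD can be sketched as broadcasting the degradation vector into per-pixel maps that are then concatenated with the LR image channels. The shapes and the PCA-projected kernel code are illustrative assumptions, not SRMD's exact configuration.

```python
import numpy as np

def stretch_degradation(kernel_code, noise_level, h, w):
    """SRMD-style dimensionality stretching (simplified sketch).

    kernel_code: 1-D vector of length t, e.g. a PCA projection of the
    blur kernel. Returns degradation maps of shape (t + 1, h, w): each
    entry of the vector (plus the noise level) is stretched into a
    uniform map, so the maps can be stacked with the LR image channels
    and fed to a CNN that is conditioned on the degradation.
    """
    v = np.concatenate([np.asarray(kernel_code, float), [float(noise_level)]])
    return np.broadcast_to(v[:, None, None], (v.size, h, w)).copy()
```

Because every spatial position carries the same degradation code, a fully convolutional network can read the degradation at any location without architectural changes.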
\subsubsection{\textbf{Meta-Learning}}
It is hard for artificial agents to quickly adapt to new things/data like human intelligence, since it is challenging to integrate prior experience with a small amount of new information. Meta-learning, or learning to learn, is a mechanism proposed for such learning problems, usually used in few-shot/zero-shot learning and transfer learning. In meta-learning, the trained model quickly learns new tasks in a large task space, where the test samples are used to optimize the meta-learner, so the model can adapt quickly with the help of the meta-learner when it encounters new tasks. In SISR, considering the lack of real paired samples, we hope that a model can be trained on simulated paired datasets and then transfer the learned experience to the real SISR task. To this end, Soh \emph{et al.} proposed MZSR~\cite{soh2020meta}. In MZSR, a novel training scheme based on meta-transfer learning is proposed to learn an effective initial weight for fast adaptation to new tasks under the zero-shot unsupervised setting, so that the model can be applied to real-world scenarios and achieve good results. In~\cite{park2020fast}, Park \emph{et al.} proposed an effective meta-learning method to further improve model performance without changing the architecture of conventional SISR networks. This method can be applied to any existing SISR model and effectively handles unknown SR kernels. In~\cite{hu2020meta}, Hu \emph{et al.} proposed the first unified super-resolution network for arbitrary degradation parameters with meta-learning, termed Meta-USR.
\subsubsection{\textbf{Scale Arbitrary SISR}}
In real application scenarios, in addition to processing real images, it is also important to handle arbitrary scale factors with a single model. To achieve this, Hu \emph{et al.} proposed two simple but powerful methods termed Meta-SR~\cite{hu2019meta} and Meta-USR~\cite{hu2020meta}. Among them, Meta-SR is the first SISR method that can be used for arbitrary scale factors and Meta-USR is an improved version that can be applied to arbitrary degradation mode (including arbitrary scale factors). Although Meta-SR and Meta-USR achieve promising performance on non-integer scale factors, they cannot handle SR with asymmetric scale factors. To alleviate this problem, Wang \emph{et al.}~\cite{wanglearning} suggested learning the scale-arbitrary SISR model from scale-specific networks and developed a plug-in module for existing models to achieve scale-arbitrary SR. Specifically, the proposed plug-in module uses conditional convolution to dynamically generate filters based on the input scale information, thus the networks equipped with the proposed module achieve promising results for arbitrary scales with only a single model.
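The per-pixel position information consumed by meta-upscale style modules can be sketched as a coordinate mapping from SR pixels back to LR pixels with a fractional offset. This is a simplified, 1-D version of the inputs that Meta-SR-like weight prediction networks take; the actual filter prediction is omitted.

```python
import numpy as np

def arbitrary_scale_coords(out_size, scale):
    """Map each SR pixel index to its source LR pixel and sub-pixel offset.

    For a non-integer `scale`, different SR pixels fall at different
    fractional positions within their LR source pixel; a weight
    prediction network can turn (offset, scale) into per-pixel filters.
    """
    i = np.arange(out_size)
    src = i / scale                    # continuous LR coordinate
    base = np.floor(src).astype(int)   # integer LR pixel index
    offset = src - base                # relative offset in [0, 1)
    return base, offset
```

Because `base` and `offset` are computed at run time for any `scale`, a single model can serve arbitrary (even non-integer) upsampling factors.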
\subsection{Remote Sensing Image Super-Resolution}
With the development of satellite image processing, remote sensing has become more and more important. However, due to the limitations of current imaging sensors and complex atmospheric conditions, such as limited spatial resolution, spectral resolution, and radiation resolution, we are facing huge challenges in remote sensing applications.
Recently, many methods have been proposed for remote sensing image super-resolution. For example, a new unsupervised hourglass neural network is proposed in~\cite{haut2018new} to super-resolve remote sensing images. The model uses generative random noise to introduce a higher variety of spatial patterns, which can be promoted to a higher scale according to a global reconstruction constraint. In~\cite{gu2019deep}, a Deep Residual Squeeze and Excitation Network (DRSEN) is proposed to overcome the high complexity of remote sensing image distributions. In~\cite{zhang2020remote}, a mixed high-order attention network (MHAN) is proposed, which consists of a feature extraction network and a feature refinement network with a high-order attention mechanism for detail restoration. In~\cite{dong2020remote}, the authors developed a Dense-Sampling Super-Resolution Network (DSSR) to explore large-scale SR reconstruction of remote sensing imagery.
\subsection{Hyperspectral Image Super-Resolution}
In contrast to human eyes, which are only sensitive to visible light, hyperspectral imaging is a technique for collecting and processing information across the entire electromagnetic spectrum~\cite{rickard1993hydice}. Hyperspectral systems are often compromised by the limited amount of incident energy, hence there is a trade-off between spatial and spectral resolution. Therefore, hyperspectral image super-resolution is studied to solve this problem.
In~\cite{mei2017hyperspectral}, a 3D fully convolutional neural network is proposed to extract the feature of hyperspectral images. In~\cite{li2018single}, Li \emph{et al.} proposed a grouped deep recursive residual network by designing a group recursive module and embedding it into a global residual structure. In~\cite{fu2019hyperspectral}, an unsupervised CNN-based method is proposed to effectively exploit the underlying characteristics of the hyperspectral images. In~\cite{jiang2020learning}, Jiang \emph{et al.} proposed a group convolution and progressive upsampling framework to reduce the size of the model and made it feasible to obtain stable training results under small data conditions. In~\cite{liu2021spectral}, a Spectral Grouping and Attention-Driven Residual Dense Network is proposed to facilitate the modeling of all spectral bands and focus on the exploration of spatial-spectral features.
\subsection{Light Field Image Super-Resolution}
Light field (LF) camera is a camera that can capture information about the light field emanating from a scene and can provide multiple views of a scene. Recently, the LF image is becoming more and more important since it can be used for post-capture refocusing, depth sensing, and de-occlusion. However, LF cameras are faced with a trade-off between spatial and angular resolution~\cite{wang2020spatial}. In order to solve this issue, SR technology is introduced to achieve a good balance between spatial and angular resolution.
In~\cite{yoon2017light}, a cascade convolutional neural network is introduced to simultaneously up-sample both the spatial and angular resolution of a light field image. Meanwhile, a new light field image dataset is proposed for training and validation. To reduce the dependence on accurate depth or disparity information as priors for light-field image super-resolution, Wang \emph{et al.}~\cite{wang2018lfnet} proposed a bidirectional recurrent convolutional neural network and an implicitly multi-scale fusion scheme for SR image reconstruction. In~\cite{wang2020spatial}, Wang \emph{et al.} proposed a spatial-angular interactive network (LF-InterNet) for LF image SR. Meanwhile, they designed an angular deformable alignment module for feature-level alignment and proposed a deformable convolution network (LF-DFnet~\cite{wang2020light}) to handle the disparity problem of LF image SR.
\subsection{Face Image Super-Resolution}
Face image super-resolution is the best-known domain in which SR technology is applied to domain-specific images. Due to its potential applications in facial recognition systems, such as security and surveillance, face image super-resolution has become an active area of research.
Recently, DL-based methods have achieved remarkable progress in face image super-resolution. In~\cite{zhou2015learning}, a dubbed CPGAN is proposed to address face hallucination and illumination compensation together, which is optimized by the conventional face hallucination loss and a new illumination compensation loss. In~\cite{zhu2016deep}, Zhu \emph{et al.} proposed to jointly learn face hallucination and facial spatial correspondence field estimation. In~\cite{yu2017hallucinating}, spatial transformer networks are used in the generator architecture to overcome problems related to misalignment of input images. In~\cite{zhang2018super,dogan2019exemplar}, the identity loss is utilized to preserve the identity-related features by minimizing the distance between the embedding vectors of SR and HR face images. In~\cite{gao2021robust}, the mask occlusion is treated as image noise and a joint and collaborative learning network (JDSR-GAN) is constructed for the masked face super-resolution task.
\subsection{Medical Image Super-Resolution}
Medical imaging methods such as computational tomography (CT) and magnetic resonance imaging (MRI) are essential to clinical diagnoses and surgery planning. Hence, high-resolution medical images are desirable to provide the necessary visual information about the human body. Recently, many methods have been proposed for medical image super-resolution.
For instance, Chen \emph{et al.} proposed a Multi-level Densely Connected Super-Resolution Network (mDCSRN~\cite{chen2018efficient}) with GAN-guided training to generate high-resolution MR images, which can train and inference quickly. In~\cite{wang2019ct}, a 3D Super-Resolution Convolutional Neural Network (3DSRCNN) is proposed to improve the resolution of 3D-CT volumetric images. In~\cite{zhao2019channel}, Zhao \emph{et al.} proposed a deep Channel Splitting Network (CSN) to ease the representational burden of deep models and further improve the SR performance of MR images. In~\cite{peng2020saint}, Peng \emph{et al.} introduced a Spatially-Aware Interpolation Network (SAINT) for medical slice synthesis to alleviate the memory constraint that volumetric data posed. All of these methods are the cornerstone of building the smart medical system and have great research significance and value.
\subsection{Stereo Image Super-Resolution}
The dual camera has been widely used to estimate the depth information. Meanwhile, stereo imaging can also be applied in image restoration. In the stereo image pair, we have two images with disparity much larger than one pixel. Therefore, full use of these two images can enhance the spatial resolution.
In StereoSR~\cite{jeon2018enhancing}, Jeon \emph{et al.} proposed a method that learns a subpixel parallax prior to enhance the spatial resolution of stereo images. However, the number of shifted right images is fixed in StereoSR, which makes it fail to handle stereo images with large disparity variations. To handle this problem, Wang \emph{et al.}~\cite{wang2019learning,wang2020parallax} proposed a parallax-attention mechanism with a global receptive field along the epipolar line, which can generate reliable correspondences between the stereo image pair and improve the quality of the reconstructed SR images. In~\cite{wang2019flickr1024}, a dataset named Flickr1024 is proposed for stereo image super-resolution, which consists of 1024 high-quality stereo image pairs. In~\cite{ying2020stereo}, a stereo attention module is proposed to extend pre-trained SISR networks for stereo image SR, which interacts with stereo information bi-directionally in a symmetric and compact manner. In~\cite{wang2021symmetric}, a symmetric bi-directional parallax attention module and an inline occlusion handling scheme are proposed to effectively interact cross-view information. In~\cite{dai2021feedback}, a Stereo Super-Resolution and Disparity Estimation Feedback Network (SSRDE-FNet) is proposed to simultaneously handle stereo image super-resolution and disparity estimation in a unified framework.
\begin{table*}
\centering
\doublerulesepcolor{gray}
\setlength{\tabcolsep}{6mm}
\renewcommand\arraystretch{1.1}
\caption{PSNR/SSIM comparison on Set5 ($\times 4$), Set14 ($\times 4$), and Urban100 ($\times 4$). Meanwhile, the training datasets and the number of model parameters are provided. Sorted by the PSNR on Set5 in ascending order. The best results are \textbf{highlighted}. Please zoom in to see details.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\rowcolor{gray!70} \textbf{Models} & \begin{tabular}[c]{@{}c@{}}\textbf{Set5}\\ \textbf{PSNR/SSIM}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Set14}\\ \textbf{PSNR/SSIM}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Urban100}\\ \textbf{PSNR/SSIM}\end{tabular} & \textbf{Training Datasets} & \textbf{Parameters} \\
\hline
SRCNN~\cite{dong2014learning} & 30.48/0.8628 & 27.50/0.7513 & 24.52/0.7221 & T91+ImageNet & 57K \\
\hline
\rowcolor{gray!30}ESPCN~\cite{shi2016real} & 30.66/0.8646 & 27.71/0.7562 & 24.60/0.7360 & T91+ImageNet & 20K \\
\hline
FSRCNN~\cite{dong2016accelerating} & 30.71/0.8660 & 27.59/0.7550 & 24.62/0.7280 & T91+General-100 & 13K \\
\hline
\rowcolor{gray!30}VDSR~\cite{kim2016accurate} & 31.35/0.8838 & 28.02/0.7680 & 25.18/0.7540 & BSD+T91 & 665K \\
\hline
LapSRN~\cite{lai2017deep} & 31.54/0.8855 & 28.19/0.7720 & 25.21/0.7560 & BSD+T91 & 812K \\
\hline
\rowcolor{gray!30}DRRN~\cite{tai2017image} & 31.68/0.8888 & 28.21/0.7721 & 25.44/0.7638 & BSD+T91 & 297K \\
\hline
MemNet~\cite{tai2017memnet} & 31.74/0.8893 & 28.26/0.7723 & 25.50/0.7630 & BSD+T91 & 677K \\
\hline
\rowcolor{gray!30}AWSRN-S~\cite{wang1904lightweight} & 31.77/0.8893 & 28.35/0.7761 & 25.56/0.7678 & DIV2K & 588K \\
\hline
IDN~\cite{hui2018fast} & 31.82/0.8903 & 28.25/0.7730 & 25.41/0.7632 & BSD+T91 & 678K \\
\hline
\rowcolor{gray!30}NLRN~\cite{liu2018non} & 31.92/0.8916 & 28.36/0.7745 & 25.79/0.7729 & BSD+T91 & 330K \\
\hline
CARN-M~\cite{ahn2018fast} & 31.92/0.8903 & 28.42/0.7762 & 25.62/0.7694 & DIV2K & 412K \\
\hline
\rowcolor{gray!30}MAFFSRN~\cite{muqeet2020multi} & 32.24/0.8952 & 28.61/0.7819 & 26.11/0.7858 & DIV2K & 550K \\
\hline
RFDN~\cite{liu2020residual} & 32.18/0.8948 & 28.58/0.7812 & 26.04/0.7848 & DIV2K & 441K \\
\hline
\rowcolor{gray!30}ESRT~\cite{lu2021efficient} & 32.19/0.8947 & \textbf{28.69}/\textbf{0.7833} & \textbf{26.39}/\textbf{0.7962} & DIV2K & 751K \\
\hline
IMDN~\cite{hui2019lightweight} & 32.21/0.8949 & 28.58/0.7811 & 26.04/0.7838 & DIV2K & 715K \\
\hline
\rowcolor{gray!30}MSFIN~\cite{wang2021lightweight} & \textbf{32.28}/\textbf{0.8957} & 28.57/0.7813 & 26.13/0.7865 & DIV2K & 682K \\
\hline
\hline
DSRN~\cite{han2018image} & 31.40/0.8830 & 28.07/0.7700 & 25.08/0.7470 & T91 & 1.2M \\
\hline
\rowcolor{gray!30}DRCN~\cite{kim2016deeply} & 31.53/0.8838 & 28.02/0.7670 & 25.14/0.7510 & T91 & 1.8M \\
\hline
MADNet~\cite{lan2020madnet} & 31.95/0.8917 & 28.44/0.7780 & 25.76/0.7746 & DIV2K & 1M \\
\hline
\rowcolor{gray!30}SRMD~\cite{zhang2018learning} & 31.96/0.8925 & 28.35/0.7787 & 25.68/0.7731 & BSD+DIV2K+WED & 1.6M \\
\hline
SRDenseNet~\cite{tong2017image} & 32.02/0.8934 & 28.50/0.7782 & 26.05/0.7819 & ImageNet & 2.0M \\
\hline
\rowcolor{gray!30}SRResNet~\cite{ledig2017photo} & 32.05/0.8910 & 28.49/0.7800 & -------/------- & ImageNet & 1.5M \\
\hline
MSRN~\cite{Li_2018_ECCV} & 32.07/0.8903 & 28.60/0.7751 & 26.04/0.7896 & DIV2K & 6.3M \\
\hline
\rowcolor{gray!30}CARN~\cite{ahn2018fast} & 32.13/0.8937 & 28.60/0.7806 & 26.07/0.7837 & BSD+T91+DIV2K & 1.6M \\
\hline
SeaNet~\cite{fang2020soft} & 32.33/0.8970 & 28.81/0.7855 & 26.32/0.7942 & DIV2K & 7.4M \\
\hline
\rowcolor{gray!30}CRN~\cite{ahn2018fast} & 32.34/0.8971 & 28.74/0.7855 & 26.44/0.7967 & DIV2K & 9.5M \\
\hline
EDSR~\cite{lim2017enhanced} & 32.46/0.8968 & 28.80/0.7876 & 26.64/0.8033 & DIV2K & 43M \\
\hline
\rowcolor{gray!30}RDN~\cite{zhang2018residual} & 32.47/0.8990 & 28.81/0.7871 & 26.61/0.8028 & DIV2K & 22.6M \\
\hline
DBPN~\cite{haris2019deep} & 32.47/0.8980 & 28.82/0.7860 & 26.38/0.7946 & DIV2K+Flickr2K & 10M \\
\hline
\rowcolor{gray!30}SRFBN~\cite{li2019feedback} & 32.47/0.8983 & 28.81/0.7868 & 26.60/0.8015 & DIV2K+Flickr2K & 3.63M \\
\hline
MDCN~\cite{li2020mdcn} & 32.48/0.8985 & 28.83/0.7879 & 26.69/0.8049 & DIV2K & 4.5M \\
\hline
\rowcolor{gray!30}RNAN~\cite{zhang2019residual} & 32.49/0.8982 & 28.83/0.7878 & 26.61/0.8023 & DIV2K & 7.5M \\
\hline
SRRFN~\cite{Li_2019_ICCV} & 32.56/0.8993 & 28.86/0.7882 & 26.78/0.8071 & DIV2K & 4.2M \\
\hline
\rowcolor{gray!30}IGNN~\cite{zhou2020cross} & 32.57/0.8998 & 28.85/0.7891 & 26.84/0.8090 & DIV2K & 48M \\
\hline
NLSA~\cite{mei2021image} & 32.59/0.9000 & 28.87/0.7891 & 26.96/0.8109 & DIV2K & 41M \\
\hline
\rowcolor{gray!30}RCAN~\cite{zhang2018image} & 32.63/0.9002 & 28.87/0.7889 & 26.82/0.8087 & DIV2K & 16M \\
\hline
SAN~\cite{dai2019second} & 32.64/0.9003 & 28.92/0.7888 & 26.79/0.8068 & DIV2K & 15.7M \\
\hline
\rowcolor{gray!30}HAN~\cite{niu2020single} & 32.64/0.9002 & 28.90/0.7890 & 26.85/0.8094 & DIV2K & 16.1M \\
\hline
IPT~\cite{chen2021pre} & 32.64/-------- & 29.01/-------- & 27.26/-------- & ImageNet & 115.5M \\
\hline
\rowcolor{gray!30}RFANet~\cite{liu2020residual} & 32.66/0.9004 & 28.88/0.7894 & 26.92/0.8112 & DIV2K & 11M \\
\hline
DRN-S~\cite{guo2020closed} & 32.68/0.9010 & 28.93/0.7900 & 26.84/0.8070 & DIV2K+Flickr2K & 4.8M \\
\hline
\rowcolor{gray!30}RRDB~\cite{wang2018esrgan} & 32.73/0.9011 & 28.99/0.7917 & 27.03/0.8153 & DIV2K+Flickr2K & 16.7M \\
\hline
DRN-L~\cite{guo2020closed} & 32.74/0.9020 & 28.98/0.7920 & 27.03/0.8130 & DIV2K+Flickr2K & 9.8M \\
\hline
\rowcolor{gray!30}SwinIR~\cite{liang2021swinir} & \textbf{32.92}/\textbf{0.9044} & \textbf{29.09}/\textbf{0.7950} & \textbf{27.45}/\textbf{0.8254} & DIV2K+Flickr2K & 11.8M \\
\hline
\end{tabular}
\label{Results}
\end{table*}
\section{Reconstruction Results}
To help readers intuitively grasp the performance of the aforementioned SISR models, we provide a detailed comparison of their reconstruction results. According to the number of model parameters, we divide SISR models into two types: lightweight models and large models. Note that we refer to models with fewer than 1000K parameters as lightweight models and models with more than 1M (M = million) parameters as large models. Specifically, we collect 44 representative SISR models, including the most classic, latest, and SOTA SISR models.
In TABLE~\ref{Results} we provide the reconstruction results, training datasets, and model parameters of these models (lightweight models and large models are separated by the double horizontal line). According to the results, we can find that: (1) using a large dataset (e.g., DIV2K+Flickr2K) helps the model achieve better results; (2) more model parameters do not always mean better performance, so simply increasing the model size is not the best solution; (3) Transformer-based models show strong advantages, whether among lightweight models (e.g., ESRT~\cite{lu2021efficient}) or large models (e.g., SwinIR~\cite{liang2021swinir}); (4) research on tiny models (with fewer than 1000K parameters) is still lacking. In the future, it remains important to explore more discriminative evaluation indicators and to develop more effective SISR models.
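The PSNR values reported in TABLE~\ref{Results} follow directly from the mean squared error between the reconstruction and the ground truth. A minimal sketch of the computation (the function name and the 8-bit peak value are our own illustrative choices, not taken from any surveyed codebase):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that for color images PSNR is often computed on the luminance (Y) channel only, and evaluation protocols differ in boundary cropping, which is one reason reported numbers can differ slightly across papers.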
\section{Remaining Issues and Future Directions}
It is true that the above models have achieved promising results and have greatly promoted the development of SISR. However, we cannot ignore that there are still many challenging issues in SISR. In this section, we will point out some of the challenges and summarize some promising trends and future directions.
\subsection{Lightweight SISR for Edge Devices}
With the rapid development of the smart terminal market, research on lightweight SISR models has gained increasing attention. Although existing lightweight SISR models have achieved a good balance between model size and performance, we find that they still cannot be used on edge devices (e.g., smartphones, smart cameras). This is because the model size and computational costs of these models still exceed the limits of edge devices.
Therefore, exploring lightweight SISR models that can be practical in use for the edge devices has great research significance and commercial value. To achieve this, more efficient network structure and mechanisms are worthy of further exploration. Moreover, it is also necessary to use technologies like network binarization~\cite{ma2019efficient} and network quantization~\cite{li2020pams} to further reduce the model size. In the future, it is worth combining the lightweight SISR models with model compression schemes to achieve the usage of SISR on edge devices.
\subsection{Flexible and Adjustable SISR}
Although DL-based SISR models have achieved gratifying results, we notice that the structure of all these models must be consistent during training and testing. This greatly limits the flexibility of the models, making the same model difficult to apply to different application scenarios. In other words, previous methods require training specially designed models to meet the requirements of different platforms, which costs a great amount of manpower and material resources. Therefore, it is crucial to design a flexible and adjustable SISR model that can be deployed on different platforms without retraining while maintaining good reconstruction results.
\subsection{New Loss Functions and Assessment Methods}
In the past, most SISR models relied on L1 loss or MSE loss. Although some other new loss functions like content loss, texture loss, and adversarial loss have been proposed, they still cannot achieve a good balance between reconstruction accuracy and perceptual quality. Therefore, it remains an important research topic to explore new loss functions that can ease the perception-distortion trade-off. Meanwhile, some existing assessment methods are subjective and can be unfair. Therefore, new assessment methods that can efficiently reflect image perception and distortion at the same time are also essential.
\subsection{Mutual Promotion with High-Level Tasks}
As we all know, high-level computer vision tasks (e.g., image classification, image segmentation, and image analysis) are highly dependent on the quality of the input image, so SISR technology is usually used for pre-processing. Meanwhile, the quality of the SR images will greatly affect the accuracy of these tasks. Therefore, we recommend using the accuracy of high-level CV tasks as an evaluation indicator to measure the quality of the SR image. Meanwhile, we can design some loss functions related to high-level tasks, thus we can combine the feedback from other tasks to further improve the quality of SR images. On the other hand, we find that the two-step method of pre-processing the image using the SISR model is inefficient, which cannot fully use the potential features of the image itself, resulting in poor model performance. Therefore, we recommend exploring SISR models that can interact with high-level CV tasks, thus SISR and other tasks can promote and learn from each other.
\subsection{Efficient and Accurate Real SISR}
Real SISR is destined to become the future mainstream in this field. Therefore, it will inevitably become the focus of researchers in the next few years. On the one hand, a sufficiently large and accurate real image dataset is critical for real SISR. To achieve this, in addition to manual collection, we recommend using generative technology to simulate images, as well as using generative adversarial networks to simulate enough degradation modes to build a large real dataset. On the other hand, considering the difficulty of constructing real image datasets, it is important to develop unsupervised learning-based SISR, meta-learning-based SISR, and blind SISR. Among them, unsupervised learning can free the models from dependence on paired datasets, meta-learning can help models migrate from simulated datasets to real data with simple fine-tuning, and blind SISR can explicitly or implicitly learn the degradation mode of the image and then reconstruct high-quality SR images based on the learned degradation mode. Although plenty of blind SISR methods have been proposed, they often have unstable performance or strict prerequisites. Therefore, combining them may bring new solutions for real SISR.
\subsection{Efficient and Accurate Scale Arbitrary SISR}
SISR has seen applications in diverse real-life scenarios and users. Therefore, it is necessary to develop a flexible and universal scale arbitrary SISR model that can be adapted to any scale, including asymmetric and non-integer scale factors. Currently, most DL-based SISR models can only be applied to one or a limited number of fixed upsampling factors. Although a few scale arbitrary SISR methods have been proposed, they tend to lack flexibility in use and simplicity of implementation, which greatly limits their application scenarios. Therefore, exploring a CNN-based accurate scale arbitrary SISR model that is as simple and flexible as Bicubic interpolation is crucial to the spread of SISR technology.
\subsection{Consider the Characteristics of Different Images}
Although a series of models have been proposed for domain-specific applications, most of them directly transfer existing SISR methods to these specific fields. This is the simplest and most feasible approach, but it also limits model performance since it ignores the data characteristics of the domain-specific images. Therefore, fully mining and using the potential priors and data characteristics of domain-specific images is beneficial for constructing efficient and accurate domain-specific SISR models. In the future, it will be a trend to further optimize existing SISR models based on the prior knowledge and characteristics of domain-specific images.
\section{Conclusion}
In this survey, we have given a comprehensive overview of DL-based single image super-resolution methods according to their targets, including reconstruction efficiency, reconstruction accuracy, perceptual quality, and other technologies that can further improve model performance. Meanwhile, we provided a detailed introduction to the related works of SISR and introduced a series of new tasks and domain-specific applications extended by SISR. In order to view the performance of each model more intuitively, we also provided a detailed comparison of reconstruction results. Moreover, we provided some underlying problems in SISR and introduced several new trends and future directions worthy of further exploration. We believe that the survey can help researchers better understand this field and further promote the development of this field.
\ifCLASSOPTIONcompsoc
\bibliographystyle{IEEEtran}
\section{Introduction}
In the last few decades, the study of the interaction between ultra-intense laser pulses and plasmas has attracted a significant
amount of interest because of its applications in a number of areas such as laser fusion \cite{Tabak_94,mori_14},
plasma-based particle accelerators \cite{liu_13,joshi_07,Santala_01,Tajima_79} and photon acceleration
schemes \cite{Santala_2000}. A variety of
exact nonlinear localized solutions has been found in extensive studies of the laser-plasma interaction
process \cite{Tabak_94,kaw_92, Farina_05,Mori_88, Katsouleas_88, Kuehl_93, sudan_97}.
When a laser pulse interacts with the plasma medium, it expels plasma electrons through the ponderomotive force and creates a cavity by evacuating plasma electrons from the center of the pulse. These electrons are pulled back by the stationary ions left in the background. An exact balance between the ponderomotive force and the electrostatic force then leads to a
configuration where the electrons pile up at the edge, preventing the radiation from leaking out.
Such a structure can either be stationary or propagate with a constant speed in the plasma.
The complete characterization of these structures has been made earlier \cite{saxena_06, Poornakala_02, Farina_01, sita_11, deepa_15}.
Depending on the number of peaks of the light pulse trapped in the density cage, these solutions have been termed single-peak, paired, and multiple-peak solutions. The question of the accessibility and stability of such structures has been addressed by fluid simulations \cite{Poornakala_02,saxena_06, saxena_07, sita_11}. Some PIC studies have also been carried out \cite{Farina_01,bulanov_98} on the evolution of coherent structures in light-matter coupled systems.
The coherent structures are important in many ways.
For instance, they can be used as a means of transporting energy in a plasma medium.
They can also be utilized for particle and photon acceleration purposes \cite{jung_2011}.
A precarious spatial balance of the electrostatic and electromagnetic fields is required for the formation of exact solitonic structures. The frequency of the electromagnetic field and the propagation speed also have to satisfy stringent eigenvalue conditions; for multiple peaks, only discrete eigenvalues are permitted. These conditions are very difficult to satisfy in a realistic situation. We show that there also exist time-dependent localized structures which do not require any such delicate balance among the various fields. Such structures, though time dependent, are observed to survive as a single entity for a very long duration, e.g., for several hundreds of plasma periods.
The energy leakage is minimal. We feel that such versatile long-lived structures are also well suited for many applications including the transport of energy.
The context of the formation of such time-dependent structures, the behavior they exhibit during evolution, etc., have been examined in considerable detail using fluid and PIC simulations. In this manuscript, however, we restrict ourselves to non-propagating time-dependent structures. Propagating time-dependent structures have also been observed, the details of which will be presented in a subsequent manuscript.
This paper has been organized as follows. The next section contains a description of the governing equation along with a brief discussion on numerical simulation and associated diagnostics.
In section III, we provide a description of the spontaneously formed time-dependent non-propagating structure. We show that in the collisional interaction between high- but unequal-amplitude single-peak structures, the remnant structure displays interesting oscillatory behavior. These remnants
are observed to invariably have a high-amplitude
(exceeding the upper limit of the radiation amplitude for static single-peak solutions \cite{bulanov_98}) static oscillating profile.
In section IV, we recreate the nature of time dependence observed in such spontaneously formed structures by deliberately disturbing the delicate balance between the radiation and the electron density profile of the exact non-propagating solution proposed by Esirkepov et al \cite{bulanov_98}.
Despite the significant disturbance added to the solution, the structure does not disintegrate but exhibits traits of time dependence similar to those observed in the structure formed spontaneously in the aftermath of collisions between two exact solutions in section III.
A detailed study of this system shows that the energy alternates between field and kinetic forms. Basically, the excess radiation introduced in the system tries to leak out of the structure and in the process excites electron density oscillations. The excitation of a plasma wave in the inhomogeneous density background maintained by the equilibrium structure, together with relativistic effects, leads to wave breaking. The fluid and PIC simulations match each other exactly before the onset of wave breaking.
In the event of wave breaking, as the peaked density structures form, the total energy evaluated from the fluid description shows a small dip, whereas in the PIC simulations the
energy continues to remain conserved. It is observed that around this time in PIC, the particles acquire random kinetic energy, which is accounted for in the PIC energy evaluation.
Interestingly, even after wave breaking the radiation structure continues to retain its identity, and the out-of-phase oscillations between the field and kinetic energies continue. After wave breaking, even though there is no exact match between the fluid and PIC time evolutions, the characteristic features of the oscillations, in terms of frequency etc., remain identical.
Section V contains the summary and discussion.
\section{Simulation set up}
\subsection{Fluid simulation}
The basic governing equations to study the formation and propagation of solitons in the coupled laser-plasma system are the relativistic fluid-Maxwell equations. These comprise the continuity equation, the momentum equation for the electrons, and the full set of Maxwell's equations.
Here the dynamics of the ions is ignored because of their slow response due to their heavy mass. The intensity of the light
field (laser) is considered to be high enough for the electrons to be driven into the relativistic domain.
This yields the following complete set of equations in normalized variables, with variations restricted to 1-D along the $\hat{x}$ direction.
\begin{equation}
\frac{\partial n}{\partial t} + \frac{\partial (nv_x) }{\partial x}= 0
\label{eq1}
\end{equation}
\begin{equation}
\left( \frac{\partial}{\partial t} + v_x\frac{\partial }{\partial x}\right)
(\gamma \vec v) = -\vec{E} - (\vec{v} \times \vec{B})
\label{eq2}
\end{equation}
\begin{equation}
\frac {\partial{\vec B}} {\partial t}=-\frac {\partial} {\partial x}(\widehat{x} \times \vec{E})
\label{eq3}
\end{equation}
\begin{equation}
\frac {\partial{\vec E}} {\partial t}= n \vec{v}+\frac {\partial} {\partial x}(\widehat{x} \times \vec{B})
\label{eq4}
\end{equation}
\begin{equation}
\frac {\partial{E_x}} {\partial x}= (1- n)
\label{eq5}
\end{equation}
\begin{equation}
\frac {\partial{B_x}} {\partial x}= 0
\label{eq6}
\end{equation}
where $\gamma=(1-v^2)^{-1/2}$ is the relativistic factor associated with plasma electrons of
density $n$ and velocity $\vec{v}$. In the above equations, $v_x$, $E_x$ and $B_x$ represent the
$x$-components of the velocity $\vec{v}$, the electric field $\vec{E}$ and the magnetic field $\vec{B}$,
respectively. We can also write,
$ \vec{E}=- \widehat{x} \frac {\partial \phi } {\partial x}-\frac{\partial \vec{A}}{\partial t}$ ,
$\frac {\partial^2 \phi } {\partial x^2}= (n-1)$
and $\vec{B}=\frac {\partial} {\partial x}(\widehat{x} \times \vec{A})$, where $\phi$ and $\vec{A}$
represent the electrostatic potential
and the vector potential, respectively. Here the normalized variables are $n \rightarrow \frac{n}{n_0}$,
$\vec{v}\rightarrow \frac{\vec{v}}{c}$, $\vec{E}\rightarrow \frac{e\vec{E}}{mc\omega_{pe}}$,
$\vec{B}\rightarrow \frac{e\vec{B}}{mc\omega_{pe}}$, $x\rightarrow \frac{x}{c/\omega_{pe}}$,
$t\rightarrow {t \omega_{pe}}$, $\phi \rightarrow \frac{e\phi}{mc^2}$, and $A \rightarrow \frac{eA}{mc^2}$.
Equations (\ref{eq1}--\ref{eq6}) form the complete set of equations to study the light-plasma interaction in 1-D.
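For concreteness, the normalizations above can be implemented as a small helper that returns the dimensional scale factors for a given background density $n_0$; the constants are standard CODATA values, and the function names are our own illustrative choices:

```python
import math

# Physical constants (SI units).
E_CHARGE = 1.602176634e-19   # elementary charge (C)
M_E      = 9.1093837015e-31  # electron mass (kg)
C_LIGHT  = 2.99792458e8      # speed of light (m/s)
EPS0     = 8.8541878128e-12  # vacuum permittivity (F/m)

def plasma_frequency(n0):
    """Electron plasma frequency omega_pe (rad/s) for density n0 in m^-3."""
    return math.sqrt(n0 * E_CHARGE**2 / (EPS0 * M_E))

def normalization_scales(n0):
    """Scale factors mapping SI quantities onto the dimensionless variables
    used in Eqs. (1)-(6): x -> x/(c/omega_pe), t -> t*omega_pe,
    E -> eE/(m c omega_pe), phi -> e phi/(m c^2)."""
    w_pe = plasma_frequency(n0)
    return {
        "length":    C_LIGHT / w_pe,                  # skin depth c/omega_pe (m)
        "time":      1.0 / w_pe,                      # 1/omega_pe (s)
        "E_field":   M_E * C_LIGHT * w_pe / E_CHARGE, # V/m
        "potential": M_E * C_LIGHT**2 / E_CHARGE,     # V
    }
```

For example, for $n_0 = 10^{18}\,\mathrm{m^{-3}}$ the skin depth $c/\omega_{pe}$ is about $5.3$ mm, so one normalized length unit corresponds to that physical distance.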
A fluid code was developed based on the flux-corrected transport scheme
\cite{boris_93} using the LCPFCT suite of subroutines.
The basic principle of the LCPFCT routines is a generalization of the two-step Lax-Wendroff method \cite{Press_92}.
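The two-step (Richtmyer) Lax-Wendroff update underlying the transport stage can be sketched for a scalar 1-D advection equation $u_t + c\,u_x = 0$ on a periodic grid as follows (an illustrative sketch of the base scheme only, not the actual LCPFCT code):

```python
import numpy as np

def lax_wendroff_step(u, c, dx, dt):
    """One two-step (Richtmyer) Lax-Wendroff update of u_t + c u_x = 0
    on a periodic grid. Stable for |c| dt / dx <= 1."""
    u_plus = np.roll(u, -1)  # u_{i+1}
    # Half step: provisional values at interfaces i+1/2, time t + dt/2.
    u_half = 0.5 * (u + u_plus) - 0.5 * c * dt / dx * (u_plus - u)
    # Full step: centred difference of the half-step interface values.
    return u - c * dt / dx * (u_half - np.roll(u_half, 1))
```

LCPFCT adds a flux-correction (limiter) stage on top of such a transport step to suppress the dispersive ripples near steep gradients; the sketch shows only the underlying second-order transport.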
\par\vspace{\baselineskip}
\subsection{Particle - In - Cell (PIC) Simulations}
The Particle-In-Cell (PIC) method essentially uses the equation of motion (the Lorentz force equation) for the evolution of the particle velocities and positions.
Maxwell's equations are used to evolve the electric and magnetic fields in a self-consistent manner. The methodology of PIC simulations has been described in detail in many review articles \cite{Dawson_83} and books \cite{Birdsall_85}.
The box length of the system ($L_x$), cell size ($\Delta x$) and time step ($\Delta t$) are chosen to be the same for both the fluid and
PIC simulations. The time step has been calculated using the Courant-Friedrichs-Lewy condition \cite{Courant_1928}. We have ignored the evolution of the ions here; they merely provide a smooth neutralizing background. The particle positions are initially chosen so as to reproduce the requisite electron density of the structure of choice. There are well-known prescriptions for the same \cite{Hockney_81}. The charge density and the current density defined at the grids are used for evolving the electric and magnetic fields. The electric and magnetic fields, in turn, are interpolated at the particle locations for advancing the velocities and positions of the particles through well-known schemes \cite{Birdsall_85}.
The results from PIC simulations are in general quite noisy. A choice of about $50$ to a maximum of about $250$ particles per cell in our simulations showed a considerable reduction in noise.
We present the results in terms of the same normalizations as discussed in the fluid subsection. Periodic boundary conditions have been used in these simulations.
We initiate our simulations using the profiles of density, velocity, electric field and magnetic field for various kinds of localized structures composed of coupled light and plasma.
The evolution of these profiles with time has been recorded.
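One standard prescription for loading the particle positions so that they reproduce a prescribed density profile is to invert the cumulative integral of the density (a "quiet start"). A minimal sketch, our own illustration rather than the production code:

```python
import numpy as np

def load_positions(density_profile, x_grid, n_particles):
    """Quiet-start loading: place particles so that their distribution
    reproduces density_profile sampled on x_grid.  cumsum is a first-order
    approximation to the cumulative integral, adequate for a sketch."""
    cdf = np.cumsum(density_profile)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])           # normalise to [0, 1]
    # Evenly spaced targets in probability space, mapped back through the CDF.
    targets = (np.arange(n_particles) + 0.5) / n_particles
    return np.interp(targets, cdf, x_grid)
```

Compared with random sampling of the same distribution, this deterministic loading substantially reduces the initial statistical noise.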
\subsection{Energy and other diagnostics}
The total energy of the system is a crucial diagnostic for any simulation, as it ensures the
accuracy of the simulation. It can be shown that for the set of Eqs.~(\ref{eq1}-\ref{eq6}) the
total energy, composed of the sum of the field energy and the kinetic energy of the particles, satisfies the following
conservation law:
\begin{equation}
\frac{1}{2}\frac{\partial}{\partial t} \int \left[ E^2 +B^2 \right] dx + \frac{\partial}{\partial t}\int \left[n(\gamma -1)
\right] dx + \int \left[ \nabla \cdot (\vec{E} \times \vec{B}) \right] dx = 0
\end{equation}
The first term here is the field energy and the second term denotes the kinetic energy of the plasma
medium, where $\gamma$ is the relativistic factor. The third term represents the energy loss through the Poynting flux. In our
simulations, we have considered a sufficiently large box size to avoid radiation leaking through the boundaries
on the time scale of interest. Thus the sum of the field and kinetic energies of the
system should remain constant.
In the fluid simulations, the energies are evaluated numerically by summing over the grid locations:
$$ FE(t)= \frac{1}{2} \sum_i [E_i^2(t) +B_i^2(t)] \Delta x$$
$$KE(t)= \sum_i n_i(t) [\gamma_i(t) -1] \Delta x$$
$$TE(t)=KE(t) +FE(t)$$
where $i=1,2,3,\ldots,N_x$; here $N_x$ is the number of spatial grid points and
$\Delta x$ denotes the grid spacing. Also,
$E_i(t)$, $B_i(t)$, $n_i(t)$ and $\gamma_i(t)$ are the values of the electric
field, magnetic field, electron density and relativistic factor, respectively, at the $i$-th grid point
at any time $t$.
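The discrete sums for $FE(t)$, $KE(t)$ and $TE(t)$ translate directly into a grid diagnostic. A minimal sketch in normalized units, with array layouts of our own choosing:

```python
import numpy as np

def fluid_energies(E, B, n, v, dx):
    """Discrete field, kinetic and total energy on the grid.
    E, B : (3, Nx) field component arrays (normalized units),
    n    : (Nx,) electron density, v : (3, Nx) fluid velocity (v < 1)."""
    gamma = 1.0 / np.sqrt(1.0 - np.sum(v * v, axis=0))  # relativistic factor
    fe = 0.5 * np.sum(E * E + B * B) * dx               # field energy
    ke = np.sum(n * (gamma - 1.0)) * dx                 # kinetic energy
    return fe, ke, fe + ke
```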
In the PIC simulations, the field energy is calculated in a similar fashion by summing the values
at the grid locations. However, for the kinetic energy, the sum runs over all the particles. Thus, we have
\begin{equation}
KE(t)=\sum_j \left[\gamma_j(t) -1\right]
\end{equation}
where $j=1,2,\ldots,N_p$ represents the index associated with the electrons in the system.
The change in kinetic energy is related to the work done by the electric field on the particles (electrons here).
We thus have
\begin{equation}
\frac{\partial}{\partial t}\int \left[n(\gamma -1) \right] dx = - \int n \vec{v} \cdot \vec{E} dx = - \frac{1}{2}\frac{\partial}{\partial t} \int \left[ E^2 +B^2 \right] dx
\end{equation}
where the last equality uses the fact that the net Poynting flux through the boundaries vanishes for our localized fields.
\section{Spontaneously excited non-propagating time-dependent structures}
The coupled set of laser-plasma equations permits a variety of stationary and propagating exact
solutions, which have been characterized in
the parameter space of frequency vs. group velocity \cite{kaw_92,saxena_06}.
As mentioned earlier, these solutions are interesting and tell us that it is possible in some cases for the radiation to move in a plasma such that the plasma wakefield excited at its front gets absorbed by its tail. For structures with zero group speed, however, the front and tail cannot be distinguished, and the light wave is simply trapped in an electron density cavity that it digs for itself by ponderomotive pressure. Exact analytical forms of such solutions have been obtained by
Esirkepov et al. \cite{bulanov_98}, who showed that there is an upper limit on the amplitude of such structures which defines complete cavitation.
The evolution and collisional interaction of some of these structures have been studied \cite{saxena_06,saxena_07}. At low amplitudes, the behavior of these structures is similar to that of solitons.
At high amplitudes, there is a perceptible difference in their behavior. In fact, we show here that when two high- but unequal-amplitude, oppositely
moving exact single-peak structures collide, as shown in Fig.~(\ref{figure_1}), they merge together and form a time-dependent non-propagating structure.
The amplitude of this spontaneously formed structure oscillates in time, and there is also evidence of radiation leaking out of it.
However, the overall structure persists for a long time.
The amplitude of the radiation, in this case, exceeds the upper permissible limit for static exact solutions [Esirkepov et al. \cite{bulanov_98}] and triggers
a complex interplay of plasma density and radiation field oscillations. It should be noted that even though the original structures had
group speeds of unequal magnitude in opposite directions, the final structure is non-propagating. This has been verified in a number of such collisional studies. There appears to be a preference
for the formation of non-propagating structures, and these non-propagating time-dependent structures survive for a long time.
This suggests that the time-dependent structures are more realistic: they form readily, do not require a
precarious balance of the various fields, and survive for a considerable time.
In the next section, we specifically choose the non-propagating exact structures and deliberately introduce a perturbation (in the form of enhanced radiation) to observe the subsequent evolution.
\section{Time dependence of Esirkepov structures with enhanced radiation}
Esirkepov et al. \cite{bulanov_98} have obtained an exact analytical form for the stationary solitonic structures. We take the analytical form of the solution and express it in terms of the initial conditions for the electric, magnetic and velocity fields. These fields are then evolved in our simulations.
The initial electron density is chosen to satisfy the analytical form of the solution.
The stationary solitary solution in our simulations was chosen for a value of $A_0=1$,
where $A_0$ is the peak value of $R$ of the solution. The laser frequency follows from
$\omega=\sqrt{2\sqrt{1+A_0^2}-2}/A_0 \simeq 0.9102$. We have verified the stationarity and energy conservation of
these exact solutions with both our fluid and PIC codes.
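The eigenfrequency relation $\omega=\sqrt{2\sqrt{1+A_0^2}-2}/A_0$ is easy to tabulate. A small sketch (the function name is ours) that also makes the limits explicit:

```python
import math

def soliton_frequency(a0):
    """Eigenfrequency (in units of omega_pe) of the non-propagating
    soliton of Esirkepov et al. with peak amplitude a0 > 0."""
    return math.sqrt(2.0 * math.sqrt(1.0 + a0 * a0) - 2.0) / a0
```

Note that $\omega < 1$ (i.e., below $\omega_{pe}$) for all $A_0 > 0$, consistent with the radiation being trapped in the cavity, and $\omega \rightarrow 1$ as $A_0 \rightarrow 0$; the frequency decreases monotonically with increasing amplitude.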
We now take the same solution, keep all the fields identical, but enhance the trapped radiation inside by a multiplicative factor of $1.1$. This disturbs the precarious
parallel force balance in the electron momentum equation, leading to oscillations in the electron density.
The evolving profiles of $R$ and $n$ are shown in figures~(\ref{figure_2}) and (\ref{figure_3}), respectively, from both the fluid and PIC simulations.
The two simulations are in remarkable agreement. The oscillations in the density and the radiation field $R$ are quite regular.
In one of the subplots of
figure~(\ref{figure_3}), we show the evolution of the peaks of both the density $n$ and the field $R$. It should be noted that the oscillations of the two fields are always out of phase in time. This can be understood by realizing that the excess radiation trapped inside the structure exerts an excess ponderomotive pressure, which pushes the electron density outward so that it builds up at the edge. As the density cavity gets deeper and broader, the radiation trapped at the center expands outward, lowering its peak. This leads to a drop in the ponderomotive pressure, which is then dominated by the electrostatic pull exerted on the electrons by the ions. Thus the plasma oscillations triggered by the excess radiation continue in time.
It should be noted that these plasma oscillations occur against an inhomogeneous background density profile sustained by the trapped radiation.
Moreover, the relativistic factor $\gamma$ is also a function of space. Such an oscillation would thus be expected to suffer wave breaking.
From the time dependence of the peak of the radiation field $R$, it can be observed that it steadily decreases. This happens as a result of a steady leakage of the radiation from the
edges (see the
inset of figure~(\ref{figure_2}), where the edge of the structure has been zoomed in at $t = 40$),
indicating clearly the leaking of the radiation. The peak of the density oscillations, however, steadily increases with time.
In fact, the density profile is observed to develop several sharply peaked structures.
Around $t \sim 216$, as shown in figure~(\ref{figure_4}), the density acquires a very spiky form.
This, in fact, is a signature of the wave breaking phenomenon. We have tracked the
total energy (TE) evolution in figure~(\ref{figure_5}), which is conserved throughout but shows a
very small dip at $t \sim 216$ in the fluid simulation, exactly when the density spike gets formed.
Despite changing the grid size in the fluid simulation, the energy dip and the density spike typically occur around the same time.
Interestingly, in the PIC simulations too, the density spikes appear around this time.
At the wave breaking point in the fluid code, the energy gets lost to the grid.
In PIC simulations where the total energy incorporates the individual kinetic energies of the particles, the total energy remains conserved (See figure~(\ref{figure_5})).
However, this energy now shows up as the random kinetic energy of the particles.
It is interesting to note that the FE and KE continue to remain out of phase before, during, and after the wave breaking process, as can be viewed from the enhanced inset of figure~(\ref{figure_5}). The evolution of FE and KE match exactly in the fluid and PIC simulations before wave breaking. However, after wave breaking there appears a slight mismatch between the fluid and PIC simulations.
We also considered perturbing the radiation field asymmetrically in the structure.
This choice ensures that at one of the edges radiation pressure dominates whereas at the other edge scalar potential is the dominant force. In this case, from Fig~(\ref{figure_7}),
one observes that the structures show asymmetric oscillations with one edge expanding while the other contracts.
Basically, at the edge where the radiation pressure exceeds the equilibrium value, the radiation tends to expand out.
At the other edge, where $R$ is lower than the
equilibrium value, it is pushed in. Some amount of radiation is again observed to leak out. The asymmetric oscillations
are in much closer qualitative agreement with the results reported in
section III where the collision between unequal structures led to the formation of non-propagating time-dependent structure.
The plasma oscillations also get excited and the amplitude of the density keeps growing. Ultimately, wave breaking occurs at about $t \sim 184$, which is tracked by the formation of
density spikes (see figure~(\ref{figure_8})) and a dip in the value of the total energy, which can be seen from figure~(\ref{figure_9}).
Thereafter one again ends up with structures in which radiation trapped between density peaks survives for a long duration.
\section{Summary and Discussion}
We report observations of time-dependent localized structures in the
1-D fluid as well as PIC simulations for the coupled laser plasma system.
Despite such a time dependence, the structures are found to be fairly robust in the sense that they survive by
retaining their identity for several hundreds of plasma periods.
Such time-dependent structures can form either spontaneously or can also be recreated by disturbing the delicate balance between various fields required in the context of exact solutions.
For instance, the collision between two large- but unequal-amplitude exact solutions also leads to the formation of a non-propagating structure with oscillating amplitudes.
The same is also observed when the exact non-propagating solutions obtained by Esirkepov et al. \cite{bulanov_98} are deliberately disturbed by introducing excess radiation.
It should be noted that while the exact solutions require a very delicate balance between the radiation and density fields, the time-dependent structures need not
satisfy such a stringent condition. Thus, while it would be rather difficult to form the exact solutions experimentally, these time-dependent structures, which form spontaneously and retain their identity for a long time, would be more easily amenable to experimental observation.
This raises several questions. Does the laser plasma system permit a new variety of time-dependent solutions, where the time dependence is not merely restricted to steady propagation?
We also have evidence of obtaining time dependent propagating structures spontaneously which will be reported in a subsequent publication.
\chapter{Distributed Resource Allocation in 5G Cellular Networks}
\title{Distributed Resource Allocation in 5G Cellular Networks\footnote{Book chapter in \emph{Towards 5G: Applications, Requirements and Candidate Technologies}, Wiley, 2015, (Eds. Rath Vannithamby and Shilpa Telwar).
}}
\author{Monowar Hasan and Ekram Hossain \\
University of Manitoba, Canada}
\date{}
\maketitle
\section{Introduction}
The fifth generation (5G) cellular networks are expected to provide a wide variety of high-rate (i.e., 300 Mbps and 60 Mbps in the downlink and uplink,
respectively, in 95 percent of locations and time \cite{metis_5g}) multimedia services. The 5G communication platform is seen as a global unified standard with seamless connectivity among existing standards, e.g., High Speed Packet Access (HSPA), Long Term Evolution-Advanced (LTE-A) and Wireless Fidelity (WiFi). Some of the emerging features and trends of 5G networks are: multi-tier dense heterogeneous networks \cite{horizon_5g, toshiba_5g}, device-to-device (D2D) and machine-to-machine (M2M) communications \cite{toshiba_5g, d2d_5g}, densification of the heterogeneous base stations (e.g., extensive use of relays and small cells) \cite{nw_dens_5g}, cloud-based radio access network \cite{toshiba_5g}, integrated use of multiple radio access technologies \cite{multi_rat_5g}, wireless network virtualization \cite{toshiba_5g}, massive and 3D MIMO \cite{toshiba_5g, mimo_5g}, millimeter wave \cite{mmw_5g} and full duplex \cite{5g_shilpa} communications.
The 5G cellular wireless systems will have a multi-tier architecture consisting of macrocells, different types of licensed small cells and D2D networks to serve users with different quality-of-service (QoS) requirements in a spectrum efficient manner. Distributed resource allocation and interference management is one of the fundamental research challenges for such multi-tier heterogeneous networks. In this chapter, we consider the radio resource allocation problem in a multi-tier orthogonal frequency division multiple access (OFDMA)-based cellular (e.g., 5G LTE-A) network. In particular, we present three novel approaches for distributed resource allocation in such networks utilizing the concepts of stable matching, factor-graph based message passing, and distributed auction.
Matching theory, a sub-field of economics, is a promising concept for distributed resource management in wireless networks. Matching theory allows low-complexity algorithmic manipulations to provide a decentralized self-organizing solution to resource allocation problems. In matching-based resource allocation, each of the agents (e.g., radio resources and transmitter nodes) ranks the opposite set using a preference relation. The solution of the matching assigns the resources to the transmitters depending on these preferences.
The message passing approach for resource allocation provides a low-complexity (e.g., polynomial time) solution by distributing the computational load among the nodes in the network. In the radio resource allocation problems, the decision making agents (e.g., radio resources and the transmitters) form a virtual graphical structure. Each node computes and exchanges simple messages with neighboring nodes in order to find the solution of the resource allocation problem.
Similar to matching-based allocation, the auction method is also inherited from economics and used in wireless resource allocation problems. Resource allocation algorithms based on the auction method provide polynomial-complexity solutions which are shown to deliver near-optimal performance. The auction process evolves with a bidding process, in which unassigned agents (e.g., transmitters) raise the cost and bid for resources simultaneously. Once the bids from all the agents are available, the resources are assigned to the highest bidders.
We illustrate each of the modeling schemes with respect to a practical radio resource allocation problem. In particular, we consider a multi-tier network consisting of a macro base station (MBS), a set of small cell base
stations (SBSs) and corresponding small cell user equipments (SUEs), as well as D2D user equipments (DUEs). There is a common set of radio resources (e.g., resource blocks [RBs]) available to the network tiers (e.g., MBS, SBSs
and DUEs). The SUEs and DUEs use the available resources (e.g., RB and power level) in an underlay manner as long as the interference caused to the macro tier (e.g., macro user equipments [MUEs]) remains below a given threshold. The goal of resource allocation is to allocate the available RBs and transmit power levels to the SUEs and DUEs in order to maximize the spectral efficiency without causing significant interference to the MUEs. We show that due to the nature of the resource allocation problem, the centralized solution is computationally expensive and also incurs huge signaling overhead. Therefore, it may not be feasible to solve the problem by a single centralized controller node (e.g., MBS), especially in a dense network. Hence, distributed solutions with low signaling overhead are desirable.
We assume that readers are familiar with the basics of OFDMA-based cellular wireless networks (e.g., LTE-A networks), as well as have preliminary background on theory of computing (e.g., data structures, algorithms and computational complexity). Followed by a brief theoretical overview of the modeling tools (e.g., stable matching, message passing
and auction algorithm), we present the distributed solution approaches for the resource allocation problem in the aforementioned network setup. We also provide a brief qualitative comparison in terms of various performance
metrics such as complexity, convergence, and algorithm overhead.
The organization of the rest of the chapter is as follows: the system model, related assumptions, and the resource allocation problem are presented in Section \ref{sec:sys_model}. The distributed solutions for the resource allocation problem, e.g., stable matching, message passing and auction method, are discussed in Sections \ref{sec:sm_ra}, \ref{sec:mp_ra}, and \ref{sec:am_ra}, respectively. The qualitative comparisons among the resource allocation approaches are presented in Section \ref{sec:comparisons}. We conclude the chapter in Section \ref{sec:conclusion} highlighting the directions for future research. Key mathematical symbols and notations used in the chapter are summarized in Table \ref{tab:notations}.
\begin{table}[!h]
\centering
\begin{footnotesize}
\begin{tabular}{c P{10.0cm}}
\toprule
\multicolumn{1}{c}{Notation} & \multicolumn{1}{c}{Physical Interpretation} \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Network model:}} \\
$\mathcal{U}^{\mathrm m}$, $\mathcal{U}^{\mathrm s}$, $\mathcal{U}^{\mathrm d}$ & Set of MUE, SUE and D2D pairs, respectively \\
$\mathcal{K}^{\mathrm T}$, $\mathcal{K}^{\mathrm R}$ & Set of underlay transmitters and receivers, respectively \\
$\mathcal{N}$, $\mathcal{L}$ & Set of RBs and power levels, respectively \\
$K$, $N$, $L$ & Total number of underlay transmitters, RBs, and power levels, respectively \\
$u_k$ & The UE associated with underlay transmitter $k$ \\
$x_{k}^{(n,l)}, \mathbf{X}$ & Allocation indicator, whether transmitter $k$ using resource $\lbrace n, l \rbrace$ and the indicator vector, respectively\\
$g_{i,j}^{(n)}$ & Channel gain between link $i,j$ over RB $n$ \\
$\gamma_{u_k}^{(n)}$ & SINR in RB $n$ for the UE $u_k$\\
$\Gamma_{u_k}^{(n,l)}$ & Achievable SINR of the UE $u_k$ over RB $n$ using power level $l$\\
$p_{k}^{(n)}$ & Transmit power of transmitter $k$ over RB $n$\\
$R_{u_k}$ & Achievable data rate for $u_k$ \\
$I^{(n)}$, $I_{\mathrm{max}}^{(n)}$ & Aggregated interference and threshold limit for the RB $n$, respectively \\
$\mathfrak{U}_{k}^{(n,l)}$ & Utility for transmitter $k$ using resource $\lbrace n, l \rbrace$ \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Stable matching:}} \\
$\mu$ & Matching (e.g., allocation) of transmitter to the resources \\
$i_1 \succeq_j i_2$ & Preference relation for agent $j$ (i.e., $i_1$ is more preferred than $i_2$) \\
$\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$, $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$ & Preference profile for the transmitter $k$ and RB $n$, respectively \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Message passing:}} \\
$\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big)$ & Message delivered by the resource $\lbrace n,l \rbrace$ to the transmitter $k$\\
$\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ & Message from transmitter $k$ to the resource $\lbrace n,l \rbrace$ \\
$\psi_{\lbrace n,l \rbrace \rightarrow k}$ & Normalized message from the resource $\lbrace n,l \rbrace$ to the transmitter $k$ \\
$\psi_{ k \rightarrow \lbrace n,l \rbrace }$ & Normalized message from the transmitter $k$ to the resource $\lbrace n,l \rbrace$\\
$\tau_{k}^{(n,l)}$ & Node marginals for the transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Auction method:}} \\
$C_{k}^{(n,l)}$ & Cost for transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
$B_{k}^{(n,l)}$ & Data rate (multiplied by a weighting factor) achieved by transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
$\mathfrak{b}_{k}^{( n,l)}$ & Local bidding information available to transmitter $k$ for the resource $\lbrace n,l \rbrace$ \\
$\epsilon$ & Minimum bid increment parameter \\
$\Theta_{k} = \lbrace n,l \rbrace$ & Assignment of resource $\lbrace n,l \rbrace$ to the transmitter $k$\\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Miscellaneous:}} \\
$|\mathbf{y}| $ & Length of the vector $\mathbf{y}$ \\
$y(t)$ & Value of variable $y$ at any iteration $t$ \\
$z := y$ & Assignment of the value of variable $y$ to the variable $z$ \\
\textit{/* comment */} & Commented text inside algorithms \\
\bottomrule
\end{tabular}
\end{footnotesize}
\caption{List of major notations}
\label{tab:notations}
\end{table}
\section{System Model} \label{sec:sys_model}
\subsection{Network Model and Assumptions} \label{subsec:nw_model}
\begin{figure}[h t b]
\centering
\includegraphics[width=3.0in]{5g_systemmodel.pdf}
\caption{Schematic diagram of the heterogeneous network model. The D2D pairs, SBSs and SUEs are underlaid within the macro tier by reusing same set of radio resources.}
\label{fig:sys_mod}
\end{figure}
Let us consider a transmission scenario of a heterogeneous network as shown in Fig. \ref{fig:sys_mod}. The network consists of one MBS and a set of $C$ cellular MUEs, i.e., $\mathcal{U}^{\mathrm m} = \lbrace 1,2,\cdots, C \rbrace$. There are also $D$ D2D pairs and a cluster of $S$ SBSs located within the coverage area of the MBS. The set of SBSs is denoted by $\mathcal{S} = \lbrace 1, 2, \cdots S\rbrace$. For simplicity, we assume that each SBS serves only one SUE at a given time instance and the set of SUEs is given by $\mathcal{U}^{\mathrm s} = \lbrace 1,2,\cdots, S \rbrace$. The set of D2D pairs is denoted as $\mathcal{U}^{\mathrm d} = \lbrace 1,2,\cdots, D \rbrace$. In addition, the $d$-th element of the sets $\mathcal{U}^{\mathrm d_{T}}$ and $\mathcal{U}^{\mathrm d_{R}}$ denotes the transmitter and receiver UE of the D2D pair $d \in \mathcal{U}^{\mathrm d}$, respectively. The set of UEs in the network is given by $\mathcal{U} = \mathcal{U}^{\mathrm m} \cup \mathcal{U}^{\mathrm s} \cup \mathcal{U}^{\mathrm d}$. For notational convenience, we denote by $\mathcal{K}^{\mathrm T} = \mathcal{S} \cup \mathcal{U}^{\mathrm d_T}$ the set of underlay transmitters (e.g., SBSs and transmitting D2D UEs) and $\mathcal{K}^{\mathrm R} = \mathcal{U}^{\mathrm s} \cup \mathcal{U}^{\mathrm d_{R}}$ denotes the set of underlay receivers (e.g., SUEs and receiving D2D UEs).
The SBSs and DUEs are underlaid within the \textit{macro tier} (e.g., MBS and MUEs). Both the macro tier and the \textit{underlay tier} (e.g., SBSs, SUEs and D2D pairs) use the same set $\mathcal{N} = \lbrace 1, 2, \cdots N \rbrace$ of orthogonal RBs\footnote{The minimum scheduling unit of the LTE-A standard is referred to as an RB. One RB consists of 12 subcarriers (e.g., 180 kHz) in the frequency domain and one sub-frame (e.g., 1 millisecond) in the time domain. For a brief overview of heterogeneous networks in the context of the LTE-A standard refer to \cite[Chapter 1]{hetnet_book_sir}.}. Each transmitter node in the underlay tier (e.g., SBS and D2D transmitter) selects one RB from the available $N$ RBs. In addition, the underlay transmitters are capable of selecting the transmit power from a finite set of power levels, i.e., $\mathcal{L} = \lbrace 1, 2, \cdots L \rbrace$.
Each SBS and D2D transmitter should select a suitable RB-power level combination. This RB-power level combination is referred to as a \textit{transmission alignment}\footnote{Throughout this chapter we use the terms \textit{resource} and \textit{transmission alignment} interchangeably.} \cite{prabo_journal}. For each RB $n \in \mathcal{N}$, there is a predefined threshold $I_{\mathrm{max}}^{(n)}$ for the maximum aggregated interference caused by the underlay tier to the macro tier. We assume that the value of $I_{\mathrm{max}}^{(n)}$ is known to the underlay transmitters by using the feedback control channels. An underlay transmitter (i.e., SBS or transmitter DUE) is allowed to use a particular transmission alignment as long as the cross-tier interference to the MUEs is within the threshold limit.
The system model considered here is a \textit{multi-tier heterogeneous network} since each of the network tiers (e.g., macro tier and underlay tier consisting of small cells and D2D UEs) has a different transmit power range, coverage region and specific set of users with different application requirements. It is assumed that the user association to the base stations (either MBS or SBSs) is completed prior to resource allocation. In addition, the potential DUEs are discovered during the D2D session setup by transmitting known synchronization or reference signals (i.e., beacons) \cite{network_asst_d2d}. According to our system model, only one MUE is served on each RB to avoid co-tier interference within the macro tier. However, multiple underlay UEs (e.g., SUEs and DUEs) can reuse the same RB to improve the spectrum utilization. This reuse causes severe cross-tier interference to the MUEs, as well as co-tier interference within the underlay tier, which leads to the requirement of an efficient resource allocation scheme.
\subsection{Achievable Data Rate}
The MBS transmits to the MUEs using a fixed power $p_{M}^{(n)} > 0$ for $\forall n$. For each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$, the transmit power over the RBs is determined by the vector $\mathbf{P}_{\mathrm k} = \left[ p_k^{(1)}, p_k^{(2)}, \cdots, p_k^{(N)} \right]^{\mathsf{T}}$ where $p_k^{(n)} \geq 0$ denotes the transmit power level of the transmitter $k$ over RB $n$. The transmit power $p_k^{(n)}, ~\forall n$ must be selected from the finite set of power levels $\mathcal{L}$. Note that if the RB $n$ is not allocated to the transmitter $k$, the corresponding power variable $p_k^{(n)} = 0$. Since we assume that each underlay transmitter selects only one RB, only one element in the power vector $\mathbf{P}_{\mathrm k}$ is non-zero.
All links are assumed to experience independent block fading. We denote by $g_{i, j}^{(n)}$ the channel gain between the links $i$ and $j$ over RB $n$, defined by $g_{i, j}^{(n)} = \beta_{i,j}^{(n)} d_{i,j}^{-\alpha} $ where $\beta_{i,j}^{(n)}$ denotes the channel fading component between link $i$ and $j$ over RB $n$, $d_{i,j}$ is the distance between node $i$ and $j$, and $\alpha$ is the path-loss exponent.
For the SUEs, we denote by $u_k$ the SUE associated with SBS $k \in \mathcal{S}$, and for the DUEs, $u_k$ refers to the receiving D2D UE of the D2D transmitter $k \in \mathcal{U}^{\mathrm d_{T}}$. The received signal-to-interference-plus-noise ratio (SINR) for any arbitrary SUE or D2D receiver, i.e., $u_k \in \mathcal{K}^{\mathrm R}, k \in \mathcal{K}^{\mathrm T}$ over RB $n$ is given by
\begin{equation} \label{eq:sinr_underlay}
\gamma_{u_k}^{(n)} = \frac{g_{k, u_k}^{(n)}p_{k}^{(n)}}{\underbrace{ g_{M, u_k}^{(n)}p_{M}^{(n)}}_\text{interference from macro tier} + \underbrace{\sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k }} g_{k^\prime, u_k}^{(n)} p_{k^\prime}^{(n)}}_\text{interference from underlay tier} + ~\sigma^2}
\end{equation}
where $g_{k,u_k}^{(n)}$ is the link gain between the SBS and SUE (e.g., $u_k \in \mathcal{U}^{\mathrm s}, k \in \mathcal{S}$) or the link gain between the D2D UEs (e.g., $u_k \in \mathcal{U}^{\mathrm d_{R}}, k \in \mathcal{U}^{\mathrm d_T}$), and $g_{M, u_k}^{(n)}$ is the interference gain between the MBS and the UE $u_k$. In Equation (\ref{eq:sinr_underlay}), the variable $\sigma^2 = N_0 B_{\mathrm{RB}}$ where $B_{\mathrm {RB}}$ is the bandwidth corresponding to an RB and $N_0$ denotes the thermal noise. Similarly, the SINR for the MUE $m \in \mathcal{U}^{\mathrm m}$ over RB $n$ can be written as follows:
\begin{equation}
\gamma_{m}^{(n)} = \frac{g_{M, m}^{(n)}p_{M}^{(n)}}{\sum\limits_{\substack{ k \in \mathcal{K}^{\mathrm T} }} g_{k, m}^{(n)} p_{k}^{(n)} + ~\sigma^2}.
\end{equation}
Given the SINR, the data rate of the UE $u \in \mathcal{U}$ over RB $n$ can be calculated according to Shannon's formula, i.e., $R_{u}^{(n)} = B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u}^{(n)} \right)$.
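The SINR and rate expressions above can be sketched in Python as follows; the function names, argument layout, and numerical values are illustrative assumptions, not part of the chapter's formal model.

```python
import math

def sinr_underlay(g_direct, p_k, g_mbs, p_mbs, cross_gains, cross_powers, noise):
    """SINR of an underlay receiver u_k on one RB: desired signal power
    over (macro-tier interference + underlay-tier interference + noise),
    mirroring Equation (1)."""
    interference = g_mbs * p_mbs + sum(
        g * p for g, p in zip(cross_gains, cross_powers))
    return (g_direct * p_k) / (interference + noise)

def rate_bps(sinr, rb_bandwidth_hz=180e3):
    """Shannon rate on one RB: R = B_RB * log2(1 + SINR),
    with B_RB = 180 kHz as in the LTE-A RB definition."""
    return rb_bandwidth_hz * math.log2(1.0 + sinr)
```

For instance, with one cross-tier interferer, `rate_bps(sinr_underlay(...))` gives the per-RB rate that the allocation algorithms below try to maximize.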
\subsection{Formulation of the Resource Allocation Problem} \label{subsec:rap_org}
The objective of resource (i.e., RB and transmit power) allocation problem is to obtain the assignment of RB and power level (e.g., transmission alignment) for the underlay UEs (e.g., D2D UEs and SUEs) that maximizes the achievable sum data rate. The RB and power level allocation indicator for any underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ is denoted by a binary decision variable $x_{k}^{(n, l)}$ where
\begin{equation}
x_{k}^{(n, l)} = \begin{cases}
1, \quad \text{if the transmitter $k$ is transmitting over RB $n$ with power level $l$} \\
0, \quad \text{otherwise.}
\end{cases}
\end{equation}
Note that the decision variable $x_{k}^{(n, l)} = 1$ implies that $p_k^{(n)} = l$. Let $K = S + D$ denote the total number of underlay transmitters. The achievable data rate of an underlay UE $u_k$ with the corresponding transmitter $k$ is written as
\begin{equation} \label{eq:rate_ue}
R_{u_k} = \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~x_{k}^{(n,l)} B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u_k}^{(n)} \right).
\end{equation}
The aggregated interference experienced on RB $n$ is given by $I^{(n)} = \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)}$, where $m_k^* = \underset{m}{\operatorname{argmax}}~ g_{k,m}^{(n)}, ~\forall m \in \mathcal{U}^{\mathrm m}$.
In order to calculate the aggregated interference $I^{(n)}$ on RB $n$ we use the concept of a reference user \cite{ref_user}. For any RB $n$, the interference caused by the underlay transmitter $k$ is determined by the highest gain between the transmitter $k$ and the MUEs, e.g., the MUE $m_k^*$ that is most affected by the transmitter $k$. Satisfying the interference constraint with respect to the reference user will also satisfy the interference constraints for the other MUEs. As mentioned in Section \ref{subsec:nw_model}, an underlay transmitter is allowed to use a particular transmission alignment only when it does not violate the interference threshold to the MUEs, i.e., $I^{(n)} < I_{\mathrm{max}}^{(n)}, ~\forall n$. Mathematically, the resource allocation problem can be expressed by using the following optimization formulation:
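The reference-user rule can be sketched as follows; the dictionary-based data layout is a hypothetical convenience, not the chapter's notation.

```python
def aggregated_interference(alloc, gains_to_mues):
    """Aggregated interference I^(n) on one RB using the reference-user
    rule: each active underlay transmitter k contributes through its
    highest gain to any MUE (the most-affected MUE m_k^*).

    alloc: dict transmitter -> transmit power on this RB (0 if inactive)
    gains_to_mues: dict transmitter -> list of gains to every MUE
    """
    total = 0.0
    for k, p in alloc.items():
        if p > 0:
            total += max(gains_to_mues[k]) * p  # gain to reference MUE m_k^*
    return total

def alignment_admissible(alloc, gains_to_mues, i_max):
    """An alignment is admissible iff I^(n) < I_max^(n)."""
    return aggregated_interference(alloc, gains_to_mues) < i_max
```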
\begin{myoptimizationproblem} \label{opt:combopt}
\vspace*{-2.0em}
\begin{subequations}
\begin{align}
\hspace{3em} \underset{x_{k}^{(n,l)},~ p_k^{(n)}}{\operatorname{max}} ~ \sum_{\substack{k =1}}^{K} \sum_{n = 1}^{N} \sum_{l = 1}^{L} ~x_{k}^{(n,l)} & B_{\mathrm {RB}} \log_2\left( 1 + \gamma_{u_k}^{(n)} \right) \nonumber \\
\text{subject~ to:} \hspace{7em} \nonumber\\
\sum_{k =1}^{K}\sum_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} &< I_{\mathrm{max}}^{(n)}, \quad \forall n \in\mathcal{N} \label{eq:opt_intf}\\
\sum_{n = 1}^{N} \sum_{l = 1}^{L} x_{k}^{(n,l)} &\leq 1, \quad \quad ~~\forall k \in \mathcal{K}^{\mathrm T} \label{eq:opt_rbpw}\\
x_{k}^{(n,l)} &\in \lbrace 0, 1 \rbrace, ~~~ \forall k \in \mathcal{K}^{\mathrm T},~\forall n \in \mathcal{N},~\forall l \in \mathcal{L} \label{eq:opt_bin}
\end{align}
\end{subequations}
\end{myoptimizationproblem}
where \vspace*{-0.3em} \begin{equation} \label{eq:sinr_formulation}
\gamma_{u_k}^{(n)} = \frac{g_{k, u_k}^{(n)}p_{k}^{(n)}}{ g_{M, u_k}^{(n)}p_{M}^{(n)} + \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}},\\ k^\prime \neq k }} \sum\limits_{l^\prime = 1}^{L} x_{k^\prime}^{(n,l^\prime)} g_{k^\prime, u_k}^{(n)} p_{k^\prime}^{(n)} + ~\sigma^2}.
\end{equation}
The objective of the resource allocation problem $\mathbf{P\ref{opt:combopt}}$ is to maximize the data rate of the SUEs and DUEs subject to the set of constraints given by Equations (\ref{eq:opt_intf})-(\ref{eq:opt_bin}). With the constraint in Equation (\ref{eq:opt_intf}), the aggregated interference caused to the MUEs by the underlay transmitters on each RB is limited by a predefined threshold. The constraint in Equation (\ref{eq:opt_rbpw}) indicates that the number of RBs selected by each underlay transmitter should be at most one and each transmitter can only select one power level on each RB. The binary indicator variable for transmission alignment selection is represented by the constraint in Equation (\ref{eq:opt_bin}).
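The three constraint families can be checked mechanically for any candidate allocation; a minimal sketch, assuming a flat dictionary encoding of the indicator variables (not the chapter's matrix notation):

```python
def feasible(x, gains_ref, power_of_level, i_max):
    """Check the interference, single-alignment, and binary constraints
    of problem P1 for a candidate allocation.

    x: dict (k, n, l) -> 0/1 indicator
    gains_ref: dict (k, n) -> gain from transmitter k to its reference MUE on RB n
    power_of_level: dict l -> transmit power of power level l
    i_max: dict n -> aggregated interference threshold on RB n
    """
    # binary indicators
    if any(v not in (0, 1) for v in x.values()):
        return False
    # each transmitter selects at most one (RB, power level) pair
    per_tx = {}
    for (k, n, l), v in x.items():
        per_tx[k] = per_tx.get(k, 0) + v
    if any(c > 1 for c in per_tx.values()):
        return False
    # aggregated interference on every RB stays strictly below threshold
    per_rb = {}
    for (k, n, l), v in x.items():
        per_rb[n] = per_rb.get(n, 0.0) + v * gains_ref[(k, n)] * power_of_level[l]
    return all(i < i_max[n] for n, i in per_rb.items())
```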
\begin{corollary}
The resource allocation problem $\mathbf{P\ref{opt:combopt}}$ is a combinatorial non-convex non-linear optimization problem and the centralized solution of the above problem is strongly NP-hard, especially for large sets $\mathcal{U}$, $\mathcal{N}$, and $\mathcal{L}$.
\end{corollary}
The complexity to solve the above problem using exhaustive search is of $\mathcal{O}\left( \left(NL \right)^{K} \right)$. As an example, when $N=6, L=3, $ and $K = 3+2 = 5$, the decision set (e.g., search space) contains $1889568$ possible transmission alignments.
Considering the computational overhead, it is not feasible to solve the resource allocation problem by a single central controller (e.g., MBS) in a practical system; and such a centralized solution approach requires all the channel state information (CSI) to be available to the MBS.
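The size of the exhaustive-search decision set quoted above follows directly from the combinatorics: each of the $K$ underlay transmitters independently picks one of $NL$ alignments. A one-line check reproduces the chapter's example:

```python
def search_space_size(num_rbs, num_power_levels, num_transmitters):
    """Size of the exhaustive-search decision set, O((N*L)^K)."""
    return (num_rbs * num_power_levels) ** num_transmitters

# The example from the text: N = 6 RBs, L = 3 power levels, K = 3 + 2 = 5
assert search_space_size(6, 3, 5) == 1889568
```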
Due to mathematical intractability of solving the above resource allocation problem, in the following we present three distributed heuristic solution approaches, namely, stable matching, factor graph based message passing, and distributed auction-based approaches. The distributed solutions are developed under the assumption that the system is feasible, i.e., given the resources and parameters (e.g., size of the network, interference thresholds etc.), it is possible to obtain an allocation that satisfies all the constraints of the original optimization problem.
\section{Resource Allocation Using Stable Matching} \label{sec:sm_ra}
The resource allocation approach using stable matching involves multiple decision-making agents, i.e., the available radio resources (transmission alignments) and the underlay transmitters; and the solutions (i.e., matchings between transmission alignments and transmitters) are produced by individual actions of the agents. The actions, i.e., matching requests and confirmation or rejection, are determined by the given \textit{preference profiles}, i.e., each agent holds a list of preferred matches over the opposite set. The matching outcome yields mutually beneficial assignments between the transmitters and available resources that are individually conducted by such preference lists. In our model, the preference could be based on CSI parameters and the achievable SINR. \textit{Stability} in matching implies that, with regard to their initial preferences, neither the underlay transmitters nor the MBS (e.g., transmission alignments) have an incentive to alter the allocation.
\subsection{Concept of Matching}
A \textit{matching} (i.e., allocation) is given as an assignment of transmission alignment to the underlay transmitters forming the set $\lbrace k, n, l \rbrace \in \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L}$. According to our system model, each underlay transmitter is assigned to only one RB; however, multiple transmitters can transmit on the same RB to improve spectrum utilization. This scheme corresponds to a \textit{many-to-one} matching in the theory of stable matching. More formally the matching can be defined as follows \cite{sm_def}:
\begin{definition} \label{def:matching}
A matching $\mu$ is defined as a function, i.e., $\mu: \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L} \rightarrow \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L}$ such that
\begin{enumerate} [label={\roman*)}]
\setlength{\itemsep}{0.5pt}%
\setlength{\parskip}{0pt}%
\item $\mu(k) \in \mathcal{N} \times \mathcal{L} $ and $|\mu_l(n)| \in \lbrace 0,1 \rbrace$ \quad \mbox{and}
\item $\mu(n) \in \left\lbrace\mathcal{K}^{\mathrm T} \times \mathcal{L}\right\rbrace \cup \lbrace \varnothing \rbrace$ and $|\mu(n)| \in \lbrace 1, 2, \ldots, K \rbrace$
\end{enumerate}
where $\mu(k) = \lbrace n, l\rbrace \Leftrightarrow \mu(n) = \lbrace k, l\rbrace$ for $ \forall k \in \mathcal{K}^{\mathrm T}, \forall n \in \mathcal{N}, \forall l \in \mathcal{L},$ and $|\mu(\cdot)|$ denotes the cardinality of matching outcome $\mu(\cdot)$.
\end{definition}
The above \textbf{Definition \ref{def:matching}} implies that $\mu$ is a one-to-one matching if the input to the function is an underlay transmitter. On the other hand, $\mu$ is a one-to-many function, i.e., $\mu_l(n)$ is not unique if the input to the function is an RB. The interpretation of $\mu(n) = \varnothing $ implies that for some RB $n \in \mathcal{N}$ the corresponding RB is unused by any underlay transmitter under the matching $\mu$. The outcome of the matching determines the RB allocation vector and corresponding power level, e.g., $\mu \equiv \mathbf{X} $, where
\begin{equation} \label{eq:rap_X}
\mathbf{X} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}.
\end{equation}
\subsection{Utility Function and Preference Profile}
Let the parameter $\Gamma_{u_k}^{(n, l)} \triangleq {\gamma_{u_k}^{(n)}}_{ \!\! \vert p_k^{(n)} = l }$ denote the achievable SINR of the UE $u_k$ over RB $n$ using power level $l$ (e.g., $p_k^{(n)} = l$) where $\gamma_{u_k}^{(n)}$ is given by Equation (\ref{eq:sinr_formulation}). We express the data rate as a function of SINR. In particular, let $\mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) = B_{\mathrm {RB}} \log_2 \left(1 + \Gamma_{u_k}^{(n, l)} \right)$ denote the achievable data rate for the transmitter $k$ over RB $n$ using power level $l$. The utility of an underlay transmitter for a particular transmission alignment is determined by two factors, i.e., the achievable data rate for a given RB power level combination, and an additional cost function that represents the aggregated interference caused to the MUEs on that RB. In particular, the \textit{utility} of the underlay transmitter $k$ for a given RB $n$ and power level $l$ is given by
\begin{equation} \label{eq:sm_utility}
\mathfrak{U}_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) - w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right)
\end{equation}
where $w_1$ and $w_2$ are biasing factors that can be selected based on which network tier (i.e., the macro or the underlay tier) should be given priority for resource allocation \cite{prabo_journal}. As mentioned earlier, each underlay transmitter and each RB hold a list of preferred matches. The preference profile of an underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ over the set of available RBs $\mathcal{N}$ and power levels $\mathcal{L}$ is defined as a vector of linear order $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L}) = \left[\mathfrak{U}_{k}^{(n,l)} \right]_{n \in \mathcal{N}, l \in \mathcal{L}}$. We denote by $\lbrace n_1, l_1 \rbrace \succeq_k \lbrace n_2, l_2 \rbrace$ that the transmitter $k$ prefers the transmission alignment $\lbrace n_1, l_1 \rbrace$ to $\lbrace n_2, l_2 \rbrace$, and consequently, $\mathfrak{U}_{k}^{(n_1,l_1)} > \mathfrak{U}_{k}^{(n_2,l_2)}$. Similarly, each RB holds a preference over the underlay transmitters and power levels, given by $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L}) = \left[\mathfrak{U}_{k}^{(n,l)} \right]_{k \in \mathcal{K}^{\mathrm T}, l \in \mathcal{L}}$.
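To make the construction of a preference profile concrete, the following sketch evaluates the utility above for a few hypothetical transmission alignments and sorts them into a preference list (the bandwidth, SINR, and interference values are purely illustrative, not taken from the system model):

```python
import math

B_RB = 180e3  # illustrative RB bandwidth in Hz

def utility(sinr, aggr_interference, i_max, w1=1.0, w2=1.0):
    """Utility of an underlay transmitter for one (RB, power level) pair:
    weighted achievable rate minus weighted excess interference."""
    rate = B_RB * math.log2(1.0 + sinr)
    return w1 * rate - w2 * (aggr_interference - i_max)

def preference_profile(utilities):
    """Given {(n, l): utility}, return the (RB, power level) pairs ordered
    from most to least preferred."""
    return sorted(utilities, key=utilities.get, reverse=True)

utils = {(1, 1): utility(2.0, 1e-9, 1e-8),
         (1, 2): utility(5.0, 2e-9, 1e-8),
         (2, 1): utility(0.5, 0.5e-9, 1e-8)}
prefs = preference_profile(utils)   # prefs[0] is the best alignment
```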
\subsection{Algorithm Development}
The matching between transmission alignments and transmitters is performed in an iterative manner as presented in \textbf{Algorithm \ref{alg:ta_sm}}. While a transmitter is unallocated and has a non-empty preference list, the transmitter is temporarily assigned to its first preference over transmission alignments, e.g., the pair of RB and power level $\lbrace n,l \rbrace$. If the allocation to the RB $n$ does not violate the tolerable interference limit $I_{\mathrm{max}}^{(n)}$, the allocation persists. Otherwise, the least preferred transmitter(s) in the preference list of RB $n$ are removed, even if they were allocated previously, until the aggregated interference on the RB $n$ falls below the threshold. The process terminates when no transmitter remains unallocated. Since the iterative process dynamically updates the preference lists, the procedure above ends up with a local stable matching \cite{matching_org_paper}.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Assignment of transmission alignments using stable matching}
\label{alg:ta_sm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\REQUIRE The preference profiles $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$,~ $\forall k \in \mathcal{K}^{\mathrm T}$ and $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$,~ $\forall n \in \mathcal{N}$.
\ENSURE The transmission alignment indicator $\mathbf{X} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}$.
\STATE Initialize $\mathbf{X} := \mathbf{0}$.
\WHILE{ some transmitter $k$ is unassigned \AND $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ is non-empty }
\STATE $\left\lbrace n_{\mathrm{mp}},l_{\mathrm{mp}} \right\rbrace :=$ most preferred RB with power level $l_{\mathrm{mp}}$ from the profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$.
\STATE Set $x_{k}^{\left(n_{\mathrm{mp}},l_{\mathrm{mp}} \right)} := 1$. ~\COMMENT{\footnotesize Temporarily assign the RB and power level to the transmitter $k$}
\STATE $\mathfrak{I}^{(n_\mathrm{mp})} := g_{k,m_k^*}^{(n_\mathrm{mp})} l_{\mathrm{mp}} + \!\! \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}},\\k^\prime \neq k}}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n_\mathrm{mp}, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n_\mathrm{mp})} p_{k^\prime}^{(n_\mathrm{mp})}$. ~\COMMENT{\footnotesize Estimate interference of $n_\mathrm{mp}$}
\IF{$\mathfrak{I}^{(n_\mathrm{mp})} \geq I_{\mathrm{max}}^{(n_\mathrm{mp})} $}
\REPEAT
\STATE $ \left\lbrace k_{\mathrm{lp}}, l_{\mathrm{lp}} \right\rbrace :=$ least preferred transmitter with power level $l_{\mathrm{lp}}$ assigned to $n_{\mathrm{mp}}$.
\STATE Set $x_{k_{\mathrm{lp}}}^{\left( n_{\mathrm{mp}}, l_{\mathrm{lp}}\right)} := 0$. ~\COMMENT{\footnotesize Revoke assignment due to interference threshold violation}
\STATE $\mathfrak{I}^{(n_\mathrm{mp})} := \sum\limits_{k^\prime =1}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n_\mathrm{mp}, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n_\mathrm{mp})} p_{k^\prime}^{(n_\mathrm{mp})}$. ~\COMMENT{\footnotesize Update interference level}
{ \footnotesize \textit{/* Update preference profiles */} }
\FORALL {successor $\lbrace \hat{k}_{\mathrm{lp}}, \hat{l}_{\mathrm{lp}} \rbrace$ of $ \left\lbrace k_{\mathrm{lp}}, l_{\mathrm{lp}} \right\rbrace$ on profile $\boldsymbol{\mathscr{P}}_{n_{\mathrm{mp}}}(\mathcal{K}^{\mathrm T}, \mathcal{L})$}
\STATE remove $\lbrace \hat{k}_{\mathrm{lp}}, \hat{l}_{\mathrm{lp}}\rbrace$ from $\boldsymbol{\mathscr{P}}_{n_{\mathrm{mp}}}(\mathcal{K}^{\mathrm T}, \mathcal{L})$.
\STATE remove $\left\lbrace n_{\mathrm{mp}}, l_{\mathrm{mp}} \right\rbrace$ from $\boldsymbol{\mathscr{P}}_{\hat{k}_{\mathrm{lp}}}(\mathcal{N}, \mathcal{L})$.
\ENDFOR
\UNTIL{$\mathfrak{I}^{(n_\mathrm{mp})} < I_{\mathrm{max}}^{(n_\mathrm{mp})}$ }
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
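A compact functional sketch of the matching subroutine above may help fix ideas. It is not the exact pseudocode: for brevity the power level index doubles as the transmit power, channel gains are per transmitter, and each RB ranks only transmitters (power levels are dropped from the RB-side preferences) via a hypothetical `rb_rank` table:

```python
def match_alignments(prefs, gains, i_max, rb_rank):
    """Simplified deferred-acceptance matching with interference eviction.
    prefs[k]: ordered preference list of transmitter k over (n, l) pairs;
    gains[k]: channel gain toward the victim MUE; i_max[n]: RB threshold;
    rb_rank[n][k]: RB n's preference position of k (lower = more preferred)."""
    prefs = {k: list(p) for k, p in prefs.items()}
    assigned = {}                                   # k -> (n, l)
    while True:
        free = [k for k in prefs if k not in assigned and prefs[k]]
        if not free:
            break
        k = free[0]
        n, l = prefs[k][0]
        assigned[k] = (n, l)                        # temporary assignment
        # evict least preferred holders while RB n's interference is too high
        while sum(gains[k2] * l2 for k2, (n2, l2) in assigned.items()
                  if n2 == n) >= i_max[n]:
            holders = [k2 for k2, (n2, _) in assigned.items() if n2 == n]
            worst = max(holders, key=lambda k2: rb_rank[n][k2])
            nl = assigned.pop(worst)
            if nl in prefs[worst]:
                prefs[worst].remove(nl)             # never propose here again
    return assigned

prefs = {'k1': [(1, 2), (2, 1)], 'k2': [(1, 2), (2, 2)]}
gains = {'k1': 1.0, 'k2': 1.0}
i_max = {1: 3.0, 2: 5.0}
rb_rank = {1: {'k1': 0, 'k2': 1}, 2: {'k1': 0, 'k2': 1}}
assigned = match_alignments(prefs, gains, i_max, rb_rank)
```

In the toy instance both transmitters first propose RB 1, the threshold is violated, and the less preferred transmitter is evicted and falls back to RB 2.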
The overall stable matching-based resource allocation approach is summarized in \textbf{Algorithm \ref{alg:ra_sm}}. Note that \textbf{Algorithm \ref{alg:ta_sm}} is executed repeatedly. \textbf{Algorithm \ref{alg:ra_sm}} converges when the outcomes of two consecutive local matchings are identical, i.e., $\mathbf{X}(t) = \mathbf{X}(t-1)$ and, as a consequence, $R(t) = R(t-1)$, where $R(t) = \sum\limits_{k=1}^{K} R_{u_k}(t)$ denotes the achievable sum rate of the underlay tier at iteration $t$.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Stable matching-based resource allocation}
\label{alg:ra_sm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ randomly selects a transmission alignment and the MBS broadcasts the aggregated interference of each RB using pilot signals.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ builds the preference profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ from the CSI estimations and the utility function given by Equation (\ref{eq:sm_utility}).
\STATE For each $n \in \mathcal{N}$, the MBS builds the preference profiles $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$.
\STATE Initialize number of iterations $t := 1$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than some predefined threshold $T_{\mathrm{max}}$}
\STATE MBS obtains a local stable matching $\mathbf{X}(t)$ using \textbf{Algorithm \ref{alg:ta_sm}}, calculates the aggregated interference $I^{(n)}(t)$ for $\forall n$ and informs the transmitters.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ updates the preference profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ based on current allocation vector $\mathbf{X}(t)$ and interference level $I^{(n)}(t)$.
\STATE MBS updates the preference profile $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$ for $\forall n \in \mathcal{N}$ using $\mathbf{X}(t)$ and $I^{(n)}(t)$.
\STATE Update $t := t+1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the RB and power levels to the SBSs and D2D UEs based on the matching obtained from the update phase.
\end{algorithmic}
\end{algorithm}
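The update phase of the algorithm above can be abstracted as a fixed-point iteration; the sketch below is illustrative only, with `step` standing in for one execution of the local matching subroutine together with the preference-profile updates:

```python
def iterate_matching(step, x0, t_max=50):
    """Repeat the local matching 'step' until two consecutive allocations
    coincide (X(t) == X(t-1)) or t_max iterations are reached; returns the
    final allocation and the number of iterations used."""
    x_prev, t = x0, 0
    while t < t_max:
        x_next = step(x_prev)
        if x_next == x_prev:       # converged
            return x_next, t
        x_prev, t = x_next, t + 1
    return x_prev, t

# toy 'step': the allocation state grows until it saturates at 5
result = iterate_matching(lambda x: min(x + 1, 5), 0)
```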
\subsection{Stability, Optimality, and Complexity of the Solution}
In this section, we analyze the solution obtained by stable matching approach. The stability, optimality, and the complexity of the algorithm are discussed in the following.
\subsubsection{Stability}
The notion of stability in the matching $\mu$ means that none of the agents (i.e., neither the underlay transmitters nor the resources) prefers to change the allocation obtained by $\mu$. Hence, the matching $\mu$ is stable if no transmitter and no resource that are not allocated to each other
in $\mu$ prefer each other to their allocation in $\mu$. A transmitter and a resource are said to be \textit{acceptable} to each other if each prefers the other to remaining unallocated. In addition, a matching $\mu$ is called \textit{individually rational} if no agent $\tilde{\jmath}$ prefers being unallocated to its match $\mu(\tilde{\jmath})$. Before formally defining the stability of matching, we introduce the notion of a \textit{blocking pair}, which is defined as follows:
\begin{definition} \label{def:blocking}
A matching $\mu$ is \textbf{blocked} by a pair of agents $(i,j)$ if they prefer each other to the matching obtained by $\mu$, i.e., $i \succeq_{j} \mu(j)$ and $j \succeq_{i} \mu(i)$.
\end{definition}
Using the above definition, the stability of the matching can be defined as follows \cite[Chapter 5]{matchingbook_two}:
\begin{definition} \label{def:stability}
A matching $\mu$ is \textbf{stable} if it is individually rational and there is no tuple $(k, n, l)$ within the set of acceptable agents such that $k$ prefers $\lbrace n,l \rbrace$ to $\mu(k)$ and $n$ prefers $\lbrace k,l \rbrace$ to $\mu(n)$, i.e., not blocked by any pair of agents.
\end{definition}
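As an illustration of the two definitions above, the following sketch enumerates blocking tuples for a simplified one-to-one instance (the rank dictionaries and names are hypothetical; a matching is stable exactly when the returned list is empty):

```python
def blocking_pairs(mu, tx_rank, rb_rank):
    """Enumerate tuples (k, n, l) that block the matching mu: transmitter k
    strictly prefers (n, l) to mu[k], and RB n strictly prefers (k, l) to
    every transmitter currently matched to it. mu maps each transmitter to
    its (n, l) pair; rank dicts give preference positions (lower = better)."""
    pairs = []
    for k, current in mu.items():
        for (n, l), pos in tx_rank[k].items():
            if pos >= tx_rank[k][current]:
                continue                      # k does not prefer (n, l)
            matched = [(k2, l2) for k2, (n2, l2) in mu.items() if n2 == n]
            if all(rb_rank[n][(k, l)] < rb_rank[n][m] for m in matched):
                pairs.append((k, n, l))
    return pairs

mu = {'k1': (1, 1), 'k2': (2, 1)}
tx_rank = {'k1': {(1, 1): 0, (2, 1): 1}, 'k2': {(1, 1): 0, (2, 1): 1}}
rb_rank = {1: {('k1', 1): 0, ('k2', 1): 1}, 2: {('k1', 1): 0, ('k2', 1): 1}}
blocked = blocking_pairs(mu, tx_rank, rb_rank)   # empty list: mu is stable
```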
The following theorem shows that the solution obtained by the matching algorithm is stable.
\begin{theorem} \label{thm:stability}
The assignment performed in \textbf{Algorithm \ref{alg:ta_sm}} leads to a stable allocation.
\end{theorem}
\begin{proof}
We prove the theorem by contradiction. Let $\mu$ be a matching obtained by \textbf{Algorithm \ref{alg:ta_sm}}, and assume that the resource $\lbrace n, l \rbrace$ is not allocated to the transmitter $k$ although it occupies a higher position in the preference list of $k$. Under this assumption,
the tuple $(k, n, l)$ would block $\mu$. Since the position of the resource $\lbrace n, l \rbrace$ in the preference profile of $k$ is higher compared to any resource $\lbrace \hat{n}, \hat{l} \rbrace$ that is matched by $\mu$, i.e., $\lbrace n, l \rbrace \succeq_{k} \mu(k) $, transmitter $k$ must have selected $\lbrace n, l \rbrace$ before the algorithm terminated. However, the resource $\lbrace n,l \rbrace$ is not assigned to transmitter $k$ in the matching outcome $\mu$. This implies that
the assignment of $\lbrace n , l \rbrace$ to $k$ was revoked due to an interference threshold violation (e.g., line 9 in \textbf{Algorithm \ref{alg:ta_sm}}) and that $(k, \hat{n}, \hat{l})$ is a better feasible assignment. As a result, the tuple $(k, n, l)$ does not block $\mu$, which contradicts our assumption. The proof concludes since no blocking pair exists, and therefore, the matching outcome $\mu$ is stable.
\end{proof}
It is worth mentioning that the assignment is stable at each iteration of \textbf{Algorithm \ref{alg:ta_sm}}. Since after evaluation of the utility, the preference profiles are updated and the matching subroutine is repeated, a stable allocation is obtained at each iteration.
\subsubsection{Optimality}
The optimality property of the stable matching approach can be characterized using the definition of weak Pareto optimality. Let $\mathcal{R}_{\mu}$ denote the sum-rate obtained by matching $\mu$. A matching $\mu$ is weak Pareto optimal if there is no other matching $ \widehat{\mu}$ that achieves a strictly better sum-rate, i.e., $\mathcal{R}_{\widehat{\mu}} > \mathcal{R}_{\mu} $ \cite{sm_def}.
\begin{theorem}
The stable matching-based resource allocation algorithm is weak Pareto optimal.
\end{theorem}
\begin{proof}
Let $\mu$ be the stable allocation obtained by \textbf{Algorithm \ref{alg:ta_sm}} and assume, for the sake of contradiction, that $\widehat{\mu}$ is a stable outcome better than $\mu$, i.e., $\widehat{\mu}$ achieves a higher sum-rate. Since the allocation $\widehat{\mu}$ is better than $\mu$, there exists at least one resource $\lbrace \hat{n}, \hat{l} \rbrace$ allocated to a transmitter $k$ in $\widehat{\mu}$, while $k$ is allocated to the resource $\lbrace n, l \rbrace$ in $\mu$. According to our assumption, $k$ prefers $\lbrace \hat{n}, \hat{l} \rbrace$ to $\lbrace n,l \rbrace$; let $\lbrace \hat{n}, \hat{l} \rbrace$ be allocated to transmitter $\hat{k}$ in $\mu$. Hence, resource $\lbrace \hat{n}, \hat{l} \rbrace$ is better than $\lbrace n,l \rbrace$ for transmitter $k$, and $\lbrace k, l \rbrace$ is better than $\lbrace \hat{k}, \hat{l} \rbrace$ for resource $\hat{n}$, i.e., $\lbrace \hat{n}, \hat{l} \rbrace \succeq_k \lbrace n,l \rbrace$
and $\lbrace k, l \rbrace \succeq_{\hat{n}} \lbrace \hat{k}, \hat{l} \rbrace$.
By the definition of a blocking pair, $\mu$ is blocked by $(k, \hat{n}, \hat{l})$ and hence $\mu$ is unstable. This contradicts our assumption that $\mu$ is a stable allocation. Since there is no stable outcome $\widehat{\mu}$ that is better than $\mu$, by definition $\mu$ is a weak Pareto optimal allocation.
\end{proof}
\subsubsection{Complexity}
It is possible to show that the stable matching algorithm iterates only a finite number of times.
\begin{theorem}
\label{thm:sm_time}
The RB allocation subroutine terminates after a finite number of steps $T^\prime$.
\end{theorem}
\begin{proof}
Let the finite set $\tilde{\mathcal{X}}$ represent all possible combinations of transmitter-resource matchings, where each element $\tilde{x}_{k}^{(n,l)} \in \tilde{\mathcal{X}}$ denotes that the resource $\lbrace n,l \rbrace$ is allocated to the transmitter $k$. Since no transmitter is rejected by the same resource more than once (i.e., line 9 in \textbf{Algorithm \ref{alg:ta_sm}}), the finiteness of the set $\tilde{\mathcal{X}}$ ensures that the matching subroutine terminates in a finite number of steps.
\end{proof}
For each underlay transmitter, the complexity of building the preference profile using any standard sorting algorithm is $\mathcal{O}\left( NL \log( NL) \right)$ (line 8, \textbf{Algorithm \ref{alg:ra_sm}}). Similarly, in line 9, the complexity of building the ordered preference profiles for the RBs is $\mathcal{O}\left( N KL \log (KL) \right)$. Let $\xi = \displaystyle \sum_{k = 1 }^{K} |\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L}) | + \sum_{n = 1}^{N} |\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L}) | = 2 K N L$ be the total length of the input preferences in \textbf{Algorithm \ref{alg:ta_sm}}, where $|\boldsymbol{\mathscr{P}}_j(\cdot)|$ denotes the length of the profile vector $\boldsymbol{\mathscr{P}}_j(\cdot)$. From \textbf{Theorem \ref{thm:sm_time}} and \cite[Chapter 1]{matching_thesis}, it can be shown that, if implemented with suitable data structures, the time complexity of the RB allocation subroutine is linear in the size of the input preference profiles, i.e., $\mathcal{O}(\xi) = \mathcal{O}\left( K N L \right)$. Since the update phase of \textbf{Algorithm \ref{alg:ra_sm}} runs for at most a fixed number of iterations $T < T_{\mathrm{max}}$, the complexity of the stable matching-based solution is linear in $K$, $N$, and $L$.
\section{Message Passing Approach for Resource Allocation} \label{sec:mp_ra}
In the following, we reformulate the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ in such a way that it can be solved with a message passing (MP) technique. The MP approach involves the computation of \textit{marginals}, e.g., the messages exchanged between the nodes of a specific graphical model. Among the different graphical model representations, we consider a \textit{factor graph}-based MP scheme. A factor graph is made up of two different types of nodes, i.e., \textit{function} and \textit{variable} nodes, and an edge connects a function (e.g., factor) node to a variable node if and only if the variable appears in the function. Mathematically, this can be expressed as follows \cite{factorgraph_thory}:
\begin{definition}
A factor graph can be represented by a $\mathcal{V}$-$\mathcal{F}$ bipartite graph where $\mathcal{V} = \left\lbrace v_1, \cdots v_a \right\rbrace$ is the set of variable nodes and $\mathcal{F} = \left\lbrace f_1(\cdot), \cdots f_b(\cdot) \right\rbrace$ is the set of function (e.g., factor) nodes. The connectivity (e.g., edges) of the factor graph can be represented by an $a \times b$ binary matrix $\mathbf{E} = [E_{i,j}]$ where $E_{i,j} = 1$ if the variable node $i$ is connected with the factor node $j$ and $E_{i,j} = 0$, otherwise.
\end{definition}
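The connectivity matrix $\mathbf{E}$ of this definition can be built directly from the factor scopes, e.g. (a toy sketch with made-up variable and factor names):

```python
def factor_graph_edges(variables, factors, scope):
    """Binary connectivity matrix E of a factor graph: E[i][j] = 1 iff
    variable i appears in the scope of factor j."""
    return [[1 if variables[i] in scope[factors[j]] else 0
             for j in range(len(factors))]
            for i in range(len(variables))]

V = ['v1', 'v2', 'v3']
F = ['f1', 'f2']
scope = {'f1': {'v1', 'v2'}, 'f2': {'v2', 'v3'}}
E = factor_graph_edges(V, F, scope)   # 3 x 2 bipartite adjacency matrix
```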
\subsection{Overview of the MP Scheme} \label{subsec:mp_overview}
Before presenting the detailed resource allocation approach for a heterogeneous scenario, we briefly introduce the generic MP scheme (for the details of the factor graph-based MP scheme, refer to \cite{factorgraph_thory}). Let us consider the maximization of an arbitrary function $f(v_1, \cdots, v_J)$ over all possible values of its arguments, i.e., $Z = \underset{\mathbf{v}}{\operatorname{max}} ~ f(\mathbf{v})$ where $\mathbf{v} = \left[ v_1, \cdots, v_J \right]^{\mathsf{T}}$. We denote by $\underset{\mathbf{v}}{\operatorname{max}}$ that the maximization is computed over all possible combinations of the elements of the vector $\mathbf{v}$. The \textit{marginal} of $Z$ with respect to the variable $v_j$ is given by $\phi_j(v_j) = \underset{\sim (v_j)}{\operatorname{max}} ~f(\mathbf{v})$ where $\underset{\sim (\cdot)}{\operatorname{max}}$ denotes the maximization over all variables except $(\cdot)$. Let us now decompose $f(\mathbf{v})$ into a summation of $I$ functions, i.e., $\sum\limits_{i=1}^{I} f_{i}(\hat{v}_i)$, where $\hat{v}_i$ is a subset of the elements of the vector $\mathbf{v}$, and let $\mathbf{f}= \left[ f_{1}(\cdot), \cdots, f_{I}(\cdot) \right]^{\mathsf{T}}$ be the vector of the $I$ functions. In addition, let $\mathfrak{f}_j$ represent the subset of functions in $\mathbf{f}$ in which the variable $v_j$ appears. Hence, the marginal can be rewritten as $\phi_j(v_j) = \underset{\sim (v_j)}{\operatorname{max}} ~\sum\limits_{i=1}^{I} f_{i}(\hat{v}_i)$. According to the \textit{max-sum} MP strategy, the message passed by any variable node $v_j$ to any generic function node $f_i(\cdot)$ is given by $\delta_{v_j \rightarrow f_i(\cdot)}(v_j) = \sum\limits_{i^\prime \in \mathfrak{f}_j, i^\prime \neq i} \delta_{f_{i^\prime}(\cdot) \rightarrow v_j }(v_j)$.
Similarly, the message from the function node $f_i{(\cdot)}$ to the variable node $v_j$ is given by $\delta_{f_i(\cdot) \rightarrow v_j}(v_j) = \underset{\sim (v_j)}{\operatorname{max}} \left(f_i(v_1, \cdots, v_J) + \sum\limits_{j^\prime \in \hat{v}_i, j^\prime \neq j} \delta_{v_{j^\prime} \rightarrow f_{i}(\cdot)}(v_{j^\prime}) \right) $. When the factor graph is cycle free (e.g., there is a unique path connecting any two nodes), all the variable nodes $j = \lbrace 1, \cdots, J \rbrace$ can compute the marginals as $\phi_j(v_j) = \sum\limits_{i=1}^{I} \delta_{f_{i}(\cdot) \rightarrow v_j }(v_j)$. Utilizing the generalized distributive law (e.g., $\operatorname{\max} \sum = \sum \operatorname{\max}$) \cite{mp_distributive}, the maximization can therefore be computed as $Z = \sum\limits_{j=1}^{J} \underset{v_j}{\operatorname{max}} ~\phi_j(v_j)$.
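As a minimal numerical check of the max-sum rules above, consider the cycle-free graph $f(v_1, v_2) = f_1(v_1) + f_{12}(v_1, v_2)$ with binary variables; the marginal of $v_2$ reduces to a single factor-to-variable message, and its maximum coincides with the brute-force maximum of $f$ (all function values below are arbitrary):

```python
f1 = {0: 0.2, 1: 1.0}
f12 = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.7, (1, 1): 0.1}

# variable-to-factor message from v1 to f12: the sum of v1's other incoming
# factor-to-variable messages, here just delta_{f1 -> v1}(v1) = f1(v1)
delta_v1_to_f12 = f1

# factor-to-variable message from f12 to v2 = the marginal phi(v2)
phi_v2 = {v2: max(f12[(v1, v2)] + delta_v1_to_f12[v1] for v1 in (0, 1))
          for v2 in (0, 1)}

Z = max(phi_v2.values())            # max-sum result
brute = max(f1[v1] + f12[(v1, v2)]  # exhaustive maximization for comparison
            for v1 in (0, 1) for v2 in (0, 1))
```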
\subsection{Reformulation of the Resource Allocation Problem Utilizing MP Approach}
In order to solve the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ presented in Section \ref{subsec:rap_org} using MP, we reformulate it as a utility maximization problem. Let us define the reward functions $\mathfrak{W}_n(\mathbf{X})$ and $\mathfrak{R}_k(\mathbf{X})$ where the transmission alignment vector $\mathbf{X}$ is given by Equation (\ref{eq:rap_X}). With the constraint in Equation (\ref{eq:opt_intf}), we can define $\mathfrak{W}_n(\mathbf{X})$ as follows:
\begin{equation} \label{eq:util_mp1}
\mathfrak{W}_n(\mathbf{X}) =
\begin{cases} 0, & \text{if~} \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} < I_{\mathrm{max}}^{(n)} \\
- \infty, & \text{otherwise.}\end{cases}
\end{equation}
Similarly to deal with the constraint in Equation (\ref{eq:opt_rbpw}) we define $\mathfrak{R}_k(\mathbf{X})$ as
\begin{equation} \label{eq:util_mp2}
\mathfrak{R}_k(\mathbf{X}) =
\begin{cases} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~x_{k}^{(n,l)} B_{\mathrm {RB}} \log_2\left( 1 + \gamma_{u_k}^{(n)} \right) & \text{if~} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} x_{k}^{(n,l)} \leq 1 \\
- \infty & \text{otherwise.}\end{cases}
\end{equation}
The interpretations of the reward functions in Equations (\ref{eq:util_mp1}) and (\ref{eq:util_mp2}) are straightforward. Satisfying the interference constraint in Equation (\ref{eq:opt_intf}) does not incur any penalty (e.g., zero reward) in the function $\mathfrak{W}_n(\mathbf{X})$, and in the function $\mathfrak{R}_k(\mathbf{X})$, fulfilling the RB requirement constraint in Equation (\ref{eq:opt_rbpw}) yields the desired data rate. However, in both functions $\mathfrak{W}_n(\mathbf{X})$ and $\mathfrak{R}_k(\mathbf{X})$, violating the respective constraints in Equations (\ref{eq:opt_intf}) and (\ref{eq:opt_rbpw}) results in an infinite penalty.
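A direct transcription of the two reward functions might read as follows (the data structures are illustrative: `x_n` and `x_k` are dictionaries of binary allocation variables, and per-transmitter gains and powers stand in for $g_{k,m_k^*}^{(n)}$ and $p_k^{(n)}$):

```python
NEG_INF = float('-inf')

def reward_W(x_n, g, p, i_max):
    """W_n(X): zero reward if the aggregated interference on RB n stays
    below the threshold I_max^{(n)}, -infinity otherwise.
    x_n[(k, l)] in {0, 1} is the allocation variable on this RB."""
    interference = sum(x * g[k] * p[k] for (k, l), x in x_n.items())
    return 0.0 if interference < i_max else NEG_INF

def reward_R(x_k, rates):
    """R_k(X): the achievable rate if transmitter k uses at most one
    (RB, power level) pair, -infinity otherwise. x_k[(n, l)] in {0, 1}."""
    if sum(x_k.values()) > 1:
        return NEG_INF
    return sum(x * rates[(n, l)] for (n, l), x in x_k.items())
```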
From the Equations (\ref{eq:util_mp1}) and (\ref{eq:util_mp2}), the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ can be rewritten as
$$\underset{\mathbf{X}}{\operatorname{max}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right)$$
and the optimal transmission allocation vector is therefore given by
\begin{equation} \label{eq:mp_Xall}
\mathbf{X}^* = \underset{\mathbf{X}}{\operatorname{argmax}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right).
\end{equation}
Since our goal is to obtain a distributed solution for the above resource allocation problem, we focus on a single transmission alignment allocation variable, e.g., $x_{k}^{(n,l)}$. From Equation (\ref{eq:mp_Xall}) we obtain ${x_{k}^{(n,l)}}^* = \underset{x_{k}^{(n,l)}} {\operatorname{argmax}}~ \phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big)$ where the marginal $\phi_{k}^{(n,l)} \big(x_{k}^{(n,l)} \big)$ is given by
\begin{equation} \label{eq:marginal_mp}
\phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big) = \underset{\sim \bigl(x_{k}^{(n,l)} \bigl)}{\operatorname{max}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right).
\end{equation}
As mentioned in the previous section, $\underset{\sim \bigl(x_{k}^{(n,l)} \bigl)}{\operatorname{max}}$ denotes the maximization over all variables in $\mathbf{X}$ except $x_{k}^{(n,l)}$. The marginalization in Equation (\ref{eq:marginal_mp}) can be computed in a distributed way, where each node conveys the solution of a local problem to the others by passing information messages according to the max-sum MP strategy. Note that, according to our system model, the underlay transmitters and the resources (e.g., transmission alignments) form a bipartite graph, e.g., each transmission alignment $\lbrace n,l \rbrace$ can be assigned to any of the $K$ transmitters as long as the interference to the MUEs on RB $n$ is below the threshold. Without loss of generality, let us consider a generic transmission alignment, e.g., an RB-power level pair $\lbrace n,l \rbrace \in \mathcal{N} \times \mathcal{L}$, and an underlay transmitter $k \in \mathcal{K}^{\mathrm T}$. Using the function in Equation (\ref{eq:util_mp1}) and utilizing the max-sum MP strategy presented in Section \ref{subsec:mp_overview}, it is possible to show that the message delivered by the resource $\lbrace n,l \rbrace$ to the transmitter $k$ can be expressed as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_msg1}
\begin{aligned}
\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) = \operatorname{max} \sum\limits_{k^\prime \in \mathcal{K}^{\mathrm T},~ k^\prime \neq k} \delta_{k^\prime \rightarrow \lbrace n,l \rbrace} \big( x_{k^\prime}^{(n,l)} \big) \\
\text{subject to:~~ } \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} < I_{\mathrm{max}}^{(n)}.
\end{aligned}
\end{equation}
Note that the term $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ in the above equation denotes the message from transmitter $k$ to the resource $\lbrace n,l \rbrace$ which can be written as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_msg2}
\begin{aligned}
\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) = x_{k}^{(n,l)} R_{u_k}^{(n,l)} + \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} & R_{u_k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big) \\
\text{subject to:~~ } \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} x_{k}^{(n,l)} &\leq 1
\end{aligned}
\end{equation}
where $R_{u_k}^{(n,l)} = B_{\mathrm {RB}} \log_2\left( 1 + \Gamma_{k}^{(n,l)} \right)$ and $\Gamma_k^{(n, l)} \triangleq {\gamma_{u_k}^{(n)}}_{ \!\! \vert p_k^{(n)} = l }$.
The interpretations of Equations (\ref{eq:mp_msg1}) and (\ref{eq:mp_msg2}) are as follows: the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} ( 1 )$ and $\delta_{k \rightarrow \lbrace n,l \rbrace} (1)$ carry the information relative to the use of the resource $\lbrace n, l\rbrace$ by the transmitter $k$, while the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} (0)$ and $\delta_{k \rightarrow \lbrace n,l \rbrace} (0)$ carry the information relative to the absence of transmission over the resource $\lbrace n, l\rbrace$ by the transmitter $k$. In order to obtain both the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) $ and $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) $, it is required to solve the local optimization problem relative to the allocation variable $ x_{k}^{(n,l)} $.
Based on the discussions of Section \ref{subsec:mp_overview}, the link-wise marginal in Equation (\ref{eq:marginal_mp}) can be written as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_marginal_modified}
\phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big) = \delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) + \delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)
\end{equation}
and hence the transmission allocation variable is given by
\begin{equation} \label{eq:mp_allocation_upd}
{x_{k}^{(n,l)}}^* = \underset{x_{k}^{(n,l)}} {\operatorname{argmax}}~ \phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big).
\end{equation}
At each iteration of the MP-based resource allocation algorithm, at most one message passes through an edge in any given direction (e.g., from transmitters to resources or from resources to transmitters); and at each iteration the messages are updated by replacing the previous message sent on the same edge in the same direction \cite{min-sum-mp}. When both the messages given by Equations (\ref{eq:mp_msg1}) and (\ref{eq:mp_msg2}) are available, the marginal can be computed using Equation (\ref{eq:mp_marginal_modified}) and the transmission allocation variable is obtained by Equation (\ref{eq:mp_allocation_upd}).
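Once the two opposite messages on a link are available, the link-wise marginal and the allocation variable follow directly; a two-line sketch (messages here are hypothetical dictionaries over the binary values of $x_{k}^{(n,l)}$):

```python
def allocation_from_messages(delta_res_to_tx, delta_tx_to_res):
    """Link-wise marginal phi(x) = delta_{(n,l)->k}(x) + delta_{k->(n,l)}(x)
    over x in {0, 1}, and its argmax, i.e., the allocation variable."""
    phi = {x: delta_res_to_tx[x] + delta_tx_to_res[x] for x in (0, 1)}
    return max(phi, key=phi.get), phi

x_star, phi = allocation_from_messages({0: 0.1, 1: 0.4}, {0: 0.2, 1: 0.3})
```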
\subsection{Effective Implementation of MP Scheme in a Practical Heterogeneous Network} \label{subsec:mp_effective}
It is worth noting that sending messages from resources to transmitters (and vice versa) requires actual transmission on the radio channel. In a practical LTE-A-based 5G system, since the exchange of messages actually involves effective transmissions over the channel, the MP scheme described in the preceding section might be limited by the signaling overhead due to the transfer of messages between the transmitters and resources. In the following, we show that the amount of message signaling can be significantly reduced by some algebraic manipulations. Since the messages carry the information regarding whether any resource is used by any underlay transmitter, each transmitter $k$ actually delivers a real-valued vector with two elements, i.e., $\boldsymbol{\delta}_{k \rightarrow \lbrace n,l \rbrace} = \left[ \delta_{k \rightarrow \lbrace n,l \rbrace} (1),~ \delta_{k \rightarrow \lbrace n,l \rbrace} (0) \right]^{\mathsf{T}}$, and each resource $\lbrace n, l\rbrace$ delivers the vector $\boldsymbol{\delta}_{\lbrace n,l \rbrace \rightarrow k} = \left[ \delta_{\lbrace n,l \rbrace \rightarrow k} (1),~ \delta_{\lbrace n,l \rbrace \rightarrow k} (0) \right]^{\mathsf{T}}$. Let us now rewrite the message $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ using the utility function introduced in Equation (\ref{eq:sm_utility}) as follows:
\begin{equation} \label{eq:mp_msg2_wUtil}
\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) = x_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)} + \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big).
\end{equation}
By subtracting the constant term $\sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) $ from both sides of Equation (\ref{eq:mp_msg2_wUtil}), we obtain the following:
\begin{equation} \label{eq:mp_msg2_wUtil_sub}
\begin{aligned}
& \delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = x_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)} ~~+\\
& \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big) - \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0).
\end{aligned}
\end{equation}
Let us now introduce the normalized message $\psi_{\lbrace n,l \rbrace \rightarrow k} = \delta_{\lbrace n,l \rbrace \rightarrow k} (1) - \delta_{\lbrace n,l \rbrace \rightarrow k} (0)$. Consider the vector $$\Psi_{k} = \left[ \mathfrak{U}_{k}^{(1,1)} + \psi_{\lbrace 1,1 \rbrace \rightarrow k}, \cdots, \mathfrak{U}_{k}^{(1,L)} + \psi_{\lbrace 1,L \rbrace \rightarrow k}, \cdots, \mathfrak{U}_{k}^{(N,L)} + \psi_{\lbrace N,L \rbrace \rightarrow k} \right]^{\mathsf{T}}$$ and let us denote by $\left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}$ the maximal entry of the vector $\Psi_{k}$ excluding the term $\mathfrak{U}_{k}^{(n,l)} + \psi_{\lbrace n,l \rbrace \rightarrow k}$. Note that the terms within the summation in Equation (\ref{eq:mp_msg2_wUtil_sub}) are either $0$ (i.e., when $x_{k}^{(n^\prime,l^\prime)} = 0$) or $\mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k}$ (i.e., when $x_{k}^{(n^\prime,l^\prime)} = 1$). Since each transmitter requires only a single transmission alignment, when the variable $x_{k}^{(n,l)} = 0$, only one term in the summation of Equation (\ref{eq:mp_msg2_wUtil_sub}) is non-zero; for the case $x_{k}^{(n,l)} = 1$, no term within the summation of Equation (\ref{eq:mp_msg2_wUtil_sub}) is non-zero. Consequently, for $x_{k}^{(n,l)} = 0$, the maximum is achieved if
\begin{equation} \label{eq:mp_msg_wSort1}
\delta_{k \rightarrow \lbrace n,l \rbrace} (0) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = \left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}.
\end{equation}
Similarly, when $x_{k}^{(n,l)} = 1$, the maximum is given by
\begin{equation} \label{eq:mp_msg_wSort2}
\delta_{k \rightarrow \lbrace n,l \rbrace} (1) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = \mathfrak{U}_{k}^{(n,l)}.
\end{equation}
Since by definition $\psi_{ k \rightarrow \lbrace n,l \rbrace } = \delta_{k \rightarrow \lbrace n,l \rbrace} (1) - \delta_{k \rightarrow \lbrace n,l \rbrace} (0)$, from the Equations (\ref{eq:mp_msg_wSort1}) and (\ref{eq:mp_msg_wSort2}), the normalized messages from the transmitter $k$ to the resource $\lbrace n, l\rbrace$ can be derived as
\begin{align} \label{eq:mp_msg_norm1}
\psi_{ k \rightarrow \lbrace n,l \rbrace } &= \mathfrak{U}_{k}^{(n,l)} - \left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace} \nonumber \\
&= \mathfrak{U}_{k}^{(n,l)} - \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}.
\end{align}
Likewise, from \cite{min-sum-mp}, it can be shown that the normalized message sent from the resource $\lbrace n, l\rbrace$ to the transmitter $k$ becomes
\begin{equation} \label{eq:mp_msg_norm2}
\psi_{\lbrace n,l \rbrace \rightarrow k} = \delta_{\lbrace n,l \rbrace \rightarrow k} (1) - \delta_{\lbrace n,l \rbrace \rightarrow k} (0) = - \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }.
\end{equation}
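To make the update rules concrete, the two normalized messages in Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}) can be sketched in a few lines of Python. This is an illustrative sketch, not part of the algorithm specification: the resource pairs $\lbrace n,l \rbrace$ are flattened into a single index $r$, and all function and variable names (\texttt{transmitter\_messages}, \texttt{resource\_messages}, \texttt{U}, \texttt{psi\_rk}) are our own.

```python
import numpy as np

def transmitter_messages(U, psi_rk):
    """Eq. (mp_msg_norm1): psi_{k->(n,l)} = U_k^{(n,l)} - max over the other
    resources of (U_k^{(n',l')} + psi_{(n',l')->k}).
    U: (K, R) utilities; psi_rk: (R, K) resource-to-transmitter messages."""
    K, R = U.shape
    vals = U + psi_rk.T                       # entries of the vector Psi_k
    psi_kr = np.empty((K, R))
    for r in range(R):
        # maximum over all resources except r itself
        psi_kr[:, r] = U[:, r] - np.delete(vals, r, axis=1).max(axis=1)
    return psi_kr

def resource_messages(psi_kr):
    """Eq. (mp_msg_norm2): psi_{(n,l)->k} = - max over the other transmitters
    of psi_{k'->(n,l)}."""
    K, R = psi_kr.shape
    psi_rk = np.empty((R, K))
    for k in range(K):
        psi_rk[:, k] = -np.delete(psi_kr, k, axis=0).max(axis=0)
    return psi_rk
```

For two transmitters and two resources with utilities $[[3,1],[2,4]]$ and zero incoming messages, each transmitter's message toward its preferred resource is positive, and each resource in turn reinforces its best bidder.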
For an arbitrary graph, the allocation variables may keep oscillating and never converge to a fixed point, so the MP scheme may require a heuristic stopping rule. However, in the context of loopy graphical models, introducing a suitable weight perturbs the messages given by Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}) so that they converge to a fixed point \cite{min-sum-mp, remp_proof}. Accordingly, Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}) can be rewritten as \cite{min-sum-mp}
\begin{align}
\psi_{ k \rightarrow \lbrace n,l \rbrace } &= \mathfrak{U}_{k}^{(n,l)} - \omega \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace} - (1-\omega) \left( \mathfrak{U}_{k}^{(n,l)} + \psi_{\lbrace n,l \rbrace \rightarrow k} \right) \label{eq:mp_norm_msg_w1} \\
\psi_{\lbrace n,l \rbrace \rightarrow k} &= - \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace } - (1-\omega)~ \psi_{ k \rightarrow \lbrace n,l \rbrace } \label{eq:mp_norm_msg_w2}
\end{align}
where $\omega \in (0, 1]$ denotes the weighting factor for each edge. Notice that when $\omega = 1$, the messages given by Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2}) reduce to the original formulation, e.g., Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}), respectively. Given the normalized messages $\psi_{ k \rightarrow \lbrace n,l \rbrace }$ and $\psi_{\lbrace n,l \rbrace \rightarrow k} $ for $\forall k, n, l$, the node marginals for the normalized messages can be calculated as $\tau_{k}^{(n,l)} = \psi_{ k \rightarrow \lbrace n,l \rbrace } + \psi_{\lbrace n,l \rbrace \rightarrow k} $ and hence from Equation (\ref{eq:mp_allocation_upd}) the transmission alignment allocation can be obtained as
\begin{equation} \label{eq:mp_X_finally}
{x_{k}^{(n,l)}}^* =
\begin{cases}
1 & \text{if } \tau_{k}^{(n,l)} > 0 \text{ and } I^{(n)} < I_{\mathrm{max}}^{(n)}\\
0 & \text{otherwise.} \\
\end{cases}
\end{equation}
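The weighted updates of Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2}), together with the marginal test of Equation (\ref{eq:mp_X_finally}), can be prototyped as follows. This is a minimal sketch under simplifying assumptions: the resources are flattened into one index, the interference check $I^{(n)} < I_{\mathrm{max}}^{(n)}$ is omitted, and all identifiers are illustrative.

```python
import numpy as np

def weighted_mp(U, omega=0.7, iters=50):
    """Iterate Eqs. (mp_norm_msg_w1)/(mp_norm_msg_w2) and allocate on positive
    node marginals tau = psi_{k->r} + psi_{r->k} (interference check omitted)."""
    K, R = U.shape
    psi_kr = np.zeros((K, R))                 # transmitter-to-resource messages
    psi_rk = np.zeros((R, K))                 # resource-to-transmitter messages
    for _ in range(iters):
        vals = U + psi_rk.T                   # U_k^{(n',l')} + psi_{(n',l')->k}
        new_kr = np.empty((K, R))
        for r in range(R):
            best_other = np.delete(vals, r, axis=1).max(axis=1)
            new_kr[:, r] = U[:, r] - omega * best_other - (1 - omega) * vals[:, r]
        new_rk = np.empty((R, K))
        for k in range(K):
            new_rk[:, k] = (-omega * np.delete(new_kr, k, axis=0).max(axis=0)
                            - (1 - omega) * new_kr[k, :])
        psi_kr, psi_rk = new_kr, new_rk
    tau = psi_kr + psi_rk.T                   # node marginals
    return (tau > 0).astype(int)              # Eq. (mp_X_finally), no I-check
```

On the small utility matrix $[[3,1],[2,4]]$ the damped iteration settles on the diagonal assignment, i.e., each transmitter obtains its preferred resource.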
\subsection{Algorithm Development}
In line with our discussions and from the expressions derived in Section \ref{subsec:mp_effective}, the
MP-based resource allocation approach is outlined in \textbf{Algorithm \ref{alg:mp_rec_alloc}}. The underlay transmitters and the resources (e.g., the MBS) exchange messages in an iterative manner. The MBS assigns the resources to the transmitters considering the node marginals as well as the interference experienced on the RBs. The algorithm terminates when the sum data rate reaches a steady value, i.e., the allocation vector $\mathbf{X}$ remains the same in successive iterations.
\begin{algorithm} [!t]
\caption{Resource allocation using message passing}
\label{alg:mp_rec_alloc}
\begin{algorithmic}[1]
\AtBeginEnvironment{algorithmic}{\small}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ selects a transmission alignment randomly and reports to MBS.
\STATE Initialize $t:= 1, ~\psi_{ k \rightarrow \lbrace n,l \rbrace }(0) := 0, ~\psi_{\lbrace n,l \rbrace \rightarrow k}(0) := 0$ for $\forall k, n, l$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than the predefined threshold $T_{\mathrm{max}}$}
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ sends the message
\[\resizebox{0.93\textwidth}{!}{ $\psi_{ k \rightarrow \lbrace n,l \rbrace }(t) = \mathfrak{U}_{k}^{(n,l)}(t-1) - \omega \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k}(t-1) \right\rangle_{\sim \lbrace n,l \rbrace} - (1-\omega) \left( \mathfrak{U}_{k}^{(n,l)}(t-1) + \psi_{\lbrace n,l \rbrace \rightarrow k }(t-1) \right)$ }\]
for $\forall \lbrace n, l \rbrace \in \mathcal{N}\times \mathcal{L}$ to the MBS.
\STATE For all the resource $\forall \lbrace n, l\rbrace \in \mathcal{N} \times \mathcal{L}$, MBS sends messages $$\psi_{\lbrace n,l \rbrace \rightarrow k}(t) = - \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }(t-1) - (1-\omega)~ \psi_{ k \rightarrow \lbrace n,l \rbrace } (t-1) $$ to each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ computes the marginals as $\tau_{k}^{(n,l)}(t) = \psi_{ k \rightarrow \lbrace n,l \rbrace }(t) + \psi_{\lbrace n,l \rbrace \rightarrow k}(t) $ for $\forall \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}$ and reports to the MBS.
\vspace{2pt}
{\footnotesize \textit{/* MBS calculates the allocation vector according to Equation (\ref{eq:mp_X_finally}) */} }
\vspace{2pt}
\STATE Set $x_{k}^{(n,l)} := 0$ for $\forall k, n, l$ \hspace{1em} \COMMENT{\footnotesize Initialize the variable to obtain final allocation}
\FORALL{$k \in \mathcal{K}^{\mathrm T} \text{ and } \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}$}
\IF{$\tau_{k}^{(n,l)}(t) > 0 $}
\STATE Set $x_{k}^{(n,l)} := 1$. ~~\COMMENT{\footnotesize Assign the resource to the transmitter}
\STATE $\mathfrak{I}^{(n)} := \sum\limits_{\substack{ k^\prime =1}}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$. ~\COMMENT{\footnotesize Calculate interference in RB $n$}
\IF{ $\mathfrak{I}^{(n)} \geq I_{\mathrm{max}}^{(n)}$ }
\REPEAT
\STATE $\lbrace \hat{k}, \hat{l} \rbrace := \!\!\! \underset{k^\prime \in \mathcal{K}^{\mathrm T}, l^\prime \in \mathcal{L}}{\operatorname{argmax}} x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$ \COMMENT{\footnotesize Most interfering transmitter $\hat{k}$ with $p_{\hat{k}}^{(n)} = \hat{l}$ }
\STATE Set $x_{\hat{k}}^{(n,\hat{l})} := 0$. ~~~\COMMENT{\footnotesize Unassigned due to interference threshold violation}
\STATE $\mathfrak{I}^{(n)} := \sum\limits_{\substack{ k^\prime =1}}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$. ~~~\COMMENT{\footnotesize Update interference level}
\UNTIL{$\mathfrak{I}^{(n)} < I_{\mathrm{max}}^{(n)}$}
\ENDIF
\ENDIF
\ENDFOR
\STATE MBS calculates the transmission alignment allocation vector $\mathbf{X}(t) = \left[ x_{k}^{(n,l)} \right]_{\forall k, n, l}$ for the iteration $t$.
\STATE Update $t:= t + 1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the transmission alignments (e.g., RB and power levels) to the SBSs and D2D transmitters.
\end{algorithmic}
\end{algorithm}
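The allocation step of \textbf{Algorithm \ref{alg:mp_rec_alloc}} (assign on positive marginals, then successively unassign the most interfering transmitter on each RB until the threshold holds) can be sketched as follows. Assumptions, with names of our own choosing: the channel gain $g_{k,m_k^*}^{(n)}$ is stored as \texttt{g[k, n]}, \texttt{p\_lev[l]} is the transmit power of level $l$, and the thresholds \texttt{I\_max} are strictly positive.

```python
import numpy as np

def allocate_with_interference(tau, g, p_lev, I_max):
    """Assign on positive marginals, then on each RB n repeatedly drop the most
    interfering transmitter/power-level pair until I^(n) < I_max^(n).
    tau: (K, N, L) marginals; g: (K, N) gains; p_lev: power per level."""
    K, N, L = tau.shape
    x = (tau > 0).astype(int)
    for n in range(N):
        def interference():
            return sum(x[k, n, l] * g[k, n] * p_lev[l]
                       for k in range(K) for l in range(L))
        while interference() >= I_max[n]:     # assumes I_max[n] > 0
            # most interfering assigned pair on RB n
            k_hat, l_hat = max(
                ((k, l) for k in range(K) for l in range(L) if x[k, n, l]),
                key=lambda kl: g[kl[0], n] * p_lev[kl[1]])
            x[k_hat, n, l_hat] = 0            # unassign: threshold violated
    return x
```

With two transmitters on one RB whose aggregated interference exceeds the threshold, the stronger interferer is unassigned first, exactly as in the repeat-until loop of the algorithm.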
\subsection{Convergence, Optimality, and Complexity of the Solution}
The convergence, optimality, and complexity of the message passing approach are analyzed in the following subsections.
\subsubsection{Convergence and Optimality}
As presented in the following theorem, the message passing algorithm converges to a fixed set of messages within a fixed number of iterations.
\begin{theorem}
The marginals and the allocation in \textbf{Algorithm \ref{alg:mp_rec_alloc}} converge to a fixed point.
\end{theorem}
\begin{proof}
The proof is constructed by utilizing the concept of \textit{contraction mapping} \cite[Chapter 3]{mp_converge}. Let the vector $\boldsymbol{\psi}(t) = \left[\psi_{ 1 \rightarrow \lbrace 1,1 \rbrace }(t), \cdots, \psi_{ k \rightarrow \lbrace n,l \rbrace }(t), \cdots \psi_{ K \rightarrow \lbrace N,L \rbrace }(t) \right]^{\mathrm T}$ represent all the messages exchanged between the transmitters and the resources (e.g., the MBS) at iteration $t$. Let us consider that the messages are translated into the mapping $\boldsymbol{\psi}(t+1) = \mathbb{T}\left( \boldsymbol{\psi}(t) \right) = \left[ \mathbb{T}_{1}^{(1,1)}\left( \boldsymbol{\psi}(t) \right), \cdots, \mathbb{T}_{K}^{(N,L)}\left( \boldsymbol{\psi}(t) \right) \right]^{\mathrm{T}}$. From Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2}) we can obtain $\psi_{ k \rightarrow \lbrace n,l \rbrace }(t+1) = \mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi}(t) \right)$ as follows:
\begin{align}
\mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi}(t) \right) = \omega \left( \mathfrak{U}_{k}^{(n,l)}(t) - \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t)\right)~ + \nonumber \\
\omega \left( \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n^\prime,l^\prime \rbrace }(t) + (1-\omega) \psi_{ k \rightarrow \lbrace n^\prime,l^\prime \rbrace }(t) \right)~ + \nonumber \\
(1- \omega) \left( \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }(t) + (1-\omega) \psi_{ k \rightarrow \lbrace n,l \rbrace }(t) \right).
\end{align}
For any vectors $\mathbf{u}$ and $\mathbf{v}$, a generic mapping $\mathbb{T}$ is a contraction if ${\parallel \mathbb{T} (\mathbf{u}) - \mathbb{T}( \mathbf{v}) \parallel}_{\infty} \leq \varepsilon {\parallel \mathbf{u} - \mathbf{v} \parallel}_{\infty}$, where $\varepsilon < 1$ is the modulus of the mapping \cite[Chapter 3]{mp_converge}. From \cite{remp_proof}, it can be shown that the mapping $\mathbb{T} : \mathbb{R}^{KNL} \rightarrow \mathbb{R}^{KNL}$ is a contraction under the maximum norm, e.g., ${\parallel \mathbb{T}\left( \boldsymbol{\psi} \right) \parallel}_{\infty} = \underset{k \in \mathcal{K}^{\mathrm T}, n \in \mathcal{N}, l \in \mathcal{L}}{\operatorname{max}} |\mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi} \right)|$. Since a contraction mapping converges to its unique fixed point from any initial vector, the proof concludes with the fact that the message passing algorithm converges to fixed marginals and hence to a fixed allocation vector $\mathbf{X}$.
\end{proof}
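The nonexpansiveness of the mapping $\mathbb{T}$ under the maximum norm is easy to check numerically. The sketch below applies one joint update of all messages (our own composition of Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2})) to two arbitrary message vectors and verifies that their $\ell_\infty$ distance does not grow; the strict contraction modulus is established analytically in the cited references.

```python
import numpy as np

def mp_step(psi_kr, U, omega):
    """One application of the mapping T on the transmitter-to-resource messages,
    with the resource-to-transmitter messages folded in (as in the proof)."""
    K, R = psi_kr.shape
    psi_rk = np.empty((R, K))
    for k in range(K):
        psi_rk[:, k] = (-omega * np.delete(psi_kr, k, axis=0).max(axis=0)
                        - (1 - omega) * psi_kr[k, :])
    vals = U + psi_rk.T
    out = np.empty((K, R))
    for r in range(R):
        out[:, r] = (U[:, r] - omega * np.delete(vals, r, axis=1).max(axis=1)
                     - (1 - omega) * vals[:, r])
    return out

rng = np.random.default_rng(0)
U = rng.uniform(0.0, 5.0, size=(4, 6))        # random utility matrix
u = rng.normal(size=(4, 6))                   # two arbitrary message vectors
v = rng.normal(size=(4, 6))
d0 = np.abs(u - v).max()                      # ||u - v||_inf
d1 = np.abs(mp_step(u, U, 0.6) - mp_step(v, U, 0.6)).max()
```

Because both update stages are convex combinations of maxima, and $|\max_i a_i - \max_i b_i| \leq \max_i |a_i - b_i|$, the distance can never increase under the max norm.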
The following theorem shows that the fixed point to which the message passing algorithm converges corresponds to an optimal solution of the original resource allocation problem.
\begin{theorem}
The allocation obtained by message passing algorithm converges to the optimal solution of resource allocation problem $\mathbf{P\ref{opt:combopt}}$.
\end{theorem}
\begin{proof}
The theorem is proved by contradiction. Let us consider that the solution $\widetilde{\mathbf{X}}$ obtained by message passing algorithm is not optimal and let $\mathbf{X}^*$ be the optimal solution obtained by solving $\mathbf{P\ref{opt:combopt}}$. Let us further assume that there are $\chi \leq |\mathbf{X}|$ entries (e.g., allocations) that differ between $\widetilde{\mathbf{X}}$ and $\mathbf{X}^*$. In addition, let $\widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}} \subseteq \mathcal{N} \times \mathcal{L}$ denote the subset of resources for which two allocations differ. For each $\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$ there is a transmitter $\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ such that $\tilde{x}_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = 1$ and $x_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{*(\tilde{n}, \tilde{l})} = 0$, and a transmitter $\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} \neq \kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ such that $\tilde{x}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = 0$ and $x_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{*(\tilde{n}, \tilde{l})} = 1$. Hence, the assignment of resource $\lbrace \tilde{n}, \tilde{l} \rbrace $ to transmitter $\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ implies that the marginal $\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} < 0$ and the following set of inequalities hold for each $\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$:
\begin{align}
\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = \omega \left[ \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} + \psi_{\lbrace \tilde{n}, \tilde{l} \rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} } \right) - \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(n^\prime, l^\prime)} + \psi_{\lbrace n^\prime, l^\prime \rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} } \right) \right] < 0
\end{align}
where $\lbrace n^\prime, l^\prime \rbrace$ is the resource as represented in Equation (\ref{eq:mp_msg_norm1}). According to our assumption, the resource $\lbrace n^\prime, l^\prime \rbrace$ also belongs to $\widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$. Hence, $\sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}}\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = \omega \left( \Delta \mathfrak{U} + \Delta \psi \right)$ where
\begin{align}
\Delta \mathfrak{U} &= \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} - \mathfrak{U}_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} \right) \nonumber \\
&= \sum_{k=1}^{K}\sum_{n=1}^{N}\sum_{l=1}^{L} x_{k}^{*(n,l)} \mathfrak{U}_{k}^{(n,l)} - \sum_{k=1}^{K}\sum_{n=1}^{N}\sum_{l=1}^{L} \tilde{x}_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)}
\end{align}
and $\Delta \psi = \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \left( \psi_{\lbrace \tilde{n}, \tilde{l}\rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}} - \psi_{\lbrace \tilde{n}, \tilde{l}\rbrace \rightarrow \kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}} \right)$. After some algebraic manipulations (for details refer to \cite{remp_proof}) we can obtain $\Delta \mathfrak{U} \leq \frac{2 (1- \omega)}{\omega} \!\!\!\! \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \!\!\! \tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})}$. Since $0 < \omega \leq 1$ and each marginal $\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} < 0$, the right-hand side is non-positive and hence $\Delta \mathfrak{U} \leq 0$. This contradicts our assumption that $\widetilde{\mathbf{X}}$ is not optimal, which would require $\Delta \mathfrak{U} > 0$, and the proof follows.
\end{proof}
\subsubsection{Complexity}
If the message passing algorithm requires $T < T_{\mathrm{max}}$ iterations to converge, it is straightforward to verify that the time complexity at the MBS is $\mathcal{O}\left( T K N L \right)$. Similarly, considering a standard sorting algorithm that outputs the term $\left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}$ in order to generate the message $\psi_{ k \rightarrow \lbrace n,l \rbrace }$ with worst-case complexity $\mathcal{O}\left( NL \log \left( NL \right) \right)$, the overall time complexity at each underlay transmitter is $\mathcal{O} \left(T {(NL)}^2 \log \left( NL \right) \right)$.
\section{Auction-Based Resource Allocation} \label{sec:am_ra}
Our final approach to the resource allocation problem is a distributed auction algorithm. The allocation is based on a \textit{bidding} procedure, in which the agents (i.e., underlay transmitters) bid for the resources (e.g., RB and power level). The transmitters select their bids based on the \textit{costs} (e.g., the interference caused to the MUEs) of using the resources. The desired assignment relies on the appropriate selection of the bids.
The unassigned transmitters simultaneously raise the costs of using the resources and bid for them. Once the bids from all the transmitters are available, each resource is assigned to the highest bidder. An overview of the auction approach is presented in the following.
\subsection{Overview of the Auction Approach} \label{subsec:auc_overview}
In a generic auction-based assignment model, every resource $j$ is associated with a cost $c_j$, and each agent $i$ can obtain the benefit $B_{ij}$ from the resource $j$. Given the benefit $B_{ij}$, every agent $i$ who wishes to be assigned the resource $j$ needs to pay the cost $c_j$. The net value (e.g., utility) that an agent $i$ can obtain from the resource $j$ is given by $B_{ij} - c_j$. The auction procedure involves assigning to agent $i$ the resource $j^\prime$ which provides the maximal net value, i.e.,
\begin{equation} \label{eq:auction_price}
B_{ij^\prime} - c_{j^\prime} = \underset{j}{\operatorname{max}} \left\lbrace B_{ij} - c_{j} \right\rbrace.
\end{equation}
If the condition given in Equation (\ref{eq:auction_price}) is satisfied for all the agents $i$, the assignment and the set of costs are referred to as an \textit{equilibrium} \cite{auction_org}. However, in many practical problems, obtaining an equilibrium assignment is not straightforward due to the possibility of cycles. In particular, there may be cases where the agents contend for a small number of equally desirable resources without increasing the cost, which creates a cycle (i.e., an infinite loop) in the auction process. To avoid this difficulty, the notion of \textit{almost equilibrium} has been introduced in the literature. The assignment and the set of costs are said to be at almost equilibrium when the net value obtained by assigning each agent $i$ the resource $j^\prime$ is within a constant $\epsilon >0$ of being maximal. Hence, in order to obtain an almost equilibrium assignment, the following condition needs to be satisfied for all the agents \cite{auction_org}:
\begin{equation} \label{eq:auction_price_comslac}
B_{ij^\prime} - c_{j^\prime} \geq \underset{j}{\operatorname{max}} \left\lbrace B_{ij} - c_{j} \right\rbrace - \epsilon.
\end{equation}
The condition in Equation (\ref{eq:auction_price_comslac}) is known as $\epsilon$\textit{-complementary slackness}. When $\epsilon = 0$, Equation (\ref{eq:auction_price_comslac}) reduces to ordinary complementary slackness given by Equation (\ref{eq:auction_price}).
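As a concrete illustration, the $\epsilon$-complementary slackness test of Equation (\ref{eq:auction_price_comslac}) amounts to a one-line check per agent. The sketch below (all names are our own) verifies whether a given assignment and cost vector are at almost equilibrium.

```python
def eps_complementary_slack(B, c, assignment, eps):
    """True iff every agent i is within eps of its best net value B[i][j] - c[j].
    B: per-agent benefit lists; c: resource costs; assignment[i]: resource of i."""
    return all(
        B[i][assignment[i]] - c[assignment[i]]
        >= max(B[i][j] - c[j] for j in range(len(c))) - eps
        for i in range(len(B)))
```

For two agents with benefits $[[4,2],[3,5]]$ and unit costs, the natural assignment (agent $0$ to resource $0$, agent $1$ to resource $1$) satisfies the condition, whereas the swapped assignment violates it for small $\epsilon$.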
For instance, let the variable $\Theta_i = j$ denote that agent $i$ is assigned the resource $j$. In addition, let $c_{ij}$ denote the cost that agent $i$ incurs in order to be assigned resource $j$, and let $\mathfrak{b}_{ij}$ be the bidding information (i.e., the highest bidder) available to agent $i$ about resource $j$. The auction procedure evolves in an iterative manner. Given the assignment $\Theta_i$, the set of costs $\left[c_{ij}\right]_{\forall ij}$, and the set of largest bidders $\left[\mathfrak{b}_{ij}\right]_{\forall ij}$ of the previous iteration, the agents locally update the costs and the highest bidders for the current iteration. In particular, the costs $c_{ij}(t)$ and bidding information $\mathfrak{b}_{ij}(t)$ available to the agent $i$ about resource $j$ for iteration $t$ are updated from the previous iteration as follows \cite{auction_base}:
\begin{align}
c_{ij}(t) &= \underset{i^\prime, i^\prime \neq i}{\operatorname{max}} \left\lbrace c_{ij}(t-1), c_{i^\prime j}(t-1) \right\rbrace \label{eq:auc_cost}\\
\mathfrak{b}_{ij}(t) &= \underset{i^* \in \underset{i^\prime, i^\prime \neq i}{\operatorname{~argmax}} \left\lbrace c_{ij}(t-1), c_{i^\prime j}(t-1) \right\rbrace }{\operatorname{max}} \left\lbrace \mathfrak{b}_{i^*j}(t-1) \right\rbrace. \label{eq:auc_bid}
\end{align}
The above update equations ensure that the agents will have the updated maximum cost of the resource $j$ (i.e., $c_j \triangleq \underset{i}{\operatorname{max}} \lbrace c_{i j} \rbrace$) and the corresponding highest bidder for that resource. Once the updated cost and bidding information are available, agent $i$ checks whether the cost of the resource currently assigned to it, e.g., $c_{i \Theta_i(t-1)}$, has been increased by any other agent. If so, the current assignment obtained from the previous iteration may not be at (almost) equilibrium and the agent needs to select a new assignment, e.g., $\Theta_{i}(t) = \underset{j}{\operatorname{argmax}} \left\lbrace B_{ij}(t) - c_{ij} (t) \right\rbrace$. In order to update the cost for the new assignment $\Theta_{i}(t)$ at any iteration $t$, the agent uses the following cost update rule \cite{auction_base}:
\begin{equation} \label{eq:auc_costupd}
c_{ij}(t) = c_{ij}(t-1) + \Delta_i(t-1)
\end{equation}
where $\Delta_i$ is given by
\begin{equation} \label{eq:auc_price_uptate}
\Delta_i(t-1) = \underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace - \underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace + \epsilon.
\end{equation}
The terms $\underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace$ and $\underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace$ denote the maximum and the second maximum net utility, respectively. Note that $\Delta_i$ is always greater than zero, since $\epsilon >0$ and by definition $\underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace \geq \underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace$. Since $c_{i \Theta_i(t)}(t)$ is the highest cost for iteration $t$, agent $i$ can also update the bidding information, e.g., $\mathfrak{b}_{i \Theta_i(t)}(t) = i$. Accordingly, the cost update rule using $\Delta_i$ as given in Equation (\ref{eq:auc_costupd}) ensures that the assignment and the set of costs remain almost at equilibrium \cite{auction_base}.
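The bid increment of Equation (\ref{eq:auc_price_uptate}) is simply the gap between the best and the second-best net value, plus $\epsilon$. A minimal sketch, with hypothetical names and the net values $B_{ij} - c_{ij}$ precomputed:

```python
def bid_increment(net, j_star, eps):
    """Delta_i of Eq. (auc_price_uptate): best-minus-second-best net value,
    plus the minimum bid requirement eps. j_star is the chosen resource."""
    best = max(net)
    second_best = max(v for j, v in enumerate(net) if j != j_star)
    return best - second_best + eps

net = [5.0, 3.5, 1.0]          # one agent's net values over three resources
delta = bid_increment(net, 0, eps=0.1)
new_cost = 2.0 + delta         # cost update of Eq. (auc_costupd)
```

Here the increment is $5.0 - 3.5 + 0.1 = 1.6$: the agent can safely raise the cost of its best resource by the whole margin over its second choice without regretting the bid.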
\subsection{Auction for Radio Resource Allocation}
Based on the discussion provided in the preceding section, in the following, we present the auction-based resource allocation scheme. We present the cost model and use the concept of auction to develop the resource allocation algorithm in our considered heterogeneous network setup.
\subsubsection{Cost Function}
Let us consider the utility function given by Equation (\ref{eq:sm_utility}). Recall that the term $w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) $ in Equation (\ref{eq:sm_utility}) represents the cost (e.g., interference caused by underlay transmitters to the MUE) of using the RB $n$. In particular, when the transmitter $k$ is transmitting with power level $l$, the cost of using RB $n$ can be represented by
\begin{align} \label{eq:auc_cost_ra}
c_{k}^{(n,l)} &= w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) = w_2 \left( \sum\limits_{k^\prime =1}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)} - I_{\mathrm{max}}^{(n)} \right) \nonumber \\
&= w_2 \left( g_{k,m_k^*}^{(n)} l + \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}}, k^\prime \neq k}}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)} - I_{\mathrm{max}}^{(n)} \right).
\end{align}
Let the parameter $C_{k}^{(n,l)} = \max \lbrace 0, c_{k}^{(n,l)} \rbrace$; accordingly, the cost $C_{k}^{(n,l)} = 0$ if and only if $I^{(n)} \leq I_{\mathrm{max}}^{(n)}$. Notice that, using the cost term, we can represent Equation (\ref{eq:sm_utility}) as $$\mathfrak{U}_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) - w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) = B_{k}^{(n,l)} - c_{k}^{(n,l)}$$ where $B_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right)$ and $c_{k}^{(n,l)}$ is given by Equation (\ref{eq:auc_cost_ra}); note that $B_{k}^{(n,l)} - c_{k}^{(n,l)}$ coincides with $B_{k}^{(n,l)} - C_{k}^{(n,l)}$ whenever $I^{(n)} \geq I_{\mathrm{max}}^{(n)}$. The variable $B_{k}^{(n,l)}$ is proportional to the data rate achieved by transmitter $k$ using resource $\lbrace n,l \rbrace$. Analogous to the discussion of the previous section, $\mathfrak{U}_{k}^{(n,l)}$ represents the net benefit that transmitter $k$ obtains from the resource $\lbrace n,l\rbrace$.
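The benefit/cost decomposition of the utility can be sketched as follows; the scalar inputs (achievable rate, aggregated interference) and the unit weight defaults are illustrative, not taken from the system model.

```python
def net_utility(rate, interference, I_max, w1=1.0, w2=1.0):
    """U = B - c with B = w1 * rate and c = w2 * (I - I_max) (Eq. auc_cost_ra);
    the clipped cost C = max(0, c) vanishes iff the threshold is respected."""
    B = w1 * rate
    c = w2 * (interference - I_max)
    C = max(0.0, c)
    return B - c, B, C

# interference below the threshold: negative raw cost, zero clipped cost
net, B, C = net_utility(rate=10.0, interference=3.0, I_max=5.0)
```

When the interference stays below the threshold, the raw cost $c$ is negative (the transmitter is rewarded with slack), while the clipped cost $C$ used in the bidding is zero.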
Let $\mathfrak{b}_{k}^{( n,l)}$ denote the local bidding information available to transmitter $k$ for the resource $\lbrace n,l \rbrace$. For notational convenience, let us assume that $\Theta : [k]_{k = 1, \cdots, K} \rightarrow \left[ \lbrace n, l \rbrace \right]_{\substack{n = 1, \cdots, N \\ l = 1, \cdots, L}}$ denotes the mapping between the transmitters and the resources, i.e., $\Theta_k = \lbrace n,l \rbrace$ represents the assignment of resource $\lbrace n,l \rbrace$ to transmitter $k$. Hence we represent by $C_{k}^{\Theta_k}$ the cost of using the resource $\lbrace n,l \rbrace$ obtained by the assignment $\Theta_k = \lbrace n,l \rbrace$. Similarly, given $\Theta_k = \lbrace n,l \rbrace$ the variable $\mathfrak{b}_{k}^{\Theta_k} \equiv \mathfrak{b}_{k}^{( n,l)}$ denotes the local bidding information about the resource $\lbrace n,l \rbrace$ available to the transmitter $k$. Note that $\Theta_k = \lbrace n,l \rbrace$ also implies $x_{k}^{(n,l)} = 1$. In other words, $\Theta_k = \lbrace n,l \rbrace$ denotes the non-zero entry of the vector $\mathbf{x}_k = \left[ x_{k}^{(n,l)} \right]_{\forall n,l}$. Since each underlay transmitter $k$ selects only one resource $\lbrace n, l\rbrace$, only a single entry in the vector $\mathbf{x}_k$ is non-zero.
\subsubsection{Update of Cost and Bidder Information}
In order to obtain the updated cost and bidding information, we utilize concepts similar to those given by Equations (\ref{eq:auc_cost})-(\ref{eq:auc_price_uptate}). At the beginning of the auction procedure, each underlay transmitter updates the cost as $C_{k}^{(n,l)}(t) = \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace $. In addition, as described by Equation (\ref{eq:auc_bid}), the information about the highest bidder is obtained as $\mathfrak{b}_{k}^{( n,l)}(t) = \mathfrak{b}_{k^*}^{( n,l)}(t-1)$ where $k^* = \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{argmax}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace$. When the transmitter $k$ needs to select a new assignment, i.e., $\Theta_k = \lbrace \hat{n},\hat{l}\rbrace$, the transmitter increases the cost of using the resource, e.g., $C_{k}^{(\hat{n}, \hat{l})}(t) = C_{k}^{(\hat{n}, \hat{l})}(t-1) +\Delta_k(t-1)$, where $\Delta_k(t-1)$ is given by
\begin{align} \label{eq:auc_costupdate}
\Delta_k(t-1) = \underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L}}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) - \underset{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L} \\ n^\prime \neq \hat{n}, l^\prime \neq \hat{l} }}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) + \epsilon
\end{align}
where $\epsilon > 0$ indicates the minimum bid requirement parameter. Similar to Equation (\ref{eq:auc_price_uptate}), the term $\underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L}}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) - \underset{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L} \\ n^\prime \neq \hat{n}, l^\prime \neq \hat{l} }}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1)$ denotes the difference between the maximum and the second maximum utility value. In the case where transmitter $k$ does not prefer to be assigned a new resource, the allocation from the previous iteration remains unchanged, i.e., $\Theta_k(t) = \Theta_k(t-1)$, and consequently, $\mathbf{x}_k(t) = \mathbf{x}_k(t-1)$.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Auction method for any underlay transmitter $k$}
\label{alg:auc_loc}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\REQUIRE Parameters from previous iteration: an assignment $\mathbf{X}(t-1) = \left[ \mathbf{x}_1(t-1), \cdots \mathbf{x}_K(t-1) \right]^{\mathsf{T}}$, aggregated interference $I^{(n)}(t-1)$ for $\forall n$, cost of using resources $\mathbf{C}(t-1) = \left[ C_{k}^{(n,l)}(t-1)\right]_{\forall k,n, l}$ and the highest bidders of the resources $\mathfrak{B}(t-1) = \left[ \mathfrak{B}_k(t-1) \right]_{\forall k}$ where $\mathfrak{B}_k(t-1) = \left[\mathfrak{b}_{k}^{( n,l)}(t-1) \right]_{\forall n, l}$.
\ENSURE The allocation variable $\mathbf{x}_k(t) = \left[x_{k}^{(n,l)}\right]_{\forall n, l}$, updated costs $\mathbf{C}_k(t) = \left[ C_{k}^{(n,l)}(t)\right]_{\forall n, l}$, and bidding information $\mathfrak{B}_k(t) = \left[\mathfrak{b}_{k}^{( n,l)}(t) \right]_{\forall n, l}$ at current iteration $t$ for the transmitter $k$.
\STATE Initialize $\mathbf{x}_k(t) := \mathbf{0}$.
\STATE For all the resources $\lbrace n, l\rbrace \in \mathcal{N}\times \mathcal{L}$,
\begin{itemize}
\item Obtain the transmitter $k^* := \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{argmax}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace$ and update the highest bidder as $\mathfrak{b}_{k}^{( n,l)}(t) := \mathfrak{b}_{k^*}^{( n,l)}(t-1)$.
\item Update the cost as
$C_{k}^{(n,l)}(t) := \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace $.
\end{itemize}
\vspace*{2pt}
{\footnotesize \textit{/* Let $\Theta_k(t-1)$ denote the assignment of transmitter $k$ at the previous iteration $t-1$, i.e., $\Theta_k(t-1)$ represents the non-zero entry in the vector $\mathbf{x}_k(t-1)$. Since each transmitter uses only one transmission alignment, only a single entry in the vector $\mathbf{x}_k(t-1)$ is non-zero. When the cost is greater than that of the previous iteration and the transmitter $k$ is not the highest bidder, update the assignment */} }
\vspace*{5pt}
\IF{ $C_{k}^{\Theta_k(t-1)} (t) \geq C_{k}^{\Theta_k(t-1)} (t-1) $ \AND $\mathfrak{b}_{k}^{\Theta_k(t-1)}(t) \neq k $ }
\STATE $\lbrace \hat{n}, \hat{l} \rbrace := \underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{argmax}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t)$. ~\COMMENT{\footnotesize Obtain the best resource for transmitter $k$}
\STATE $\mathfrak{I}^{(\hat{n})} := g_{k,m_k^*}^{(\hat{n})} \hat{l} + I^{(\hat{n})}$. ~\COMMENT{\footnotesize Calculate additional interference caused by transmitter $k$ for using RB $\hat{n}$}
\IF{ $\mathfrak{I}^{(\hat{n})} < I_{\mathrm{max}}^{(\hat{n})}$ }
\STATE Set $x_{k}^{(\hat{n},\hat{l})} := 1$. ~~~~~\COMMENT{\footnotesize e.g., $\Theta_{k}(t) = \lbrace \hat{n},\hat{l} \rbrace $}
\STATE Update the highest bidder for the resource $\lbrace \hat{n}, \hat{l} \rbrace $ as $\mathfrak{b}_{k}^{(\hat{n}, \hat{l}) }(t) := k$.
\STATE Increase the cost for the resource $\lbrace \hat{n}, \hat{l} \rbrace $ as $C_{k}^{(\hat{n}, \hat{l})}(t) = C_{k}^{(\hat{n}, \hat{l})}(t-1) +\Delta_k(t-1)$ where $\Delta_k(t-1)$ is given by Equation (\ref{eq:auc_costupdate}).
\ELSE
\STATE Keep the assignment unchanged from previous iteration, i.e., $\mathbf{x}_k(t) := \mathbf{x}_k(t-1)$.
\ENDIF
\ELSE
\STATE Keep the assignment unchanged from previous iteration, i.e., $\mathbf{x}_k(t) := \mathbf{x}_k(t-1)$.
\ENDIF
\end{algorithmic}
\end{algorithm}
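A minimal Python sketch of the per-transmitter update in Algorithm \ref{alg:auc_loc} is given below. It is a simplification, not the chapter's exact procedure: the cost synchronization across transmitters (steps 2--3) is assumed to have already happened, the power level is used directly as the transmit power in the interference check, and all data structures (dictionaries keyed by resource tuples) are illustrative assumptions.

```python
def local_auction_step(k, assign, costs, bidders, utils,
                       interference, gains, I_max, eps):
    """One local auction iteration for underlay transmitter k (simplified).

    assign[k]      : resource (n, l) currently held by transmitter k
    costs[(n, l)]  : highest known cost of each resource
    bidders[(n, l)]: current highest bidder of each resource
    utils[(n, l)]  : utility of each resource for transmitter k
    interference[n], I_max[n]: aggregated interference and cap on RB n
    gains[n]       : interference gain of k towards the victim MUE on RB n
    """
    if bidders[assign[k]] == k:
        # k is still the highest bidder: keep the previous assignment
        return assign[k]
    # Otherwise pick the best resource by utility and try to bid on it
    best = max(utils, key=utils.get)
    n_hat, l_hat = best
    if interference[n_hat] + gains[n_hat] * l_hat < I_max[n_hat]:
        bidders[best] = k
        u_sorted = sorted(utils.values(), reverse=True)
        costs[best] += u_sorted[0] - u_sorted[1] + eps  # raise the cost
        assign[k] = best
    return assign[k]
```

If the interference cap on the preferred RB would be violated, the transmitter simply keeps its previous assignment, matching the `else` branches of the algorithm.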
\subsection{Algorithm Development}
\textbf{Algorithm \ref{alg:auc_alg}} outlines the auction-based resource allocation approach. Each transmitter locally executes \textbf{Algorithm \ref{alg:auc_loc}} and obtains a temporary allocation. When the execution of \textbf{Algorithm \ref{alg:auc_loc}} is finished, each underlay transmitter $k$ reports its local information, e.g., the choices for the resources, $\mathbf{x}_k = \left[ x_{k}^{(n,l)} \right]_{\forall n,l}$, to the MBS. Once the information (e.g., the output parameters of \textbf{Algorithm \ref{alg:auc_loc}}) from all the transmitters is available at the MBS, the necessary parameters (e.g., the input arguments required by \textbf{Algorithm \ref{alg:auc_loc}}) are calculated and broadcast by the MBS. \textbf{Algorithm \ref{alg:auc_loc}} is repeated iteratively until the allocation variable $\mathbf{X} = \left[\mathbf{x}_k \right]_{\forall k} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}$ remains unchanged for two successive iterations.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Auction-based resource allocation}
\label{alg:auc_alg}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from the previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ randomly selects a transmission alignment and reports to the MBS.
\STATE MBS broadcasts the assignment of all transmitters, aggregated interference of each RB, the costs and the highest bidders using pilot signals.
\STATE Initialize number of iterations $t := 1$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than some predefined threshold $T_{\mathrm{max}}$}
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ locally runs the \textbf{Algorithm \ref{alg:auc_loc}} and reports the assignment $\mathbf{x}_k(t)$, the cost $\mathbf{C}_k(t)$ and the bidding information $\mathfrak{B}_k(t)$ to the MBS.
\STATE MBS calculates the aggregated interference $I^{(n)}(t)$ for $\forall n$, the allocation variable $\mathbf{X}(t)$, information about highest bidders $\mathfrak{B}(t)$, the cost $\mathbf{C}(t)$, and broadcast to the underlay transmitters.
\STATE Update $t := t+1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the RB and power levels to the SBSs and D2D UEs.
\end{algorithmic}
\end{algorithm}
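The outer loop of Algorithm \ref{alg:auc_alg} can be sketched as follows. Here `local_step` is a placeholder for the per-transmitter subroutine, and the termination test mirrors the condition that the assignment is unchanged between two rounds or $T_{\mathrm{max}}$ is reached; the MBS bookkeeping (interference and cost broadcasts) is abstracted away.

```python
def run_auction(transmitters, local_step, t_max):
    """Iterate the local auction subroutine until the global assignment is
    unchanged between two successive rounds, or t_max rounds have elapsed.

    local_step(k, assign) returns the (possibly new) resource of transmitter k.
    """
    assign = {k: None for k in transmitters}
    prev = None
    t = 0
    while assign != prev and t < t_max:
        prev = dict(assign)              # snapshot of the previous round
        for k in transmitters:
            assign[k] = local_step(k, assign)
        t += 1
    return assign, t

# Trivial example: every transmitter immediately settles on resource (k, 1),
# so the loop needs one round to assign and one round to confirm convergence
final, rounds = run_auction([0, 1], lambda k, a: (k, 1), t_max=10)
```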
\subsection{Convergence, Complexity, and Optimality of the Auction Approach}
In the following subsections we analyze the convergence, complexity, and optimality of the solution obtained by the auction algorithm.
\subsubsection{Convergence and Complexity}
For any arbitrary fixed $\epsilon >0$, the auction approach is guaranteed to converge to a fixed assignment. The following theorem shows that the auction process terminates within a fixed number of iterations.
\begin{theorem} \label{thm:auc_terminate}
The auction process terminates in a finite number of iterations.
\end{theorem}
\begin{proof}
According to our system model, each underlay transmitter selects only one transmission alignment. Hence, once each resource receives at least one bid (which implies that each transmitter is assigned to a resource), the auction process must terminate. Now if any resource $\lbrace n, l \rbrace$ receives a bid in $\hat{t}$ iterations, its cost must exceed the initial price by $\hat{t} \epsilon$. As a result, the resource $\lbrace n, l \rbrace$ becomes costly to be assigned compared to any resource $\lbrace n^\prime, l^\prime \rbrace$ that has not received any bid yet. It follows that there are two possibilities: \textit{i)} the auction process terminates in a finite number of iterations with each transmitter assigned to a resource, regardless of whether every resource receives a bid; or \textit{ii)} the auction process continues for a finite number of iterations until each resource has received at least one bid, after which the algorithm terminates.
\end{proof}
At termination, the solution (e.g., the allocation) obtained is almost at equilibrium, i.e., the condition in Equation (\ref{eq:auction_price_comslac}) is satisfied for all the underlay transmitters. Since the algorithm terminates after a finite number of iterations, we can show that the algorithm converges to a fixed allocation and that the complexity at each transmitter is linear in the number of resources.
\begin{theorem}
The auction algorithm converges to a fixed allocation with the number of iterations of $$\mathcal{O}\left( T KNL \left \lceil {\frac{\underset{k, n, l}{\operatorname{max}} B_{k}^{(n,l)} - \underset{k, n, l}{\operatorname{min}} B_{k}^{(n,l)}}{\epsilon} } \right\rceil \right).$$
\end{theorem}
\begin{proof}
The proof follows from an argument similar to that in \textbf{Theorem \ref{thm:auc_terminate}}. In the worst case, the total number of iterations in which a resource can receive a bid is no more than $ \Upsilon = \left \lceil {\frac{\underset{k, n, l}{\operatorname{max}} B_{k}^{(n,l)} - \underset{k, n, l}{\operatorname{min}} B_{k}^{(n,l)}}{\epsilon} } \right\rceil$ \cite{auction_base}. Since each bid requires $\mathcal{O}\left(NL \right)$ operations, and each iteration involves a bid by a single transmitter, the total number of iterations in \textbf{Algorithm \ref{alg:auc_alg}} is of $\mathcal{O}\left( KNL \Upsilon \right)$. For convergence, the allocation variable $\mathbf{X}$ needs to remain unchanged for at least $T \geq 2$ consecutive iterations. Hence, the overall running time of the algorithm is $\mathcal{O}\left( T KNL \Upsilon \right)$.
\end{proof}
Note that for any transmitter node $k \in \mathcal{K}^{\mathrm T}$, the complexity of the auction process given by \textbf{Algorithm \ref{alg:auc_loc}} is linear in the number of resources for each iteration.
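The worst-case bid count $\Upsilon$ in the bound above is straightforward to compute; the benefit values $B_k^{(n,l)}$ below are illustrative assumptions.

```python
import math

def iteration_bound(benefits, eps):
    """Worst-case number of bids a resource can receive:
    ceil((max B - min B) / eps), as used in the complexity bound."""
    flat = [b for row in benefits for b in row]
    return math.ceil((max(flat) - min(flat)) / eps)

# Illustrative benefit values B_k^{(n,l)}, one row per transmitter
B = [[3, 1], [2, 0]]
upsilon = iteration_bound(B, eps=0.5)   # ceil((3 - 0) / 0.5) = 6
```

Note the trade-off the bound makes explicit: a smaller $\epsilon$ tightens the $K\epsilon$ optimality gap but inflates $\Upsilon$, and hence the running time.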
\subsubsection{Optimality}
In the following we show that the data rate obtained by the auction algorithm is within $K \epsilon$ of the maximum data rate obtained by solving the original optimization problem $\mathbf{P\ref{opt:combopt}}$.
\begin{theorem}
The data rate obtained by the distributed auction algorithm is within $K \epsilon$ of the optimal solution.
\end{theorem}
\begin{proof}
We construct the proof by using an approach similar to that presented in \cite{auction_base}. The data rate obtained by any assignment $\mathbf{X}$ will satisfy the following condition:
\begin{equation} \label{eq:auc_prof_ineqal}
\sum_{k=1}^{K} R_{u_k} \leq \sum_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)} + \sum_{k=1}^{K} \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace
\end{equation}
where $\widehat{C}^{(n,l)} = \underset{k^\prime \in \mathcal{K}^{\mathrm T}}{\operatorname{max}} C_{k^\prime}^{(n,l)} $, $B_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right)$ and $R_{u_k}$ is given by Equation (\ref{eq:rate_ue}). The inequality given by Equation (\ref{eq:auc_prof_ineqal}) is satisfied since the first term on the right-hand side of the inequality, i.e., $\sum\limits_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)}$
is equal to $\sum\limits_{k=1}^{K} \sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)} C_{k}^{(n,l)}$ and the second term is not less than $\sum\limits_{k=1}^{K} \sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)}\left( B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right)$. Let the variable $A^* \triangleq \underset{\mathbf{X}^*}{\operatorname{max}} \sum\limits_{k=1}^{K} R_{u_k} = \sum\limits_{k=1}^{K} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~{x_{k}^{(n,l)}}^{*} B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u_k}^{(n)} \right)$ denote the optimal achievable data rate. In addition, let the variable $D^*$ be defined as
\begin{equation}
D^* \triangleq \underset{\substack{\widehat{C}^{(n,l)} \\ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}}{\operatorname{min}} \left\lbrace \sum_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)} + \sum_{k=1}^{K} \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace \right\rbrace.
\end{equation}
Hence from Equation (\ref{eq:auc_prof_ineqal}), we can write $A^* \leq D^*$. Since the final assignment and the set of costs are almost at equilibrium, for any underlay transmitter $k$, the condition $\sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)}\left( B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right) \geq \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace - \epsilon$ will hold. Consequently, we can obtain the following inequality:
\begin{align}
D^* &\leq \sum_{k=1}^{K} \left( \sum_{n=1}^{N} \sum_{l=1}^{L} x_{k}^{(n,l)} \widehat{C}^{(n,l)} + \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace \right) \nonumber \\
&\leq \sum_{k=1}^{K}\sum_{n=1}^{N} \sum_{l=1}^{L} x_{k}^{(n,l)} B_{k}^{(n,l)} + K \epsilon \leq \sum_{k=1}^{K} R_{u_k} + K \epsilon \leq A^* + K \epsilon.
\end{align}
Since $A^* \leq D^*$, the data rate achieved by the auction algorithm is within $K \epsilon$ of the optimal data rate $A^*$ and the proof follows.
\end{proof}
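The $K\epsilon$ bound can be checked numerically on a toy assignment problem using a Bertsekas-style forward auction. This is a sketch under simplifying assumptions (one bidder per round, no interference constraint, arbitrary benefit matrix `B[k][r]`), not the chapter's full algorithm:

```python
import itertools

def auction_assignment(B, eps):
    """Forward auction for a small assignment problem. B[k][r] is the benefit
    of resource r for bidder k; returns a[k] = resource assigned to bidder k."""
    K, R = len(B), len(B[0])
    prices = [0.0] * R
    owner = [None] * R
    assigned = [None] * K
    while None in assigned:
        k = assigned.index(None)                 # next unassigned bidder
        values = [B[k][r] - prices[r] for r in range(R)]
        best = max(range(R), key=lambda r: values[r])
        second = max(v for r, v in enumerate(values) if r != best)
        prices[best] += values[best] - second + eps
        if owner[best] is not None:              # evict the previous owner
            assigned[owner[best]] = None
        owner[best], assigned[k] = k, best
    return assigned

def optimal_value(B):
    """Brute-force optimum for comparison (only viable for tiny instances)."""
    K, R = len(B), len(B[0])
    return max(sum(B[k][p[k]] for k in range(K))
               for p in itertools.permutations(range(R), K))

B = [[10, 5, 3], [8, 9, 2], [4, 7, 6]]
a = auction_assignment(B, eps=0.1)
value = sum(B[k][a[k]] for k in range(len(B)))
# value is guaranteed to be within K * eps of optimal_value(B)
```

On this instance the next-best assignment is far more than $K\epsilon = 0.3$ below the optimum, so the auction recovers the exact optimal assignment.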
\section{Qualitative Comparison Among the Resource Allocation Schemes} \label{sec:comparisons}
In this section, we compare the different resource allocation schemes discussed above based on several criteria (e.g., flow of algorithm execution, information requirement and algorithm overhead, complexity and optimality of the solution, and convergence behavior). We refer to the centralized solution (which can be obtained by solving the optimization problem $\mathbf{P\ref{opt:combopt}}$) as the COS (centralized optimal scheme) and compare it with the distributed solutions. A comparison among the resource allocation schemes is presented in Table \ref{tab:comp}.
\begin{table}[!h]
\centering
\begin{footnotesize}
\begin{tabular}{P{2.0cm} P{2.5cm} P{2.5cm} P{2.5cm} P{2.5cm} }
\toprule
\multirow{2}{*}{Criterion} & \multicolumn{4}{c}{Schemes}\\ \cline{2-5}
& \multicolumn{1}{c}{COS} & \multicolumn{1}{c}{Stable matching} & \multicolumn{1}{c}{Message passing} & \multicolumn{1}{c}{Auction method} \\
\midrule
Type of the solution & Centralized & Distributed & Distributed & Distributed\\
\midrule
Algorithm execution & MBS solves the resource optimization problem (e.g., $\mathbf{P\ref{opt:combopt}}$) & MBS and underlay transmitters locally update the preference profiles, MBS runs the matching subroutine & MBS and underlay transmitters iteratively exchange the messages, MBS computes the marginals and selects the allocation & Each underlay transmitter locally runs the auction subroutine, MBS collects the parameters from all the transmitters and broadcasts the required parameters needed for the auction subroutine\\
\midrule
Optimality & Optimal & Weak Pareto optimal & Optimal subject to the weight $\omega$ & Within $K\epsilon$ of the optimal\\
\midrule
Complexity & $\mathcal{O}\left( \left(NL \right)^{K} \right)$ at the MBS & \resizebox{.99\hsize}{!}{$\mathcal{O}\left( T NL \log(NL) \right)$} at the transmitters, $\mathcal{O}(TKNL) $ at the MBS & \resizebox{.99\hsize}{!}{ $\mathcal{O} \left(T {(NL)}^2 \log \left( NL \right) \right)$} at the transmitters, $\mathcal{O}\left( T K N L \right)$ at the MBS & Linear in $N$ and $L$ per iteration at the transmitters, overall running time $\mathcal{O}\left( T KNL \Upsilon \right)$\\
\midrule
Convergence behavior & N/A & Converges to a stable matching and hence to a fixed allocation & Converges to a fixed marginal and to a fixed allocation & Converges to a fixed allocation within $K\epsilon$ of the optimal \\
\midrule
Information required by the MBS & Channel gains (e.g., CSI parameters) between all the links of the network & The preference profiles and the channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$ & The messages $\left[ \psi_{k \rightarrow \lbrace n,l \rbrace} \right]_{\forall k,n,l}$ and the channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$ & The channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$, local assignments $\mathbf{x}_k$, the cost $\mathbf{C}_k$, and the bidding information $\mathfrak{B}_k$ for $\forall k$ \\
\midrule
Algorithm overhead & High (exponential) computational complexity, requirement of all CSI parameters of the network & Build the preference profiles, exchange information to update preference profiles, execution of matching subroutine & Calculation and exchange of messages, computation of the marginals & Computation and exchange of the parameters, e.g., $I^{(n)}$ for $\forall n$, the allocation vector $\mathbf{X}$, information about highest bidders $\mathfrak{B}$, the cost vector $\mathbf{C}$\\
\toprule
\end{tabular}
\end{footnotesize}
\caption{Comparison among different resource allocation approaches}
\label{tab:comp}
\end{table}
\section{Chapter Summary and Conclusion} \label{sec:conclusion}
We have presented three comprehensive distributed solution approaches for future 5G cellular mobile communication systems. Considering a heterogeneous multi-tier 5G network, we have developed distributed radio resource allocation algorithms using three different mathematical models (e.g., stable matching, message passing, and the auction method). The properties (e.g., convergence, complexity, optimality) of these distributed solutions have also been briefly analyzed, and a qualitative comparison of the schemes has been presented.
The solution tools presented in this chapter can also be applied to the resource allocation problems in other enabling technologies for 5G systems. In particular, the mathematical tools presented in this chapter open up new opportunities to investigate other network models, such as resource allocation problems for wireless virtualization \cite{wnv_1} and cloud-based radio access networks \cite{c_ran1}. In such systems, these modeling tools need to be customized accordingly based on the objective and constraints of the resource allocation problem.
In addition to the presented solutions, there are a few game-theoretic models which have not been covered in this chapter. However, these game models can also be considered as potential distributed solution tools. Different from traditional cooperative and non-cooperative games, such game models (e.g., mean field games \cite{mfg_schedule, mfg_crn} and evolutionary games \cite{evo_wireless}) are scalable by nature, and hence applicable to model large heterogeneous 5G networks. Utilizing these advanced game models for the resource allocation problems and analyzing the performance (e.g., data rate, spectrum and energy efficiency) of 5G systems could be an interesting area of research.
\clearpage
\bibliographystyle{IEEEtran}
\chapter{Distributed Resource Allocation in 5G Cellular Networks}
\title{Distributed Resource Allocation in 5G Cellular Networks\footnote{Book chapter in \emph{Towards 5G: Applications, Requirements and Candidate Technologies}, Wiley, 2015, (Eds. Rath Vannithamby and Shilpa Telwar).
}}
\author{Monowar Hasan and Ekram Hossain \\
University of Manitoba, Canada}
\date{}
\maketitle
\section{Introduction}
The fifth generation (5G) cellular networks are expected to provide a wide variety of high-rate (i.e., 300 Mbps and 60 Mbps in downlink and uplink,
respectively, in 95 percent of locations and time \cite{metis_5g}) multimedia services. The 5G communication platform is seen as a global unified standard with seamless connectivity among existing standards, e.g., High Speed Packet Access (HSPA), Long Term Evolution-Advanced (LTE-A) and Wireless Fidelity (WiFi). Some of the emerging features and trends of 5G networks are: multi-tier dense heterogeneous networks \cite{horizon_5g, toshiba_5g}, device-to-device (D2D) and machine-to-machine (M2M) communications \cite{toshiba_5g, d2d_5g}, densification of the heterogeneous base stations (e.g., extensive use of relays and small cells) \cite{nw_dens_5g}, cloud-based radio access network \cite{toshiba_5g}, integrated use of multiple radio access technologies \cite{multi_rat_5g}, wireless network virtualization \cite{toshiba_5g}, massive and 3D MIMO \cite{toshiba_5g, mimo_5g}, millimeter wave \cite{mmw_5g} and full duplex \cite{5g_shilpa} communications.
The 5G cellular wireless systems will have a multi-tier architecture consisting of macrocells, different types of licensed small cells and D2D networks to serve users with different quality-of-service (QoS) requirements in a spectrum efficient manner. Distributed resource allocation and interference management is one of the fundamental research challenges for such multi-tier heterogeneous networks. In this chapter, we consider the radio resource allocation problem in a multi-tier orthogonal frequency division multiple access (OFDMA)-based cellular (e.g., 5G LTE-A) network. In particular, we present three novel approaches for distributed resource allocation in such networks utilizing the concepts of stable matching, factor-graph based message passing, and distributed auction.
Matching theory, a sub-field of economics, is a promising concept for distributed resource management in wireless networks. It allows low-complexity algorithmic manipulations to provide a decentralized, self-organizing solution to resource allocation problems. In matching-based resource allocation, each of the agents (e.g., radio resources and transmitter nodes) ranks the opposite set using a preference relation. The solution of the matching assigns the resources to the transmitters depending on the preferences.
The message passing approach for resource allocation provides a low (e.g., polynomial time) complexity solution by distributing the computational load among the nodes in the network. In radio resource allocation problems, the decision-making agents (e.g., radio resources and the transmitters) form a virtual graphical structure. Each node computes and exchanges simple messages with the neighboring nodes in order to find the solution of the resource allocation problem.
Similar to matching-based allocation, the auction method is also inherited from economics and used in wireless resource allocation problems. Resource allocation algorithms based on the auction method provide polynomial-complexity solutions which are shown to yield near-optimal performance. The auction process evolves through a bidding process, in which unassigned agents (e.g., transmitters) raise the costs and bid for resources simultaneously. Once the bids from all the agents are available, each resource is assigned to the highest bidder.
We illustrate each of the modeling schemes with respect to a practical radio resource allocation problem. In particular, we consider a multi-tier network consisting of a macro base station (MBS), a set of small cell base
stations (SBSs) and corresponding small cell user equipments (SUEs), as well as D2D user equipments (DUEs). There is a common set of radio resources (e.g., resource blocks [RBs]) available to the network tiers (e.g., MBS, SBSs
and DUEs). The SUEs and DUEs use the available resources (e.g., RB and power level) in an underlay manner as long as the interference caused to the macro tier (e.g., macro user equipments [MUEs]) remains below a given threshold. The goal of resource allocation is to allocate the available RBs and transmit power levels to the SUEs and DUEs in order to maximize the spectral efficiency without causing significant interference to the MUEs. We show that due to the nature of the resource allocation problem, the centralized solution is computationally expensive and also incurs a huge signaling overhead. Therefore, it may not be feasible to solve the problem by a single centralized controller node (e.g., the MBS), especially in a dense network. Hence, distributed solutions with low signaling overhead are desirable.
We assume that readers are familiar with the basics of OFDMA-based cellular wireless networks (e.g., LTE-A networks), and have a preliminary background in the theory of computing (e.g., data structures, algorithms and computational complexity). Following a brief theoretical overview of the modeling tools (e.g., stable matching, message passing
and auction algorithm), we present the distributed solution approaches for the resource allocation problem in the aforementioned network setup. We also provide a brief qualitative comparison in terms of various performance
metrics such as complexity, convergence, and algorithm overhead.
The organization of the rest of the chapter is as follows: the system model, related assumptions, and the resource allocation problem are presented in Section \ref{sec:sys_model}. The distributed solutions for the resource allocation problem, e.g., stable matching, message passing, and the auction method, are discussed in Sections \ref{sec:sm_ra}, \ref{sec:mp_ra}, \ref{sec:am_ra}, respectively. The qualitative comparisons among the resource allocation approaches are presented in Section \ref{sec:comparisons}. We conclude the chapter in Section \ref{sec:conclusion} highlighting the directions for future research. Key mathematical symbols and notations used in the chapter are summarized in Table \ref{tab:notations}.
\begin{table}[!h]
\centering
\begin{footnotesize}
\begin{tabular}{c P{10.0cm}}
\toprule
\multicolumn{1}{c}{Notation} & \multicolumn{1}{c}{Physical Interpretation} \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Network model:}} \\
$\mathcal{U}^{\mathrm m}$, $\mathcal{U}^{\mathrm s}$, $\mathcal{U}^{\mathrm d}$ & Set of MUE, SUE and D2D pairs, respectively \\
$\mathcal{K}^{\mathrm T}$, $\mathcal{K}^{\mathrm R}$ & Set of underlay transmitters and receivers, respectively \\
$\mathcal{N}$, $\mathcal{L}$ & Set of RBs and power levels, respectively \\
$K$, $N$, $L$ & Total number of underlay transmitters, RBs, and power levels, respectively \\
$u_k$ & The UE associated with underlay transmitter $k$ \\
$x_{k}^{(n,l)}, \mathbf{X}$ & Allocation indicator, whether transmitter $k$ using resource $\lbrace n, l \rbrace$ and the indicator vector, respectively\\
$g_{i,j}^{(n)}$ & Channel gain between link $i,j$ over RB $n$ \\
$\gamma_{u_k}^{(n)}$ & SINR in RB $n$ for the UE $u_k$\\
$\Gamma_{u_k}^{(n,l)}$ & Achievable SINR of the UE $u_k$ over RB $n$ using power level $l$\\
$p_{k}^{(n)}$ & Transmit power of transmitter $k$ over RB $n$\\
$R_{u_k}$ & Achievable data rate for $u_k$ \\
$I^{(n)}$, $I_{\mathrm{max}}^{(n)}$ & Aggregated interference and threshold limit for the RB $n$, respectively \\
$\mathfrak{U}_{k}^{(n,l)}$ & Utility for transmitter $k$ using resource $\lbrace n, l \rbrace$ \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Stable matching:}} \\
$\mu$ & Matching (e.g., allocation) of transmitter to the resources \\
$i_1 \succeq_j i_2$ & Preference relation for agent $j$ (i.e., $i_1$ is more preferred than $i_2$) \\
$\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$, $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$ & Preference profile for the transmitter $k$ and RB $n$, respectively \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Message passing:}} \\
$\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big)$ & Message delivered by the resource $\lbrace n,l \rbrace$ to the transmitter $k$\\
$\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ & Message from transmitter $k$ to the resource $\lbrace n,l \rbrace$ \\
$\psi_{\lbrace n,l \rbrace \rightarrow k}$ & Normalized message from the resource $\lbrace n,l \rbrace$ to the transmitter $k$ \\
$\psi_{ k \rightarrow \lbrace n,l \rbrace }$ & Normalized message from the transmitter $k$ to the resource $\lbrace n,l \rbrace$\\
$\tau_{k}^{(n,l)}$ & Node marginals for the transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Auction method:}} \\
$C_{k}^{(n,l)}$ & Cost for transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
$B_{k}^{(n,l)}$ & Data rate (multiplied by a weighting factor) achieved by transmitter $k$ using resource $\lbrace n,l \rbrace$ \\
$\mathfrak{b}_{k}^{( n,l)}$ & Local bidding information available to transmitter $k$ for the resource $\lbrace n,l \rbrace$ \\
$\epsilon$ & Minimum bid increment parameter \\
$\Theta_{k} = \lbrace n,l \rbrace$ & Assignment of resource $\lbrace n,l \rbrace$ to the transmitter $k$\\
\midrule
\multicolumn{2}{l}{$\bullet$~\textit{Miscellaneous:}} \\
$|\mathbf{y}| $ & Length of the vector $\mathbf{y}$ \\
$y(t)$ & Value of variable $y$ at any iteration $t$ \\
$z := y$ & Assignment of the value of variable $y$ to the variable $z$ \\
\textit{/* comment */} & Commented text inside algorithms \\
\toprule
\end{tabular}
\end{footnotesize}
\caption{List of major notations}
\label{tab:notations}
\end{table}
\section{System Model} \label{sec:sys_model}
\subsection{Network Model and Assumptions} \label{subsec:nw_model}
\begin{figure}[h t b]
\centering
\includegraphics[width=3.0in]{5g_systemmodel.pdf}
\caption{Schematic diagram of the heterogeneous network model. The D2D pairs, SBSs and SUEs are underlaid within the macro tier by reusing same set of radio resources.}
\label{fig:sys_mod}
\end{figure}
Let us consider a transmission scenario of a heterogeneous network as shown in Fig. \ref{fig:sys_mod}. The network consists of one MBS and a set of $C$ cellular MUEs, i.e., $\mathcal{U}^{\mathrm m} = \lbrace 1,2,\cdots, C \rbrace$. There are also $D$ D2D pairs and a cluster of $S$ SBSs located within the coverage area of the MBS. The set of SBSs is denoted by $\mathcal{S} = \lbrace 1, 2, \cdots S\rbrace$. For simplicity we assume that each SBS serves only one SUE at a single time instance and the set of SUEs is given by $\mathcal{U}^{\mathrm s} = \lbrace 1,2,\cdots, S \rbrace$. The set of D2D pairs is denoted as $\mathcal{U}^{\mathrm d} = \lbrace 1,2,\cdots, D \rbrace$. In addition, the $d$-th elements of the sets $\mathcal{U}^{\mathrm d_{T}}$ and $\mathcal{U}^{\mathrm d_{R}}$ denote the transmitter and receiver UE of the D2D pair $d \in \mathcal{U}^{\mathrm d}$, respectively. The set of UEs in the network is given by $\mathcal{U} = \mathcal{U}^{\mathrm m} \cup \mathcal{U}^{\mathrm s} \cup \mathcal{U}^{\mathrm d}$. For notational convenience, we denote by $\mathcal{K}^{\mathrm T} = \mathcal{S} \cup \mathcal{U}^{\mathrm d_T}$ the set of underlay transmitters (e.g., SBSs and transmitting D2D UEs) and by $\mathcal{K}^{\mathrm R} = \mathcal{U}^{\mathrm s} \cup \mathcal{U}^{\mathrm d_{R}}$ the set of underlay receivers (e.g., SUEs and receiving D2D UEs).
The SBSs and DUEs are underlaid within the \textit{macro tier} (e.g., MBS and MUEs). Both the macro tier and the \textit{underlay tier} (e.g., SBSs, SUEs and D2D pairs) use the same set $\mathcal{N} = \lbrace 1, 2, \cdots N \rbrace$ of orthogonal RBs\footnote{The minimum scheduling unit of the LTE-A standard is referred to as an RB. One RB consists of 12 subcarriers (e.g., 180 kHz) in the frequency domain and one sub-frame (e.g., 1 millisecond) in the time domain. For a brief overview of heterogeneous networks in the context of the LTE-A standard, refer to \cite[Chapter 1]{hetnet_book_sir}.}. Each transmitter node in the underlay tier (e.g., SBS and D2D transmitter) selects one RB from the available $N$ RBs. In addition, the underlay transmitters are capable of selecting the transmit power from a finite set of power levels, i.e., $\mathcal{L} = \lbrace 1, 2, \cdots L \rbrace$.
Each SBS and D2D transmitter should select a suitable RB-power level combination. This RB-power level combination is referred to as a \textit{transmission alignment}\footnote{Throughout this chapter we use the terms \textit{resource} and \textit{transmission alignment} interchangeably.} \cite{prabo_journal}. For each RB $n \in \mathcal{N}$, there is a predefined threshold $I_{\mathrm{max}}^{(n)}$ on the maximum aggregated interference caused by the underlay tier to the macro tier. We assume that the value of $I_{\mathrm{max}}^{(n)}$ is known to the underlay transmitters through the feedback control channels. An underlay transmitter (i.e., an SBS or a transmitting DUE) is allowed to use a particular transmission alignment as long as the cross-tier interference to the MUEs is within the threshold limit.
The system model considered here is a \textit{multi-tier heterogeneous network} since each of the network tiers (e.g., the macro tier and the underlay tier consisting of small cells and D2D UEs) has a different transmit power range, coverage region and specific set of users with different application requirements. It is assumed that the user association to the base stations (either the MBS or the SBSs) is completed prior to resource allocation. In addition, the potential DUEs are discovered during the D2D session setup by transmitting known synchronization or reference signals (i.e., beacons) \cite{network_asst_d2d}. According to our system model, only one MUE is served on each RB to avoid co-tier interference within the macro tier. However, multiple underlay UEs (e.g., SUEs and DUEs) can reuse the same RB to improve the spectrum utilization. This reuse causes severe cross-tier interference to the MUEs, as well as co-tier interference within the underlay tier, which leads to the requirement of an efficient resource allocation scheme.
\subsection{Achievable Data Rate}
The MBS transmits to the MUEs using a fixed power $p_{M}^{(n)} > 0$ for $\forall n$. For each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$, the transmit power over the RBs is determined by the vector $\mathbf{P}_{\mathrm k} = \left[ p_k^{(1)}, p_k^{(2)}, \cdots, p_k^{(N)} \right]^{\mathsf{T}}$ where $p_k^{(n)} \geq 0$ denotes the transmit power level of the transmitter $k$ over RB $n$. The transmit power $p_k^{(n)}, ~\forall n$ must be selected from the finite set of power levels $\mathcal{L}$. Note that if the RB $n$ is not allocated to the transmitter $k$, the corresponding power variable $p_k^{(n)} = 0$. Since we assume that each underlay transmitter selects only one RB, only one element in the power vector $\mathbf{P}_{\mathrm k}$ is non-zero.
All links are assumed to experience independent block fading. We denote by $g_{i, j}^{(n)}$ the channel gain between the links $i$ and $j$ over RB $n$, defined by $g_{i, j}^{(n)} = \beta_{i,j}^{(n)} d_{i,j}^{-\alpha}$ where $\beta_{i,j}^{(n)}$ denotes the channel fading component between links $i$ and $j$ over RB $n$, $d_{i,j}$ is the distance between nodes $i$ and $j$, and $\alpha$ is the path-loss exponent.
For the SUEs, we denote by $u_k$ the SUE associated with SBS $k \in \mathcal{S}$, and for the DUEs, $u_k$ refers to the receiving D2D UE of the D2D transmitter $k \in \mathcal{U}^{\mathrm d_{T}}$. The received signal-to-interference-plus-noise ratio (SINR) for any arbitrary SUE or D2D receiver, i.e., $u_k \in \mathcal{K}^{\mathrm R}, k \in \mathcal{K}^{\mathrm T}$ over RB $n$ is given by
\begin{equation} \label{eq:sinr_underlay}
\gamma_{u_k}^{(n)} = \frac{g_{k, u_k}^{(n)}p_{k}^{(n)}}{\underbrace{ g_{M, u_k}^{(n)}p_{M}^{(n)}}_\text{interference from macro tier} + \underbrace{\sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k }} g_{k^\prime, u_k}^{(n)} p_{k^\prime}^{(n)}}_\text{interference from underlay tier} + ~\sigma^2}
\end{equation}
where $g_{k,u_k}^{(n)}$ is the link gain between the SBS and SUE (e.g., $u_k \in \mathcal{U}^{\mathrm s}, k \in \mathcal{S}$) or the link gain between the D2D UEs (e.g., $u_k \in \mathcal{U}^{\mathrm d_{R}}, k \in \mathcal{U}^{\mathrm d_T}$), and $g_{M, u_k}^{(n)}$ is the interference gain between the MBS and the UE $u_k$. In Equation (\ref{eq:sinr_underlay}), the variable $\sigma^2 = N_0 B_{\mathrm{RB}}$ where $B_{\mathrm {RB}}$ is the bandwidth corresponding to an RB and $N_0$ denotes the thermal noise. Similarly, the SINR for the MUE $m \in \mathcal{U}^{\mathrm m}$ over RB $n$ can be written as follows:
\begin{equation}
\gamma_{m}^{(n)} = \frac{g_{M, m}^{(n)}p_{M}^{(n)}}{\sum\limits_{\substack{ k \in \mathcal{K}^{\mathrm T} }} g_{k, m}^{(n)} p_{k}^{(n)} + ~\sigma^2}.
\end{equation}
Given the SINR, the data rate of the UE $u \in \mathcal{U}$ over RB $n$ can be calculated according to Shannon's formula, i.e., $R_{u}^{(n)} = B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u}^{(n)} \right)$.
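As an illustration, the SINR and rate expressions above can be transcribed directly into code. This is a minimal numerical sketch; all gains, powers, and the noise value used below are hypothetical placeholders, not values from the chapter.

```python
import math

def sinr_underlay(g_direct, p_tx, g_macro, p_macro, cross_gains, cross_powers, noise):
    """SINR of an underlay receiver u_k on one RB: direct signal over
    macro-tier interference + underlay co-tier interference + noise power."""
    interference = g_macro * p_macro + sum(g * p for g, p in zip(cross_gains, cross_powers))
    return (g_direct * p_tx) / (interference + noise)

def rate_bps(sinr, rb_bandwidth_hz):
    """Shannon rate over one RB: B_RB * log2(1 + SINR)."""
    return rb_bandwidth_hz * math.log2(1.0 + sinr)
```

For instance, a direct gain of $1$ and transmit power $2$ against a total interference-plus-noise of $2$ give an SINR of $1$, i.e., a spectral efficiency of $1$ bit/s/Hz over the RB bandwidth.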
\subsection{Formulation of the Resource Allocation Problem} \label{subsec:rap_org}
The objective of the resource (i.e., RB and transmit power) allocation problem is to obtain the assignment of RBs and power levels (e.g., transmission alignments) for the underlay UEs (e.g., D2D UEs and SUEs) that maximizes the achievable sum data rate. The RB and power level allocation indicator for any underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ is denoted by a binary decision variable $x_{k}^{(n, l)}$ where
\begin{equation}
x_{k}^{(n, l)} = \begin{cases}
1, \quad \text{if the transmitter $k$ is transmitting over RB $n$ with power level $l$} \\
0, \quad \text{otherwise.}
\end{cases}
\end{equation}
Note that the decision variable $x_{k}^{(n, l)} = 1$ implies that $p_k^{(n)} = l$. Let $K = S + D$ denote the total number of underlay transmitters. The achievable data rate of an underlay UE $u_k$ with the corresponding transmitter $k$ is written as
\begin{equation} \label{eq:rate_ue}
R_{u_k} = \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~x_{k}^{(n,l)} B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u_k}^{(n)} \right).
\end{equation}
The aggregated interference experienced on RB $n$ is given by $I^{(n)} = \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)}$, where $m_k^* = \underset{m}{\operatorname{argmax}}~ g_{k,m}^{(n)}, ~\forall m \in \mathcal{U}^{\mathrm m}$.
In order to calculate the aggregated interference $I^{(n)}$ on RB $n$ we use the concept of the reference user \cite{ref_user}. For any RB $n$, the interference caused by the underlay transmitter $k$ is determined by the highest gain between the transmitter $k$ and the MUEs, i.e., the MUE $m_k^*$ that is most affected by the transmitter $k$. Satisfying the interference constraint with respect to the reference user also satisfies the interference constraints for all other MUEs. As mentioned in Section \ref{subsec:nw_model}, an underlay transmitter is allowed to use a particular transmission alignment only when it does not violate the interference threshold to the MUEs, i.e., $I^{(n)} < I_{\mathrm{max}}^{(n)}, ~\forall n$. Mathematically, the resource allocation problem can be expressed by using the following optimization formulation:
\begin{myoptimizationproblem} \label{opt:combopt}
\vspace*{-2.0em}
\begin{subequations}
\begin{align}
\hspace{3em} \underset{x_{k}^{(n,l)},~ p_k^{(n)}}{\operatorname{max}} ~ \sum_{\substack{k =1}}^{K} \sum_{n = 1}^{N} \sum_{l = 1}^{L} ~x_{k}^{(n,l)} & B_{\mathrm {RB}} \log_2\left( 1 + \gamma_{u_k}^{(n)} \right) \nonumber \\
\text{subject~ to:} \hspace{7em} \nonumber\\
\sum_{k =1}^{K}\sum_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} &< I_{\mathrm{max}}^{(n)}, \quad \forall n \in\mathcal{N} \label{eq:opt_intf}\\
\sum_{n = 1}^{N} \sum_{l = 1}^{L} x_{k}^{(n,l)} &\leq 1, \quad \quad ~~\forall k \in \mathcal{K}^{\mathrm T} \label{eq:opt_rbpw}\\
x_{k}^{(n,l)} &\in \lbrace 0, 1 \rbrace, ~~~ \forall k \in \mathcal{K}^{\mathrm T},~\forall n \in \mathcal{N},~\forall l \in \mathcal{L} \label{eq:opt_bin}
\end{align}
\end{subequations}
\end{myoptimizationproblem}
where \vspace*{-0.3em} \begin{equation} \label{eq:sinr_formulation}
\gamma_{u_k}^{(n)} = \frac{g_{k, u_k}^{(n)}p_{k}^{(n)}}{ g_{M, u_k}^{(n)}p_{M}^{(n)} + \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}},\\ k^\prime \neq k }} \sum\limits_{l^\prime = 1}^{L} x_{k^\prime}^{(n,l^\prime)} g_{k^\prime, u_k}^{(n)} p_{k^\prime}^{(n)} + ~\sigma^2}.
\end{equation}
The objective of the resource allocation problem $\mathbf{P\ref{opt:combopt}}$ is to maximize the data rate of the SUEs and DUEs subject to the set of constraints given by Equations (\ref{eq:opt_intf})-(\ref{eq:opt_bin}). With the constraint in Equation (\ref{eq:opt_intf}), the aggregated interference caused to the MUEs by the underlay transmitters on each RB is limited by a predefined threshold. The constraint in Equation (\ref{eq:opt_rbpw}) indicates that each underlay transmitter selects at most one RB and only one power level on that RB. The binary nature of the transmission alignment selection variable is represented by the constraint in Equation (\ref{eq:opt_bin}).
\begin{corollary}
The resource allocation problem $\mathbf{P\ref{opt:combopt}}$ is a combinatorial non-convex non-linear optimization problem, and the centralized solution of the above problem is strongly NP-hard, especially for large sets of ~$\mathcal{U}$, $\mathcal{N}$, and $\mathcal{L}$.
\end{corollary}
The complexity of solving the above problem by exhaustive search is $\mathcal{O}\left( \left(NL \right)^{K} \right)$. As an example, when $N=6, L=3,$ and $K = 3+2 = 5$, the decision set (e.g., search space) contains $1889568$ possible transmission alignments.
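To make the search-space growth concrete, the following sketch enumerates all $(NL)^K$ transmission alignments of a toy instance and keeps the feasible one (per-RB aggregated interference below the threshold, computed with the reference-user gains) with the largest sum rate. All channel values below are hypothetical, and fading is assumed flat across RBs purely for brevity.

```python
import itertools
import math

def solve_exhaustive(N, L_levels, gains, macro_gains, ref_gains, p_M, I_max, noise, B_rb):
    """Brute force over all (N*L)^K transmission alignments: each transmitter
    picks one (RB, power level) pair; keep the feasible alignment (aggregated
    interference on every RB below I_max) with the largest sum rate.
    gains[k][j]: gain from transmitter k to receiver u_j (flat across RBs)."""
    K = len(gains)
    pairs = [(n, l) for n in range(N) for l in L_levels]
    best, best_rate = None, -math.inf
    for a in itertools.product(pairs, repeat=K):
        # aggregated interference per RB using the reference-user gains
        I = [0.0] * N
        for k, (n, l) in enumerate(a):
            I[n] += ref_gains[k] * l
        if any(I[n] >= I_max for n in range(N)):
            continue  # violates the interference threshold constraint
        rate = 0.0
        for k, (n, l) in enumerate(a):
            # co-tier interference from other underlay transmitters on RB n
            co = sum(gains[kp][k] * lp for kp, (rb, lp) in enumerate(a)
                     if kp != k and rb == n)
            sinr = gains[k][k] * l / (macro_gains[k] * p_M + co + noise)
            rate += B_rb * math.log2(1.0 + sinr)
        if rate > best_rate:
            best, best_rate = a, rate
    return best, best_rate
```

Even for this two-transmitter, two-RB toy case the search visits $(2 \cdot 2)^2 = 16$ alignments; at $N=6$, $L=3$, $K=5$ the same loop would visit $18^5 = 1889568$, which is why distributed heuristics are pursued instead.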
Considering the computational overhead, it is not feasible to solve the resource allocation problem by a single central controller (e.g., the MBS) in a practical system; moreover, such a centralized solution approach requires all the channel state information (CSI) to be available at the MBS.
Due to mathematical intractability of solving the above resource allocation problem, in the following we present three distributed heuristic solution approaches, namely, stable matching, factor graph based message passing, and distributed auction-based approaches. The distributed solutions are developed under the assumption that the system is feasible, i.e., given the resources and parameters (e.g., size of the network, interference thresholds etc.), it is possible to obtain an allocation that satisfies all the constraints of the original optimization problem.
\section{Resource Allocation Using Stable Matching} \label{sec:sm_ra}
The resource allocation approach using stable matching involves multiple decision-making agents, i.e., the available radio resources (transmission alignments) and the underlay transmitters; the solutions (i.e., matchings between transmission alignments and transmitters) are produced by individual actions of the agents. The actions, i.e., matching requests and confirmations or rejections, are determined by the given \textit{preference profiles}, i.e., each agent holds a list of preferred matches over the opposite set. The matching outcome yields mutually beneficial assignments between the transmitters and available resources that are individually conducted by such preference lists. In our model, the preference could be based on CSI parameters and the achievable SINR. \textit{Stability} in matching implies that, with regard to their initial preferences, neither the underlay transmitters nor the MBS (e.g., transmission alignments) have an incentive to alter the allocation.
\subsection{Concept of Matching}
A \textit{matching} (i.e., allocation) is given as an assignment of transmission alignments to the underlay transmitters forming the set $\lbrace k, n, l \rbrace \in \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L}$. According to our system model, each underlay transmitter is assigned to only one RB; however, multiple transmitters can transmit on the same RB to improve spectrum utilization. This scheme corresponds to a \textit{many-to-one} matching in the theory of stable matching. More formally, the matching can be defined as follows \cite{sm_def}:
\begin{definition} \label{def:matching}
A matching $\mu$ is defined as a function, i.e., $\mu: \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L} \rightarrow \mathcal{K}^{\mathrm T} \times \mathcal{N} \times \mathcal{L}$ such that
\begin{enumerate} [label={\roman*)}]
\setlength{\itemsep}{0.5pt}%
\setlength{\parskip}{0pt}%
\item $\mu(k) \in \mathcal{N} \times \mathcal{L} $ and $|\mu_l(n)| \in \lbrace 0,1 \rbrace$ \quad \mbox{and}
\item $\mu(n) \in \left\lbrace\mathcal{K}^{\mathrm T} \times \mathcal{L}\right\rbrace \cup \lbrace \varnothing \rbrace$ and $|\mu(n)| \in \lbrace 1, 2, \ldots, K \rbrace$
\end{enumerate}
where $\mu(k) = \lbrace n, l\rbrace \Leftrightarrow \mu(n) = \lbrace k, l\rbrace$ for $ \forall k \in \mathcal{K}^{\mathrm T}, \forall n \in \mathcal{N}, \forall l \in \mathcal{L},$ and $|\mu(\cdot)|$ denotes the cardinality of matching outcome $\mu(\cdot)$.
\end{definition}
The above \textbf{Definition \ref{def:matching}} implies that $\mu$ is a one-to-one matching if the input to the function is an underlay transmitter. On the other hand, $\mu$ is a one-to-many function, i.e., $\mu_l(n)$ is not unique, if the input to the function is an RB. The interpretation of $\mu(n) = \varnothing$ is that for some RB $n \in \mathcal{N}$ the RB is not used by any underlay transmitter under the matching $\mu$. The outcome of the matching determines the RB allocation vector and the corresponding power levels, e.g., $\mu \equiv \mathbf{X} $, where
\begin{equation} \label{eq:rap_X}
\mathbf{X} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}.
\end{equation}
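The equivalence $\mu \equiv \mathbf{X}$ is a plain re-encoding; a small sketch of the conversion (zero-based indices for transmitters, RBs, and power levels, a purely illustrative convention):

```python
def matching_to_X(mu, K, N, L):
    """Encode a matching mu: k -> (n, l) as the flat binary allocation
    vector X of Equation (eq:rap_X), with x_k^{(n,l)} at position k*N*L + n*L + l."""
    X = [0] * (K * N * L)
    for k, (n, l) in mu.items():
        X[k * N * L + n * L + l] = 1  # transmitter k uses RB n at power level l
    return X
```

An RB left unused by all transmitters ($\mu(n) = \varnothing$) simply contributes no nonzero entries for that $n$.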
\subsection{Utility Function and Preference Profile}
Let the parameter $\Gamma_{u_k}^{(n, l)} \triangleq {\gamma_{u_k}^{(n)}}_{ \!\! \vert p_k^{(n)} = l }$ denote the achievable SINR of the UE $u_k$ over RB $n$ using power level $l$ (e.g., $p_k^{(n)} = l$) where $\gamma_{u_k}^{(n)}$ is given by Equation (\ref{eq:sinr_formulation}). We express the data rate as a function of SINR. In particular, let $\mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) = B_{\mathrm {RB}} \log_2 \left(1 + \Gamma_{u_k}^{(n, l)} \right)$ denote the achievable data rate for the transmitter $k$ over RB $n$ using power level $l$. The utility of an underlay transmitter for a particular transmission alignment is determined by two factors, i.e., the achievable data rate for a given RB power level combination, and an additional cost function that represents the aggregated interference caused to the MUEs on that RB. In particular, the \textit{utility} of the underlay transmitter $k$ for a given RB $n$ and power level $l$ is given by
\begin{equation} \label{eq:sm_utility}
\mathfrak{U}_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) - w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right)
\end{equation}
where $w_1$ and $w_2$ are the biasing factors and can be selected based on which network tier (i.e., macro or underlay tier) should be given priority for resource allocation \cite{prabo_journal}. As mentioned earlier, each underlay transmitter and each RB holds a list of preferred matches. The preference profile of an underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ over the set of available RBs $\mathcal{N}$ and power levels $\mathcal{L}$ is defined as a vector of linear order $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L}) = \left[\mathfrak{U}_{k}^{(n,l)} \right]_{n \in \mathcal{N}, l \in \mathcal{L}}$. We denote by $\lbrace n_1, l_1 \rbrace \succeq_k \lbrace n_2, l_2 \rbrace$ that the transmitter $k$ prefers the transmission alignment $\lbrace n_1, l_1 \rbrace$ to $\lbrace n_2, l_2 \rbrace$, and consequently, $\mathfrak{U}_{k}^{(n_1,l_1)} > \mathfrak{U}_{k}^{(n_2,l_2)}$. Similarly, each RB holds a preference over the underlay transmitters and power levels given by $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L}) = \left[\mathfrak{U}_{k}^{(n,l)} \right]_{k \in \mathcal{K}^{\mathrm T}, l \in \mathcal{L}}$.
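The construction of a transmitter's preference profile from the utility in Equation (\ref{eq:sm_utility}) can be sketched as follows; the rates and interference values passed in are hypothetical inputs, whereas in the chapter they come from the CSI estimates.

```python
def build_preference_profile(rates, interference, I_max, w1=1.0, w2=1.0):
    """Preference profile of one underlay transmitter k:
    utility U_k^{(n,l)} = w1 * R(Gamma_k^{(n,l)}) - w2 * (I^{(n)} - I_max^{(n)}),
    returned as the list of (RB, power level) pairs, most preferred first.
    rates[(n, l)]   : achievable rate of k on RB n with power level l
    interference[n] : current aggregated interference on RB n."""
    utilities = {
        (n, l): w1 * r - w2 * (interference[n] - I_max[n])
        for (n, l), r in rates.items()
    }
    # linear order over (RB, power level) pairs by decreasing utility
    return sorted(utilities, key=utilities.get, reverse=True)
```

Note how a congested RB (interference already near $I_{\mathrm{max}}^{(n)}$) is pushed down the list even when its raw rate is high, which is exactly the role of the $w_2$ term.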
\subsection{Algorithm Development}
The matching between transmission alignments and transmitters is performed in an iterative manner as presented in \textbf{Algorithm \ref{alg:ta_sm}}. While a transmitter is unallocated and has a non-empty preference list, the transmitter is temporarily assigned to its first preference over transmission alignments, e.g., the pair of RB and power level $\lbrace n,l \rbrace$. If the allocation to the RB $n$ does not violate the tolerable interference limit $I_{\mathrm{max}}^{(n)}$, the allocation persists. Otherwise, the least preferred transmitter(s) in the preference list of RB $n$ are removed, even if previously allocated, until the aggregated interference on RB $n$ falls below the threshold. The process terminates when no more transmitters are unallocated. Since the iterative process dynamically updates the preference lists, the procedure above ends up with a local stable matching \cite{matching_org_paper}.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Assignment of transmission alignments using stable matching}
\label{alg:ta_sm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\REQUIRE The preference profiles $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$,~ $\forall k \in \mathcal{K}^{\mathrm T}$ and $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$,~ $\forall n \in \mathcal{N}$.
\ENSURE The transmission alignment indicator $\mathbf{X} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}$.
\STATE Initialize $\mathbf{X} := \mathbf{0}$.
\WHILE{ some transmitter $k$ is unassigned \AND $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ is non-empty }
\STATE $\left\lbrace n_{\mathrm{mp}},l_{\mathrm{mp}} \right\rbrace :=$ most preferred RB with power level $l_{\mathrm{mp}}$ from the profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$.
\STATE Set $x_{k}^{\left(n_{\mathrm{mp}},l_{\mathrm{mp}} \right)} := 1$. ~\COMMENT{\footnotesize Temporarily assign the RB and power level to the transmitter $k$}
\STATE $\mathfrak{I}^{(n_\mathrm{mp})} := g_{k,m_k^*}^{(n_\mathrm{mp})} l_{\mathrm{mp}} + \!\! \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}},\\k^\prime \neq k}}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n_\mathrm{mp}, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n_\mathrm{mp})} p_{k^\prime}^{(n_\mathrm{mp})}$. ~\COMMENT{\footnotesize Estimate interference of $n_\mathrm{mp}$}
\IF{$\mathfrak{I}^{(n_\mathrm{mp})} \geq I_{\mathrm{max}}^{(n_\mathrm{mp})} $}
\REPEAT
\STATE $ \left\lbrace k_{\mathrm{lp}}, l_{\mathrm{lp}} \right\rbrace :=$ least preferred transmitter with power level $l_{\mathrm{lp}}$ assigned to $n_{\mathrm{mp}}$.
\STATE Set $x_{k_{\mathrm{lp}}}^{\left( n_{\mathrm{mp}}, l_{\mathrm{lp}}\right)} := 0$. ~\COMMENT{\footnotesize Revoke assignment due to interference threshold violation}
\STATE $\mathfrak{I}^{(n_\mathrm{mp})} := \sum\limits_{k^\prime =1}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n_\mathrm{mp}, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n_\mathrm{mp})} p_{k^\prime}^{(n_\mathrm{mp})}$. ~\COMMENT{\footnotesize Update interference level}
{ \footnotesize \textit{/* Update preference profiles */} }
\FORALL {successor $\lbrace \hat{k}_{\mathrm{lp}}, \hat{l}_{\mathrm{lp}} \rbrace$ of $ \left\lbrace k_{\mathrm{lp}}, l_{\mathrm{lp}} \right\rbrace$ on profile $\boldsymbol{\mathscr{P}}_{n_{\mathrm{mp}}}(\mathcal{K}^{\mathrm T}, \mathcal{L})$}
\STATE remove $\lbrace \hat{k}_{\mathrm{lp}}, \hat{l}_{\mathrm{lp}}\rbrace$ from $\boldsymbol{\mathscr{P}}_{n_{\mathrm{mp}}}(\mathcal{K}^{\mathrm T}, \mathcal{L})$.
\STATE remove $\left\lbrace n_{\mathrm{mp}}, l_{\mathrm{mp}} \right\rbrace$ from $\boldsymbol{\mathscr{P}}_{\hat{k}_{\mathrm{lp}}}(\mathcal{N}, \mathcal{L})$.
\ENDFOR
\UNTIL{$\mathfrak{I}^{(n_\mathrm{mp})} < I_{\mathrm{max}}^{(n_\mathrm{mp})}$ }
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
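A simplified sketch of the subroutine in \textbf{Algorithm \ref{alg:ta_sm}}: transmitters propose in preference order, a proposed pair is consumed so that no transmitter is rejected by the same resource twice, and the RB-side preference is reduced to an ordered list of transmitters; the successor-pruning of the preference profiles is omitted for brevity, so this is an illustrative approximation rather than the full algorithm.

```python
def stable_match(pref_tx, pref_rb, ref_gain, I_max):
    """Deferred-acceptance with interference-driven eviction.
    pref_tx[k]  : list of (RB, power level) pairs, most preferred first
    pref_rb[n]  : transmitters ranked by RB n, most preferred first
    ref_gain[k] : gain from transmitter k to its reference MUE m_k^*
    I_max[n]    : interference threshold of RB n."""
    assign = {}                      # k -> (n, l)
    queue = list(pref_tx)            # unassigned transmitters
    while queue:
        k = queue.pop(0)
        if not pref_tx[k]:
            continue                 # preferences exhausted: k stays unallocated
        n, l = pref_tx[k].pop(0)     # propose; pair is never re-proposed
        assign[k] = (n, l)
        while True:
            on_rb = [kp for kp, (rb, _) in assign.items() if rb == n]
            if sum(ref_gain[kp] * assign[kp][1] for kp in on_rb) < I_max[n]:
                break                # aggregated interference within threshold
            worst = max(on_rb, key=pref_rb[n].index)  # least preferred on RB n
            del assign[worst]        # revoke its assignment
            queue.append(worst)      # evicted transmitter proposes again later
    return assign
```

Since every proposal consumes one entry of a finite preference list, the loop necessarily terminates, mirroring the finiteness argument used later for \textbf{Theorem \ref{thm:sm_time}}.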
The overall stable matching based resource allocation approach is summarized in \textbf{Algorithm \ref{alg:ra_sm}}. Note that \textbf{Algorithm \ref{alg:ta_sm}} is executed repeatedly. The convergence of \textbf{Algorithm \ref{alg:ra_sm}} occurs when the outcomes of two consecutive local matchings are identical, e.g., $\mathbf{X}(t) = \mathbf{X}(t-1)$ and, as a consequence, $R(t) = R(t-1)$, where $R(t) = \sum\limits_{k=1}^{K} R_{u_k}(t)$ denotes the achievable sum rate of the underlay tier at iteration $t$.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Stable matching-based resource allocation}
\label{alg:ra_sm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ randomly selects a transmission alignment and the MBS broadcasts the aggregated interference of each RB using pilot signals.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ builds the preference profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ from the CSI estimations and the utility function given by Equation (\ref{eq:sm_utility}).
\STATE For each $n \in \mathcal{N}$, the MBS builds the preference profiles $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$.
\STATE Initialize number of iterations $t := 1$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than some predefined threshold $T_{\mathrm{max}}$}
\STATE MBS obtains a local stable matching $\mathbf{X}(t)$ using \textbf{Algorithm \ref{alg:ta_sm}}, calculates the aggregated interference $I^{(n)}(t)$ for $\forall n$ and informs the transmitters.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ updates the preference profile $\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L})$ based on current allocation vector $\mathbf{X}(t)$ and interference level $I^{(n)}(t)$.
\STATE MBS updates the preference profile $\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L})$ for $\forall n \in \mathcal{N}$ using $\mathbf{X}(t)$ and $I^{(n)}(t)$.
\STATE Update $t := t+1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the RB and power levels to the SBSs and D2D UEs based on the matching obtained from the update phase.
\end{algorithmic}
\end{algorithm}
\subsection{Stability, Optimality, and Complexity of the Solution}
In this section, we analyze the solution obtained by the stable matching approach. The stability, optimality, and complexity of the algorithm are discussed in the following.
\subsubsection{Stability}
The notion of stability in the matching $\mu$ means that none of the agents (e.g., either the underlay transmitters or the resources) prefers to change the allocation obtained by $\mu$. Hence, the matching $\mu$ is stable if no transmitter and no resource that are not allocated to each other in $\mu$ prefer each other to their allocations in $\mu$. The transmitters and resources are said to be \textit{acceptable} if the agents (e.g., transmitters and resources) prefer each other to remaining unallocated. In addition, a matching $\mu$ is called \textit{individually rational} if no agent $\tilde{\jmath}$ prefers being unallocated to its match $\mu(\tilde{\jmath})$. Before formally defining the stability of a matching, we introduce the term \textit{blocking pair}, which is defined as follows:
\begin{definition} \label{def:blocking}
A matching $\mu$ is \textbf{blocked} by a pair of agents $(i,j)$ if they prefer each other to the matching obtained by $\mu$, i.e., $i \succeq_{j} \mu(j)$ and $j \succeq_{i} \mu(i)$.
\end{definition}
Using the above definition, the stability of the matching can be defined as follows \cite[Chapter 5]{matchingbook_two}:
\begin{definition} \label{def:stability}
A matching $\mu$ is \textbf{stable} if it is individually rational and there is no tuple $(k, n, l)$ within the set of acceptable agents such that $k$ prefers $\lbrace n,l \rbrace$ to $\mu(k)$ and $n$ prefers $\lbrace k,l \rbrace$ to $\mu(n)$, i.e., not blocked by any pair of agents.
\end{definition}
The following theorem shows that the solution obtained by the matching algorithm is stable.
\begin{theorem} \label{thm:stability}
The assignment performed in \textbf{Algorithm \ref{alg:ta_sm}} leads to a stable allocation.
\end{theorem}
\begin{proof}
We prove the theorem by contradiction. Let $\mu$ be a matching obtained by \textbf{Algorithm \ref{alg:ta_sm}}. Let us assume that the resource $\lbrace n, l \rbrace$ is not allocated to the transmitter $k$, although it ranks higher in the preference list of $k$ than the allocated resource. According to this assumption, the tuple $(k, n, l)$ will block $\mu$. Since the position of the resource $\lbrace n, l \rbrace$ in the preference profile of $k$ is higher compared to any resource $\lbrace \hat{n}, \hat{l} \rbrace$ that is matched by $\mu$, i.e., $\lbrace n, l \rbrace \succeq_{k} \mu(k) $, transmitter $k$ must select $\lbrace n, l \rbrace$ before the algorithm terminates. Note that the resource $\lbrace n,l \rbrace$ is not assigned to transmitter $k$ in the matching outcome $\mu$. This implies that the assignment of $k$ to the resource $\lbrace n , l \rbrace$ was revoked (e.g., line 9 in \textbf{Algorithm \ref{alg:ta_sm}}) and $(k, \hat{n}, \hat{l})$ is a better assignment. As a result, the tuple $(k, n, l)$ will not block $\mu$, which contradicts our assumption. The proof concludes since no blocking pair exists, and therefore, the matching outcome $\mu$ is stable.
\end{proof}
It is worth mentioning that the assignment is stable at each iteration of \textbf{Algorithm \ref{alg:ta_sm}}. Since after evaluation of the utility, the preference profiles are updated and the matching subroutine is repeated, a stable allocation is obtained at each iteration.
\subsubsection{Optimality}
The optimality property of the stable matching approach can be observed using the definition of weak Pareto optimality. Let $\mathcal{R}_{\mu}$ denote the sum rate obtained by the matching $\mu$. A matching $\mu$ is weak Pareto optimal if there is no other matching $ \widehat{\mu}$ that can achieve a strictly better sum rate, i.e., $\mathcal{R}_{\widehat{\mu}} > \mathcal{R}_{\mu} $ \cite{sm_def}.
\begin{theorem}
The stable matching-based resource allocation algorithm is weak Pareto optimal.
\end{theorem}
\begin{proof}
Let us consider $\mu$ to be the stable allocation obtained by \textbf{Algorithm \ref{alg:ta_sm}}. For the sake of contradiction, let $\widehat{\mu}$ be an arbitrary stable outcome better than $\mu$, i.e., $\widehat{\mu}$ can achieve a better sum rate. Since the allocation $\widehat{\mu}$ is better than $\mu$, there exists at least one resource $\lbrace \hat{n}, \hat{l} \rbrace$ allocated to transmitter $k$ in $\widehat{\mu}$, while $k$ is allocated to the resource $\lbrace n, l \rbrace$ in $\mu$. According to our assumption, $k$ prefers $\lbrace \hat{n}, \hat{l} \rbrace$ to $\lbrace n,l \rbrace$, and let $\lbrace \hat{n}, \hat{l} \rbrace$ be allocated to transmitter $\hat{k}$ in $\mu$. It follows that the resource $\lbrace \hat{n}, \hat{l} \rbrace$ is better than $\lbrace n,l \rbrace$ for transmitter $k$ and $\lbrace k, l \rbrace$ is better than $\lbrace \hat{k}, \hat{l} \rbrace$ for the resource $\hat{n}$, i.e., $\lbrace \hat{n}, \hat{l} \rbrace \succeq_k \lbrace n,l \rbrace$
and $\lbrace k, l \rbrace \succeq_{\hat{n}} \lbrace \hat{k}, \hat{l} \rbrace$.
By the definition of a blocking pair, $\mu$ is blocked by $(k, \hat{n}, \hat{l})$ and hence $\mu$ is unstable. This contradicts our assumption that $\mu$ is a stable allocation. Since there is no stable outcome $\widehat{\mu}$ which is better than $\mu$, by definition $\mu$ is a weak Pareto optimal allocation.
\end{proof}
\subsubsection{Complexity}
It is possible to show that the stable matching algorithm iterates a finite number of times.
\begin{theorem}
\label{thm:sm_time}
The RB allocation subroutine terminates after a finite number of steps $T^\prime$.
\end{theorem}
\begin{proof}
Let the finite set $\tilde{\mathcal{X}}$ represent all possible combinations of transmitter-resource matchings, where each element $\tilde{x}_{k}^{(n,l)} \in \tilde{\mathcal{X}}$ denotes that the resource $\lbrace n,l \rbrace$ is allocated to the transmitter $k$. Since no transmitter is rejected by the same resource more than once (i.e., line 9 in \textbf{Algorithm \ref{alg:ta_sm}}), the finiteness of the set $\tilde{\mathcal{X}}$ ensures the termination of the matching subroutine in a finite number of steps.
\end{proof}
For each underlay transmitter, the complexity to build the preference profile using any standard sorting algorithm is $\mathcal{O}\left( NL \log( NL) \right)$ (line 8, \textbf{Algorithm \ref{alg:ra_sm}}). Similarly, in line 9, the complexity to output the ordered preference profiles for the RBs is $\mathcal{O}\left( N KL \log (KL) \right)$. Let $\xi = \displaystyle \sum_{k = 1 }^{K} |\boldsymbol{\mathscr{P}}_{k}(\mathcal{N}, \mathcal{L}) | + \sum_{n = 1}^{N} |\boldsymbol{\mathscr{P}}_{n}(\mathcal{K}^{\mathrm T}, \mathcal{L}) | = 2 K N L$ be the total length of the input preferences in \textbf{Algorithm \ref{alg:ta_sm}}, where $|\boldsymbol{\mathscr{P}}_j(\cdot)|$ denotes the length of the profile vector $\boldsymbol{\mathscr{P}}_j(\cdot)$. From \textbf{Theorem \ref{thm:sm_time}} and \cite[Chapter 1]{matching_thesis} it can be shown that, if implemented with suitable data structures, the time complexity of the RB allocation subroutine is linear in the size of the input preference profiles, i.e., $\mathcal{O}(\xi) = \mathcal{O}\left( K N L \right)$. Since the update phase of \textbf{Algorithm \ref{alg:ra_sm}} runs for at most a fixed number of iterations $T < T_{\mathrm{max}}$, the complexity of the stable matching-based solution is linear in $K, N, L$.
\section{Message Passing Approach for Resource Allocation} \label{sec:mp_ra}
In the following, we reformulate the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ in such a way that it can be solved with a message passing (MP) technique. The MP approach involves computation of the \textit{marginals} from the messages exchanged between the nodes of a specific graphical model. Among the different representations of graphical models, we consider a \textit{factor graph} based MP scheme. A factor graph is made up of two different types of nodes, i.e., \textit{function} and \textit{variable} nodes, and an edge connects a function (e.g., factor) node to a variable node if and only if the variable appears in the function. Mathematically, this can be expressed as follows \cite{factorgraph_thory}:
\begin{definition}
A factor graph can be represented by a $\mathcal{V}$-$\mathcal{F}$ bipartite graph where $\mathcal{V} = \left\lbrace v_1, \cdots, v_a \right\rbrace$ is the set of variable nodes and $\mathcal{F} = \left\lbrace f_1(\cdot), \cdots, f_b(\cdot) \right\rbrace$ is the set of function (e.g., factor) nodes. The connectivity (e.g., edges) of the factor graph can be represented by an $a \times b$ binary matrix $\mathbf{E} = [E_{i,j}]$ where $E_{i,j} = 1$ if the variable node $i$ is connected with the factor node $j$ and $E_{i,j} = 0$ otherwise.
\end{definition}
\subsection{Overview of the MP Scheme} \label{subsec:mp_overview}
Before presenting the details of the resource allocation approach for a heterogeneous scenario, we briefly introduce the generic MP scheme (for the details of the factor graph based MP scheme, refer to \cite{factorgraph_thory}). Let us consider the maximization of an arbitrary function $f(v_1, \cdots, v_J)$ over all possible values of the arguments, i.e., $Z = \underset{\mathbf{v}}{\operatorname{max}} ~ f(\mathbf{v})$ where $\mathbf{v} = \left[ v_1, \cdots, v_J \right]^{\mathsf{T}}$. We denote by $\underset{\mathbf{v}}{\operatorname{max}}$ that the maximization is computed over all possible combinations of the elements of the vector $\mathbf{v}$. The \textit{marginal} of $Z$ with respect to the variable $v_j$ is given by $\phi_j(v_j) = \underset{\sim (v_j)}{\operatorname{max}} ~f(\mathbf{v})$ where $\underset{\sim (\cdot)}{\operatorname{max}}$ denotes the maximization over all variables except $(\cdot)$. Let us now decompose $f(\mathbf{v})$ into a summation of $I$ functions, i.e., $\sum\limits_{i=1}^{I} f_{i}(\hat{v}_i)$ where $\hat{v}_i$ is a subset of the elements of the vector $\mathbf{v}$, and let $\mathbf{f}= \left[ f_{1}(\cdot), \cdots, f_{I}(\cdot) \right]^{\mathsf{T}}$ be the vector of the $I$ functions. In addition, let $\mathfrak{f}_j$ represent the subset of functions in $\mathbf{f}$ in which the variable $v_j$ appears. Hence the marginal can be rewritten as $\phi_j(v_j) = \underset{\sim (v_j)}{\operatorname{max}} ~\sum\limits_{i=1}^{I} f_{i}(\hat{v}_i)$. According to the \textit{max-sum} MP strategy, the message passed by any variable node $v_j$ to any generic function node $f_i(\cdot)$ is given by $\delta_{v_j \rightarrow f_i(\cdot)}(v_j) = \sum\limits_{i^\prime \in \mathfrak{f}_j, i^\prime \neq i} \delta_{f_{i^\prime}(\cdot) \rightarrow v_j }(v_j)$.
Similarly, the message from function node $f_i{(\cdot)}$ to variable node $v_j$ is given as $\delta_{f_i(\cdot) \rightarrow v_j}(v_j) = \underset{\sim (v_j)}{\operatorname{max}} \left(f_i(v_1, \cdots, v_J) + \sum\limits_{j^\prime \in \hat{v}_i, j^\prime \neq j} \delta_{v_{j^\prime} \rightarrow f_{i}(\cdot)}(v_{j^\prime}) \right) $. When the factor graph is cycle free (e.g., there is a unique path connecting any two nodes), all the variable nodes $j \in \lbrace 1, \cdots, J \rbrace$ can compute the marginals as $\phi_j(v_j) = \sum\limits_{i=1}^{I} \delta_{f_{i}(\cdot) \rightarrow v_j }(v_j)$. Utilizing the general distributive law (e.g., $\operatorname{\max} \sum = \sum \operatorname{\max}$) \cite{mp_distributive}, the maximization can therefore be computed as $Z = \sum\limits_{j=1}^{J} \underset{v_j}{\operatorname{max}} ~\phi_j(v_j)$.
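The max-sum recursions above can be illustrated on the smallest cycle-free factor graph, the chain $f_1 - v_1 - f_2 - v_2 - f_3$; this is a generic toy example with arbitrary factor values, unrelated to the resource allocation problem itself.

```python
def max_sum_chain(f1, f2, f3, dom1, dom2):
    """Max-sum on the cycle-free chain f1 - v1 - f2 - v2 - f3:
    the factor-to-variable message from f2 to v1 maximizes v2 out (after
    absorbing the variable-to-factor message from v2, which here is just f3),
    and the marginal of v1 is the sum of all incoming factor messages."""
    # message v2 -> f2: sum of v2's other factor messages (only f3)
    delta_v2_to_f2 = {v2: f3(v2) for v2 in dom2}
    # message f2 -> v1: maximize over v2
    delta_f2_to_v1 = {v1: max(f2(v1, v2) + delta_v2_to_f2[v2] for v2 in dom2)
                      for v1 in dom1}
    # marginal of v1, and the global maximum Z = max_{v1} phi(v1)
    phi_v1 = {v1: f1(v1) + delta_f2_to_v1[v1] for v1 in dom1}
    return phi_v1, max(phi_v1.values())
```

Because the chain is a tree, the marginal $\phi_1(v_1)$ computed from messages equals the exact maximization of $f_1(v_1) + f_2(v_1, v_2) + f_3(v_2)$ over $v_2$, which is the distributive-law identity quoted above.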
\subsection{Reformulation of the Resource Allocation Problem Utilizing MP Approach}
In order to solve the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ presented in Section \ref{subsec:rap_org} using MP, we reformulate it as a utility maximization problem. Let us define the reward functions $\mathfrak{W}_n(\mathbf{X})$ and $\mathfrak{R}_k(\mathbf{X})$ where the transmission alignment vector $\mathbf{X}$ is given by Equation (\ref{eq:rap_X}). With the constraint in Equation (\ref{eq:opt_intf}), we can define $\mathfrak{W}_n(\mathbf{X})$ as follows:
\begin{equation} \label{eq:util_mp1}
\mathfrak{W}_n(\mathbf{X}) =
\begin{cases} 0, & \text{if~} \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} < I_{\mathrm{max}}^{(n)} \\
- \infty, & \text{otherwise.}\end{cases}
\end{equation}
Similarly, to deal with the constraint in Equation (\ref{eq:opt_rbpw}), we define $\mathfrak{R}_k(\mathbf{X})$ as
\begin{equation} \label{eq:util_mp2}
\mathfrak{R}_k(\mathbf{X}) =
\begin{cases} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~x_{k}^{(n,l)} B_{\mathrm {RB}} \log_2\left( 1 + \gamma_{u_k}^{(n)} \right), & \text{if~} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} x_{k}^{(n,l)} \leq 1 \\
- \infty, & \text{otherwise.}\end{cases}
\end{equation}
The interpretations of the reward functions in Equations (\ref{eq:util_mp1}) and (\ref{eq:util_mp2}) are straightforward. Satisfying the interference constraint in Equation (\ref{eq:opt_intf}) incurs no penalty (i.e., zero reward) in the function $\mathfrak{W}_n(\mathbf{X})$, while in the function $\mathfrak{R}_k(\mathbf{X})$ fulfillment of the RB requirement constraint in Equation (\ref{eq:opt_rbpw}) yields the desired data rate. However, in both functions $\mathfrak{W}_n(\mathbf{X})$ and $\mathfrak{R}_k(\mathbf{X})$, violating the respective constraints in Equations (\ref{eq:opt_intf}) and (\ref{eq:opt_rbpw}) results in an infinite penalty.
From Equations (\ref{eq:util_mp1}) and (\ref{eq:util_mp2}), the resource allocation problem $\mathbf{P \ref{opt:combopt}}$ can be rewritten as
$$\underset{\mathbf{X}}{\operatorname{max}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right)$$
and the optimal transmission allocation vector is therefore given by
\begin{equation} \label{eq:mp_Xall}
\mathbf{X}^* = \underset{\mathbf{X}}{\operatorname{argmax}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right).
\end{equation}
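As an illustration of how the reward functions turn the constrained problem into an unconstrained maximization, the following Python sketch brute-forces this objective on a tiny synthetic instance (two transmitters, two RBs, two power levels; all gains, powers, rates, and thresholds are invented for illustration only):

```python
import itertools

# Tiny synthetic instance: K=2 underlay transmitters, N=2 RBs, L=2 power
# levels. Gains g, power levels p_lvl, rates R, and thresholds I_max are
# made-up numbers, purely for illustration.
K, N, L = 2, 2, 2
g = [[0.5, 0.2], [0.3, 0.6]]          # g[k][n]: gain towards the MUE on RB n
p_lvl = [1.0, 2.0]                    # transmit power of level l
I_max = [1.0, 1.0]                    # interference threshold per RB
R = [[[1.0, 1.5], [0.8, 1.2]],        # R[k][n][l]: achievable rate
     [[0.9, 1.4], [1.1, 1.6]]]

def W(n, X):   # interference reward: 0 if feasible, -inf otherwise
    interf = sum(X[k][n][l] * g[k][n] * p_lvl[l]
                 for k in range(K) for l in range(L))
    return 0.0 if interf < I_max[n] else -float("inf")

def Rk(k, X):  # rate reward: -inf if more than one alignment is taken
    if sum(X[k][n][l] for n in range(N) for l in range(L)) > 1:
        return -float("inf")
    return sum(X[k][n][l] * R[k][n][l] for n in range(N) for l in range(L))

# exhaustive search over all binary allocation vectors X
best, best_X = -float("inf"), None
for flat in itertools.product((0, 1), repeat=K * N * L):
    X = [[[flat[(k * N + n) * L + l] for l in range(L)]
          for n in range(N)] for k in range(K)]
    val = sum(W(n, X) for n in range(N)) + sum(Rk(k, X) for k in range(K))
    if val > best:
        best, best_X = val, X
print(best)
```

The exhaustive search is exponential in $KNL$ and serves only as a reference point; the MP scheme below computes the same kind of maximizer in a distributed fashion.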
Since our goal is to obtain a distributed solution for the above resource allocation problem, we focus on a single transmission alignment allocation variable, say $x_{k}^{(n,l)}$. From Equation (\ref{eq:mp_Xall}) we obtain ${x_{k}^{(n,l)}}^* = \underset{x_{k}^{(n,l)}} {\operatorname{argmax}}~ \phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big)$ where the marginal $\phi_{k}^{(n,l)} \big(x_{k}^{(n,l)} \big)$ is given by
\begin{equation} \label{eq:marginal_mp}
\phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big) = \underset{\sim \bigl(x_{k}^{(n,l)} \bigl)}{\operatorname{max}} \left( \sum\limits_{n=1}^{N} \mathfrak{W}_n(\mathbf{X}) + \sum\limits_{k=1}^{K} \mathfrak{R}_k(\mathbf{X}) \right).
\end{equation}
As mentioned in the previous section, $\underset{\sim \bigl(x_{k}^{(n,l)} \bigl)}{\operatorname{max}}$ denotes the maximization over all variables in $\mathbf{X}$ except $x_{k}^{(n,l)}$. The marginalization in Equation (\ref{eq:marginal_mp}) can be computed in a distributed way, where each node conveys the solution of a local problem to the others by passing information messages according to the max-sum MP strategy. Note that, according to our system model, the underlay transmitters and the resources (i.e., transmission alignments) form a bipartite graph, i.e., each transmission alignment $\lbrace n,l \rbrace$ can be assigned to any of the $K$ transmitters as long as the interference to the MUEs on RB $n$ is below the threshold. Without loss of generality, let us consider a generic transmission alignment, i.e., an RB-power level pair $\lbrace n,l \rbrace \in \mathcal{N} \times \mathcal{L}$, and an underlay transmitter $k \in \mathcal{K}^{\mathrm T}$. Using the function in Equation (\ref{eq:util_mp1}) and utilizing the max-sum MP strategy presented in Section \ref{subsec:mp_overview}, it is possible to show that the message delivered by the resource $\lbrace n,l \rbrace$ to the transmitter $k$ can be expressed as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_msg1}
\begin{aligned}
\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) = \operatorname{max} \sum\limits_{k^\prime \in \mathcal{K}^{\mathrm T},~ k^\prime \neq k} \delta_{k^\prime \rightarrow \lbrace n,l \rbrace} \big( x_{k^\prime}^{(n,l)} \big) \\
\text{subject to:~~ } \sum\limits_{k =1}^{K}\sum\limits_{l = 1}^{L}x_{k}^{(n, l)} g_{k,m_k^*}^{(n)} p_k^{(n)} < I_{\mathrm{max}}^{(n)}.
\end{aligned}
\end{equation}
Note that the term $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ in the above equation denotes the message from transmitter $k$ to the resource $\lbrace n,l \rbrace$ which can be written as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_msg2}
\begin{aligned}
\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) = x_{k}^{(n,l)} R_{u_k}^{(n,l)} + \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} & R_{u_k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big) \\
\text{subject to:~~ } \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} x_{k}^{(n,l)} &\leq 1
\end{aligned}
\end{equation}
where $R_{u_k}^{(n,l)} = B_{\mathrm {RB}} \log_2\left( 1 + \Gamma_{k}^{(n,l)} \right)$ and $\Gamma_k^{(n, l)} \triangleq {\gamma_{u_k}^{(n)}}_{ \!\! \vert p_k^{(n)} = l }$.
The interpretation of Equations (\ref{eq:mp_msg1}) and (\ref{eq:mp_msg2}) is as follows: the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} ( 1 )$ and $\delta_{k \rightarrow \lbrace n,l \rbrace} (1)$ carry the information relative to the use of the resource $\lbrace n, l\rbrace$ by the transmitter $k$, while the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} (0)$ and $\delta_{k \rightarrow \lbrace n,l \rbrace} (0)$ carry the information relative to the lack of transmission over the resource $\lbrace n, l\rbrace$ by the transmitter $k$. In order to obtain both the messages $\delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) $ and $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) $, it is required to solve the local optimization problem relative to the allocation variable $ x_{k}^{(n,l)} $.
Based on the discussions of Section \ref{subsec:mp_overview}, the link-wise marginal in Equation (\ref{eq:marginal_mp}) can be written as \cite{min-sum-mp}
\begin{equation} \label{eq:mp_marginal_modified}
\phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big) = \delta_{\lbrace n,l \rbrace \rightarrow k} \big( x_{k}^{(n,l)} \big) + \delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)
\end{equation}
and hence the transmission allocation variable is given by
\begin{equation} \label{eq:mp_allocation_upd}
{x_{k}^{(n,l)}}^* = \underset{x_{k}^{(n,l)}} {\operatorname{argmax}}~ \phi_{k}^{(n,l)}\big(x_{k}^{(n,l)}\big).
\end{equation}
At each iteration of the MP-based resource allocation algorithm, at most one message passes through any given edge in each direction (i.e., from transmitters to resources or from resources to transmitters); in each iteration, the messages are updated by replacing the previous message sent on the same edge in the same direction \cite{min-sum-mp}. When both the messages given by Equations (\ref{eq:mp_msg1}) and (\ref{eq:mp_msg2}) are available, the marginal can be computed using Equation (\ref{eq:mp_marginal_modified}) and the transmission allocation variable is obtained by Equation (\ref{eq:mp_allocation_upd}).
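As a toy numerical illustration of Equations (\ref{eq:mp_marginal_modified}) and (\ref{eq:mp_allocation_upd}), suppose the two-element message vectors for a given pair $(k, \lbrace n,l \rbrace)$ take the arbitrary values below; the allocation simply picks the argument with the larger marginal:

```python
# Arbitrary two-element messages [delta(1), delta(0)] for one pair
# (transmitter k, resource {n,l}); the values are purely illustrative.
delta_r2t = {1: 0.8, 0: 0.3}   # delta_{(n,l) -> k}(x)
delta_t2r = {1: 0.5, 0: 1.2}   # delta_{k -> (n,l)}(x)

# link-wise marginal: phi(x) = delta_{(n,l)->k}(x) + delta_{k->(n,l)}(x)
phi = {x: delta_r2t[x] + delta_t2r[x] for x in (0, 1)}
x_star = max((0, 1), key=lambda x: phi[x])   # argmax of the marginal
print(phi, x_star)
```

With these placeholder numbers the marginal favors $x_{k}^{(n,l)} = 0$, i.e., the transmitter stays silent on this resource.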
\subsection{Effective Implementation of MP Scheme in a Practical Heterogeneous Network} \label{subsec:mp_effective}
It is worth noting that sending messages from resources to transmitters (and vice versa) requires actual transmission on the radio channel. In a practical LTE-A-based 5G system, the MP scheme described in the preceding section might therefore be limited by the signaling overhead due to the transfer of messages between the transmitters and resources. In the following, we show that the amount of message signaling can be significantly reduced by some algebraic manipulations. Since the messages carry the information regarding whether any resource is used by any underlay transmitter, each transmitter $k$ actually delivers a real-valued vector with two elements, i.e., $\boldsymbol{\delta}_{k \rightarrow \lbrace n,l \rbrace} = \left[ \delta_{k \rightarrow \lbrace n,l \rbrace} (1),~ \delta_{k \rightarrow \lbrace n,l \rbrace} (0) \right]^{\mathsf{T}}$, and each resource $\lbrace n, l\rbrace$ delivers the vector $\boldsymbol{\delta}_{\lbrace n,l \rbrace \rightarrow k} = \left[ \delta_{\lbrace n,l \rbrace \rightarrow k} (1),~ \delta_{\lbrace n,l \rbrace \rightarrow k} (0) \right]^{\mathsf{T}}$. Let us now rewrite the message $\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big)$ using the utility function introduced in Equation (\ref{eq:sm_utility}) as follows:
\begin{equation} \label{eq:mp_msg2_wUtil}
\delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) = x_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)} + \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big).
\end{equation}
By subtracting the constant term $\sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) $ from both sides of Equation (\ref{eq:mp_msg2_wUtil}) we obtain the following:
\begin{equation} \label{eq:mp_msg2_wUtil_sub}
\begin{aligned}
& \delta_{k \rightarrow \lbrace n,l \rbrace} \big( x_{k}^{(n,l)} \big) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = x_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)} ~~+\\
& \operatorname{max} \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} x_{k}^{(n^\prime,l^\prime)} \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \big( x_{k}^{(n^\prime,l^\prime)} \big) - \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0).
\end{aligned}
\end{equation}
Let us now introduce the parameter $\psi_{\lbrace n,l \rbrace \rightarrow k} = \delta_{\lbrace n,l \rbrace \rightarrow k} (1) - \delta_{\lbrace n,l \rbrace \rightarrow k} (0)$, defined as the normalized message. Consider the vector $$\Psi_{k} = \left[ \mathfrak{U}_{k}^{(1,1)} + \psi_{\lbrace 1,1 \rbrace \rightarrow k}, \cdots, \mathfrak{U}_{k}^{(1,L)} + \psi_{\lbrace 1,L \rbrace \rightarrow k}, \cdots, \mathfrak{U}_{k}^{(N,L)} + \psi_{\lbrace N,L \rbrace \rightarrow k} \right]^{\mathsf{T}}$$ and let us denote by $\left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}$ the maximal entry of the vector $\Psi_{k}$ without considering the term $\mathfrak{U}_{k}^{(n,l)} + \psi_{\lbrace n,l \rbrace \rightarrow k}$. It can be noted that the terms within the summation in Equation (\ref{eq:mp_msg2_wUtil_sub}) are either $0$ (i.e., when $x_{k}^{(n^\prime,l^\prime)} = 0$) or $\mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k}$ (i.e., when $x_{k}^{(n^\prime,l^\prime)} = 1$). Since each transmitter requires only a single transmission alignment, when the variable $x_{k}^{(n,l)} = 0$, only one term in the summation of Equation (\ref{eq:mp_msg2_wUtil_sub}) is non-zero; for the case $x_{k}^{(n,l)} = 1$, no term within the summation of Equation (\ref{eq:mp_msg2_wUtil_sub}) is non-zero. Consequently, for $x_{k}^{(n,l)} = 0$, the maximum rate will be achieved if
\begin{equation} \label{eq:mp_msg_wSort1}
\delta_{k \rightarrow \lbrace n,l \rbrace} (0) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = \left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}.
\end{equation}
Similarly, when $x_{k}^{(n,l)} = 1$, the maximum is given by
\begin{equation} \label{eq:mp_msg_wSort2}
\delta_{k \rightarrow \lbrace n,l \rbrace} (1) - \sum\limits_{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L} \\ n^\prime \neq n,~ l^\prime \neq l }} \delta_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} (0) = \mathfrak{U}_{k}^{(n,l)}.
\end{equation}
Since by definition $\psi_{ k \rightarrow \lbrace n,l \rbrace } = \delta_{k \rightarrow \lbrace n,l \rbrace} (1) - \delta_{k \rightarrow \lbrace n,l \rbrace} (0)$, from Equations (\ref{eq:mp_msg_wSort1}) and (\ref{eq:mp_msg_wSort2}) the normalized message from the transmitter $k$ to the resource $\lbrace n, l\rbrace$ can be derived as
\begin{align} \label{eq:mp_msg_norm1}
\psi_{ k \rightarrow \lbrace n,l \rbrace } &= \mathfrak{U}_{k}^{(n,l)} - \left\langle \upsilon_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace} \nonumber \\
&= \mathfrak{U}_{k}^{(n,l)} - \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}.
\end{align}
Likewise, from \cite{min-sum-mp}, it can be shown that the normalized message sent from the resource $\lbrace n, l\rbrace$ to the transmitter $k$ becomes
\begin{equation} \label{eq:mp_msg_norm2}
\psi_{\lbrace n,l \rbrace \rightarrow k} = \delta_{\lbrace n,l \rbrace \rightarrow k} (1) - \delta_{\lbrace n,l \rbrace \rightarrow k} (0) = - \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }.
\end{equation}
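The normalized message in Equation (\ref{eq:mp_msg_norm1}) only requires, for each resource, the largest entry of $\Psi_k$ excluding that resource, which can be obtained for all resources at once from the two largest entries of a single sort. The following Python sketch (with arbitrary placeholder utilities and incoming messages) illustrates this computation and checks it against direct exclusion:

```python
import numpy as np

# One transmitter k with N*L = 6 resources. U[r] stands for the utility
# U_k^{(n,l)} and psi_in[r] for the incoming message psi_{(n,l)->k};
# the numbers are arbitrary placeholders.
U = np.array([1.0, 2.5, 0.7, 1.8, 2.1, 0.3])
psi_in = np.array([0.2, -0.4, 0.1, 0.0, -0.1, 0.5])

s = U + psi_in                          # entries of the vector Psi_k
order = np.argsort(s)[::-1]             # one sort: O(NL log NL)
best, second = s[order[0]], s[order[1]]

# <...>_{~(n,l)}: the largest entry of Psi_k excluding resource r itself --
# equal to the global maximum unless r attains it, in which case it is the
# second-largest entry.
psi_out = np.empty_like(s)
for r in range(len(s)):
    excl = second if r == order[0] else best
    psi_out[r] = U[r] - excl            # normalized message psi_{k -> (n,l)}

# sanity check against direct exclusion of each entry
for r in range(len(s)):
    direct = max(s[i] for i in range(len(s)) if i != r)
    assert abs(psi_out[r] - (U[r] - direct)) < 1e-12
```

This single-sort trick is what gives the per-transmitter worst-case complexity $\mathcal{O}(NL \log(NL))$ for message generation.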
For an arbitrary graph, the allocation variables may keep oscillating and might not converge to any fixed point, in which case the MP scheme requires some heuristic approach to terminate. However, in the context of loopy graphical models, introducing a suitable weight perturbs the messages given by Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}) towards a fixed point \cite{min-sum-mp, remp_proof}. Accordingly, Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}) can be rewritten as \cite{min-sum-mp}
\begin{align}
\psi_{ k \rightarrow \lbrace n,l \rbrace } &= \mathfrak{U}_{k}^{(n,l)} - \omega \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace} - (1-\omega) \left( \mathfrak{U}_{k}^{(n,l)} + \psi_{\lbrace n,l \rbrace \rightarrow k} \right) \label{eq:mp_norm_msg_w1} \\
\psi_{\lbrace n,l \rbrace \rightarrow k} &= - \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace } - (1-\omega)~ \psi_{ k \rightarrow \lbrace n,l \rbrace } \label{eq:mp_norm_msg_w2}
\end{align}
where $\omega \in (0, 1]$ denotes the weighting factor for each edge. Notice that when $\omega = 1$, the messages given by Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2}) reduce to the original formulation, i.e., Equations (\ref{eq:mp_msg_norm1}) and (\ref{eq:mp_msg_norm2}), respectively. Given the normalized messages $\psi_{ k \rightarrow \lbrace n,l \rbrace }$ and $\psi_{\lbrace n,l \rbrace \rightarrow k}$ for $\forall k, n, l$, the node marginals for the normalized messages can be calculated as $\tau_{k}^{(n,l)} = \psi_{ k \rightarrow \lbrace n,l \rbrace } + \psi_{\lbrace n,l \rbrace \rightarrow k}$, and hence from Equation (\ref{eq:mp_allocation_upd}) the transmission alignment allocation can be obtained as
\begin{equation} \label{eq:mp_X_finally}
{x_{k}^{(n,l)}}^* =
\begin{cases}
1 & \text{if } \tau_{k}^{(n,l)} > 0 \text{ and } I^{(n)} < I_{\mathrm{max}}^{(n)}\\
0 & \text{otherwise.} \\
\end{cases}
\end{equation}
\subsection{Algorithm Development}
In line with our discussions and the expressions derived in Section \ref{subsec:mp_effective}, the MP-based resource allocation approach is outlined in \textbf{Algorithm \ref{alg:mp_rec_alloc}}. The underlay transmitters and the resources (e.g., the MBS) exchange messages in an iterative manner. The MBS assigns the resources to the transmitters considering the node marginals as well as the interference experienced on the RBs. The algorithm terminates when the sum data rate reaches a steady value, i.e., the allocation vector $\mathbf{X}$ remains the same in successive iterations.
\begin{algorithm} [!t]
\caption{Resource allocation using message passing}
\label{alg:mp_rec_alloc}
\begin{algorithmic}[1]
\AtBeginEnvironment{algorithmic}{\small}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ selects a transmission alignment randomly and reports to MBS.
\STATE Initialize $t:= 1, ~\psi_{ k \rightarrow \lbrace n,l \rbrace }(0) := 0, ~\psi_{\lbrace n,l \rbrace \rightarrow k}(0) := 0$ for $\forall k, n, l$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than some predefined threshold $T_{\mathrm{max}}$}
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ sends the message
\[\resizebox{0.93\textwidth}{!}{ $\psi_{ k \rightarrow \lbrace n,l \rbrace }(t) = \mathfrak{U}_{k}^{(n,l)}(t-1) - \omega \left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k}(t-1) \right\rangle_{\sim \lbrace n,l \rbrace} - (1-\omega) \left( \mathfrak{U}_{k}^{(n,l)}(t-1) + \psi_{\lbrace n,l \rbrace \rightarrow k }(t-1) \right)$ }\]
for $\forall \lbrace n, l \rbrace \in \mathcal{N}\times \mathcal{L}$ to the MBS.
\STATE For all the resource $\forall \lbrace n, l\rbrace \in \mathcal{N} \times \mathcal{L}$, MBS sends messages $$\psi_{\lbrace n,l \rbrace \rightarrow k}(t) = - \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }(t-1) - (1-\omega)~ \psi_{ k \rightarrow \lbrace n,l \rbrace } (t-1) $$ to each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ computes the marginals as $\tau_{k}^{(n,l)}(t) = \psi_{ k \rightarrow \lbrace n,l \rbrace }(t) + \psi_{\lbrace n,l \rbrace \rightarrow k}(t) $ for $\forall \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}$ and reports to the MBS.
\vspace{2pt}
{\footnotesize \textit{/* MBS calculates the allocation vector according to Equation (\ref{eq:mp_X_finally}) */} }
\vspace{2pt}
\STATE Set $x_{k}^{(n,l)} := 0$ for $\forall k, n, l$ \hspace{1em} \COMMENT{\footnotesize Initialize the variable to obtain final allocation}
\FORALL{$k \in \mathcal{K}^{\mathrm T} \text{ and } \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}$}
\IF{$\tau_{k}^{(n,l)}(t) > 0 $}
\STATE Set $x_{k}^{(n,l)} := 1$. ~~\COMMENT{\footnotesize Assign the resource to the transmitter}
\STATE $\mathfrak{I}^{(n)} := \sum\limits_{\substack{ k^\prime =1}}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$. ~\COMMENT{\footnotesize Calculate interference in RB $n$}
\IF{ $\mathfrak{I}^{(n)} \geq I_{\mathrm{max}}^{(n)}$ }
\REPEAT
\STATE $\lbrace \hat{k}, \hat{l} \rbrace := \!\!\! \underset{k^\prime \in \mathcal{K}^{\mathrm T}, l^\prime \in \mathcal{L}}{\operatorname{argmax}} x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$ \COMMENT{\footnotesize Most interfering transmitter $\hat{k}$ with $p_{\hat{k}}^{(n)} = \hat{l}$ }
\STATE Set $x_{\hat{k}}^{(n,\hat{l})} := 0$. ~~~\COMMENT{\footnotesize Unassigned due to interference threshold violation}
\STATE $\mathfrak{I}^{(n)} := \sum\limits_{\substack{ k^\prime =1}}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)}$. ~~~\COMMENT{\footnotesize Update interference level}
\UNTIL{$\mathfrak{I}^{(n)} < I_{\mathrm{max}}^{(n)}$}
\ENDIF
\ENDIF
\ENDFOR
\STATE MBS calculates the transmission alignment allocation vector $\mathbf{X}(t) = \left[ x_{k}^{(n,l)} \right]_{\forall k, n, l}$ for the iteration $t$.
\STATE Update $t:= t + 1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the transmission alignments (e.g., RB and power levels) to the SBSs and D2D transmitters.
\end{algorithmic}
\end{algorithm}
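A compact Python sketch of the update loop of \textbf{Algorithm \ref{alg:mp_rec_alloc}} on a tiny synthetic instance is given below; the utilities, gains, power levels, threshold, and weighting factor are all invented for illustration, and the CSI estimation and reporting steps are abstracted away:

```python
import numpy as np

# Sketch of the MP update loop on a toy instance: K=2 transmitters,
# N=2 RBs, L=2 power levels; resources are indexed r = n*L + l.
rng = np.random.default_rng(0)
K, N, L = 2, 2, 2
R = N * L
U = rng.uniform(0.5, 2.0, size=(K, R))      # utility U_k^{(n,l)} (synthetic)
g = rng.uniform(0.1, 0.6, size=(K, N))      # gain towards the MUE on RB n
p_lvl = np.array([1.0, 2.0])                # transmit power of level l
I_max, omega = 1.5, 0.8                     # threshold and edge weight

psi_t2r = np.zeros((K, R))                  # psi_{k -> (n,l)}
psi_r2t = np.zeros((K, R))                  # psi_{(n,l) -> k}
X_prev = None
for t in range(50):
    new_t2r = np.empty((K, R))
    new_r2t = np.empty((K, R))
    for k in range(K):                      # transmitter -> resource messages
        s = U[k] + psi_r2t[k]               # entries of Psi_k
        for r in range(R):
            excl = max(s[i] for i in range(R) if i != r)
            new_t2r[k, r] = U[k, r] - omega * excl - (1 - omega) * s[r]
    for r in range(R):                      # resource -> transmitter messages
        for k in range(K):
            excl = max(psi_t2r[kk, r] for kk in range(K) if kk != k)
            new_r2t[k, r] = -omega * excl - (1 - omega) * psi_t2r[k, r]
    psi_t2r, psi_r2t = new_t2r, new_r2t
    tau = psi_t2r + psi_r2t                 # node marginals
    X = (tau > 0).astype(int)               # tentative allocation
    for n in range(N):                      # interference repair per RB
        while sum(X[k, n * L + l] * g[k, n] * p_lvl[l]
                  for k in range(K) for l in range(L)) >= I_max:
            # drop the most interfering assigned transmitter on RB n
            _, k_hat, l_hat = max((X[k, n * L + l] * g[k, n] * p_lvl[l], k, l)
                                  for k in range(K) for l in range(L))
            X[k_hat, n * L + l_hat] = 0
    if X_prev is not None and np.array_equal(X, X_prev):
        break                               # allocation is steady
    X_prev = X
print(X)
```

The two inner message loops correspond to the two update steps of the algorithm, and the repair loop mirrors the \textsc{repeat}/\textsc{until} block that enforces the interference threshold.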
\subsection{Convergence, Optimality, and Complexity of the Solution}
The convergence, optimality, and complexity of the message passing approach are analyzed in the following subsections.
\subsubsection{Convergence and Optimality}
As presented in the following theorem, the message passing algorithm converges to fixed messages within a fixed number of iterations.
\begin{theorem}
The marginals and the allocation in \textbf{Algorithm \ref{alg:mp_rec_alloc}} converge to a fixed point.
\end{theorem}
\begin{proof}
The proof is constructed by utilizing the concept of \textit{contraction mapping} \cite[Chapter 3]{mp_converge}. Let the vector $\boldsymbol{\psi}(t) = \left[\psi_{ 1 \rightarrow \lbrace 1,1 \rbrace }(t), \cdots, \psi_{ k \rightarrow \lbrace n,l \rbrace }(t), \cdots \psi_{ K \rightarrow \lbrace N,L \rbrace }(t) \right]^{\mathrm T}$ represent all the messages exchanged between the transmitters and the resources (e.g., the MBS) at iteration $t$. Let us assume that the messages are translated into the mapping $\boldsymbol{\psi}(t+1) = \mathbb{T}\left( \boldsymbol{\psi}(t) \right) = \left[ \mathbb{T}_{1}^{(1,1)}\left( \boldsymbol{\psi}(t) \right), \cdots, \mathbb{T}_{K}^{(N,L)}\left( \boldsymbol{\psi}(t) \right) \right]^{\mathrm{T}}$. From Equations (\ref{eq:mp_norm_msg_w1}) and (\ref{eq:mp_norm_msg_w2}) we can obtain $\psi_{ k \rightarrow \lbrace n,l \rbrace }(t+1) = \mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi}(t) \right)$ as follows:
\begin{align}
\mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi}(t) \right) = \omega \left( \mathfrak{U}_{k}^{(n,l)}(t) - \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t)\right)~ + \nonumber \\
\omega \left( \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n^\prime,l^\prime \rbrace }(t) + (1-\omega) \psi_{ k \rightarrow \lbrace n^\prime,l^\prime \rbrace }(t) \right)~ + \nonumber \\
(1- \omega) \left( \omega \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \psi_{ k^\prime \rightarrow \lbrace n,l \rbrace }(t) + (1-\omega) \psi_{ k \rightarrow \lbrace n,l \rbrace }(t) \right).
\end{align}
For any vectors $\mathbf{u}$ and $\mathbf{v}$, a generic mapping $\mathbb{T}$ is a contraction if ${\parallel \mathbb{T} (\mathbf{u}) - \mathbb{T}( \mathbf{v}) \parallel}_{\infty} \leq \varepsilon {\parallel \mathbf{u} - \mathbf{v} \parallel}_{\infty}$, where $\varepsilon < 1$ is the modulus of the mapping \cite[Chapter 3]{mp_converge}. From \cite{remp_proof}, it can be shown that the mapping $\mathbb{T} : \mathbb{R}^{KNL} \rightarrow \mathbb{R}^{KNL}$ is a contraction under the maximum norm, i.e., ${\parallel \mathbb{T}\left( \boldsymbol{\psi} \right) \parallel}_{\infty} = \underset{k \in \mathcal{K}^{\mathrm T}, n \in \mathcal{N}, l \in \mathcal{L}}{\operatorname{max}} |\mathbb{T}_{k}^{(n,l)}\left( \boldsymbol{\psi} \right)|$. Since contraction mappings have a unique fixed point and converge to it from any initial vector, the proof concludes with the fact that the message passing algorithm converges to fixed marginals and hence to a fixed allocation vector $\mathbf{X}$.
\end{proof}
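The contraction argument can be illustrated numerically: any mapping satisfying ${\parallel \mathbb{T}(\mathbf{u}) - \mathbb{T}(\mathbf{v}) \parallel}_{\infty} \leq \varepsilon {\parallel \mathbf{u} - \mathbf{v} \parallel}_{\infty}$ with $\varepsilon < 1$ drives any two starting vectors to the same unique fixed point. The Python sketch below uses a generic affine contraction (not the actual message mapping $\mathbb{T}$ of the proof) to demonstrate this behavior:

```python
import numpy as np

# Banach fixed-point illustration: T(x) = 0.5 * A x + b with ||A||_inf = 1
# is a contraction with modulus 0.5 under the maximum norm. A and b are
# arbitrary; this is not the message mapping of the proof.
rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, size=(4, 4))
A /= np.abs(A).sum(axis=1, keepdims=True)     # normalize rows: ||A||_inf = 1
b = rng.uniform(-1, 1, size=4)
T = lambda x: 0.5 * A @ x + b

x, y = rng.normal(size=4), rng.normal(size=4)  # two different starting points
for _ in range(100):
    x, y = T(x), T(y)

assert np.max(np.abs(x - y)) < 1e-12           # both reach the same point
assert np.max(np.abs(T(x) - x)) < 1e-12        # and it is a fixed point
```

The distance between the two trajectories shrinks at least geometrically with factor $\varepsilon = 0.5$ per iteration, which is the mechanism behind the convergence of the weighted message updates.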
The following theorem establishes that the fixed convergence point of the message passing algorithm is an optimal solution of the original resource allocation problem.
\begin{theorem}
The allocation obtained by message passing algorithm converges to the optimal solution of resource allocation problem $\mathbf{P\ref{opt:combopt}}$.
\end{theorem}
\begin{proof}
The theorem is proved by contradiction. Let us consider that the solution $\widetilde{\mathbf{X}}$ obtained by message passing algorithm is not optimal and let $\mathbf{X}^*$ be the optimal solution obtained by solving $\mathbf{P\ref{opt:combopt}}$. Let us further assume that there are $\chi \leq |\mathbf{X}|$ entries (e.g., allocations) that differ between $\widetilde{\mathbf{X}}$ and $\mathbf{X}^*$. In addition, let $\widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}} \subseteq \mathcal{N} \times \mathcal{L}$ denote the subset of resources for which two allocations differ. For each $\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$ there is a transmitter $\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ such that $\tilde{x}_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = 1$ and $x_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{*(\tilde{n}, \tilde{l})} = 0$, and a transmitter $\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} \neq \kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ such that $\tilde{x}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = 0$ and $x_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{*(\tilde{n}, \tilde{l})} = 1$. Hence, the assignment of resource $\lbrace \tilde{n}, \tilde{l} \rbrace $ to transmitter $\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}$ implies that the marginal $\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} < 0$ and the following set of inequalities hold for each $\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$:
\begin{align}
\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = \omega \left[ \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} + \psi_{\lbrace \tilde{n}, \tilde{l} \rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} } \right) - \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(n^\prime, l^\prime)} + \psi_{\lbrace n^\prime, l^\prime \rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace} } \right) \right] < 0
\end{align}
where $\lbrace n^\prime, l^\prime \rbrace$ is the resource as represented in Equation (\ref{eq:mp_msg_norm1}). According to our assumption, the resource $\lbrace n^\prime, l^\prime \rbrace$ also belongs to $\widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}$. Hence, $\sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}}\tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} = \omega \left( \Delta \mathfrak{U} + \Delta \psi \right)$ where
\begin{align}
\Delta \mathfrak{U} &= \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \left( \mathfrak{U}_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} - \mathfrak{U}_{\kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} \right) \nonumber \\
&= \sum_{k=1}^{K}\sum_{n=1}^{N}\sum_{l=1}^{L} x_{k}^{*(n,l)} \mathfrak{U}_{k}^{(n,l)} - \sum_{k=1}^{K}\sum_{n=1}^{N}\sum_{l=1}^{L} \tilde{x}_{k}^{(n,l)} \mathfrak{U}_{k}^{(n,l)}
\end{align}
and $\Delta \psi = \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \left( \psi_{\lbrace \tilde{n}, \tilde{l}\rbrace \rightarrow \ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}} - \psi_{\lbrace \tilde{n}, \tilde{l}\rbrace \rightarrow \kappa_{\lbrace \tilde{n}, \tilde{l} \rbrace}} \right)$. After some algebraic manipulations (for details refer to \cite{remp_proof}) we can obtain $\frac{2 (1- \omega)}{\omega} \!\!\!\! \sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \!\!\! \tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})} \leq \Delta \mathfrak{U}$. Since $0 < \omega < 1$ and both the variables $\sum\limits_{\lbrace \tilde{n}, \tilde{l} \rbrace \in \widetilde{\mathcal{N}} \times \widetilde{\mathcal{L}}} \tau_{\ddot{\kappa}_{\lbrace \tilde{n}, \tilde{l} \rbrace}}^{(\tilde{n}, \tilde{l})}$ and $ \Delta \mathfrak{U}$ are positive, our assumption that $\widetilde{\mathbf{X}}$ is not optimal is contradicted and the proof follows.
\end{proof}
\subsubsection{Complexity}
If the message passing algorithm requires $T < T_{\mathrm{max}}$ iterations to converge, it is straightforward to verify that the time complexity at the MBS is $\mathcal{O}\left( T K N L \right)$. Similarly, considering a standard sorting algorithm that outputs the term $\left\langle \mathfrak{U}_{k}^{(n^\prime,l^\prime)} + \psi_{\lbrace n^\prime,l^\prime \rbrace \rightarrow k} \right\rangle_{\sim \lbrace n,l \rbrace}$ in order to generate the message $\psi_{ k \rightarrow \lbrace n,l \rbrace }$ with worst-case complexity $\mathcal{O}\left( NL \log \left( NL \right) \right)$, the overall time complexity at each underlay transmitter is $\mathcal{O} \left(T {(NL)}^2 \log \left( NL \right) \right)$.
\section{Auction-Based Resource Allocation} \label{sec:am_ra}
Our final solution approach for the resource allocation is the distributed auction algorithm. The allocation using auction is based on a \textit{bidding} procedure, where the agents (i.e., the underlay transmitters) bid for the resources (i.e., the RBs and power levels). The transmitters select their bids for the resources based on the \textit{costs} (e.g., the interference caused to the MUEs) of using the resources. The desired assignment relies on the appropriate selection of the bids.
The unassigned transmitters raise the costs of using the resources and bid for the resources simultaneously. Once the bids from all the transmitters are available, the resources are assigned to the highest bidders. An overview of the auction approach is presented in the following.
\subsection{Overview of the Auction Approach} \label{subsec:auc_overview}
In a generic auction-based assignment model, every resource $j$ is associated with a cost $c_j$ and each agent $i$ can get the benefit $B_{ij}$ from the resource $j$. Given the benefit $B_{ij}$, every agent $i$ who wishes to be assigned the resource $j$ needs to pay the cost $c_j$. The net value (e.g., utility) that an agent $i$ can obtain from the resource $j$ is given by $B_{ij} - c_j$. The auction procedure involves assigning agent $i$ the resource $j^\prime$ which provides the maximal net value, i.e.,
\begin{equation} \label{eq:auction_price}
B_{ij^\prime} - c_{j^\prime} = \underset{j}{\operatorname{max}} \left\lbrace B_{ij} - c_{j} \right\rbrace.
\end{equation}
If the condition given in Equation (\ref{eq:auction_price}) is satisfied for all the agents $i$, the assignment and the set of costs are referred to as an \textit{equilibrium} \cite{auction_org}. However, in many practical problems, obtaining an equilibrium assignment is not straightforward due to the possibility of cycles. In particular, there may be cases where the agents contend for a small number of equally desirable resources without increasing the costs, which creates a cycle (i.e., an infinite loop) in the auction process. To avoid this difficulty, the notion of \textit{almost equilibrium} is introduced in the literature. The assignment and the set of costs are said to be at almost equilibrium when the net value for assigning each agent $i$ the resource $j^\prime$ is within a constant $\epsilon >0$ of being maximal. Hence, in order to be an almost equilibrium assignment, the following condition needs to be satisfied for all the agents \cite{auction_org}:
\begin{equation} \label{eq:auction_price_comslac}
B_{ij^\prime} - c_{j^\prime} \geq \underset{j}{\operatorname{max}} \left\lbrace B_{ij} - c_{j} \right\rbrace - \epsilon.
\end{equation}
The condition in Equation (\ref{eq:auction_price_comslac}) is known as $\epsilon$\textit{-complementary slackness}. When $\epsilon = 0$, Equation (\ref{eq:auction_price_comslac}) reduces to ordinary complementary slackness given by Equation (\ref{eq:auction_price}).
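To make the equilibrium notions concrete, the following minimal Python sketch (the function name and the benefit/cost values are ours, chosen purely for illustration) checks the $\epsilon$-complementary slackness condition of Equation (\ref{eq:auction_price_comslac}) for a toy two-agent, two-resource instance:

```python
import numpy as np

def is_almost_equilibrium(B, c, assignment, eps):
    """Check eps-complementary slackness for every agent.

    B[i, j]       : benefit agent i derives from resource j
    c[j]          : current cost of resource j
    assignment[i] : index of the resource assigned to agent i
    """
    net = B - c  # net value B_ij - c_j for every (agent, resource) pair
    return all(net[i, j] >= net[i].max() - eps
               for i, j in enumerate(assignment))

B = np.array([[4.0, 1.0],
              [3.0, 2.0]])
c = np.array([2.0, 0.0])
# Agent 0 holds resource 0 (net 2.0) and agent 1 holds resource 1 (net 2.0):
print(is_almost_equilibrium(B, c, [0, 1], eps=0.1))  # True
# Swapping the assignment violates the condition for both agents:
print(is_almost_equilibrium(B, c, [1, 0], eps=0.1))  # False
```

With $\epsilon = 0$ the same check reduces to the ordinary complementary slackness of Equation (\ref{eq:auction_price}).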
For instance, let the variable $\Theta_i = j$ denote that agent $i$ is assigned the resource $j$. In addition, let $c_{ij}$ denote the cost that agent $i$ incurs in order to be assigned resource $j$ and $\mathfrak{b}_{ij}$ the bidding information (i.e., the highest bidder) available to agent $i$ about resource $j$. The auction procedure evolves in an iterative manner. Given the assignment $\Theta_i$, the set of costs $\left[c_{ij}\right]_{\forall ij}$, and the set of highest bidders $\left[\mathfrak{b}_{ij}\right]_{\forall ij}$ of the previous iteration, the agents locally update the costs and the highest bidders for the current iteration. In particular, the costs $c_{ij}(t)$ and bidding information $\mathfrak{b}_{ij}(t)$ available to agent $i$ about resource $j$ for iteration $t$ are updated from the previous iteration as follows \cite{auction_base}:
\begin{align}
c_{ij}(t) &= \underset{i^\prime, i^\prime \neq i}{\operatorname{max}} \left\lbrace c_{ij}(t-1), c_{i^\prime j}(t-1) \right\rbrace \label{eq:auc_cost}\\
\mathfrak{b}_{ij}(t) &= \underset{i^* \in \underset{i^\prime, i^\prime \neq i}{\operatorname{~argmax}} \left\lbrace c_{ij}(t-1), c_{i^\prime j}(t-1) \right\rbrace }{\operatorname{max}} \left\lbrace \mathfrak{b}_{i^*j}(t-1) \right\rbrace. \label{eq:auc_bid}
\end{align}
The above update equations ensure that the agents will have the updated maximum cost of the resource $j$ (i.e., $c_j \triangleq \underset{i}{\operatorname{max}} \lbrace c_{i j} \rbrace$) and the corresponding highest bidder for that resource. Once the updated cost and bidding information are available, agent $i$ checks whether the cost of the resource currently assigned to it, i.e., $c_{i \Theta_i(t-1)}$, has been increased by any other agent. If so, the current assignment obtained from the previous iteration may not be at (almost) equilibrium and the agent needs to select a new assignment, i.e., $\Theta_{i}(t) = \underset{j}{\operatorname{argmax}} \left\lbrace B_{ij}(t) - c_{ij} (t) \right\rbrace$. In order to update the cost for the new assignment $\Theta_{i}(t)$ at any iteration $t$, the agent uses the following cost update rule \cite{auction_base}:
\begin{equation} \label{eq:auc_costupd}
c_{ij}(t) = c_{ij}(t-1) + \Delta_i(t-1)
\end{equation}
where $\Delta_i$ is given by
\begin{equation} \label{eq:auc_price_uptate}
\Delta_i(t-1) = \underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace - \underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace + \epsilon.
\end{equation}
The variables $\underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace$ and $\underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace$ denote the maximum and the second maximum net utility, respectively. Note that $\Delta_i$ is always greater than zero since $\epsilon >0$ and, by definition, $\underset{j}{\operatorname{max}} \left\lbrace B_{ij}(t-1) - c_{ij}(t-1) \right\rbrace \geq \underset{j^\prime \neq \Theta_i(t)}{\operatorname{max}} \left\lbrace B_{ij^\prime}(t-1) - c_{ij^\prime}(t-1) \right\rbrace$. Since $c_{i \Theta_i(t)}(t)$ is the highest cost for iteration $t$, agent $i$ can also update the bidding information, i.e., $\mathfrak{b}_{i \Theta_i(t)}(t) = i$. Accordingly, the cost update rule using $\Delta_i$ as given in Equation (\ref{eq:auc_costupd}) ensures that the assignment and the set of costs are almost at equilibrium \cite{auction_base}.
\subsection{Auction for Radio Resource Allocation}
Based on the discussion in the preceding section, we now present the auction-based resource allocation scheme. We first describe the cost model and then use the concept of an auction to develop the resource allocation algorithm in our considered heterogeneous network setup.
\subsubsection{Cost Function}
Let us consider the utility function given by Equation (\ref{eq:sm_utility}). Recall that the term $w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) $ in Equation (\ref{eq:sm_utility}) represents the cost (e.g., interference caused by underlay transmitters to the MUE) of using the RB $n$. In particular, when the transmitter $k$ is transmitting with power level $l$, the cost of using RB $n$ can be represented by
\begin{align} \label{eq:auc_cost_ra}
c_{k}^{(n,l)} &= w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) = w_2 \left( \sum\limits_{k^\prime =1}^{K}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)} - I_{\mathrm{max}}^{(n)} \right) \nonumber \\
&= w_2 \left( g_{k,m_k^*}^{(n)} l + \sum\limits_{\substack{ k^\prime \in \mathcal{K}^{\mathrm{T}}, k^\prime \neq k}}\sum\limits_{l^\prime = 1}^{L}x_{k^\prime}^{(n, l^\prime)} g_{k^\prime,m_{k^\prime}^*}^{(n)} p_{k^\prime}^{(n)} - I_{\mathrm{max}}^{(n)} \right).
\end{align}
Let the parameter $C_{k}^{(n,l)} = \max \lbrace 0, c_{k}^{(n,l)} \rbrace$ and accordingly the cost $C_{k}^{(n,l)} = 0$ only if $I^{(n)} \leq I_{\mathrm{max}}^{(n)}$. Notice that using the cost term we can represent Equation (\ref{eq:sm_utility}) as $$\mathfrak{U}_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right) - w_2 \left( I^{(n)} - I_{\mathrm{max}}^{(n)} \right) = B_{k}^{(n,l)} - c_{k}^{(n,l)} = B_{k}^{(n,l)} - C_{k}^{(n,l)}$$ where $B_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right)$, and $c_{k}^{(n,l)}$ is given by Equation (\ref{eq:auc_cost_ra}). The variable $B_{k}^{(n,l)}$ is proportional to the data rate achieved by transmitter $k$ using resource $\lbrace n,l \rbrace$. Analogous to the discussion of previous section, $\mathfrak{U}_{k}^{(n,l)}$ represents the net benefit that transmitter $k$ obtains from the resource $\lbrace n,l\rbrace$.
Let $\mathfrak{b}_{k}^{( n,l)}$ denote the local bidding information available to transmitter $k$ for the resource $\lbrace n,l \rbrace$. For notational convenience, let us assume that $\Theta : [k]_{k = 1, \cdots, K} \rightarrow \left[ \lbrace n, l \rbrace \right]_{\substack{n = 1, \cdots, N \\ l = 1, \cdots, L}}$ denotes the mapping between the transmitters and the resources, i.e., $\Theta_k = \lbrace n,l \rbrace$ represents the assignment of resource $\lbrace n,l \rbrace$ to transmitter $k$. Hence we represent by $C_{k}^{\Theta_k}$ the cost of using the resource $\lbrace n,l \rbrace$ obtained by the assignment $\Theta_k = \lbrace n,l \rbrace$. Similarly, given $\Theta_k = \lbrace n,l \rbrace$ the variable $\mathfrak{b}_{k}^{\Theta_k} \equiv \mathfrak{b}_{k}^{( n,l)}$ denotes the local bidding information about the resource $\lbrace n,l \rbrace$ available to the transmitter $k$. Note that $\Theta_k = \lbrace n,l \rbrace$ also implies $x_{k}^{(n,l)} = 1$. In other words, $\Theta_k = \lbrace n,l \rbrace$ denotes the non-zero entry of the vector $\mathbf{x}_k = \left[ x_{k}^{(n,l)} \right]_{\forall n,l}$. Since each underlay transmitter $k$ selects only one resource $\lbrace n, l\rbrace$, only a single entry in the vector $\mathbf{x}_k$ is non-zero.
\subsubsection{Update of Cost and Bidder Information}
In order to obtain the updated cost and bidding information, we utilize concepts similar to those given by Equations (\ref{eq:auc_cost})-(\ref{eq:auc_price_uptate}). At the beginning of the auction procedure, each underlay transmitter updates the cost as $C_{k}^{(n,l)}(t) = \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace $. In addition, as described by Equation (\ref{eq:auc_bid}), the information of the highest bidder is obtained by $\mathfrak{b}_{k}^{( n,l)}(t) = \mathfrak{b}_{k^*}^{( n,l)}(t-1)$ where $k^* = \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{argmax}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace$. When the transmitter $k$ needs to select a new assignment, i.e., $\Theta_k = \lbrace \hat{n},\hat{l}\rbrace$, the transmitter increases the cost of using the resource, i.e., $C_{k}^{(\hat{n}, \hat{l})}(t) = C_{k}^{(\hat{n}, \hat{l})}(t-1) +\Delta_k(t-1)$, where $\Delta_k(t-1)$ is given by
\begin{align} \label{eq:auc_costupdate}
\Delta_k(t-1) = \underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L}}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) - \underset{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L} \\ n^\prime \neq \hat{n}, l^\prime \neq \hat{l} }}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) + \epsilon
\end{align}
where $\epsilon > 0$ indicates the minimum bid requirement parameter. Similar to Equation (\ref{eq:auc_price_uptate}), the term $\underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L}}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1) - \underset{\substack{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N}\times\mathcal{L} \\ n^\prime \neq \hat{n}, l^\prime \neq \hat{l} }}{\operatorname{max}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t-1)$ denotes the difference between the maximum and the second maximum utility value. When transmitter $k$ does not prefer to be assigned a new resource, the allocation from the previous iteration remains unchanged, i.e., $\Theta_k(t) = \Theta_k(t-1)$, and consequently, $\mathbf{x}_k(t) = \mathbf{x}_k(t-1)$.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Auction method for any underlay transmitter $k$}
\label{alg:auc_loc}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\REQUIRE Parameters from previous iteration: an assignment $\mathbf{X}(t-1) = \left[ \mathbf{x}_1(t-1), \cdots \mathbf{x}_K(t-1) \right]^{\mathsf{T}}$, aggregated interference $I^{(n)}(t-1)$ for $\forall n$, cost of using resources $\mathbf{C}(t-1) = \left[ C_{k}^{(n,l)}(t-1)\right]_{\forall k,n, l}$ and the highest bidders of the resources $\mathfrak{B}(t-1) = \left[ \mathfrak{B}_k(t-1) \right]_{\forall k}$ where $\mathfrak{B}_k(t-1) = \left[\mathfrak{b}_{k}^{( n,l)}(t-1) \right]_{\forall n, l}$.
\ENSURE The allocation variable $\mathbf{x}_k(t) = \left[x_{k}^{(n,l)}\right]_{\forall n, l}$, updated costs $\mathbf{C}_k(t) = \left[ C_{k}^{(n,l)}(t)\right]_{\forall n, l}$, and bidding information $\mathfrak{B}_k(t) = \left[\mathfrak{b}_{k}^{( n,l)}(t) \right]_{\forall n, l}$ at current iteration $t$ for the transmitter $k$.
\STATE Initialize $\mathbf{x}_k(t) := \mathbf{0}$.
\STATE For all the resources $\lbrace n, l\rbrace \in \mathcal{N}\times \mathcal{L}$,
\begin{itemize}
\item Obtain the transmitter $k^* := \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{argmax}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace$ and update the highest bidder as $\mathfrak{b}_{k}^{( n,l)}(t) := \mathfrak{b}_{k^*}^{( n,l)}(t-1)$.
\item Update the cost as
$C_{k}^{(n,l)}(t) := \underset{k^\prime \in \mathcal{K}^{\mathrm T}, k^\prime \neq k}{\operatorname{max}} \left\lbrace C_{k}^{(n,l)}(t-1), C_{k^\prime}^{(n,l)}(t-1) \right\rbrace $.
\end{itemize}
\vspace*{2pt}
{\footnotesize \textit{/* Let $\Theta_k(t-1)$ denote the assignment of transmitter $k$ at the previous iteration $t-1$, i.e., $\Theta_k(t-1)$ represents the non-zero entry in the vector $\mathbf{x}_k(t-1)$. Since each transmitter uses only one transmission alignment, only a single entry in the vector $\mathbf{x}_k(t-1)$ is non-zero. When the cost is greater than in the previous iteration and the transmitter $k$ is not the highest bidder, update the assignment */} }
\vspace*{5pt}
\IF{ $C_{k}^{\Theta_k(t-1)} (t) \geq C_{k}^{\Theta_k(t-1)} (t-1) $ \AND $\mathfrak{b}_{k}^{\Theta_k(t-1)}(t) \neq k $ }
\STATE $\lbrace \hat{n}, \hat{l} \rbrace := \underset{\lbrace n^\prime, l^\prime \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{argmax}} \mathfrak{U}_{k}^{(n^\prime,l^\prime)}(t)$. ~\COMMENT{\footnotesize Obtain the best resource for transmitter $k$}
\STATE $\mathfrak{I}^{(\hat{n})} := g_{k,m_k^*}^{(\hat{n})} \hat{l} + I^{(\hat{n})}$. ~\COMMENT{\footnotesize Calculate additional interference caused by transmitter $k$ for using RB $\hat{n}$}
\IF{ $\mathfrak{I}^{(\hat{n})} < I_{\mathrm{max}}^{(\hat{n})}$ }
\STATE Set $x_{k}^{(\hat{n},\hat{l})} := 1$. ~~~~~\COMMENT{\footnotesize e.g., $\Theta_{k}(t) = \lbrace \hat{n},\hat{l} \rbrace $}
\STATE Update the highest bidder for the resource $\lbrace \hat{n}, \hat{l} \rbrace $ as $\mathfrak{b}_{k}^{(\hat{n}, \hat{l}) }(t) := k$.
\STATE Increase the cost for the resource $\lbrace \hat{n}, \hat{l} \rbrace $ as $C_{k}^{(\hat{n}, \hat{l})}(t) := C_{k}^{(\hat{n}, \hat{l})}(t-1) +\Delta_k(t-1)$ where $\Delta_k(t-1)$ is given by Equation (\ref{eq:auc_costupdate}).
\ELSE
\STATE Keep the assignment unchanged from previous iteration, i.e., $\mathbf{x}_k(t) := \mathbf{x}_k(t-1)$.
\ENDIF
\ELSE
\STATE Keep the assignment unchanged from previous iteration, i.e., $\mathbf{x}_k(t) := \mathbf{x}_k(t-1)$.
\ENDIF
\end{algorithmic}
\end{algorithm}
\subsection{Algorithm Development}
\textbf{Algorithm \ref{alg:auc_alg}} outlines the auction-based resource allocation approach. Each transmitter locally executes \textbf{Algorithm \ref{alg:auc_loc}} and obtains a temporary allocation. When the execution of \textbf{Algorithm \ref{alg:auc_loc}} is finished, each underlay transmitter $k$ reports to the MBS the local information, e.g., its choices for the resources, $\mathbf{x}_k = \left[ x_{k}^{(n,l)} \right]_{\forall n,l}$. Once the information (e.g., output parameters from \textbf{Algorithm \ref{alg:auc_loc}}) from all the transmitters is available to the MBS, the necessary parameters (e.g., input arguments required by \textbf{Algorithm \ref{alg:auc_loc}}) are calculated and broadcast by the MBS. \textbf{Algorithm \ref{alg:auc_loc}} is repeated iteratively until the allocation variable $\mathbf{X} = \left[\mathbf{x}_k \right]_{\forall k} = \left[x_{1}^{(1, 1)}, \cdots, x_{1}^{(1, L)}, \cdots, x_{1}^{(N, L)}, \cdots, x_{K}^{(N, L)} \right]^{\mathsf{T}}$ remains unchanged for two successive iterations.
\begin{algorithm} [!t]
\AtBeginEnvironment{algorithmic}{\small}
\caption{Auction-based resource allocation}
\label{alg:auc_alg}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\renewcommand{\algorithmicforall}{\textbf{for each}}
\renewcommand{\algorithmiccomment}[1]{\textit{/* #1 */}}
\renewcommand{\algorithmicensure}{\textbf{Initialization:}}
\ENSURE
\STATE Estimate the CSI parameters from the previous time slot.
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ randomly selects a transmission alignment and reports to the MBS.
\STATE MBS broadcasts the assignment of all transmitters, aggregated interference of each RB, the costs and the highest bidders using pilot signals.
\STATE Initialize number of iterations $t := 1$.
\renewcommand{\algorithmicensure}{\textbf{Update:}}
\vspace*{0.5em}
\ENSURE
\WHILE{$\mathbf{X}(t) \neq \mathbf{X}(t-1)$ \AND $t$ is less than some predefined threshold $T_{\mathrm{max}}$}
\STATE Each underlay transmitter $k \in \mathcal{K}^{\mathrm T}$ locally runs the \textbf{Algorithm \ref{alg:auc_loc}} and reports the assignment $\mathbf{x}_k(t)$, the cost $\mathbf{C}_k(t)$ and the bidding information $\mathfrak{B}_k(t)$ to the MBS.
\STATE MBS calculates the aggregated interference $I^{(n)}(t)$ for $\forall n$, the allocation variable $\mathbf{X}(t)$, information about highest bidders $\mathfrak{B}(t)$, the cost $\mathbf{C}(t)$, and broadcast to the underlay transmitters.
\STATE Update $t := t+1$.
\ENDWHILE
\renewcommand{\algorithmicensure}{\textbf{Allocation:}}
\vspace*{0.5em}
\ENSURE
\STATE Allocate the RB and power levels to the SBSs and D2D UEs.
\end{algorithmic}
\end{algorithm}
\subsection{Convergence, Complexity, and Optimality of the Auction Approach}
In the following subsections we analyze the convergence, complexity, and optimality of the solution obtained by auction algorithm.
\subsubsection{Convergence and Complexity}
For any arbitrary fixed $\epsilon >0$, the auction approach is guaranteed to converge to a fixed assignment. The following theorem shows that the auction process terminates within a fixed number of iterations.
\begin{theorem} \label{thm:auc_terminate}
The auction process terminates in a finite number of iterations.
\end{theorem}
\begin{proof}
According to our system model, each underlay transmitter selects only one transmission alignment. Hence, once each resource receives at least one bid (which implies that each transmitter is assigned to a resource), the auction process must terminate. Now if any resource $\lbrace n, l \rbrace$ receives a bid in each of $\hat{t}$ iterations, its cost must exceed its initial price by $\hat{t} \epsilon$. As a result, the resource $\lbrace n, l \rbrace$ becomes costly to be assigned when compared to any resource $\lbrace n^\prime, l^\prime \rbrace$ that has not received any bid yet. It follows that there are two possibilities: \textit{i)} the auction process terminates in a finite number of iterations with each transmitter assigned to a resource, regardless of whether every resource has received a bid; or \textit{ii)} the auction process continues for a finite number of iterations until each resource has received at least one bid, after which the algorithm terminates.
\end{proof}
At termination, the solution (i.e., the allocation) obtained is almost at equilibrium, i.e., the condition in Equation (\ref{eq:auction_price_comslac}) is satisfied for all the underlay transmitters. Since the algorithm terminates after a finite number of iterations, we can show that the algorithm converges to a fixed allocation and that the complexity at each transmitter is linear in the number of resources.
\begin{theorem}
The auction algorithm converges to a fixed allocation with the number of iterations of $$\mathcal{O}\left( T KNL \left \lceil {\frac{\underset{k, n, l}{\operatorname{max}} B_{k}^{(n,l)} - \underset{k, n, l}{\operatorname{min}} B_{k}^{(n,l)}}{\epsilon} } \right\rceil \right).$$
\end{theorem}
\begin{proof}
The proof follows from an argument similar to that presented in \textbf{Theorem \ref{thm:auc_terminate}}. In the worst case, the total number of iterations in which a resource can receive a bid is no more than $ \Upsilon = \left \lceil {\frac{\underset{k, n, l}{\operatorname{max}} B_{k}^{(n,l)} - \underset{k, n, l}{\operatorname{min}} B_{k}^{(n,l)}}{\epsilon} } \right\rceil$ \cite{auction_base}. Since each bid requires $\mathcal{O}\left(NL \right)$ operations, and each iteration involves a bid by a single transmitter, the total number of iterations in \textbf{Algorithm \ref{alg:auc_alg}} is of $\mathcal{O}\left( KNL \Upsilon \right)$. For convergence, the allocation variable $\mathbf{X}$ needs to remain unchanged for at least $T \geq 2$ consecutive iterations. Hence, the overall running time of the algorithm is $\mathcal{O}\left( T KNL \Upsilon \right)$.
\end{proof}
Note that for any transmitter node $k \in \mathcal{K}^{\mathrm T}$, the complexity of the auction process given by \textbf{Algorithm \ref{alg:auc_loc}} is linear in the number of resources in each iteration.
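Plugging illustrative numbers into this bound (all values below are hypothetical) gives a feel for the worst-case iteration count, and makes the trade-off visible: a smaller $\epsilon$ tightens the optimality gap discussed next but inflates $\Upsilon$, and hence the running time, linearly.

```python
import math

def worst_case_iterations(B_max, B_min, eps, K, N, L, T=2):
    """Worst-case running time O(T*K*N*L*Upsilon) with
    Upsilon = ceil((max benefit - min benefit) / eps)."""
    Upsilon = math.ceil((B_max - B_min) / eps)
    return T * K * N * L * Upsilon

# e.g., benefits in [0, 10], eps = 0.5, K = 4 transmitters, N = 8 RBs, L = 3 power levels
print(worst_case_iterations(B_max=10.0, B_min=0.0, eps=0.5, K=4, N=8, L=3))  # 3840
```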
\subsubsection{Optimality}
In the following we show that the data rate obtained by the auction algorithm is within $K \epsilon$ of the maximum data rate obtained by solving the original optimization problem $\mathbf{P\ref{opt:combopt}}$.
\begin{theorem}
The data rate obtained by the distributed auction algorithm is within $K \epsilon$ of the optimal solution.
\end{theorem}
\begin{proof}
We construct the proof by using an approach similar to that presented in \cite{auction_base}. The data rate obtained by any assignment $\mathbf{X}$ will satisfy the following condition:
\begin{equation} \label{eq:auc_prof_ineqal}
\sum_{k=1}^{K} R_{u_k} \leq \sum_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)} + \sum_{k=1}^{K} \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace
\end{equation}
where $\widehat{C}^{(n,l)} = \underset{k^\prime \in \mathcal{K}^{\mathrm T}}{\operatorname{max}} C_{k^\prime}^{(n,l)} $, $B_{k}^{(n,l)} = w_1 \mathscr{R}\left(\Gamma_{u_k}^{(n, l)}\right)$ and $R_{u_k}$ is given by Equation (\ref{eq:rate_ue}). The inequality given by Equation (\ref{eq:auc_prof_ineqal}) is satisfied since the first term on the right-hand side of the inequality, e.g., $\sum\limits_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)}$
is equal to $\sum\limits_{k=1}^{K} \sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)} C_{k}^{(n,l)}$ and the second term is not less than $\sum\limits_{k=1}^{K} \sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)}\left( B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right)$. Let the variable $A^* \triangleq \underset{\mathbf{X}^*}{\operatorname{max}} \sum\limits_{k=1}^{K} R_{u_k} = \sum\limits_{k=1}^{K} \sum\limits_{n = 1}^{N} \sum\limits_{l = 1}^{L} ~{x_{k}^{(n,l)}}^{*} B_{\mathrm {RB}} \log_2 \left(1 + \gamma_{u_k}^{(n)} \right)$ denote the optimal achievable data rate. In addition, let the variable $D^*$ be defined as
\begin{equation}
D^* \triangleq \underset{\substack{\widehat{C}^{(n,l)} \\ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}}{\operatorname{min}} \left\lbrace \sum_{\lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}} \widehat{C}^{(n,l)} + \sum_{k=1}^{K} \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace \right\rbrace.
\end{equation}
Hence from Equation (\ref{eq:auc_prof_ineqal}), we can write $A^* \leq D^*$. Since the final assignment and the set of costs are almost at equilibrium, for any underlay transmitter $k$, the condition $\sum\limits_{n=1}^{N} \sum\limits_{l=1}^{L} x_{k}^{(n,l)}\left( B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right) \geq \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace - \epsilon$ will hold. Consequently, we can obtain the following inequality:
\begin{align}
D^* &\leq \sum_{k=1}^{K} \left( \sum_{n=1}^{N} \sum_{l=1}^{L} x_{k}^{(n,l)} \widehat{C}^{(n,l)} + \underset{ \lbrace n, l \rbrace \in \mathcal{N} \times \mathcal{L}}{\operatorname{max}} \left\lbrace B_{k}^{(n,l)} - \widehat{C}^{(n,l)} \right\rbrace \right) \nonumber \\
&\leq \sum_{k=1}^{K}\sum_{n=1}^{N} \sum_{l=1}^{L} x_{k}^{(n,l)} B_{k}^{(n,l)} + K \epsilon \leq \sum_{k=1}^{K} R_{u_k} + K \epsilon \leq A^* + K \epsilon.
\end{align}
Since $A^* \leq D^*$, the data rate achieved by the auction algorithm is within $K \epsilon$ of the optimal data rate $A^*$ and the proof follows.
\end{proof}
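As an independent sanity check of the $K\epsilon$ bound, the following sketch implements a generic Gauss-Seidel $\epsilon$-auction for a small square assignment problem (a textbook single-resource analogue in the spirit of \cite{auction_base}, not the multi-tier algorithm itself; all names and data are ours) and compares its value against a brute-force optimum:

```python
import itertools
import numpy as np

def eps_auction(B, eps):
    """Generic Gauss-Seidel eps-auction for a K-by-K assignment problem.

    B[i, j] : benefit of assigning agent i to resource j.
    Returns assigned[i] = resource held by agent i at termination.
    """
    K = B.shape[0]
    prices = np.zeros(K)
    owner = [-1] * K              # owner[j]: agent currently holding resource j
    assigned = [-1] * K           # assigned[i]: resource held by agent i
    unassigned = list(range(K))
    while unassigned:
        i = unassigned.pop()
        net = B[i] - prices
        j = int(np.argmax(net))
        second = np.partition(net, -2)[-2]
        prices[j] += net[j] - second + eps   # raise the cost by Delta_i
        if owner[j] != -1:                   # outbid the previous holder
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned

rng = np.random.default_rng(0)
B = rng.uniform(0.0, 10.0, size=(4, 4))
eps = 0.05
assigned = eps_auction(B, eps)
value = sum(B[i, assigned[i]] for i in range(4))
best = max(sum(B[i, p[i]] for i in range(4))
           for p in itertools.permutations(range(4)))
print(best - value <= 4 * eps)  # True: within K*eps of the optimum
```

The brute-force comparison over all $K!$ permutations is only feasible for tiny $K$, but it confirms the guarantee of the theorem on a concrete instance.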
\section{Qualitative Comparison Among the Resource Allocation Schemes} \label{sec:comparisons}
In this section, we compare the different resource allocation schemes discussed above based on several criteria (e.g., flow of algorithm execution, information requirement and algorithm overhead, complexity and optimality of the solution, convergence behavior, etc.). We term the centralized solution (which can be obtained by solving the optimization problem $\mathbf{P\ref{opt:combopt}}$) the COS (centralized optimal scheme) and compare it with the distributed solutions. A comparison among the resource allocation schemes is presented in Table \ref{tab:comp}.
\begin{table}[!h]
\centering
\begin{footnotesize}
\begin{tabular}{P{2.0cm} P{2.5cm} P{2.5cm} P{2.5cm} P{2.5cm} }
\toprule
\multirow{2}{*}{Criterion} & \multicolumn{4}{c}{Schemes}\\ \cline{2-5}
& \multicolumn{1}{c}{COS} & \multicolumn{1}{c}{Stable matching} & \multicolumn{1}{c}{Message passing} & \multicolumn{1}{c}{Auction method} \\
\midrule
Type of the solution & Centralized & Distributed & Distributed & Distributed\\
\midrule
Algorithm execution & MBS solves the resource optimization problem (e.g., $\mathbf{P\ref{opt:combopt}}$) & MBS and underlay transmitters locally update the preference profiles, MBS runs the matching subroutine & MBS and underlay transmitters iteratively exchange the messages, MBS computes the marginals and selects the allocation & Each underlay transmitter locally runs the auction subroutine, MBS collects the parameters from all the transmitters and broadcasts the required parameters needed for the auction subroutine\\
\midrule
Optimality & Optimal & Weak Pareto optimal & Optimal subject to the weight $\omega$ & Within $K\epsilon$ to the optimal\\
\midrule
Complexity & $\mathcal{O}\left( \left(NL \right)^{K} \right)$ at the MBS & \resizebox{.99\hsize}{!}{$\mathcal{O}\left( T NL \log(NL) \right)$} at the transmitters, $\mathcal{O}(TKNL) $ at the MBS & \resizebox{.99\hsize}{!}{ $\mathcal{O} \left(T {(NL)}^2 \log \left( NL \right) \right)$} at the transmitters, $\mathcal{O}\left( T K N L \right)$ at the MBS & For each iteration linear with $N, L$ at the transmitters, overall running time $\mathcal{O}\left( T KNL \Upsilon \right)$\\
\midrule
Convergence behavior & N/A & Converges to a stable matching and hence to a fixed allocation & Converges to a fixed marginal and to a fixed allocation & Converges to a fixed allocation within $K\epsilon$ of the optimal \\
\midrule
Information required by the MBS & Channel gains (e.g., CSI parameters) between all the links of the network & The preference profiles and the channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$ & The messages $\left[ \psi_{k \rightarrow \lbrace n,l \rbrace} \right]_{\forall k,n,l}$ and the channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$ & The channel gains $G_{k}^{(n)} = \left[ g_{k,m_{k}^*}^{(n)} \right]_{\forall k, n}$, local assignments $\mathbf{x}_k$, the cost $\mathbf{C}_k$, and the bidding information $\mathfrak{B}_k$ for $\forall k$ \\
\midrule
Algorithm overhead & High (exponential) computational complexity, requirement of all CSI parameters of the network & Build the preference profiles, exchange information to update preference profiles, execution of matching subroutine & Calculation and exchange of messages, computation of the marginals & Computation and exchange of the parameters, e.g., $I^{(n)}$ for $\forall n$, the allocation vector $\mathbf{X}$, information about highest bidders $\mathfrak{B}$, the cost vector $\mathbf{C}$\\
\toprule
\end{tabular}
\end{footnotesize}
\caption{Comparison among different resource allocation approaches}
\label{tab:comp}
\end{table}
\section{Chapter Summary and Conclusion} \label{sec:conclusion}
We have presented three comprehensive distributed solution approaches for the future 5G cellular mobile communication systems. Considering a heterogeneous multi-tier 5G network, we have developed distributed radio resource allocation algorithms using three different mathematical models (i.e., stable matching, message passing, and the auction method). The properties (e.g., convergence, complexity, optimality) of these distributed solutions have also been briefly analyzed. To this end, a qualitative comparison of these schemes has been illustrated.
The solution tools presented in this chapter can also be applicable to address the resource allocation problems in other enabling technologies for 5G systems. In particular, the mathematical tools presented in this chapter open up new opportunities to investigate other network models, such as resource allocation problems for wireless virtualization \cite{wnv_1} and cloud-based radio access networks \cite{c_ran1}. In such systems, these modeling tools need to be customized accordingly based on the objective and constraints required for the resource allocation problem.
In addition to the presented solutions, there are a few game-theoretic models which have not been covered in this chapter. However, these game models can also be considered as potential distributed solution tools. Different from traditional cooperative and non-cooperative games, some game models (such as mean field games \cite{mfg_schedule, mfg_crn}, evolutionary games \cite{evo_wireless}, etc.) are scalable by nature, and hence applicable to model such large heterogeneous 5G networks. Utilizing those advanced game models for the resource allocation problems and analyzing the performance (e.g., data rate, spectrum and energy efficiency, etc.) of 5G systems could be an interesting area of research.
\clearpage
\bibliographystyle{IEEEtran}
\section{Introduction}
Transition metal dichalcogenides (TMDs) are good candidates for nanoengineering due to their quasi-two-dimensional nature.
The weak interlayer interaction allows the fine-tuning of the electronic and vibrational properties of the nanostructure by stacking different types and numbers of layers.\cite{geim_van_2013}
To characterize the properties of these nanostructures, Raman spectroscopy is a useful and accurate technique, which simultaneously probes their vibrational and optical properties.
It yields information about the lattice symmetry, the vibrational eigenmodes, and optically active electronic transitions, including excitonic effects.\cite{cardona_light_scattering_solids_II,loudon_theory_1963}
In particular, when in the resonant regime, the Raman intensities show a strong dependence on the laser photon energy for certain phonon modes, as was shown for
MoSe$_2$,\cite{soubelet_resonance_2016,kim_davydov_2016} MoS$_2$,\cite{lee_anomalous_2015} and WS$_2$.\cite{staiger_splitting_2015}
This dependence allows the identification of excitonic states and the investigation of their coupling to phonons, as demonstrated for MoS$_2$,\cite{carvalho2015,scheuschner_interlayer_2015} WS$_2$, and WSe$_2$.\cite{del_corro_atypical_2016}
In MoTe$_2$\xspace, measurements also show such a strong dependence.\cite{yamamoto_strong_2014,ruppert_optical_2014,froehlicher2015,grzeszczyk_raman_2016,song_physical_2016}
In the case of triple-layer MoTe$_2$\xspace, it was observed that the intensity ratio between the lowest- and highest-frequency modes belonging to the same Davydov triplet significantly changes with laser photon energy.\cite{froehlicher2015,grzeszczyk_raman_2016,song_physical_2016}
The change of the Raman intensities with laser photon energy in MoTe$_2$\xspace and in TMDs in general is yet to be fully understood and few \textit{ab initio}\xspace studies are present in the literature.\cite{guo_double_2015}
More recently, the experimental observation of the temperature dependence of the Raman intensities was reported.\cite{golasa_resonant_2017}
Single-layer MoTe$_2$ is a near-infrared (1.1~eV at room temperature) direct optical band gap semiconductor; as such, it is possible
to probe excitonic states with visible photon energies.\cite{ruppert_optical_2014,froehlicher_direct_2016}
Additionally, the Davydov-split modes appear prominently at visible (and hence easily available) laser photon energies.\cite{froehlicher2015,grzeszczyk_raman_2016,song_physical_2016}
In this work, we explain the dependence of the one-phonon Raman intensities on the laser photon energy using computational simulations and compare them with experimental results.
The accurate description of resonant Raman scattering is challenging due to the interplay between electronic correlation and electron-phonon coupling.
Up to now, most theoretical studies have focused on the non-resonant regime using simpler models like the bond-polarizability model or density functional perturbation theory.\cite{luo_anomalous_2013,luo_effects_2013,umari_raman_2001}
However, these methods assume static electromagnetic fields, which is not applicable in the resonant case where the \emph{dynamic} dielectric response needs to be accounted for.
Resonant Raman spectroscopy has also been studied using empirical models fitted from experiments to describe the electronic bands, phonon dispersion and electron-phonon coupling.\cite{cantarero_excitons_1989}
More recently, a study on the double-resonant Raman process in MoTe$_2$\xspace investigated the resonance surface using calculations of the electronic structure and phonon dispersion.\cite{guo_double_2015}
Here we use an \textit{ab initio} approach to calculate the first-order Raman susceptibility as a function of laser photon energy.
We calculate the Raman susceptibility by approximating the derivative of the dielectric response function with respect to lattice displacements with finite differences.\cite{gillet2013,del_corro_atypical_2016}
To this end, we combine different \textit{ab initio} methods:
we calculate the ground state properties using density functional theory (DFT), the phonons with density functional perturbation theory (DFPT), and the optical absorption spectra both in the independent-particle approximation and including many-body effects.
We discuss the main qualitative features on the independent-particle level and show that the inclusion of excitonic effects provides a reliable quantitative description of the Raman spectrum, in very good agreement with experimental results.
Moreover, the calculations reproduce the experimentally reported\cite{froehlicher2015,grzeszczyk_raman_2016,song_physical_2016} dependence of the intensity ratio of the $A^\prime_1$ Davydov triplet as a function of laser photon energy. Finally we give an explanation of the results in terms of quantum interference effects.
\section{Raman intensities from first principles}\label{sec:raman}
The experimental observable of interest, the Raman intensity, is, in the case of phonon emission (Stokes scattering), given by\cite{loudon_theory_1963,birman_theory_1966,cardona_light_scattering_solids_II}
\begin{align}
I (\omega_{\rm L}) \propto \sum_\mu (\omega_{\rm L}-\omega_\mu)^4 |(\vec{e}_{\rm S})^\dagger \bm{\alpha}_\mu (\omega_{\rm L}) (\vec{e}_{\rm L})|^2\frac{n_\mu+1}{2\omega_\mu}.
\label{eqI}
\end{align}
Here, $\bm{\alpha}_\mu(\omega)$ is the Raman susceptibility tensor, $\vec{e}_{\rm L}$ and $\vec{e}_{\rm S}$ are the polarization vectors of the incoming and scattered light, respectively, $\omega_{\rm L}$ is the frequency of the incoming light, $\omega_{\mu}$ denotes the frequency of phonon mode $\mu$, and $n_\mu$ represents its occupation factor.
In the frozen-phonon limit, the Raman tensor equals the change of the dielectric susceptibility $\bm{\chi}(\omega)$ with atomic displacements\cite{cardona_light_scattering_solids_II}
\begin{align}
\bm{\alpha}_\mu(\omega) = \sum_{\tau, i} \frac{\partial \bm{\chi}(\omega)}{\partial R_{\tau, i}} Q^{\tau,i}_\mu, \label{eq:raman_tensor}
\end{align}
where $R_{\tau,i}$ is the position of atom $\tau$ in the Cartesian direction $i$, and $Q_\mu$ the eigenvector of the phonon mode $\mu$, normalized according to
\begin{align}
\sum_{\tau, i} M_{\tau} Q^{\tau, i}_\mu Q^{\tau, i}_\nu = \delta_{\mu\nu},
\end{align}
where $M_{\tau}$ denotes the mass of atom $\tau$.
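As an illustrative numerical sketch (toy masses, displacement patterns, and tensor values in arbitrary units; not part of the \textit{ab initio} workflow), the mass-normalization condition and the Stokes intensity of Eq.~\ref{eqI} can be written as:

```python
import numpy as np

def bose(omega, kT):
    """Bose-Einstein occupation n(omega) at thermal energy kT (same units)."""
    return 1.0 / np.expm1(omega / kT)

def stokes_intensity(omega_L, omega_mu, alpha_mu, e_L, e_S, kT):
    """Stokes intensity of one phonon mode, up to a global prefactor:
    (omega_L - omega_mu)^4 |e_S^dagger alpha_mu e_L|^2 (n_mu + 1) / (2 omega_mu)."""
    n_mu = bose(omega_mu, kT)
    amp = np.vdot(e_S, alpha_mu @ e_L)  # vdot conjugates e_S -> (e_S)^dagger
    return (omega_L - omega_mu)**4 * abs(amp)**2 * (n_mu + 1) / (2.0 * omega_mu)

# Mass-normalization of the phonon eigenvectors (toy 2-atom, 1D example):
# enforce sum_tau M_tau Q_mu Q_nu = delta_mu,nu via a Cholesky factorization.
M = np.array([2.0, 1.0])                       # toy atomic masses
Q = np.array([[1.0, 0.5], [0.2, 1.0]])         # raw displacement patterns (rows = modes)
G = Q @ np.diag(M) @ Q.T                       # Gram matrix in the mass metric
Q = np.linalg.inv(np.linalg.cholesky(G)) @ Q   # orthonormalize in the mass metric
assert np.allclose(Q @ np.diag(M) @ Q.T, np.eye(2))
```

Here the $4\times4$ numbers are placeholders; in practice $\bm{\alpha}_\mu$ comes from the finite-difference calculation described below.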
This formulation allows us to account for many-body effects in the Raman susceptibility by incorporating them in the calculation of the dielectric response.
At this level, different well-tested implementations are available in a fully \textit{ab initio} framework which allow the inclusion of excitonic and electronic correlation effects, which are especially relevant in TMDs.\cite{qiu_optical_2013,molina-sanchez_effect_2013}
The frozen-phonon approximation is valid at energies that fulfill the condition
\begin{align}
\hbar\omega_\mu \ll \left| \hbar\omega_{\rm L} - \Delta E + i\gamma \right|,
\end{align}
where $\Delta E$ represents the energy of an electronic transition, $\hbar\omega_{\rm L} = E_{\rm L}$ is the photon energy of the incoming laser light (from now on designated simply as laser energy) and $\gamma$ is the broadening, i.e., the inverse lifetime, of the electronic excitation.
In the non-resonant regime, $E_{\rm L}$ is far away from any electronic transition energy and this condition is automatically satisfied.
In the resonant regime, where the laser energy matches the energy of an electronic transition, the relevant condition is that the phonon energy ($\sim$20--25~meV) is smaller than the electronic broadening.
At room temperature the broadening due to electron-phonon coupling is around 100 meV\cite{molina-sanchez_temperature-dependent_2016} and therefore the frozen-phonon approximation is reasonable.
This approach explicitly captures the laser-energy dependence inherent to the Raman susceptibility tensor, which is crucial for studying resonance effects.
This formulation goes beyond the bond polarizability model and DFPT, which assume static electromagnetic fields, and are therefore only valid in the non-resonant regime.\cite{lazzeri_first-principles_2003,veithen_nonlinear_2005}
\subsection{Electronic structure and phonons}\label{sec:gs}
The electronic structure of MoTe$_2$\xspace is calculated using DFT within the local density approximation (LDA), as implemented in the PWscf code of the Quantum ESPRESSO suite.\cite{giannozzi_quantum_2009}
We include the semi-core 4s and 4p states in the pseudopotential of molybdenum and account for spin-orbit interaction by employing spinorial wave functions.
The charge density is calculated using a plane-wave energy cutoff of 100~Ry and a $16\times 16\times 1$ $\bf k$-point grid for both the single- and triple-layer calculation.
For the lattice parameter, we use the experimental value of 3.52~\AA. \cite{podberezskaya_crystal}
\begin{table}
\center
\begin{tabular}{lccccc}
Mode & $A{^\prime_1}$ & $E^{\prime}(x)$ & $E^{\prime}(y)$\\\hline
Raman tensor $\bm{\alpha}_\mu$ &
$\begin{bmatrix}
a & & \\
& a & \\
& & b\\
\end{bmatrix}$&
$\begin{bmatrix}
& d & \\
d & & \\
& & \\
\end{bmatrix}$&
$\begin{bmatrix}
d & & \\
& -d & \\
& & \\
\end{bmatrix}$\\\hline
Single-layer &\phantom{(a)} 174.6 (171.5) & \multicolumn{2}{c}{ 238.3 (236.5) } \\
Freq. (cm$^{-1}$) & & \\\hline
Triple-layer & (a) 173.6 (169.4) & \multicolumn{2}{c}{ 235.5 (234.7) } \\
Freq. (cm$^{-1}$) &\phantom{(a)} 175.1 \phantom{(000.0)} & \multicolumn{2}{c}{ 238.0 \phantom{(000.0)} } \\
& (b) 176.4 (172.6) & \multicolumn{2}{c}{ 239.0 (234.7) } \\
\end{tabular}
\caption{
Calculated and experimental\cite{froehlicher2015} (in parentheses) phonon mode frequencies and corresponding form of the Raman tensor for the space group D$_{\rm 3h}$.\cite{loudon_theory_1963}
We distinguish the two Raman active $A^\prime_1$ modes in triple-layer MoTe$_2$\xspace using the letters (a) and (b).
The triple-layer mode with frequency 175.1 cm$^{-1}$ is Raman inactive and belongs to the $A^{\prime\prime}_2$ representation.
All other listed modes are Raman-active.
The calculated splitting of the $E^\prime$ mode in triple-layer MoTe$_2$\xspace is not observed experimentally.
This mode, however, is not studied in detail here. For a complete discussion see Ref.\citenum{froehlicher2015}.
}\label{tab:phonon_modes}
\end{table}
The phonons of MoTe$_2$\xspace are calculated using DFPT.
Due to momentum conservation, only phonon modes at $\Gamma$ participate in first-order Raman scattering, as the magnitude of the light momentum is negligible compared to the crystal momentum.
The Raman-active phonon modes of interest are reported in Table~\ref{tab:phonon_modes}.
Both single- and triple-layer MoTe$_2$\xspace belong to the space group D$_{\rm 3h}$.
We refer to the different phonon modes by their irreducible representation label in the Mulliken notation.
The phonon modes of single-layer MoTe$_2$\xspace are denoted by $E^\prime$ and $A^\prime_1$ for the in-plane and out-of-plane modes, respectively.\cite{molina-sanchez_vibrational_2015}
When going from single-layer to triple-layer MoTe$_2$\xspace, the $A^\prime_1$ mode splits into a Davydov triplet composed of two Raman-active $A^\prime_1$ modes, which we denote by $A^\prime_1$(a) and $A^\prime_1$(b), and one IR-active $A^{\prime\prime}_2$ mode.
In this work, we will study the experimentally observed inversion of the Raman intensity ratio between the $A^\prime_1$(a) and (b) modes as a function of laser energy as shown in Figure~\ref{fig:Fig_IPCMS}.
\subsection{Optical absorption}\label{sec:abs}
We calculate the optical absorption on two levels of theory:
in the first approach, we treat electrons and holes as independent particles (IP), while in the second case, we include many-body effects due to electron-electron and electron-hole interaction perturbatively using the GW approximation and Bethe-Salpeter equation (BSE).\cite{onida_electronic_2002}
\subsubsection{Independent-particle approximation}
The expression for the IP dielectric susceptibility can be derived from time-dependent perturbation theory and is given by \cite{baroni_abinitio_1986}
\begin{align}
\chi^{ij}(\omega)
\propto \sum_{\mathbf{k} vc} \left[ \frac{(\Lambda^i_{cv\mathbf{k}})^* \Lambda^j_{cv\mathbf{k}}}
{\hbar\omega - (\epsilon_{c\kb}-\epsilon_{v\kb}) + i\gamma} + (\omega \to -\omega) \right],
\label{eq:chi_ip}
\end{align}
where $\Lambda^i_{cv\mathbf{k}} = \braket{\psi_{c\mathbf{k}}| p^i/m_e |\psi_{v\mathbf{k}}}$ denotes the electron-light coupling (ELC) matrix elements, also referred to as dipole matrix element, and $\epsilon_{v\kb}$ and $\epsilon_{c\kb}$ are the DFT valence and conduction bands energies, respectively.
The index $i$ denotes the Cartesian component of the ELC, while the parameter $\gamma$ represents the sum of the electron and hole broadening.
We use a constant broadening of 100~meV.
The calculation of the ELC was performed using the \texttt{yambo} code. \cite{marini_yambo:_2009}
The absorption spectrum is proportional to the imaginary part of the diagonal elements of the dielectric susceptibility tensor $\boldsymbol{\chi}(\omega)$.
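A schematic evaluation of Eq.~\ref{eq:chi_ip} (toy band energies and matrix elements, prefactors omitted; the $(\omega \to -\omega)$ substitution is applied literally to the first term):

```python
import numpy as np

def chi_ip_xx(omega, eps_v, eps_c, dip, gamma=0.1):
    """Independent-particle chi^xx(omega) for one valence/conduction band
    pair sampled on a k-grid, up to a prefactor.
    eps_v, eps_c: band energies per k-point; dip: ELC matrix elements
    Lambda^x_{cvk} per k-point; omega: photon energy (hbar*omega).
    Both the resonant and antiresonant (omega -> -omega) terms are kept."""
    d_eps = np.asarray(eps_c) - np.asarray(eps_v)
    resonant     = np.abs(dip)**2 / ( omega - d_eps + 1j * gamma)
    antiresonant = np.abs(dip)**2 / (-omega - d_eps + 1j * gamma)
    return np.sum(resonant + antiresonant)
```

With this convention, the magnitude of the imaginary part peaks when the photon energy crosses a transition energy, which is the behavior the absorption spectrum reflects.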
\subsubsection{Many-body perturbation theory}
The two-dimensional character of MoTe$_2$\xspace reduces the dielectric screening and hence many-body effects are more pronounced than in three-dimensional materials.
Such effects manifest themselves as significant corrections to the electronic band energies and in large excitonic binding energies.
We account for these effects by combining the GW method and the BSE.\cite{rohlfing_electron-hole_2000}
GW calculations were performed non-self-consistently (G$_0$W$_0$) using a $36\times 36\times 1$ sampling of the Brillouin zone (BZ) and a 40~Ry cutoff for the plane-wave basis.
A converged quasi-particle band gap was obtained by including 120 electronic bands for single- and 360 bands for triple-layer MoTe$_2$\xspace.
It should be noted that an accurate GW correction requires the inclusion of the semi-core states in the Mo pseudopotential.\cite{rohlfing_quasiparticle_1995}
In order to avoid spurious interactions between periodic copies of the layers along the z-direction, we apply a Coulomb cutoff.\cite{rozzi_exact_2006}
We account for electron-hole interactions by solving the BSE with a statically screened Coulomb potential.\cite{rohlfing_electron-hole_2000}
In terms of exciton energies $\epsilon_s$, exciton-light coupling matrix elements $\Gamma^i_s$, and excitonic broadening $\gamma$, the dielectric susceptibility reads:
\begin{align}
\chi^{ij}(\omega) \propto
\sum_{s}
\frac{(\Gamma^i_{s})^* \Gamma^j_{s}}
{\hbar\omega - \epsilon_s + i\gamma} + (\omega \to -\omega)\label{eq:chi_bse}.
\end{align}
The BSE calculations were performed using a 30~Ry cutoff for the plane-wave basis and a $36\times 36\times 1$ $\mathbf{k}$-point grid to sample the Brillouin zone.
We include electronic transitions inside a 3~eV window (see Supporting Information for details of the GW and BSE calculations).
\subsection{Raman susceptibility tensor}
The Raman susceptibility tensor $\bm\alpha_\mu(\omega)$ of phonon mode $\mu$ is calculated by approximating the directional derivative of $\boldsymbol{\chi}(\omega)$ with the finite differences method.
For this, we evaluate the dielectric susceptibility at the two displaced positions $\vec{R}^\pm_\tau = \vec{R}_\tau \pm \delta \vec{Q}^\mu_\tau$ and divide by the amplitude of the two displacements.
An important practical drawback of this method is that the displacements according to certain phonon modes break some symmetries of the crystal.
This in turn increases the computational cost of the calculation with respect to the fully symmetric absorption calculation.
To reduce the computational cost, we extrapolate the GW correction from the undisplaced to the displaced case using a scissor operator, which incorporates the stretching of the bands.
This scissor operator is kept fixed for all calculations (see Supporting Information).
In addition, note that both the real and imaginary part of the dielectric susceptibility enter in the calculation of the Raman susceptibility.
The real part is known to converge more slowly with the number of bands.
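The finite-difference step itself is simple; a minimal sketch follows, assuming a callable `chi(R, omega)` that stands in for the full dielectric-response calculation (a hypothetical interface, not an actual code API):

```python
import numpy as np

def raman_tensor_fd(chi, R0, Q_mu, omega, delta=1e-2):
    """Central finite-difference estimate of the Raman tensor of mode mu:
    alpha_mu(omega) ~ [chi(R0 + delta*Q_mu, omega) - chi(R0 - delta*Q_mu, omega)] / (2*delta).
    chi(R, omega) must return the 3x3 dielectric susceptibility tensor for
    atomic positions R; Q_mu is the mass-normalized phonon eigenvector."""
    chi_plus  = chi(R0 + delta * Q_mu, omega)
    chi_minus = chi(R0 - delta * Q_mu, omega)
    return (chi_plus - chi_minus) / (2.0 * delta)
```

The displacement amplitude `delta` must be small enough to stay in the linear regime but large enough that the difference exceeds the numerical noise of the two absorption calculations.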
In the IP picture we can further analyze the Raman susceptibility tensor by splitting it up into the contributions from individual $\mathbf{k}$-points.
To this end, we note from Eq.~\ref{eq:chi_ip} that we can
represent the susceptibility $\chi^{ij}(\omega)$ as a sum over $\mathbf{k}$:
\begin{equation}
\chi^{ij}(\omega) = \sum_{\mathbf{k}}\chi_\mathbf{k}^{ij}(\omega),
\end{equation}
where the term $\chi_\mathbf{k}^{ij}(\omega)$ contains contributions from all electronic transitions at that $\mathbf{k}$-point.
Analogously, we write the Raman susceptibility from Eq.~\ref{eq:raman_tensor} as $\alpha^{ij}(\omega) = \sum_\mathbf{k} \alpha^{ij}_\mathbf{k}(\omega)$.
Contrary to the dielectric susceptibility, in which $\chi^{ij}(\omega)$ is the sum of all $\chi_{\mathbf{k}}^{ij}(\omega)$, the Raman intensity is the \emph{square} of the sum of $\alpha^{ij}_\mathbf{k}(\omega)$:
\begin{align}
\begin{split}
I \propto \left| \sum_\mathbf{k} \alpha^{ij}_\mathbf{k} \right|^2 = \underbrace{\sum_\mathbf{k} \left|\alpha^{ij}_\mathbf{k}\right|^2}_\text{direct terms} + \underbrace{\sum_{\substack{\mathbf{k},\mathbf{k}' \\ \mathbf{k}\neq\mathbf{k}'}} \left(\alpha^{ij}_\mathbf{k}\right)^* \alpha^{ij}_{\mathbf{k}'}}_\text{interference terms}.
\end{split}\label{eq:direct_interference}
\end{align}
The interference terms can be constructive or destructive.
If enough electronic transitions with a finite amplitude are in phase we detect a large Raman intensity.
However, if the contributions are out of phase, interference will lead to a small or even zero Raman intensity.
The weight of the direct terms in the final result is much smaller than that of the interference terms (see Supporting Information).
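The decomposition of Eq.~\ref{eq:direct_interference} can be sketched numerically (illustrative only, with arbitrary complex amplitudes):

```python
import numpy as np

def direct_and_interference(alpha_k):
    """Split the Raman intensity I = |sum_k alpha_k|^2 into the direct
    terms sum_k |alpha_k|^2 and the interference cross terms (their
    difference). alpha_k: complex contributions per k-point."""
    alpha_k = np.asarray(alpha_k)
    total = abs(alpha_k.sum())**2
    direct = np.sum(np.abs(alpha_k)**2)
    return direct, total - direct
```

In-phase contributions give a positive interference term (constructive), opposite phases give a negative one (destructive, down to full cancellation of the intensity).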
The key point of this paper is to use the concept of quantum interference to explain the observed behavior of the Raman intensity with laser energy in MoTe$_2$\xspace.
This concept was shown to be important in the Raman intensities of graphene where an increase of the Raman intensity is observed when destructive interference terms are Pauli-blocked through electron or hole doping.\cite{basko_calculation_2009,kalbac_influence_2010,chen_controlling_2011,reichardt_ab_2017}
We show that selection rules manifest themselves at the level of quantum interference, but even when selection rules do not apply, quantum interference explains the behavior of the
Raman intensity.
Since interference effects reflect the interplay of all the terms $\alpha^{ij}_\mathbf{k}(\omega)$, it is inaccurate to attribute the features in the behavior of the Raman intensities to a single electronic transition.
\section{Results}\label{sec:results}
\subsection{Experimental results}
Single- and few-layer hexagonal MoTe$_2$\xspace (hereafter simply denoted MoTe$_2$\xspace) samples were prepared by mechanical exfoliation and deposited onto Si substrates covered with a 90 nm SiO$_2$ epilayer.
The Raman spectra of single- and triple-layer MoTe$_2$\xspace were measured at three different laser energies ($E_{\rm L}=1.58~\rm eV$, 1.96~eV, and 2.33~eV) in a backscattering geometry using a custom-built micro-Raman setup.
The incoming laser beam was linearly polarized and the Raman scattered light was sent through a monochromator with a 500 mm focal length coupled to a charge-coupled device (CCD) array.
A 900 (resp. 2400) lines/mm grating was used for measurements at 1.58 eV (resp. 1.96 eV and 2.33 eV).
Laser intensities below $50~\rm kW/cm^{2}$ were employed in order to avoid photoinduced heating and sample deterioration.
The Raman spectra were fit with Voigt profiles taking into account the spectral resolution of our setup of 1.0, 0.4 and 0.6~cm$^{-1}$ at $E_{\rm L}=$1.58~eV, 1.96~eV, and 2.33~eV, respectively.
Figure~\ref{fig:Fig_IPCMS} shows micro-Raman spectra of single- and triple-layer MoTe$_2$\xspace.
The number of MoTe$_2$\xspace layers has been unambiguously identified as described in Ref.\citenum{froehlicher2015}.
The raw spectra have been normalized by the integrated intensity of the T$_{\rm 2g}$ (point group O$_{\rm h}$) Raman mode of silicon at $\approx 520~\rm cm^{-1}$ for a qualitative comparison.
To quantitatively compare experimentally measured Raman intensities with the \textit{ab initio}\xspace Raman susceptibilities calculated according to Eq.~\ref{eqI}, we have also taken optical interference effects into account and extracted the $\rm xx$-component of the Raman susceptibility after carefully considering the polarization-dependent response of our setup. Additional details on the normalization procedure can be found in the Supporting Information.
\begin{figure*}[!tbh]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/Figure1_IPCMS.pdf}
\caption{Micro-Raman spectra of single-layer (panel a) and triple-layer (panel b) MoTe$_2$\xspace at three different laser energies in a backscattering geometry. All the spectra have been normalized by the integrated intensity of the Raman mode from the underlying Si substrate at $\approx 520~\rm cm^{-1}$. The corresponding atomic displacements for the Raman-active modes are shown as insets in the upper panels.}
\label{fig:Fig_IPCMS}
\end{center}
\end{figure*}
In Figure~\ref{fig:Fig_IPCMS}, we show the experimentally obtained Raman spectra of single- (panel (a)) and triple-layer (panel (b)) MoTe$_2$. The prominent $A^\prime_1$ and $E^\prime$~modes are clearly visible. In single-layer MoTe$_2$, the $A^\prime_1$~mode dominates the spectrum at laser energies of $E_L$=1.58 and 1.96~eV, while at $E_L$=2.33~eV the $E^\prime$~mode is dominant.
Similarly, in triple-layer MoTe$_2$\xspace, the $A_1^\prime$\xspace and $E^\prime$\xspace modes dominate the Raman spectra at $E_{\rm L}=1.58~\rm eV$ and $E_{\rm L}=2.33~\rm eV$, respectively. However, the $A_1^\prime$\xspace modes and the $E^\prime$\xspace mode have comparable intensities at $E_{\rm L}=1.96~\rm eV$.
Remarkably, the Davydov-split $A_1^\prime$(a)\xspace and $A_1^\prime$(b)\xspace modes have similar intensities in triple-layer MoTe$_2$\xspace at $E_{\rm L}=1.96~\rm eV$ and $2.33~\rm eV$, whereas the bulk-like $A_1^\prime$(b)\xspace mode is 13 times more intense than the $A_1^\prime$(a)\xspace mode at $E_{\rm L}=1.58~\rm eV$. Note that the $E^\prime$\xspace mode does not display a measurable Davydov splitting.\cite{froehlicher2015} In Figure~\ref{fig:raman}, we will compare the experimentally measured Raman susceptibilities and integrated intensity ratios between the $A_1^\prime$(b)\xspace and $A_1^\prime$(a)\xspace modes with \textit{ab initio}\xspace calculations and correlate the observation of a prominent Davydov splitting with resonantly enhanced Raman intensities.
\subsection{Theoretical calculations}
In the following we will discuss the results of first-principles calculations and compare them with our experimental results.
Before discussing the results for triple-layer MoTe$_2$\xspace, we analyze the single-layer case.
This allows us to introduce the concept of quantum interference in a simpler context.
In all cases we will analyze the $\rm xx$-component of the Raman susceptibility tensor, $\alpha^\mu_{\rm xx}(\omega)$.
The other components are related to the $\rm xx$-component of $\alpha$, as shown in Table~\ref{tab:phonon_modes}.
\subsubsection{Single-layer MoTe$_2$\xspace}
In the case of single-layer MoTe$_2$\xspace, we analyze the Raman susceptibility for the $A^\prime_1$ and $E^\prime$ modes.
Figure~\ref{fig:raman}a shows the Raman susceptibility as a function of laser energy for both the IP (dashed lines) and BSE calculations (solid lines).
Up to a laser energy of 2~eV the intensity of the $A^\prime_1$ mode is larger than that of the $E^\prime$ mode.
At higher laser energy, the $E^\prime$ mode has a larger intensity than the $A^\prime_1$ mode, in good agreement with the experimental data reported here and in the literature.\cite{ruppert_optical_2014,froehlicher2015,grzeszczyk_raman_2016}
The overall scale of the theoretical results (IP- and BSE-level) has been chosen to reflect that of the experimental results.
This chosen scale is the same for both the IP- and BSE-level calculation to allow a comparison between the two.
Since the overall scaling factor cancels when considering intensity ratios, the quantity that can be compared unambiguously between experiment and theory is the ratio of two intensities, as shown in Figure \ref{fig:raman}c and d.
The inclusion of many-body effects does not change this general trend but affects the relative intensities at the excitonic transitions.
\begin{figure}[h!]
\center
\includegraphics[width=0.45\textwidth]{figures/1l/raman.pdf}
\includegraphics[width=0.45\textwidth]{figures/3l/raman.pdf}
\includegraphics[width=0.45\textwidth]{figures/1l/raman_ratio.pdf}
\includegraphics[width=0.45\textwidth]{figures/3l/raman_ratio.pdf}
\caption{
a) and b) Calculated $\rm xx$-component of the Raman susceptibility tensor squared ($|\alpha^{\rm xx}|^2$) at the IP~level (dashed lines) and at the BSE~level (solid lines) for single-layer (panel a) and triple-layer (panel b) MoTe$_2$\xspace as a function of laser energy for the $A^\prime_1$(a) and $A^\prime_1$(b) modes.
The blue squares and green circles correspond to the same quantity (up to a normalization factor) extracted from the spectra in Figure~\ref{fig:Fig_IPCMS}a and b using Eq.~{\ref{eqI}}.
The vertical lines are guides to the eye.
The BSE optical absorption is represented by a gray area. The optical gap is in good agreement with the experimental values reported in Refs.~\citenum{ruppert_optical_2014} and \citenum{froehlicher_direct_2016}.
c) and d) Ratio of the intensities of the $A^\prime_1$ and $E^\prime$ modes (panel c) and $A^\prime_1$(b) and $A^\prime_1$(a) modes (panel d) calculated on the IP~level (dashed line) and BSE~level (solid line).
The black squares represent the experimentally observed ratios.
}
\label{fig:raman}
\end{figure}
We first analyze the contributions of the individual $\mathbf{k}$-points to the IP susceptibility $\chi_\mathbf{k}(\omega)$.
Figure \ref{fig:1l_raman_bands}a shows $\chi_\mathbf{k}(\omega)$ along a path through the high-symmetry points in the BZ.
The main contributions to $\chi(\omega)$ for laser energies between 0.8 and 2~eV come from the lower bands in transition space in a region around K and between K and M.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{figures/1l/ip/bands_chi.pdf}
\includegraphics[width=0.5\textwidth]{figures/1l/ip/bands_raman.pdf}
\includegraphics[width=0.5\textwidth]{figures/1l/bands_change.pdf}
\caption{
a) IP absorption Im($\chi^{\rm xx}_\mathbf{k}$) represented in transition space along the high-symmetry points in the Brillouin zone for single-layer MoTe$_2$\xspace.
We only show points close to M and K as there are no relevant transitions close to the $\Gamma$ point for laser energies up to 2~eV.
b) Raman susceptibility $\alpha^{\rm xx}_\mathbf{k}$($\omega$) along the high-symmetry line.
Points at which the absolute value of $\alpha^{\rm xx}_\mathbf{k}$($\omega$) is below 7\% of the maximum value at that $\omega$ are shown in white, otherwise the phase of $\alpha^{\rm xx}_\mathbf{k}$($\omega$) is represented by color.
The horizontal lines correspond to the laser energies used in our experiment.
c) Change of electronic bands with atomic displacements according to the $A^\prime_1$ and $E^\prime$ phonon modes.}
\label{fig:1l_raman_bands}
\end{figure}
It should be noted that only optically active transitions can contribute to the Raman susceptibility, but not all of them necessarily do so.
For instance, near the band gap, the $A^\prime_1$~mode is active while the $E^\prime$~mode is silent, even though the same electronic transitions contribute and both modes are, in principle, allowed by lattice symmetry.
This behavior can be understood in terms of angular momentum conservation.
Near the band gap at K the band structure is rotationally symmetric and thus angular momentum is conserved.
Both the incoming and outgoing photons carry an angular momentum of $\pm\hbar$, as does the $E^\prime$ phonon.
This implies that the final state has a total angular momentum of $\pm2\hbar$ or $0$, which violates angular momentum conservation and renders the $E^\prime$~mode silent.
By contrast, the phonon corresponding to the $A_1^\prime$~mode does not carry angular momentum and hence the corresponding process is allowed.
This can also be understood from the point of view of quantum interference.
For this purpose, we show the $\mathbf{k}$-resolved Raman susceptibility as a function of $\omega$ in Figure~\ref{fig:1l_raman_bands}b.
We color-encode the phase when the amplitude is larger than 7\% of the maximum amplitude at that laser energy.
For the $E^\prime$ mode, the positive contribution from one side of the valley is added to the negative contribution from the other side,
which leads to an overall cancelation of the Raman intensity.
By contrast, for the $A^\prime_1$ mode the contributions add up constructively.
At higher laser energies, the full rotation symmetry gradually gets broken down to the 120$^\circ$ rotation symmetry of the lattice, an effect known as trigonal warping\cite{saito_trigonal_2000} of the electronic structure.
Angular momentum is then only conserved up to integer multiples of $\pm3\hbar$ and both the $A_1^\prime$\xspace and $E^\prime$\xspace modes become allowed.
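The cancellation just described can be illustrated with a toy model (a uniform ring of $\mathbf{k}$-points around a perfectly rotationally symmetric valley; the phase winding assigned to the $E^\prime$-like contributions is an assumption of the sketch, chosen to mimic the angular momentum mismatch):

```python
import numpy as np

# ring of k-points around a rotationally symmetric valley at K
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
weight = 1.0 / theta.size                      # uniform weight (isotropic valley)

# A'_1-like contributions: no phase winding around the valley
alpha_A = weight * np.ones_like(theta, dtype=complex)
# E'-like contributions: phase winding, so opposite sides of the valley
# contribute with opposite sign
alpha_E = weight * np.exp(2j * theta)

I_A = abs(alpha_A.sum())**2                    # constructive: finite intensity
I_E = abs(alpha_E.sum())**2                    # destructive: vanishes on the symmetric ring
```

On the symmetric ring the $E^\prime$-like sum cancels to numerical precision, while the $A^\prime_1$-like sum survives; breaking the rotational symmetry of the weights (as trigonal warping does) lifts this cancellation.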
In order to track down the origin of the phase of the Raman susceptibility, we take a closer look at the derivative of $\chi_\mathbf{k}$($\omega$) with respect to atomic displacements:\cite{cardona_light_scattering_solids_II}
\begin{align}
\frac{\partial\chi^{ij}_\mathbf{k}(\omega)}{\partial Q} \propto& \Bigg\{
\frac{\partial(\Delta \epsilon_{cv\mathbf{k}})}{\partial Q} \frac{(\Lambda^i_{cv\mathbf{k}})^* \Lambda^j_{cv\mathbf{k}}}
{(\hbar\omega - \Delta \epsilon_{cv\mathbf{k}} + i\gamma)^2}
+ \frac{\partial \left[(\Lambda^i_{cv\mathbf{k}})^* \Lambda^j_{cv\mathbf{k}} \right]}{\partial Q} \frac{1}
{\hbar\omega - \Delta \epsilon_{cv\mathbf{k}} + i\gamma}
+ (\omega \to -\omega)\Bigg\},
\end{align}
where $\Delta \epsilon_{cv\mathbf{k}} = \epsilon_{c\mathbf{k}} - \epsilon_{v\mathbf{k}}$.
The first term involves the change of the electronic band energies, which is given by the diagonal (intra-band) electron-phonon coupling (EPC) matrix elements.
The second term stems from the change of the ELC upon atomic displacements and involves the off-diagonal EPC matrix elements.
The first term is double-resonant and corresponds to a process where an electron is excited to a conduction band, then scatters with a phonon within the same band, and finally decays to the valence band by emitting a photon.
Since this term is double-resonant, we assume it to be dominant and we can directly relate the phase of the Raman susceptibility with the sign of the diagonal EPC matrix elements.
We visualize these by plotting the change of the electronic band energies with respect to atomic displacements, which correspond to the diagonal EPC matrix elements, as shown in Figure~\ref{fig:1l_raman_bands}c.
From this plot we observe a direct correlation between the sign of the diagonal EPC and the phase of the Raman susceptibility in Figure~\ref{fig:1l_raman_bands}b.
Therefore, we attribute constructive or destructive interference between regions of the BZ to differences in sign of the change of the electronic band energies.
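A minimal numerical illustration of this argument (toy parameters; `d_dip2_dQ` stands for the derivative of $|\Lambda|^2$ with respect to the displacement, an assumption of the sketch):

```python
import numpy as np

def dchi_k_dQ(omega, d_eps, dip, epc_diag, d_dip2_dQ, gamma=0.1):
    """Resonant part of d chi_k / dQ split into its two contributions:
    a double-resonant term driven by the diagonal EPC (band-energy shift)
    and a single-resonant term from the change of the ELC matrix elements."""
    double_res = epc_diag * np.abs(dip)**2 / (omega - d_eps + 1j * gamma)**2
    single_res = d_dip2_dQ / (omega - d_eps + 1j * gamma)
    return double_res, single_res

# at resonance (omega = d_eps) the double-resonant term dominates, and its
# phase is set by the sign of the diagonal EPC matrix element:
d_pos, s_pos = dchi_k_dQ(1.5, 1.5, 1.0, +0.05, 0.001)
d_neg, _     = dchi_k_dQ(1.5, 1.5, 1.0, -0.05, 0.001)
```

Flipping the sign of the diagonal EPC flips the phase of the dominant contribution, which is exactly the correlation observed between Figure~\ref{fig:1l_raman_bands}b and c.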
\subsubsection{Triple-layer MoTe$_2$\xspace}
In the case of triple-layer MoTe$_2$\xspace, we focus our attention on the $A_1^\prime$(a)\xspace and (b) modes, for which experiments reported here and in the literature\cite{froehlicher2015,grzeszczyk_raman_2016,song_physical_2016} show a variation of the relative Raman intensity as a function of laser energy.
Our calculations, both on the IP and BSE level, reproduce this observation very well, as shown in Figure~\ref{fig:raman}b and d.
Common to both calculations is that the $A_1^\prime$(a)\xspace phonon mode is dominant in intensity for laser energies up to 1.8 eV while at higher laser energies the $A_1^\prime$(b)\xspace mode is dominant.
However, only with the inclusion of excitonic effects (BSE) do we obtain the experimentally observed ratio.
Contrary to the single-layer case, where the different intensities are related to different symmetries of the phonon modes, in the triple-layer case, the $A_1^\prime$(a)\xspace and (b) modes belong to the same representation and hence symmetry based-arguments do not apply.
However, we can still use the concept of quantum interference introduced previously to explain the intensity inversion.
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/3l/ip/bands_raman_abs.pdf}
\includegraphics[width=0.5\textwidth]{figures/3l/ip/bands_raman.pdf}
\includegraphics[width=0.5\textwidth]{figures/3l/bands_change.pdf}
\caption{
$\mathbf{k}$-point resolved contributions $\alpha^{\rm xx}_\mathbf{k}$($\omega$) to the total Raman susceptibility for triple-layer MoTe$_2$\xspace.
Panel (a) shows the absolute value, while panel (b) shows the phase of $\alpha^{\rm xx}_{\mathbf{k}}$($\omega$).
The phase is only shown if the absolute value is greater than 7\% of the maximum value at that $\omega$.
Panel (c) shows the change of the electronic bands with atomic displacements according to the $A_1^\prime$(a)\xspace (left) and $A_1^\prime$(b)\xspace~modes (right).}
\label{fig:3l_raman_bands}
\end{figure}
We start by analyzing the behavior of the Raman susceptibility for laser energies near the band gap energy.
There, the $A_1^\prime$(b)~mode has a large intensity while the $A_1^\prime$(a)\xspace~mode is practically silent.
This can be understood from Figure~\ref{fig:3l_raman_bands}c, where we show the diagonal EPC matrix elements along the high-symmetry line in the BZ.
For the $A_1^\prime$(b)~mode the conduction band states at K contribute with the same sign, while for the $A_1^\prime$(a)\xspace~mode they have opposite signs.
This is a direct consequence of the band composition at K and the way the layers vibrate (in-phase in the $A^\prime_1$(b) mode and out-of-phase in the $A^\prime_1$(a) mode, respectively; see Supporting Information).
\begin{figure}[h!]
\center
\includegraphics[width=0.4\textwidth]{figures/3l/ip/raman_interference.pdf}
\caption{
Argand plot of $\alpha_\mathbf{k}$($\omega$) for the $A_1^\prime$(a) and (b)~modes of triple-layer MoTe$_2$\xspace for laser energies $E_{\rm L}$=1.58~eV (bottom panel) and 1.96~eV (top panel).
The colors represent the position of the point in the Brillouin zone (see inset).}
\label{fig:3l_interference}
\end{figure}
The Raman intensities at higher laser energies (between 1.58 and 1.96 eV) can also be understood from the point of view of quantum interference.
For this, we represent the contributions from all $\mathbf{k}$-points in the BZ as points in the complex plane (``Argand plot'') as shown in Figure~\ref{fig:3l_interference}.
By color-encoding the $\mathbf{k}$-point location in the BZ, we can identify the regions which contribute constructively to the total Raman amplitude and those that are interfering destructively.
The overall phase of the different contributions has been fixed such that the total Raman susceptibility is real and positive (solid black line).
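This phase-fixing convention can be sketched in a few lines of code; the contributions \texttt{alpha\_k} below are random complex numbers standing in for the computed $\mathbf{k}$-point values, so all numbers are illustrative:

```python
import numpy as np

# Random stand-ins for the k-point contributions alpha_k (illustrative only).
rng = np.random.default_rng(0)
alpha_k = rng.normal(size=50) + 1j * rng.normal(size=50)

# The coherent sum over the Brillouin zone gives the total Raman susceptibility.
total = alpha_k.sum()

# Rotate every contribution by -arg(total) so that the total becomes real
# and positive; the relative phases (and hence the intensity) are unchanged.
alpha_k_fixed = alpha_k * np.exp(-1j * np.angle(total))
total_fixed = alpha_k_fixed.sum()
```

Since the rotation is a global phase, it affects neither $|\sum_\mathbf{k}\alpha_\mathbf{k}|^2$ nor the interference pattern between $\mathbf{k}$-points.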
At a laser energy of 1.58~eV, the contributions from the edge of the BZ, i.e., between K and M (purple dots), scatter concentrically around the origin and mostly cancel each other for both phonon modes.
However, the regions between K and $\Gamma$ and between M and $\Gamma$ (blue dots) build up the signal.
Since these contributions have a larger amplitude for the $A_1^\prime$(b)~mode than for the $A_1^\prime$(a)\xspace mode, the former has a larger intensity at this laser energy.
This becomes clear by looking at Figure~\ref{fig:3l_raman_bands}, where we represent the absolute value of $\alpha_\mathbf{k}(\omega)$ along the high-symmetry line in panel (a) and its phase in panel (b).
For a laser energy of 1.58~eV, there are resonant transitions between K and $\Gamma$ and at M (see arrows in panel (a)).
At these points the modulus of $\alpha_\mathbf{k}(\omega)$ is large and the phases are the same, which leads to constructive interference of the signal and an increase in the observed Raman intensity for both phonon modes.
At a laser energy of 1.96~eV, the situation is rather different.
The $\alpha_\mathbf{k}(\omega)$ contributions from the region between K and M (purple dots in the Argand plot) no longer scatter concentrically around the origin and now destructively interfere with the contributions from the K-$\Gamma$ and M-$\Gamma$ regions (blue dots).
We resolve which electronic transitions lead to these destructive interference effects by referring once more to Figure~\ref{fig:3l_raman_bands}.
The destructive contributions stem from transitions at M, which have a relative phase of $\pi/3$ (blue areas in Figure~\ref{fig:3l_raman_bands}b) while the constructive ones have relative phases between $-\pi/2$ and $-\pi$ (green, yellow, and red areas).
In the case of the~$A_1^\prime$(a)\xspace mode, the amplitude of these destructive contributions is small and hence the resulting signal is larger than the one of the $A_1^\prime$(b)~mode, for which the destructive contributions have a sizable amplitude.
From Figure~\ref{fig:3l_raman_bands}a we can verify both that the amplitude of $\alpha_\mathbf{k}(\omega)$ near the M point is larger for the $A_1^\prime$(b)~mode and that its phase is opposite to that of the constructively interfering contributions (see dashed and solid arrows in panel (b)).
The reason for the small amplitudes in the K-M region for the $A_1^\prime$(a)\xspace~mode can be deduced from Figure~\ref{fig:3l_raman_bands}c.
The diagonal EPC matrix elements for the $A_1^\prime$(a)\xspace~mode and the lowest conduction bands along the K-M direction have both positive and negative signs.
Consequently, their contribution to $\alpha_{\mathbf{k}}$($\omega$) mostly cancels out, which leads to a small contribution to the Raman susceptibility.
On the other hand, for the $A_1^\prime$(b)~mode, the different EPC matrix elements add up with the same sign and the $\mathbf{k}$-points from this region give a larger contribution.
\section{Conclusions and Outlook}
We calculated the laser energy-dependent Raman susceptibility in an \textit{ab initio} framework by taking finite differences of the dynamic dielectric susceptibility in the frozen-phonon approximation.
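The finite-difference idea can be illustrated schematically (this is not the actual first-principles workflow; the model susceptibility \texttt{chi} and all numbers are placeholders):

```python
def chi(u, omega):
    """Stand-in dielectric susceptibility for a frozen-phonon displacement u
    (a real calculation would call a first-principles code here): a single
    resonance whose gap shifts linearly with the displacement."""
    gap = 1.0 + 0.1 * u
    return 1.0 / (gap - omega - 0.05j)

def raman_susceptibility(omega, du=1e-3):
    """Central finite difference d(chi)/du at u = 0, i.e. the frozen-phonon
    approximation to the Raman susceptibility at laser energy omega."""
    return (chi(du, omega) - chi(-du, omega)) / (2.0 * du)
```

For this model the derivative is known analytically, which makes it easy to verify the finite-difference step size.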
We applied our method to study the Raman spectrum of single- and triple-layer MoTe$_2$\xspace, reproducing and explaining the experimentally observed behavior of the intensity ratio as a function of laser energy for the different $A^\prime_{1}$ phonon modes.
We demonstrated that quantum interference effects between contributions of electronic transitions from different parts of the Brillouin zone are responsible for this behavior.
We also found a correlation between the phase of these contributions and the sign of the diagonal electron-phonon coupling matrix elements.
Quantum interference effects make the direct correlation of the optical absorption spectrum with the measured and calculated Raman intensities highly non-trivial.
Additionally, we showed that symmetry arguments are not always enough to explain the counterintuitive behavior of the intensities as a function of laser photon energy as seen in the case of the $A^\prime_1$ modes of triple-layer MoTe$_2$.
Instead, a careful and detailed analysis is required to trace down which features of the electronic structure, vibrational spectra, and interplay between them are responsible for the observed behavior.
Furthermore, we showed that the proper inclusion of excitonic effects is necessary to accurately describe the experimentally observed intensity ratio of the modes as a function of laser energy.
The approach presented here offers a way to systematically analyze resonant Raman spectra.
Because of its \textit{ab initio} nature, it can be directly used to study different phonon modes of various materials in different phases.
Additionally, it can also be applied to study the temperature dependence of the Raman spectrum, as recently investigated experimentally.\cite{golasa_resonant_2017} This could be done by including the electron lifetimes and renormalization from electron-phonon coupling, as recently shown for the temperature-dependent optical absorption of MoS$_2$.\cite{molina-sanchez_temperature-dependent_2016}
\begin{acknowledgement}
We thank Etienne Lorchat for fruitful discussions.
The authors acknowledge support by the National Research Fund, Luxembourg (Projects OTPMD, RAMGRASEA, C14/MS/773152/FAST-2DMAT, and INTER/ANR/13/20/NANOTMD) and the Agence Nationale de la Recherche, France (under grant H2DH ANR-15-CE24-0016). S.B. is a member of the Institut Universitaire de France (IUF).
The simulations were done using the HPC facilities of the University of Luxembourg~\cite{VBCG_HPCS14}.
The authors declare no competing financial interests.
\end{acknowledgement}
\newpage
\begin{center}
{\Huge\textbf{Supporting Information}}
\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{section}{0}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thesection}{S\arabic{section}}
\section{Ground state properties and phonons}
Calculations of the electronic ground-state properties were done within density functional theory (DFT) in the local density approximation (LDA). Since LDA is known to underestimate the lattice parameters, we use the experimentally determined lattice constant of MoTe$_2$, $a$=3.52~\AA\cite{podberezskaya_crystal}. We chose an LDA exchange-correlation functional over more elaborate van~der~Waals functionals, as it has been shown to perform well in predicting vibrational properties of layered materials.\cite{luo_effects_2013,luo_anomalous_2013}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{figures/1l/mote2_bands}
\includegraphics[width=.45\textwidth]{figures/3l/bands_layers}
\caption{Band structures of single- and triple-layer MoTe$_2$\xspace in the LDA approximation including spin-orbit coupling.}\label{fig:bands}
\end{figure}
Figure \ref{fig:bands} shows the electronic band structure of single- (left panel) and triple-layer (right panel) MoTe$_2$\xspace.
When passing from the single- to the triple-layer case, each single-layer band splits into a triplet of bands. We represent the contributions of the different layers to the orbital composition of each band by color (red for the outer layers 1 and 3 and blue for the inner layer 2).
This decomposition should be compared to Figure~4c in the main text, where the sign of the band energy change with atomic displacements according to the A$^\prime_1$(a) and (b) phonon modes is shown.
In the case of the A$^\prime_1$(b)~mode, the three layers vibrate in phase (see inset in Figure~1b of the main text) and hence the band energies within each band triplet always change with the same sign, independent of the layer composition of the bands.
For the A$^\prime_1$(a)~mode, on the other hand, the oscillation phase of the inner layer is opposite to that of the two outer layers (see inset in Figure~1b of the main text) and, due to the different layer contributions to each band triplet member, the sign of the band-energy change varies within the triplet.
\section{Optical absorption}
We calculated the GW quasi-particle correction to the LDA eigenvalues using the
\texttt{yambo} code.\cite{marini_yambo:_2009} We used a $36\times36\times1$ sampling of the Brillouin
zone (BZ) for single- and triple-layer MoTe$_2$\xspace. We used a 40 Ry cutoff for the
plane-wave basis set, a Coulomb cutoff technique\cite{rozzi_exact_2006} to avoid spurious interactions
between the periodic copies in the z-direction and a vacuum separation of 50 and
70 Bohr for single- and triple-layer, respectively.
We calculated the GW quasiparticle corrections for the band gap and applied a scissor shift\cite{gonze_dynamical_1997} to the LDA band energies of the other bands to account for these corrections without having to compute them explicitly.
The scissor operator is kept fixed for the different atomic displacements.
This approximation has the advantage that only one calculation of the correction of the band gap
energy is needed. However, it does not account for the changes of the screening effects in the electron-phonon
interaction.
A consistent way of including these corrections is still desirable and will be the topic of future work.
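Operationally, a scissor shift amounts to rigidly raising all conduction bands by the GW correction of the gap. A minimal sketch (the band energies and occupations below are made up for illustration; only the scissor value comes from Table~\ref{tab:scissor}):

```python
import numpy as np

# Illustrative LDA band energies in eV (rows: k-points, columns: bands);
# the first n_occ bands are occupied. Values are invented for this sketch.
lda_bands = np.array([[-1.2, -0.3, 0.8, 1.5],
                      [-1.0, -0.1, 0.6, 1.3]])
n_occ = 2
scissor = 0.667  # single-layer value from Table S1 (eV)

# Rigidly shift the conduction bands upward; valence bands are untouched.
qp_bands = lda_bands.copy()
qp_bands[:, n_occ:] += scissor
```

Every direct gap is thereby widened by exactly the scissor value, while band dispersions are left unchanged.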
\begin{table}
\begin{tabular}{cc}
& scissor shift (eV)\\\hline
Single-layer & 0.667\\
Triple-layer & 0.548\\
\end{tabular}
\caption{Scissor operator for single- and triple-layer MoTe$_2$\xspace.}
\label{tab:scissor}
\end{table}
The calculation of the dielectric susceptibility including many-body effects has
been performed by solving the Bethe-Salpeter equation (BSE) with the \texttt{yambo} code.\cite{marini_yambo:_2009}
The static dielectric screening was calculated using the same vacuum separation between the layers as in the GW case.
The number of electronic transitions included to construct the BSE Hamiltonian was selected so as to include all electronic transitions within an energy window of 3~eV.
We find this criterion to be more meaningful physically and the convergence of the spectra to be more stable compared to selecting the number of valence and conduction bands separately.
Especially in the triple-layer case, we find many dispersive and crossing bands near the lower conduction and topmost valence band (see Figure~\ref{fig:bands}b), which makes it difficult to know \textit{a priori} how many bands need to be included in the calculation.
Additionally, checking the convergence with the gradual inclusion of valence and conduction bands can lead to a false convergence of the dielectric susceptibility.
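The energy-window criterion can be sketched as follows (the function name and the precise convention, i.e. keeping transitions within \texttt{window} of the lowest transition energy, are our illustrative choices):

```python
def select_transitions(bands, n_occ, window):
    """Collect valence -> conduction transitions (ik, iv, ic) whose energy
    lies within `window` of the smallest transition energy, instead of
    fixing the number of valence and conduction bands beforehand."""
    energies = []
    for ik, bk in enumerate(bands):
        for iv in range(n_occ):
            for ic in range(n_occ, len(bk)):
                energies.append((bk[ic] - bk[iv], ik, iv, ic))
    e_min = min(e for e, _, _, _ in energies)
    return [(ik, iv, ic) for e, ik, iv, ic in energies if e <= e_min + window]

# Two k-points, one valence and two conduction bands (illustrative energies in eV).
bands = [[-1.0, 0.5, 3.0], [-0.8, 0.7, 2.5]]
selected = select_transitions(bands, n_occ=1, window=2.0)
```

With such a criterion, a high-lying conduction band is automatically included at the $\mathbf{k}$-points where it dips into the window and excluded elsewhere, which is exactly what a fixed band count cannot do.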
\section{Results}
\subsection{Single-layer}
As complementary information to the main text, we represent the intensity of the individual contributions $\alpha_\mathbf{k}(\omega)$ in transition space for the two phonon modes (A$_1^\prime$ and E$^\prime$) of single-layer MoTe$_2$ in Figure~\ref{fig:1l_raman_bands}.
\begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{figures/1l/ip/bands_raman_abs.pdf}
\caption{Absolute value of the Raman susceptibility resolved along the high-symmetry $\mathbf{k}$-points $\alpha^{\mathrm{xx}}_\mathbf{k}$($\omega$) for single-layer MoTe$_2$\xspace.}
\label{fig:1l_raman_bands}
\end{figure}
It is also instructive to look at the individual contributions $\alpha_\mathbf{k}(\omega)$ over the full Brillouin zone (FBZ) as shown in Figure \ref{fig:1l_raman_bz}.
A line cut along the high-symmetry points of this is shown in Figure~3 of the main text.
However, there are additional contributions from regions away from the high-symmetry line, as shown in Figure \ref{fig:1l_raman_bz}.
In all cases, we consider the contributions for incoming and outgoing light polarized along the x-direction.
This leads to a breaking of some symmetries of the lattice and to the emergence of two inequivalent M and K points.
We choose to represent the contributions along the high-symmetry line represented in Figure~\ref{fig:1l_raman_bz} to
simplify the analysis without compromising the main conclusions.
In the case of the $E^\prime$~mode the phonon was chosen to be polarized along the x-direction.
\begin{figure}[h!]
\center
\includegraphics[width=0.8\textwidth]{figures/1l/ramanbz.pdf}
\caption{
$\mathbf{k}$-point-resolved contributions to the absorption spectrum $\chi^{\rm xx}_\mathbf{k}$ (left panel) and Raman susceptibility $\alpha^{\mathrm{xx}}_\mathbf{k}$($\omega$) (right panel) for single-layer MoTe$_2$ across the BZ for two different laser energies used in experiment. The E$^\prime$~mode was chosen to be polarized in the y-direction (compare Raman tensors in Table~1 of the main text).
}
\label{fig:1l_raman_bz}
\end{figure}
\subsection{Triple-layer}
For the triple-layer case, we represent Im$\{\chi^{\rm xx}_\mathbf{k}(\omega)\}$ along the high-symmetry line in Figure~\ref{fig:3l_chi}.
We additionally show the individual contributions $\alpha_\mathbf{k}(\omega)$ on the full BZ in Figure~\ref{fig:3l_raman_bz} for the two energies (1.58~eV and 1.96~eV) used in our experiments.
A line cut through this figure along the high-symmetry points is shown in Figure~4a of the main text.
Similarly to the $A_1^\prime$ mode in the single-layer case, the symmetry is broken along the x-direction.
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{figures/3l/ip/bands_chi.pdf}
\caption{
IP absorption Im$\{\chi_\mathbf{k}(\omega)\}$ represented in transition space along the high-symmetry points in the Brillouin zone for triple-layer MoTe$_2$\xspace.
}
\label{fig:3l_chi}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.8\textwidth]{figures/3l/ramanbz}
\caption{
$\mathbf{k}$-point-resolved contributions to the absorption spectrum $\chi^{\rm xx}_\mathbf{k}$ (left panel) and Raman susceptibility $\alpha^{\mathrm{xx}}_\mathbf{k}$($\omega$) (right panel) for triple-layer MoTe$_2$ across the BZ for two different laser energies used in experiment.
} \label{fig:3l_raman_bz}
\end{figure}
\section{Direct and interference terms}
We performed calculations with and without the interference terms at the IP level.
The omission of the interference terms leads to Raman intensities that are orders of magnitude smaller than those obtained by including them.
This is consistent with the fact that the calculation of the interference terms involves two integrations over the Brillouin zone compared to only one integration for the ``direct'' terms (see Equation 8 in the main text). Thus their weight compared to the ``direct'' terms is in general much larger.
Ignoring the interference terms leads to the absence of the observed intensity inversion of the Davydov multiplet of the A$_1^\prime$ modes.
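The decomposition behind this comparison can be made concrete in a few lines: the full intensity $|\sum_\mathbf{k}\alpha_\mathbf{k}|^2$ splits into a ``direct'' part $\sum_\mathbf{k}|\alpha_\mathbf{k}|^2$ plus the interference cross terms (the values below are random stand-ins for the computed contributions):

```python
import numpy as np

# Random stand-ins for the k-point contributions (illustrative only).
rng = np.random.default_rng(1)
alpha_k = rng.normal(size=40) + 1j * rng.normal(size=40)

intensity = abs(alpha_k.sum()) ** 2       # full coherent sum over the BZ
direct = (np.abs(alpha_k) ** 2).sum()     # "direct" terms: one BZ integration
interference = intensity - direct         # cross terms: two BZ integrations
```

The cross-term part can be negative and of comparable magnitude to the direct part, which is why dropping it changes the spectra qualitatively.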
\begin{figure}
\center
\includegraphics[width=0.6\textwidth]{figures/3l/interference.pdf}
\caption{
Relative contributions of the ``direct'' (dashed line) and ``interference+direct'' (solid line) terms to the total Raman susceptibility for triple-layer MoTe$_2$\xspace. The distinction between direct and interference terms is explained in Equation 8 in the main text.
} \label{fig:3l_raman_interference}
\end{figure}
\section{Normalization procedure for the experimental data}
To quantitatively compare Raman susceptibilities recorded at different laser photon energies, one has to carefully normalize the Raman spectra. Indeed, the spectra may not be acquired under the exact same conditions (e.g., different integration time, laser intensity,\dots) and the detection efficiency of the experimental setup may also be different. To get rid of all these dependencies, one can normalize the measured Raman intensities to the integrated intensity of a close-lying and well-known Raman feature. We chose the Raman mode of the bulk silicon substrate at around $\sim 520~{\rm cm}^{-1}$, which has been very well documented, for instance in Ref.\cite{Renucci1975}. Furthermore, one also has to take into account the dependence of the measured Raman intensity on the laser photon energy as well as optical interference effects.
Consequently, the normalized Raman intensity of a given Raman mode X is given by
\begin{equation}
\left.\frac{I_{\rm X}}{I_{\rm Si}}\right|_{\rm normalized}(E_{\rm L})=\left(\frac{E_{\rm Si}}{E_{\rm X}}\right)^3 \frac{F_{\rm Si}(E_{\rm L},E_{\rm Si})}{F_{\rm X}(E_{\rm L},E_{\rm X})}C_{\rm Si}(E_{\rm L})\left.\frac{I_{\rm X}}{I_{\rm Si}}\right|_{\rm measured}(E_{\rm L},E_{\rm X},E_{\rm Si}), \label{eq_norm_X_Si}
\end{equation}
where $E_{\rm L}$ is the incoming laser photon energy, $C_{\rm Si}$ is a coefficient that takes into account the resonance effect in the Si mode intensity as shown in Ref.\cite{Renucci1975}, $I_{\rm X}$ and $I_{\rm Si}$ are the integrated intensities of the X and Si modes, $E_{\rm X}$ and $E_{\rm Si}$ are the energies of the Raman-scattered photons contributing to the X and Si modes, and $F_{\rm X}$ and $F_{\rm Si}$ are the enhancement factors for the X and Si modes in the [Si/SiO$_2$/single- or triple-layer MoTe$_2$/air] layered system, respectively. Note that after applying Eq.~\eqref{eq_norm_X_Si}, the integrated intensity ratio $\left.\frac{I_{\rm X}}{I_{\rm Si}}\right|_{\rm normalized}$ only depends on $E_{\rm L}$. Let us also note that the $\left(\frac{E_{\rm Si}}{E_{\rm X}}\right)^3$ term stems from the photon energy dependence of the Raman-scattered energy flux ($\propto E^4$) and from the fact that our detector, a charge-coupled device (CCD) array, measures a signal proportional to the number of incoming photons, not to the energy flux.
In the range of energies studied here, the coefficient $C_{\rm Si}$ is directly deduced from Figure~6 in Ref.\cite{Renucci1975}. The enhancement factors are obtained following Yoon \textit{et al.}\cite{Yoon2009} and Soubelet \textit{et al.}\cite{soubelet_resonance_2016}. To obtain reliable enhancement factors, we have first carefully estimated the refractive index of few-layer MoTe$_2$\xspace from the measurement of the intensity of the Si Raman mode in a [Si/SiO$_2$/$N$-layer MoTe$_2$\xspace/air] layered system as a function of the number of layers $N$, similarly to Zhang \textit{et al.}\cite{Zhang2015e}. Second, to accurately estimate the measured Raman signal from the Si substrate, we have considered the semi-transparency of bulk Si and the fact that we use a confocal Raman setup. Indeed, since bulk Si absorbs strongly in the visible range, the Si thickness that contributes to the Raman signal is much smaller than the Rayleigh length of our focused laser beam and the assumption of a semi-infinite Si layer is valid. However, bulk Si becomes quasi-transparent in the near-infrared region and a Si thickness on the order of the Rayleigh length contributes to the Raman signal. Therefore assuming that the Raman signal stems from a semi-infinite Si layer would lead to strong overestimation of the Si Raman signal\cite{soubelet_resonance_2016}.
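Equation~\eqref{eq_norm_X_Si} translates directly into a one-line function; all inputs (enhancement factors, $C_{\rm Si}$) must be supplied from the procedures described above, and the numbers used in the checks are placeholders:

```python
def normalized_ratio(measured_ratio, E_X, E_Si, F_X, F_Si, C_Si):
    """Normalized integrated-intensity ratio I_X/I_Si of Eq. (S1).
    measured_ratio is the raw I_X/I_Si; E_X and E_Si are the scattered-photon
    energies; F_X and F_Si the enhancement factors; C_Si the Si resonance
    coefficient. All inputs are externally determined quantities."""
    return (E_Si / E_X) ** 3 * (F_Si / F_X) * C_Si * measured_ratio
```

When all correction factors are unity the measured ratio passes through unchanged, and the $(E_{\rm Si}/E_{\rm X})^3$ prefactor dominates when the two scattered-photon energies differ.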
Finally, in order to obtain a quantity proportional to the square modulus of the Raman susceptibility $\left|{\alpha}\right|^2$ (see comparison between experimental and theoretical values in Figure 2 in the main text), we have also considered the distinct frequencies and occupation numbers of the $E^\prime$\xspace and $A_1^\prime$\xspace phonon modes (see Eq.1 in the main text).
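The occupation factor entering this comparison is the standard Bose-Einstein factor; a small helper (with the second radiation constant $hc/k_{\rm B}\approx 1.4388$~cm\,K hard-coded; the function name is our own) could look like:

```python
import math

def stokes_occupation(omega_cm, T=293.0):
    """Stokes weight n(omega) + 1 for a phonon of frequency omega_cm
    (in cm^-1) at temperature T (in K); used when comparing |alpha|^2
    for modes of different frequency (cf. Eq. 1 of the main text)."""
    x = 1.4388 * omega_cm / T  # hbar*omega / (k_B*T), with hc/k_B in cm*K
    return 1.0 / math.expm1(x) + 1.0
```

Low-frequency modes thus receive a larger thermal weight than high-frequency ones, which must be divided out before comparing Raman susceptibilities.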
\providecommand{\latin}[1]{#1}
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{49}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Geim and Grigorieva(2013)Geim, and Grigorieva]{geim_van_2013}
Geim,~A.~K.; Grigorieva,~I.~V. \emph{Nature} \textbf{2013}, \emph{499},
419--425\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cardona and G\"untherodt(1982)Cardona, and
G\"untherodt]{cardona_light_scattering_solids_II}
Cardona,~M.; G\"untherodt,~G. \emph{Light scattering in solids {II}: basic
concepts and instrumentation}; Springer-Verlag, 1982\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Loudon(1963)]{loudon_theory_1963}
Loudon,~R. \emph{Proceedings of the Royal Society of London A: Mathematical,
Physical and Engineering Sciences} \textbf{1963}, \emph{275}, 218--232\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Soubelet \latin{et~al.}(2016)Soubelet, Bruchhausen, Fainstein,
Nogajewski, and Faugeras]{soubelet_resonance_2016}
Soubelet,~P.; Bruchhausen,~A.~E.; Fainstein,~A.; Nogajewski,~K.; Faugeras,~C.
\emph{Physical Review B} \textbf{2016}, \emph{93}, 155407\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(2016)Kim, Lee, Nam, and Cheong]{kim_davydov_2016}
Kim,~K.; Lee,~J.-U.; Nam,~D.; Cheong,~H. \emph{ACS Nano} \textbf{2016},
\emph{10}, 8113--8120\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lee \latin{et~al.}(2015)Lee, Park, Son, and
Cheong]{lee_anomalous_2015}
Lee,~J.-U.; Park,~J.; Son,~Y.-W.; Cheong,~H. \emph{Nanoscale} \textbf{2015},
\emph{7}, 3229--3236\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Staiger \latin{et~al.}(2015)Staiger, Gillen, Scheuschner, Ochedowski,
Kampmann, Schleberger, Thomsen, and Maultzsch]{staiger_splitting_2015}
Staiger,~M.; Gillen,~R.; Scheuschner,~N.; Ochedowski,~O.; Kampmann,~F.;
Schleberger,~M.; Thomsen,~C.; Maultzsch,~J. \emph{Physical Review B}
\textbf{2015}, \emph{91}, 195419\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Carvalho \latin{et~al.}(2015)Carvalho, Malard, Alves, Fantini, and
Pimenta]{carvalho2015}
Carvalho,~B.~R.; Malard,~L.~M.; Alves,~J.~M.; Fantini,~C.; Pimenta,~M.~A.
\emph{Physical Review Letters} \textbf{2015}, \emph{114}, 136403\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Scheuschner \latin{et~al.}(2015)Scheuschner, Gillen, Staiger, and
Maultzsch]{scheuschner_interlayer_2015}
Scheuschner,~N.; Gillen,~R.; Staiger,~M.; Maultzsch,~J. \emph{Physical Review
B} \textbf{2015}, \emph{91}, 235409\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[del Corro \latin{et~al.}(2016)del Corro, Botello-M\'endez, Gillet,
Elias, Terrones, Feng, Fantini, Rhodes, Pradhan, Balicas, Gonze, Charlier,
Terrones, and Pimenta]{del_corro_atypical_2016}
del Corro,~E.; Botello-M\'endez,~A.; Gillet,~Y.; Elias,~A.~L.; Terrones,~H.;
Feng,~S.; Fantini,~C.; Rhodes,~D.; Pradhan,~N.; Balicas,~L. \latin{et~al.}
\emph{Nano Letters} \textbf{2016}, \emph{16}, 2363--2368\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Yamamoto \latin{et~al.}(2014)Yamamoto, Wang, Ni, Lin, Li, Aikawa,
Jian, Ueno, Wakabayashi, and Tsukagoshi]{yamamoto_strong_2014}
Yamamoto,~M.; Wang,~S.~T.; Ni,~M.; Lin,~Y.-F.; Li,~S.-L.; Aikawa,~S.;
Jian,~W.-B.; Ueno,~K.; Wakabayashi,~K.; Tsukagoshi,~K. \emph{ACS Nano}
\textbf{2014}, \emph{8}, 3895--3903\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ruppert \latin{et~al.}(2014)Ruppert, Aslan, and
Heinz]{ruppert_optical_2014}
Ruppert,~C.; Aslan,~O.~B.; Heinz,~T.~F. \emph{Nano Letters} \textbf{2014},
\emph{14}, 6231--6236\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Froehlicher \latin{et~al.}(2015)Froehlicher, Lorchat, Fernique, Joshi,
Molina-S\'anchez, Wirtz, and Berciaud]{froehlicher2015}
Froehlicher,~G.; Lorchat,~E.; Fernique,~F.; Joshi,~C.; Molina-S\'anchez,~A.;
Wirtz,~L.; Berciaud,~S. \emph{Nano Letters} \textbf{2015}, \emph{15},
6481--6489\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Grzeszczyk \latin{et~al.}(2016)Grzeszczyk, Go\l{}asa, Zinkiewicz,
Nogajewski, Molas, Potemski, Wysmo\l{}ek, and
Babi\'nski]{grzeszczyk_raman_2016}
Grzeszczyk,~M.; Go\l{}asa,~K.; Zinkiewicz,~M.; Nogajewski,~K.; Molas,~M.~R.;
Potemski,~M.; Wysmo\l{}ek,~A.; Babi\'nski,~A. \emph{2D Materials}
\textbf{2016}, \emph{3}, 025010\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Song \latin{et~al.}(2016)Song, Tan, Zhang, Wu, Sheng, Wan, Wang, Dai,
and Tan]{song_physical_2016}
Song,~Q.~J.; Tan,~Q.~H.; Zhang,~X.; Wu,~J.~B.; Sheng,~B.~W.; Wan,~Y.;
Wang,~X.~Q.; Dai,~L.; Tan,~P.~H. \emph{Physical Review B} \textbf{2016},
\emph{93}, 115409\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Guo \latin{et~al.}(2015)Guo, Yang, Yamamoto, Zhou, Ishikawa, Ueno,
Tsukagoshi, Zhang, Dresselhaus, and Saito]{guo_double_2015}
Guo,~H.; Yang,~T.; Yamamoto,~M.; Zhou,~L.; Ishikawa,~R.; Ueno,~K.;
Tsukagoshi,~K.; Zhang,~Z.; Dresselhaus,~M.~S.; Saito,~R. \emph{Physical
Review B} \textbf{2015}, \emph{91}, 205415\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Go\l{}asa \latin{et~al.}(2017)Go\l{}asa, Grzeszczyk, Molas,
Zinkiewicz, Bala, Nogajewski, Potemski, Wysmo\l{}ek, and
Babi\'nski]{golasa_resonant_2017}
Go\l{}asa,~K.; Grzeszczyk,~M.; Molas,~M.~R.; Zinkiewicz,~M.; Bala,~L.;
Nogajewski,~K.; Potemski,~M.; Wysmo\l{}ek,~A.; Babi\'nski,~A.
\emph{Nanophotonics} \textbf{2017}, \emph{0}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Froehlicher \latin{et~al.}(2016)Froehlicher, Lorchat, and
Berciaud]{froehlicher_direct_2016}
Froehlicher,~G.; Lorchat,~E.; Berciaud,~S. \emph{Physical Review B}
\textbf{2016}, \emph{94}, 085429\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Luo \latin{et~al.}(2013)Luo, Zhao, Zhang, Xiong, and
Quek]{luo_anomalous_2013}
Luo,~X.; Zhao,~Y.; Zhang,~J.; Xiong,~Q.; Quek,~S.~Y. \emph{Physical Review B}
\textbf{2013}, \emph{88}, 075320\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Luo \latin{et~al.}(2013)Luo, Zhao, Zhang, Toh, Kloc, Xiong, and
Quek]{luo_effects_2013}
Luo,~X.; Zhao,~Y.; Zhang,~J.; Toh,~M.; Kloc,~C.; Xiong,~Q.; Quek,~S.~Y.
\emph{Physical Review B} \textbf{2013}, \emph{88}, 195313\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Umari \latin{et~al.}(2001)Umari, Pasquarello, and
Dal~Corso]{umari_raman_2001}
Umari,~P.; Pasquarello,~A.; Dal~Corso,~A. \emph{Physical Review B}
\textbf{2001}, \emph{63}, 094305\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cantarero \latin{et~al.}(1989)Cantarero, Trallero-Giner, and
Cardona]{cantarero_excitons_1989}
Cantarero,~A.; Trallero-Giner,~C.; Cardona,~M. \emph{Physical Review B}
\textbf{1989}, \emph{39}, 8388--8397\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
Heterogeneous material structures appear frequently in physics, chemistry, mechanics, the life sciences, and engineering, and very often one is led to consider structures with a very fine
and complicated microstructure. Phenomena like heat conduction or transport in such structures are typically modeled by mathematical systems such as ordinary differential equations (ODEs) or partial differential equations (PDEs), where the presence of the fine microscopic scale is reflected in rapid oscillations of the coefficients. This situation can in general not be treated directly; even when it is feasible, the numerical methods employed to solve the problem require a very fine degree of resolution so that the mesh can capture the oscillations, which is computationally very costly, and in some situations the solution remains out of reach despite mesh refinement.
In the present work we consider the convergence of the eigenvalue problem
\vspace{-0.2cm}
\begin{equation*}
\mathbf{H}_hu_h=\lambda_hu_h
\end{equation*}
for two different operators $\mathbf{H}_h$ in suitable Hilbert spaces. We first consider $\mathbf{H}_h=-{\rm div}(\text{\textbf{A}}_h(x)\nabla)$ defined on $L^2(\Omega)$ with domain $H^1_0(\Omega)$, where $\textbf{A}_h(x)$ is an admissible coefficient matrix, and $\Omega$ is an open bounded subset of $\mathbb{R}^N$, $N\geq1$. Then we consider another operator $\mathbf{H}_h=\mathbf{H}_0+V_h$ defined on $L^2(\Omega)$, $\Omega$ as before, where $\mathbf{H}_0$ is a positive definite bounded self-adjoint operator and $V_h$ is a positive bounded Hermitian multiplicative perturbation. We are interested in the behavior of the operator $\mathbf{H}_h$ as the parameter $h\to\infty$, in particular in the asymptotic behavior of the point spectrum (the eigenvalues).
We will use classical operator and variational convergence theory. G-convergence theory is well known for its applications in homogenization of partial differential equations. The concept was introduced in the late 1960's \cite{DEGS,SPA67,SPA68,SPA75} for linear elliptic and parabolic problems with symmetric coefficient matrices, and was then extended to non-symmetric coefficient matrices \cite{MUR,TAR1,TAR2,TAR3} under the name H-convergence. The definition was later generalized to positive definite self-adjoint operators \cite{DAL}. Since then, many valuable results have been obtained for elliptic and hyperbolic problems. In \cite{BRAC,CHDA,CHDE} G-convergence of monotone operators is proved, in \cite{SVA99,SVA00,SVA05} G-convergence of nonlinear parabolic operators is studied, and the theory of G-convergence of differential operators in general is treated in \cite{ZKO,ZKOk}. Throughout this paper we will use the name G-convergence for the case of non-symmetric coefficient matrices as well.
The study of G-convergence of operators is often associated with the study of the asymptotic behavior of the corresponding quadratic forms in the calculus of variations, via the notion of $\Gamma$-convergence, which was introduced in the mid 1970's \cite{DEGF}. Here we utilize and combine the two concepts in order to prove G-compactness for the operator $\mathbf{H}_h=\mathbf{H}_0+V_h$.
For the operator $\mathbf{H}_h=-{\rm div}(\text{\textbf{A}}_h(x)\nabla)$, the coefficient matrix $\text{\textbf{A}}_h$ is positive definite and bounded, so by the G-compactness criterion for elliptic operators, $\mathbf{H}_h$ has a G-limit as $h\to\infty$. The operator $\mathbf{H}_h=\mathbf{H}_0+V_h$ is positive definite, bounded, and self-adjoint; using $\Gamma$-convergence of its associated quadratic form and the relation between G-convergence and $\Gamma$-convergence, we prove that $\mathbf{H}_h$ admits a G-limit as $h\to\infty$. Under suitable assumptions on the coefficient matrix $\textbf{A}_h(x)$ and on the perturbation $V_h$ we characterize the G-limits, and consequently we prove the convergence of the corresponding eigenvalues.\\
The paper is arranged as follows: In Section 2 we provide the reader with basic preliminaries on G-convergence and $\Gamma$-convergence. In Section 3 we revisit G-convergence of elliptic operators, and study the convergence properties of the corresponding eigenvalue problems. In Section 4 we prove the G-limit of the operator $\mathbf{H}_h=\mathbf{H}_0+V_h$.
\section{Preliminaries}
In what follows $\Omega$ will be an open bounded subset of $\mathbb{R}^N$, $N\geq1$; further, the notations $\rightharpoonup$ and $\ast\negthickspace\rightharpoonup$ will denote weak and weak$^\ast$ convergence respectively. The domain of an operator is denoted by $\mathbf{D}$. Also $c$ and $C$ will denote real constants that might be different at each occurrence and are independent of all parameters, unless otherwise explicitly specified. Scalar products and norms are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ respectively, where the norms $\|\cdot\|$ will be given indices to distinguish between them, while $\langle\cdot,\cdot\rangle$ is left without indices, its current definition being obvious from the context.
\subsection{G-convergence}
For comprehensive material on G-convergence we refer to e.g. \cite{DEF,OLS,SIL}, and for the general setting of positive definite self-adjoint operators to the monograph \cite{DAL}. Below we state two definitions of G-convergence: the one for elliptic operators and the general one for positive definite self-adjoint operators.\\
Consider two positive real numbers $\alpha$ and $\beta$ such that $0<\alpha\leq\beta<\infty$, and define the following set of matrices\\
$
S(\alpha,\beta,\Omega)=\{\text{\textbf{A}}\in L^\infty(\Omega)^{N\times N}\,;\; (\text{\textbf{A}}(x,\xi),\xi)\geq\alpha |\xi|^2\;\text{and}\; |\text{\textbf{A}}(x,\xi)|\leq\beta|\xi|\,,\; \forall\xi\in\mathbb{R}^N$ and a.e.\ $\;x\in\Omega\}\,.$\\
We shall define G-convergence for the following sequence of elliptic Dirichlet boundary value problems
\begin{equation}\label{16}
\left\{ \begin{array}{l}
-{\rm div}(\text{\textbf{A}}_h(x,Du_h)) = f \mbox{ in }\Omega,\\
u_h\in H_0^1(\Omega).
\end{array} \right.
\end{equation}
\begin{Def}\emph{ The sequence $\text{\textbf{A}}_h$ in $S(\alpha,\beta,\Omega)$
is said to be $G$-convergent to $\text{\textbf{A}}\in S(\alpha,\beta,\Omega)$, denoted as $\text{\textbf{A}}_h\xrightarrow{\;\,\text{\tiny G}\;\,}\text{\textbf{A}}$, if for every $f\in H^{-1}(\Omega)$, the sequence $u_h$ of solutions of (\ref{16}) satisfies
$$
\left. \begin{array}{l}
u_{h} \rightharpoonup {u}\;\mbox{ in }\;H_0^1(\Omega), \\
\text{\textbf{A}}_{h}(\cdot,Du_{h})\;\rightharpoonup {\text{\textbf{A}}(\cdot,Du)}\;\mbox{ in }
\;[L^2(\Omega)]^N,
\end{array} \right.
$$
where $u$ is the unique solution of the problem
\begin{equation}\label{17}
\left\{ \begin{array}{l}
-{\rm div}(\text{\textbf{A}}(x,Du)) = f \mbox{ in }\Omega,\\
u\in H_0^1(\Omega).
\end{array} \right.
\end{equation}
}
\end{Def}
In the sequel we will only consider the case of linear coefficients matrix $\text{\textbf{A}}_h$, i.e., from now on $\text{\textbf{A}}_h(x,\xi) =\text{\textbf{A}}_h(x)\xi$.
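\noindent\emph{Example.} A classical one-dimensional computation from periodic homogenization (see e.g. \cite{MUR}) illustrates the definition. Let $\Omega=(0,1)$ and $\text{\textbf{A}}_h(x)=a(hx)$, where $a$ is a measurable $1$-periodic function with $\alpha\leq a(y)\leq\beta$. Then
$$
\text{\textbf{A}}_h\xrightarrow{\;\,\text{\tiny G}\;\,}\bar{a}:=\Big(\int_0^1\frac{dy}{a(y)}\Big)^{-1},
$$
the constant harmonic mean of $a$. Note that $a(h\,\cdot)\;{\ast\negthickspace\rightharpoonup}\;\int_0^1a(y)\,dy$ in $L^\infty(0,1)$, the arithmetic mean, so the G-limit of the coefficients is in general not their weak$^\ast$ limit.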
Here are some results that will be used later. They are given without proofs; for the proofs we refer to \cite{DEF,MUR}.
\begin{Theo} G-compactness Theorem.
\emph{
For every sequence $\text{\textbf{A}}_h$ in $S(\alpha,\beta,\Omega)$ there exists a subsequence, still denoted by $\text{\textbf{A}}_h$, and a map $\text{\textbf{A}}\in S(\alpha,\beta,\Omega)$, such that
$\text{\textbf{A}}_h\xrightarrow{\;\,\text{\tiny G}\;\,}\text{\textbf{A}}$.
}
\end{Theo}
\begin{Theo} Uniqueness and Locality of G-limit.
\emph{
\begin{itemize}
\item [$(i)$] $\text{\textbf{A}}_h$ has at most one G-limit.
\item [$(ii)$] If $\text{\textbf{A}}_h=\tilde{\text{\textbf{A}}}_h$ on $\omega\subset\subset\Omega$ and $\text{\textbf{A}}_h\xrightarrow{\;\,\text{\tiny G}\;\,}\text{\textbf{A}}$ and $\tilde{\text{\textbf{A}}}_h \xrightarrow{\;\,\text{\tiny G}\;\,}\tilde{\text{\textbf{A}}}$, then $\text{\textbf{A}}=\tilde{\text{\textbf{A}}}$ on $\omega$.
\end{itemize}
}
\end{Theo}
\begin{Theo}
\emph{
If $\text{\textbf{A}}_h\xrightarrow{\;\,\text{\tiny G}\;\,}\text{\textbf{A}}$, then $\text{\textbf{A}}_h^t\xrightarrow{\;\,\text{\tiny G}\;\,}\text{\textbf{A}}^t$.
}
\end{Theo}
Let $\mathcal{Y}$ be a Hilbert space. Below we provide the general definition of G-convergence; first we set out some useful definitions.
\begin{Def}
\emph{
A function $F:\mathcal{Y}\to[0,\infty]$ is said to be lower semi-continuous ($lsc$) at $u\in\mathcal{Y}$, if $$F(u)\leq \sup_{U\in\mathcal{N}(u)}\inf_{v\in U}F(v)\, ,$$ where $\mathcal{N}(u)$ is the set of all open neighborhoods of $u$ in $\mathcal{Y}$.
}
\end{Def}
As a consequence of the above definition we have the following
\begin{itemize}
\item [$(i)$] The inequality in the above definition can be replaced by equality due to the fact that $F(u)\geq\inf \{F(v), v\in U\},\;\; \forall U\!\in\mathcal{N}(u)$.
\item [$(ii)$] $F$ is $lsc$ on $\mathcal{Y}$, if it is so at each $u\in\mathcal{Y}$.
\end{itemize}
\begin{Def}
\emph{
A function $F$ in $\mathcal{Y}$ is called a quadratic form if there exists a linear dense subspace $\mathcal{X}$ of $\mathcal{Y}$
and a symmetric bilinear form $B:\mathcal{X}\times \mathcal{X}\to[0,\infty)$ such that
$$
F(u)=\left\{ \begin{array}{ll}
B(u,u)\, ,&\forall u\in \mathcal{X}\, ,\\
\infty\, ,&\forall u\in\mathcal{Y}\backslash \mathcal{X}.
\end{array}\right.
$$
}
\end{Def}
Let $F$ and $B$ be as in the definition above, where $\mathbf{D}(F)=\{u\in\mathcal{Y}\,;\;F(u)<\infty\}$. The
operator associated to $F$ is the linear operator $A$ on $\overline{\mathbf{D}(F)}$ with domain being the set of
all $u\in \mathbf{D}(F)$ such that there exists $v\in \overline{\mathbf{D}(F)}$ satisfying $B(u,f)=\langle v,f\rangle,\;\;\forall f\in \mathbf{D}(F)$ and $Au=v$, $\;\forall u\in \mathbf{D}(A)$. If $f=u$
then $F(u)=\langle Au,u\rangle,\;\;\forall u\in \mathbf{D}(A)$.
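\noindent\emph{Example.} As a standard illustration, take $\mathcal{Y}=L^2(\Omega)$, $\mathcal{X}=H_0^1(\Omega)$ and $B(u,v)=\langle\nabla u,\nabla v\rangle$. The operator associated to the resulting quadratic form is the Dirichlet Laplacian $Au=-\Delta u$ with domain $\mathbf{D}(A)=\{u\in H_0^1(\Omega)\,;\;\Delta u\in L^2(\Omega)\}$; indeed, integration by parts gives $B(u,f)=\langle-\Delta u,f\rangle$ for all $f\in H_0^1(\Omega)$, and in particular $F(u)=\langle Au,u\rangle$ on $\mathbf{D}(A)$.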
For $\lambda\geq0$ we denote the following
\begin{itemize}
\item [$(1)$] By $\widetilde{\mathcal{Q}}_\lambda (\mathcal{Y})$ we denote the class of quadratic forms $F:\mathcal{Y}\to[0,\infty]$
such that $F(u)\geq\lambda ||u||^2_\mathcal{Y}$, and by $\mathcal{Q}_\lambda (\mathcal{Y})$ the subset of $\widetilde{\mathcal{Q}}_\lambda (\mathcal{Y})$
whose elements are $lsc$.
\item [$(2)$] By $\mathcal{P}_\lambda (\mathcal{Y})$ we denote the class of self-adjoint operators $A$ on a closed linear
subspace $\mathscr{V}=\overline{\mathbf{D}(A)}$ of $\mathcal{Y}$ such that $\langle Au,u\rangle\geq\lambda ||u||^2_\mathcal{Y},\;\; \forall u\in \mathbf{D}(A)$.
\end{itemize}
\begin{Def}
\emph{
Let $\lambda\geq0$, and let $A_h\in\mathcal{P}_\lambda(\mathcal{Y})$. If $\lambda>0$, we say that $A_h\xrightarrow[]{\;\,\text{\tiny G}\;\,}A\in\mathcal{P}_\lambda(\mathcal{Y})$ in $\mathcal{Y}$ if $A_h^{-1}P_hu\to A^{-1}P u$ in $\mathcal{Y}$, $\forall u\in \mathcal{Y}$, where $P_h$ and $P$ are the orthogonal projections onto $\mathscr{V}_h:=\overline{\mathbf{D}(A_h)}$ and $\mathscr{V}:=\overline{\mathbf{D}(A)}$ respectively. If $\lambda=0$, we say that $A_h\in\mathcal{P}_0(\mathcal{Y})$ converges to $A\in\mathcal{P}_0(\mathcal{Y})$ in the strong resolvent sense (SRS) if $(\mu I+A_h)\xrightarrow[]{\;\,\text{\tiny G}\;\,}(\mu I+A)$ in $\mathcal{Y}$, $\forall\mu>0$.
}
\end{Def}
The following result provides a useful criterion for G-convergence of positive definite self-adjoint operators. See \cite{DAL} for the proof.
\begin{Lem}
\emph{
Given $\lambda>0$, $A_h\in\mathcal{P}_\lambda(\mathcal{Y})$, and an orthogonal projection $P_h$ onto $\mathscr{V}_h$. Suppose that for every $u\in\mathcal{Y}$, $A_h^{-1}P_hu$ converges in $\mathcal{Y}$, then there exists an operator $A\in\mathcal{P}_\lambda(\mathcal{Y})$ such that $A_h\xrightarrow{\;\,\text{\tiny G}\;\,}A$ in $\mathcal{Y}$.
}
\end{Lem}
\subsection{$\Gamma$-convergence}
For comprehensive introductions to $\Gamma$-convergence we refer to the monographs \cite{BRA, DAL}.\\
Let $\mathcal{Y}$ be a topological space, and let $F_h$ be a sequence
of functionals from $\mathcal{Y}$ to $\overline{\mathbb{R}}$.
\begin{Def}\emph{ A sequence of functionals $F_h:\mathcal{Y}\to \overline{\mathbb{R}}$ is said to be $\Gamma$-convergent to $F:\mathcal{Y}\to \overline{\mathbb{R}}$, written as $F(u)=\Gamma-\displaystyle\lim_{h\to\infty}F_h(u)$ and denoted by $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$ if
$$
F(u)=\Gamma-\displaystyle\liminf_{h\to\infty}F_h(u)=\Gamma-\displaystyle\limsup_{h\to\infty} F_h(u)\,,
$$
where $\Gamma-\displaystyle\liminf_{h\to\infty}$ and $\Gamma-\displaystyle\limsup_{h\to\infty}$ are the $\Gamma$-lower and $\Gamma$-upper limits respectively defined by
$$
F^i(u):=\Gamma-\liminf_{h\to\infty} F_h(u)=\sup_{U\in{\mathcal N}(u)}\liminf_{h\to\infty}
\inf_{v\in U}F_h(v)
$$
and
$$
F^s(u):=\Gamma-\limsup_{h\to\infty} F_h(u)=\sup_{U\in{\mathcal N}(u)}\limsup_{h\to\infty}
\inf_{v\in U}F_h(v).
$$
}
\end{Def}
By Definition 5, it is obvious that the sequence $F_h$ $\Gamma$-converges to $F$ if and only if $F^s\leq F \leq F^i$, this means that $\Gamma$-convergence and lower semi-continuity are closely related concepts. If in addition $\mathcal{Y}$ satisfies the first axiom of countability (the neighborhood system of every point in $\mathcal{Y}$ has a countable base), then $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$ in $\mathcal{Y}$ if and only if the following two conditions are satisfied
\begin{itemize}
\item [$(i)$] $\forall u\in \mathcal{Y}$ and $\forall u_h$ converging to $u$,
$F(u)\leq\displaystyle\liminf_{h\to\infty} F_h(u_h)$.
\item [$(ii)$] $\forall u\in \mathcal{Y}$, $\exists u_h$ converging to $u$ such that
$F(u)=\displaystyle\lim_{h\to\infty} F_h(u_h)$.
\end{itemize}
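\noindent\emph{Example.} A standard example showing that the $\Gamma$-limit may differ from any pointwise limit is $\mathcal{Y}=\mathbb{R}$ and $F_h(u)=\sin(hu)$. For every $u\in\mathbb{R}$ and every open neighborhood $U$ of $u$, the rescaled set $hU$ eventually contains a full period of the sine, so $\inf_{v\in U}\sin(hv)\to-1$ as $h\to\infty$. Hence $F^i(u)=F^s(u)=-1$ for all $u$, i.e., $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}-1$, the constant function $-1$.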
\begin{Rem}
\emph{
The following are some useful properties of $\Gamma$-convergence
\begin{itemize}
\item [(1)] A constant sequence of functionals $F_h=f$ does not necessarily $\Gamma$-converge to $f$, but to the relaxation of $f$, the largest $lsc$ functional below $f$. This is due to the fact that $f$ might not be $lsc$.
\item [(2)] The $\Gamma$-limit is always $lsc$.
\item [(3)] $\Gamma$-convergence is stable under continuous perturbation, i.e., if $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$ in $\mathcal{Y}$ and $G:\mathcal{Y}\to[0,\infty]$ is continuous, then $F_h+G\xrightarrow{\;\,{\tiny \Gamma}\;\,}F+G$.
\item [(4)] The $\Gamma$-limit of a non-negative quadratic form is also a non-negative quadratic form.
\end{itemize}
}
\end{Rem}
$\Gamma$-convergence possesses the compactness property, that is, if $\mathcal{Y}$ is a separable metric space, then every sequence of functionals $F_h:\mathcal{Y}\to \overline{\mathbb{R}}$ has a $\Gamma$-convergent subsequence.\\
The following theorem is the cornerstone of the relation between $\Gamma$-convergence of quadratic forms of the class $\mathcal{Q}_\lambda(\mathcal{Y})$ (respectively $\mathcal{Q}_0(\mathcal{Y})$) and G-convergence of the associated operators of the class $\mathcal{P}_\lambda(\mathcal{Y})$ for $\lambda>0$ (respectively strong resolvent convergence of the associated operators of the class $\mathcal{P}_0(\mathcal{Y})$). For the proof of this theorem we refer to \cite{BRA, DAL}.
\begin{Theo}
\emph{
Let $\lambda>0$ be a real number, $F_h$ and $F$ be elements of $\mathcal{Q}_0(\mathcal{Y})$, and let $A_h\,,\,A\in\mathcal{P}_0(\mathcal{Y})$ be the associated operators respectively. Then the following are equivalent
\begin{itemize}
\item [(a)] $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$.
\item [(b)] $(F_h+\lambda||\cdot||^2_\mathcal{Y})\xrightarrow[]{\;\,{\tiny \Gamma}\;\,}(F+\lambda||\cdot||^2_\mathcal{Y})$.
\item [(c)] $(A_h+\lambda I)\xrightarrow{\;\,{\tiny G}\;\,}(A+\lambda I)$.
\item [(d)] $A_h\to A$ in the SRS.
\end{itemize}
Also if $F_h\,,\,F\in\mathcal{Q}_\mu(\mathcal{Y})$ for $\mu>0$, and $A_h\,,\,A\in\mathcal{P}_\mu(\mathcal{Y})$ are the associated operators respectively, then the following are equivalent
\begin{itemize}
\item [(e)] $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$.
\item [(f)] $A_h\xrightarrow{\;\,{\tiny G}\;\,}A$.
\end{itemize}
}
\end{Theo}
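\noindent\emph{Example.} The equivalence (e)$\Leftrightarrow$(f) can be illustrated by the classical periodic example (the computation is standard in the homogenization literature). On $\mathcal{Y}=L^2(0,1)$, let $a$ be measurable, $1$-periodic with $\alpha\leq a\leq\beta$, and set
$$
F_h(u)=\left\{ \begin{array}{ll}
\displaystyle\int_0^1 a(hx)|u'(x)|^2\,dx\,, & u\in H_0^1(0,1)\,,\\
\infty\,, & \text{otherwise}.
\end{array}\right.
$$
By the Poincar\'e inequality, $F_h\in\mathcal{Q}_\mu(\mathcal{Y})$ for some $\mu>0$. One has $F_h\xrightarrow{\;\,{\tiny \Gamma}\;\,}F$ in $L^2(0,1)$, where $F$ is the analogous form with $a(hx)$ replaced by the constant harmonic mean $\bar{a}=\big(\int_0^1 a(y)^{-1}dy\big)^{-1}$; equivalently, by Theorem 4, the associated operators $-\frac{d}{dx}\big(a(hx)\frac{d}{dx}\big)$ G-converge to $-\bar{a}\,\frac{d^2}{dx^2}$. In particular, the $\Gamma$-limit is not obtained by replacing $a(hx)$ under the integral by its weak$^\ast$ limit, the arithmetic mean.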
\section{G-convergence of elliptic operators}
In this section we review some basic results on G-convergence of elliptic operators with source function $f_h$, the main task being the discussion of eigenvalue problems ($f_h=\lambda_hu_h$). Before proceeding, some time is devoted to studying the Dirichlet boundary value problem with an $h$-dependent source function, which turns out to be useful in setting up the results for the corresponding eigenvalue problem. The following two lemmas are useful in proving the homogenization results; we refer to \cite{MUR} for the proofs.
\begin{Lem}
\emph{
Let $\xi_h\in [L^2(\Omega)]^N$ be weakly convergent to $\xi$ in $[L^2(\Omega)]^N$, and $u_h\in H^1(\Omega)$ weakly convergent to $u$ in $H^1(\Omega)$, if
$$
{\rm div}(\xi_h)\rightarrow {\rm div}(\xi)\;\;\text{in}\; H^{-1}(\Omega)\, ,
$$
then
$$
\langle\xi_h,Du_h\rangle\;\ast\negthickspace\rightharpoonup \langle\xi,Du\rangle\;\;\text{in}\;D^\star(\Omega)\, ,
$$
where $D^\star(\Omega)$ is the space of distributions, i.e., the dual of $D(\Omega)=C^\infty_0(\Omega)$.
}
\end{Lem}
\begin{Lem}
\emph{
Let $\text{\textbf{A}}_h\in S(\alpha,\beta,\Omega)$, and assume that $u_h$ and $v_h$ in $H^1(\Omega)$ are weakly convergent to $u$ and $v$ in $H^1(\Omega)$ respectively, and such that
$$
\xi_h=\text{\textbf{A}}_h\nabla u_h\rightharpoonup \xi\;\;\text{in}\; [L^2(\Omega)]^N\, .
$$
$$
{\rm div}(\xi_h)\rightarrow {\rm div}(\xi)\;\;\text{in}\; H^{-1}(\Omega)\, .
$$
$$
\zeta_h=\text{\textbf{A}}_h^t\nabla v_h\rightharpoonup \zeta\;\;\text{in}\; [L^2(\Omega)]^N\, .
$$
$$
{\rm div}(\zeta_h)\rightarrow {\rm div}(\zeta)\;\;\text{in}\; H^{-1}(\Omega)\, .
$$
then
$$
\langle\xi,\nabla v\rangle=\langle\nabla u,\zeta\rangle\;\;\text{a.e in}\; \Omega.
$$
}
\end{Lem}
The main homogenization results for the linear elliptic eigenvalue problem are stated in the following theorem.
\begin{Theo}
\emph{
Consider the linear elliptic eigenvalue problem
\begin{equation}\label{28}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}_h(x)\nabla u_h^k) =\lambda_h^k u_h^k& \text{in}\; \Omega\, , \\
u_h^k\in H^1_0(\Omega)\, ,
\end{array} \right.
\end{equation}
where $\text{\textbf{A}}_h\in S(\alpha,\beta,\Omega)$ is symmetric and positive definite. Then the sequences of eigenvalues $\lambda_h^k$ and the corresponding eigenfunctions $u_h^k$ of (\ref{28}) converge to $\lambda^k$ in $\mathbb{R}$ and weakly to $u^k$ in $H_0^1(\Omega)$ respectively, where the eigencouple $\{\lambda^k,u^k\}$ is the solution to the G-limit problem
\begin{equation}\label{29}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}(x)\nabla u^k) =\lambda^k u^k& \text{in}\; \Omega\, , \\
u^k\in H^1_0(\Omega).
\end{array} \right.
\end{equation}
}
\end{Theo}
\begin{Rem}
\emph{
For equation (\ref{28}) the following are well-known facts
\begin{itemize}
\item [$(i)$] $0<\lambda_h^1\leq\lambda_h^2\leq\lambda_h^3\leq\cdots < \infty$.
\item [$(ii)$] The multiplicity of $\lambda_h^k$ is finite.
\item [$(iii)$] The sequence $u_h^k$ forms an orthonormal basis for $L^2(\Omega)$.
\end{itemize}
}
\end{Rem}
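\noindent\emph{Example.} In the one-dimensional periodic case the limit problem (\ref{29}) can be written down explicitly, using the classical formula for the G-limit of periodic coefficients. Let $\Omega=(0,1)$ and $\text{\textbf{A}}_h(x)=a(hx)$ with $a$ measurable, $1$-periodic and $\alpha\leq a\leq\beta$. Then $\text{\textbf{A}}$ is the constant harmonic mean $\bar{a}=\big(\int_0^1 a(y)^{-1}dy\big)^{-1}$, and (\ref{29}) reads $-\bar{a}\,(u^k)''=\lambda^k u^k$ in $(0,1)$, $u^k\in H_0^1(0,1)$, whose eigencouples are $\lambda^k=\bar{a}\,(k\pi)^2$ and $u^k(x)=\sqrt{2}\sin(k\pi x)$. Thus Theorem 5 yields $\lambda_h^k\to\bar{a}\,(k\pi)^2$ for every $k$.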
Before proving Theorem 5, we state and prove the following theorem for the elliptic boundary value problem with source function $f_h$.
\begin{Theo}
\emph{
For the Dirichlet boundary value problem
\begin{equation}\label{34}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}_h(x)\nabla u_h) =f_h & \text{in}\; \Omega\, , \\
u_h\in H_0^1(\Omega),
\end{array} \right.
\end{equation}
if $\text{\textbf{A}}_h\in S(\alpha,\beta,\Omega)$ and if $f_h$ converges in $H^{-1}(\Omega)$ to $f$, then the sequence $u_h$ of solutions to (\ref{34}) is weakly convergent in $H_0^1(\Omega)$ to the solution of
\begin{equation}\label{34_2}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}(x)\nabla u) =f & \text{in}\; \Omega\, , \\
u\in H_0^1(\Omega),
\end{array} \right.
\end{equation}
where $\text{\textbf{A}}$ is the G-limit of $\text{\textbf{A}}_h$.
}
\end{Theo}
\hspace{-4mm}\emph{\underline{Proof}}. The weak form of (\ref{34}) is to find $u_h\in H_0^1(\Omega)$ such that $\forall v\in H_0^1(\Omega)$
\begin{equation}\label{35}
a_h(u_h,v)=\langle f_h,v\rangle,
\end{equation}
where $a_h(u_h,v)=\langle \text{\textbf{A}}_h \nabla u_h,\nabla v\rangle$. Since $\text{\textbf{A}}_h\in S(\alpha,\beta,\Omega)$, we have the following a priori estimate
$$
\alpha||u_h||_{H_0^1(\Omega)}^2\leq a_h(u_h,u_h)=\langle f_h,u_h\rangle\leq c||f_h||_{H^{-1}(\Omega)}||u_h||_{H_0^1(\Omega)}\, ,
$$
hence
\begin{equation}\label{36}
||u_h||_{H_0^1(\Omega)}\leq \frac{C}{\alpha}.
\end{equation}
By (\ref{36}) and the upper bound of $\text{\textbf{A}}_h$
\begin{equation}\label{37}
||\text{\textbf{A}}_h\nabla u_h||_{L^2(\Omega)}\leq C\frac{\beta}{\alpha}.
\end{equation}
So both $u_h$ and $\text{\textbf{A}}_h\nabla u_h$ are bounded sequences in $H_0^1(\Omega)$ and $[L^2(\Omega)]^N$ respectively, therefore, up to subsequences still denoted by $u_h$ and $\text{\textbf{A}}_h\nabla u_h$
\begin{equation}\label{38}
u_h\rightharpoonup u\;\;\text{in}\; H_0^1(\Omega)
\end{equation}
and
\begin{equation}\label{39}
\text{\textbf{A}}_h\nabla u_h\rightharpoonup \mathcal{M}\;\;\text{in}\; [L^2(\Omega)]^N\, .
\end{equation}
\textbf{Claim}: we argue that $\mathcal{M}=\text{\textbf{A}}\nabla u$, where $\text{\textbf{A}}$ is the G-limit of $\text{\textbf{A}}_h$ (the existence and uniqueness of $\text{\textbf{A}}$ is guaranteed by virtue of Theorems 1 and 2).\\
\underline{\emph{Proof of the claim}}. By (\ref{39}) it holds that
\begin{equation}\label{40}
-{\rm div}(\text{\textbf{A}}_h\nabla u_h)\rightharpoonup -{\rm div}(\mathcal{M})\;\;\text{in}\; H^{-1}(\Omega)\,,
\end{equation}
which means that $\forall v\in H_0^1(\Omega)$
\begin{equation}\label{41}
\lim_{h\to\infty}\langle-{\rm div}(\text{\textbf{A}}_h\nabla u_h)\, ,\, v\rangle = \langle-{\rm div}(\mathcal{M})\, ,\, v\rangle.
\end{equation}
Since $f_h$ converges to $f$ in $H^{-1}(\Omega)$ and by (\ref{34}),
\begin{equation}\label{42}
\lim_{h\to\infty}\langle-{\rm div}(\text{\textbf{A}}_h\nabla u_h)\, ,\, v\rangle = \lim_{h\to\infty}\langle f_h\, ,\, v\rangle= \langle f\, ,\, v\rangle.
\end{equation}
By the uniqueness of weak limit, together with (\ref{41}) and (\ref{42}) we get
\begin{equation}\label{43}
-{\rm div}(\mathcal{M})=f\, .
\end{equation}
Since $\text{\textbf{A}}_h\xrightarrow{\;\,{\tiny G}\;\,}\text{\textbf{A}}$, by Theorem 3 it is also true that $\text{\textbf{A}}_h^t\xrightarrow{\;\,{\tiny G}\;\,}\text{\textbf{A}}^t$. Consider now
\begin{equation}\label{44}
\langle\text{\textbf{A}}_h\nabla u_h,\nabla v_h\rangle=\langle\nabla u_h,\text{\textbf{A}}_h^t\nabla v_h\rangle
\end{equation}
for a sequence $v_h\in H_0^1(\Omega)$ converging weakly to $v$ in $H_0^1(\Omega)$. The limit passage of (\ref{44}) together with Lemma 3 gives
\begin{equation}\label{45}
\langle\mathcal{M},\nabla v\rangle=\langle\nabla u,\text{\textbf{A}}^t\nabla v\rangle\, ,
\end{equation}
hence
\begin{equation}\label{46}
\langle\mathcal{M},\nabla v\rangle=\langle\text{\textbf{A}}\nabla u,\nabla v\rangle\, .
\end{equation}
Take $\omega\subset\subset\Omega$ and choose $v_h$ so that its weak limit $v$ satisfies $\nabla v=z\in \mathbb{R}^N$ on $\omega$; then (\ref{46}) can be written on $\omega$ as
\begin{equation}\label{47}
\langle\mathcal{M}-\text{\textbf{A}}\nabla u,z\rangle=0,
\end{equation}
consequently, since $z\in\mathbb{R}^N$ and $\omega\subset\subset\Omega$ are arbitrary, we have
\begin{equation}\label{48}
\mathcal{M}-\text{\textbf{A}}\nabla u=0\, ,
\end{equation}
which completes the proof of the claim.
By virtue of (\ref{43}) and (\ref{48}), $u_h$ converges weakly to $u$, where $u$ is the solution of the homogenized equation
\begin{equation}\label{49}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}(x)\nabla u) =f & \text{in}\; \Omega\, , \\
u\in H^1_0(\Omega)\, ,
\end{array} \right.
\end{equation}
and where $\text{\textbf{A}}(x)$ is the G-limit of $\text{\textbf{A}}_h(x)$.\hfill{$\blacksquare$}\\
By the uniqueness of the solution $u$ to (\ref{49}), one can drop the subsequence assumption and conclude that the whole sequence converges to $u$ (any other subsequence of $u_h$ has to converge to $u$ by the uniqueness of the solution, thus the entire sequence converges to the same limit as all its subsequences).
Now we give the proof of the main result of this section: Theorem 5.\\
\emph{\underline{Proof of Theorem 5}}.
By virtue of Theorem 6, it suffices to prove
\begin{equation}\label{51}
\lambda_h^k\rightarrow \lambda^k\;\;\text{in}\; \mathbb{R}\, ,
\end{equation}
\begin{equation}\label{52}
u_h^k\rightarrow u^k\;\;\text{in}\; L^2(\Omega)\, .
\end{equation}
Indeed, in Theorem 6 we may set $f_h\!=\!\lambda_h^k u_h^k$, which, by (\ref{51}) and (\ref{52}), converges to $f=\lambda^k u^k$ in $L^2(\Omega)$ for every $k$, as the following estimate shows:
\begin{eqnarray*}
\begin{array}{ll}
\|f_h-f\|_{L^2(\Omega)}&\!\!\!=\|\lambda_h^k u_h^k-\lambda^k u_h^k+\lambda^k u_h^k-\lambda^k u^k\|_{L^2(\Omega)}\\
&\!\!\!\leq |\lambda_h^k -\lambda^k|\,\| u_h^k\|_{L^2(\Omega)}+|\lambda^k|\,\| u_h^k- u^k\|_{L^2(\Omega)}\\
&\!\!\!\rightarrow 0.
\end{array}
\end{eqnarray*}
Hence if (\ref{51}) and (\ref{52}) are satisfied, then by Theorem 6 the eigencouple $\{\lambda^k,u^k\}$ is the solution to the homogenized eigenvalue problem
\begin{equation}\label{53}
\left\{ \begin{array}{ll}
-{\rm div}(\text{\textbf{A}}(x)\nabla u^k) =\lambda^k u^k& \text{in}\; \Omega\, , \\
\{\lambda^k,u^k\}\in \mathbb{R}\times H_0^1(\Omega)\, .
\end{array} \right.
\end{equation}
Note that by Remark 2 part $(i)$, the sequence $\lambda_h^k$ is bounded in $\mathbb{R}$ for all $k$, so a subsequence, still denoted by $\lambda_h^k$, can be extracted such that
\begin{equation}\label{57}
\lambda_h^k\rightarrow \lambda^k\;\;\text{in}\;\mathbb{R}\, .
\end{equation}
Also, since $\text{\textbf{A}}_h(x)\in S(\alpha,\beta,\Omega)$ we have
\begin{equation}\label{57_2}
\alpha\|u_h^k\|^2_{H_0^1(\Omega)}\leq a_h(u_h^k,u_h^k)=\langle \text{\textbf{A}}_h\nabla u_h^k,\nabla u_h^k\rangle= \lambda_h^k \|u_h^k\|^2_{L^2(\Omega)},
\end{equation}
which implies $\|u_h^k\|_{H_0^1(\Omega)}\leq C$ for all $k$. Thus, up to a subsequence still denoted by $u_h^k$,
\begin{equation}\label{57_3}
u_h^k \rightharpoonup u^k\;\;\text{in}\;H_0^1(\Omega).
\end{equation}
Hence (\ref{52}) is justified for a subsequence by Rellich-Kondrachov compactness theorem. The subsequence assumptions can be dropped by the uniqueness of limits, therefore the entire sequences $\lambda_h^k$ and $u_h^k$ converge to $\lambda^k$ and $u^k$ in $\mathbb{R}$ and $L^2(\Omega)$ respectively.\hfill{$\blacksquare$}
\section{G-convergence of positive definite bounded self-adjoint operators}
Let $\mathbf{H}_0$ be a positive definite bounded self-adjoint operator defined on $L^2(\Omega)$ with domain $H^1_0(\Omega)$. Assume that $V_h(x)$, $x\in\Omega$, is a positive bounded real-valued perturbation. In this section we discuss the asymptotic limit of the eigenvalue problem $\mathbf{H}_hu_h=(\mathbf{H}_0+V_h)u_h=\lambda_hu_h$ as the parameter $h$ tends to infinity. We utilize the general definition of G-convergence of positive definite self-adjoint operators, together with the $\Gamma$-convergence of the associated quadratic forms, to characterize the G-limit of $\mathbf{H}_h$ and to discuss the asymptotic limit of the corresponding eigenvalue problem.
The following theorem is a general setting for the relation between the eigenvalue problems of an operator and its G-limit in the class $\mathcal{P}_\lambda (\mathcal{Y})$ for $\lambda>0$. Here we consider general Hilbert spaces $\mathcal{X}$ and $\mathcal{Y}$.
\begin{Theo}
\emph{
Let $\lambda>0$, let $A_h$ be a sequence in $\mathcal{P}_\lambda(\mathcal{Y})$ G-converging to $A\in\mathcal{P}_\lambda(\mathcal{Y})$,
and let $\{\mu_h,u_h\}$ be the solution of the eigenvalue problem $A_hu_h=\mu_hu_h$. If
$\{\mu_h,u_h\}\rightarrow \{\mu,u\}$ in $\mathbb{R}\times\mathcal{Y}$, then the limit couple $\{\mu,u\}$ is the solution of the eigenvalue problem $Au=\mu u$.
}
\end{Theo}
\hspace{-4mm}\underline{\emph{Proof}}. See \cite{ALM}.\hfill{$\blacksquare$}\\
In contrast to uniform resolvent convergence (uniform convergence), strong resolvent convergence (strong convergence) does not imply convergence of the spectrum. At most the following holds: if a sequence $A_h$ converges in the SRS (or strongly) to $A$, then every $\mu\in\sigma(A)$ is the limit of a sequence $\mu_h$ with $\mu_h \in\sigma(A_h)$, but the limit of a convergent sequence $\mu_h\in\sigma(A_h)$ need not lie in the spectrum of $A$, see, e.g., \cite{WEI97}. The theorem below provides conditions under which G-convergence of operators in $\mathcal{P}_\lambda(\mathcal{Y})$ for $\lambda>0$ (hence convergence in the SRS of operators of the class $\mathcal{P}_0(\mathcal{Y})$) implies the convergence of the corresponding eigenvalues.
\begin{Theo}
\emph{
Let $\mathcal{X}$ be compactly and densely embedded in $\mathcal{Y}$, and let $A_h$ be a family of operators in $\mathcal{P}_\lambda(\mathcal{Y})$,
$\lambda>0$, with domain $\mathcal{X}$. If $A_h$ G-converges to $A\in\mathcal{P}_\lambda(\mathcal{Y})$, then $\mathcal{K}_h:=A_h^{-1}$ converges in the norm of $\mathcal{B}(\mathcal{Y})$ ($\mathcal{B}(\mathcal{Y})$ is the set of bounded linear operators on $\mathcal{Y}$) to $\mathcal{K}:=A^{-1}$. Moreover the $k^{th}$ eigenvalue $\mu_h^k$ of $A_h$ converges to the $k^{th}$ eigenvalue $\mu^k$ of $A$, $\forall k\in \mathbb{N}$.
}
\end{Theo}
\hspace{-4mm}\underline{\emph{Proof}}. See \cite{ALM}.\hfill{$\blacksquare$}\\
The following lemma provides sufficient conditions for which $\Gamma$-convergence and pointwise convergence are equivalent.
\begin{Lem}
\emph{
Let $\mathcal{Y}$ be a normed vector space and let $F_h$ be a sequence of convex functions on $\mathcal{Y}$. Suppose that $F_h$ is equi-bounded in a neighborhood of $u\in\mathcal{Y}$ (i.e., there exists $U\in\mathcal{N}(u)$ such that $|F_h(v)|\leq C$ for every $v\in U$ and all $h$), then
$$
F^i(u)=\displaystyle\liminf_{h\to\infty}F_h(u),\quad\text{and}\quad F^s(u)=\displaystyle\limsup_{h\to\infty}F_h(u).
$$
}
\end{Lem}
For the operators $\mathbf{H}_h=\mathbf{H}_0+V_h$ and $\mathbf{H}=\mathbf{H}_0+V$ we define respectively the corresponding quadratic forms
\begin{equation*}
F_h(u)=\left\{ \begin{array}{ll}
\langle \mathbf{H}_hu,u\rangle\,,& u\in H^1_0(\Omega) \, , \\
\infty\,,& u\in L^2(\Omega)\backslash H^1_0(\Omega)\, ,
\end{array} \right.
\end{equation*}
and
\begin{equation*}
F(u)=\left\{ \begin{array}{ll}
\langle \mathbf{H}u,u\rangle\,,& u\in H^1_0(\Omega) \, , \\
\infty\,,& u\in L^2(\Omega)\backslash H^1_0(\Omega)\, .
\end{array} \right.
\end{equation*}
\begin{Theo}
\emph{
Let $V_h$ be a sequence in $L^\infty(\Omega)$ that converges weakly$^\ast$ to $V$. Then $\mathbf{H}_h$ G-converges to $\mathbf{H}=\mathbf{H}_0+V$.
}
\end{Theo}
\hspace{-4mm}\underline{\emph{Proof}}. By Theorem 4, it suffices to prove that the quadratic form $F_h(u)$ associated with $\mathbf{H}_h$ $\Gamma$-converges to the quadratic form $F(u)$ associated with $\mathbf{H}$. The pointwise convergence of the quadratic forms is clear: since $|u|^2\in L^1(\Omega)$ for every $u\in L^2(\Omega)$, the weak$^\ast$ convergence of $V_h$ to $V$ in $L^\infty(\Omega)$ gives
\begin{eqnarray*}
\begin{array}{ll}
\displaystyle\lim_{h\to\infty}F_h(u)&\!\!\!=\displaystyle\lim_{h\to\infty} \Big(\langle \mathbf{H}_0u,u\rangle+\langle V_hu,u\rangle\Big)\\
&\!\!\!=\displaystyle\langle \mathbf{H}_0u,u\rangle+\langle Vu,u\rangle\\
&\!\!\!=F(u)\,.
\end{array}
\end{eqnarray*}
By Lemma 4, the pointwise convergence of the convex quadratic forms implies their $\Gamma$-convergence, which concludes the proof.\hfill{$\blacksquare$}\\
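The hypothesis of the theorem can be checked numerically in a simple case (an added illustration with parameters of our choosing, not taken from the text): $V_h(x)=\sin(2\pi hx)$ converges weakly$^\ast$ to $0$ in $L^\infty(0,1)$, that is, its pairing with any fixed $L^1$ density vanishes as $h$ grows.

```python
import math

# V_h(x) = sin(2*pi*h*x) tends to 0 weakly* in L^inf(0,1): its integral
# against a fixed integrable density goes to 0 as h -> infinity.
def pair(h, g, n=20000):
    """Midpoint-rule approximation of the pairing <V_h, g>."""
    s = 1.0 / n
    return s * sum(math.sin(2 * math.pi * h * (i + 0.5) * s) * g((i + 0.5) * s)
                   for i in range(n))

g = lambda x: x   # a fixed test density; the exact pairing is -1/(2*pi*h)
print(abs(pair(4, g)), abs(pair(64, g)))   # the second value is much smaller
```

The decay rate $1/(2\pi h)$ visible here is the standard Riemann--Lebesgue behavior underlying the weak$^\ast$ convergence used in the proof.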
The following lemma proves the continuity of $F_h$ in $ H^1_0(\Omega)$.
\begin{Lem}
\emph{
$F_h(u)$ is continuous in $H^1_0(\Omega)$.
}
\end{Lem}
\hspace{-4mm}\underline{\emph{Proof}}. Let $u,v\in H^1_0(\Omega)$ be such that $\|u-v\|_{H^1_0(\Omega)}<\varepsilon$, then
\begin{eqnarray*}
\begin{array}{ll}
\vspace{1mm}
\hspace{-1.5cm}\big|F_h(u)-F_h(v)\big|\!\!\!&=\Big|\langle \mathbf{H}_hu,u\rangle - \langle \mathbf{H}_hv,v\rangle\Big|\\
&\leq\Big|\langle \mathbf{H}_h(u+v),(u-v)\rangle\Big|\\
&\leq\|\mathbf{H}_h(u+v)\|_{L^2(\Omega)}\,\|u-v\|_{L^2(\Omega)}\\
&\leq \varepsilon\|\mathbf{H}_h(u+v)\|_{L^2(\Omega)}.
\end{array}
\end{eqnarray*}
Since $\|\mathbf{H}_h(u+v)\|_{L^2(\Omega)}$ is bounded ($\mathbf{H}_h$ being bounded), the last bound tends to zero as $\varepsilon\to0$, and the lemma is proved.\hfill{$\blacksquare$}\\
The following theorem proves and characterizes the G-limit of $\mathbf{H}_h$ for another class of potentials $V_h$.
\begin{Theo}
\emph{
If $V_h$ is a weakly convergent sequence in $L^p(\Omega)$, $2\leq p<\infty$, with a weak limit denoted by $V$, then $\mathbf{H}_h$ G-converges to $\mathbf{H}=\mathbf{H}_0+V$.
}
\end{Theo}
\hspace{-4mm}\underline{\emph{Proof}}. Let $F_h$ and $F$ be the quadratic forms corresponding to $\mathbf{H}_h$ and $\mathbf{H}$ respectively. Following Theorem 4, proving that $\mathbf{H}_h$ G-converges to $\mathbf{H}$ is equivalent to showing that $F_h$ $\Gamma$-converges to $F$. To this end, we consider the quadratic form $F_h$ of $\mathbf{H}_h$,
\begin{equation*}
F_h(u)=\left\{ \begin{array}{ll}
\langle \mathbf{H}_hu,u\rangle\,,& u\in H^1_0(\Omega) \, , \\
\infty\,,& u\in L^2(\Omega)\backslash H^1_0(\Omega)\, .
\end{array} \right.
\end{equation*}
By the definition of $\Gamma$-convergence, proving that $F$ is the $\Gamma$-limit of $F_h$ amounts to justifying the following two conditions
\begin{itemize}
\item [$(i)$] $\liminf$-inequality: For every $u\in L^2(\Omega)$, and for every sequence $u_h$ converging to $u$ in $L^2(\Omega)$, $F(u)\leq\displaystyle\liminf_{h\to\infty} F_h(u_h)$.
\item [$(ii)$] $\lim$-equality: For every $u\in L^2(\Omega)$, there exists a sequence $u_h$ converging to $u$ in $L^2(\Omega)$ such that
$F(u)=\displaystyle\lim_{h\to\infty} F_h(u_h)$.
\end{itemize}
To prove the $\liminf$-inequality we assume that $u_h\in H^1_0(\Omega)$. Otherwise the proof is obvious. By the continuity of $F_h$ in $H^1_0(\Omega)$ and since piecewise affine functions are dense in $H^1_0(\Omega)$, it suffices to prove the inequality for this class of functions (the same holds true for the $\lim$-equality).
Let $\Omega=\cup_{j=1}^m \Omega^{j}$ where $\Omega^{j}$ are disjoint sets, and let $u_h$ be linear in each $\Omega^{j}$ converging in $L^2(\Omega)$ to $u=\displaystyle\sum_{j=1}^m (a_jx+b_j)\chi_{\Omega^{j}}$, where $a_j$ and $b_j$ are elements of $\mathbb{C}^3$ and the product $a_jx$ is understood to be componentwise. Consider now $F_h$ with the sequence $u_h$,
\begin{equation}\label{83}
F_h(u_h)=\langle\mathbf{H}_0u_h,u_h\rangle + \langle V_hu_h,u_h\rangle\,.
\end{equation}
Since $u_h\to u$ in $L^2(\Omega)$,
\begin{equation}\label{84}
\langle\mathbf{H}_0u,u\rangle=\|\mathbf{H}^{1/2}_0u\|^2_{L^2(\Omega)}\leq\displaystyle \liminf_{h\to\infty} \|\mathbf{H}^{1/2}_0u_h\|^2_{L^2(\Omega)}=\displaystyle \liminf_{h\to\infty}\langle\mathbf{H}_0u_h,u_h \rangle\,.
\end{equation}
Hence
\begin{equation}\label{85}
\displaystyle\liminf_{h\to\infty}F_h(u_h)\geq \langle\mathbf{H}_0u,u\rangle\,+\,\displaystyle\liminf_{h\to\infty}\int_{\Omega}V_h|u_h|^2\,dx\,.
\end{equation}
For the last term of (\ref{85})
\begin{eqnarray}\label{2_2_2}
\begin{array}{lll}
\displaystyle\liminf_{h\to\infty}\displaystyle \int_{\Omega}V_h|u_h|^2\,dx & \!\!\!=&\!\!\!\!\!\!\!\!\!\displaystyle\liminf_{h\to\infty} \displaystyle\int_{\Omega} V_h|u+u_h-u|^2\,dx\\
&\!\!\!\geq&\!\!\!\!\!\!\!\displaystyle\liminf_{h\to\infty}\displaystyle \int_{\Omega}\!\!V_h|u|^2\,dx+\displaystyle \liminf_{h\to\infty}\displaystyle \int_{\Omega}\!\!V_h\,u^*\,(u_h-u)\,dx+\\
&\;+&\!\!\!\displaystyle\liminf_{h\to\infty}\displaystyle \int_{\Omega}\!\!V_h\,u\,(u_h-u)^*\,dx.
\end{array}
\end{eqnarray}
The symbol $*$ in (\ref{2_2_2}) denotes the complex conjugate. The first term on the right-hand side of (\ref{2_2_2}) converges to $\displaystyle\int_{\Omega}V|u|^2\,dx$ by the weak convergence of $V_h$ to $V$ in $L^p(\Omega)$, $2\leq p<\infty$, since $|u|^2\in L^{p'}(\Omega)$ for the bounded piecewise affine function~$u$. Moreover, $V_hu$ is bounded in $L^2(\Omega)$ (as $\Omega$ is bounded, $L^p(\Omega)\subseteq L^2(\Omega)$ for $p\geq2$), so, since $u_h\to u$ in $L^2(\Omega)$, the second and third terms on the right-hand side of (\ref{2_2_2}) vanish as $h\to\infty$. Thus we obtain the $\liminf$-inequality, namely
\begin{equation}\label{86}
\displaystyle\liminf_{h\to\infty}F_h(u_h)\geq \langle\mathbf{H}_0u,u\rangle\,+\,\langle Vu,u\rangle=F(u).
\end{equation}
To prove the $\lim$-equality for some convergent sequence, again by the continuity argument it is enough to justify the equality for a piecewise affine sequence. So consider $u_h=u=(ax+b)\chi_{\Omega}$, then
\begin{equation*}
\begin{array}{ll}
\displaystyle\lim_{h\to\infty}F_h(u_h)&\!\!\!= \langle\mathbf{H}_0u,u\rangle\,+\,\displaystyle \lim_{h\to\infty}\langle V_hu,u\rangle\\
&\!\!\!=\langle\mathbf{H}_0u,u\rangle\,+\,\langle Vu,u\rangle\,,
\end{array}
\end{equation*}
where the last limit follows from the weak convergence of $V_h$ in $L^p(\Omega)$ together with the boundedness of $\Omega$ and the linearity of $u$, which ensure $|u|^2\in L^{p'}(\Omega)$.\hfill{$\blacksquare$}\\
By Theorem 8, the eigenvalues of the operator $\mathbf{H}_h$ converge to the eigenvalues of the G-limit operator $\mathbf{H}$ for those types of potentials considered in the last two theorems. Also employing Theorem 7, the eigenvalue problem $\mathbf{H}_hu_h^k=\lambda_h^ku_h^k$ converges to the limit problem $\mathbf{H}u^k=\lambda^ku^k$ for all $k\in\mathbb{N}$.
As a consequence of G-convergence, if $E^{\mathbf{H}_h}$ and $E^\mathbf{H}$ are the spectral measures of $\mathbf{H}_h$ and $\mathbf{H}$ respectively, then
\begin{equation*}
E^{\mathbf{H}_h}(\lambda)\to\displaystyle E^{\mathbf{H}}(\lambda)\;\,\text{strongly},\;\,\text{for all }\lambda\in\mathbb{R}\;\,\text{such that } E^{\mathbf{H}}(\{\lambda\})=0\, ,
\end{equation*}
that is, for all $\lambda$ that are not eigenvalues of $\mathbf{H}$.
For the convergence of the associated unitary groups: if $\mathcal{U}^{\mathbf{H}_h}(t)$ and $\mathcal{U}^\mathbf{H}(t)$ are the unitary groups generated by $\mathbf{H}_h$ and $\mathbf{H}$ respectively, then $\mathcal{U}^{\mathbf{H}_h}(t)\to \mathcal{U}^\mathbf{H}(t)$ strongly for all $t\in \mathbb{R}^+$.
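These convergence statements can be observed numerically in a simple model (our own sketch; the operator, potential, and discretization below are chosen for illustration and are not taken from the text): with $\mathbf{H}_0=-d^2/dx^2$ on $(0,1)$ under Dirichlet conditions and the positive oscillating potential $V_h(x)=1+\sin(2\pi hx)$, which converges weakly$^\ast$ to $V=1$, the smallest eigenvalue of $\mathbf{H}_h$ approaches $\pi^2+1$, the smallest eigenvalue of $\mathbf{H}=\mathbf{H}_0+1$.

```python
import math

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system (Thomas algorithm)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def smallest_eig(h_osc, n=2000, iters=60):
    """Smallest eigenvalue of -u'' + (1 + sin(2 pi h x)) u on (0,1) with
    Dirichlet conditions, via inverse power iteration on the standard
    finite-difference matrix."""
    step = 1.0 / n
    diag = [2.0 / step ** 2 + 1.0
            + math.sin(2 * math.pi * h_osc * (i + 1) * step)
            for i in range(n - 1)]
    off = [-1.0 / step ** 2] * (n - 1)   # constant sub-/super-diagonal
    v = [1.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        w = thomas(off, diag, off, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient <A v, v> for the normalized iterate
        Av = [diag[i] * v[i]
              + (off[i] * v[i - 1] if i > 0 else 0.0)
              + (off[i] * v[i + 1] if i < n - 2 else 0.0)
              for i in range(n - 1)]
        lam = sum(a * b for a, b in zip(Av, v))
    return lam

print(abs(smallest_eig(100) - (math.pi ** 2 + 1)) < 0.05)   # True
```

The rapidly oscillating part of the potential contributes nothing in the limit, exactly as the weak$^\ast$ limit $V=1$ predicts for the G-limit operator.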
\section{Introduction}
\thispagestyle{empty}
The \textsc{DAG Partitioning}{} problem was introduced by \citet{LBK09} in order to analyze
how short, distinctive phrases (typically, parts or mutations of
quotations, also called \emph{memes}) spread to various news sites and
blogs. To demonstrate their approach, \citet{LBK09} collected and
analyzed phrases from 90 million articles from the time around the
United States presidential elections in 2008; the results were featured
in the New York Times~\citep{Loh09}. Meanwhile, their approach has grown
up into \textsc{Nifty}{}, a system that allows near real-time observation of
the rise and fall of trends, ideas, and topics in the internet
\citep{SHE+13}.
A core component in the approach of \citet{LBK09} is a heuristic to
solve the NP-hard \textsc{DAG Partitioning}{} problem. They use it to cluster short phrases,
which may undergo modifications while propagating through the web, with
respect to their origins. To this end, they create an arc-weighted
directed acyclic graph with phrases as vertices and draw an arc from
phrase~$p$ to phrase~$q$ if $p$~presumably originates from~$q$, where
the weight of an arc represents the support for this hypothesis: the
weight assigned to an arc~$(p,q)$ is chosen inversely proportional to
the time difference between~$p$ and~$q$ and their Levenshtein distance
when using words as tokens, whereas it is proportional to the total
number of documents in the corpus that contain the
phrase~$q$.\footnote{Unfortunately, \citet{LBK09} neither give a more
precise description nor the range of their weights. All our hardness results
even hold for unit weights, while all our algorithms work whenever the
weight of each arc is at least one; however, we choose the weights to
be positive integers to avoid representation issues.}
A vertex without outgoing arcs is called a \emph{sink} and can be
interpreted as the origin of a phrase. If a phrase has directed path{}s to more
than one sink, its ultimate origin is ambiguous and, in the model of
\citet{LBK09}, at least one of the ``$p$ originates from~$q$''
hypotheses is wrong. \citet{LBK09} introduced \textsc{DAG Partitioning}{} with the aim to
resolve these inconsistencies by removing a set of arcs (hypotheses)
with least support:
\decprob{\textsc{DAG Partitioning}}
%
{A directed acyclic graph~$G = (V,A)$ with positive integer arc
weights~$\ensuremath{\omega}{}\colon A \rightarrow \ensuremath{\mathbb{N}}$ and a positive integer~$k \in
\ensuremath{\mathbb{N}}$.}
%
{Is there a set~$\ensuremath{S} \subseteq A$ with $\sum_{a\in \ensuremath{S}}
\ensuremath{\omega}{}(a)\leq k$ such that each weakly-connected component in~$G' =
(V,A \setminus \ensuremath{S})$ has exactly one sink?}
\noindent Herein, the model of \citet{LBK09} exploits that a
weakly-connected component of a directed acyclic graph contains exactly
one sink if and only if all its vertices have directed path{}s to only one
sink. We call a set~$\ensuremath{S} \subseteq A$ such that each weakly-connected
component in~$G' = (V,A \setminus \ensuremath{S})$ has exactly one sink a~\emph{partitioning set{}}.
\citet{LBK09} showed that \textsc{DAG Partitioning}{} is NP-hard and presented a heuristic
to find partitioning set{}s of small weight. \citet{AM12} showed that, for
fixed~$\varepsilon>0$, even approximating the minimum weight of
a~partitioning set{} within a factor of~$O(n^{1-\varepsilon})$ is NP-hard. In
the absence of approximation algorithms, exact solutions to \textsc{DAG Partitioning}{}
become interesting for the reason of evaluating the quality of known
heuristics alone.
We aim for solving \textsc{DAG Partitioning}{} exactly using \emph{fixed-parameter
algorithms}---a~framework to obtain algorithms to optimally solve
NP-hard problems that run efficiently given that certain parameters of
the input data are small \cite{FG06, Nie06, DF13}. A natural parameter
to consider is the minimum weight~$k$ of the partitioning set{} sought, since
one would expect that wrong hypotheses have little support.
\paragraph{Known results} To date, there are only few studies on \textsc{DAG Partitioning}.
\Citet{LBK09} showed that \textsc{DAG Partitioning}{} is NP-hard and present heuristics. \citet{AM12} showed that, on $n$-vertex graphs,
\textsc{DAG Partitioning}{} is hard to approximate in the sense that if $\text{P} \neq
\text{NP}$, then there is no polynomial-time factor-$(n^{1-\varepsilon})$
approximation algorithm for any fixed $\varepsilon > 0$,
even if the input graph has unit-weight arcs, maximum outdegree three,
and only two sinks. Moreover, \citet{AM12} showed that \textsc{DAG Partitioning}{} can be
solved in~$2^{O(\ensuremath{t}{}^2)} \cdot n$ time if a width-$\ensuremath{t}{}$ path
decomposition of the input graph is given.
\textsc{DAG Partitioning}{} is very similar to the well-known NP-hard \textsc{Multiway Cut}{} problem~\citep{DJP+94}: given an undirected edge-weighted graph and a
subset of the vertices called \emph{terminals}, delete edges of total
weight at most~$k$ such that each terminal is separated from all others.
\textsc{DAG Partitioning}{} can be considered as \textsc{Multiway Cut}{} with the sinks being terminals and the
additional constraint that not all arcs outgoing from a vertex may be
deleted, since this would create a new sink. \Citet{Xia10} gave an
algorithm to solve \textsc{Multiway Cut}{} in $O(2^k\cdot \min(n^{2/3},m^{1/2})\cdot
nm)$~time. Interestingly, in contrast to \textsc{DAG Partitioning}{}, \textsc{Multiway Cut}{} is
constant-factor approximable (see, e.\,g., \citet{KKS+04}).
\paragraph{Our results} We provide algorithmic as well as intractability results. On the algorithmic side, we present an $O(2^k\cdot (n+m))$~time algorithm for \textsc{DAG Partitioning}{} and complement it with linear-time executable data reduction rules. We evaluated both experimentally; in combination, they optimally solved instances with~$k\leq190$ within five minutes, even on inputs with more than~$10^7$ arcs. Moreover, we use the optimal solutions found by our algorithm to evaluate the quality of \citet{LBK09}'s heuristic: it computes optimal solutions for most instances that our algorithm solves quickly, but performs worse by a factor of more than two on other instances.
Also, we give an algorithm that solves \textsc{DAG Partitioning}{} in~$2^{O(\ensuremath{t}{}^2)} \cdot n$
time if a width-$\ensuremath{t}{}$ tree decomposition of the input graph is
given. We thus answer an open question by \citet{AM12}. Since every
width-$\ensuremath{t}$ path decomposition is a width-$\ensuremath{t}$ tree decomposition but
not every graph allowing for a width-$\ensuremath{t}$ tree decomposition allows for
a width-$\ensuremath{t}$ path decomposition, our algorithm is an improvement over
the $2^{O({\ensuremath{t}{}}^2)} \cdot n$-time algorithm of \citet{AM12}, which
requires a path decomposition as input.
On the side of intractability results, we strengthen the NP-hardness
results of \citet{LBK09} and \citet{AM12} to graphs of diameter two and
maximum degree three and we show that our $O(2^k\cdot (n+m))$~time
algorithm cannot be improved to $O(2^{o(k)} \cdot \ensuremath{\operatorname{poly}}(n))$~time unless
the Exponential Time Hypothesis fails. Moreover, we show that \textsc{DAG Partitioning}{} does not admit polynomial-size problem kernels with respect to~$k$
unless \NoPolyKernelAssume.
\paragraph{Organization of this paper} %
In \autoref{sec:prelims}, we introduce necessary notation and two basic structural
observations for \textsc{DAG Partitioning}{} that are important in our proofs.
In \autoref{sec:smallsol}, we present our $O(2^k\cdot (n+m))$~time algorithm and
its experimental evaluation.
With the help of the optimal solutions computed by our algorithm we also evaluate the
quality of a heuristic presented by \citet{LBK09}.
Moreover, we discuss the limits of parameterized algorithms and problem kernelization
for \textsc{DAG Partitioning}{} parameterized by~$k$.
\autoref{sec:tw} presents our $2^{O(\ensuremath{t}^2)}\cdot n$~time algorithm. It follows that
\textsc{DAG Partitioning}{} is linear-time solvable when at least one of the parameters~$k$ or~$\ensuremath{t}$ is fixed.
We further show that the heuristic presented by \citet{LBK09} works optimally on trees.
\autoref{sec:classical} then shows that other parameters are not as helpful in solving
\textsc{DAG Partitioning}{}: \textsc{DAG Partitioning}{} remains NP-hard even when graph parameters like the diameter or maximum
degree are constants.
\section{Preliminaries and basic observations}\label{sec:prelims}
We consider finite simple directed graphs~$G=(V,A)$ with \emph{vertex
set}~$V(G):=V$ and \emph{arc set}~$A(G):=A\subseteq V\times V$, as
well as finite simple undirected graphs~$G=(V,E)$ with vertex set~$V$
and \emph{edge set}~$E(G):=E\subseteq\{\{u,v\} \mid u,v\in V\}$.
For a directed graph~$G$, the \emph{underlying undirected graph}~$G'$
is the graph that has undirected edges in the places where~$G$ has arcs,
that is, $G'= (V(G), \{\{v,w\} : (v,w) \in A(G)\})$.
We will use $n$~to denote the number of vertices and $m$~to denote
the number of arcs or edges of a graph.
For a (directed or undirected) graph~$G$, we denote by $G \setminus A'$
the subgraph obtained by removing from~$G$ the arcs or edges in~$A'$ and
by $G-V'$ the subgraph obtained by removing from~$G$ the vertices
in~$V'$. For $V'\subseteq V$, we denote by $G[V']:=G-(V\setminus V')$
the subgraph of~$G$ \emph{induced} by the vertex set~$V'$.
The set of \emph{out-neighbors} and \emph{in-neighbors} of a vertex~$v$
in a directed graph is~$N^+ (v) := \{u \mid (v,u) \in A\}$ and $N^- (v) :=
\{u \mid (u,v) \in A\}$, respectively. The \emph{outdegree}, the
\emph{indegree}, and the \emph{degree} of a vertex~$v \in V$ are $d^+
(v) := |N^+ (v)|$, $d^- (v) := |N^- (v)|$, and $d(v) := d^+ (v) + d^-
(v)$, respectively. A vertex~$v$ is a \emph{sink} if $d^+(v)=0$; it is
\emph{isolated} if $d(v)=0$.
A \emph{path} of length~$\ell-1$ from~$v_1$ to~$v_\ell$ in an undirected
graph~$G$ is a tuple~$(v_1,\dots,v_\ell)\in V^\ell$ such that
$\{v_i,v_{i+1}\}$ is an edge in~$G$ for~$1\leq i\leq\ell-1$. An
\emph{undirected path{}} in a directed graph is a path in its underlying undirected
graph. A \emph{directed path{}} of length~$\ell-1$ from~$v_1$ to~$v_\ell$
in a directed graph~$G$ is a tuple~$(v_1,\dots,v_\ell)\in V^\ell$ such
that $(v_i,v_{i+1})$ is an arc in~$G$ for~$1\leq i\leq\ell-1$.
We say that $u\in V$ \emph{can reach} $v\in V$ or that $v$ \emph{is
reachable from}~$u$ in~$G$ if there is a directed path{} from~$u$ to~$v$
in~$G$. We say that $u$~and~$v$ are \emph{connected} if there is a (not
necessarily directed) path from~$u$ to~$v$ in~$G$. In particular,
$u$~is reachable from~$u$ and connected to~$u$. We use \emph{connected
component} as an abbreviation for \emph{weakly connected component},
that is, a maximal set of pairwise connected vertices. The
\emph{diameter} of~$G$ is the maximum length of a shortest path between
two different vertices in the underlying undirected graph of~$G$.
\paragraph{Fixed-parameter algorithms} The main idea in fi\-xed-pa\-ra\-me\-ter{} algorithms
is to accept the super-polynomial running time, which is seemingly
inevitable when optimally solving NP-hard problems, but to restrict it to
one aspect of the problem, the \emph{parameter}. More precisely, a
problem $\Pi$ is \emph{fi\-xed-pa\-ra\-me\-ter{} tractable~(FPT)} with respect to a
parameter~$k$ if there is an algorithm solving any instance of $\Pi$ with
size~$n$ in $f(k) \cdot \ensuremath{\operatorname{poly}}(n)$~time for some computable function~$f$
\citep{FG06,Nie06,DF13,CFK+15}.
Such an algorithm is called \emph{fixed-parameter algorithm}.
Since \citet{SHE+13} point out that the input instances of \textsc{DAG Partitioning}{} can
be so large that even running times quadratic in the input size are
prohibitively large, we focus on finding algorithms that run in
\emph{linear time} if the parameter~$k$ is a constant.
An important ingredient of our algorithms is linear-time data reduction,
which recently received increased interest since data reduction is
potentially applied to large input data \citep{PDS09,BHK+12,Hag12,Bev14b,
Kam15,FK15}.
\paragraph{Problem kernelization} One way of deriving fi\-xed-pa\-ra\-me\-ter{} algorithms is \emph{(problem) kernelization} \citep{GN07,Bod09}. As a formal approach of describing efficient data reduction that preserves optimal solutions, problem kernelization is a powerful tool for attacking NP-hard problems. A \emph{kernelization algorithm} consists of \emph{data reduction rules} that, applied to any instance $x$ with parameter~$k$, yield an instance $x'$ with parameter~$k'$ in time polynomial in $|x|+k$ such that $(x,k)$~is a yes-instance if and only if $(x',k')$~is a yes-instance, and if both $|x'|$ and $k'$ are bounded by some functions $g$ and $g'$ in~$k$, respectively. The function~$g$ is referred to as the \emph{size} of the \emph{problem kernel}~$(x',k')$.
Note that it is the parameter that allows us to measure the
effectiveness of polynomial-time executable data reduction, since a
statement like ``the data reduction shrinks the input by a
factor~$\alpha$'' would imply that we can solve NP-hard problems in
polynomial time.
From a practical point of view, problem kernelization is potentially applicable to speed up exact and heuristic algorithms to solve a problem. Since kernelization is applied to shrink potentially \emph{large} input instances, recently the running time of kernelization has stepped into the focus and linear-time kernelization algorithms have been developed for various NP-hard problems \citep{PDS09,BHK+12,Hag12,Bev14b,Kam15,FK15}.
\paragraph{Two basic observations.} The following easy to prove structural observations will be exploited in many proofs of our work.
The first observation states that a minimal partitioning set{} does not introduce new sinks.
\begin{observation}\label{lem:no_new_sinks}
Let $G$~be a directed acyclic graph and~$\ensuremath{S}{}$ be a
minimal partitioning set{} for~$G$. Then, a vertex is a sink in
$G\setminus\ensuremath{S}{}$ if and only if it is a sink in~$G$.
\end{observation}
\begin{proof}
Clearly, deleting arcs from a directed acyclic graph cannot turn a sink into a non-sink. Therefore, it remains to show that every sink in~$G\setminus\ensuremath{S}{}$ was a sink already in~$G$. Towards a contradiction, assume that there is a vertex~$s$ that is a sink in~$G\setminus\ensuremath{S}{}$ but not in~$G$. Then, there is an arc~$(s, v)$ in~$G$ for some vertex~$v$ of~$G$. Let $C_v$ and~$C_s$ be the connected components in~$G\setminus\ensuremath{S}{}$ containing~$v$ and~$s$, respectively, and let~$s_v$ be the sink in~$C_v$. Then, for $\ensuremath{S}{}':=\ensuremath{S}{}\setminus\{(s,v)\}$, the connected component $C_v \cup C_s$ in~$G\setminus\ensuremath{S}{}'$ has only one sink, namely $s_v$. Thus, $\ensuremath{S}{}'$~is also a partitioning set{} for~$G$, but $\ensuremath{S}{}' \subsetneq \ensuremath{S}{}$, a contradiction to $\ensuremath{S}{}$~being minimal. \qed
\end{proof}
The second observation is that each vertex in a directed acyclic graph is connected to exactly one sink if and only if it can reach that sink.
\begin{observation}\label{lem:dirundirequiv}
Let $G$~be a directed acyclic graph. An arc set~$\ensuremath{S}{}$ is
a partitioning set{} for~$G$ if and only if each vertex in~$G$ can reach
exactly one sink in~$G\setminus\ensuremath{S}{}$.
\end{observation}
\newcommand{directed acyclic graph}{directed acyclic graph}
\begin{proof}
If $\ensuremath{S}{}$~is a partitioning set{} for~$G$, then, by definition, each
connected component of~$G\setminus\ensuremath{S}{}$ contains exactly one
sink. Therefore, each vertex in~$G\setminus\ensuremath{S}{}$ can reach
\emph{at most} one sink. Moreover, since $G\setminus\ensuremath{S}{}$ is a
directed acyclic graph{} and each vertex in a directed acyclic graph{} can reach \emph{at least} one
sink, it follows that each vertex in~$G\setminus\ensuremath{S}{}$ can reach
\emph{exactly} one sink.
Now, assume that each vertex in~$G\setminus\ensuremath{S}{}$ can
reach exactly one sink. We show that each connected component
of~$G\setminus\ensuremath{S}{}$ contains exactly one sink. For the sake of
contradiction, assume that a connected component~$C$
of~$G\setminus\ensuremath{S}{}$ contains multiple sinks~$s_1,\dots,s_t$.
For~$i\in[t]$, let $A_i$~be the set of vertices that reach~$s_i$.
These vertex sets are pairwise disjoint and, since every vertex in a
directed acyclic graph reaches some sink, they partition the vertex
set of~$C$.
Since $C$~is a connected component, there are~$i,j\in[t]$ with~$i\ne
j$ and some arc~$(v,w)$ in~$G\setminus\ensuremath{S}{}$ from some vertex~$v\in
A_i$ to some vertex~$w\in A_j$. This is a contradiction, since
$v$~can reach~$s_i$ as well as~$s_j$. \qed
\end{proof}
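Observation 2 immediately yields a brute-force check for partitioning sets. The following Python sketch (illustration only; the function names and the toy DAG are ours, not from the paper) tests whether an arc set~$S$ is a partitioning set by verifying that every vertex reaches exactly one sink in~$G\setminus S$.

```python
# Illustration only (names and example graph are ours): by Observation 2,
# S is a partitioning set iff every vertex of G \ S reaches exactly one sink.

def reachable_sinks(vertices, arcs, v):
    """Sinks reachable from v via directed paths (iterative DFS)."""
    out = {u: [] for u in vertices}
    for a, b in arcs:
        out[a].append(b)
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(out[u])
    return {u for u in seen if not out[u]}

def is_partitioning_set(vertices, arcs, S):
    rest = [a for a in arcs if a not in S]
    return all(len(reachable_sinks(vertices, rest, v)) == 1 for v in vertices)

# Vertex v has directed paths to the two sinks s1 and s2, so the empty set
# is not a partitioning set; deleting the arc (v, s2) repairs this.
V = ["v", "s1", "s2"]
A = [("v", "s1"), ("v", "s2")]
print(is_partitioning_set(V, A, set()))          # False
print(is_partitioning_set(V, A, {("v", "s2")}))  # True
```

Deleting both arcs outgoing from~$v$ would also partition the graph, but it creates the new sink~$v$ and is therefore not minimal, in line with Observation 1.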
\section{Parameter weight of the partitioning set{} sought}\label{sec:smallsol}
This section investigates the influence of the parameter ``weight~$k$ of
the partitioning set{} sought'' on \textsc{DAG Partitioning}{}. First, in \autoref{sec:klintime},
we present an $O(2^k\cdot(n+m))$-time algorithm and design linear-time
executable data reduction rules.
In \autoref{sec:experiments}, we experimentally evaluate the algorithm
and data reduction rules and---with the help of the optimal solutions
computed by our algorithm---we also evaluate the quality of a heuristic
presented by \citet{LBK09}.
In \autoref{sec:kkern}, we investigate the question whether the
provided data reduction rules might have a provable shrinking effect on
the input instance in form of a polynomial-size problem kernel.
We will see that, despite the fact that data reduction rules work very
effectively in experiments, polynomial-size problem kernels for \textsc{DAG Partitioning}{}
do not exist under reasonable complexity-theoretic assumptions.
Moreover, \autoref{sec:kkern} will also show that the algorithm presented
in \autoref{sec:klintime} is essentially optimal.
\subsection{Constant-weight partitioning set{}s in linear
time}\label{sec:klintime}
We now present an algorithm to compute partitioning set{}s of
weight~$k$ in $O(2^k\cdot(n+m))$~time. Interestingly, although both problems are
NP-hard, it will turn out that \textsc{DAG Partitioning}{} is substantially easier to solve exactly
than the closely related \textsc{Multiway Cut}{} problem, for which a sophisticated
algorithm running in $O(2^k \min(n^{2/3},m^{1/2})nm)$~time was given by
\citet{Xia10}. This is in contrast to \textsc{Multiway Cut}{} being constant-factor
approximable \citep{KKS+04}, while \textsc{DAG Partitioning}{} is inapproximable unless
P${}={}$NP \citep{AM12}.
The main structural advantage of \textsc{DAG Partitioning}{} over \textsc{Multiway Cut}{} is the alternative characterization of partitioning set{}s given in \autoref{lem:dirundirequiv}: we only have to decide which sink each vertex~$v$ will reach in the optimally partitioned graph. To this end, we first decide the question for all out-neighbors of~$v$. This natural processing order allows us to solve \textsc{DAG Partitioning}{} by the simple search tree algorithm shown in \autoref{alg:simple-st}, which we explain in the proof of the following~theorem.
\begin{theorem}
\label{thm:search-tree-DAGP} \autoref{alg:simple-st} solves \textsc{DAG Partitioning}{} in
$O(2^k \cdot (n+m))$ time.
\end{theorem}
\begin{proof}
\autoref{alg:simple-st} is based on recursive branching and computes a partitioning set{}~$\ensuremath{S}{}$ of weight at most~$k$ for a directed acyclic graph~$G=(V,A)$. It exploits the following structural properties of a minimal partitioning set{}~\(\ensuremath{S}\): by \autoref{lem:dirundirequiv}, a vertex~$v$ is connected to a sink~$s$ in~$G\setminus\ensuremath{S}{}$ if and only if it can reach that sink~$s$ in~$G\setminus\ensuremath{S}{}$. Thus, consider a vertex~$v$ of~$G$ and assume that we know, for each out-neighbor~$w$ of~$v$ in~$G$, the sink~$s$ that $w$~can reach in~$G\setminus\ensuremath{S}{}$. We call a sink~$s$ of~$G$ \emph{feasible} for~$v$ if an out-neighbor of~$v$ in~$G$ can reach~$s$ in~$G\setminus \ensuremath{S}{}$. Let $D$~be the set of feasible sinks for~$v$. Since $v$~may be connected to only one sink in~$G\setminus\ensuremath{S}{}$, at least~$|D|-1$ arcs outgoing from~$v$ are deleted by~$\ensuremath{S}{}$. However, $\ensuremath{S}{}$~does not disconnect $v$~from all sinks in~$D$, since then $\ensuremath{S}{}$ would delete all arcs outgoing from~$v$, contradicting \autoref{lem:no_new_sinks}. Hence, exactly one sink~$s\in D$ must be reachable by~$v$ in~$G\setminus\ensuremath{S}{}$. For each such sink~$s\in D$, the partitioning set{}~$\ensuremath{S}{}$ has to delete at least~$|D|-1$ arcs outgoing from~$v$. We simply try out all these possibilities, which gives rise to the following search tree algorithm.
\begin{algorithm}[t]
\caption{Compute a partitioning set{} of weight at most~$k$.}
\label{alg:simple-st}
\KwIn{A directed acyclic graph~$G=(V,A)$ with arc weights~$\ensuremath{\omega}{}$ and a positive integer~$k$.}
\KwOut{A partitioning set{}~$S$ of weight at most~$k$ if it exists; otherwise `no'.}
%
%
\vspace{1em}
$(v_1,v_2,\dots,v_n)\gets{}$reverse topological order of the vertices
of~$G$\;
$L\gets{}$array with $n$~entries\; %
searchtree($1,\emptyset$)\tcp*{\textrm{start with vertex~$v_1$ and~$S=\emptyset$}}
\textbf{output} `no'\tcp*{\textrm{there is no partitioning set of weight at most~$k$}}
\vspace{1em}
\SetKwProg{myproc}{Procedure}{}{}
\myproc{searchtree($i$, $\ensuremath{S}{}$)\tcp*[f]{\textrm{vertex counter~$i$; (partial) partitioning set~$S$}}}{
\While(\nllabel{lin:skipverts}){$v_i$ is a sink~$s$ or there is a sink~$s$ such that $\forall w\in N^+(v_i): L[w]=s$}{
$L[v_i]\gets s$\nllabel{lin:asself}\tcp*{\textrm{associate~$v_i$ with sink~$s$}}
$i\gets i+1$\tcp*{\textrm{continue with next vertex}}
}
\If(\tcp*[f]{\textrm{all vertices have been handled, $\ensuremath{S}{}$ is a partitioning set{}}}){$i>n$}{
\lIf(\nllabel{lin:outputS}){$\ensuremath{\omega}(\ensuremath{S}{})\leq k$}{\textbf{output}~$\ensuremath{S}{}$\tcp*[f]{\textrm{a partitioning set of weight at most~$k$ has been found}}}
}
\Else{%
$D\gets\{L[w]\mid w\in N^+(v_i)\}$\nllabel{lin:D}\tcp*{\textrm{the set of feasible sinks for~$v_i$}}
\If(\nllabel{dsmallcheck}\tcp*[f]{\textrm{check whether we are allowed to delete $|D|-1$~arcs}}){$|D|-1\leq k-\ensuremath{\omega}(\ensuremath{S}{})$}{
\ForEach(\nllabel{lin:branch}\tcp*[f]{\textrm{try to associate~$v_i$ to each feasible sink~$s$}}){$s\in D$}{
$L[v_i]\gets s$\nllabel{lin:asss}\;
$S'\gets S\cup\{(v_i,w)\mid w\in N^+(v_i)\text{ and } L[w]\ne s\}$\nllabel{lin:Sprime}\;
searchtree($i+1, S'$)\nllabel{lin:reccall}\;
}
}
}
}
\end{algorithm}
\autoref{alg:simple-st} starts with~$\ensuremath{S}{}=\emptyset{}$ and processes
the vertices of~$G$ in reverse topological order, that is, each vertex
is processed by procedure~``searchtree'' after its out-neighbors. The
procedure exploits the invariant that, when processing a vertex~$v$,
each out-neighbor~$w$ of~$v$ has already been \emph{associated} with
the sink~$s$ that it reaches in~$G\setminus\ensuremath{S}{}$ (in terms of
\autoref{alg:simple-st}, $L[w]=s$). It then tries all possibilities of
associating $v$~with a feasible sink and augmenting~$\ensuremath{S}{}$
accordingly, so that the invariant also holds for the vertex processed after~$v$.
Specifically, if $v$~is a sink, then \autoref{lin:asself} associates
$v$~with itself. If all out-neighbors of~$v$ are associated with the
same sink~$s$, then \autoref{lin:asself} associates
$v$~with~$s$. Otherwise, \autoref{lin:D} computes the set~$D$ of
feasible sinks for~$v$. In lines~\ref{lin:branch}--\ref{lin:reccall}, the algorithm
branches into all possibilities of associating~$v$ with one of
the~$|D|$ feasible sinks~$s\in D$ (by way of setting~$L[v]\gets s$ in
\autoref{lin:asss}) and augmenting~$\ensuremath{S}{}$ so
that $v$~only reaches~$s$ in~$G\setminus\ensuremath{S}{}$. That is, in each of
the $|D|$~branches, it adds to~$\ensuremath{S}{}$ the arcs
outgoing from~$v$ to the out-neighbors of~$v$ that are associated with
sinks different from~$s$ (the weight of~$\ensuremath{S}{}$ increases by at
least~$|D|-1$). Then, \autoref{lin:reccall} continues with the next
vertex in the reverse topological order. After processing the last
vertex, each vertex of~$G\setminus\ensuremath{S}{}$ can reach exactly one sink,
that is, $\ensuremath{S}{}$~is a partitioning set{}. If a branch finds a partitioning set{} with
weight at most~$k$, \autoref{lin:outputS} outputs it.
%
%
%
%
%
%
\looseness=-1 We analyze the running time of this algorithm. To this end, we first bound the total number of times that procedure ``searchtree'' is called, starting with the number of \emph{terminal calls}, that is, calls that do not recursively call the procedure. Let $T(\alpha)$~denote the maximum possible number of terminal calls caused by the procedure ``searchtree'' when called with a set~$\ensuremath{S}{}$ satisfying~$\ensuremath{\omega}(\ensuremath{S}{})\geq \alpha$, including itself if it is a terminal call. Note that procedure ``searchtree'' calls itself only in \autoref{lin:reccall}, that is, for each sink~$s$ of some set~$D$ of feasible sinks with~$1\leq |D|-1\leq k-\alpha$, it calls itself with a set~$S'$ of weight at least~$\alpha+|D|-1$. Thus, we have
\[T(\alpha)\leq |D|\cdot T(\alpha+|D|-1).\] We now inductively show
that~$T(\alpha)\leq 2^{k-\alpha}$ for~$0\leq\alpha\leq k$; then there
are at most $T(0)\leq 2^k$~terminal
calls. For the induction base case, observe that $T(k)=1$, since,
if procedure ``searchtree'' is called with a set~$\ensuremath{S}{}$ of weight
at least~$k$, then any recursive call is prevented by the check in
\autoref{dsmallcheck}. Now, assume that $T(\alpha')\leq
2^{k-\alpha'}$~holds for all~$\alpha'$ with~$\alpha\leq \alpha'\leq k$. We show
$T(\alpha-1)\leq 2^{k-(\alpha -1)}$ by exploiting $2\leq|D|\leq
2^{|D|-1}$ as follows:
\begin{align*}
T(\alpha-1)&\leq |D|\cdot T(\alpha-1+|D|-1)
\leq |D|\cdot 2^{k-(\alpha-1)-(|D|-1)} = |D|\cdot
\frac{2^{k-(\alpha-1)}}{2^{|D|-1}} \leq 2^{k-(\alpha-1)}.
\end{align*}
It follows that there are at most~$T(0)\leq 2^k$ terminal calls to
procedure ``searchtree''.
In order to bound the total number of calls to procedure ``searchtree'', observe the following: if each inner node of a tree has at least two children, then the number of inner nodes is at most the number of leaves. Since procedure ``searchtree'' calls itself in \autoref{lin:reccall} once for each sink~$s$ of some set~$D$ of feasible sinks with~$1\leq |D|-1\leq k-\alpha$, each non-terminal call causes at least two recursive calls. Thus, since there are $O(2^k)$~terminal calls, there are also $O(2^k)$~non-terminal calls.
It follows that there are $O(2^k)$~total calls of procedure ``searchtree''. For each such call, we iterate, in the worst case, over all out-neighbors of all vertices in the graph in lines~\ref{lin:skipverts}--\ref{lin:D}, which works in $O(n+m)$~time. Moreover, for each call of procedure ``searchtree'', we compute a set~$S'$ in \autoref{lin:Sprime} in $O(n+m)$~time. Hence, a total amount of~$O(2^k\cdot (n+m))$~time is spent in procedure ``searchtree''. Initially, \autoref{alg:simple-st} uses $O(n+m)$~time to compute a reverse topological ordering~\citep[Section~22.4]{CLRS01}. \qed
\end{proof}
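To complement the pseudocode, the branching strategy of the search tree algorithm can be sketched in Python as follows. This is an illustrative sketch and not the paper's C++ implementation; function and variable names are ours, and, unlike the pseudocode, the sketch returns a minimum-weight partitioning set among those of weight at most~$k$ rather than stopping at the first one found.

```python
def dag_partitioning(succ, weight, k):
    """Search-tree sketch: find a partitioning set of weight at most k in
    the DAG given by `succ` (vertex -> list of out-neighbors) and `weight`
    (arc -> weight). Returns the set of deleted arcs, or None."""
    # reverse topological order: every vertex is processed after its out-neighbors
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for w in succ.get(v, []):
                visit(w)
            order.append(v)       # appended after all successors
    for v in succ:
        visit(v)

    L = {}                        # L[v] = the sink that v reaches in G \ S

    def search(i, S, w_S):
        if i == len(order):       # all vertices handled: S is a partitioning set
            return set(S) if w_S <= k else None
        v = order[i]
        out = succ.get(v, [])
        if not out:               # v is a sink: associate it with itself
            L[v] = v
            return search(i + 1, S, w_S)
        D = {L[w] for w in out}   # feasible sinks for v
        if len(D) == 1:           # all out-neighbors reach the same sink
            L[v] = next(iter(D))
            return search(i + 1, S, w_S)
        if len(D) - 1 > k - w_S:  # cannot afford |D| - 1 more deletions
            return None
        best, best_w = None, None
        for s in D:               # branch: v keeps only arcs towards sink s
            L[v] = s
            cut = [(v, w) for w in out if L[w] != s]
            res = search(i + 1, S + cut, w_S + sum(weight[a] for a in cut))
            if res is not None:
                res_w = sum(weight[a] for a in res)
                if best is None or res_w < best_w:
                    best, best_w = res, res_w
        return best

    return search(0, [], 0)
```

Note how the feasibility check `len(D) - 1 > k - w_S` corresponds to the pruning in the pseudocode that bounds the search tree by~$2^k$ leaves.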
\noindent The experimental results in \autoref{sec:experiments} will show that
\autoref{alg:simple-st} alone cannot solve even moderately large
instances. Therefore, we complement it by linear-time executable data
reduction rules that will allow for a significant speedup. The following data
reduction rule is illustrated in \autoref{fig:halfblind}.
\begin{figure}
\centering
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex,label=below:$s_1$] (s1) at (1,0) {};
\node[vertex,label=below:$s_2$] (s2) at (3,0) {};
\node[vertex] (a) at (0,1) {};
\node[vertex] (b) at (1,1) {};
\node[vertex,label=above left:$w$] (c) at (1,2) {};
\node[vertex,label=above left:$v$] (d) at (2,3) {};
\node[vertex] (e) at (2,4) {};
\node[vertex] (f) at (3,2) {};
\draw[->] (a)--(s1);
\draw[->] (b)--(s1);
\draw[->] (c)--(b);
\draw[->] (d)--(c);
\draw[->] (d) to[out=-85](s2);
\draw[->] (c) to(a);
\draw[->] (e)--(d);
\draw[->] (d)--(f);
\draw[->] (f)--(s2);
\end{tikzpicture}\hspace{2cm}
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex,label=below:$s_1$] (s1) at (1,0) {};
\node[vertex,label=below:$s_2$] (s2) at (3,0) {};
\node[vertex] (a) at (0,1) {};
\node[vertex] (b) at (1,1) {};
\node[vertex] (c) at (1,2) {};
\node[vertex,label=above left:$v$] (d) at (2,3) {};
\node[vertex] (e) at (2,4) {};
\node[vertex] (f) at (3,2) {};
\draw[->] (a)--(s1);
\draw[->] (b)--(s1);
\draw[->] (c)--(b);
\draw[->] (d) to[out=-95,in=45](s1);
\draw[->] (d) to[out=-85] node[right]{$2$} (s2);
\draw[->] (c) to(a);
\draw[->] (e)--(d);
\draw[->] (f)--(s2);
\end{tikzpicture}
\caption{The left side shows an input graph with unit weights to which \autoref{halfblind} is applicable. The right side shows the same graph to which \autoref{halfblind} has been applied as often as possible. Since~$v$ can reach multiple sinks and its out-neighbors each can reach only one sink, its arcs have been redirected. Unlabeled arcs have weight one.}
\label{fig:halfblind}
\end{figure}
\begin{reductionrule}\label{halfblind}
If there is an arc~$(v,w)$ such that $w$~can reach exactly one
sink~$s\neq w$ and $v$~can reach multiple sinks, then
\begin{itemize}
\item if there is no arc~$(v,s)$, then add it with weight~$\ensuremath{\omega}(v,w)$,
\item otherwise, increase $\ensuremath{\omega}{}(v,s)$ by $\ensuremath{\omega}{}(v,w)$, and
\end{itemize}
delete the arc~$(v,w)$.
\end{reductionrule}
\noindent Note that, in the formulation of the data reduction rule, both
$v$~and~$w$ may be \emph{connected} to an arbitrary number of sinks
by an undirected path{}. However, we require that $w$~\emph{can reach}
exactly one sink and that $v$~\emph{can reach} multiple sinks, that is,
using a directed path{}.
\begin{lemma}
Let $(G,\ensuremath{\omega}{},k)$ be a \textsc{DAG Partitioning}{} instance and consider the
graph~$G'$ with weights~$\ensuremath{\omega}{}'$ output by \autoref{halfblind}
applied to an arc~$(v,w)$ of~$G$. Then $(G,\ensuremath{\omega}{},k)$~is a
yes-instance if and only if~$(G',\ensuremath{\omega}{}',k)$~is a yes-instance.
\end{lemma}
\begin{proof}
First, assume that $(G,\ensuremath{\omega}{},k)$ is a yes-instance and that~$\ensuremath{S}{}$
is a minimal partitioning set{} of weight at most~$k$ for~$G$. We show how to
transform~$\ensuremath{S}{}$ into a partitioning set{} of equal weight for~$G'$. We
distinguish two cases: either $\ensuremath{S}{}$ disconnects~$v$ from~$s$ or
not, where $s$~is the only sink that~$w$ can reach.
\begin{caselist}
\item Assume that $S$~disconnects~$v$ from~$s$. Note that every
subgraph of a directed acyclic graph is again a directed acyclic
graph and that every vertex in a directed acyclic graph is not only
connected to, but also can reach some sink. Hence, by
\autoref{lem:no_new_sinks}, $\ensuremath{S}{}$ cannot disconnect~$w$ from~$s$,
since $w$~can only reach~$s$ in~$G$ and would have to reach some
other, that is, new sink in~$G\setminus\ensuremath{S}{}$. It follows that
$\ensuremath{S}{}$~contains the arc~$(v,w)$. Now, however,
$S':=(S\setminus\{(v,w)\})\cup\{(v,s)\}$ is a partitioning set{} for~$G'$,
since $G\setminus S=G'\setminus S'$. Moreover, since
$\ensuremath{\omega}{}'(v,s)=\ensuremath{\omega}{}(v,s)+\ensuremath{\omega}{}(v,w)$, we have
$\ensuremath{\omega}{}'(S')=\ensuremath{\omega}{}(S)$, where we, for convenience, declare $\ensuremath{\omega}{}(v,s)=0$ if
there is no arc~$(v,s)$ in~$G$.
\item Assume that $\ensuremath{S}{}$~does not disconnect~$v$ from~$s$ and, for
the sake of a contradiction, that $\ensuremath{S}{}$ is not a partitioning set{}
for~$G'$. Observe that $\ensuremath{S}{}$~contains neither~$(v,w)$
nor~$(v,s)$, because it is a minimal partitioning set{} and does not
disconnect~$v$ from~$s$. Therefore, $G'\setminus S$ differs
from~$G\setminus S$ only in the fact that~$G'\setminus S$ does not
have the arc~$(v,w)$ but an arc~$(v,s)$ that was possibly not
present in~$G\setminus S$. Hence, since $\ensuremath{S}{}$ is a partitioning set{}
for~$G$ but not for~$G'$, two sinks are connected to each other
in~$G'\setminus\ensuremath{S}{}$ via an undirected path{} using the arc~$(v,s)$. Thus,
one of the two sinks is~$s$ and the undirected path{} consists of~$(v,s)$ and
a subpath~$p$ between~$v$ and some sink~$s'$. Then, however, $s$~is
connected to~$s'$ also in~$G\setminus\ensuremath{S}{}$ via an undirected path{}
between~$s$ and~$w$ ($\ensuremath{S}{}$~cannot disconnect~$s$ from~$w$ by
\autoref{lem:no_new_sinks}), the arc~$(v,w)$ and the undirected path{}~$p$
from~$v$ to~$s'$. This contradicts $\ensuremath{S}{}$~being a partitioning set{}
for~$G$. We conclude that $S$~is a partitioning set{} for~$G'$. Moreover, since $S$~contains neither~$(v,w)$ nor~\((v,s)\), one has $\ensuremath{\omega}(S)=\ensuremath{\omega}'(S)$.
\end{caselist}
Now, assume that~$(G',\ensuremath{\omega}{}',k)$ is a yes-instance and that~$\ensuremath{S}{}$
is a minimal partitioning set{} of weight at most~$k$ for~$G'$. We show how to
transform~$\ensuremath{S}{}$ into a partitioning set{} of equal weight for~$G$. Again,
we distinguish between two cases: either $\ensuremath{S}{}$ disconnects~$v$
from~$s$ or not.
\begin{caselist}
\item Assume that $S$~disconnects~$v$ from~$s$. Then, $(v,s)\in
S$. Now, $S':=S\cup\{(v,w)\}$ is a partitioning set{} for~$G$,
since~$G\setminus S'=G'\setminus S$. Moreover, since
$\ensuremath{\omega}{}'(v,s)=\ensuremath{\omega}{}(v,s)+\ensuremath{\omega}{}(v,w)$, we have
$\ensuremath{\omega}{}(S')=\ensuremath{\omega}{}'(S)$, where we assume that $\ensuremath{\omega}{}(v,s)=0$ if
there is no arc~$(v,s)$ in~$G$.
\item Assume that $S$~does not disconnect~$v$ from~$s$ and, for the
sake of a contradiction, assume that~$\ensuremath{S}{}$ is not a partitioning set{}
for~$G$. Then, since $\ensuremath{S}{}$~is minimal, $\ensuremath{S}{}$ does not
contain~$(v,s)$. Now, observe that $G\setminus S$
and~$G'\setminus\ensuremath{S}{}$ differ only in the fact
that~$G\setminus\ensuremath{S}{}$ has an additional arc~$(v,w)$ and that,
possibly, $(v,s)$ is missing. Hence, since~$\ensuremath{S}{}$ is a partitioning set{}
for~$G'$ but not for~$G$, there is an undirected path{} between two sinks
in~$G\setminus\ensuremath{S}{}$ through~$(v,w)$. Because~$\ensuremath{S}{}$ by
\autoref{lem:no_new_sinks} cannot disconnect~$w$ from~$s$, one of
these sinks is~$s$ and the undirected path{} consists of a subpath between~$s$
and~$w$, the arc~$(v,w)$, and a subpath~$p$ between~$v$ and a
sink~$s'$. Then, however, $s$~and~$s'$ are also connected
in~$G'\setminus\ensuremath{S}{}$ via the arc~$(v,s)$ and the subpath~$p$
between~$v$ and~$s'$. This contradicts $\ensuremath{S}{}$~being a partitioning set{}
for~$G'$. Finally, since $S$~does not contain~$(v,s)$, one has $\ensuremath{\omega}(S)=\ensuremath{\omega}'(S)$.\qed
\end{caselist}
\end{proof}
\noindent After applying \autoref{halfblind} exhaustively, that is, as
often as it is applicable, we apply a second data reduction rule, which
is illustrated in \autoref{fig:kill-loners}.
\begin{figure}
\centering
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex,label=below:$s_1$] (s1) at (1,0) {};
\node[vertex,label=below:$s_2$] (s2) at (3,0) {};
\node[vertex] (a) at (0,1) {};
\node[vertex] (b) at (1,1) {};
\node[vertex] (c) at (1,2) {};
\node[vertex,label=above left:$v$] (d) at (2,3) {};
\node[vertex] (e) at (2,4) {};
\node[vertex] (f) at (3,2) {};
\draw[->] (a)--(s1);
\draw[->] (b)--(s1);
\draw[->] (c)--(b);
\draw[->] (d) to[out=-95,in=45](s1);
\draw[->] (d) to[out=-85] node[right]{$2$} (s2);
\draw[->] (c) to(a);
\draw[->] (e)--(d);
\draw[->] (f)--(s2);
\begin{pgfonlayer}{background}
\draw[fill=black!10,line width=25pt,line join=round, line cap=round, draw=black!10] (a.center)--(b.center)--(c.center)--cycle;
%
\end{pgfonlayer}
\end{tikzpicture}\hspace{2cm}
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex,label=below:$s_1$] (s1) at (1,0) {};
\node[vertex,label=below:$s_2$] (s2) at (3,0) {};
\node[vertex,label=above left:$v$] (d) at (2,3) {};
\node[vertex] (e) at (2,4) {};
\draw[->] (d) to[out=-95,in=45](s1);
\draw[->] (d) to[out=-85] node[right]{$2$} (s2);
\draw[->] (e)--(d);
\end{tikzpicture}
\caption{On the left side, a graph that cannot be reduced by \autoref{halfblind} is shown. The gray background highlights the non-sink vertices that can only reach the sink~$s_1$. The right side shows the same graph to which \autoref{kill-loners} has been applied as often as possible. Unlabeled arcs have weight one.}
\label{fig:kill-loners}
\end{figure}
\begin{reductionrule}\label{kill-loners}
If, for some sink~\(s\), the set~\(L\) of non-sink vertices that can
reach only~$s$ is nonempty, then delete all vertices in~$L$.
\end{reductionrule}
\begin{lemma}
Let $G$~be a graph that is exhaustively reduced with respect to
\autoref{halfblind} and~$G':=G-L$ be the graph output by
\autoref{kill-loners} when applied to~$G$ for some sink~$s$.
Then, any partitioning set{} for~$G$ is a partitioning set{} of equal weight for~$G'$
and vice versa.
\end{lemma}
\begin{proof}
In order to prove the lemma, we first make three structural
observations about the set~$L$.
\begin{enumerate}[i)]
\item \label{obsL1} There is no arc~$(v,w)$ from a vertex~$v\notin L$
to a vertex~$w\in L$ in~$G$: for the sake of a contradiction, assume that
such an arc exists. Then, since $v\notin L$ and $v$~is obviously not
a sink, $v$~can reach a sink~$s'\neq s$. It follows that $v$~can
reach two sinks: $s$ via~$w$ and~$s'$. This contradicts the
assumption that \autoref{halfblind} is not applicable.
\item \label{obsL2} There is no arc~$(v,w)$ from a vertex~$v\in L$ to
a vertex~$w\notin L$ with $w\neq s$ in~$G$: for the sake of a
contradiction, assume that such an arc exists. Then, since $w\notin L$
and~$w\neq s$, it follows that $w$~can reach a sink~$s'$ different
from~$s$. Then, also $v$~can reach two sinks: $s'$~via~$w$
and~$s$. This contradicts~$v\in L$.
\item \label{obsL3} A minimal partitioning set{}~$\ensuremath{S}{}$ for~$G$ does not contain any arc between vertices in~$L\cup\{s\}$: this is because, by \autoref{lem:no_new_sinks}, no minimal partitioning set{}~$\ensuremath{S}{}$ can disconnect any vertex~$v\in L$ from~$s$, since otherwise~$v$ would reach another, that is, new sink in~$G\setminus\ensuremath{S}{}$.
\end{enumerate}
Now, let $\ensuremath{S}{}$~be a minimal partitioning set{} for~$G$. Then, $\ensuremath{S}{}$~is also a partitioning set{} for~$G'$, since $G'\setminus\ensuremath{S}{}=(G\setminus\ensuremath{S}{})-L$ and deleting~$L$ from~$G\setminus\ensuremath{S}{}$ cannot create new sinks.
In the opposite direction, let $S$~be a partitioning set{} for~$G'$. Then, $S$~is also a partitioning set{} for~$G$, since $G\setminus\ensuremath{S}{}$~is just $G'\setminus\ensuremath{S}{}$~with the vertices in~$L$ and their arcs added. These, however, can reach only the sink~$s$ and are only connected to vertices to which~$s$ is connected. Hence, they do not create new sinks or connect distinct components of~$G'\setminus\ensuremath{S}{}$.\qed
\end{proof}
\noindent We now show how to exhaustively apply both data reduction
rules in linear time. To this end, we apply \autoref{reducealg}: in
lines~\ref{startinit}--\ref{dynprog}, it computes an array~$L$ such
that, for each vertex~$v\in V$, we have $L[v]=\{s\}$ if $v$~reaches
exactly one sink~$s$ and $L[v]=\emptyset$~otherwise. It uses this
information to apply \autoref{halfblind} in
lines~\ref{apprule1}--\ref{delarc} and \autoref{kill-loners} in
lines~\ref{apprule2} and~\ref{apprule2end}.
\begin{algorithm}[t]
\caption{Apply Reduction Rules~\ref{halfblind} and~\ref{kill-loners} exhaustively.}
\label{reducealg}
\KwIn{A directed acyclic graph~$G=(V,A)$ with arc weights~$\ensuremath{\omega}{}$.}
\KwOut{The result of exhaustively applying Reduction
Rules~\ref{halfblind} and~\ref{kill-loners} to~$G$.}
$\ensuremath{\mathcal S}\gets{}$sinks of~$G$\nllabel{startinit}\; $L\gets{}$array with
$n$~entries\; \lForEach(\nllabel{endinit}\tcp*[f]{\textrm{$L[v]=\{s\}$ for any sink~$s\in V$ will mean that~$v$ only reaches the sink~$s$.}}){$v\in
\ensuremath{\mathcal S}$}{$L[v]\gets\{v\}$} \ForEach(\nllabel{dynprog}){$v\in
V\setminus \ensuremath{\mathcal S}$ in reverse topological
order}{$L[v]\gets\smashoperator{\bigcap\limits_{u\in N^+(v)}}L[u]$\nllabel{lin:bicap}\tcp*[f]{\textrm{Invariant: the intersection contains at most one sink.}}}
\ForEach(\nllabel{apprule1}\tcp*[f]{\rm Application of \autoref{halfblind}}
){$v\in V$ with $L[v]=\emptyset$}{
\ForEach(\nllabel{arrayhere}){$w\in N^+(v)$ with $L[w]=\{s\}$ for some~$s\in \ensuremath{\mathcal S}$ and
$w\notin \ensuremath{\mathcal S}$}{ \lIf(\nllabel{isarcthere}){$(v,s)\notin A$}{add
$(v,s)$ with $\ensuremath{\omega}{}(v,s):=0$ to~$A$}
$\ensuremath{\omega}{}(v,s)\gets\ensuremath{\omega}{}(v,s)+\ensuremath{\omega}{}(v,w)$\nllabel{weiinc}\; delete
arc $(v,w)$\nllabel{delarc}\;} }
\ForEach(\nllabel{apprule2}\tcp*[f]{\rm Application of
\autoref{kill-loners}}){
$v\in V\setminus \ensuremath{\mathcal S}$ such that $L[v]=\{s\}$ for some~$s\in \ensuremath{\mathcal S}$}{delete
vertex~$v$\nllabel{apprule2end}}
\Return{$(G,\ensuremath{\omega}{})$}\nllabel{returnred}\;
\end{algorithm}
\begin{lemma}\label{lem:datareductionrules}
Given a directed acyclic graph~$G$ with weights $\ensuremath{\omega}{}$, in
$O(n+m)$~time \autoref{reducealg} produces a directed acyclic
graph~$G'$ with weights $\ensuremath{\omega}{}'$ such that $G'$~is exhaustively
reduced with respect to \autoref{halfblind} and \autoref{kill-loners}.
In particular, $(G,\ensuremath{\omega}{},k)$ is a yes-instance if and only
if~$(G',\ensuremath{\omega}{}',k)$ is a yes-instance.
\end{lemma}
\begin{proof}
We first discuss the semantics of \autoref{reducealg}, then its running time. After \autoref{lin:bicap}, $L[v]=\{s\}$ for some vertex~$v$ if~$v$ can reach exactly one sink~$s$ and $L[v]=\emptyset$ otherwise: this is, by definition, true for all~$L[v]$ with~$v\in \ensuremath{\mathcal S}$. For $v\in V\setminus \ensuremath{\mathcal S}$ it also holds, since $v$~can reach exactly one sink~$s$ if and only if all of its out-neighbors~$u\in N^+(v)$ can reach~$s$ and no other sinks, that is, if and only if $L[u]=\{s\}$ for all out-neighbors~$u\in N^+(v)$ of~$v$.
Hence, the loop in lines~\ref{apprule1}--\ref{delarc} applies
\autoref{halfblind} to all arcs to which \autoref{halfblind} is
applicable. Moreover, \autoref{halfblind} does not change which sinks
are reachable from any vertex and, hence, cannot create new arcs to
which \autoref{halfblind} may be applied. Hence, when reaching
\autoref{apprule2}, the graph will be exhaustively reduced with respect to
\autoref{halfblind} and we do not have to update the array~$L$.
The loop in lines~\ref{apprule2} and~\ref{apprule2end} now applies
\autoref{kill-loners}, which is allowed, since the graph is exhaustively reduced
with respect to \autoref{halfblind}. Moreover, an application of
\autoref{kill-loners} cannot create new vertices to which
\autoref{kill-loners} may become applicable or arcs to which
\autoref{halfblind} may become applicable. Hence, \autoref{returnred}
indeed returns a graph that is exhaustively reduced with respect to
both data reduction rules.
It remains to analyze the running time. Obviously, lines \ref{startinit}--\ref{endinit} of \autoref{reducealg} work in $O(n)$~time. To execute \autoref{dynprog} in $O(n+m)$~time, we iterate over the vertices in~$V\setminus \ensuremath{\mathcal S}$ in reverse topological order, which can be computed in $O(n+m)$~time \citep[Section~22.4]{CLRS01}. Hence, when computing~$L[v]$ for some vertex in \autoref{dynprog}, we already know the values~$L[u]$ for all~$u\in N^+(v)$. Moreover, $L[v]$~is the intersection of sets with at most one element and, therefore, also contains at most one element. It follows that we can compute~$L[v]$ in $O(|N^+(v)|)$~time for each vertex~$v\in V\setminus \ensuremath{\mathcal S}$ and, therefore, in $O(n+m)$~total time for all vertices. The rest of the algorithm only iterates once over all arcs and vertices. Hence, to show that it works in $O(n+m)$~time, it remains to show how to execute lines~\ref{isarcthere} and~\ref{weiinc} in constant time.
Herein, the main difficulty is that an adjacency list cannot answer queries of the form ``$(v,s)\in A$?'' in constant time. However, since we earlier required to iterate over all out-neighbors of a vertex~$v$ in~$O(|N^+(v)|)$~time, we cannot just use an adjacency matrix instead. We exploit a different trick, which, for the sake of clarity, is not made explicit in the pseudo code: assume that, when considering a vertex~$v\in V$ in \autoref{apprule1}, we have an $n$-element array~$B$ such that~$B[s]$ holds a pointer to the value~$\ensuremath{\omega}(v,s)$ if~$(v,s)\in A$ and $B[s]=\bot$ otherwise. Then, we could in constant time check in \autoref{isarcthere} whether~$B[s]=\bot$ to find out whether $(v,s)\in A$ and, if this is the case, get a pointer to (and increment) the weight~$\ensuremath{\omega}(v,s)$ in constant time in \autoref{weiinc}. However, we cannot afford initializing an \(n\)-entry array~\(B\) for each vertex~\(v\in V\) and we cannot make assumptions on the value of uninitialized entries. Luckily, we access~$B[s]$ only if there is a vertex~$w\in N^+(v)$ with $L[w]=\{s\}$ for some~$s$. Hence, we can create an \(n\)-entry array~\(B\) once in the beginning of the algorithm and then, between lines~\ref{apprule1} and~\ref{arrayhere}, set up~\(B\) for~\(v\in V\) as follows: for each~$w\in N^+(v)$ with $L[w]=\{s\}$, set $B[s]:=\bot$. Then, for each $s\in N^+(v)$ with $s\in\ensuremath{\mathcal S}$, let $B[s]$ point to $\ensuremath{\omega}(v,s)$. \qed
\end{proof}
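For illustration, the computation of the array~$L$ and the exhaustive application of both data reduction rules can be sketched in Python as follows. This is a simplified sketch operating on dictionaries; names are ours, and it does not attempt the constant-time array trick from the proof above.

```python
def reduce_dag(succ, weight):
    """Exhaustively apply the two data reduction rules to a DAG given by
    `succ` (vertex -> list of out-neighbors) and `weight` (arc -> weight).
    Returns the reduced (succ, weight)."""
    vertices = set(succ) | {w for ws in succ.values() for w in ws}
    sinks = {v for v in vertices if not succ.get(v)}
    # reverse topological order: successors first
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for w in succ.get(v, []):
                visit(w)
            order.append(v)
    for v in vertices:
        visit(v)

    # L[v] = {s} if v reaches exactly one sink s, else the empty set
    L = {}
    for v in order:
        if v in sinks:
            L[v] = {v}
        else:
            L[v] = set.intersection(*(L[w] for w in succ[v]))

    # Rule 1: redirect arcs (v, w) where v reaches several sinks
    # and the non-sink w reaches exactly one sink s
    new_succ = {v: [] for v in vertices}
    new_weight = {}
    for v in vertices:
        for w in succ.get(v, []):
            target = w
            if not L[v] and w not in sinks and len(L[w]) == 1:
                (target,) = L[w]        # redirect (v, w) to (v, s)
            new_weight[(v, target)] = new_weight.get((v, target), 0) + weight[(v, w)]
            if target not in new_succ[v]:
                new_succ[v].append(target)

    # Rule 2: delete non-sink vertices that reach exactly one sink
    dead = {v for v in vertices if v not in sinks and len(L[v]) == 1}
    new_succ = {v: [w for w in ws if w not in dead]
                for v, ws in new_succ.items() if v not in dead}
    new_weight = {(v, w): x for (v, w), x in new_weight.items()
                  if v not in dead and w not in dead}
    return new_succ, new_weight
```

As in the lemma, applying Rule 1 before Rule 2 means that the array~$L$ never has to be recomputed.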
\noindent
\autoref{lem:datareductionrules} shows that we can exhaustively apply
the two Reduction Rules~\ref{halfblind} and~\ref{kill-loners} in linear
time using \autoref{reducealg}. A natural approach for evaluating the
quality of our preprocessing would be to provide a performance guarantee
in terms of small problem kernels. Unfortunately, in
\autoref{sec:kkern} we show that, under widely accepted
complexity-theoretic assumptions, there is no problem kernel with size
polynomial in~\(k\) for \textsc{DAG Partitioning}{}. Nevertheless, the next section shows
that our data reduction technique performs remarkably well in empirical
tests. Furthermore, we show that the running time of
\autoref{alg:simple-st} is significantly improved when it is applied
after \autoref{reducealg}. We achieve the largest speedup of
\autoref{alg:simple-st} by interleaving the application of
\autoref{reducealg} and the branching; a technique that generally works
well for search tree algorithms~\citep{NR00}.
\subsection{Experimental evaluation}\label{sec:experiments}
In this section, we aim to give a proof of concept of our $O(2^k\cdot(n+m))$-time
search tree algorithm presented in \autoref{sec:klintime} by demonstrating to what
extent instances of \textsc{DAG Partitioning}{} are solvable within a time frame of five minutes.
Moreover, using the optimal solutions found by our algorithm,
we evaluate the quality of a heuristic presented by \citet{LBK09},
which can be considered a variant of our search tree algorithm (\autoref{alg:simple-st}):
The difference is that, while our search tree algorithm branches into all possibilities
of putting a vertex into a connected component with some sink, the heuristic just puts
each vertex into the connected component with the sink it would be most expensive
to disconnect from.
This is the strategy described by \citet{LBK09} as yielding the best results and is described in more detail
by \citet{SHE+13}. The pseudocode of the heuristic (\autoref{alg:simple-trees}) can be found in \autoref{sec:tree}, where we prove that the heuristic works optimally on trees.
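The described greedy strategy can be sketched in Python as follows; this is our reconstruction from the description above, not the original code of \citet{LBK09}, and names are ours.

```python
def greedy_dag_partitioning(succ, weight):
    """Heuristic sketch: process the vertices of a DAG (given by `succ`,
    vertex -> list of out-neighbors, and arc weights `weight`) in reverse
    topological order and associate each vertex with the sink it would be
    most expensive to disconnect from, deleting the arcs towards the other
    sinks. Returns the set of deleted arcs."""
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for w in succ.get(v, []):
                visit(w)
            order.append(v)           # reverse topological: successors first
    for v in succ:
        visit(v)

    L, S = {}, set()                  # L[v] = sink associated with v
    for v in order:
        out = succ.get(v, [])
        if not out:                   # v is a sink
            L[v] = v
            continue
        # total arc weight connecting v to each feasible sink
        cost = {}
        for w in out:
            cost[L[w]] = cost.get(L[w], 0) + weight[(v, w)]
        s = max(cost, key=cost.get)   # most expensive sink to disconnect from
        L[v] = s
        S |= {(v, w) for w in out if L[w] != s}
    return S
```

In contrast to the search tree algorithm, this greedy variant commits to a single sink per vertex and therefore runs in linear time, at the price of possibly missing the optimum.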
\paragraph{Implementation details} We implemented the search tree algorithm
as well as the heuristic in three variants:
\begin{enumerate}
\item without data reduction,
\item with initially applying the data reduction algorithm presented in
\autoref{reducealg}, and
\item with interleaving the data reduction using \autoref{reducealg}
with the branching of \autoref{alg:simple-st}.
\end{enumerate}
The source code uses about 1000 lines of C++ and is freely
available.\footnote{\url{http://fpt.akt.tu-berlin.de/dagpart/}} The
experiments were run on a computer with a 3.6\,GHz Intel Xeon processor
and 64\,GiB RAM under Linux~3.2.0, where the source code has been
compiled using the GNU C++ compiler version~4.7.2 and using the
highest optimization level~(-O3).
\paragraph{Data} We tried to apply our algorithm to the data set
described by \citet{LBK09}; unfortunately, its optimum partitioning set{}s have
too large a weight to be found by our algorithm.\footnote{The exact
weights of the optimum partitioning set{}s remain unknown, since our algorithm
could not compute them within several hours.} In order to prove the
feasibility of solving large instances with \emph{small}
minimum partitioning set{}s, we generated artificial instances. Herein, however,
we stick to the clustering motivation of \textsc{DAG Partitioning}{} and test our algorithm
using simulated citation networks: vertices in a graph represent
articles and if an article~$v$ cites an article~$w$, there is an
arc~$(v,w)$. Herein, we consider only directed acyclic graphs, which
model that an article only cites older articles. A partitioning of such
a network into connected components of which each contains only one sink
can be interpreted as a clustering into different topics of which we
want to identify the origins.
To simulate citation networks, we employ preferential attachment graphs---a random graph model commonly used to model citations between articles~\citep{Pri76,BA99,JNB03}. Preferential attachment graphs model the natural growth of citation networks, in which new articles are published over time and with high probability cite the already highly cited articles. Indeed, \citet{JNB03} empirically verified that in this preferential attachment model, the probability of an article being cited is linear in the number of times the article has been cited in the past.
To create a preferential attachment graph, we first choose two parameters: the number~$c$ of sinks to create and the maximum outdegree~$d$ of each vertex. After creating the $c$~sinks, $n$~new vertices are introduced one after another. After introducing a vertex, we add to it $d$~outgoing unit-weight arcs: for each of these \(d\)~outgoing arcs, the target is chosen independently at random among the previously introduced vertices such that each vertex is chosen as the target with probability proportional to its indegree plus one. We do not add an outgoing arc twice; hence, a vertex may end up with fewer than \(d\)~outgoing arcs if the targets of two arcs to be added coincide.
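The generation procedure above can be sketched as follows. This is a minimal Python illustration, not the C++ implementation used in the experiments; the function name \texttt{pa\_dag} is ours.

```python
import random

def pa_dag(c, n, d, seed=0):
    """Generate a preferential-attachment DAG: start from c sinks, then
    introduce n vertices one after another, each drawing d outgoing arcs
    whose targets are chosen among all previously introduced vertices
    with probability proportional to indegree + 1.  Duplicate arcs are
    dropped, so a vertex may end up with fewer than d outgoing arcs."""
    rng = random.Random(seed)
    indeg = [0] * c                       # the c initial sinks
    arcs = set()
    for v in range(c, c + n):
        indeg.append(0)
        for _ in range(d):
            # preferential attachment: the weight of u is indeg[u] + 1
            u = rng.choices(range(v), weights=[indeg[x] + 1 for x in range(v)])[0]
            if (v, u) not in arcs:        # do not add an arc twice
                arcs.add((v, u))
                indeg[u] += 1
    return c + n, arcs
```

Since every arc points from a newer vertex to an older one, the generated graph is acyclic, and the only sinks are the $c$ initial vertices.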
We compared our algorithm to the heuristic of \citet{LBK09} on these
graphs, but our algorithm could solve only instances with up to 300~arcs optimally,
since the optimum solution weight grows too quickly with the size of the
generated graphs. To show the running time behavior of our algorithm on
larger graphs with small solution sizes, we employ an additional
approach: we generate multiple connected components, each being a
preferential attachment graph with a single sink, and randomly add
$k$~additional arcs between these connected components in such a way
that the graph remains acyclic.
Then, obviously, an optimal partitioning set{} cannot be larger than~$k$.
We call the set of the $k$~randomly added arcs the \emph{embedded partitioning set};
it can be viewed as noise in data that otherwise clusters well.
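This construction can be sketched as follows. The Python sketch below is ours and simplified for brevity: inside each component it chooses arc targets uniformly at random rather than by preferential attachment, and the name \texttt{noisy\_instance} is hypothetical.

```python
import random

def noisy_instance(num_components, comp_size, d, k, seed=0):
    """Build a test instance: several connected components, each a DAG
    with a single sink, joined by k random inter-component arcs that keep
    the graph acyclic.  The k added arcs form the 'embedded partitioning
    set', so an optimal partitioning set has weight at most k."""
    rng = random.Random(seed)
    n = num_components * comp_size
    arcs = set()
    for v in range(n):
        base = (v // comp_size) * comp_size    # first vertex of v's component
        for _ in range(min(d, v - base)):      # vertex 'base' stays the sink
            arcs.add((v, rng.randrange(base, v)))
    embedded = set()
    while len(embedded) < k:
        v, u = rng.randrange(n), rng.randrange(n)
        # only add arcs from a globally newer to a globally older vertex
        # in a different component, so acyclicity is preserved
        if v > u and v // comp_size != u // comp_size and (v, u) not in arcs:
            embedded.add((v, u))
    return arcs | embedded, embedded
```

Because all arcs respect the global vertex order, the composed graph is acyclic, and removing the $k$ embedded arcs separates the components again.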
\begin{figure}
\centering\ref{dagp-legends}
\begin{tikzpicture}
\begin{axis}[ymin=0, ymax=7, ylabel=running time,
xlabel=weight~$k$ of optimal partitioning set{}, y unit=s,
width=0.45\textwidth, legend to name=dagp-legends, legend
columns=-1, legend style={/tikz/every even column/.append
style={column sep=1cm}}, xmax=200.5, xmin=159.5]
\addplot[color=black,mark=o, only marks] table
[x=STiDRk, y=MCiDRs, col sep=comma] {
V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18764908,547,1074,2.05,6.27,2.5,1.16,14.4,98.77,160,160,160,160,160
1000000,18770910,714,1408,2.03,6.25,2.51,1.15,13.52,136.53,161,161,161,161,161
1000000,18766714,507,994,2.06,6.24,2.5,1.15,15.54,192.47,162,162,162,162,162
1000000,18746771,384,748,2.04,6.2,2.49,1.15,17.38,117.24,163,163,163,163,163
1000000,18759591,491,962,2.06,6.21,2.51,1.15,15.81,168.18,164,164,164,164,164
1000000,18779564,517,1014,2.03,6.24,2.5,1.15,17.86,223.77,165,165,165,165,165
1000000,18760003,451,882,2.05,6.2,2.49,1.15,22.45,179.45,166,166,166,166,166
1000000,18764259,4697,9481,2.07,6.4,2.61,1.19,20.93,,167,167,167,167,
1000000,18762325,790,1562,2.05,6.31,2.53,1.15,25.32,288.61,168,168,168,168,168
1000000,18768885,603,1187,2.06,6.21,2.51,1.3,25.59,,169,169,169,169,
1000000,18772622,636,1252,2.03,6.26,2.5,1.14,26.32,442.82,170,170,170,170,170
1000000,18763249,1083,2151,2.05,6.33,2.52,1.43,29.52,430.91,171,171,171,171,171
1000000,18755903,625,1230,2.04,6.23,2.5,1.15,35.14,343.45,172,172,172,172,172
1000000,18752502,498,976,2.05,6.28,2.5,1.15,34.17,541.18,173,173,173,173,173
1000000,18760875,758,1496,2.05,6.23,2.5,1.15,42.98,570.39,174,174,174,174,174
1000000,18761487,639,1258,2.05,6.24,2.5,1.14,47.11,711.18,175,175,175,175,175
1000000,18744904,479,938,2.04,6.26,2.5,1.15,42.98,491.98,176,176,176,176,176
1000000,18753844,824,1629,2.05,6.26,2.51,1.15,50.1,741.57,177,177,177,177,177
1000000,18757283,436,853,2.04,6.23,2.48,1.16,58.23,644.88,178,178,178,178,178
1000000,18760997,1217,2419,2.06,6.3,2.52,1.15,59.77,867.64,179,179,179,179,179
1000000,18764816,816,1613,2.05,6.22,2.51,1.15,68.82,,180,180,180,180,
1000000,18758176,868,1716,2.05,6.28,2.51,1.15,103.09,1402.45,181,181,181,181,181
1000000,18762995,2117,4222,2.05,6.36,2.56,1.15,87.46,,182,182,182,182,
1000000,18763594,614,1208,2.05,6.26,2.49,1.15,124.71,,183,183,183,183,
1000000,18762792,742,1464,2.05,6.25,2.51,1.14,115.92,2026.76,184,184,184,184,184
1000000,18763622,729,1438,2.05,6.3,2.51,1.16,152.4,2601.62,185,185,185,185,185
1000000,18772695,596,1172,2.05,6.28,2.5,1.15,150.19,,186,186,186,186,
1000000,18765267,575,1130,2.04,6.19,2.49,1.15,147.16,,187,187,187,187,
1000000,18759066,1916,3827,2.06,6.34,2.55,1.15,157.14,1630.15,188,188,188,188,188
1000000,18758051,602,1184,2.04,6.19,2.51,1.15,180.95,,189,189,189,189,
1000000,18759017,698,1376,2.04,6.21,2.5,1.14,239.23,,190,190,190,190,
1000000,18765728,6702,13631,2.07,6.49,2.67,1.15,343.69,,191,191,191,191,
1000000,18764629,794,1571,2.04,6.2,2.51,1.19,420.52,,192,192,192,192,
1000000,18760556,806,1594,2.05,6.28,2.5,1.15,346.09,,193,193,193,193,
1000000,18759452,670,1321,2.06,6.3,2.52,1.16,372.98,,194,194,194,194,
1000000,18760070,555,1091,2.06,6.25,2.49,1.18,450.42,,195,195,195,195,
1000000,18759508,4102,8255,2.07,6.48,2.61,1.16,516.69,,196,196,196,196,
1000000,18759429,820,1620,2.05,6.29,2.39,1.13,521.22,,197,197,197,197,
1000000,18756012,922,1825,2.06,6.27,2.52,1.15,593.02,,198,198,198,198,
1000000,18753561,1847,3680,2.04,6.35,2.42,1.14,,,199,199,199,,
1000000,18754132,1151,2284,2.05,6.35,2.52,1.15,748.9,,200,200,200,200,
};
\addlegendentry{interleaved data reduction};
\addplot[color=black,mark=+, only marks] table
[x=STiDRk, y=MCDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18764908,547,1074,2.05,6.27,2.5,1.16,14.4,98.77,160,160,160,160,160
1000000,18770910,714,1408,2.03,6.25,2.51,1.15,13.52,136.53,161,161,161,161,161
1000000,18766714,507,994,2.06,6.24,2.5,1.15,15.54,192.47,162,162,162,162,162
1000000,18746771,384,748,2.04,6.2,2.49,1.15,17.38,117.24,163,163,163,163,163
1000000,18759591,491,962,2.06,6.21,2.51,1.15,15.81,168.18,164,164,164,164,164
1000000,18779564,517,1014,2.03,6.24,2.5,1.15,17.86,223.77,165,165,165,165,165
1000000,18760003,451,882,2.05,6.2,2.49,1.15,22.45,179.45,166,166,166,166,166
1000000,18764259,4697,9481,2.07,6.4,2.61,1.19,20.93,,167,167,167,167,
1000000,18762325,790,1562,2.05,6.31,2.53,1.15,25.32,288.61,168,168,168,168,168
1000000,18768885,603,1187,2.06,6.21,2.51,1.3,25.59,,169,169,169,169,
1000000,18772622,636,1252,2.03,6.26,2.5,1.14,26.32,442.82,170,170,170,170,170
1000000,18763249,1083,2151,2.05,6.33,2.52,1.43,29.52,430.91,171,171,171,171,171
1000000,18755903,625,1230,2.04,6.23,2.5,1.15,35.14,343.45,172,172,172,172,172
1000000,18752502,498,976,2.05,6.28,2.5,1.15,34.17,541.18,173,173,173,173,173
1000000,18760875,758,1496,2.05,6.23,2.5,1.15,42.98,570.39,174,174,174,174,174
1000000,18761487,639,1258,2.05,6.24,2.5,1.14,47.11,711.18,175,175,175,175,175
1000000,18744904,479,938,2.04,6.26,2.5,1.15,42.98,491.98,176,176,176,176,176
1000000,18753844,824,1629,2.05,6.26,2.51,1.15,50.1,741.57,177,177,177,177,177
1000000,18757283,436,853,2.04,6.23,2.48,1.16,58.23,644.88,178,178,178,178,178
1000000,18760997,1217,2419,2.06,6.3,2.52,1.15,59.77,867.64,179,179,179,179,179
1000000,18764816,816,1613,2.05,6.22,2.51,1.15,68.82,,180,180,180,180,
1000000,18758176,868,1716,2.05,6.28,2.51,1.15,103.09,1402.45,181,181,181,181,181
1000000,18762995,2117,4222,2.05,6.36,2.56,1.15,87.46,,182,182,182,182,
1000000,18763594,614,1208,2.05,6.26,2.49,1.15,124.71,,183,183,183,183,
1000000,18762792,742,1464,2.05,6.25,2.51,1.14,115.92,2026.76,184,184,184,184,184
1000000,18763622,729,1438,2.05,6.3,2.51,1.16,152.4,2601.62,185,185,185,185,185
1000000,18772695,596,1172,2.05,6.28,2.5,1.15,150.19,,186,186,186,186,
1000000,18765267,575,1130,2.04,6.19,2.49,1.15,147.16,,187,187,187,187,
1000000,18759066,1916,3827,2.06,6.34,2.55,1.15,157.14,1630.15,188,188,188,188,188
1000000,18758051,602,1184,2.04,6.19,2.51,1.15,180.95,,189,189,189,189,
1000000,18759017,698,1376,2.04,6.21,2.5,1.14,239.23,,190,190,190,190,
1000000,18765728,6702,13631,2.07,6.49,2.67,1.15,343.69,,191,191,191,191,
1000000,18764629,794,1571,2.04,6.2,2.51,1.19,420.52,,192,192,192,192,
1000000,18760556,806,1594,2.05,6.28,2.5,1.15,346.09,,193,193,193,193,
1000000,18759452,670,1321,2.06,6.3,2.52,1.16,372.98,,194,194,194,194,
1000000,18760070,555,1091,2.06,6.25,2.49,1.18,450.42,,195,195,195,195,
1000000,18759508,4102,8255,2.07,6.48,2.61,1.16,516.69,,196,196,196,196,
1000000,18759429,820,1620,2.05,6.29,2.39,1.13,521.22,,197,197,197,197,
1000000,18756012,922,1825,2.06,6.27,2.52,1.15,593.02,,198,198,198,198,
1000000,18753561,1847,3680,2.04,6.35,2.42,1.14,,,199,199,199,,
1000000,18754132,1151,2284,2.05,6.35,2.52,1.15,748.9,,200,200,200,200,
};
\addlegendentry{initial data reduction};
\addplot[color=black,mark=star, only marks]
table [x=STiDRk, y=MCs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18764908,547,1074,2.05,6.27,2.5,1.16,14.4,98.77,160,160,160,160,160
1000000,18770910,714,1408,2.03,6.25,2.51,1.15,13.52,136.53,161,161,161,161,161
1000000,18766714,507,994,2.06,6.24,2.5,1.15,15.54,192.47,162,162,162,162,162
1000000,18746771,384,748,2.04,6.2,2.49,1.15,17.38,117.24,163,163,163,163,163
1000000,18759591,491,962,2.06,6.21,2.51,1.15,15.81,168.18,164,164,164,164,164
1000000,18779564,517,1014,2.03,6.24,2.5,1.15,17.86,223.77,165,165,165,165,165
1000000,18760003,451,882,2.05,6.2,2.49,1.15,22.45,179.45,166,166,166,166,166
1000000,18764259,4697,9481,2.07,6.4,2.61,1.19,20.93,,167,167,167,167,
1000000,18762325,790,1562,2.05,6.31,2.53,1.15,25.32,288.61,168,168,168,168,168
1000000,18768885,603,1187,2.06,6.21,2.51,1.3,25.59,,169,169,169,169,
1000000,18772622,636,1252,2.03,6.26,2.5,1.14,26.32,442.82,170,170,170,170,170
1000000,18763249,1083,2151,2.05,6.33,2.52,1.43,29.52,430.91,171,171,171,171,171
1000000,18755903,625,1230,2.04,6.23,2.5,1.15,35.14,343.45,172,172,172,172,172
1000000,18752502,498,976,2.05,6.28,2.5,1.15,34.17,541.18,173,173,173,173,173
1000000,18760875,758,1496,2.05,6.23,2.5,1.15,42.98,570.39,174,174,174,174,174
1000000,18761487,639,1258,2.05,6.24,2.5,1.14,47.11,711.18,175,175,175,175,175
1000000,18744904,479,938,2.04,6.26,2.5,1.15,42.98,491.98,176,176,176,176,176
1000000,18753844,824,1629,2.05,6.26,2.51,1.15,50.1,741.57,177,177,177,177,177
1000000,18757283,436,853,2.04,6.23,2.48,1.16,58.23,644.88,178,178,178,178,178
1000000,18760997,1217,2419,2.06,6.3,2.52,1.15,59.77,867.64,179,179,179,179,179
1000000,18764816,816,1613,2.05,6.22,2.51,1.15,68.82,,180,180,180,180,
1000000,18758176,868,1716,2.05,6.28,2.51,1.15,103.09,1402.45,181,181,181,181,181
1000000,18762995,2117,4222,2.05,6.36,2.56,1.15,87.46,,182,182,182,182,
1000000,18763594,614,1208,2.05,6.26,2.49,1.15,124.71,,183,183,183,183,
1000000,18762792,742,1464,2.05,6.25,2.51,1.14,115.92,2026.76,184,184,184,184,184
1000000,18763622,729,1438,2.05,6.3,2.51,1.16,152.4,2601.62,185,185,185,185,185
1000000,18772695,596,1172,2.05,6.28,2.5,1.15,150.19,,186,186,186,186,
1000000,18765267,575,1130,2.04,6.19,2.49,1.15,147.16,,187,187,187,187,
1000000,18759066,1916,3827,2.06,6.34,2.55,1.15,157.14,1630.15,188,188,188,188,188
1000000,18758051,602,1184,2.04,6.19,2.51,1.15,180.95,,189,189,189,189,
1000000,18759017,698,1376,2.04,6.21,2.5,1.14,239.23,,190,190,190,190,
1000000,18765728,6702,13631,2.07,6.49,2.67,1.15,343.69,,191,191,191,191,
1000000,18764629,794,1571,2.04,6.2,2.51,1.19,420.52,,192,192,192,192,
1000000,18760556,806,1594,2.05,6.28,2.5,1.15,346.09,,193,193,193,193,
1000000,18759452,670,1321,2.06,6.3,2.52,1.16,372.98,,194,194,194,194,
1000000,18760070,555,1091,2.06,6.25,2.49,1.18,450.42,,195,195,195,195,
1000000,18759508,4102,8255,2.07,6.48,2.61,1.16,516.69,,196,196,196,196,
1000000,18759429,820,1620,2.05,6.29,2.39,1.13,521.22,,197,197,197,197,
1000000,18756012,922,1825,2.06,6.27,2.52,1.15,593.02,,198,198,198,198,
1000000,18753561,1847,3680,2.04,6.35,2.42,1.14,,,199,199,199,,
1000000,18754132,1151,2284,2.05,6.35,2.52,1.15,748.9,,200,200,200,200,
};
\addlegendentry{no data reduction};
\end{axis}
\end{tikzpicture}\hfill{}
\begin{tikzpicture}
\begin{axis}[ylabel=running time, xlabel=weight~$k$ of
optimal partitioning set{}, y unit=s, xmax=200.5, xmin=159.5, ymax=400,
width=0.45\textwidth]
\addplot[color=black,mark=o, only marks] table
[x=STiDRk, y=STiDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18764908,547,1074,2.05,6.27,2.5,1.16,14.4,98.77,160,160,160,160,160
1000000,18770910,714,1408,2.03,6.25,2.51,1.15,13.52,136.53,161,161,161,161,161
1000000,18766714,507,994,2.06,6.24,2.5,1.15,15.54,192.47,162,162,162,162,162
1000000,18746771,384,748,2.04,6.2,2.49,1.15,17.38,117.24,163,163,163,163,163
1000000,18759591,491,962,2.06,6.21,2.51,1.15,15.81,168.18,164,164,164,164,164
1000000,18779564,517,1014,2.03,6.24,2.5,1.15,17.86,223.77,165,165,165,165,165
1000000,18760003,451,882,2.05,6.2,2.49,1.15,22.45,179.45,166,166,166,166,166
1000000,18764259,4697,9481,2.07,6.4,2.61,1.19,20.93,,167,167,167,167,
1000000,18762325,790,1562,2.05,6.31,2.53,1.15,25.32,288.61,168,168,168,168,168
1000000,18768885,603,1187,2.06,6.21,2.51,1.3,25.59,,169,169,169,169,
1000000,18772622,636,1252,2.03,6.26,2.5,1.14,26.32,442.82,170,170,170,170,170
1000000,18763249,1083,2151,2.05,6.33,2.52,1.43,29.52,430.91,171,171,171,171,171
1000000,18755903,625,1230,2.04,6.23,2.5,1.15,35.14,343.45,172,172,172,172,172
1000000,18752502,498,976,2.05,6.28,2.5,1.15,34.17,541.18,173,173,173,173,173
1000000,18760875,758,1496,2.05,6.23,2.5,1.15,42.98,570.39,174,174,174,174,174
1000000,18761487,639,1258,2.05,6.24,2.5,1.14,47.11,711.18,175,175,175,175,175
1000000,18744904,479,938,2.04,6.26,2.5,1.15,42.98,491.98,176,176,176,176,176
1000000,18753844,824,1629,2.05,6.26,2.51,1.15,50.1,741.57,177,177,177,177,177
1000000,18757283,436,853,2.04,6.23,2.48,1.16,58.23,644.88,178,178,178,178,178
1000000,18760997,1217,2419,2.06,6.3,2.52,1.15,59.77,867.64,179,179,179,179,179
1000000,18764816,816,1613,2.05,6.22,2.51,1.15,68.82,,180,180,180,180,
1000000,18758176,868,1716,2.05,6.28,2.51,1.15,103.09,1402.45,181,181,181,181,181
1000000,18762995,2117,4222,2.05,6.36,2.56,1.15,87.46,,182,182,182,182,
1000000,18763594,614,1208,2.05,6.26,2.49,1.15,124.71,,183,183,183,183,
1000000,18762792,742,1464,2.05,6.25,2.51,1.14,115.92,2026.76,184,184,184,184,184
1000000,18763622,729,1438,2.05,6.3,2.51,1.16,152.4,2601.62,185,185,185,185,185
1000000,18772695,596,1172,2.05,6.28,2.5,1.15,150.19,,186,186,186,186,
1000000,18765267,575,1130,2.04,6.19,2.49,1.15,147.16,,187,187,187,187,
1000000,18759066,1916,3827,2.06,6.34,2.55,1.15,157.14,1630.15,188,188,188,188,188
1000000,18758051,602,1184,2.04,6.19,2.51,1.15,180.95,,189,189,189,189,
1000000,18759017,698,1376,2.04,6.21,2.5,1.14,239.23,,190,190,190,190,
1000000,18765728,6702,13631,2.07,6.49,2.67,1.15,343.69,,191,191,191,191,
1000000,18764629,794,1571,2.04,6.2,2.51,1.19,420.52,,192,192,192,192,
1000000,18760556,806,1594,2.05,6.28,2.5,1.15,346.09,,193,193,193,193,
1000000,18759452,670,1321,2.06,6.3,2.52,1.16,372.98,,194,194,194,194,
1000000,18760070,555,1091,2.06,6.25,2.49,1.18,450.42,,195,195,195,195,
1000000,18759508,4102,8255,2.07,6.48,2.61,1.16,516.69,,196,196,196,196,
1000000,18759429,820,1620,2.05,6.29,2.39,1.13,521.22,,197,197,197,197,
1000000,18756012,922,1825,2.06,6.27,2.52,1.15,593.02,,198,198,198,198,
1000000,18753561,1847,3680,2.04,6.35,2.42,1.14,,,199,199,199,,
1000000,18754132,1151,2284,2.05,6.35,2.52,1.15,748.9,,200,200,200,200,
};
\addplot[color=black,mark=+, only marks] table
[x=STDRk, y=STDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18764908,547,1074,2.05,6.27,2.5,1.16,14.4,98.77,160,160,160,160,160
1000000,18770910,714,1408,2.03,6.25,2.51,1.15,13.52,136.53,161,161,161,161,161
1000000,18766714,507,994,2.06,6.24,2.5,1.15,15.54,192.47,162,162,162,162,162
1000000,18746771,384,748,2.04,6.2,2.49,1.15,17.38,117.24,163,163,163,163,163
1000000,18759591,491,962,2.06,6.21,2.51,1.15,15.81,168.18,164,164,164,164,164
1000000,18779564,517,1014,2.03,6.24,2.5,1.15,17.86,223.77,165,165,165,165,165
1000000,18760003,451,882,2.05,6.2,2.49,1.15,22.45,179.45,166,166,166,166,166
1000000,18764259,4697,9481,2.07,6.4,2.61,1.19,20.93,,167,167,167,167,
1000000,18762325,790,1562,2.05,6.31,2.53,1.15,25.32,288.61,168,168,168,168,168
1000000,18768885,603,1187,2.06,6.21,2.51,1.3,25.59,,169,169,169,169,
1000000,18772622,636,1252,2.03,6.26,2.5,1.14,26.32,442.82,170,170,170,170,170
1000000,18763249,1083,2151,2.05,6.33,2.52,1.43,29.52,430.91,171,171,171,171,171
1000000,18755903,625,1230,2.04,6.23,2.5,1.15,35.14,343.45,172,172,172,172,172
1000000,18752502,498,976,2.05,6.28,2.5,1.15,34.17,541.18,173,173,173,173,173
1000000,18760875,758,1496,2.05,6.23,2.5,1.15,42.98,570.39,174,174,174,174,174
1000000,18761487,639,1258,2.05,6.24,2.5,1.14,47.11,711.18,175,175,175,175,175
1000000,18744904,479,938,2.04,6.26,2.5,1.15,42.98,491.98,176,176,176,176,176
1000000,18753844,824,1629,2.05,6.26,2.51,1.15,50.1,741.57,177,177,177,177,177
1000000,18757283,436,853,2.04,6.23,2.48,1.16,58.23,644.88,178,178,178,178,178
1000000,18760997,1217,2419,2.06,6.3,2.52,1.15,59.77,867.64,179,179,179,179,179
1000000,18764816,816,1613,2.05,6.22,2.51,1.15,68.82,,180,180,180,180,
1000000,18758176,868,1716,2.05,6.28,2.51,1.15,103.09,1402.45,181,181,181,181,181
1000000,18762995,2117,4222,2.05,6.36,2.56,1.15,87.46,,182,182,182,182,
1000000,18763594,614,1208,2.05,6.26,2.49,1.15,124.71,,183,183,183,183,
1000000,18762792,742,1464,2.05,6.25,2.51,1.14,115.92,2026.76,184,184,184,184,184
1000000,18763622,729,1438,2.05,6.3,2.51,1.16,152.4,2601.62,185,185,185,185,185
1000000,18772695,596,1172,2.05,6.28,2.5,1.15,150.19,,186,186,186,186,
1000000,18765267,575,1130,2.04,6.19,2.49,1.15,147.16,,187,187,187,187,
1000000,18759066,1916,3827,2.06,6.34,2.55,1.15,157.14,1630.15,188,188,188,188,188
1000000,18758051,602,1184,2.04,6.19,2.51,1.15,180.95,,189,189,189,189,
1000000,18759017,698,1376,2.04,6.21,2.5,1.14,239.23,,190,190,190,190,
1000000,18765728,6702,13631,2.07,6.49,2.67,1.15,343.69,,191,191,191,191,
1000000,18764629,794,1571,2.04,6.2,2.51,1.19,420.52,,192,192,192,192,
1000000,18760556,806,1594,2.05,6.28,2.5,1.15,346.09,,193,193,193,193,
1000000,18759452,670,1321,2.06,6.3,2.52,1.16,372.98,,194,194,194,194,
1000000,18760070,555,1091,2.06,6.25,2.49,1.18,450.42,,195,195,195,195,
1000000,18759508,4102,8255,2.07,6.48,2.61,1.16,516.69,,196,196,196,196,
1000000,18759429,820,1620,2.05,6.29,2.39,1.13,521.22,,197,197,197,197,
1000000,18756012,922,1825,2.06,6.27,2.52,1.15,593.02,,198,198,198,198,
1000000,18753561,1847,3680,2.04,6.35,2.42,1.14,,,199,199,199,,
1000000,18754132,1151,2284,2.05,6.35,2.52,1.15,748.9,,200,200,200,200,
};
\end{axis}
\end{tikzpicture}
  \caption{Comparison of the running time of \citet{LBK09}'s heuristic
    (left) with the running time of our search tree algorithm (right).
    All graphs have $10^6$~vertices and roughly $18\cdot 10^6$~arcs and
    were generated by adding $k$~random arcs between ten connected
    components, each being a preferential attachment graph on
    $10^5$~vertices with outdegree twenty and a single sink. The
    heuristic solved all 40~instances optimally. On the right-hand
    side, no instance was solved in less than an hour without data
    reduction.}
\label{fig:ktime}
\end{figure}
\paragraph{Experimental results}
\autoref{fig:ktime} compares the running time of the heuristic of
\citet{LBK09} to the running time of \autoref{alg:simple-st} with
increasing optimal partitioning set{} weight~$k$. On the left side, one can see
that applying the data reduction from \autoref{reducealg} slows down the
heuristic. This is not surprising: the heuristic itself runs in linear
time and, hence, instead of first shrinking the input instance by
\autoref{reducealg} in linear time, one might as well solve the
instance heuristically right away. On the right side, one can
observe that, as expected, the running time of \autoref{alg:simple-st}
grows exponentially in~$k$. We only show the running times of the
implementations with data reduction: without data reduction, we could
not solve any instance in less than an hour. We can solve instances
with~$k\leq 190$ optimally within five minutes. This allowed us to
verify that the heuristic solved all 40~generated instances optimally,
regardless of the type of data reduction applied.
\begin{figure}
\centering\ref{dagp-legends}
\begin{tikzpicture}
\begin{axis}[ylabel=running time, xmax=80000000, xlabel=input arcs,
y unit=s, change x base, axis base prefix={axis x base -6 prefix
{}}, x unit=10^6, width=0.45\textwidth]
\addplot[color=black,mark=o,only marks, each nth point=2] table [x=E,
y=MCiDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\addplot[color=black,mark=star, only marks, each nth point=2]
table [x=E, y=MCs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\addplot[color=black,mark=+, only marks, each nth point=2] table
[x=E, y=MCDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\end{axis}
\end{tikzpicture}\hfill{}
\begin{tikzpicture}
\begin{axis}[ymin=0,ylabel=running time, xmax=80000000,
xlabel=input arcs,change x base, axis base prefix={axis x base -6
prefix {}}, x unit=10^6, y unit=s, width=0.45\textwidth,ymax=400]
\addplot[color=black,mark=o,only marks] table
[x=E, y=STiDRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\end{axis}
\end{tikzpicture}
\caption{Comparison of the running times of \citet{LBK09}'s heuristic
  (left) and of our search tree algorithm
  (right). Without interleaved data reduction, the search tree solved
no instance in less than 5~minutes. The graphs were generated by
adding $k=190$ random arcs between ten connected components, each
being a preferential attachment graph with outdegree twenty and a
single sink. The heuristic solved all 80~instances optimally.}
\label{fig:runtime}
\end{figure}
\autoref{fig:runtime} compares the running time of the heuristic of
\citet{LBK09} to the running time of our \autoref{alg:simple-st} with
increasing graph size. While the running time of the heuristic grows
linearly with the graph size (left plot), no such trend is visible for
the search tree algorithm (right plot). The reason can be seen in
\autoref{fig:dreffect}: the
data reduction applied by \autoref{reducealg} initially shrinks most
input instances to about 2000~arcs in less than ten seconds. Thus, what
we observe in the right plot of \autoref{fig:runtime} is, to a large
extent, the running time of \autoref{alg:simple-st} for constant~$k=190$
and roughly constant graph size. Our search tree algorithm allowed us
to verify that the heuristic by \citet{LBK09} solved all 80~generated
instances optimally regardless of the type of data reduction applied.
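The benchmark instances described above can be reproduced in a few lines; the following is a sketch under stated assumptions (the function names \texttt{pref\_attachment\_dag} and \texttt{benchmark\_instance} are hypothetical, and the attachment rule is one plausible reading of ``preferential attachment graph with outdegree twenty and a single sink''):

```python
import random

def pref_attachment_dag(n, outdeg, rng):
    """Sketch: random DAG on vertices 0..n-1. Each new vertex v attaches
    min(outdeg, v) arcs to earlier vertices, chosen with probability
    proportional to their current degree. All arcs point from newer to
    older vertices, so vertex 0 is the unique sink (no outgoing arcs)."""
    arcs = []
    targets = [0]  # multiset of vertices; multiplicity tracks degree
    for v in range(1, n):
        chosen = set()
        while len(chosen) < min(outdeg, v):
            chosen.add(rng.choice(targets))  # degree-proportional pick
        for u in chosen:
            arcs.append((v, u))       # arc from newer vertex v to older u
            targets.extend([v, u])    # both endpoints gain one degree
    return arcs

def benchmark_instance(components=10, n=100, outdeg=20, k=190, seed=0):
    """Sketch of the experimental setup: `components` preferential
    attachment DAGs, each with a single sink, joined by k random arcs
    between distinct components (the arcs a solver should remove)."""
    rng = random.Random(seed)
    arcs = []
    for c in range(components):
        off = c * n  # disjoint vertex ranges per component
        arcs += [(off + v, off + u)
                 for v, u in pref_attachment_dag(n, outdeg, rng)]
    for _ in range(k):
        c1, c2 = rng.sample(range(components), 2)
        arcs.append((c1 * n + rng.randrange(n), c2 * n + rng.randrange(n)))
    return arcs
```

The experiments in \autoref{fig:runtime} use far larger components, but the structure (ten components, $k=190$ cross arcs) is the same.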
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ylabel=output arcs, xmax=80000000, xlabel=input arcs,
change x base, axis base prefix={axis x base -6 prefix {}}, x
unit=10^6, width=0.45\textwidth, ymax=7000]
\addplot[color=black,mark=+,only marks] table [x=E,
y=DRE, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\end{axis}
\end{tikzpicture}\hfill{}
\begin{tikzpicture}
\begin{axis}[ymin=0,ylabel=running time, xmax=80000000,
xlabel=input arcs,change x base, axis base prefix={axis x base -6
prefix {}}, x unit=10^6, y unit=s, width=0.45\textwidth]
\addplot[color=black,mark=+,only marks, each nth point=2] table
[x=E, y=DRs, col sep=comma] {V,E,DRV,DRE,DRs,MCiDRs,MCDRs,MCs,STiDRs,STDRs,MCiDRk,MCDRk,MCk,STiDRk,STDRk
1000000,18767086,651,1283,2.08,6.49,2.52,1.12,225.68,,190,190,190,190,
1050000,19709159,660,1300,2.17,6.89,2.65,1.16,276.93,,190,190,190,190,
1100000,20655825,469,919,2.27,7.21,2.76,1.22,232.47,,190,190,190,190,
1150000,21604341,2901,5801,2.4,7.79,2.99,1.28,235.17,,190,190,190,190,
1200000,22564319,820,1621,2.49,7.92,3.04,1.34,276.79,,190,190,190,190,
1250000,23531472,1394,2769,2.6,8.49,3.21,1.42,241.42,,190,190,190,190,
1300000,24480730,501,982,2.72,8.81,3.3,1.49,277.68,,190,190,190,190,
1350000,25440966,467,914,2.82,9.04,3.26,1.53,233.7,,190,190,190,190,
1400000,26383694,658,1296,2.93,9.45,3.56,1.59,237.37,,190,190,190,190,
1450000,27342944,585,1151,3.03,9.8,3.7,1.67,232.26,,190,190,190,190,
1500000,28291769,445,870,3.13,10.16,3.8,1.71,237,,190,190,190,190,
1550000,29269274,2200,4399,3.28,10.78,4.03,1.88,222.51,,190,190,190,190,
1600000,30211184,1070,2120,3.34,11.01,4.09,1.84,220.52,,190,190,190,190,
1650000,31153242,1061,2102,3.47,11.38,4.23,1.96,259.86,,190,190,190,190,
1700000,32118855,1043,2067,3.59,11.85,4.37,1.97,197.58,,190,190,190,190,
1750000,33087214,1549,3083,3.7,12.16,4.53,2.1,223.84,,190,190,190,190,
1800000,34052317,1038,2056,3.79,12.57,4.64,2.1,304.66,,190,190,190,190,
1850000,34969772,711,1402,3.9,12.93,4.75,2.17,247.05,,190,190,190,190,
1900000,35955033,664,1309,4.01,13.38,4.87,2.23,227.51,,190,190,190,190,
1950000,36919935,593,1166,4.12,13.69,5.02,2.34,240.66,,190,190,190,190,
2000000,37879080,846,1674,4.25,14.25,5.17,2.42,265.78,,190,190,190,190,
2050000,38826564,562,1104,4.33,14.41,5.29,2.44,238.87,,190,190,190,190,
2100000,39796860,1075,2130,4.45,14.89,5.45,2.51,200.9,,190,190,190,190,
2150000,40753264,698,1376,4.57,15.47,5.58,2.59,225.85,,190,190,190,190,
2200000,41687628,1652,3284,4.68,15.78,5.74,2.64,191.86,,190,190,190,190,
2250000,42676754,949,1878,4.78,16.06,5.84,2.7,211.87,,190,190,190,190,
2300000,43640247,1108,2199,4.9,16.6,5.99,2.83,219.43,,190,190,190,190,
2350000,44569595,7875,15886,5.05,17.64,6.35,2.85,231.67,,190,190,190,190,
2400000,45529873,509,999,5.11,17.22,6.22,2.92,199.19,,190,190,190,190,
2450000,46502969,1385,2752,5.24,17.96,6.4,3.25,224.79,,190,190,190,190,
2500000,47473942,739,1458,5.34,18.14,6.51,3.41,204.44,,190,190,190,190,
2550000,48407592,532,1044,5.47,18.53,6.64,3.13,236.06,,190,190,190,190,
2600000,49356528,869,1718,5.57,19.45,6.78,3.26,225.7,,190,190,190,190,
2650000,50353329,734,1448,5.69,19.34,6.92,3.37,207.39,,190,190,190,190,
2700000,51309157,567,1114,5.76,19.91,7.05,3.36,231.68,,190,190,190,190,
2750000,52280543,1271,2524,5.91,20.35,7.24,3.42,218.43,,190,190,190,190,
2800000,53230505,511,1002,6.03,20.42,7.32,3.58,209.18,,190,190,190,190,
2850000,54187746,563,1106,6.16,21.03,7.49,3.57,190.19,,190,190,190,190,
2900000,55161765,1032,2045,6.24,21.38,7.61,3.62,199.89,,190,190,190,190,
2950000,56118167,1224,2429,6.35,21.78,7.78,3.74,225.98,,190,190,190,190,
3000000,57082768,980,1942,6.46,22.25,7.9,3.74,241.76,,190,190,190,190,
3050000,58006703,746,1472,6.59,22.8,8.02,3.82,237.72,,190,190,190,190,
3100000,58997271,1432,2845,6.68,23.26,8.2,3.98,196.85,,190,190,190,190,
3150000,60003314,496,974,6.8,23.44,8.3,3.96,229.18,,190,190,190,190,
3200000,60919212,1745,3474,6.92,24.05,8.5,4.13,197.47,,190,190,190,190,
3250000,61874877,474,928,7.04,24.43,8.59,4.11,191.67,,190,190,190,190,
3300000,62849494,1218,2417,7.13,24.83,8.74,4.63,211.13,,190,190,190,190,
3350000,63798424,1131,2242,7.23,25.41,8.85,4.25,217.74,,190,190,190,190,
3400000,64745452,699,1378,7.36,25.69,9,4.42,235.04,,190,190,190,190,
3450000,65700700,434,848,7.47,26.04,9.1,4.4,205.8,,190,190,190,190,
3500000,66733434,669,1319,7.59,26.4,9.31,4.71,200.11,,190,190,190,190,
3550000,67646422,698,1376,7.68,26.82,9.38,4.54,209.94,,190,190,190,190,
3600000,68604668,647,1274,7.79,27.53,9.53,4.6,227.55,,190,190,190,190,
3650000,69587650,484,948,7.92,27.66,9.69,4.81,238.75,,190,190,190,190,
3700000,70541366,883,1746,8.01,28.28,9.84,4.77,246.52,,190,190,190,190,
3750000,71517645,450,880,8.12,28.81,9.94,4.83,205.23,,190,190,190,190,
3800000,72483078,588,1156,8.28,29.11,10.1,4.93,229.07,,190,190,190,190,
3850000,73419536,571,1122,8.32,29.37,10.24,5.35,247.33,,190,190,190,190,
3900000,74381081,918,1816,8.47,30.02,10.37,5.15,215.47,,190,190,190,190,
3950000,75353990,715,1410,8.57,30.13,10.5,5.18,205.36,,190,190,190,190,
4000000,76311956,769,1518,8.7,30.67,10.66,5.21,231.4,,190,190,190,190,
4050000,77313266,668,1316,8.8,30.95,10.77,5.39,228.89,,190,190,190,190,
4100000,78249789,1168,2317,8.94,31.71,10.97,5.35,224.48,,190,190,190,190,
4150000,79221466,549,1078,9.05,32.28,11.08,5.53,265.09,,190,190,190,190,
4200000,80154009,512,1004,7.38,35.59,10.97,5.4,226.14,,190,190,190,190,
};
\end{axis}
\end{tikzpicture}
\caption{Effect (left) and running time (right) of initially running
\autoref{reducealg} for data reduction. The graphs were generated by
adding $k=190$ random arcs between ten connected components, each
being a preferential attachment graph with outdegree twenty and a
single sink.}
\label{fig:dreffect}
\end{figure}
Finally, \autoref{fig:prefat} presents instances that could not be
optimally solved by \citet{LBK09}'s heuristic. The left plot shows
that, in instances with large embedded partitioning set{}s of several
hundred thousand arcs, the heuristic of \citet{LBK09} does not find the
embedded partitioning set{} but one that is about
5\textperthousand{} heavier. In all cases, the heuristic found the
same partitioning set{}s regardless of the type of data reduction applied. Note
that the plot only gives a lower bound on the deviation factor, since
there might be even better partitioning set{}s in the instances than the
embedded one; we were unable to compute the optimal partitioning set{}s in these
instances. In the right plot of \autoref{fig:prefat}, we used smaller
preferential attachment graphs (this time without embedded partitioning set{}s)
and see that the partitioning set{}s found by \citet{LBK09}'s heuristic
can exceed the optimal partitioning set{} by more than a factor of
two. Data reduction had no effect on the
quality of the partitioning set{}s found.
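The deviation factor plotted in the left plot is the ratio between the weight of the partitioning set{} found by the heuristic and the weight of the embedded one; since the embedded set need not be optimal, this ratio is only a lower bound on the true deviation. A minimal sketch (the function name is hypothetical; the example values are taken from the first measurement, where the heuristic returned weight 100254 against an embedded weight of 100000):

```python
def deviation_factor(heuristic_weight, embedded_weight):
    """Ratio of the heuristic's partitioning-set weight to the weight of
    the embedded partitioning set. Because the embedded set may itself
    be suboptimal, this is a lower bound on the deviation from the
    (unknown) optimum."""
    return heuristic_weight / embedded_weight
```

For the first data point this yields a factor of about 1.00254, i.e. roughly 2.5\textperthousand{} above the embedded weight.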
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xlabel=weight of embedded partitioning set{}, ylabel=deviation
factor, width=0.45\textwidth, yticklabel
style={/pgf/number format/fixed, /pgf/number format/precision=4},
change x base, axis base prefix={axis x base -3 prefix {}}, x
unit=10^3]
\addplot[color=black,mark=+, only marks] table [x=OPT1, y=DIV1,
col sep=comma] {F,V1,E1,DRV1,DRE1,DRt1,MCiDRk1,MCiDRs1,MCDRk1,MCDRs1,MCk1,MCs1,STiDRk1,STiDRs1,STDRk1,STDRs1,STk1,STs1,DIViDR1,DIVDR1,DIV1,OPT1,V2,E2,DRV2,DRE2,DRt2,MCiDRk2,MCiDRs2,MCDRk2,MCDRs2,MCk2,MCs2,STiDRk2,STiDRs2,STDRk2,STDRs2,STk2,STs2,DIViDR2,DIVDR2,DIV2,OPT2,V3,E3,DRV3,DRE3,DRt3,MCiDRk3,MCiDRs3,MCDRk3,MCDRs3,MCk3,MCs3,STiDRk3,STiDRs3,STDRk3,STDRs3,STk3,STs3,DIViDR3,DIVDR3,DIV3,OPT3,V4,E4,DRV4,DRE4,DRt4,MCiDRk4,MCiDRs4,MCDRk4,MCDRs4,MCk4,MCs4,STiDRk4,STiDRs4,STDRk4,STDRs4,STk4,STs4,DIViDR4,DIVDR4,DIV4,OPT4,V5,E5,DRV5,DRE5,DRt5,MCiDRk5,MCiDRs5,MCDRk5,MCDRs5,MCk5,MCs5,STiDRk5,STiDRs5,STDRk5,STDRs5,STk5,STs5,DIViDR5,DIVDR5,DIV5,OPT5,
dagpart:-A:-c:10:-n:100000:-o:2:-k:100000:-t:5:1,1000000,2098871,169288,355285,1.08,100254,4,100254,2.48,100254,0.62,,TIMEOUT 41s,,,,,1.00254000000000000000,1.00254000000000000000,1.00254000000000000000,100000,1000000,2098727,166761,350133,1.07,100270,4.01,100270,2.48,100270,0.62,,TIMEOUT 41s,,,,,1.00270000000000000000,1.00270000000000000000,1.00270000000000000000,100000,1000000,2098718,174263,365775,1.07,100276,4.05,100276,2.49,100276,0.63,,TIMEOUT 42s,,,,,1.00276000000000000000,1.00276000000000000000,1.00276000000000000000,100000,1000000,2098748,170269,357350,1.08,100297,4.01,100297,2.49,100297,0.63,,TIMEOUT 41s,,,,,1.00297000000000000000,1.00297000000000000000,1.00297000000000000000,100000,1000000,2098716,167831,352032,1.08,100331,4,100331,2.49,100331,0.63,,TIMEOUT 41s,,,,,1.00331000000000000000,1.00331000000000000000,1.00331000000000000000,100000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:110000:-t:5:1,1000000,2108657,180465,380707,1.09,110380,4.08,110380,2.49,110380,0.63,,TIMEOUT 41s,,,,,1.00345454545454545454,1.00345454545454545454,1.00345454545454545454,110000,1000000,2108649,182016,383844,1.1,110333,4.07,110333,2.5,110333,0.64,,TIMEOUT 42s,,,,,1.00302727272727272727,1.00302727272727272727,1.00302727272727272727,110000,1000000,2108745,193444,408125,1.11,110367,4.13,110367,2.5,110367,0.63,,TIMEOUT 41s,,,,,1.00333636363636363636,1.00333636363636363636,1.00333636363636363636,110000,1000000,2108690,187046,394383,1.09,110327,4.1,110327,2.51,110327,0.63,,TIMEOUT 42s,,,,,1.00297272727272727272,1.00297272727272727272,1.00297272727272727272,110000,1000000,2108686,184396,388793,1.1,110393,4.09,110393,2.49,110393,0.63,,TIMEOUT 41s,,,,,1.00357272727272727272,1.00357272727272727272,1.00357272727272727272,110000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:120000:-t:5:1,1000000,2118693,198200,420015,1.13,120386,4.17,120386,2.52,120386,0.64,,,,,,,1.00321666666666666666,1.00321666666666666666,1.00321666666666666666,120000,1000000,2118784,196928,417565,1.13,120418,4.14,120418,2.49,120418,0.63,,,,,,,1.00348333333333333333,1.00348333333333333333,1.00348333333333333333,120000,1000000,2118731,207374,439512,1.13,120406,4.21,120406,2.52,120406,0.64,,,,,,,1.00338333333333333333,1.00338333333333333333,1.00338333333333333333,120000,1000000,2118641,202470,428975,1.13,120434,4.18,120434,2.51,120434,0.64,,,,,,,1.00361666666666666666,1.00361666666666666666,1.00361666666666666666,120000,1000000,2118780,198951,421599,1.12,120372,4.16,120372,2.35,120372,0.61,,,,,,,1.00310000000000000000,1.00310000000000000000,1.00310000000000000000,120000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:130000:-t:5:1,1000000,2128766,216857,461629,1.14,130488,4.26,130488,2.53,130488,0.64,,,,,,,1.00375384615384615384,1.00375384615384615384,1.00375384615384615384,130000,1000000,2128602,205349,437192,1.14,130469,4.21,130469,2.51,130469,0.65,,,,,,,1.00360769230769230769,1.00360769230769230769,1.00360769230769230769,130000,1000000,2128689,209671,446517,1.14,130425,4.24,130425,2.52,130425,0.64,,,,,,,1.00326923076923076923,1.00326923076923076923,1.00326923076923076923,130000,1000000,2128607,208905,445017,1.14,130467,4.22,130467,2.52,130467,0.64,,,,,,,1.00359230769230769230,1.00359230769230769230,1.00359230769230769230,130000,1000000,2128804,207599,442001,1.14,130403,4.23,130403,2.51,130403,0.63,,,,,,,1.00310000000000000000,1.00310000000000000000,1.00310000000000000000,130000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:140000:-t:5:1,1000000,2138600,220044,470818,1.17,140583,4.3,140583,2.53,140583,0.65,,,,,,,1.00416428571428571428,1.00416428571428571428,1.00416428571428571428,140000,1000000,2138649,229995,492265,1.17,140594,4.33,140594,2.54,140594,0.65,,,,,,,1.00424285714285714285,1.00424285714285714285,1.00424285714285714285,140000,1000000,2138716,223297,477814,1.17,140541,4.3,140541,2.53,140541,0.65,,,,,,,1.00386428571428571428,1.00386428571428571428,1.00386428571428571428,140000,1000000,2138743,224510,480283,1.17,140570,4.29,140570,2.52,140570,0.64,,,,,,,1.00407142857142857142,1.00407142857142857142,1.00407142857142857142,140000,1000000,2138769,216819,463559,1.16,140499,4.28,140499,2.52,140499,0.64,,,,,,,1.00356428571428571428,1.00356428571428571428,1.00356428571428571428,140000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:150000:-t:5:1,1000000,2148684,239163,513971,1.18,150638,4.38,150638,2.54,150638,0.65,,,,,,,1.00425333333333333333,1.00425333333333333333,1.00425333333333333333,150000,1000000,2148735,231857,498184,1.18,150563,4.35,150563,2.55,150563,0.65,,,,,,,1.00375333333333333333,1.00375333333333333333,1.00375333333333333333,150000,1000000,2148768,231288,496921,1.19,150604,4.37,150604,2.54,150604,0.66,,,,,,,1.00402666666666666666,1.00402666666666666666,1.00402666666666666666,150000,1000000,2148732,236596,508555,1.18,150639,4.38,150639,2.52,150639,0.65,,,,,,,1.00426000000000000000,1.00426000000000000000,1.00426000000000000000,150000,1000000,2148768,233552,501908,1.19,150611,4.37,150611,2.55,150611,0.66,,,,,,,1.00407333333333333333,1.00407333333333333333,1.00407333333333333333,150000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:160000:-t:5:1,1000000,2158694,252062,544405,1.2,160769,4.46,160769,2.56,160769,0.66,,,,,,,1.00480625000000000000,1.00480625000000000000,1.00480625000000000000,160000,1000000,2158608,250113,540249,1.2,160677,4.45,160677,2.55,160677,0.67,,,,,,,1.00423125000000000000,1.00423125000000000000,1.00423125000000000000,160000,1000000,2158692,245107,528934,1.21,160674,4.43,160674,2.54,160674,0.67,,,,,,,1.00421250000000000000,1.00421250000000000000,1.00421250000000000000,160000,1000000,2158835,304409,657829,1.2,160676,4.42,160676,2.44,160676,0.68,,,,,,,1.00422500000000000000,1.00422500000000000000,1.00422500000000000000,160000,1000000,2158740,245507,529670,1.22,160847,4.43,160847,2.57,160847,0.66,,,,,,,1.00529375000000000000,1.00529375000000000000,1.00529375000000000000,160000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:170000:-t:5:1,1000000,2168649,257513,558369,1.22,170771,4.49,170771,2.56,170771,0.67,,,,,,,1.00453529411764705882,1.00453529411764705882,1.00453529411764705882,170000,1000000,2168683,262475,569692,1.23,170887,4.53,170887,2.54,170887,0.68,,,,,,,1.00521764705882352941,1.00521764705882352941,1.00521764705882352941,170000,1000000,2168872,257627,559037,1.22,170675,4.5,170675,2.55,170675,0.67,,,,,,,1.00397058823529411764,1.00397058823529411764,1.00397058823529411764,170000,1000000,2168740,257962,559665,1.23,170774,4.49,170774,2.55,170774,0.67,,,,,,,1.00455294117647058823,1.00455294117647058823,1.00455294117647058823,170000,1000000,2168735,266142,577316,1.22,170789,4.54,170789,2.58,170789,0.67,,,,,,,1.00464117647058823529,1.00464117647058823529,1.00464117647058823529,170000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:180000:-t:5:1,1000000,2178654,275685,600881,1.25,180809,4.58,180809,2.58,180809,0.68,,,,,,,1.00449444444444444444,1.00449444444444444444,1.00449444444444444444,180000,1000000,2178561,271851,592462,1.23,180855,4.58,180855,2.54,180855,0.67,,,,,,,1.00475000000000000000,1.00475000000000000000,1.00475000000000000000,180000,1000000,2178751,271700,592271,1.25,180836,4.55,180836,2.57,180836,0.68,,,,,,,1.00464444444444444444,1.00464444444444444444,1.00464444444444444444,180000,1000000,2178581,274668,598300,1.24,180775,4.58,180775,2.58,180775,0.68,,,,,,,1.00430555555555555555,1.00430555555555555555,1.00430555555555555555,180000,1000000,2178662,271874,592746,1.24,180887,4.59,180887,2.58,180887,0.68,,,,,,,1.00492777777777777777,1.00492777777777777777,1.00492777777777777777,180000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:190000:-t:5:1,1000000,2188659,284104,621994,1.26,191011,4.63,191011,2.59,191011,0.69,,,,,,,1.00532105263157894736,1.00532105263157894736,1.00532105263157894736,190000,1000000,2188721,285658,625533,1.27,191095,4.64,191095,2.6,191095,0.69,,,,,,,1.00576315789473684210,1.00576315789473684210,1.00576315789473684210,190000,1000000,2188691,280366,613496,1.25,190978,4.58,190978,2.58,190978,0.68,,,,,,,1.00514736842105263157,1.00514736842105263157,1.00514736842105263157,190000,1000000,2188708,286055,626530,1.26,190881,4.63,190881,2.6,190881,0.69,,,,,,,1.00463684210526315789,1.00463684210526315789,1.00463684210526315789,190000,1000000,2188785,282609,618678,1.26,190947,4.63,190947,2.58,190947,0.68,,,,,,,1.00498421052631578947,1.00498421052631578947,1.00498421052631578947,190000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:200000:-t:5:1,1000000,2198744,292044,641931,1.28,201052,4.71,201052,2.59,201052,0.7,,,,,,,1.00526000000000000000,1.00526000000000000000,1.00526000000000000000,200000,1000000,2198738,300311,660321,1.26,201013,4.69,201013,2.58,201013,0.71,,,,,,,1.00506500000000000000,1.00506500000000000000,1.00506500000000000000,200000,1000000,2198608,296246,651487,1.28,201088,4.68,201088,2.57,201088,0.69,,,,,,,1.00544000000000000000,1.00544000000000000000,1.00544000000000000000,200000,1000000,2198536,291200,640097,1.28,201061,4.68,201061,2.59,201061,0.7,,,,,,,1.00530500000000000000,1.00530500000000000000,1.00530500000000000000,200000,1000000,2198747,293304,644659,1.28,201009,4.67,201009,2.6,201009,0.7,,,,,,,1.00504500000000000000,1.00504500000000000000,1.00504500000000000000,200000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:210000:-t:5:1,1000000,2208632,305952,676009,1.28,211288,4.76,211288,2.6,211288,0.71,,,,,,,1.00613333333333333333,1.00613333333333333333,1.00613333333333333333,210000,1000000,2208704,306792,677816,1.29,211069,4.72,211069,2.61,211069,0.71,,,,,,,1.00509047619047619047,1.00509047619047619047,1.00509047619047619047,210000,1000000,2208700,312588,691087,1.28,211179,4.8,211179,2.61,211179,0.7,,,,,,,1.00561428571428571428,1.00561428571428571428,1.00561428571428571428,210000,1000000,2208705,306132,676462,1.29,211136,4.76,211136,2.58,211136,0.71,,,,,,,1.00540952380952380952,1.00540952380952380952,1.00540952380952380952,210000,1000000,2208580,307022,678542,1.29,211077,4.74,211077,2.62,211077,0.7,,,,,,,1.00512857142857142857,1.00512857142857142857,1.00512857142857142857,210000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:220000:-t:5:1,1000000,2218626,313833,696341,1.3,221221,4.84,221221,2.61,221221,0.72,,,,,,,1.00555000000000000000,1.00555000000000000000,1.00555000000000000000,220000,1000000,2218688,318074,706218,1.3,221208,4.85,221208,2.46,221208,0.7,,,,,,,1.00549090909090909090,1.00549090909090909090,1.00549090909090909090,220000,1000000,2218756,315946,701035,1.31,221219,4.84,221219,2.6,221219,0.72,,,,,,,1.00554090909090909090,1.00554090909090909090,1.00554090909090909090,220000,1000000,2218745,312341,693310,1.3,221367,4.8,221367,2.62,221367,0.71,,,,,,,1.00621363636363636363,1.00621363636363636363,1.00621363636363636363,220000,1000000,2218769,320420,711378,1.31,221154,4.87,221154,2.62,221154,0.72,,,,,,,1.00524545454545454545,1.00524545454545454545,1.00524545454545454545,220000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:230000:-t:5:1,1000000,2228658,361431,807046,1.32,231178,5.03,231178,2.64,231178,0.73,,,,,,,1.00512173913043478260,1.00512173913043478260,1.00512173913043478260,230000,1000000,2228608,318308,709697,1.31,231296,4.89,231296,2.62,231296,0.73,,,,,,,1.00563478260869565217,1.00563478260869565217,1.00563478260869565217,230000,1000000,2228638,321081,715399,1.32,231374,4.88,231374,2.61,231374,0.72,,,,,,,1.00597391304347826086,1.00597391304347826086,1.00597391304347826086,230000,1000000,2228782,323009,719750,1.32,231138,4.91,231138,2.63,231138,0.73,,,,,,,1.00494782608695652173,1.00494782608695652173,1.00494782608695652173,230000,1000000,2228569,323371,720475,1.31,231331,4.91,231331,2.63,231331,0.72,,,,,,,1.00578695652173913043,1.00578695652173913043,1.00578695652173913043,230000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:240000:-t:5:1,1000000,2238768,337667,756498,1.35,241522,4.97,241522,2.65,241522,0.73,,,,,,,1.00634166666666666666,1.00634166666666666666,1.00634166666666666666,240000,1000000,2238769,340028,762203,1.33,241399,4.96,241399,2.64,241399,0.73,,,,,,,1.00582916666666666666,1.00582916666666666666,1.00582916666666666666,240000,1000000,2238645,332559,744899,1.34,241353,4.94,241353,2.64,241353,0.73,,,,,,,1.00563750000000000000,1.00563750000000000000,1.00563750000000000000,240000,1000000,2238713,334884,750345,1.34,241377,4.98,241377,2.65,241377,0.73,,,,,,,1.00573750000000000000,1.00573750000000000000,1.00573750000000000000,240000,1000000,2238731,332653,744982,1.35,241474,4.97,241474,2.64,241474,0.74,,,,,,,1.00614166666666666666,1.00614166666666666666,1.00614166666666666666,240000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:250000:-t:5:1,1000000,2248692,350131,787773,1.35,251624,5.07,251624,2.66,251624,0.73,,,,,,,1.00649600000000000000,1.00649600000000000000,1.00649600000000000000,250000,1000000,2248662,348516,784112,1.35,251676,5.04,251676,2.68,251676,0.74,,,,,,,1.00670400000000000000,1.00670400000000000000,1.00670400000000000000,250000,1000000,2248776,343885,773650,1.35,251560,5.03,251560,2.65,251560,0.73,,,,,,,1.00624000000000000000,1.00624000000000000000,1.00624000000000000000,250000,1000000,2248683,345637,777686,1.36,251622,5.04,251622,2.67,251622,0.73,,,,,,,1.00648800000000000000,1.00648800000000000000,1.00648800000000000000,250000,1000000,2248597,346661,779628,1.35,251542,5.04,251542,2.64,251542,0.74,,,,,,,1.00616800000000000000,1.00616800000000000000,1.00616800000000000000,250000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:260000:-t:5:1,1000000,2258807,357913,809077,1.37,261627,5.11,261627,2.7,261627,0.77,,,,,,,1.00625769230769230769,1.00625769230769230769,1.00625769230769230769,260000,1000000,2258697,356233,805208,1.36,261592,5.09,261592,2.67,261592,0.74,,,,,,,1.00612307692307692307,1.00612307692307692307,1.00612307692307692307,260000,1000000,2258732,354685,801795,1.37,261509,5.09,261509,2.68,261509,0.73,,,,,,,1.00580384615384615384,1.00580384615384615384,1.00580384615384615384,260000,1000000,2258757,351351,794011,1.36,261655,5.09,261655,2.65,261655,0.74,,,,,,,1.00636538461538461538,1.00636538461538461538,1.00636538461538461538,260000,1000000,2258631,352852,797149,1.37,261574,5.1,261574,2.7,261574,0.74,,,,,,,1.00605384615384615384,1.00605384615384615384,1.00605384615384615384,260000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:270000:-t:5:1,1000000,2268691,361474,820697,1.38,271808,5.17,271808,2.7,271808,0.74,,,,,,,1.00669629629629629629,1.00669629629629629629,1.00669629629629629629,270000,1000000,2268644,362093,821866,1.37,271751,5.15,271751,2.68,271751,0.75,,,,,,,1.00648518518518518518,1.00648518518518518518,1.00648518518518518518,270000,1000000,2268726,362557,823030,1.38,271621,5.16,271621,2.69,271621,0.74,,,,,,,1.00600370370370370370,1.00600370370370370370,1.00600370370370370370,270000,1000000,2268742,376682,855524,1.39,271682,5.25,271682,2.7,271682,0.75,,,,,,,1.00622962962962962962,1.00622962962962962962,1.00622962962962962962,270000,1000000,2268685,362337,822312,1.38,271760,5.13,271760,2.66,271760,0.75,,,,,,,1.00651851851851851851,1.00651851851851851851,1.00651851851851851851,270000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:280000:-t:5:1,1000000,2278722,380784,868590,1.4,281856,5.28,281856,2.58,281856,0.74,,,,,,,1.00662857142857142857,1.00662857142857142857,1.00662857142857142857,280000,1000000,2278756,380115,867063,1.39,281931,5.28,281931,2.7,281931,0.76,,,,,,,1.00689642857142857142,1.00689642857142857142,1.00689642857142857142,280000,1000000,2278611,369348,842441,1.39,281899,5.21,281899,2.71,281899,0.75,,,,,,,1.00678214285714285714,1.00678214285714285714,1.00678214285714285714,280000,1000000,2278759,374822,854933,1.39,281815,5.26,281815,2.71,281815,0.75,,,,,,,1.00648214285714285714,1.00648214285714285714,1.00648214285714285714,280000,1000000,2278765,374389,854229,1.4,281717,5.26,281717,2.72,281717,0.76,,,,,,,1.00613214285714285714,1.00613214285714285714,1.00613214285714285714,280000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:290000:-t:5:1,1000000,2288773,384608,881648,1.41,291976,5.32,291976,2.73,291976,0.76,,,,,,,1.00681379310344827586,1.00681379310344827586,1.00681379310344827586,290000,1000000,2288649,390000,893495,1.41,292079,5.35,292079,2.72,292079,0.77,,,,,,,1.00716896551724137931,1.00716896551724137931,1.00716896551724137931,290000,1000000,2288673,381487,873873,1.41,292123,5.32,292123,2.73,292123,0.76,,,,,,,1.00732068965517241379,1.00732068965517241379,1.00732068965517241379,290000,1000000,2288661,379151,868496,1.41,291881,5.3,291881,2.73,291881,0.76,,,,,,,1.00648620689655172413,1.00648620689655172413,1.00648620689655172413,290000,1000000,2288760,377997,865306,1.41,291929,5.29,291929,2.72,291929,0.76,,,,,,,1.00665172413793103448,1.00665172413793103448,1.00665172413793103448,290000,
dagpart:-A:-c:10:-n:100000:-o:2:-k:300000:-t:5:1,1000000,2298603,391124,899883,1.42,302009,5.37,302009,2.74,302009,0.76,,,,,,,1.00669666666666666666,1.00669666666666666666,1.00669666666666666666,300000,1000000,2298666,399048,918565,1.42,302039,5.41,302039,2.74,302039,0.76,,,,,,,1.00679666666666666666,1.00679666666666666666,1.00679666666666666666,300000,1000000,2298862,387149,890492,1.42,302345,5.39,302345,2.74,302345,0.77,,,,,,,1.00781666666666666666,1.00781666666666666666,1.00781666666666666666,300000,1000000,2298739,388921,895051,1.42,302094,5.33,302094,2.73,302094,0.76,,,,,,,1.00698000000000000000,1.00698000000000000000,1.00698000000000000000,300000,1000000,2298647,390824,899043,1.43,302071,5.37,302071,2.73,302071,0.77,,,,,,,1.00690333333333333333,1.00690333333333333333,1.00690333333333333333,300000,
};
\end{axis}
\end{tikzpicture}\hfill{}
\begin{tikzpicture}
\begin{axis}[xlabel=weight~$k$ of optimum partitioning set{},
ylabel=deviation factor, width=0.45\textwidth,
xmin=9.5, ymax=3, xmax=29.5]
\addplot[color=black,mark=+,only marks] table [x=STiDRk, y
expr=(\thisrow{MCk}/\thisrow{STiDRk}), col sep=comma]
{F,V,E,DRV,DRE,MCiDRk,MCDRk,MCk,STiDRk,STDRk
iter-dagpart:-C:-c:2:-n:101:-o:3:-t:4000,103,282,102,281,74,51,51,21,
iter-dagpart:-C:-c:2:-n:102:-o:3:-t:4000,104,289,104,289,28,21,21,21,
iter-dagpart:-C:-c:2:-n:104:-o:3:-t:4000,106,283,106,283,77,45,45,33,
iter-dagpart:-C:-c:2:-n:105:-o:3:-t:4000,107,291,57,126,11,11,11,11,
iter-dagpart:-C:-c:2:-n:106:-o:3:-t:4000,108,289,97,240,33,28,28,28,
iter-dagpart:-C:-c:2:-n:107:-o:3:-t:4000,109,297,109,297,75,21,21,12,
iter-dagpart:-C:-c:2:-n:110:-o:3:-t:4000,112,302,111,301,76,27,27,27,
iter-dagpart:-C:-c:2:-n:113:-o:3:-t:4000,115,313,115,313,74,34,34,34,
iter-dagpart:-C:-c:2:-n:115:-o:3:-t:4000,117,313,111,300,40,40,40,26,
iter-dagpart:-C:-c:2:-n:116:-o:3:-t:4000,118,317,73,161,22,22,22,21,
iter-dagpart:-C:-c:2:-n:118:-o:3:-t:4000,120,331,116,312,37,29,29,29,
iter-dagpart:-C:-c:2:-n:119:-o:3:-t:4000,121,327,60,127,29,29,29,17,
iter-dagpart:-C:-c:2:-n:120:-o:3:-t:4000,122,327,122,327,74,8,8,8,
iter-dagpart:-C:-c:2:-n:121:-o:3:-t:4000,123,334,123,334,88,53,53,24,
iter-dagpart:-C:-c:2:-n:122:-o:3:-t:4000,124,338,122,331,50,50,50,14,
iter-dagpart:-C:-c:2:-n:123:-o:3:-t:4000,125,344,123,334,39,39,39,27,
iter-dagpart:-C:-c:2:-n:126:-o:3:-t:4000,128,350,127,349,36,36,36,16,
iter-dagpart:-C:-c:2:-n:128:-o:3:-t:4000,130,343,117,293,38,32,32,32,
iter-dagpart:-C:-c:2:-n:50:-o:3:-t:4000,52,134,5,6,3,3,3,3,
iter-dagpart:-C:-c:2:-n:52:-o:3:-t:4000,54,141,52,138,21,19,19,19,
iter-dagpart:-C:-c:2:-n:53:-o:3:-t:4000,55,131,51,126,34,17,17,17,
iter-dagpart:-C:-c:2:-n:54:-o:3:-t:4000,56,141,43,94,30,26,26,26,
iter-dagpart:-C:-c:2:-n:55:-o:3:-t:4000,57,144,52,131,21,10,10,10,
iter-dagpart:-C:-c:2:-n:56:-o:3:-t:4000,58,145,57,142,18,13,13,13,
iter-dagpart:-C:-c:2:-n:57:-o:3:-t:4000,59,149,59,149,23,23,23,18,
iter-dagpart:-C:-c:2:-n:58:-o:3:-t:4000,60,158,58,152,25,23,23,23,
iter-dagpart:-C:-c:2:-n:59:-o:3:-t:4000,61,156,53,130,30,30,30,19,
iter-dagpart:-C:-c:2:-n:60:-o:3:-t:4000,62,158,62,158,48,28,28,28,
iter-dagpart:-C:-c:2:-n:61:-o:3:-t:4000,63,165,61,153,36,28,28,28,
iter-dagpart:-C:-c:2:-n:62:-o:3:-t:4000,64,169,51,118,10,10,10,10,
iter-dagpart:-C:-c:2:-n:63:-o:3:-t:4000,65,168,61,156,31,29,29,26,
iter-dagpart:-C:-c:2:-n:64:-o:3:-t:4000,66,170,66,170,43,30,30,23,
iter-dagpart:-C:-c:2:-n:65:-o:3:-t:4000,67,171,12,21,8,7,7,7,
iter-dagpart:-C:-c:2:-n:66:-o:3:-t:4000,68,171,45,97,15,14,14,14,
iter-dagpart:-C:-c:2:-n:67:-o:3:-t:4000,69,179,68,178,33,30,30,28,
iter-dagpart:-C:-c:2:-n:68:-o:3:-t:4000,70,179,70,179,25,18,18,18,
iter-dagpart:-C:-c:2:-n:69:-o:3:-t:4000,71,181,68,174,44,14,14,14,
iter-dagpart:-C:-c:2:-n:70:-o:3:-t:4000,72,192,71,191,35,30,30,24,
iter-dagpart:-C:-c:2:-n:71:-o:3:-t:4000,73,197,72,195,42,11,11,11,
iter-dagpart:-C:-c:2:-n:72:-o:3:-t:4000,74,190,74,190,55,22,22,22,
iter-dagpart:-C:-c:2:-n:73:-o:3:-t:4000,75,196,44,96,33,18,18,18,
iter-dagpart:-C:-c:2:-n:74:-o:3:-t:4000,76,204,7,10,4,4,4,4,
iter-dagpart:-C:-c:2:-n:76:-o:3:-t:4000,78,206,65,164,19,11,11,11,
iter-dagpart:-C:-c:2:-n:77:-o:3:-t:4000,79,215,79,215,54,24,24,17,
iter-dagpart:-C:-c:2:-n:78:-o:3:-t:4000,80,211,55,121,19,17,17,17,
iter-dagpart:-C:-c:2:-n:79:-o:3:-t:4000,81,226,72,178,41,41,41,37,
iter-dagpart:-C:-c:2:-n:80:-o:3:-t:4000,82,222,79,204,51,26,26,26,
iter-dagpart:-C:-c:2:-n:81:-o:3:-t:4000,83,220,54,114,11,10,10,10,
iter-dagpart:-C:-c:2:-n:82:-o:3:-t:4000,84,220,80,203,27,20,20,20,
iter-dagpart:-C:-c:2:-n:83:-o:3:-t:4000,85,221,81,209,52,11,11,11,
iter-dagpart:-C:-c:2:-n:84:-o:3:-t:4000,86,225,85,224,18,14,14,14,
iter-dagpart:-C:-c:2:-n:86:-o:3:-t:4000,88,238,85,224,48,48,48,31,
iter-dagpart:-C:-c:2:-n:87:-o:3:-t:4000,89,241,89,241,64,30,30,25,
iter-dagpart:-C:-c:2:-n:88:-o:3:-t:4000,90,239,20,37,16,14,14,14,
iter-dagpart:-C:-c:2:-n:89:-o:3:-t:4000,91,234,90,232,64,30,30,29,
iter-dagpart:-C:-c:2:-n:90:-o:3:-t:4000,92,245,92,245,61,36,36,14,
iter-dagpart:-C:-c:2:-n:91:-o:3:-t:4000,93,244,93,244,38,30,30,27,
iter-dagpart:-C:-c:2:-n:92:-o:3:-t:4000,94,251,91,247,58,35,35,35,
iter-dagpart:-C:-c:2:-n:93:-o:3:-t:4000,95,257,95,257,50,23,23,23,
iter-dagpart:-C:-c:2:-n:94:-o:3:-t:4000,96,252,92,238,35,35,35,32,
iter-dagpart:-C:-c:2:-n:96:-o:3:-t:4000,98,259,97,258,66,30,30,30,
iter-dagpart:-C:-c:2:-n:97:-o:3:-t:4000,99,262,99,262,24,24,24,24,
iter-dagpart:-C:-c:2:-n:99:-o:3:-t:4000,101,274,101,274,60,26,26,26,
};
%
%
%
\end{axis}
\end{tikzpicture}
%
%
%
%
%
%
%
%
%
\caption{Comparison of partitioning set{}s found by \citet{LBK09}'s heuristic
with an embedded partitioning set{} of size~$k$ (left) and an
optimal partitioning set{} (right). All graphs on the left side have
$10^6$~vertices and roughly $18\cdot 10^6$~arcs and were generated
by adding $k$~random arcs between ten connected components, each
being a preferential attachment graph on $10^5$~vertices with
outdegree fifteen and a single sink. All graphs on the right side
are preferential attachment graphs with varying number of vertices,
two sinks and outdegree three.}
\label{fig:prefat}
\end{figure}
\paragraph{Summary} We have seen that solving large instances
with partitioning set{}s of small weight is realistic using our algorithm. In
particular, instances with more than $10^7$~arcs and $k\leq 190$ could
be solved in less than five minutes. A crucial ingredient in this
success is the data reduction executed by \autoref{reducealg}; without
its help, we could not solve any of our instances in less than five
minutes.
\looseness=-1 However, we also observed that our algorithm works best on those
instances that can already be solved almost optimally by \citet{LBK09}'s
heuristic, and that the data reduction executed by \autoref{reducealg}
slows down the heuristic.
Having seen that the heuristic of \citet{LBK09} can be off from the optimum
by more than a factor of two on random preferential attachment graphs
diminishes the hope that, in spite of the inapproximability results for
\textsc{DAG Partitioning}{} by \citet{AM12}, the heuristic might still find good
approximations on naturally occurring instances: one does not even
have to construct adversarial instances to make the heuristic find
solutions far from optimal.
\subsection{Limits of data reduction and fixed-parameter
algorithms}\label{sec:kkern}
\noindent In \autoref{sec:klintime}, we have seen linear-time data
reduction rules for \textsc{DAG Partitioning}{}. Moreover, the experiments in
\autoref{sec:experiments} have shown that, on all input instances we
tested our algorithm on, the running time of our
$O(2^k\cdot (n+m))$~time \autoref{alg:simple-st} merely depended
on~\(k\), because \autoref{reducealg} shrank our random input instances
to roughly the same size.
Therefore it is natural to ask whether we can provide data reduction
rules that \emph{provably} shrink the size of each possible input
instance to some fixed polynomial in~$k$, that is, whether there is a
polynomial-size problem kernel for \textsc{DAG Partitioning}{}. Unfortunately, in this section, we give a negative answer to this question. Specifically, %
we prove that \textsc{DAG Partitioning}{} does not
admit problem kernels with size polynomial in~$k$, unless
\NoPolyKernelAssume. Moreover, we show that the running time
$O(2^k\cdot (n+m))$ of \autoref{alg:simple-st} cannot be improved to
$2^{o(k)}\mathrm{poly}(n)$, unless the Exponential Time Hypothesis
fails. Herein, the Exponential Time Hypothesis as well as
\NegNoPolyKernelAssume{} are hypotheses stronger than~P${}\neq{}$NP, but
widely accepted among complexity theorists~\citep{IPZ01,FS11}.
Towards proving these results, we first recall the polynomial-time
many-one reduction from \textsc{3-Sat} to \textsc{DAG Partitioning}{} given
by~\citet{AM12}. The \textsc{3-Sat} problem is, given a
formula~$\varphi$ in conjunctive normal form with at most three literals
per clause, to decide whether~$\varphi$ admits a satisfying assignment.
\begin{construction}[{\citet{AM12}}]\label{lem:red-from-3Sat-WDAGP}
\upshape Let $\varphi$ be an instance of \textsc{3-Sat} with the
variables $x_1, \ldots, x_n$ and the clauses $C_1, \dots, C_m$. We
construct a \textsc{DAG Partitioning}{} instance~$(G,\ensuremath{\omega}{},k)$ with $k:=4n+2m$ that is a
yes-instance if and only if $\varphi$ is satisfiable. The weight
function~$\ensuremath{\omega}{}$ will assign only two different weights to the arcs:
a \emph{normal arc} has weight one and a \emph{heavy arc} has weight
$k+1$ and thus cannot be contained in any partitioning set{} of
weight~$k$. The remainder of this construction is illustrated in
\autoref{fig:SAT-to-DAGP}. \begin{figure} \centering
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=right:$f$] at (6,0) (f) {};
\node[vertex, label=right:$f'$] at (6,-3) (f') {};
\node[vertex, label=left:$t$] at (0,0) (t) {};
\node[vertex, label=left:$t'$] at (0,-3) (t') {};
\node[vertex, label=left:$x_1^t$] at (1,-1) (x1t) {};
\node[vertex, label=left:$x_1$] at (1,-2) (x1) {};
\node[vertex, label=right:$x_1^f$] at (2,-1) (x1f) {};
\node[vertex, label=right:$\bar x_1$] at (2,-2) (bx1) {};
\node[vertex, label=left:$x_2^t$] at (4,-1) (x2t) {};
\node[vertex, label=left:$x_2$] at (4,-2) (x2) {};
\node[vertex, label=right:$x_2^f$] at (5,-1) (x2f) {};
\node[vertex, label=right:$\bar x_2$] at (5,-2) (bx2) {};
\node[vertex, label=below:$C_1$] at (3,-1.5) (C) {};
\draw[very thick, ->] (f) -> (f');
\draw[very thick, ->] (t) -> (t');
\draw[very thick, ->] (t) -> (x1t);
\draw[very thick, ->] (t) to[in=90,out=0,tension=0.1] (x2t);
\draw[very thick, ->] (f) to[in=90,out=180] (x1f);
\draw[very thick, ->] (f) -> (x2f);
\draw[->] (x1t)--(x1);
\draw[->,dotted] (x1t)--(bx1);
\draw[->,dotted] (x1f)--(x1);
\draw[->] (x1f)--(bx1);
\draw[->] (x1) to[out=240,in=30] (t');
\draw[->,dotted] (x1) to[out=-30,in=190] (f');
\draw[->,dotted] (bx1) to[out=230,in=0] (t');
\draw[->] (bx1) to[out=-30,in=180] (f');
\draw[->,dotted] (x2t)--(x2);
\draw[->] (x2t)--(bx2);
\draw[->] (x2f)--(x2);
\draw[->,dotted] (x2f)--(bx2);
\draw[->,dotted] (x2) to[out=210,in=0] (t');
\draw[->] (x2) to[out=-50,in=180] (f');
\draw[->] (bx2) to[out=210,in=-10] (t');
\draw[->,dotted] (bx2) to[out=-60,in=150] (f');
\draw[->] (C) to[out=180,in=30] (x1);
\draw[->] (C) to[out=0,in=150] (bx2);
\draw[very thick, ->] (t) to[in=90,out=-20] (C);
\begin{pgfonlayer}{background}
\draw[fill=black!10,line width=35pt,line join=round,
draw=black!10] (x1t.center) rectangle (bx1.center);
\draw[fill=black!10,line width=35pt,line join=round,
draw=black!10] (x2t.center) rectangle (bx2.center);
\end{pgfonlayer}
\end{tikzpicture}
\caption{\textsc{DAG Partitioning}{} instance constructed from the formula consisting
only of the clause $C_1:=(x_1\vee\bar x_2)$. Heavy arcs are drawn
bold; dotted arcs are a partitioning set{} that corresponds to the
satisfying assignment setting~$x_1$ to true and $x_2$ to
false. The variable gadgets are drawn on a gray background.}
\label{fig:SAT-to-DAGP}
\end{figure}
We start constructing the directed acyclic graph~$G$ by adding the
special vertices~$f,f',t,$ and~$t'$ together with the heavy
arcs~$(f,f')$ and~$(t,t')$. The vertices~$f'$ and~$t'$ will be the
only sinks in~$G$. For each variable $x_i$, introduce the
vertices~$x^t_i, x_i^f, x_i$ and~$\bar x_i$ together with the
heavy arcs~$(t,x_i^t)$ and $(f,x_i^f)$ and the normal arcs
$(x_i^t,x_i)$, $(x_i^t,\bar x_i)$, $(x_i^f,x_i)$,
$(x_i^f,\bar x_i)$, $(x_i,f')$, $(\bar x_i,f')$,
$(x_i,t')$, and $(\bar x_i,t')$. For each clause~$C_j$, add a
vertex~$C_j$ together with the heavy arc~$(t,C_j)$. Finally, if some
clause~$C_j$ contains the literal~$x_i$, then add the arc~$(C_j,x_i)$;
if some clause~$C_j$ contains the literal~$\bar x_i$, then add the
arc~$(C_j,\bar x_i)$.
\end{construction}
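For concreteness, the construction can be sketched in Python. This is a hypothetical illustration only: the encoding of literals as signed integers and the string names of vertices are our own conventions, not part of the original construction.

```python
def construct_dagp_instance(n, clauses):
    """Sketch of the 3-SAT to DAG Partitioning reduction.

    Variables are numbered 1..n; each clause is a tuple of nonzero
    integers, where i encodes the literal x_i and -i its negation.
    Returns (arcs, weight, k), where heavy arcs get weight k+1 and
    normal arcs weight one, with k = 4n + 2m.
    """
    k = 4 * n + 2 * len(clauses)
    arcs, weight = [], {}

    def add(u, v, heavy=False):
        arcs.append((u, v))
        weight[(u, v)] = k + 1 if heavy else 1

    # Special vertices f, f', t, t' with heavy arcs (f,f') and (t,t').
    add('f', "f'", heavy=True)
    add('t', "t'", heavy=True)

    # Variable gadgets: x_i^t, x_i^f, x_i, and the negation vertex ~x_i.
    for i in range(1, n + 1):
        xt, xf = f'x{i}^t', f'x{i}^f'
        add('t', xt, heavy=True)
        add('f', xf, heavy=True)
        for lit in (f'x{i}', f'~x{i}'):
            add(xt, lit)
            add(xf, lit)
            add(lit, "t'")
            add(lit, "f'")

    # Clause gadgets: one vertex C_j per clause, connected to its literals.
    for j, clause in enumerate(clauses, start=1):
        cj = f'C{j}'
        add('t', cj, heavy=True)
        for lit in clause:
            add(cj, f'x{lit}' if lit > 0 else f'~x{-lit}')

    return arcs, weight, k
```

Running this on the single-clause formula $C_1=(x_1\vee\bar x_2)$ from \autoref{fig:SAT-to-DAGP} yields $k=10$, seven heavy arcs, and $25$~arcs in total.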
\noindent \citet{AM12} showed that, given a formula~$\varphi$ in \textsc{3-Cnf}{}
with $n$~variables and $m$~clauses, \autoref{lem:red-from-3Sat-WDAGP}
outputs a graph~$G$ with arc weights~$\ensuremath{\omega}{}$ such that $\varphi$~is
satisfiable if and only if there is a partitioning set{}~$\ensuremath{S}{}$ for~$G$ that
does not contain heavy arcs.
\looseness=-1 Since $G$~has only the two sinks~$t'$ and~$f'$, by \autoref{lem:no_new_sinks}, a minimal such partitioning set{} has to partition~$G$ into two connected components, one connected component containing the heavy arc~$(t,t')$ and the other containing~$(f,f')$. Moreover, if \(\varphi\)~is satisfiable, then such a partitioning set{} has weight at most~$4n+2m$, since for each~$x_i$ of the $n$~variables of~$\varphi$, it deletes at most one of two arcs outgoing from each of the vertices~$x^t_i, x_i^f, x_i$, and~$\bar x_i$, and for each~$C_j$ of the $m$~clauses, it deletes at most two out of the three arcs outgoing from the clause vertex~$C_j$. We thus obtain the following lemma:
\begin{lemma}\label{lem:constr1}
Given a formula~$\varphi$ in \textsc{3-Cnf}{} with $n$~variables and $m$~clauses,
\autoref{lem:red-from-3Sat-WDAGP} outputs a graph~$G$ with arc
weights~$\ensuremath{\omega}{}$ such that $\varphi$~is satisfiable if and only if
there is a partitioning set{}~$\ensuremath{S}{}$ for~$G$ that does not contain heavy
arcs. %
Moreover, if \(\varphi\)~is satisfiable, then $\ensuremath{S}{}$ has weight at most~$4n+2m$ and partitions~$G$ into
one connected component containing the constructed vertices~$t$
and~$t'$ and the other containing~$f$ and~$f'$.
\end{lemma}
\noindent We will later exploit \autoref{lem:constr1} to show our hardness results.
Next, we show that arcs with non-unit weights in our constructions
can be simulated by arcs with unit weights.
This allows us to show stronger hardness results and
to keep our constructions simple.
\subsubsection{Strengthening of hardness results to unit-weight graphs}\label{sec:tounit}
\autoref{lem:red-from-3Sat-WDAGP} heavily relied on
forbidding the deletion of certain arcs by giving them a high weight.
The next lemma shows that we can replace these arcs by a gadget only
using unit-weight arcs without changing the weight of the partitioning set{} sought.
\begin{lemma}
\label{lem:red-WDAGP-to-DAGP} There is a polynomial-time many-one reduction{} from \textsc{DAG Partitioning}{} with
polynomially bounded weights to unweighted \textsc{DAG Partitioning}{} that does not change
the weight~$k$ of the partitioning set{} sought.
\end{lemma}
\begin{figure}[t]
\centering
\begin{tikzpicture}[shorten >= 0.5mm,baseline=(v)]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=left:$v$] (v) at (0,0) {};
\node[vertex, label=right:$w$] (w) at (2,0) {};
\draw[->] (v) -- node[midway,above] {5} (w);
\end{tikzpicture}\hspace{2cm}
\begin{tikzpicture}[shorten >= 0.5mm,baseline=(v)]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=left:$v$] (v) at (0,0) {};
\node[vertex, label=right:$w$] (w) at (2,0) {};
\node[vertex] (a1) at (1,1) {};
\node[vertex] (a2) at (1,0.5) {};
\node[vertex] (a3) at (1,-0.5) {};
\node[vertex] (a4) at (1,-1) {};
\draw[->] (v) -- (w);
\draw[->] (v) -- (a1);
\draw[->] (v) -- (a2);
\draw[->] (v) -- (a3);
\draw[->] (v) -- (a4);
\draw[->] (a1) -- (w);
\draw[->] (a2) -- (w);
\draw[->] (a3) -- (w);
\draw[->] (a4) -- (w);
\end{tikzpicture}
\caption{Replacing an arc of weight~5 on the left by the gadget of unit-weight arcs on the right.}
\label{fig:unweighted}
\end{figure}
\begin{proof}
\newcommand{\Va}{X} Let $(G,\ensuremath{\omega}{},k)$ be an instance of \textsc{DAG Partitioning}. We
show how to obtain an instance~$(G',\ensuremath{\omega}{}',k)$ by replacing a single
arc of weight more than one by arcs of weight one such
that~$(G,\ensuremath{\omega}{},k)$ is a yes-instance if and only if
$(G',\ensuremath{\omega}{}',k)$~is a yes-instance. The replacement will be done as
illustrated in \autoref{fig:unweighted}. The claim then follows by
repeating this procedure for every arc of weight more than one.
Consider an arc~$a=(v,w)$ in~$G$ with $\ensuremath{\omega}{}(a)>1$. We obtain~$G'$
and~$\ensuremath{\omega}{}'$ from~$G$ and~$\ensuremath{\omega}{}$ by setting the
weight~$\ensuremath{\omega}{}'(a)=1$, adding a set~$\Va{}$ of $\ensuremath{\omega}{}(a)-1$~vertices
to~$G'$, and inserting for each $u\in \Va{}$ a weight-one arc~$(v,u)$
and a weight-one arc~$(u,w)$.
First, assume that $(G,\ensuremath{\omega}{},k)$ is a yes-instance and that
$\ensuremath{S}{}$~is a minimal partitioning set{} for~$G$. We show how to obtain
a partitioning set{} of weight~$k$ for~$G'$. Clearly, if $a\notin \ensuremath{S}{}$,
then~$\ensuremath{S}{}$ is a partitioning set{} of equal weight for~$(G',\ensuremath{\omega}{}',k)$. If
$a\in \ensuremath{S}{}$, then we get a partitioning set{} of equal weight for
$(G',\ensuremath{\omega}{}',k)$ by adding the arcs between~$v$ and~$\Va{}$
to~$\ensuremath{S}{}$.
Second, assume that $(G',\ensuremath{\omega}{}',k)$ is a yes-instance and
that~$\ensuremath{S}{}$ is a minimal partitioning set{} for~$G'$. We show how to obtain
a partitioning set{} of weight~$k$ for~$G$. To this end, we consider two
cases: $v$~and~$w$ are in a common or in separate connected components
of~$G'\setminus\ensuremath{S}{}$.
\begin{caselist}
\item If $v$~and~$w$ are in one connected component
of~$G'\setminus\ensuremath{S}{}$, then, by minimality, $\ensuremath{S}{}$~does not
contain~$a$ or any arc incident to vertices in~$\Va{}$. Hence,
$\ensuremath{S}{}$ is a partitioning set{} of equal weight for~$G$.
\item If $v$~and~$w$ are in separate connected components
of~$G'\setminus\ensuremath{S}{}$, then $a\in\ensuremath{S}{}$. Moreover, the vertices
in~$\Va{}$ have only one outgoing arc. Hence, by
\autoref{lem:no_new_sinks}, $\ensuremath{S}{}$~does not contain arcs
from~$\Va{}$ to~$w$ but, therefore, contains all arcs from~$v$
to~$\Va{}$. Removing these arcs from~$\ensuremath{S}{}$ results in
a partitioning set{} of equal weight for~$(G,\ensuremath{\omega}{},k)$.\qed
\end{caselist}
\end{proof}
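The arc replacement underlying the proof can be sketched as follows; this is an illustration under our own assumptions, representing the graph simply as a list of arcs with a weight dictionary.

```python
import itertools

def to_unit_weights(arcs, weight):
    """Sketch of the weighted-to-unweighted reduction.

    Each arc (v, w) of weight c > 1 is kept with weight one, and c-1
    fresh vertices u are added with arcs (v, u) and (u, w), so that
    separating v from w still costs exactly c unit-weight arcs.
    Returns the new arc list; all arcs implicitly have weight one.
    """
    fresh = itertools.count()
    new_arcs = []
    for (v, w) in arcs:
        new_arcs.append((v, w))
        for _ in range(weight[(v, w)] - 1):
            u = ('aux', next(fresh))  # fresh auxiliary vertex
            new_arcs.append((v, u))
            new_arcs.append((u, w))
    return new_arcs
```

For the weight-5 arc of \autoref{fig:unweighted}, the gadget consists of nine unit-weight arcs, of which five leave~$v$ and five enter~$w$.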
\subsubsection{Limits of fixed-parameter algorithms}
\label{sec:limits}
We now show that \textsc{DAG Partitioning}{} cannot be solved in $2^{o(k)}\textrm{poly}(n)$~time
unless the Exponential Time Hypothesis fails.
Thus, if our search tree algorithm for \textsc{DAG Partitioning}{} can be improved at all, then
only by replacing the base~$2$ of the exponential $2^k$~factor by some smaller constant.
The Exponential Time Hypothesis was introduced by \citet{IP01} and
states that $n$-variable \textsc{3-Sat} cannot be solved
in~$2^{o(n)}\ensuremath{\operatorname{poly}}(n)$ time. Using the reduction from \textsc{3-Sat} to
\textsc{DAG Partitioning}{} given by \citet{AM12} (\autoref{lem:red-from-3Sat-WDAGP}), we
can easily show the following:
\begin{theorem}\label{cor:no-sub-exp-algorithm}
Unless the Exponential Time Hypothesis fails, \textsc{DAG Partitioning}{} cannot be solved
in $2^{o(k)}\ensuremath{\operatorname{poly}}(n)$ time even if all arcs have unit weight.
\end{theorem}
\begin{proof}
\autoref{lem:red-from-3Sat-WDAGP} reduces an instance of
\textsc{3-Sat} consisting of a formula with $n$~variables and
$m$~clauses to an equivalent instance~$(G,\ensuremath{\omega}{},k)$ of \textsc{DAG Partitioning}{} with
$k=4n+ 2m$. Thus, a $2^{o(k)}\ensuremath{\operatorname{poly}}(n)$-time algorithm for \textsc{DAG Partitioning}{}
would yield a $2^{o(m)}\ensuremath{\operatorname{poly}}(m)$-time algorithm for
\textsc{3-Sat}. This, in turn, by the so-called \emph{Sparsification
Lemma} of \citet[Corollary~2]{IPZ01}, would imply a
$2^{o(n)}\ensuremath{\operatorname{poly}}(n)$~time algorithm for \textsc{3-Sat}, which
contradicts the Exponential Time Hypothesis.
Since the weights used in \autoref{lem:red-from-3Sat-WDAGP} are
polynomial in the number of created vertices and arcs,
we can apply \autoref{lem:red-WDAGP-to-DAGP} to transfer
the result to the unit-weight case. \qed
\end{proof}
\subsubsection{Limits of problem kernelization} We now show that \textsc{DAG Partitioning}{}
has no polynomial-size problem kernel with respect to the
parameter~$k$---the weight of the partitioning set{} sought. It follows that,
despite the effectiveness of data reduction observed in experiments in
\autoref{sec:experiments}, we presumably cannot generally shrink a \textsc{DAG Partitioning}{}
instance in polynomial time to a size polynomial in~$k$.
To show that \textsc{DAG Partitioning}{} does not allow for polynomial-size kernels, we
first provide the necessary concepts and techniques introduced by
\citet{BJK13}.
\begin{definition}[{\citet[Definition~3.3]{BJK13}}]
For some finite alphabet~\(\Sigma\), a problem~$L\subseteq\Sigma^*$ \emph{(OR-)cross-composes} into a parameterized problem~$Q\subseteq\Sigma^*\times\mathbb N$ if there is an algorithm (a \emph{(OR-)cross-composition}) that transforms instances~$x_1,\ldots,x_s$ of~$L$ into an instance~$(x^*,k)$ for~$Q$ in time polynomial in~\(\sum_{i=1}^s|x_i|\) such that
\begin{enumerate}[i)]
\item $k$~is bounded by a polynomial in~$\max^s_{i=1}|x_i|+\log s$ and
\item $(x^*,k)\in Q$ if and only if there is an~$i\in\{1,\dots,s\}$
such that~$x_i\in L$.
\end{enumerate}
Furthermore, the cross-composition may exploit that the input
instances~$x_1,\ldots,x_s$ belong to the same equivalence class of a
\emph{polynomial equivalence relation}~$R\subseteq \Sigma^*\times
\Sigma^*$, which is an equivalence relation such that
\begin{enumerate}[i)]
\item it can be
decided in polynomial time whether two instances are equivalent and
\item any finite set~$S\subseteq \Sigma^*$ is partitioned into
$\ensuremath{\operatorname{poly}}(\max_{x\in S}|x|)$ equivalence classes.
\end{enumerate}
\end{definition}
\noindent The assumption that all instances belong to the same
equivalence class of a polynomial equivalence relation can make the
construction of a cross-composition remarkably easier: when giving a
cross-composition from \textsc{3-Sat}, we can, for example, assume that
the input instances all have the same number of clauses and variables.
Cross-compositions can be used to prove that a parameterized problem has
no polynomial-size kernel unless \NoPolyKernelAssume{}.
\begin{theorem}[{\citet[Corollary~3.6]{BJK13}}] \label{thm:Bod-No-Poly-Kernel} If some
problem~$L\subseteq \Sigma^*$ is NP-hard under polynomial-time many-one reduction{}s and
$L$~cross-composes into the parameterized problem~$Q\subseteq
\Sigma^*\times \mathbb{N}$, then there is no polynomial-size problem
kernel for~$Q$ unless \NoPolyKernelAssume.
\end{theorem}
\noindent
In the following, we show that \textsc{3-Sat} cross-composes into \textsc{DAG Partitioning}{} parameterized by~$k$, which yields the following theorem:
\begin{theorem}\label{thm:no-poly}
Unless \NoPolyKernelAssume, \textsc{DAG Partitioning}{} does not have a polynomial-size problem kernel with respect to
the weight~$k$ of the partitioning set{} sought even if all arcs have unit weight.
\end{theorem}
\noindent The proof of \autoref{thm:no-poly} is based on the following
construction, which requires arc weights; however, by applying
\autoref{lem:red-WDAGP-to-DAGP} from \autoref{sec:tounit}, we obtain that
\autoref{thm:no-poly} holds even on graphs with unit weights.
\begin{construction}\label{lem:3-sat-cross-comp-to-w-DAGP}\upshape
\begin{figure}
\centering
\begin{tikzpicture}[x=2cm,y=1.25cm,shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex,label=left:$t$] (t) at (-0.5,1.5) {};
\node[vertex,label=below:$t'$] (t') at (1.5,-2.5) {};
\node[vertex,label=above:$t''$] (t'') at (1.5,1.5) {};
\node[vertex] (i1) at (1,1) {};
\node[vertex] (i2) at (2,1) {};
\node at (2.1,1.6) {$O$};
\node at (2.1,-2.6) {$I$};
\node[vertex] (o1) at (1,-2) {};
\node[vertex] (o2) at (2,-2) {};
\node[vertex,label=above:$f$] (f) at (1.5,-0.5) {};
\node[vertex,label=below:$f'$] (f') at (1.5,-1.5) {};
\node[vertex,label=left:$t_1$] (t1) at (0,0) {};
\node[vertex,label=left:$t_1'$] (t1') at (0,-1) {};
\node[vertex,label=left:$t_2$] (t2) at (1,0) {};
\node[vertex,label=left:$t_2'$] (t2') at (1,-1) {};
\node[vertex,label=right:$t_3$] (t3) at (2,0) {};
\node[vertex,label=right:$t_3'$] (t3') at (2,-1) {};
\node[vertex,label=right:$t_4$] (t4) at (3,0) {};
\node[vertex,label=right:$t_4'$] (t4') at (3,-1) {};
%
\draw[very thick,->] (t) -- (t'');
\draw[very thick,->] (t) -- (-0.5,-1) to[out=-90,in=180] (t');
\draw[very thick,->] (f) -- (f');
\draw[->] (t'') to (i1);
\draw[->,dotted] (t'') to (i2);
\draw[->] (i1) to (t1);
\draw[->,dotted] (i1) to (t2);
\draw[->] (i2) to (t3);
\draw[->] (i2) to (t4);
\draw[->] (t1') to (o1);
\draw[->,dotted] (t2') to (o1);
\draw[->] (t3') to (o2);
\draw[->] (t4') to (o2);
\draw[->] (o1) to (t');
\draw[->,dotted ] (o2) to (t');
\draw[->,dotted] (o1) to (f');
\draw[->] (o2) to (f');
\draw[->,dotted] (t1') to[out=-10,in=180] (f');
\draw[->] (t2') to (f');
\draw[->] (t3') to (f');
\draw[->] (t4') to[out=-170,in=0] (f');
\draw[->,very thick] (t1) to (t1');
\draw[->,very thick] (t2) to (t2');
\draw[->,very thick] (t3) to (t3');
\draw[->,very thick] (t4) to (t4');
\begin{pgfonlayer}{background}
%
%
%
%
%
%
\tikzstyle{treedge}=[color=black!10, line width=25pt, line cap=round]
\draw[treedge] (t''.center)--(i1.center);
\draw[treedge] (i1.center)--(t2.center);
\draw[treedge] (t''.center)--(i2.center);
\draw[treedge] (i2.center)--(t4.center);
\draw[treedge] (i2.center)--(t3.center);
\draw[treedge] (i1.center)--(t1.center);
\draw[treedge] (t1.center)--(t2.center);
\draw[treedge] (t3.center)--(t4.center);
\draw[treedge] (t'.center)--(o1.center);
\draw[treedge] (o1.center)--(t2'.center);
\draw[treedge] (t'.center)--(o2.center);
\draw[treedge] (o2.center)--(t4'.center);
\draw[treedge] (o2.center)--(t3'.center);
\draw[treedge] (o1.center)--(t1'.center);
\draw[treedge] (t1'.center)--(t2'.center);
\draw[treedge] (t3'.center)--(t4'.center);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Cross-composition of four formulas $\varphi_1,\dots,\varphi_4$
    into a \textsc{DAG Partitioning}{} instance. Of each subgraph~$G_i$ corresponding to
    formula~$\varphi_i$, only the vertices~$t_i$, $t_i'$, and their
connecting heavy arc are shown. The introduced binary trees~$O$
and~$I$ are highlighted in gray. Deleting the $3\log 4=6$~dotted
arcs requires the graph~$G_1$ to be partitioned, since its
vertices~$t_1$~and~$t'_1$ are in a different connected component
than its vertices~$f_1=f$~and~$f_1'=f'$. The graphs~$G_i$ for
$i>1$ do not have to be partitioned; they are completely
contained in one connected component with~$f$ and~$f'$.}
\label{fig:DAGP-crossco}
\end{figure}
Let $\varphi_1,\ldots, \varphi_s$ be instances of \textsc{3-Sat}.
Since we may assume $\varphi_1,\dots,\varphi_s$ to be from the same
equivalence class of a polynomial equivalence relation, we may assume
that each of the formulas~$\varphi_1,\dots,\varphi_s$ has the same
number~$n$ of variables and the same number~$m$ of clauses. Moreover,
we may assume that~$s$ is a power of two; otherwise we simply add
unsatisfiable formulas to the list of instances. We now construct a
\textsc{DAG Partitioning}{} instance~$(G,\ensuremath{\omega}{},k)$ with $k:=4n+2m+3 \log s$ that is a
yes-instance if and only if $\varphi_i$~is satisfiable for at least
one~$1\leq i\leq s$, where we use ``\(\log\)'' to denote the binary
logarithm. As in \autoref{lem:red-from-3Sat-WDAGP}, the weight
function~$\ensuremath{\omega}{}$ will only assign two possible weight values: a
\emph{heavy arc} has weight $k+1$ and thus cannot be contained in
any partitioning set{}. A \emph{normal arc} has weight one. The remainder of
the construction is illustrated in \autoref{fig:DAGP-crossco}.
For each instance~$\varphi_i$, let~$G_i$ be the graph produced by
\autoref{lem:red-from-3Sat-WDAGP}. By \autoref{lem:constr1}, $G_i$~can
be partitioned with $4n+2m$~arc deletions if and only if~$\varphi_i$
is a yes-instance. We now build a gadget that, by means of additional
$3\log s$~arc deletions, chooses exactly one graph~$G_i$ that has to
be partitioned.
\looseness=-1To distinguish between multiple instances, we denote the
special vertices~$f,f',t,$ and~$t'$ in~$G_i$ by~$f_i,f'_i,t_i,$
and~$t'_i$. For all $1\leq i\leq s$, we add~$G_i$ to the output
graph~$G$ and merge the vertices~$f_1,f_2,\ldots,f_s$ into a
vertex~$f$ and the vertices~$f'_1,f'_2,\ldots,f'_s$ into a
vertex~$f'$. Furthermore, we add the vertices~$t,t',$ and~$t''$ and
the heavy arcs~$(t,t')$ and~$(t,t'')$ to~$G$.
We add a balanced binary tree~$O$ that is rooted in~$t''$, whose leaves
are the vertices~$t_1,\ldots,t_s$, and whose normal arcs are directed
from the root to the leaves. That is, $O$~is an \emph{out-tree}. %
%
Moreover, we add a balanced binary tree~$I$ that is rooted in~$t'$, whose
leaves are the vertices~$t'_1,\ldots,t'_s$, and whose normal arcs are
directed from the leaves to the root. That is, $I$~is an
\emph{in-tree}. For each vertex~$v\ne t'$ in~$I$, we add a normal
arc~$(v,f')$.
\end{construction}
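As an illustration only (not part of the construction itself), the instance-chooser gadget can be sketched in Python. The function names and the representation of arcs as vertex pairs are our own assumptions; the formula gadgets~$G_i$ and the arc weights are omitted, and $s\geq 2$ is assumed to be a power of two.

```python
# Hedged sketch of the chooser gadget: balanced binary out-tree O rooted in
# t'' with leaves t_1..t_s, and in-tree I rooted in t' with leaves t'_1..t'_s.
from itertools import count

_fresh = count()  # generates names for internal tree vertices

def build_out_tree(root, leaves, arcs):
    """Balanced binary out-tree from root to the given leaves; every
    root-to-leaf path consists of exactly log2(len(leaves)) arcs."""
    if len(leaves) == 2:
        arcs.append((root, leaves[0]))
        arcs.append((root, leaves[1]))
        return
    mid = len(leaves) // 2
    left, right = f"v{next(_fresh)}", f"v{next(_fresh)}"
    arcs.append((root, left))
    arcs.append((root, right))
    build_out_tree(left, leaves[:mid], arcs)
    build_out_tree(right, leaves[mid:], arcs)

def build_chooser(s):
    """Returns (arcs of O, arcs of I, arcs into f').  Choosing the one
    graph G_i that has to be partitioned costs log s arc deletions in O
    and 2*log s arc deletions in I, i.e. 3*log s in total."""
    assert s >= 2 and s & (s - 1) == 0, "pad with unsatisfiable formulas first"
    arcs_O, arcs_I = [], []
    build_out_tree("t''", [f"t_{i}" for i in range(s)], arcs_O)
    # I is the mirror image of O: build an out-tree from t', then reverse arcs
    build_out_tree("t'", [f"t'_{i}" for i in range(s)], arcs_I)
    arcs_I = [(v, u) for (u, v) in arcs_I]
    # every vertex of I except the root t' gets a normal arc to f';
    # each such vertex is the tail of exactly one arc of the in-tree I
    arcs_to_f = [(u, "f'") for (u, _) in arcs_I]
    return arcs_O, arcs_I, arcs_to_f
```

Each of the trees on $s$ leaves has $2s-2$ arcs, and every root-to-leaf path has exactly $\log s$ arcs, matching the counting in the proof below.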
\noindent Using this construction, we can now prove
\autoref{thm:no-poly}.
\begin{proof}[of \autoref{thm:no-poly}]
We only have to show that the instance~$(G,\ensuremath{\omega},k)$ constructed by
\autoref{lem:3-sat-cross-comp-to-w-DAGP} is a yes-instance if and only
if at least one of the input formulas~$\varphi_i$ is
satisfiable. Then, the theorem for the weighted case follows from
\autoref{thm:Bod-No-Poly-Kernel}. Since the weights used in
\autoref{lem:3-sat-cross-comp-to-w-DAGP} are polynomial in the number
of created vertices and edges, we can apply
\autoref{lem:red-WDAGP-to-DAGP} to transfer the result to the
unit-weight case.
First, assume that a formula~$\varphi_i$ is satisfiable for
some~$1\leq i\leq s$. By \autoref{lem:constr1} it follows that
$G_i$ can be partitioned by $k':=4n+2m$~arc deletions into two
connected components~$P_t$ and~$P_f$ such that $P_t$~contains~$t_i$
and~$t'_i$ and such that $P_f$~contains~$f_i=f$ and~$f_i'=f'$. We
apply these arc deletions to~$G$ and delete $3\log s$ additional arcs
from~$G$ as follows. Let $L$ be the unique directed path{} in~$O$ from $t''$
to $t_i$. Analogously, let~$L'$ be the unique directed path{} in~$I$
from~$t'_i$ to~$t'$. Observe that each of these directed path{}s has $\log
s$ arcs. We partition~$G$ into the connected component~$P'_t = P_t
\cup \{t,t',t''\} \cup V(L) \cup V(L')$ with sink~$t'$ and into the
connected component~$P'_f= V(G) \setminus P'_t$ with sink~$f'$. To
this end, for each vertex~$v\in V(L) \setminus \{t_i\}$,
we remove the outgoing arc that does not belong to~$L$. Hence, exactly
$\log s$~arcs incident to vertices of~$O$ are removed. Similarly, for
each vertex~$v\in V(L') \setminus \{t_i'\}$, we remove the
incoming arc not belonging to~$L'$. For each vertex~$v\neq t'$ of~$L'$,
we remove the arc to~$f'$. Hence, exactly $2\log s$~arcs incident to
vertices of~$I$ are removed. Thus, in total, at most $k= 4n +2m + 3\log
s$ normal arcs are removed to partition~$G$ into~$P'_t$ and~$P'_f$.
Conversely, let $\ensuremath{S}{}$ be a minimal partitioning set{} for~$G$ with
$\ensuremath{\omega}{}(\ensuremath{S}{})\leq k$. Then, by \autoref{lem:no_new_sinks},
$G\setminus\ensuremath{S}{}$ has two connected components, namely $P'_t$~with
sink~$t'$ and $P'_f$~with sink~$f'$. Since $\ensuremath{S}{}$ cannot contain
heavy arcs, $t$~and~$t''$~are in~$P'_t$. Hence, $t''$~can reach~$t'$
in~$G\setminus\ensuremath{S}{}$: both lie in the connected component~$P'_t$, whose
only sink is~$t'$, and in a directed acyclic graph every vertex can
reach a sink of its connected component.
As every directed path{} from~$t''$ to~$t'$ goes
through some vertices~$t_i$ and~$t'_i$, it follows that there is
an~$i\in \{1,\ldots,s\}$ such that~$t_i$ and~$t'_i$ are
in~$P'_t$. Since $f=f_i$ and~$f'=f'_i$ are in~$P'_f$,
the partitioning set{}~$\ensuremath{S}{} \cap A(G_i)$ partitions~$G_i$ into two
connected components: one containing~$t_i$ and~$t'_i$ and the other
containing~$f=f_i$ and~$f'=f_i'$. Since~$\ensuremath{S}{}$ does not contain
heavy arcs, from \autoref{lem:constr1} it follows that $\varphi_i$ is
satisfiable. \qed
\end{proof}
\section{Parameter treewidth}\label{sec:tw}
In \autoref{sec:smallsol}, we have seen that \textsc{DAG Partitioning}{} is linear-time solvable when the weight~$k$ of the requested
partitioning set{} is constant. %
\citet{AM12} asked whether \textsc{DAG Partitioning}{} is fixed-parameter tractable with
respect to the parameter treewidth, which is a measure of the
``tree-likeness'' of a graph. We will answer this question
affirmatively.
In \autoref{sec:tree}, we first show that, if the input graph is indeed
a tree, then the heuristic by \citet{LBK09} solves the instance
optimally in linear time. Afterwards, in \autoref{sec:twalg}, we prove
that this result can be generalized to graphs of bounded treewidth and
thus improve the algorithm for pathwidth given by \citet{AM12}, since the
treewidth of a graph is at most its pathwidth.
\subsection{Partitioning trees}\label{sec:tree}
In this section, we show that the heuristic by \citet{LBK09} solves
\textsc{DAG Partitioning}{} in linear time on trees. This result will be generalized in the
next section, where we show how to solve \textsc{DAG Partitioning}{} in linear time on
graphs of bounded treewidth. The heuristic by \citet{LBK09} is similar
to our search tree algorithm presented in \autoref{alg:simple-st}:
instead of trying all possibilities of associating a vertex with one of
its feasible sinks, it associates each vertex with the sink that it
would be most expensive to disconnect from. The algorithm is presented
in \autoref{alg:simple-trees}.
\begin{algorithm}[t]
\caption{\citet{LBK09}'s heuristic to compute small partitioning set{}s.}
\label{alg:simple-trees}
\KwIn{A directed acyclic graph~$G=(V,A)$ with arc weights~$\ensuremath{\omega}{}$.}
\KwOut{A partitioning set{}~$S$.}
$(v_1,v_2,\dots,v_n)\gets{}$reverse topological order of the vertices
of~$G$\;
$L\gets{}$array with $n$~entries\; %
$S\gets\emptyset$\;
\For{$i=1$ to~$n$}{
\lIf(\tcp*[f]{\textrm{associate~$v_i$ with itself}}\nllabel{treeL}){$v_i$ is a sink}{$L[v_i]\gets v_i$}
\Else{
$D\gets\{L[w]\mid w\in N^+(v_i)\}$\tcp*{\textrm{the set of feasible sinks for~$v_i$}}
\For{$s\in D$}{$A_s\gets\{(v_i,w)\in A\mid w\in N^+(v_i)\wedge L[w]=s\}$\nllabel{lin:Ss}\tcp*{\textrm{set of arcs to keep if $v_i$~is associated with sink~$s$}}}
$s^*\gets\arg\max_{s\in D}\ensuremath{\omega}(A_s)$\nllabel{lin:superstar}\tcp*{\textrm{sink~$s^*$ that is most expensive to disconnect $v_i$ from}}
$L[v_i]\gets s^*$\;
$S\gets S\cup \{(v_i,w)\mid w\in N^+(v_i)\wedge L[w]\ne s^*\}$\;
}
}
\Return{S}
\end{algorithm}
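To complement the pseudocode, the following is a runnable Python sketch of \autoref{alg:simple-trees}. It is our own illustration; the adjacency-list representation and the function name are assumptions, not part of the original heuristic's description.

```python
# A runnable sketch of the heuristic: associate each vertex with the sink
# that it would be most expensive to disconnect from.
from collections import defaultdict

def partition_heuristic(rev_topo, out_neighbors, weight):
    """rev_topo: vertices in reverse topological order (sinks first).
    out_neighbors: dict mapping each vertex to its list of out-neighbors.
    weight: dict mapping each arc (v, w) to its weight.
    Returns the partitioning set S of deleted arcs."""
    L = {}       # sink that each processed vertex is associated with
    S = set()
    for v in rev_topo:
        if not out_neighbors[v]:
            L[v] = v          # a sink is associated with itself
            continue
        # group the arcs leaving v by the sink their head is associated with
        arcs_to = defaultdict(list)
        for w in out_neighbors[v]:
            arcs_to[L[w]].append((v, w))
        # keep the arcs toward the sink that is most expensive to disconnect
        best = max(arcs_to, key=lambda s: sum(weight[a] for a in arcs_to[s]))
        L[v] = best
        S.update((v, w) for w in out_neighbors[v] if L[w] != best)
    return S
```

By the theorem below, the returned set is a minimum-weight partitioning set whenever the underlying undirected graph is a tree; on general DAGs it is merely a heuristic.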
\begin{theorem}\label{thm:treelin}
\autoref{alg:simple-trees} solves \textsc{DAG Partitioning}{} optimally in linear time if
the underlying undirected graph is a tree.
\end{theorem}
\begin{proof}
\autoref{alg:simple-trees} clearly works in linear time: to implement
it, we only have to iterate over the out-neighbors of each
vertex~$v_i$ once. In particular, all sets~$A_s$ in \autoref{lin:Ss}
can be computed by one iteration over each~$w\in N^+(v_i)$ and adding
the arc~$(v_i,w)$ to~$A_{L[w]}$. Moreover, \autoref{alg:simple-trees}
returns a partitioning set{}: each vertex~$v$ is associated with exactly one
sink~$L[v]$ of~$G$ and the returned set~$S$ deletes exactly the
arcs~$(v,w)$ for which $L[v]\ne L[w]$.
\begin{figure}
\centering
\begin{tikzpicture}[shorten >= 0.5mm,baseline=(v),y=0.75cm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=right:$v_i$] (vi) at (0,0) {};
\node[vertex, label=right:$v$] (v) at (0,2) {};
\node[vertex, label=left:$w$] (w) at (-2,-2) {};
\node (w') at (-1,-2) {$R_{s'}$};
\node (u') at (1,-2) {$R_{L[v_i]}$};
\node[vertex, label=left:$s'$] (s') at (-4,-4) {};
\node[vertex, label=right:$u$] (u) at (2,-2) {};
\node[vertex, label=right:${L[v_i]}$] (Lvi) at (2,-4) {};
\draw[->] (vi)--(w);
\draw[->,dashed] (vi)--(w');
\draw[->,dashed] (vi)--(u');
\draw[->] (vi)--(u);
\draw[->, decorate, decoration = {snake, segment length = 0.9cm}, dashed] (v) to[bend right=30](s');
\draw[->, decorate, decoration = {snake, segment length = 0.9cm}] (w)--(s');
\draw[->, decorate, decoration = {snake, segment length = 0.9cm}] (u)--(Lvi);
\draw[->, decorate, decoration = {snake, segment length = 0.9cm}] (v)--(vi);
\begin{pgfonlayer}{background}
\draw[fill=black!10,line width=35pt,line join=round, line cap=round,
draw=black!10] (w.center) -- (w'.center);
\draw[fill=black!10,line width=35pt,line join=round, line cap=round,
draw=black!10] (u.center) -- (u'.center);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Illustration of the proof of \autoref{thm:treelin}. Straight lines represent arcs. Wavy lines represent directed paths. Dashed arcs and paths represent arcs and paths that cannot exist since the underlying undirected graph is a tree.}
\label{fig:treelin}
\end{figure}
We show by induction on~$i$ that \autoref{alg:simple-trees} computes a
\emph{minimum-weight} partitioning set{} for~$G[\{v_1, \ldots, v_i\}]$. For the induction base case,
observe that $v_1$~is a sink and, thus, $v_1$ only reaches the
sink $L[v_1]=v_1$ in~$G\setminus S$ for all possible
minimum-weight partitioning set{}s~$S$ for~$G$. Now, assume that there is a
minimum-weight partitioning set{}~$S$ such that
each~$v\in\{v_1,\dots,v_{i-1}\}$ only reaches the sink~$L[v]$
in~$G\setminus S$. We show that there is a
minimum-weight partitioning set{}~$S'$ such that
each~$v\in\{v_1,\dots,v_{i}\}$ reaches only the sink~$L[v]$
in~$G\setminus S'$. If $v_i$~only reaches~$L[v_i]$ in~$G\setminus S$,
then we are done. Otherwise, $v_i$~reaches some sink~$s'\ne L[v_i]$
in~$G\setminus S$ and, hence, is not itself a sink. The graph~$G$ is partly illustrated in \autoref{fig:treelin}.
\looseness=-1 Let $R_s$~be the set of vertices reachable from~$v_i$ that reach some sink~$s$ in~$G$. Since the underlying undirected graph of~$G$ is a tree, $v_i$~has exactly one arc into~$R_s$ for each sink~$s$ reachable from~$v_i$. Let $(v_i,u)$~be the arc of~$v_i$ into~$R_{L[v_i]}$ and $(v_i,w)$~be the arc of~$v_i$ into~$R_{s'}$. Observe that the arc $(v_i,u)$~exists since the algorithm can set $L[v_i]$ only to sinks reachable from~$v_i$. To show that $S':=(S\setminus\{(v_i,u)\})\cup\{(v_i,w)\}$ is still a partitioning set{}, we only have to verify that~$v_i$ and all vertices reaching~$v_i$ only reach one sink in~$G\setminus S'$. For all other vertices, this follows from $S$~being a partitioning set{}.
\begin{enumerate}
\item The vertex~$v_i$ only reaches the sink~$L[v_i]$ in~$G\setminus
S'$: this is because $u=v_j$ for some $j<i$ which, by induction
hypothesis, reaches exactly one sink in~$G\setminus S$ and, hence,
in $G\setminus S'$.
\item A vertex~$v$ that reaches~$v_i$ in~$G\setminus S'$ reaches only
the sink~$L[v_i]$ in~$G\setminus S'$: otherwise $v$~reaches~$s'$
  in~$G\setminus S'$, since $v_i$, and therefore~$v$, reaches~$s'$
in~$G\setminus S$. This, however, means that $v$ has a path to $s'$ that bypasses~$v_i$ in $G\setminus S'$ and hence, in $G\setminus S$, which contradicts the undirected underlying graph of~$G\setminus S$ being a tree.
\end{enumerate}
It remains to show $\ensuremath{\omega}(S')\leq\ensuremath{\omega}(S)$, implying that $S'$ is also a
minimum-weight partitioning set{}. To see this, we analyze the sets~$A_{s'}$
and~$A_{L[v_i]}$ computed in \autoref{lin:Ss} of
\autoref{alg:simple-trees}. Observe that~$A_{s'}=\{(v_i,w)\}$ and
that $A_{L[v_i]}=\{(v_i,u)\}$. Since
$\ensuremath{\omega}(A_{s'})\leq\ensuremath{\omega}(A_{L[v_i]})$ because of the choice of~$L[v_i]$
in \autoref{lin:superstar}, we conclude that
$\ensuremath{\omega}(v_i,w)\leq\ensuremath{\omega}(v_i,u)$ and hence, $\ensuremath{\omega}(S')\leq\ensuremath{\omega}(S)$.\qed
\end{proof}
\subsection{Partitioning DAGs of bounded treewidth}\label{sec:twalg}
We now give an algorithm that solves \textsc{DAG Partitioning}{} in linear time on graphs of
bounded treewidth. In contrast to \autoref{sec:smallsol}, which
presented our search tree algorithm and an experimental evaluation
thereof, the algorithm presented below is of rather theoretical
interest: \citet{AM12} asked whether \textsc{DAG Partitioning}{} is fixed-parameter
tractable with respect to the parameter treewidth. %
With a dynamic programming algorithm, we can prove the following
theorem, which answers their open question and is an improvement of
\citet{AM12}'s algorithm, since the treewidth of a graph is at most its
pathwidth.
\begin{theorem} \label{th:fpt-tw} Given a width-$\ensuremath{t}{}$ tree
decomposition of the underlying undirected graph, \textsc{DAG Partitioning}{} can be solved
in $2^{O(\ensuremath{t}^2)}\cdot n$ time.
\end{theorem}
\noindent We first formally define the tree decomposition of a graph and its width.
\begin{definition}[Treewidth, tree decomposition]\label{treedec}
Let $G=(V,A)$~be a directed graph. A \emph{tree decomposition}~$(T,\beta)$ for~$G$ consists of a rooted tree~$T= (X,E)$ and
a mapping~$\beta\colon X\to 2^V$ of each \emph{node}~$x$ of the
tree~$T$ to a subset~$V_x:=\beta(x)\subseteq V$, called \emph{bag},
such that
\begin{enumerate}[i)]
\item\label{treedec1} for each vertex~$v\in V$, there is a node~$x$ of~$T$
with~$v\in V_x$,
\item\label{treedec2} for each arc~$(u,w)\in A$, there is a node~$x$
of~$T$ with $\{u,w\}\subseteq V_x$,
\item\label{treedec3} for each vertex~$v\in V$, the nodes~$x$ of~$T$
for which~$v\in V_x$ induce a subtree in~$T$.
\end{enumerate}
A tree decomposition is \emph{nice} if $V_r=\emptyset$ for the root~$r$ of~$T$ and each node~$x$ of~$T$ is either
\begin{itemize}
\item a \emph{leaf}: then, $V_x=\emptyset$,
\item a \emph{forget node}: then, $x$ has exactly one child node~$y$
and $V_x=V_y\setminus\{v\}$ for some~$v\in V_y$,
\item an \emph{introduce node}: then, $x$ has exactly one child node~$y$ and
$V_x=V_y\cup\{v\}$ for some~$v\in V\setminus V_y$, or
\item a \emph{join node}: then, $x$ has exactly two child nodes~$y$ and~$z$
such that~$V_x=V_y=V_z$.
\end{itemize}
The \emph{width of a tree decomposition} is one less
than the size of its largest bag. The \emph{treewidth} of a graph~$G$
is the minimum width of a tree decomposition for~$G$. For a node~$x$
of~$T$, we denote by~$U_x$ the union of the bags~$V_y$ over all
descendants~$y$ of~$x$, where $x$~counts as a descendant of itself.
\end{definition}
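As a concrete illustration of \autoref{treedec} (our code, not part of the paper), the following Python sketch checks conditions i)--iii) for a given decomposition; the data representation is an assumption.

```python
# Checker for the three tree-decomposition conditions: vertex coverage,
# arc coverage, and connectedness of the bags containing each vertex.

def is_tree_decomposition(vertices, arcs, tree_adj, bags):
    """vertices, arcs: the graph G.  tree_adj: undirected adjacency lists of
    the tree T over its nodes.  bags: dict node -> set of graph vertices."""
    nodes = list(bags)
    # i) every vertex of G occurs in some bag
    if any(all(v not in bags[x] for x in nodes) for v in vertices):
        return False
    # ii) both endpoints of every arc occur together in some bag
    if any(all(not {u, w} <= bags[x] for x in nodes) for (u, w) in arcs):
        return False
    # iii) for every vertex v, the nodes whose bags contain v induce a subtree
    for v in vertices:
        containing = [x for x in nodes if v in bags[x]]
        seen, stack = {containing[0]}, [containing[0]]
        while stack:              # DFS restricted to nodes whose bag contains v
            x = stack.pop()
            for y in tree_adj[x]:
                if v in bags[y] and y not in seen:
                    seen.add(y)
                    stack.append(y)
        if len(seen) != len(containing):
            return False
    return True
```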
\noindent\looseness=-1 For any constant~$\ensuremath{t}$, it can be decided in linear time whether a graph has treewidth at most~$\ensuremath{t}$ and, if so, a corresponding tree decomposition of width~$\ensuremath{t}$ can be constructed in linear time~\cite{Bod96}. Also in $O(\ensuremath{t} n)$~time, a tree decomposition of width~$\ensuremath{t}$ can be transformed into a nice tree decomposition with the same width and $O(\ensuremath{t} n)$~nodes~\citep{Klo94}. Hence, we assume without loss of generality that we are given a nice tree decomposition.
\bigskip\noindent Our algorithm is based on leaf-to-root dynamic
programming. That is, intuitively, we start from the leaf nodes of the
tree decomposition and compute possible partial partitioning set{}s for
each bag from the possible partial partitioning set{}s for its child
bags. Since our algorithm for \textsc{DAG Partitioning}{} on graphs of bounded treewidth is relatively intricate, we refer readers who are not yet experienced with dynamic programming on tree decompositions to introductory chapters in corresponding textbooks \citep{Nie06,DF13,CFK+15,Klo94}.
We will now precisely define a partial partitioning set{} and show that
any partial partitioning set{} for the root bag is a partitioning set{} for the entire
graph. The definition is illustrated in \autoref{fig:partsol}.
\begin{figure}
\centering
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner
sep=0pt]
\node[vertex] (v1) at (0,0) {};
\node[vertex] (v2) at (4,0) {};
\node[vertex,label=below:$s_1$] (v3) at (-1,-1) {};
\node[vertex] (v4) at (1,1) {};
\node[vertex,label=left:$s_4$] (v14) at (1,2) {};
\node[vertex] (v5) at (3,1) {};
\node[vertex,label=below:$s_2$] (v6) at (5,-1) {};
\node[vertex] (v10) at (2,2) {};
\node[vertex] (v11) at (1,-1) {};
\node[vertex,label=right:$s_3$] (v12) at (2,1) {};
\node[vertex] (v13) at (2,0) {};
\draw[->] (v1)--(v11);
\draw[->] (v11)--(v3);
\draw[->, very thick] (v4)--(v11);
\draw[->,dotted] (v11)--(v13);
\draw[->] (v13)--(v12);
\draw[->] (v13)--(v5);
\draw[->] (v12)--(v10);
\draw[->] (v1)--node[pos=0.5,label=above:$U_x$]{}(v3);
\draw[->] (v1)--(v4);
\draw[->] (v5)--(v2);
\draw[->] (v2)--(v6);
\draw[->] (v4)--(v14);
\draw[->,dotted] (v4)-- node[pos=0.5,label=above:$V_x$]{} (v12);
\draw[->] (v13)--(v2);
\draw[->] (v10)--(v5);
\begin{pgfonlayer}{background}
%
%
%
%
%
%
\tikzstyle{edge} = [color=black!10,line cap=round, line join=round,
line width=35pt]
\draw[edge,line width=40pt,fill] (v6.center)--(v5.center)--(v4.center)--(v3.center)--(v11.center)--(v13.center)--(v2.center);
\draw[edge,fill,color=black!20] (v4.center)--(v5.center)--(v13.center)--cycle;
\end{pgfonlayer}
\end{tikzpicture}
\caption{The set~$\ensuremath{S}{}$ consisting of the dotted arc is not a
partitioning set{} for the shown graph~$G$, but a partial partitioning set{}
    for~$G[U_x]$. Note that $\ensuremath{S}{}$~would not be a partial partitioning set{} if
    it additionally contained the bold arc, since then the
sink~$s_1$ would be in a connected component with a vertex of~$V_x$
but not reachable from~$V_x$. Also note that $\ensuremath{S}{}$ does not
separate~$s_2$ from~$s_3$, which are sinks
in~$G[U_x]$ but not in~$G$.}
\label{fig:partsol}
\end{figure}
\begin{definition}[Partial partitioning set{}]\label{def:partsol}
A \emph{partial partitioning set{}}~$\ensuremath{S}{}$ for~$G[U_x]$ is an arc
set~$\ensuremath{S}{}\subseteq A(G[U_x])$ such that
\begin{enumerate}[(i)]
\item\label{partsol1} no connected component of $G[U_x] \setminus
\ensuremath{S}{}$ contains two different sinks of $U_x \setminus V_x$, and
\item\label{partsol2} every sink in a connected component of
$G[U_x]\setminus\ensuremath{S}{}$ that contains a vertex of~$V_x$ can be reached
from some vertex of~$V_x$ in $G[U_x]\setminus\ensuremath{S}{}$.
\end{enumerate}
\end{definition}
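As an illustration only (our code; we read ``sink'' as a vertex without outgoing arcs in $G[U_x]\setminus\ensuremath{S}{}$, and the data representation is an assumption), the two conditions of \autoref{def:partsol} can be checked as follows:

```python
# Checker for the two conditions of a partial partitioning set:
# (i)  no component of G[U_x] minus S has two sinks outside the bag V_x,
# (ii) every sink in a component meeting V_x is reachable from V_x.

def is_partial_partitioning_set(U_x, V_x, arcs, S):
    remaining = [a for a in arcs if a not in S]
    tails = {u for (u, _) in remaining}
    sinks = {v for v in U_x if v not in tails}
    # vertices reachable from the bag V_x in G[U_x] minus S
    out = {}
    for u, w in remaining:
        out.setdefault(u, []).append(w)
    reach, stack = set(V_x), list(V_x)
    while stack:
        u = stack.pop()
        for w in out.get(u, []):
            if w not in reach:
                reach.add(w)
                stack.append(w)
    # connected components of the underlying undirected graph
    adj = {v: set() for v in U_x}
    for u, w in remaining:
        adj[u].add(w)
        adj[w].add(u)
    seen = set()
    for v in U_x:
        if v in seen:
            continue
        comp, todo = set(), [v]
        while todo:
            u = todo.pop()
            if u in comp:
                continue
            comp.add(u)
            todo.extend(adj[u])
        seen |= comp
        # (i) at most one sink outside the bag per connected component
        if len((comp & sinks) - V_x) > 1:
            return False
        # (ii) components meeting the bag: all their sinks reachable from it
        if comp & V_x and not (comp & sinks) <= reach:
            return False
    return True
```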
\noindent Since we assume a tree decomposition with a
root~$r$ whose bag~$V_r$ is empty, any partial partitioning set{}
for~$G[U_r]=G$ will be a partitioning set{} for the entire graph~$G$. Moreover,
\autoref{def:partsol}(\ref{partsol1}) does not require
partial partitioning set{}s for~$G[U_x]$ to separate sinks in the
bag~$V_x$. This is because vertices in~$V_x$ that are sinks in~$G[U_x]$
might be non-sinks for a supergraph, as illustrated in
\autoref{fig:partsol}. Thus, it might be unnecessary to separate the
vertices in~$V_x$. However, due to
\autoref{treedec}(\ref{treedec2}~and~\ref{treedec3}) of a tree
decomposition, sinks in~$U_x\setminus V_x$ are sinks in all
supergraphs~$G[U_{q}]$ for $q$~being an ancestor node of~$x$.
\autoref{def:partsol}(\ref{partsol2}), by
\autoref{lem:dirundirequiv}, allows us to ensure that components containing both a
sink in~$U_x\setminus V_x$ and a vertex of~$V_x$ end up with
only one sink in some supergraph~$G[U_{q}]$. The precise purpose of \autoref{def:partsol}(\ref{partsol2}) will be explained in more detail after the upcoming \autoref{patsat}.
Note that, to keep the notation uncluttered, \autoref{def:partsol}
implicitly relies on the bag~$V_x$ belonging to each set~$U_x$. Thus,
when a tree decomposition has a node~$x$ and a child node~$y$ such
that~$V_x\subsetneq V_y$ but~$U_x=U_y$, a partial partitioning set{}~$\ensuremath{S}{}$
for~$G[U_y]$ is not necessarily a partial partitioning set{} for~$G[U_x]$,
although~$G[U_y]\setminus\ensuremath{S}{}=G[U_x]\setminus\ensuremath{S}{}$.
\bigskip\noindent Now, assume that we want to compute
partial partitioning set{}s for~$G[U_x]$ from partial partitioning set{}s for child
nodes of~$x$. These partial partitioning set{}s might, for example, disagree on
which arcs to delete in the child bags or which connected components of
the child bags are meant to end up in a common connected component of
the entire graph: for a child node~$y$ of~$x$, multiple connected
components of~$G[U_y]\setminus\ensuremath{S}{}$ might be one connected component
of~$G[U_x]\setminus\ensuremath{S}{}$. To prevent such incompatibilities, we only
consider those partial partitioning set{}s for~$G[U_x]$ that agree with
partial partitioning set{}s for the child nodes of~$x$ on certain
\emph{patterns}.
On a high level, our algorithm will store for each node of the tree
decomposition a table with one row for each possible pattern. The value
of a row will be the minimum weight of a partial partitioning set{}
\emph{satisfying} this pattern. To compute this value, our algorithm
will use the rows with \emph{corresponding} patterns in the tables of
the child nodes. In the following, we first formalize the terms
\emph{patterns} and \emph{satisfying} partial partitioning set{s}. Then, we
present our algorithm and we specify the \emph{corresponding} patterns.
We start by formally defining patterns, see \autoref{fig:pattern} for an
illustration.
\begin{definition}[Pattern]\label{pat}
Let $x$~be a node of a tree decomposition~$T$. A \emph{pattern
for~$x$} is a triple~$(\mathcal{R}{},\mathcal{G}{},\mathcal{P})$ such that
\begin{enumerate}[i)]
 \item\label{pat3} $\mathcal{R}{}$ is a directed acyclic graph with the
   vertices~$V_x$,
\item\label{pat1} $\mathcal{G}{}$ is a directed acyclic graph with the
vertices~$V_x$ and at most $|V_x|$~additional vertices such that
each vertex in~$V(\mathcal{G}{})\setminus V_x$ is a non-isolated sink, and
\item\label{pat2} $\mathcal{P}$ is a partition of the vertices
of~$\mathcal{G}{}$
%
%
such that each connected
component of $\mathcal{G}{}$
is within one set~$P_i\in\mathcal{P}$ and such that each $P_i$
contains at most one vertex of $V(\mathcal{G}{})\setminus V_x$.
\end{enumerate}
\end{definition}
We will use a pattern~$(\mathcal{R}{},\mathcal{G}{},\mathcal{P})$ for~$x$ to
capture important properties of partial partitioning set{}s for~$G[U_x]$.
Intuitively, the graph~$\mathcal{R}{}$ will describe which arcs between the
vertices in the bag~$V_x$ a partial partitioning set{}~$\ensuremath{S}{}$ for~$G[U_x]$
will not delete. The graph~$\mathcal{G}{}$ will
describe %
which vertices of~$V_x$ can reach each other in~$G[U_x]\setminus\ensuremath{S}{}$
and which sinks outside of~$V_x$ they can reach. The
partition~$\mathcal{P}$ describes which vertices are meant to end up in the
same connected component of~$G\setminus\ensuremath{S}{}$ for a partitioning set{}~$\ensuremath{S}{}$
of the entire graph. For this reason, the sets of the
partition~$\mathcal{P}$ are allowed to contain only one vertex
of~$V(\mathcal{G}{})\setminus V_x$ as these vertices are sinks.
We will now explain precisely what it means for a partial partitioning set{} to
\emph{satisfy} a pattern. The following definition is illustrated in
\autoref{fig:pattern}.
\begin{figure}
\centering
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner
sep=0pt]
\node[vertex,label=below:$s_1$] (v3) at (-1,-1) {};
\node[vertex] (v4) at (1,1) {};
\node[vertex] (v5) at (3,1) {};
\node[vertex,label=below:$s_2$] (v6) at (5,-1) {};
\node[vertex,label=right:$s_3$] (v12) at (2,1) {};
\node[vertex] (v13) at (2,0) {};
\node at (2,2) {$V_x$};
\node at (-1,0) {$P_1$};
\node at (5,0) {$P_2$};
\draw[->] (v13)--(v12);
\draw[->] (v13)--(v5);
\draw[->] (v4)--(v3);
\draw[->] (v5)--(v6);
\draw[->] (v13)--(v6);
\begin{pgfonlayer}{background}
\tikzstyle{edge} = [color=black!10,line cap=round, line join=round,
line width=35pt]
\draw[edge,fill,color=black!10] (v4.center)--(v5.center)--(v13.center)--cycle;
\draw[edge,fill,color=black!20,line width=25pt] (v4.center)--(v3.center);
\draw[edge,fill,color=black!20,line width=25pt] (v6.center)--(v5.center)--(v12.center)--(v13.center)--cycle;
\end{pgfonlayer}
\end{tikzpicture}
\caption{A pattern~$(\mathcal{R},\mathcal{G},\mathcal{P})$, where shown are the graph~$\mathcal{G}{}$
and a partition~$\mathcal{P}{}$ of its vertices into two sets~$P_1$
and~$P_2$. If the graph~$\mathcal{R}$ is the subgraph of~$\mathcal{G}{}$ induced by the vertices~$V_x$, then, in terms of \autoref{patsat}, $\mathcal{R}=\mathcal{R}_x(\ensuremath{S}{})$ and~$\mathcal{G}{}=\mathcal{G}_x(\ensuremath{S}{})$ for the partial
partitioning set{}~$\ensuremath{S}{}$ for~$G[U_x]$ shown in \autoref{fig:partsol}. In
this case, the partial partitioning set{}~$\ensuremath{S}{}$ satisfies the shown
pattern. Moreover, a vertex is a sink in this figure if and only if
it is a sink in~$G[U_x]\setminus\ensuremath{S}{}$ shown in
\autoref{fig:partsol}.}
\label{fig:pattern}
\end{figure}
\begin{definition}[Pattern satisfaction]\label{patsat}
Let $\ensuremath{S}{}$ be a partial partitioning set{} for~$G[U_x]$. A~sink~$s$ in $U_x
\setminus V_x$ is \emph{bag-reachable{} in $G[U_x]\setminus\ensuremath{S}{}$} if
some vertex in~$V_x$ can reach~$s$ in~$G[U_x]\setminus\ensuremath{S}{}$. We
define a canonical
pattern~$(\mathcal{R}_x(\ensuremath{S}{}),\mathcal{G}_x(\ensuremath{S}{}),\mathcal{P}_x(\ensuremath{S}{}))$
at~$x$ for~$\ensuremath{S}$, where
\begin{itemize}
\item\label{defbagsol} $\mathcal{R}_x(\ensuremath{S}{})$ is~$G[V_x]\setminus\ensuremath{S}{}$,
\item\label{defbagreach} $\mathcal{G}{}_x(\ensuremath{S}{})$ is the directed
acyclic graph on the vertices~$V_x \cup V'$, where $V'$ is the set
of bag-reachable{} sinks in~$G[U_x]\setminus\ensuremath{S}{}$, and there is an
arc~$(u,v)$ in $\mathcal{G}{}_x(\ensuremath{S}{})$ if and only if the vertex~$u$
can reach the vertex~$v$ in~$G[U_x]\setminus\ensuremath{S}{}$, and
 \item\label{defbagpart} $\mathcal{P}_x(\ensuremath{S}{})$ is the partition of
the vertices of~$\mathcal{G}{}_x(\ensuremath{S}{})$ such that the vertices $u$ and
$v$ are in the same set of $\mathcal{P}_x(\ensuremath{S}{})$ if and only if they
are in the same connected component of~$G[U_x]\setminus\ensuremath{S}{}$.
\end{itemize}
%
%
%
%
%
%
\noindent Let $(\mathcal{R},\mathcal{G}{},\mathcal{P})$ be a pattern
for~$x$. We say that~$\ensuremath{S}{}$ \emph{satisfies} the pattern
$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ if
\begin{enumerate}[i)]
\item\label{patsatsol} $\mathcal{R}=\mathcal{R}_x(\ensuremath{S}{})$,
\item\label{patsat1} $\mathcal{G}{}=\mathcal{G}{}_x(\ensuremath{S}{})$, and
\item\label{patsat2} for each set~$P \in \mathcal{P}_x(\ensuremath{S}{})$ there
exists a set~$P' \in \mathcal{P}$ such that $P \subseteq P'$, that is,
$\mathcal{P}$ is a \emph{coarsening} of $\mathcal{P}_x(\ensuremath{S}{})$.
\end{enumerate}
\end{definition}
\noindent It is easy to verify that a partial partitioning set{}~$\ensuremath{S}{}$
for~$G[U_x]$ satisfies its canonical
pattern~$(\mathcal{R}_x(\ensuremath{S}{}), \mathcal{G}{}_x(\ensuremath{S}{}),\mathcal{P}_x(\ensuremath{S}{}))$
at node~$x$: to this end, observe that
$(\mathcal{R}_x(\ensuremath{S}{}),\mathcal{G}{}_x(\ensuremath{S}{}),\mathcal{P}_x(\ensuremath{S}{}))$~is indeed
a pattern for~$x$: for the vertex set~$V_x \cup V'$
of~$\mathcal{G}{}_x(\ensuremath{S}{})$, we have $|V'|\leq |V_x|$ since each vertex
in~$V_x$ can reach at most one sink in~$V'\subseteq
U_x\setminus V_x$ in~$G[U_x]\setminus\ensuremath{S}{}$.
Note that, since $\mathcal{G}{}_x(\ensuremath{S}{})$~contains an arc $(u,v)$ if and
only if $u$~\emph{can reach}~$v$ instead of requiring them to be merely
connected, a vertex is a sink in~$\mathcal{G}{}_x(\ensuremath{S}{})$ if and only if
it is a sink in~$G[U_x]\setminus\ensuremath{S}{}$. Herein,
\autoref{def:partsol}(\ref{partsol2}) ensures that any sink~$s$
connected to a vertex in~$V_x$ is a vertex in~$\mathcal{G}{}_x(\ensuremath{S}{})$. %
While it might seem more natural to replace condition (\ref{patsat2}) in \autoref{patsat} by simply requiring $\mathcal{P}=\mathcal{P}_x(\ensuremath{S}{})$,
we prefer the current definition because it allows several connected components of~$G[U_x]\setminus\ensuremath{S}{}$ to become part of one connected component of the entire graph. This greatly simplifies some parts of the algorithm.
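Condition (\ref{patsat2}) amounts to a plain containment test between two partitions. The following minimal sketch (not part of the formal development; it assumes a hypothetical encoding of parts as Python frozensets and a function name of our choosing) illustrates it:

```python
def is_coarsening(P, Px):
    """Return True if the partition P is a coarsening of the partition Px,
    that is, every part of Px is contained in some part of P.
    Parts are encoded as frozensets of vertices (a hypothetical encoding)."""
    return all(any(part <= coarse for coarse in P) for part in Px)
```

For example, $\{\{1,2,3\}\}$ coarsens $\{\{1\},\{2,3\}\}$, but not vice versa.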
\paragraph{The Algorithm} We now describe a dynamic programming
algorithm. Starting from the leaves of the tree decomposition~\(T\) and
working our way to its root, with each node~$x$ of~\(T\), we associate a table~$\textsl{Tab}_x$ that is indexed by all
possible patterns for~$x$. Semantically, we want that
\begin{align*}
\textsl{Tab}_x(\mathcal{R},\mathcal{G}{}, \mathcal{P}) &= \text{minimum weight of a partial
partitioning set{} for~$G[U_x]$ that satisfies the pattern
$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$}.
\end{align*}
Since %
we have~$V_r=\emptyset$, there is exactly one
pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ for the root~$r$:
$\mathcal{R}=\mathcal{G}{}$~is the empty graph and~$\mathcal{P}=\emptyset$. Thus,
$\textsl{Tab}_r$ has exactly one entry and it contains the minimum weight of a
partial partitioning set{}~$\ensuremath{S}{}$ for~$G[U_r]$, which is equivalent
to~$\ensuremath{S}{}$ being a partitioning set{} for~$G$. It follows that once the tables
are correctly filled, to decide the \textsc{DAG Partitioning}{} instance~$(G,\ensuremath{\omega}{},k)$, it
is enough to test whether the only entry of $\textsl{Tab}_r$ is at most~$k$.
We now present an algorithm to fill the tables and prove its
correctness. First, we initialize all table entries of all tables
by~$\infty$. By \emph{updating the entry
$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ with~$m$} we mean setting
$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P}):= m$ if
$m < \textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$. For each leaf node~$x$, it
is obviously correct to set~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})=0$ for
the only pattern $(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$, which has the
empty graph as~$\mathcal{R}$ and~$\mathcal{G}{}$ and the empty set
as~$\mathcal{P}$.
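The bookkeeping just described, initializing all entries with~$\infty$, updating an entry only when the new weight is smaller, setting the single leaf entry to~$0$, and finally comparing the single root entry with~$k$, can be sketched as follows. This is an informal illustration only; the pattern encoding (opaque hashable triples) and all function names are our own and not part of the formal development:

```python
import math
from collections import defaultdict

def make_table():
    """Each node x of the tree decomposition gets a table Tab_x mapping
    patterns to weights; all entries start at infinity."""
    return defaultdict(lambda: math.inf)

def update(table, pattern, m):
    """Update Tab_x(pattern) with m: set it to m only if m is smaller."""
    if m < table[pattern]:
        table[pattern] = m

# A leaf node has exactly one pattern (empty R, G, and P); its weight is 0.
EMPTY_PATTERN = (frozenset(), frozenset(), frozenset())

def init_leaf(table):
    update(table, EMPTY_PATTERN, 0)

def decide(root_table, k):
    """The root bag is empty, so its table has exactly one entry; the
    instance is a yes-instance iff that entry is at most k."""
    return root_table[EMPTY_PATTERN] <= k
```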
In the following, for each type of a node~\(x\) of a tree decomposition, that is, for forget nodes, introduce nodes, and join nodes, we independently show how to compute the table~$\textsl{Tab}_x$ given that we correctly computed the tables for all children of~$x$. To show that the table \(\textsl{Tab}_x\) is filled correctly, we prove the following lemma for each node type.
\begin{lemma}\label{lem:ulobo}\leavevmode
\begin{enumerate}[(i)]
\item \label{upperbound}
There is a partial partitioning set{} for~$G[U_x]$ satisfying a
pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ with weight at
most~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$.
\item \label{lowerbound}
The minimum weight of a partial partitioning set{} for~$G[U_x]$ satisfying a
pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ is at
least~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$.
\end{enumerate}
\end{lemma}
\noindent We present the algorithm and the proof for \autoref{lem:ulobo} independently for each node type in Sections~\ref{sec:fnod}, \ref{sec:inod}, and \ref{sec:jnod}, respectively, where we assume that all tables~$\textsl{Tab}_y$ for child nodes~$y$ of~$x$ have been computed correctly.
\subsubsection{Forget nodes}\label{sec:fnod}
We use the following procedure to compute the table~\(\textsl{Tab}_x\) of a forget node~\(x\) under the assumption that the table~$\textsl{Tab}_y$ for the child node~$y$ of~$x$ has been computed correctly.
\begin{proc}[Forget node]\label{forgproc}\upshape
Let $x$~be a forget node with a single child $y$. Assume that $v$~is
the vertex being ``forgotten'', that is, $v$~is in the child bag~$V_y$
but not in the current bag~$V_x$. From the weights of optimal partial
partitioning set{}s for~$G[U_y]$, we want to compute the weight of optimal
partial partitioning set{}s for~$G[U_x]$.
To this end, for each pattern $(\mathcal{R},\mathcal{G}{},\mathcal{P})$
for~$y$, we distinguish four cases. In each case, we will construct a
pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ for~$x$ such that a
partial partitioning set{} for~$G[U_y]$ that
satisfies~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ is a partial
partitioning set{} for~$G[U_x]$ and
satisfies~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$. Then, we update
$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the value
of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$.
Herein, the following case distinction is not exhaustive. We do not
take action for patterns~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ that do not
satisfy any of the following conditions (for the reasons informally
explained in the cases).
In all cases, we set
$\mathcal{R}':=\mathcal{R}-\{v\}$.
\begin{caselist}
\item\label{forgcase1} If $v$~is isolated in~$\mathcal{G}{}$ and there is
a set~$\{v\}$ in~$\mathcal{P}$, then we let $\mathcal{G}{}':=\mathcal{G}{}
-\{v\}$ and $\mathcal{P}':=\mathcal{P}\setminus\{\{v\}\}$ and update
$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the value
of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$: an isolated vertex that is alone in its part of~$\mathcal{P}$ can simply be forgotten. %
\item\label{forgcase2} If $v$ is a non-isolated sink in~$\mathcal{G}{}$
and $v \in P_i \in \mathcal{P}$ such that $P_i \subseteq V_y$, then
we let $\mathcal{G}{}':=\mathcal{G}{}$ and~$\mathcal{P}':=\mathcal{P}$. We
update $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the value
of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$: in this case, the sink~$v$ ``moves'' from~\(V_y\) to %
$V(\mathcal{G}{}')\setminus V_x$. To ensure that $(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ is
a pattern, the part~\(P_i\) containing~$v$ cannot contain any additional sink in
$V(\mathcal{G}{})\setminus V_y$, thus we require \(P_i\subseteq V_y\).
\item\label{forgcase3} If $v$ is not a sink in $\mathcal{G}{}$ and there
is no sink in $V(\mathcal{G}{})\setminus V_y$ such that $v$~is its
only in-neighbor, then let $\mathcal{G}{}':=\mathcal{G}{} - \{v\}$ and
$\mathcal{P}'$ be the partition of the vertices of~$\mathcal{G}{}'$
obtained from~$\mathcal{P}$ by removing~$v$ from the set it is
in. Update $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the value
of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$.
This is the simplest case, where the vertex is somewhat unimportant to partial partitioning sets satisfying the pattern $(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at $y$, so we simply forget it.
\item\label{forgcase4} If there is a sink~$u\in V(\mathcal{G}{})\setminus V_y$ such that $v$~is its only in-neighbor
and $\{u,v\}$ is a set of~$\mathcal{P}$, then let
$\mathcal{G}{}':=\mathcal{G}{} - \{u,v\}$ and $\mathcal{P}'$ be the partition
of the vertices of~$\mathcal{G}{}'$ obtained from $\mathcal{P}$ by removing
the set $\{u,v\}$. Update $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the
value of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$:
If there was a sink~$u$ in $V(\mathcal{G}{}) \setminus V_y$ only reachable from~$v$, then it would be unreachable from $V_x$ since $v$~is forgotten. Therefore, if the part~$P_i$ of~$\mathcal{P}$ containing~$u$ and~\(v\) contained more vertices, then we could not be sure that a partial partitioning set satisfying the pattern $(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$y$ is a partial partitioning set for~$G[U_x]$ at all. Namely, it may break \autoref{def:partsol}(\ref{partsol2}).
\end{caselist}
%
%
\end{proc}
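To make the case distinction concrete, the following sketch implements Case~\ref{forgcase1} in isolation, under a simplified, hypothetical encoding of patterns ($\mathcal{G}{}$ as a frozenset of arcs, $\mathcal{P}$ as a frozenset of frozensets; $\mathcal{R}$ is omitted for brevity, and the function name is our own):

```python
import math
from collections import defaultdict

def forget_case1(tab_y, v):
    """Sketch of Case 1 only: if the forgotten vertex v is isolated in G and
    {v} is a part of P, drop v from the pattern and carry the child entry's
    weight over unchanged.  Patterns are encoded as (G, P) with G a frozenset
    of arcs and P a frozenset of frozensets of vertices."""
    tab_x = defaultdict(lambda: math.inf)
    for (G, P), weight in tab_y.items():
        isolated = all(v not in arc for arc in G)
        if isolated and frozenset({v}) in P:
            new_pattern = (G, P - {frozenset({v})})  # here G' = G - {v} = G
            if weight < tab_x[new_pattern]:
                tab_x[new_pattern] = weight
    return tab_x
```

Patterns to which no case applies simply produce no entry in the parent table, mirroring the non-exhaustive case distinction above.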
\noindent We show that \autoref{forgproc} fills the table~\(\textsl{Tab}_x\) associated with a forget node~\(x\) correctly. First, we show that there is a partial partitioning set{} for~$G[U_x]$ satisfying a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ and having weight at most~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by \autoref{forgproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{upperbound} for forget nodes]
Let $x$~be a forget node with child node~$y$ and let $v$~be the vertex
``forgotten'', that is, $v$~is in the child bag~$V_y$ but not in the
current bag~$V_x$. For any table
entry~$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}',\mathcal{P}')<\infty$, we show that
there is a partial partitioning set{}~$\ensuremath{S}{}$ for~$G[U_x]$ satisfying
$(\mathcal{R}',\mathcal{G}',\mathcal{P}')$ and having weight at
most~$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}',\mathcal{P}')$. To this end, observe
that, since $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}',\mathcal{P}')<\infty$, there is a
pattern~$(\mathcal{R},\mathcal{G},\mathcal{P})$ for~$y$ from which
\autoref{forgproc} generates~$(\mathcal{R}',\mathcal{G}',\mathcal{P}')$ and
such that
$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}',\mathcal{P}')=\textsl{Tab}_y(\mathcal{R},\mathcal{G},\mathcal{P})$.
Since there is a partial partitioning set{} for~$G[U_y]$ that
satisfies~$(\mathcal{R},\mathcal{G},\mathcal{P})$ and has weight at
most~$\textsl{Tab}_y(\mathcal{R},\mathcal{G},\mathcal{P})$, it is sufficient to show
that \emph{any} partial partitioning set{}~$\ensuremath{S}{}$ for~$G[U_y]$ that
satisfies the pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$y$ is also
a partial partitioning set{} for~$G[U_x]$ that satisfies at~$x$ the
pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ generated in each of the
cases~(\ref{forgcase1})--(\ref{forgcase4}) of
\autoref{forgproc}.
We first argue that~$\ensuremath{S}{}$ is a partial partitioning set{} for~$G[U_x]$ if
any of the cases~(\ref{forgcase1})--(\ref{forgcase4}) of
\autoref{forgproc} applies. We first verify
\autoref{def:partsol}(\ref{partsol1}). To this end, observe that by
\autoref{patsat}, a vertex~$u \in V_y$ is a sink
in~$\mathcal{G}{}_y(\ensuremath{S}{})$ if and only if it is a sink
in~$G[U_y]\setminus\ensuremath{S}{}$. Now, assume that there is a connected
component of~$G[U_x]\setminus\ensuremath{S}{}=G[U_y]\setminus\ensuremath{S}{}$ that
contains two different sinks~$s_1,s_2$ in~$U_x \setminus V_x$. Then,
one of these sinks, say~$s_1$, must be~$v$. Since, then, $v\in V_y$~is
a sink in~$G[U_x]\setminus\ensuremath{S}{}=G[U_y]\setminus\ensuremath{S}{}$, it is a sink
in $\mathcal{G}{}=\mathcal{G}{}_y(\ensuremath{S}{})$ and none of the
cases~(\ref{forgcase3}) and~(\ref{forgcase4}) apply. Moreover, since
$s_2$~is connected to~$v\in V_y$ in~$G[U_y]\setminus\ensuremath{S}{}$, by
\autoref{def:partsol}(\ref{partsol2}), some vertex in~$V_y$ can
reach~$s_2$, implying that $s_2$~is a vertex
of~$\mathcal{G}{}=\mathcal{G}_y(\ensuremath{S}{})$. Thus, by
\autoref{patsat}(\ref{patsat2}), $s_2$~is in the same
set~$P_i\in\mathcal{P}$ as $s_1=v$ and, hence, (\ref{forgcase1})~does
not apply. Since $s_2\notin V_y$, also (\ref{forgcase2})~does not
apply.
We now verify \autoref{def:partsol}(\ref{partsol2}). It can only be
violated if $v$~is the only vertex of~$V_y$ that can reach some
sink~$u$ in the connected component of~$v$ in
%
$G[U_y]\setminus\ensuremath{S}{}$. However, then, $v$~is the only in-neighbor
of~$u$ in~$\mathcal{G}{}_y(\ensuremath{S}{})=\mathcal{G}{}$. Hence, only
case~(\ref{forgcase4}) might become applicable. When this case
applies, however, $\{u,v\}\in\mathcal{P}{}$ implies that no vertex
in~$V_y\supseteq V_x$ is connected to~$v$ or~$u$
in~$G[U_y]\setminus\ensuremath{S}=G[U_x]\setminus\ensuremath{S}$. Thus,
\autoref{def:partsol}(\ref{partsol2}) is satisfied.
It remains to show that $\ensuremath{S}{}$ satisfies the generated
pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$, that is, to verify
$\mathcal{R}'=\mathcal{R}_x(\ensuremath{S}{})$ (\autoref{patsat}(\ref{patsatsol})),
$\mathcal{G}{}'=\mathcal{G}{}_x(\ensuremath{S}{})$ (\autoref{patsat}(\ref{patsat1}))
and that~$\mathcal{P}'$ is a coarsening of $\mathcal{P}_x(\ensuremath{S}{})$
(\autoref{patsat}(\ref{patsat2})). Herein,
$\mathcal{R}'=\mathcal{R}-\{v\}=\mathcal{R}_y(\ensuremath{S})-\{v\}=\mathcal{R}_x(\ensuremath{S})$ is
trivial. To show $\mathcal{G}{}'=\mathcal{G}{}_x(\ensuremath{S}{})$, we
distinguish which case of \autoref{forgproc} applied.
\begin{caselist}
\item[Case~\ref{forgcase1})~~] In this case, $v$ is not in~$V_x$ and,
obviously, not a bag-reachable{} sink
in~$G[U_x]\setminus\ensuremath{S}{}$. Hence, $v$~is not
in~$\mathcal{G}_x(\ensuremath{S}{})$. Moreover, $v$~is isolated
in~$\mathcal{G}{}=\mathcal{G}{}_y(\ensuremath{S}{})$.
Therefore, \autoref{forgproc} sets
$\mathcal{G}{}':=\mathcal{G}-\{v\}=\mathcal{G}{}_y(\ensuremath{S}{})-\{v\} =
\mathcal{G}{}_x(\ensuremath{S}{})$.
\item[Case~\ref{forgcase2})~~] In this case, $v$ is not in $V_x$ but
it is a bag-reachable{} sink in $G[U_x]\setminus\ensuremath{S}{}$, since it is
not isolated in~$\mathcal{G}{}$. Therefore, \autoref{forgproc} sets
$\mathcal{G}':=\mathcal{G}=\mathcal{G}{}_y(\ensuremath{S}{})=\mathcal{G}{}_x(\ensuremath{S}{})$.
\item[Case~\ref{forgcase3})~~] In this case, $v$ is not a sink in
$\mathcal{G}{}$ (and thus also not in $G[U_x]\setminus\ensuremath{S}{}$) and,
therefore, clearly does not appear in~$\mathcal{G}{}_x(\ensuremath{S}{})$. Moreover, any sink not in~$V_y$ is
reachable from a vertex in~$V_y\setminus\{v\}=V_x$
in~$\mathcal{G}{}_y(\ensuremath{S}{})$ if and only if it is reachable
in~$\mathcal{G}{}_y(\ensuremath{S}{})-\{v\}$. Hence, \autoref{forgproc} sets
$\mathcal{G}':= \mathcal{G}{} - \{v\} = \mathcal{G}{}_y(\ensuremath{S}{}) - \{v\} = \mathcal{G}{}_x(\ensuremath{S}{})$.
\item[Case~\ref{forgcase4})~~] In this case, $u$~was a bag-reachable{}
sink in $G[U_y]\setminus\ensuremath{S}{}$ but is not bag-reachable{} in
$G[U_x]\setminus\ensuremath{S}{}$. Moreover, since $\{v,u\}\in\mathcal{P}$, no
vertex of~$V_x$ is connected to~$v$ in
$G[U_y]\setminus\ensuremath{S}=G[U_x]\setminus\ensuremath{S}{}$. Hence, neither~$v$
nor~$u$ are vertices of~$\mathcal{G}_x(\ensuremath{S}{})$. Hence,
\autoref{forgproc} sets $\mathcal{G}{}':=\mathcal{G}{} -
\{u,v\}=\mathcal{G}{}_y(\ensuremath{S}{})-\{u,v\}=\mathcal{G}{}_x(\ensuremath{S}{})$.
\end{caselist}
Finally, we verify \autoref{patsat}(\ref{patsat2}) by showing that
$\mathcal{P}'$~is a coarsening of~$\mathcal{P}_x(\ensuremath{S}{})$. Assume the
contrary. Then, there are two vertices~$u,w$ of~$\mathcal{G}{}_x(\ensuremath{S}{})$
in the same set of~$\mathcal{P}_x(\ensuremath{S}{})$ but in different sets of
$\mathcal{P}'$. By construction of~$\mathcal{P}'$ from~$\mathcal{P}$, they are
also in different sets of~$\mathcal{P}=\mathcal{P}_y(\ensuremath{S}{})$. It follows
that~$u$ and~$w$ lie in the same connected component
of~$G[U_x]\setminus\ensuremath{S}{}$ but in different connected components
of~$G[U_y]\setminus\ensuremath{S}{}$. Since these two graphs are the same, we have a contradiction.\qed
\end{proof}
\noindent We now show that the minimum weight of a partial partitioning set{} for~$G[U_x]$
satisfying a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ is at
least~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by \autoref{forgproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{lowerbound} for forget nodes]
Let $x$~be a forget node with child node~$y$. Let $v$~be the vertex ``forgotten'', that is, $v\in V_y$ but $v\notin V_x$. Assume that $\ensuremath{S}{}$~is a partial partitioning set{} for~$G[U_x]$ satisfying the pattern~$(\mathcal{R}_x,\mathcal{G}{}_x, \mathcal{P}_x)$ at~$x$. It is sufficient to construct a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P}{})$ that $\ensuremath{S}{}$~satisfies at~$y$ and from which \autoref{forgproc} generates exactly the pattern~$(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}_x)$ to update the table $\textsl{Tab}_x(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}_x)$ with $\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})$. Then, \autoref{lem:ulobo}\eqref{lowerbound} follows for forget nodes, because we have $\textsl{Tab}_x(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}_x)\leq \textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}) \leq \ensuremath{\omega}{}(\ensuremath{S}{})$. Herein, the last inequality follows from the induction hypothesis.
We first show that $\ensuremath{S}{}$ is a partial partitioning set{} for~$G[U_y]$, that
is, we verify \autoref{def:partsol}.
\autoref{def:partsol}(\ref{partsol1}) is easy to verify: since
$\ensuremath{S}{}$ is a partial partitioning set{} for~$G[U_x]$,
%
each connected component of
$G[U_x]\setminus\ensuremath{S}{}=G[U_y]\setminus\ensuremath{S}{}$ contains at most one
sink in~$U_x\setminus V_x\supseteq U_y \setminus V_y$. It remains to
verify
\autoref{def:partsol}(\ref{partsol2}). %
%
%
%
Assume, for a contradiction, that there is a connected component~$C$ in~$G[U_y]\setminus\ensuremath{S}{}$ that contains a vertex of~$V_y$ such that no vertex of~$V_y\supseteq V_x$ can reach some sink~$s\in C\setminus V_y$. Then, since $\ensuremath{S}{}$ is a partial partitioning set{} for~$G[U_x]$, the connected component~$C$ cannot contain vertices of~$V_x$ and, hence, $C \cap V_y=\{v\}$. However, since $G[U_y]\setminus\ensuremath{S}{}$~is a directed acyclic graph, $v$~reaches some sink in~$C$. Since $v$~cannot reach~$s\in C$, it follows that $C$~contains two sinks. Since $C\cap V_x=\emptyset$, this contradicts~$\ensuremath{S}{}$ being a partial partitioning set{} for~$G[U_x]$. It follows that $\ensuremath{S}{}$ is a partial
partitioning set{} for~$G[U_y]$.
We now construct a pattern~$(\mathcal{R},\mathcal{G},\mathcal{P})$ that~$\ensuremath{S}{}$ satisfies at~$y$. Consider $\mathcal{R}:=\mathcal{R}{}_y(\ensuremath{S}{})$ and $\mathcal{G}{} := \mathcal{G}{}_y(\ensuremath{S}{})$. %
%
%
Note that there are at most two vertices in~$\mathcal{G}{}$ that are not in~$\mathcal{G}{}_x$: one of them is~$v$, %
%
the other, if present, is a sink~$u$ in $V(\mathcal{G}{})\setminus V_y$ that is only reachable from~$v$. We define $\mathcal{P}$~as a partition of the vertices of~$\mathcal{G}{}$ that partitions the set $V(\mathcal{G}{}_x)$ in the same way as~$\mathcal{P}_x$. We add the possibly missing vertices~$v$ and~$u$ to that partition as follows: if there is a vertex~$w\in V_x$ in the same connected component of $G[U_y]\setminus\ensuremath{S}{} = G[U_x]\setminus\ensuremath{S}{}$ as~$v$, then we put~$v$ and~$u$ into the same set as~$w$. Otherwise, we add the set~$\{v\}$ or~$\{u,v\}$, respectively, to~$\mathcal{P}$. By choice of~$\mathcal{R},\mathcal{G}{},$ and~$\mathcal{P}$, the partial
partitioning set{}~$\ensuremath{S}{}$ clearly satisfies the
pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$y$.
We have shown that $\ensuremath{S}{}$~satisfies the pattern~$(\mathcal{R},\mathcal{G}{}, \mathcal{P}{})$ at~$y$. Moreover, if any of the cases (\ref{forgcase1})--(\ref{forgcase4}) of \autoref{forgproc} applies to~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$, then it generates a pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with $\mathcal{R}'=\mathcal{R}_x$ and $\mathcal{G}{}'=\mathcal{G}{}_x$, since we showed in the proof of \autoref{lem:ulobo}\eqref{upperbound} for forget nodes that $\ensuremath{S}{}$~satisfies the pattern generated by \autoref{forgproc} at~$x$. Hence, it remains to show that indeed at least one of the cases (\ref{forgcase1})--(\ref{forgcase4}) of \autoref{forgproc} applies and that in all cases~$\mathcal{P}'=\mathcal{P}_x$.
\begin{caselist}
\item[Case \ref{forgcase1})~~] If $v$ is an isolated sink
in~$\mathcal{G}{}$, then no vertex in~$V_y\supseteq V_x$ can reach~$v$
in~$G[U_y]\setminus\ensuremath{S}{}=G[U_x]\setminus\ensuremath{S}{}$. Hence, there is no
vertex of~$V_x$ in the same connected component of
$G[U_x]\setminus\ensuremath{S}{}$ as~$v$, as otherwise $v$~would be a sink
in~$U_x \setminus V_x$ not reachable from the vertices
of~$V_x$. Hence, by construction of~$\mathcal{P}$, we have
$\{v\}\in\mathcal{P}$ and case~(\ref{forgcase1}) of \autoref{forgproc}
applies. It sets $\mathcal{P}':=\mathcal{P} \setminus
\{\{v\}\}=\mathcal{P}_x$. %
%
%
\item[Case \ref{forgcase2})~~] If $v$~is a non-isolated sink
in~$\mathcal{G}{}$, then $v$~is a bag-reachable{} sink in
$G[U_x]\setminus\ensuremath{S}{}$. Hence, it is contained in $\mathcal{G}{}_x$
and we have $V(\mathcal{G}{}_x)=V(\mathcal{G})$. By construction
of~$\mathcal{P}$, we also have $\mathcal{P}_x=\mathcal{P}$. The
set~$P_i\in\mathcal{P}_x$ containing~$v\notin V_x$ cannot contain any other
vertex in~$V(\mathcal{G}{}_x) \setminus V_x$ by
\autoref{pat}(\ref{pat2}). Thus, $P_i\subseteq V_y$ and
case~(\ref{forgcase2}) of \autoref{forgproc} applies. It
sets~$\mathcal{P}':=\mathcal{P}=\mathcal{P}_x$. %
%
%
\item[Case \ref{forgcase3})~~] If $v$~is not a sink in~$\mathcal{G}{}$
and there is no sink in~$V(\mathcal{G}{}) \setminus V_y$ only
reachable from~$v$ in~$\mathcal{G}{}$, then case~(\ref{forgcase3}) of
\autoref{forgproc} applies. Since the sink~$s$ reachable from~$v$
is also reachable from some vertex~$u\notin\{v,s\}$, and $v$~is thus
connected to~$u$ in~$G[U_y]\setminus\ensuremath{S}{}$, the set in~$\mathcal{P}$
containing~$v$ also contains~$u$. \autoref{forgproc}
sets~$\mathcal{P}'$ to be~$\mathcal{P}$ with $v$~removed from the set it
is in. This, by construction of~$\mathcal{P}$, is exactly~$\mathcal{P}_x$.
\item[Case \ref{forgcase4})~~] Finally, if there is a sink~$u$
in~$V(\mathcal{G}{}) \setminus V_y$ only reachable from $v$, then the
connected component of~$G[U_y]\setminus\ensuremath{S}{}=G[U_x]\setminus\ensuremath{S}{}$
containing the vertex~$v$ does not contain any vertex of~$V_x$,
since $u$~is not reachable from any vertex of~$V_x$. It follows
that $\{u,v\}$~is a set of~$\mathcal{P}$. Case~(\ref{forgcase4}) of
\autoref{forgproc} applies. It sets
$\mathcal{P}':=\mathcal{P}\setminus\{\{v,u\}\}=\mathcal{P}_x$.\qed
\end{caselist}
\end{proof}
\subsubsection{Introduce nodes}\label{sec:inod}
We use the following procedure to compute the table~\(\textsl{Tab}_x\) of an introduce node~\(x\) under the assumption that the table~$\textsl{Tab}_y$ for the child node~$y$ of~$x$ has been computed correctly.
\begin{proc}[Introduce node]\label{intproc}\upshape Let $x$~be an
introduce node with a single child~$y$. Assume that~$v$ is the vertex being ``introduced'', that is, $v$~is not in the child bag~$V_y$ but in the current bag~$V_x$. Moreover, let $B\subseteq A(G[U_x])$~be the set of arcs incident to~$v$. By \autoref{treedec}(\ref{treedec2} and \ref{treedec3}) of a tree decomposition, one actually has $B\subseteq A(G[V_x])$.
We now try each possible subset~$B'\subseteq B$ and consider it \emph{not} deleted by a partial partitioning set{} for the graph~$G[U_x]$. Similarly as in the case for forget nodes, we will transform each pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ for~$y$ into a pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ for~$x$ such that if a partial
partitioning set{}~$\ensuremath{S}{}$ for~$G[U_y]$
satisfies~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$, then
$\ensuremath{S}{}\cup(B\setminus B')$ is a partial partitioning set{} for~$G[U_x]$ and
satisfies~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$. Then, we update
$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with the value
of~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P})+\ensuremath{\omega}(B)-\ensuremath{\omega}(B')$.
For each pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ for~$y$ such that all vertices incident to the arcs in~$B'$ (if any) except for~$v$ are contained in the same set~$P_i\in\mathcal{P}$, we obtain~$\mathcal{R}'$ from~$\mathcal{R}$ by adding~$v$ and the arcs in~$B'$ to~$\mathcal{R}$. Similarly, we obtain $\mathcal{G}'$ from~$\mathcal{G}$ by adding~$v$ and the arcs in~$B'$ to~$\mathcal{G}$.
Moreover, for each~$u,w \in V(\mathcal{G}{}')$ such that $u$~can reach~$w$ in~$\mathcal{G}{}'$ we add the arc~$(u,w)$ to~$\mathcal{G}{}'$. For obtaining~$\mathcal{P}'$, we distinguish two cases.
\begin{caselist}
\item If $B'=\emptyset$, then we try all possibilities of adding~$v$
to a set in~$\mathcal{P}$. That is, for every~$P_i \in \mathcal{P}$, we
get a set~$\mathcal{P}'$ from $\mathcal{P}$ by adding~$v$ to~$P_i$ and
update $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with
$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}) + \ensuremath{\omega}{}(B)$. Additionally,
for $\mathcal{P}':=\mathcal{P}\cup\{\{v\}\}$, we update the
entry~$\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with
$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}) + \ensuremath{\omega}{}(B)$.
\item If $B'\ne\emptyset$, then let $P_i$~be the set of~$\mathcal{P}$
that contains all vertices incident to arcs in~$B'$ except~$v$ and
let $\mathcal{P}'$~be obtained from~$\mathcal{P}$ by adding~$v$ to the
set~$P_i$. We update $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with
$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}) + \ensuremath{\omega}{}(B) - \ensuremath{\omega}{}(B')$.
\end{caselist}
Note that, since $\mathcal{P}$ simulates the connected components of the resulting graph,
all arcs incident to~$v$ that remain in the graph must lie within one set of~$\mathcal{P}$, that is, their endpoints different from~$v$ must be in one set of~$\mathcal{P}$.
\end{proc}
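The weight bookkeeping of this procedure, trying every subset~$B'\subseteq B$ of kept arcs and charging $\ensuremath{\omega}{}(B)-\ensuremath{\omega}{}(B')$ for the deleted arcs~$B\setminus B'$, can be sketched as follows. This is a hypothetical illustration (arcs encoded as hashable pairs, $\ensuremath{\omega}{}$ as a dictionary, function names our own), not part of the formal development:

```python
from itertools import combinations

def kept_arc_subsets(B):
    """Enumerate all subsets B' of the arc set B incident to the introduced
    vertex; the arcs in B \\ B' are the ones added to the partial
    partitioning set."""
    arcs = list(B)
    for r in range(len(arcs) + 1):
        for c in combinations(arcs, r):
            yield frozenset(c)

def introduce_weights(child_weight, B, omega):
    """For every kept subset B', the parent entry would be updated with
    Tab_y(...) + omega(B) - omega(B')."""
    total = sum(omega[a] for a in B)
    return {Bp: child_weight + total - sum(omega[a] for a in Bp)
            for Bp in kept_arc_subsets(B)}
```

Keeping all of~$B$ adds nothing to the weight, while deleting all of~$B$ adds the full~$\ensuremath{\omega}{}(B)$.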
\noindent We show that \autoref{intproc} fills the table associated
with an introduce node~\(x\) correctly. First, we show that there is a
partial partitioning set{} for~$G[U_x]$ satisfying a
pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ and having weight at
most~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by
\autoref{intproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{upperbound} for introduce nodes]
Let $x$~be an introduce node with child node~$y$ and let~$v$ be the vertex ``introduced'', that is, $v$~is not in the child bag~$V_y$ but in the current bag~$V_x$. Let $B\subseteq A(G[V_x])$ be the arcs incident to~$v$, $B'\subseteq B$ and, finally, $(\mathcal{R},\mathcal{G}{},\mathcal{P})$~be some pattern for~$y$ such that all vertices incident to the arcs in~$B'$ (if any) except for~$v$ are contained in the same set~$P_i\in\mathcal{P}$. For any partial partitioning set{} $\ensuremath{S}{}$ for~$G[U_y]$ satisfying the pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$y$, we show that $\ensuremath{S}{}'= \ensuremath{S}{} \cup (B \setminus B')$ is a partial partitioning set{} for~$G[U_x]$ that satisfies the pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ constructed by \autoref{intproc}. From this, since $\ensuremath{\omega}(\ensuremath{S}{}')=\ensuremath{\omega}(\ensuremath{S}{}) + \ensuremath{\omega}{}(B) - \ensuremath{\omega}{}(B')$, \autoref{lem:ulobo}\eqref{upperbound} follows (as already discussed in the beginning of the proof of \autoref{lem:ulobo}\eqref{upperbound} for forget nodes).
We start by showing that~$\ensuremath{S}{}'$ is a partial partitioning set{} for~$G[U_x]$. First, we verify \autoref{def:partsol}(\ref{partsol1}). For the sake of a contradiction, assume that there is a connected component of~$G[U_x]\setminus\ensuremath{S}{}'$ that contains two distinct sinks~$s_1,s_2$ in~$U_x \setminus V_x$. Since there is no such connected component in~$G[U_y]\setminus\ensuremath{S}{} = (G[U_x]\setminus\ensuremath{S}{}') - \{v\}$, there are vertices~$s_1', s_2'\in V_y$ in the same connected components of $G[U_y]\setminus\ensuremath{S}{}$ as~$s_1$ and $s_2$, respectively, that are incident to some arcs in~$B'$. By \autoref{def:partsol}(\ref{partsol2}), there are vertices in~$V_y$ that can reach $s_1$ and $s_2$ in~$G[U_y]\setminus\ensuremath{S}{}$. Hence, $s_1$~and~$s_2$ are bag-reachable{} and, therefore, in $\mathcal{G}{}=\mathcal{G}_y(\ensuremath{S})$. Since $s_1',s_2'\in P_i$ and $\ensuremath{S}{}$~satisfies $(\mathcal{R},\mathcal{G}{},\mathcal{P})$, by \autoref{patsat}(\ref{patsat2}), we also have $s_1,s_2\in P_i$. Then, however, $P_i$~contains the two different vertices~$s_1$ and~$s_2$ of $V(\mathcal{G}{}) \setminus V_y$, which contradicts \autoref{pat}(\ref{pat2}).
To show that $\ensuremath{S}{}'$ is a partial partitioning set{}, it remains to verify
\autoref{def:partsol}(\ref{partsol2}). For the sake of contradiction,
assume that some connected component of~$G[U_x]\setminus\ensuremath{S}{}'$
contains some sink~$s\in U_x\setminus V_x$, some vertex in~$V_x$, but
$s$~is not reachable from any vertex in~$V_x$. Then, $s$~is not
reachable from any vertex in~$V_y\subseteq V_x$ in the
subgraph~$G[U_y]\setminus\ensuremath{S}{}$ either. Thus, the connected component
does not contain any vertex of~$V_y$ and, therefore, not of~$V_x$,
since the only vertex in~$V_x\setminus V_y$ is~$v$ and the added
arcs~$B'$ connect only vertices in~$V_y$.
We have shown that~$\ensuremath{S}{}'$ is a partial partitioning set{} for~$G[U_x]$. We
now show that it satisfies the
pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}{}')$ generated by
\autoref{intproc}; we verify \autoref{patsat}.
\autoref{patsat}(\ref{patsatsol}), that is,
$\mathcal{R}'=\mathcal{R}_x(\ensuremath{S}{}')$ is trivial by the construction
of~$\mathcal{R}'$.
We verify \autoref{patsat}(\ref{patsat1}), that is,
$\mathcal{G}{}_x(\ensuremath{S}{}') = \mathcal{G}{}'$. First, observe
that~$V(\mathcal{G}{}_x(\ensuremath{S}{}'))=V(\mathcal{G}_y(\ensuremath{S}{}))\cup\{v\}=V(\mathcal{G}')$. We
have to show that %
%
%
%
there is an arc~$(u,w)$ in~$\mathcal{G}'$ %
%
if and only if~$u$ can reach~$w$ in~$G[U_x]\setminus\ensuremath{S}{}'$. Let
$(u,w)$~be such an arc in~$\mathcal{G}{}'$. If $(u,w)$ is already
in~$\mathcal{G}{}$, then $u$ can reach~$w$ in~$G[U_y]\setminus\ensuremath{S}{} =
(G[U_x]\setminus\ensuremath{S}{}') - \{v\}$. Otherwise, $u$ reaches~$w$
in~$\mathcal{G}{}'$ via some arcs~$(u',v),(v,w')\in B'$, that is, $u$
can reach~$u'$ and $w'$ can reach~$w$ in~$G[U_x]\setminus\ensuremath{S}{}'$. It
follows that $u$ can reach $w$ in $G[U_x]\setminus\ensuremath{S}{}'$. Now, for
the opposite direction, let $u,w$ be vertices of~$\mathcal{G}'$ such
that $u$~can reach~$w$ in~$G[U_x]\setminus\ensuremath{S}{}'$. If $u$ can
reach~$w$ in~$(G[U_x]\setminus\ensuremath{S}{}')-\{v\}=G[U_y]\setminus\ensuremath{S}{}$,
then the arc $(u,w)$~is already present in~$\mathcal{G}{}$. Otherwise,
$u$ reaches~$w$ via some arcs~$(u',v),(v,w')\in B'$. The arcs~$(u',v)$
and~$(v,w')$ are in~$\mathcal{G}{}'$, since $u',w'\in V_x$. Moreover,
$u$~reaches $u'$ and $w'$ reaches~$w$
in~$G[U_x]\setminus\ensuremath{S}{}'$. Hence, there are arcs~$(u,u')$ and
arc~$(w',w)$ in~$\mathcal{G}{}'$ and~$u$ reaches~$w$ in~$\mathcal{G}{}'$
via $u'$ and~$w'$. By construction of~$\mathcal{G}{}'$, it follows that
$\mathcal{G}{}'$~contains the arc~$(u,w)$.
Finally, we verify \autoref{patsat}(\ref{patsat2}); we show that~$\mathcal{P}'$ is a coarsening of $\mathcal{P}_x(\ensuremath{S}{}')$. For the sake of a contradiction, assume that there are two vertices~$u,w$ that are in the same set of~$\mathcal{P}_x(\ensuremath{S}{}')$ but in different sets of~$\mathcal{P}'$. By construction of~$\mathcal{P}'$ from~$\mathcal{P}$, this implies that $u$~and~$w$ are in different sets of~$\mathcal{P}$ and, therefore, in different connected components of~$G[U_y]\setminus\ensuremath{S}$. Thus, in order for $u$~and~$w$ to be connected in~$G[U_x]\setminus\ensuremath{S}'$, there are vertices~$u', w'$ in the same connected components of~$G[U_y]\setminus\ensuremath{S}$ as~$u$ and~$w$, respectively, that are incident to arcs in~$B'$ and, hence, $u',w'\in P_i\in\mathcal{P}$. But then, also $u,w \in P_i\in\mathcal{P}{}$ --- a~contradiction.\qed
\end{proof}
\noindent We now show that the minimum weight of a partial partitioning set{} for~$G[U_x]$
satisfying a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ is at
least~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by
\autoref{intproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{lowerbound} for introduce nodes]
Let $x$~be an introduce node with child node~$y$. Let $v$~be the vertex ``introduced'', that is, $v\notin V_y$ but $v\in V_x$. Assume that $\ensuremath{S}{}$~is a minimum-weight partial partitioning set{} for~$G[U_x]$ satisfying the pattern~$(\mathcal{R}_x,\mathcal{G}{}_x, \mathcal{P}_x)$ at~$x$. Let $B$~be the set of arcs incident to~$v$ in~$G[U_x]$ and~$B'':=B\cap\ensuremath{S}{}$. It is sufficient to construct a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P}{})$ that~$\ensuremath{S}{}\setminus B''$ satisfies at~$y$ and from which \autoref{intproc} generates exactly the pattern~$(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}{}_x)$ to update the table~$\textsl{Tab}_x(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}{}_x)$ with~$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}{})+\ensuremath{\omega}(B)-\ensuremath{\omega}(B')$, where~$B'=B\setminus B''$. Then, \autoref{lem:ulobo}\eqref{lowerbound} follows for introduce nodes, since $\textsl{Tab}_x(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}{}_x) \leq \textsl{Tab}_y(\mathcal{R}, \mathcal{G}{},\mathcal{P}{}) + \ensuremath{\omega}{}(B) -\ensuremath{\omega}{}(B') \leq \ensuremath{\omega}{}(\ensuremath{S}{} \setminus B'') + \ensuremath{\omega}{}(B) -\ensuremath{\omega}{}(B') =\ensuremath{\omega}{}(\ensuremath{S}{})$.
It is easy to verify that~$\ensuremath{S}{}\setminus B''$ is a partial partitioning set{} for~$G[U_y]$ (\autoref{def:partsol}), since $G[U_y]\setminus(\ensuremath{S}{}\setminus B'')=(G[U_x]\setminus\ensuremath{S}{})-\{v\}$ and $\ensuremath{S}{}$ is a partial partitioning set{} for~$G[U_x]$; to this end, observe that, by \autoref{treedec}(\ref{treedec2} and \ref{treedec3}) of a tree decomposition, $v$~only has arcs~$B\subseteq A(G[V_x])$ incident to vertices in~$V_y$.
We now construct a pattern. Let $\mathcal{R}=\mathcal{R}{}_y(\ensuremath{S}{}\setminus B'')$ and $\mathcal{G}{}=\mathcal{G}{}_y(\ensuremath{S}{}\setminus B'')$. Let $\mathcal{P}$~be the partition obtained from~$\mathcal{P}_x$ by removing the vertex~$v$ from the set it is in or by removing the set $\{v\}$ if it exists in~$\mathcal{P}{}_x$. It is easy to verify that $\ensuremath{S}{} \setminus B''$ satisfies~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$y$: \autoref{patsat}(\ref{patsatsol}) and~(\ref{patsat1}) are trivially satisfied by choice of~$\mathcal{R}$ and~$\mathcal{G}{}$; \autoref{patsat}(\ref{patsat2}) holds by construction of~$\mathcal{P}{}$ from~$\mathcal{P}{}_x$, since $G[U_y]\setminus(\ensuremath{S}{}\setminus B'')$ is a subgraph of~$G[U_x]\setminus\ensuremath{S}{}$.
It remains to show that \autoref{intproc} applies to the pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P}{})$ and the set~$B'$ in order to generate the pattern~$(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}{}_x)$. Since~$\ensuremath{S}{}$ satisfies~$(\mathcal{R}_x,\mathcal{G}{}_x,\mathcal{P}{}_x)$ at~$x$, all vertices incident to arcs in~$B'$ (if any) are contained in the same set~$P_i\in\mathcal{P}_x$ and, hence, all of them except~$v$ are contained in the set~$P_i \setminus \{v\}\in\mathcal{P}$. Therefore, \autoref{intproc} applies to~$B'$ and the pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$, produces some new pattern~$(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$, and updates $\textsl{Tab}_x(\mathcal{R}',\mathcal{G}{}',\mathcal{P}')$ with $\textsl{Tab}_y(\mathcal{R},\mathcal{G}{},\mathcal{P}{}) + \ensuremath{\omega}{}(B) -\ensuremath{\omega}{}(B')$.
It remains to show that, for at least one of the generated patterns, $\mathcal{R}'=\mathcal{R}_x$, $\mathcal{G}{}'=\mathcal{G}{}_x$, and $\mathcal{P}'=\mathcal{P}_x$. If $B' \neq \emptyset$, then $\mathcal{P}'=\mathcal{P}_x$ by construction of~$\mathcal{P}'$ from~$\mathcal{P}{}$ in \autoref{intproc}. If $B' = \emptyset$, then $\mathcal{P}_x$ is clearly among the partitions~$\mathcal{P}'$ generated from~$\mathcal{P}{}$ by \autoref{intproc}. Moreover, we already proved in the proof of \autoref{lem:ulobo}\eqref{upperbound} that~$\ensuremath{S}{}=(\ensuremath{S}{}\setminus B'') \cup (B \setminus B')$ satisfies the pattern generated by \autoref{intproc} at~$x$. Hence, $\mathcal{R}'=\mathcal{R}_x$ and $\mathcal{G}{}'=\mathcal{G}_x$. \qed
\end{proof}
\subsubsection{Join nodes}\label{sec:jnod}
We use the following procedure to compute the table~\(\textsl{Tab}_x\) of a join node~\(x\) under the assumption that the tables~$\textsl{Tab}_y$ for all child nodes~$y$ of~$x$ have been computed correctly.
\begin{proc}[Join node]\label{joiproc}\upshape
Let $x$~be a join node with children $y$ and $z$, that is, $V_x=V_y=V_z$. %
For each pair of patterns~$(\mathcal{R},\mathcal{G}{}_y,\mathcal{P}_y)$ for~$y$ and $(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)$ for~$z$ such that~$\mathcal{P}_y$ and~$\mathcal{P}_z$ partition the vertices of~$V_y=V_z=V_x$ in the same way, we construct a new pattern~$(\mathcal{R},\mathcal{G}{}',\mathcal{P}')$ as follows.
Let $\mathcal{G}{}'$~be the graph containing all vertices and arcs of~$\mathcal{G}_y$ and $\mathcal{G}_z$, and
for each~$u,w \in V(\mathcal{G}{}')$ such that $u$~can reach~$w$ in~$\mathcal{G}{}'$ add the arc~$(u,w)$ to~$\mathcal{G}{}'$.
%
Note that by
\autoref{treedec}(\ref{treedec3}) of a tree decomposition,
$\mathcal{G}_y$ and~$\mathcal{G}_z$ have only the vertices in~$V_x$ in
common.
Let $\mathcal{P}'$~be the partition of~$V_x$ that partitions~$V_x$ in the
same way as~$\mathcal{P}_y$ and~$\mathcal{P}_z$. We extend~$\mathcal{P}'$ to a
partition for the vertices of~$\mathcal{G}{}'$: for each $u \in
V(\mathcal{G}{}') \setminus V_x$, add~$u$ to a set~$P_i$ of~$\mathcal{P}'$
that contains a vertex~$v$ with $(v,u)$ being an arc
of~$\mathcal{G}{}'$. Since there are no arcs between different sets
of~$\mathcal{P}'$ in~$\mathcal{G}{}_y$ or~$\mathcal{G}{}_z$, there is exactly
one such set~$P_i\in\mathcal{P}'$.
%
%
%
%
%
If we created some set~$P \in \mathcal{P}'$ with more than one vertex
of~$V(\mathcal{G}{}') \setminus V_x$, then continue with a different
pair of patterns. Otherwise, we update
$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{}',\mathcal{P}')$ with
$\textsl{Tab}_y(\mathcal{R},\mathcal{G}{}_y,\mathcal{P}_y) +
\textsl{Tab}_z(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)-\ensuremath{\omega}{}(A(G[V_x]))+\ensuremath{\omega}{}(A(\mathcal{R}))$.
\end{proc}
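To make the combination step concrete, the following Python sketch illustrates how a join node could merge the two child reachability graphs and attach out-of-bag sinks to the bag partition. This is only an illustration under assumed data representations (graphs as sets of arc tuples, partitions as collections of vertex sets); it is not the paper's implementation.

```python
from itertools import product

def transitive_closure(vertices, arcs):
    """Add an arc (u, w) whenever u can reach w; Floyd-Warshall-style closure."""
    reach = set(arcs)
    for k, u, w in product(vertices, vertices, vertices):
        if (u, k) in reach and (k, w) in reach:
            reach.add((u, w))
    return reach

def join_patterns(bag, arcs_y, arcs_z, partition):
    """Combine the child reachability graphs G_y and G_z over a common bag
    partition, mirroring the join-node procedure: build G' as the closure of
    the union, then extend the partition to out-of-bag vertices.
    Returns (G', P') or None if some set would receive two out-of-bag sinks."""
    vertices = {v for a in arcs_y | arcs_z for v in a} | set(bag)
    closure = transitive_closure(vertices, arcs_y | arcs_z)
    new_partition = []
    for part in partition:  # partition covers only the bag vertices
        outside = {u for u in vertices - set(bag)
                   if any((v, u) in closure for v in part)}
        if len(outside) > 1:  # discard this pair of patterns
            return None
        new_partition.append(frozenset(part) | outside)
    return closure, new_partition
```

For example, if the child at $y$ reports a sink `s1` reachable from bag vertex `a`, the sink ends up in the same set as `a`, while a second sink reachable from the same set causes the pattern pair to be discarded.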
\noindent We show that \autoref{joiproc} fills the table associated with a join node~\(x\) correctly. First, we show that there is a partial partitioning set{} for~$G[U_x]$ satisfying a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ and having weight at most~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by \autoref{joiproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{upperbound} for join nodes]
Let $x$ be a join node with child nodes~$y$ and~$z$, that is,
$V_x=V_y=V_z$. Let $\ensuremath{S}{}_y$ be a partial
partitioning set{} for~$G[U_y]$ satisfying the pattern~$(\mathcal{R},\mathcal{G}{}_y,\mathcal{P}_y)$ at~$y$ and let $\ensuremath{S}{}_z$~be a partial partitioning set{} for~$G[U_z]$ satisfying the pattern~$(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)$ at $z$. We show that~$\ensuremath{S}{} = \ensuremath{S}{}_y \cup \ensuremath{S}{}_z$ is a partial partitioning set{} for~$G[U_x]$ that satisfies the pattern~$(\mathcal{R},\mathcal{G}{}',\mathcal{P}')$ constructed by \autoref{joiproc}. Since $\ensuremath{\omega}(\ensuremath{S}{})=\ensuremath{\omega}(\ensuremath{S}{}_y)+\ensuremath{\omega}(\ensuremath{S}{}_z)-\ensuremath{\omega}{}(\ensuremath{S}{}_y \cap \ensuremath{S}{}_z)$, wherein $\ensuremath{S}{}_y \cap \ensuremath{S}{}_z = A(G[V_x]) \setminus A(\mathcal{R})$, \autoref{lem:ulobo}\eqref{upperbound} follows for join nodes.
%
%
%
%
%
%
%
%
We show that~$\ensuremath{S}{}$ is indeed a partial partitioning set{} for~$G[U_x]$,
that is, we verify \autoref{def:partsol}. We first verify
\autoref{def:partsol}(\ref{partsol2}) and then use it to verify
\autoref{def:partsol}(\ref{partsol1}). Let $s \in U_x \setminus V_x$
be a sink such that the connected component containing~$s$
in~$G[U_x]\setminus\ensuremath{S}{}$ contains a vertex of~$V_x$. Then, $s\in
U_y\setminus V_x$ or $s\in U_z\setminus V_x$. Without loss of
generality, let $s\in U_y\setminus V_x$. From
\autoref{treedec}(\ref{treedec3}) of a tree decomposition, we see
that~$U_z\cap U_y\subseteq V_x$ and, hence, $G[U_y]\setminus\ensuremath{S}{}_y=
G[U_x]\setminus\ensuremath{S}{}-(U_z\setminus V_z)$. It follows that there is
also a connected component of~$G[U_y]\setminus\ensuremath{S}{}_y=
G[U_x]\setminus\ensuremath{S}{}-(U_z\setminus V_z)$ that contains~$s$ and a
vertex of~$V_x$ and, therefore, $s$~is reachable from some
vertex~$v\in V_x$ in~$G[U_x]\setminus\ensuremath{S}{}-(U_z\setminus V_z)$ and,
hence, in~$G[U_x]\setminus\ensuremath{S}{}$. It also follows that $(v,s)$~is an
arc in~$\mathcal{G}_y$ and, by construction in \autoref{joiproc},
of~$\mathcal{G}'$.
To verify \autoref{def:partsol}(\ref{partsol1}), for the sake of a
contradiction, assume that there is a connected component
of~$G[U_x]\setminus\ensuremath{S}{}$ that contains two sinks~$s_1,s_2$ in~$U_x
\setminus V_x$. Note that, by \autoref{treedec}(\ref{treedec3}) of a
tree decomposition, there are no arcs between~$U_y\setminus V_x$
and~$U_z\setminus V_x$. Hence, this connected component contains a
vertex~$v$ of $V_x$; otherwise, it would be a connected component with
two sinks outside of~$V_x$ already in either~$G[U_y]\setminus\ensuremath{S}{}_y$
or~$G[U_z]\setminus\ensuremath{S}{}_z$. Thus, as seen in the previous paragraph,
we have arcs~$(s_1',s_1)$~and~$(s_2',s_2)$ with $s_1',s_2'\in V_x$.
%
%
It follows by construction of~$\mathcal{P}'$ from~$\mathcal{G}'$ in
\autoref{joiproc} that $s_1$~and~$s_1'$ are in a
set~$P_i\in\mathcal{P}'$ and $s_2$~and~$s_2'$ are in a
set~$P_j\in\mathcal{P}'$. We show~$i=j$, which contradicts the
construction of~$\mathcal{P}'$, since then~$P_i=P_j$ contains two
vertices~$s_1\notin V_x$ and~$s_2\notin V_x$.
Since $s_1$~and~$s_2$ are in the same connected component
of~$G[U_x]\setminus\ensuremath{S}{}$, also $s_1'$~and~$s_2'$ are, since they can
reach~$s_1$ and~$s_2$, respectively. Hence, there is an undirected path{}~$p$
between~$s_1$ and~$s_2$ in~$G[U_x]\setminus\ensuremath{S}{}$. It consists of
consecutive path segments~$p'$ that only have their endpoints~$u,w$
in~$V_x$ (possibly, such a path segment only consists of one arc). It
follows that such a path segment~$p'$ is entirely contained in
$G[U_y]\setminus\ensuremath{S}{}_y$ or $G[U_z]\setminus\ensuremath{S}{}_z$ and, hence, its
endpoints $u$~and~$w$ are in the same set of~$\mathcal{P}_y$
or~$\mathcal{P}_z$. Since $u,w\in V_x$, by construction of~$\mathcal{P}'$ in
\autoref{joiproc}, $u$~and~$w$ are in the same set of~$\mathcal{P}'$. It
follows that~$s_1'$ and~$s_2'$ are in the same set of~$\mathcal{P}'$, and
so are~$s_1$ and~$s_2$.
It follows that $\ensuremath{S}{}$ is indeed a partial partitioning set{} for~$G[U_x]$. It remains to verify that~$\ensuremath{S}{}$ satisfies the pattern~$(\mathcal{R},\mathcal{G}',\mathcal{P}')$ (\autoref{patsat}). Herein, \autoref{patsat}(\ref{patsatsol}), $\mathcal{R}{}=\mathcal{R}_x(\ensuremath{S}{})$, is trivial. We verify~(\ref{patsat1}), that is, $\mathcal{G}{}_x(\ensuremath{S}{})=\mathcal{G}{}'$. Herein, we already verified $V(\mathcal{G}_x(\ensuremath{S}{}))\subseteq V(\mathcal{G}')$ when verifying \autoref{def:partsol}(\ref{partsol2}). Now, assume that there are two vertices $u,w$ in~$\mathcal{G}'$ such that $u$~can reach~$w$ in $G[U_x]\setminus\ensuremath{S}{}$. Since, then, $u$~is not a sink, it is in~$V_x$. The directed path{} from~$u$ to~$w$ consists of consecutive subpaths, each being entirely contained in~$G[U_y]\setminus\ensuremath{S}{}_y$ or~$G[U_z]\setminus\ensuremath{S}{}_z$ and thus causing an arc in~$\mathcal{G}_y$ or~$\mathcal{G}_z$ and, therefore, in~$\mathcal{G}'$. It follows that~$u$ can reach~$w$ in~$\mathcal{G}'$, which therefore has an arc~$(u,w)$. In the opposite direction, for every arc~$(u,w)$ in~$\mathcal{G}'$ that is already in~$\mathcal{G}_y$ or~$\mathcal{G}_z$, there is a directed path{} in either $G[U_y]\setminus\ensuremath{S}{}_y$ or in~$G[U_z]\setminus\ensuremath{S}{}_z$ from~$u$ to~$w$ and, thus, $u$~can reach~$w$ in $G[U_x]\setminus\ensuremath{S}{}$. For an arc~$(u,w)$ in~$\mathcal{G}'$ that is present in neither~$\mathcal{G}_y$ nor~$\mathcal{G}_z$, there is a directed path{} in~$\mathcal{G}'$ from~$u$ to~$w$ consisting only of arcs that are already present in~$\mathcal{G}_y$ or~$\mathcal{G}_z$. Since we have seen that for each such arc there is a corresponding directed path{} in~$G[U_x]\setminus\ensuremath{S}{}$, we have that $u$~can reach~$w$ in~$G[U_x]\setminus\ensuremath{S}{}$.
For \autoref{patsat}(\ref{patsat2}), it has been shown above that if
two vertices of~$\mathcal{G}'$ are in the same connected component
of~$G[U_x]\setminus\ensuremath{S}{}$, then they are in the same set
in~$\mathcal{P}'$. \qed
\end{proof}
\noindent We now show that the minimum weight of a partial partitioning set{} for~$G[U_x]$
satisfying a pattern~$(\mathcal{R},\mathcal{G}{},\mathcal{P})$ at~$x$ is at
least~$\textsl{Tab}_x(\mathcal{R},\mathcal{G}{},\mathcal{P})$ as computed by
\autoref{joiproc}.
\begin{proof}[of \autoref{lem:ulobo}\eqref{lowerbound} for join nodes]
Let $x$~be a join node with the child nodes~$y$ and~$z$, that
is~$V_x=V_y=V_z$. Assume that $\ensuremath{S}{}$ is a minimum-weight partial
partitioning set{} for~$G[U_x]$ satisfying the pattern
$(\mathcal{R},\mathcal{G}{}_x, \mathcal{P}_x)$ at~$x$. It is sufficient to
construct patterns~$(\mathcal{R},\mathcal{G}{}_y,\mathcal{P}{}_y)$
and~$(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}{}_z)$ that are satisfied
by~$S_y:=\ensuremath{S}{}\cap A(G[U_y])$ at~$y$ and by~$S_z:=\ensuremath{S}{}\cap
A(G[U_z])$ at~$z$, respectively, such that from these patterns
\autoref{joiproc} generates exactly the
pattern~$(\mathcal{R},\mathcal{G}_x,\mathcal{P}_x)$ to update
$\textsl{Tab}_x(\mathcal{R},\mathcal{G}_x,\mathcal{P}_x)$ with
\begin{align*}
&\textsl{Tab}_y(\mathcal{R},\mathcal{G}{}_y,\mathcal{P}_y) +
\textsl{Tab}_z(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)-\ensuremath{\omega}{}(A(G[V_x]))
+\ensuremath{\omega}{}(A(\mathcal{R}))\\
&\leq \ensuremath{\omega}{}(\ensuremath{S}_y) +\ensuremath{\omega}{}(\ensuremath{S}{}_z)
-\ensuremath{\omega}{}(\ensuremath{S}_y\cap\ensuremath{S}_z)\\
&= \ensuremath{\omega}{}(\ensuremath{S}{}).
\end{align*}
We first show that $S_y$~is a partial partitioning set{}
for~$G[U_y]$. Symmetrically, it follows that $S_z$~is a
partial partitioning set{} for~$G[U_z]$. We first verify
\autoref{def:partsol}(\ref{partsol1}). Since by
\autoref{treedec}(\ref{treedec3}), there are no arcs between vertices
in~$U_y\setminus V_y$ and~$U_z\setminus V_z$ in~$G[U_x]$, it follows
from $G[U_y]\setminus S_y = G[U_x]\setminus\ensuremath{S}{}-(U_z\setminus V_z)$
that no connected component of~$G[U_y]\setminus S_y$ contains two
sinks not in~$V_y=V_x$. It remains to verify
\autoref{def:partsol}(\ref{partsol2}). To this end, let $u$~be a sink
in~$U_y \setminus V_y$ in a connected component of~$G[U_y]\setminus
S_y$ containing a vertex of~$V_y$. Then, by
\autoref{def:partsol}(\ref{partsol2}), the connected component of~$G[U_x]\setminus\ensuremath{S}{}$ containing~$u$ contains a directed path{} from some
vertex in~$V_x$ to~$u$. The subpath of this directed path{} that contains only
one vertex of~$V_x$ is preserved in~$G[U_y]\setminus \ensuremath{S}{}_y$. Hence,
$u$~is reachable from some vertex of~$V_x=V_y$.
We now construct the patterns. To this end, let
$\mathcal{G}{}_y:=\mathcal{G}{}_y(\ensuremath{S}{}_y)$ and
$\mathcal{G}{}_z:=\mathcal{G}{}_z(\ensuremath{S}{}_z)$. Moreover, we
choose~$\mathcal{P}_y$ and~$\mathcal{P}_z$ such that they partition the
set~$V_x=V_y=V_z$ in the same way as~$\mathcal{P}_x$ and such that the
vertices of~$V(\mathcal{G}{}_y) \setminus V_y$ (or $V(\mathcal{G}{}_z)
\setminus V_z$) are in the same set as the other vertices of their
connected components in $G[U_y]\setminus\ensuremath{S}_y$ (or
$G[U_z]\setminus\ensuremath{S}_z$).
We show that $\ensuremath{S}{}_y$~satisfies $(\mathcal{R},\mathcal{G}{}_y,
\mathcal{P}_y)$ at~$y$. Analogously, it then follows that
$\ensuremath{S}{}_z$~satisfies~$(\mathcal{R},\mathcal{G}{}_z, \mathcal{P}_z)$ at~$z$. We
verify \autoref{patsat}. Since
$\mathcal{R}=\mathcal{R}_x(\ensuremath{S})=\mathcal{R}_y(\ensuremath{S}_y)=\mathcal{R}_z(\ensuremath{S}_z)$ and
$\mathcal{G}{}_y=\mathcal{G}{}_y(S_y)$ hold by definition, it remains to
verify \autoref{patsat}(\ref{patsat2}). To this end, observe that
$G[U_y]\setminus\ensuremath{S}{}_y=G[U_x]\setminus\ensuremath{S}{}-(U_z\setminus
V_z)$. Now, assume, for the sake of a contradiction, that there are
two vertices~$v,w$ of~$\mathcal{G}_y$ in different sets of~$\mathcal{P}_y$
but in the same connected component of~$G[U_y]\setminus\ensuremath{S}{}_y$. It
follows that $v$~and~$w$ are in the same connected component
of~$G[U_x]\setminus\ensuremath{S}{}$. If $v,w\in V_y$, then, by construction
of~$\mathcal{P}_y$ from~$\mathcal{P}_x$, the vertices~$v$ and~$w$ are in
different sets of~$\mathcal{P}_x$, contradicting
$\ensuremath{S}{}$~satisfying~$(\mathcal{R},\mathcal{G}{}_x,\mathcal{P}{}_x)$. If
exactly one of~$v,w$ is in~$V_y$, then $v$~and~$w$ being in different
sets of~$\mathcal{P}_y$ contradicts the construction of~$\mathcal{P}_y$. If
both $v,w\notin V_y$, then $v$~and~$w$ are two bag-reachable{} sinks
in~$G[U_y]$, which contradicts~$v$ and~$w$ being in the same connected
component of~$G[U_y]\setminus\ensuremath{S}{}_y$.
Hence, indeed $\ensuremath{S}{}_y$ satisfies $(\mathcal{R},\mathcal{G}{}_y,
\mathcal{P}_y)$ at~$y$ and $S_z$ satisfies
$(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)$ at~$z$. Moreover, since
$\mathcal{P}_y$ and~$\mathcal{P}_z$ partition~$V_x$ in the same way,
\autoref{joiproc} applies to the patterns~$(\mathcal{R},\mathcal{G}{}_y,
\mathcal{P}_y)$ and~$(\mathcal{R},\mathcal{G}{}_z,\mathcal{P}_z)$ and produces a
pattern $(\mathcal{R},\mathcal{G}{}',\mathcal{P}')$. If no set of~$\mathcal{P}'$
contains more than one vertex of $V(\mathcal{G}{}') \setminus V_x$, it
indeed updates $\textsl{Tab}_x(\mathcal{R},\mathcal{G}{}',\mathcal{P}')$.
%
Hence it remains to show that $\mathcal{G}{}'=\mathcal{G}{}_x$ and $\mathcal{P}'=\mathcal{P}_x$, as no set of~$\mathcal{P}_x$ contains two vertices of $V(\mathcal{G}{}_x) \setminus V_x$ by \autoref{pat}(\ref{pat2}). We already showed in the proof of \autoref{lem:ulobo}\eqref{upperbound} for join nodes that $\ensuremath{S}{}$~satisfies the pattern~$(\mathcal{R},\mathcal{G}{}',\mathcal{P}{}')$ generated by \autoref{joiproc}. Hence, $\mathcal{G}{}'=\mathcal{G}{}_x$. Finally, by construction of~$\mathcal{P}'$ in \autoref{joiproc}, the vertices of~$V_x$ are partitioned the same way by~$\mathcal{P}'$ and~$\mathcal{P}_x$. For a vertex~$v \in V(\mathcal{G}{}') \setminus V_x$, there is a vertex~$u$ in~$V_x$ that can reach~$v$ in~$\mathcal{G}{}'$ and, therefore, in~$G[U_x]\setminus\ensuremath{S}{}$. Hence, $u$ and~$v$ must be in the same set of both $\mathcal{P}_x$ and~$\mathcal{P}'$ by construction of~$\mathcal{P}'$ in \autoref{joiproc}.
%
%
%
%
\qed\end{proof}
\subsubsection{Running time}
\noindent Having shown the correctness of the
Procedures~\ref{forgproc}--\ref{joiproc}, we can finally complete the proof of \autoref{th:fpt-tw} by analyzing the running time of the procedures.
\begin{proof}[of \autoref{th:fpt-tw}]
\autoref{lem:ulobo} showed that the presented dynamic programming algorithm is correct, that is, it solves \textsc{DAG Partitioning}{} given a tree decomposition of the input graph. It remains to analyze the running time.
To this end, recall that each bag in a tree decomposition of width~$\ensuremath{t}{}$ contains at most~$\ensuremath{t}{}+1$~vertices. This allows us to give an upper bound on the number of possible patterns~$(\mathcal{R},\mathcal{G}{},\mathcal{P}{})$. There are at most $3^{\binom{\ensuremath{t}+1}{2}}$ directed acyclic graphs~$\mathcal{R}$ on at most $\ensuremath{t}{}+1$ vertices: for each pair~$(v,w)$ of vertices: there is either no arc, or an arc from~$v$ to~$w$, or an arc from~$w$ to~$v$. Similarly, there are at most $3^{\binom{2\ensuremath{t}+2}{2}}$ directed graphs~$\mathcal{G}$ on at most $2\ensuremath{t}{}+2$ vertices. Moreover, there are at most $(2\ensuremath{t}+2)^{2\ensuremath{t}+2}$ partitions~$\mathcal{P}{}$ of at most $2\ensuremath{t}+2$~vertices into at most $2\ensuremath{t}+2$~sets. Hence, each table has at most $3^{\binom{2\ensuremath{t}+2}{2}}\cdot3^{\binom{\ensuremath{t}+1}{2}}\cdot(2\ensuremath{t}+2)^{2\ensuremath{t}+2}=3^{O(\ensuremath{t}^2+\ensuremath{t} \log \ensuremath{t})}=2^{O(\ensuremath{t}^2)}$~entries and looking up entries in the tables can be implemented to run in $O(\log 2^{\ensuremath{t}^2})$~time, which is polynomial in~$\ensuremath{t}$.
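As a quick sanity check of this counting argument, the following Python sketch (purely illustrative, not part of the algorithm) evaluates the stated bound $3^{\binom{t+1}{2}}\cdot 3^{\binom{2t+2}{2}}\cdot(2t+2)^{2t+2}$ on the number of table entries for small widths; its logarithm grows quadratically in $t$, matching the $2^{O(t^2)}$ claim.

```python
from math import comb, log2

def pattern_bound(t: int) -> int:
    """Upper bound on the number of patterns (R, G, P) for width t:
    3^C(t+1,2) choices for R, 3^C(2t+2,2) for G, (2t+2)^(2t+2) for P."""
    return 3 ** comb(t + 1, 2) * 3 ** comb(2 * t + 2, 2) * (2 * t + 2) ** (2 * t + 2)

for t in range(1, 5):
    # log2 of the bound grows roughly quadratically in t, i.e. 2^O(t^2) entries
    print(t, round(log2(pattern_bound(t)), 1))
```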
In each leaf node, we set the single table entry to~$0$ in constant
time.
In each forget node, \autoref{forgproc} iterates over the
entries of the table of the child node and for each entry spends time
polynomial in~$\ensuremath{t}$. Thus, it spends a total of $2^{O(\ensuremath{t}^2)}$~time in
each forget node.
To analyze the running time \autoref{intproc} spends in an introduce
node, observe that there are at most $\ensuremath{t}$~arcs in~$A(G[V_x])$
incident to the introduced vertex~$v$. Hence, there are at most
$2^\ensuremath{t}$ subsets of them. For each of these subsets and for each entry
of the child node, \autoref{intproc} spends time polynomial
in~$\ensuremath{t}$. This makes a total of $2^{O(\ensuremath{t}^2)}$~time spent in each
introduce node.
Finally, in a join node, \autoref{joiproc} considers every pair of
patterns of its two child nodes and for each combination spends time
polynomial in~$\ensuremath{t}$. Hence, the total time spent in a join node
is~$(2^{O(\ensuremath{t}^2)})^2=2^{O(\ensuremath{t}^2)}$.
Since the nice tree decomposition has $O(\ensuremath{t} n)$~nodes, the algorithm
runs in $2^{O(\ensuremath{t}^2)}\cdot n$~time. \qed
\end{proof}
\section{Other parameters yield stronger NP-hardness
results}\label{sec:classical}
In Sections~\ref{sec:smallsol} and~\ref{sec:tw}, we have seen that
\textsc{DAG Partitioning}{} is solvable in linear time when fixing the weight of the
partitioning set{} sought or the treewidth of the input graph. %
The question whether fixed-parameter algorithms can be obtained for
parameters that are smaller than the solution weight or the
treewidth naturally arises~\cite{Nie10,KN12,FJR13}.
One parameter of interest is the maximum vertex outdegree in the graph:
a citation network of journal articles will, for example, have a small
outdegree since journal articles seldom contain more than 50~references.
In this section, however, we will show that, among others, this parameter
being small will not help solving \textsc{DAG Partitioning}{} efficiently.
\citet{AM12} already showed that \textsc{DAG Partitioning}{} remains NP-hard even if the input graph has only two sinks. We complement this negative result by showing that the problem remains NP-hard even if the diameter or the maximum vertex degree of the input graph is constant. In conclusion, parameters like the number of sinks, the graph diameter, or the maximum degree cannot lead to fixed-parameter algorithms unless P${}={}$NP.
\begin{theorem} \label{thm:diam2} \textsc{DAG Partitioning}{} is solvable in linear time on
graphs of diameter one, but NP-complete on graphs of diameter two even if all arcs have unit weight.
\end{theorem}
\begin{proof}
On graphs of diameter one, the problem is trivial: a directed acyclic
graph with diameter one is an acyclic \emph{tournament}, that is,
there is no pair of vertices not joined by an arc. As such, it already
contains exactly one source and one sink. Hence, we just verify in
linear time whether the input graph is an acyclic tournament and
answer ``yes'' or ``no'' accordingly.
For graphs of diameter two, we show NP-hardness by means of a
polynomial-time many-one reduction from \textsc{DAG Partitioning}, which is NP-hard even
when all arcs have weight one. Therefore, we agree on all arcs in this
proof having weight one.
Given an arbitrary instance~$(G,\ensuremath{\omega},k)$ of \textsc{DAG Partitioning}, we add a gadget
to~$G$ to obtain in polynomial time an instance~$(G',\ensuremath{\omega}', k')$ such
that~$G'$ is a graph of diameter two and such that~$(G,\ensuremath{\omega},k)$ is a
yes-instance if and only if~$(G',\ensuremath{\omega}',k')$ is a yes-instance. We
obtain a graph~$G'$ from~$G$ by adding an acyclic tournament~\(T\)
consisting of $k+n+2$~vertices and outgoing arcs from the source~$s$
of~\(T\) to all vertices of~$V(G)$ in~$G'$. We set~$k':= k +
n$. Since every vertex in~$G'$ is in distance one from~$s$, the
constructed graph~$G'$ has diameter two.
If \((G,\ensuremath{\omega},k)\)~is a yes-instance, then let \(S\)~be a partitioning set{}
with \(k\)~arcs for~\(G\). A partitioning set{}~\(S'\) with \(k'\)~arcs
for~\(G'\) is obtained by adding to~\(S\) the \(n\)~arcs from the
source~\(s\) of~\(T\) to all vertices of~\(V(G)\) in~\(G'\). Thus,
\((G',\ensuremath{\omega}',k')\)~is a yes-instance.
If \((G',\ensuremath{\omega}',k')\)~is a yes-instance, then let \(S'\)~be a partitioning set{}
with \(k'\)~arcs for~\(G'\). By \autoref{lem:dirundirequiv}, every
vertex in~$V(G)$ reaches at least one sink in~\(G'\setminus S'\).
This sink cannot be the sink of~\(T\), since no vertex in~\(T\) is
reachable from~\(V(G)\). Thus, \(S'\)~has to disconnect the sink
of~\(T\) from all vertices of~$V(G)$ in~$G'$, where all paths between
the sink of~\(T\) and~\(V(G)\) are via the source~\(s\) of~\(T\).
Since \(T\) has $k+n+2$~vertices, \(S'\)~cannot disconnect~\(s\) from
the sink of~\(T\) and thus, has to remove from~\(G'\) the $n$~arcs
connecting~$s$ to the vertices of~$V(G)$. Then, the remaining
\(k\)~arcs in~\(S'\) have to be a partitioning set{} for~\(G'\) without the
tournament~\(T\), which is precisely the original graph~$G$. Thus,
$(G,\ensuremath{\omega},k)$~is a yes-instance. \qed
\end{proof}
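The gadget of this reduction is simple enough to sketch directly. The following Python fragment (an illustration with our own vertex names and an arc-set representation; it assumes the fresh tournament vertices do not clash with existing names) builds $G'$ and $k'$ from $(G,k)$:

```python
def add_diameter_gadget(vertices, arcs, k):
    """Attach an acyclic tournament T on k + n + 2 fresh vertices and connect
    its source t0 to every original vertex; returns (V', A', k') with k' = k + n."""
    n = len(vertices)
    t_nodes = [f"t{i}" for i in range(k + n + 2)]  # t0 is the source of T
    # acyclic tournament: arc from every t_i to every t_j with i < j
    t_arcs = {(t_nodes[i], t_nodes[j])
              for i in range(len(t_nodes)) for j in range(i + 1, len(t_nodes))}
    source_arcs = {(t_nodes[0], v) for v in vertices}
    return (set(vertices) | set(t_nodes),
            set(arcs) | t_arcs | source_arcs,
            k + n)
```

Every vertex of the resulting graph is at distance at most one from the tournament source, so the construction indeed yields diameter two.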
\begin{theorem}
\label{thm:maxdeg3}
\textsc{DAG Partitioning}{} is solvable in linear time on graphs of maximum degree two,
but NP-complete on graphs of maximum degree three even if all arcs have unit weight.
\end{theorem}
\begin{proof}
Any graph of maximum degree two consists of undirected cycle{}s or undirected path{}s. Thus, the
underlying graph has treewidth at most two. We have seen in
\autoref{th:fpt-tw} that \textsc{DAG Partitioning}{} is linear-time solvable when the
treewidth of the input graph is bounded by a constant.
We prove the NP-hardness on graphs of maximum degree three. To this
end, we adapt the polynomial-time many-one reduction from \textsc{Multiway Cut}{} to
\textsc{DAG Partitioning}{} presented by \citet{LBK09}. In their reduction, we replace
vertices of degree greater than three by equivalent structures of
lower degree.
\decprob{\textsc{Multiway Cut}}{%
An undirected graph $G=(V,E)$, a weight function $\ensuremath{\omega}:E \to \ensuremath{\mathbb{N}}$,
a set~$T \subseteq V$ of terminals, and an integer $k$.
}{%
Is there a subset $\ensuremath{S}{} \subseteq E$ with $\sum_{e \in \ensuremath{S}{}} \ensuremath{\omega}(e)
\leq k$ such that the removal of~$\ensuremath{S}{}$ from $G$ disconnects
each terminal from all the others?
}
\noindent We first recall the reduction from \textsc{Multiway Cut}{} to \textsc{DAG Partitioning}{}. From a
\textsc{Multiway Cut}{} instance~$I_1:=(G_1,\ensuremath{\omega}_1,T,k_1)$, we construct in polynomial
time a \textsc{DAG Partitioning}{} instance~$I_2:=(G_2,\ensuremath{\omega}_2,k_2)$ such that $I_1$~is a
yes-instance if and only if~$I_2$ is. From $I_2$, we then obtain an
instance~$I_3$ with maximum degree three. Since \textsc{Multiway Cut}{} remains NP-hard
even for three terminals and unit weights~\cite{DJP+94}, we may
assume $|T|=3$ and, similarly as in the proof of \autoref{thm:diam2},
we agree on all arcs in this proof having weight one. We now
construct the \textsc{DAG Partitioning}{} instance~$I_2=(G_2,\ensuremath{\omega}_2,k_2)$
from~$I_1=(G_1,\ensuremath{\omega}_1,T,k_1)$ as follows. The construction is
illustrated in \autoref{fig:mc-to-dagp}.
\begin{figure}[t] \centering
\begin{tikzpicture}
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=above:$t_1$] (t1) {};
\node[vertex, label=above:$v_1$] (v1) at (1,0) {};
\node[vertex, label=above:$t_2$] (t2) at (2,0) {};
\node[vertex, label=above:$v_2$] (v2) at (3,0) {};
\node[vertex, label=above:$t_3$] (t3) at (4,0) {};
\draw(t1) -- (v1);
\draw[dotted] (v1) -- (t2);
\draw (t2) -- (v2);
\draw[dotted] (v2) -- (t3);
\draw[dotted] (t1) to[out=-45,in=-135] (t2);
\end{tikzpicture}
\bigskip\noindent
\begin{tikzpicture}[x=3cm, shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\node[vertex, label=above:$t_1$] (t1) {};
\node[vertex, label=above:$v_1$] (v1) at (1,0) {};
\node[vertex, label=above:$t_2$] (t2) at (2,0) {};
\node[vertex, label=above:$v_2$] (v2) at (3,0) {};
\node[vertex, label=above:$t_3$] (t3) at (4,0) {};
\node[vertex, label=below:$s_1$] (r1) at (0,-1) {};
\node[vertex, label=below:$s_2$] (r2) at (2,-1) {};
\node[vertex, label=below:$s_3$] (r3) at (4,-1) {};
\node[vertex, label=above:$e_{\{t_1,v_1\}}$] (e1) at (0.5,1) {};
\node[vertex, label=above:$e_{\{v_1,t_2\}}$] (e2) at (1.5,1) {};
\node[vertex, label=above:$e_{\{t_2,v_2\}}$] (e3) at (2.5,1) {};
\node[vertex, label=above:$e_{\{v_2,t_3\}}$] (e4) at (3.5,1) {};
\node[vertex, label=above:$e_{\{t_1,t_2\}}$] (e5) at (1,2) {};
\draw[->] (t1) -> (r1);
\draw[->] (t2) -> (r2);
\draw[->] (t3) -> (r3);
\draw[->] (v1) -> (r1);
\draw[->,dotted] (v1) -> (r2);
\draw[->,dotted] (v1) -> (r3);
\draw[->,dotted] (v2) -> (r1);
\draw[->] (v2) -> (r2);
\draw[->,dotted] (v2) -> (r3);
\draw[->] (e1) -> (t1);
\draw[->] (e1) -> (v1);
\draw[->,dotted] (e2) -> (v1);
\draw[->] (e2) -> (t2);
\draw[->] (e3) -> (t2);
\draw[->] (e3) -> (v2);
\draw[->] (e4) -> (v2);
\draw[->,dotted] (e4) -> (t3);
\draw[->,shorten >= 0.35cm,dotted] (e5) to[out=180,in=90] (t1);
\draw[->,shorten >= 0.35cm] (e5) to[out=0,in=90] (t2);
\node[label=below right:$Z{}$] (recta) at (-0.75,2.5) {};
\node[label=below right:$Y{}$] (rectb) at (-0.75,0.6) {};
\node[label=below right:$X{}$] (rectc) at (-0.75,-0.5) {};
\begin{pgfonlayer}{background}
\path[fill=black!10,rounded corners] (recta)
rectangle (4.25,0.7);
\path[fill=black!10,rounded corners] (rectb)
rectangle (4.25,-0.4);
\path[fill=black!10,rounded corners] (rectc)
rectangle (4.25,-1.5);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Reduction from a \textsc{Multiway Cut}{} instance with the terminals~$t_1,t_2$,
and~$t_3$ to \textsc{DAG Partitioning}{}. The top shows an instance~$I_1$ of
\textsc{Multiway Cut}{}, where the dotted edges are a multiway cut{} of size~$k_1=3$. The
bottom shows the corresponding instance~$I_2$ of \textsc{DAG Partitioning}{},
where the dotted arcs are a corresponding partitioning set{} of
size~$k_2=k_1+2(n-3)=7$ ($n$~is the number of vertices in the
graph of the \textsc{Multiway Cut}{} instance). The constructed vertex
sets~$X{},Y{},$ and $Z{}$ are highlighted using a gray
background.}
\label{fig:mc-to-dagp}
\end{figure}
\begin{enumerate}
\item Add three vertices $s_1,s_2,s_3$ to~$G_2$, forming the vertex
set~$X{}$,
\item add each vertex of~$G_1$ to~$G_2$, forming the vertex
set~$Y{}$,
\item for each edge~$\{u,v\}$ of~$G_1$, add a vertex $e_{\{u,v\}}$
to~$G_2$, forming the vertex set~$Z{}$,
\item for each terminal $t_i \in T$, add the arc~$(t_i,s_i)$ to~$G_2$,
\item for each vertex~$v \in Y{} \setminus T$, add the
arcs~$(v,s_i)$ for $i = 1,2,3$ to~$G_2$, and
\item for each edge~$\{u,v\}$ of~$G_1$, add the arcs~$(e_{\{u,v\}},
u)$ and~$(e_{\{u,v\}}, v)$ to~$G_2$.
\end{enumerate}
Set $k_2 = k_1 + 2(n-3)$, where $n$~is the number of vertices
of~$G_1$. We claim that $I_1$ is a yes-instance if and only if $I_2$
is a yes-instance.
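The six construction steps above can be sketched as follows. This Python fragment is a hedged illustration only (the vertex-naming scheme and the arc-set representation are ours, not the paper's); it builds the arc set of $G_2$ and the budget $k_2$ from a \textsc{Multiway Cut}{} instance with three terminals.

```python
def multiway_cut_to_dagp(vertices, edges, terminals, k1):
    """Build the DAG Partitioning instance (arcs of G2, k2) from a Multiway Cut
    instance with |terminals| = 3; edges is a set of frozensets {u, v}."""
    assert len(terminals) == 3
    sinks = {t: f"s_{i}" for i, t in enumerate(terminals, start=1)}  # set X
    arcs = {(t, sinks[t]) for t in terminals}        # one arc per terminal
    for v in vertices - set(terminals):              # set Y \ T: arcs to all sinks
        arcs |= {(v, s) for s in sinks.values()}
    for e in edges:                                  # set Z: one vertex per edge
        u, v = tuple(e)
        arcs |= {(f"e_{u}{v}", u), (f"e_{u}{v}", v)}
    k2 = k1 + 2 * (len(vertices) - 3)
    return arcs, k2
```

Applied to the instance of \autoref{fig:mc-to-dagp} ($n=5$, $k_1=3$), this yields the budget $k_2 = 3 + 2\cdot 2 = 7$ stated there.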
First, suppose that there is a multiway cut{}~$\ensuremath{S}{}$ of size at most~$k_1$
for~$G_1$. Then, we obtain a partitioning set{} of size at most~$k_2$ for
$G_2$ as follows: if a vertex~$v$ belongs to the same connected
component of $G_1\setminus\ensuremath{S}{}$ as terminal~$t_i$, then remove every
arc $(v,s_j)$ with~$j \neq i$ from~$G_2$. Furthermore, for each
edge~$\{u,v\} \in \ensuremath{S}{}$, remove \emph{either} the arc~$(e_{\{u,v\}},
u)$ or the arc~$(e_{\{u,v\}}, v)$ from~$G_2$. One can easily check
that we end up with a valid partitioning set{} of size $k_2=k_1+2(n-3)$ for~$G_2$:
we delete at most $k_1$~arcs from~$Z{}$ to~$Y{}$ and, for each
of the $n-3$~vertices in~$Y{}\setminus T$, delete two arcs
from~$Y{}$ to~$X{}$. There are no arcs from~$X{}$
to~$Z{}$.
Conversely, suppose that we are given a minimal partitioning set{}~$\ensuremath{S}{}$ of
size at most~$k_2$ for~$G_2$. Note that it has to remove at least two
of the three outgoing arcs of each vertex~$v_2 \in Y{}\setminus
T$ but cannot remove all three of them: contrary to
\autoref{lem:no_new_sinks}, this would create a new sink. Thus,
$\ensuremath{S}{}$~deletes $2(n-3)$~arcs from~$Y{}$ to~$X{}$ and the
remaining $k_2-2(n-3)=k_1$~arcs from~$Z{}$
to~$Y{}$. Therefore, we can define the following multiway cut{} of
size~$k_1$ for $G_1$: remove an edge $\{u,v\}$ from~$G_1$ if and only
if one of the arcs~$(e_{\{u,v\}}, u)$ and~$(e_{\{u,v\}}, v)$ is
removed from~$G_2$ by~$\ensuremath{S}{}$. Again, one can easily check that we
end up with a valid multiway cut{}.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\tikzstyle{neighbor}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\tikzstyle{annot} = [text width=4em, text centered]
\node[neighbor, label=below:$s_1$] (U-1) at ( 0, 2) {};
\node[neighbor, label=below:$s_2$] (U-2) at ( 1, 2) {};
\node[neighbor, label=below:$s_3$] (U-3) at ( 2, 2) {};
\node[neighbor, label=right:$v$] (V) at (1,4) {};
\node[neighbor, label=above:$w_1$] (B-0) at (-0.5,8) {};
\node[neighbor, label=above:$w_2$] (B-1) at (0.5,8) {};
\node[neighbor, label=above:$w_3$] (B-2) at (1.5,8) {};
\node[neighbor, label=above:$w_4$] (B-3) at (2.5,8) {};
%
\draw[->] (B-0) to[out=-90,in=126] (V);
\draw[->] (B-1) to[out=-90,in=102] (V);
\draw[->] (B-2) to[out=-90,in=78] (V);
\draw[->] (B-3) to[out=-90,in=54] (V);
\draw[->] (V) to[out=-126,in=90] (U-1);
\draw[->] (V) edge (U-2);
\draw[->] (V) to[out=-54,in=90] (U-3);
\end{tikzpicture}\hspace{2cm}
\begin{tikzpicture}[node distance=1cm, shorten >= 0.5mm]
\tikzstyle{vertex}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\tikzstyle{neighbor}=[circle,draw,fill=black,minimum size=3pt,inner sep=0pt]
\tikzstyle{annot} = [text width=4em, text centered]
\node[neighbor, label=above:$w_1$] (C-1) {};
\node[neighbor, label=above:$w_2$] (C-2) [right of=C-1] {};
\node[neighbor, label=above:$w_3$] (C-3) [right of=C-2] {};
\node[neighbor, label=above:$w_4$] (C-4) [right of=C-3] {};
\node[neighbor, label=left:$w_2'$] (C-5) [below of=C-2] {};
\node (C-6-1) [below of=C-3] {};
\node[neighbor, label=left:$w_3'$] (C-6) [below of=C-6-1] {};
\node (C-7-1) [below of=C-4] {};
\node (C-7-2) [below of=C-7-1] {};
\node[neighbor, label=left:$w_4'$] (C-7) [below of=C-7-2] {};
\node[neighbor, label=left:$v$] (C-8) [below of=C-7] {};
\node (C-12-1) [below of=C-8] {};
\node[neighbor, label=below:$s_3$] (C-12) [below of=C-12-1] {};
\node[neighbor, label=left:$v'$] (C-9) [left of=C-12-1] {};
\node[neighbor, label=below:$s_2$] (C-11) [below of=C-9] {};
\node[neighbor, label=below:$s_1$] (C-10) [left of=C-11] {};
\node[annot, xshift=0.5cm] (T) [right of=C-9] {$T_{v}$};
\foreach \src / \dest in {1/5, 2/5, 3/6, 4/7, 5/6, 6/7, 7/8, 8/9, 8/12, 9/10, 9/11}
\draw[->] (C-\src) edge (C-\dest);
\begin{pgfonlayer}{background}
\path[fill=black!10,rounded corners]
(0.5,-3.5) rectangle (4,-6.5);
\end{pgfonlayer}
\end{tikzpicture}
\end{center}
\vspace{-15pt}
\caption{Reduction of the degree of a vertex~$v$ to three. On the
left, the original neighborhood of~$v$ is shown. The right side
shows~$v$ after modification. The tree structure~$T_{v}$
constructed in the proof is highlighted using a gray background.}
\label{fig:tree-structure}
\end{figure}
It remains to modify the instance $I_2=(G_2,\ensuremath{\omega}_2,k_2)$ to get an
instance $I_3=(G_3,\ensuremath{\omega}_3,k_3)$ of maximum degree three. To this end,
first we show how to reduce the outdegree of each vertex of~$G_2$ to
two. Thereafter, we show how to reduce the indegree of each vertex of~$G_2$
to one by introducing gadget vertices, each having
indegree two and outdegree one. The construction is illustrated in
\autoref{fig:tree-structure}.
Note that all vertices of~$G_2$ with outdegree larger than two are
in~$Y{}$. In order to decrease the degree of these vertices, we
obtain a graph~$G_3'$ from~$G_2$ by carrying out the following
modifications (see \autoref{fig:tree-structure}) to~$G_2$: for each
vertex~$v \in Y{}$, with $N^+(v) = \{s_1,s_2,s_3\}$, remove
$(v,s_1)$ and $(v,s_2)$, add a new vertex~$v'$, and insert the three
arcs~$(v,v')$, $(v',s_1)$, and~$(v',s_2)$.
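A minimal sketch of this rewiring step (Python, our naming; the gadget vertex~$v'$ is encoded as a tagged tuple):

```python
def split_outdegree(arcs, v, succs):
    """Rewire v with N+(v) = (s1, s2, s3): remove the arcs (v, s1) and
    (v, s2), insert v' and the arcs (v, v'), (v', s1), (v', s2)."""
    s1, s2, s3 = succs
    vp = (v, 'prime')                                  # the new vertex v'
    return (arcs - {(v, s1), (v, s2)}) | {(v, vp), (vp, s1), (vp, s2)}
```

After the rewrite both $v$ and $v'$ have outdegree two, as required.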
\looseness=-1We show that~$(G_3',\ensuremath{\omega}_3',k_2)$ is a yes-instance if and only
if~$(G_2,\ensuremath{\omega}_2,k_2)$~is. To this end, for~$v\in Y{}$, let $T_{v}$~be the
induced subgraph~$G_3'[\{v,v', s_1,s_2,s_3\}]$. In~$G_2$, a
minimal partitioning set{} removes exactly two of the outgoing arcs of~$v$,
since~$s_1,s_2$, and~$s_3$ are sinks. It is enough to show that a
minimal partitioning set{}~$S$ removes exactly two arcs in~$T_{v}$ from~$G_3'$
in such a way that there remains exactly one directed path{} from~$v$ to exactly
one of~$s_1,s_2$, or~$s_3$. This remaining directed path{} then corresponds
one-to-one to the arc that a partitioning set{} would leave in~$G_2$
between~$v$ and~$s_1,s_2$, or~$s_3$. Since $s_1,s_2$, and~$s_3$ are
sinks, $\ensuremath{S}{}$ indeed has to remove at least two arcs from~$T_{v}$:
otherwise, two sinks would belong to the same connected component.
However, due to \autoref{lem:no_new_sinks}, $\ensuremath{S}{}$ cannot remove
more than two arcs from~$T_{v}$. Moreover, again exploiting
\autoref{lem:no_new_sinks}, the two arcs removed by~$\ensuremath{S}{}$ leave a
single directed path{} from~$v$ to exactly one of~$s_1,s_2$, or~$s_3$.
We have seen that $(G_3',\ensuremath{\omega}_3',k_2)$ is equivalent to~$(G_2,\ensuremath{\omega}_2,k_2)$ and that
all vertices of~$G_3'$ have outdegree two. To shrink the overall
maximum degree to three, it remains to reduce the indegrees.
Note that the vertices newly introduced in the previous step already have indegree one.
We obtain
graph~$G_3$ of maximum degree three from~$G_3'$ as follows. For each
vertex~$v$ with $|N^-(v)| = |\{w_1,\ldots,w_{d^-(v)}\}| \geq 2$, do
the following (see \autoref{fig:tree-structure}): for $j = 2, \ldots,
d^-(v)$, remove the arc~$(w_j,v)$ and add a vertex~$w_j'$ together
with the arc~$(w_j,w_j')$. Moreover, add the arcs $(w_{1},w_{2}')$,
$(w'_{d^-(v)},v)$, and~$(w_j',w_{j+1}')$ for each $j\in\{2, \ldots,
d^-(v)-1\}$. Now, every vertex of~$V(G_3')$ in~$G_3$ has indegree
one and outdegree two, while the newly introduced vertices have
indegree two and outdegree one. It follows that all vertices in~$G_3$
have degree at most three.
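The chain gadget can be sketched as follows (Python, our naming; we assume the chain terminates with an arc from the last gadget vertex into~$v$, so that $v$ ends up with indegree one and every gadget vertex with indegree two and outdegree one):

```python
def indegree_chain(v, in_neighbors):
    """Gadget arcs replacing the arcs (w_j, v), for in_neighbors
    [w_1, ..., w_d] with d >= 2; the gadget vertex w_j' is (w_j, 'prime')."""
    w = list(in_neighbors)
    d = len(w)
    prime = lambda j: (w[j], 'prime')                  # w_{j+1}' in 0-based indexing
    arcs = {(w[j], prime(j)) for j in range(1, d)}     # (w_j, w_j') for j >= 2
    arcs.add((w[0], prime(1)))                         # (w_1, w_2')
    arcs.update((prime(j), prime(j + 1)) for j in range(1, d - 1))  # chain arcs
    arcs.add((prime(d - 1), v))                        # last gadget vertex feeds v
    return arcs
```

For $d=4$ this reproduces the right-hand side of the figure: seven arcs, $v$ of indegree one, and each $w_j'$ of indegree two and outdegree one.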
It remains to show that~$(G_3,\ensuremath{\omega}_3,k_2)$ is a yes-instance if and
only if~$(G_3',\ensuremath{\omega}_3',k_2)$ is. It then follows that
$(G_3,\ensuremath{\omega}_3,k_2)$ is a yes-instance if and only if~$(G_2,\ensuremath{\omega}_2,k_2)$
is. To this end, note that by \autoref{lem:no_new_sinks}, among the
introduced arcs, only the arcs~$(w_1,w_2')$ and~$(w_j,w_j')$ can be
removed by a minimal partitioning set{}. Hence, there is a one-to-one
correspondence between deleting the arc~$(w_1,w_2')$ or~$(w_j,w_j')$
in the graph~$G_3$ and deleting the arc~$(w_j,v)$ in the
graph~$G_3'$. \qed
\end{proof}
\section{Outlook}\label{sec:outlook}
We have presented two fixed-parameter algorithms for \textsc{DAG Partitioning}, one with
respect to the weight~$k$ of the partitioning set{} sought and one with respect
to the parameter treewidth~$\ensuremath{t}$.
We demonstrated the feasibility of applying the fixed-parameter algorithm for the
parameter~$k$ (\autoref{alg:simple-st}) to large input instances with optimal
partitioning set{}s of small weight. However, we were unable to solve the
instances in the data set of~\citet{LBK09}, since the weight of
optimal partitioning set{}s is too large. We found out that the heuristic
presented by \citet{LBK09} finds nearly optimal partitioning set{}s on the
instances that our algorithm works on best. However, we have also seen
that one does not have to specially craft adversarial instances to make
the heuristic perform badly. Surprisingly, our algorithm for \textsc{DAG Partitioning}{} is
much simpler and more efficient than the algorithm for \textsc{Multiway Cut}{} by
\citet{Xia10}, although \textsc{Multiway Cut}{} is much easier to approximate than \textsc{DAG Partitioning}{}
\citep{KKS+04, AM12}.
On the theoretical side, we improved a fixed-parameter algorithm by
\citet{AM12} such that the running time now depends on the treewidth of the input graph rather than on its pathwidth.
However, our algorithm, as well as the algorithm
of~\citet{AM12}, are practically inapplicable.
Towards solving the instances of \citet{LBK09} exactly in reasonable
time, a challenging task would be to analyze the data used by
\citet{LBK09} in order to find parameters that are small \emph{and} make
\textsc{DAG Partitioning}{} fixed-parameter tractable, that is, to take a data-driven
approach to parameterizing \textsc{DAG Partitioning}{}.
\section*{Acknowledgments}
\noindent We are thankful to Rolf Niedermeier and to the anonymous referees
of CIAC'13 and \emph{Discrete Applied Mathematics} for helpful comments.
René van Bevern acknowledges support by the Russian Foundation for
Basic Research (RFBR), project~16-31-60007
mol\textunderscore{}a\textunderscore{}dk, while working at Novosibirsk
State University, and by the German Research Foundation (DFG), project
DAPA (NI 369/12), while working at TU Berlin. Robert Bredereck
acknowledges support by DFG project PAWS (NI~369/10). Morgan Chopin was
supported by the DAAD during a three-month visit to TU~Berlin in
summer~2012. Falk Hüffner acknowledges support by DFG projects PABI
(NI~369/7) and ALEPH (HU~2139/1). Ondřej Suchý acknowledges support by
DFG project AREG (NI~369/9) while at TU~Berlin and by the Czech Science
Foundation, project 14-13017P.
\section*{References}
\bibliographystyle{abbrvnat}
\section{Introduction}
The study of the AdS/CFT correspondence in low dimensions has seen renewed interest in the last few years \cite{Tong:2014yna}-\cite{Couzens:2022agr}. On the AdS side of the correspondence, a plethora of new AdS$_3$ and AdS$_2$ solutions of Type II and eleven dimensional supergravities with different amounts of supersymmetries have been constructed. In turn, on the CFT side it has been possible to identify the 2d and 1d CFTs dual to some of these solutions as IR fixed points of explicit quiver field theories, from where it has been possible to explore some of their properties, in particular to compute their central charge. These AdS/CFT pairs thus represent perfect scenarios where the Bekenstein-Hawking entropy of black strings and black holes can be computed microscopically. This is particularly promising for the large classes of black strings and black holes with $\mathcal{N}=(0,4)$ and $\mathcal{N}=4$ supersymmetries constructed in \cite{Couzens:2017way,Couzens:2017nnr,Lozano:2019jza,Lozano:2019zvg,Lozano:2019ywa,Lozano:2020bxo,Faedo:2020nol,Lozano:2020txg,Lozano:2020sae,Faedo:2020lyw,Lozano:2021rmk,Ramirez:2021tkd,Lozano:2021fkk,Couzens:2021veb,Passias:2019rga,Passias:2020ubv}, which enable extensions of the seminal studies in \cite{Strominger:1996sh}-\cite{Minasian:1999qn}.
Another interesting interpretation of low dimensional AdS spaces is as holographic duals of CFT's describing defects within higher dimensional CFT's \cite{Karch:2000gx,DeWolfe:2001pq,Aharony:2003qf,DHoker:2006vfr,Lunin:2007ab}. Notable examples of such realisations for AdS$_3$ and AdS$_2$ spaces have been reported in
\cite{DHoker:2007mci,Chiodaroli:2009yw,Chiodaroli:2009xh,Dibitetto:2017tve,Dibitetto:2017klx,Dibitetto:2018gtk,Dibitetto:2018iar,Chen:2020mtv,Faedo:2020nol,Lozano:2020sae,Faedo:2020lyw,Dibitetto:2020bsh,Ramirez:2021tkd,Lozano:2021fkk}. A hint that this interpretation may be possible is when the low dimensional AdS space flows asymptotically (locally) in the UV to a higher dimensional AdS geometry, which contains extra fluxes. These fluxes partially break the isometries (and typically also the supersymmetries) of the higher dimensional AdS space, and can be associated to extra {\it defect} branes embedded in the geometry. We will see that some of the AdS$_3$ solutions constructed in this paper allow for an interpretation as surface defects within
6d $(1,0)$ CFT's dual to AdS$_7$ geometries.
AdS$_2$/CFT$_1$ holography features particular challenges not shared by higher dimensional AdS/CFT. These have to do mainly with the non-connectedness of the boundary of AdS$_2$ and with the interpretation of the central charge of the dual super-conformal quantum mechanics (SCQM), which does not allow for finite energy excitations \cite{Maldacena:1998uz,Strominger:1998yg,Balasubramanian:2003kq,Hartman:2008dq,Balasubramanian:2009bg}. Therefore directly applying AdS$_2$/CFT$_1$ holography to the microscopic description of extremal black holes is not straightforward, and interesting alternative ways to make this possible have been proposed in the literature (see for example \cite{Almheiri:2014cka,Maldacena:2016hyu,Maldacena:2016upp,Harlow:2018tqv}).
Recently, it has been shown \cite{Lozano:2020txg} that for AdS$_2$ spaces related to AdS$_3$ through compactification or T-duality an understanding of the SCQM as a chiral half of a 2d CFT (following the ideas in \cite{Balasubramanian:2003kq,Balasubramanian:2009bg}) allows one to sidestep these difficulties, providing explicit set-ups where the microscopic description program can be carried out in detail. It is likely that the solutions that we construct in this paper will allow for similar applications.
In this paper we construct new AdS$_3$ solutions with small $(0,4)$ supersymmetry in massive Type IIA supergravity. The small ${\cal N}=4$ superconformal algebra is characterised by an SU(2) R-symmetry with generators transforming in the $\textbf{2}\oplus \overline{\textbf{2}}$; as such, backgrounds realising this algebra should respect this isometry, which requires an S$^2$ factor (either round or with U(1)s fibred over it). Small ${\cal N}=(0,4)$ backgrounds of Type II supergravity of the warped product form AdS$_3\times$S$^2\times $M$_5$ were recently classified across \cite{Lozano:2019emq,Macpherson:2022sbs} under the assumptions that M$_5$ contains no necessary isometries and S$^2$ does not experience an enhancement to S$^3$. Our focus here will be on solutions that lie outside these assumptions\footnote{Though they are related to classes in \cite{Macpherson:2022sbs} via T-duality.}, namely solutions containing a warped AdS$_3\times$S$^3$ factor. These have the benefit of being compatible with an enhancement to small ${\cal N}=(4,4)$, a maximal case for AdS$_3$ with relatively few known examples. We are aware of only the U-duality orbits of the D1-D5 near horizon \cite{Maldacena:1997re}, the $d=11$ solution of \cite{Dibitetto:2020bsh} and the type IIB class of \cite{Lin:2004nb}, albeit with no explicit examples. This enhancement is of course not guaranteed by the presence of an S$^3$ factor and indeed the class that we construct generically supports just $(0,4)$ supersymmetry. However, an enhancement to ${\cal N}=(4,4)$ is possible when the class is suitably restricted, which allows us to find explicit examples with both $(0,4)$ and $(4,4)$ supersymmetry that we shall study in some detail.
The paper is organised as follows. In section \ref{eq:the massiveclass} we construct the general class of AdS$_3\times$S$^3\times$M$_4$ solutions of massive type IIA supergravity with $\mathcal{N}=(0,4)$ supersymmetries that are the focus of the paper. We do this by generalising the
Minkowski$_6$ solutions constructed in \cite{Legramandi:2019ulq} to also include D2 and D4-branes. We check the supersymmetries and provide the explicit brane intersection, consisting of D2-D4 branes ending on D6-NS5-D8 bound states \cite{Imamura:2001cr}, from which the AdS$_3$ solutions arise in the near horizon limit. We further show that any solution to minimal $\mathcal{N}=2$ supergravity in 6d gives rise to a solution of massive IIA supergravity sharing the same warping and internal space as our class. In section \ref{defectsin6d} we show that when M$_4=~$S$^2\times \Sigma_2$, with $\Sigma_2$ a 2d Riemann surface, and the geometry is foliated over $\Sigma_2$, the AdS$_3$ solutions flow asymptotically in the UV to the AdS$_7\times $S$^2\times I$ solutions of massive IIA supergravity constructed in \cite{Apruzzi:2013yva}, dual to 6d $(1,0)$ CFTs living in D6-NS5-D8 intersections \cite{Gaiotto:2014lca,Cremonesi:2015bld}. This allows us to interpret this subclass of solutions as holographic duals of 2d $(0,4)$ CFTs describing D2-D4 defects inside the 6d CFTs. We construct the 2d $(0,4)$ quiver gauge theories that flow in the IR to the duals of our solutions, and show that they can be embedded within the 6d quivers constructed in \cite{Cremonesi:2015bld}. This extends (and corrects, in the precise sense discussed in the paper) the constructions in \cite{Faedo:2020nol} for the massless case. In section \ref{D2D4NS5} we focus on the subclass of solutions for which M$_4=\mathbb{T}^3\times I$ and the geometry is foliated over the interval, first in massless IIA. We show that these solutions arise in the near horizon limit of D2-D4-NS5 brane intersections, and enjoy an enhancement to $\mathcal{N}=(4,4)$ supersymmetry. Our constructions represent a key step forward in the identification of the holographic duals of $(4,4)$ 2d CFTs living in D2-D4-NS5 Hanany-Witten brane set-ups, studied long ago in \cite{Brodie:1997wn,Alishahiha:1997cm}.
As a consistency check of our proposal we show that the holographic and field theory central charges are in exact agreement. In section \ref{TypeI'} we complete the analysis of the AdS$_3\times$S$^3\times \mathbb{T}^3\times I$ solutions in the presence of Romans mass. We show that these backgrounds are associated to D2-D4-D8 intersections preserving $(0,4)$ supersymmetries, which can be globally embedded in Type I' string theory. We perform this explicit construction and check the matching between the field theory and holographic central charges. Section \ref{conclusions} contains our conclusions, where we summarise our results and discuss future lines of investigation, in particular the possibility of constructing new AdS$_2$ solutions with $\mathcal{N}=4$ by acting with Abelian and non-Abelian T-dualities on our new classes of solutions \cite{us}. Finally in Appendix \ref{appendix} we complement our analysis in section \ref{defectsin6d} with the construction of a domain wall solution to 7d minimal supergravity that flows to the AdS$_7$ vacuum asymptotically.
\section{A new class of $\ma N=(0,4)$ AdS$_3$ solutions in massive IIA}\label{eq:the massiveclass}
A class of solutions in massive IIA that has born much fruit over the years is the D8-D6-NS5 flat-space brane intersection \cite{Imamura:2001cr}. This is a class of $\frac{1}{4}$ BPS warped Minkowski$_6$ solutions which support an SU(2) R-symmetry realised by a round 2-sphere in the internal space. All AdS$_7$ solutions in Type II supergravity are contained in this class as well as examples of compact Mink$_4\times \mathbb{T}^2$ vacua \cite{Macpherson:2016xwk,Bobev:2016phc}. A generalisation of this class without the 2-sphere was found in \cite{Legramandi:2019ulq}, where solutions with O planes back-reacted on a torus were found. The metric and fluxes of solutions in this generalised class take the local form
\begin{align}\label{eq:massiveclassmetricoriginal}
ds^2&=\frac{1}{\sqrt{h}}ds^2(\mathbb{R}^{1,5})+ g \bigg[\frac{1}{\sqrt{h}} d\rho^2+\sqrt{h}ds^2(\mathbb{R}^3)\bigg],~~~~e^{-\Phi}=\frac{h^{\frac{3}{4}}}{\sqrt{g}},\\[2mm]
F_0&= \frac{\partial_{\rho}h}{g},~~~~F_2= -\star_3 d_3 h ,~~~~H_3=\partial_{\rho}(h g)\text{vol}(\mathbb{R}^3)-(\star_3 d_3 g)\wedge d\rho,\nonumber
\end{align}
where $h,g$ have support on $(\rho,\mathbb{R}^3)$ and $(d_3,\star_3)$ are the exterior derivative and Hodge dual on $\mathbb{R}^3$. Away from the loci of possible sources the Bianchi identities of the 2- and 3-forms impose that
\begin{align}\label{eq:Bianchiidentitiesorg}
&\partial_{\rho}(\frac{\partial_{\rho}h}{g})=0,~~~~\nabla_3^2 g +\partial_{\rho}^2 (gh)=0,~~~~\nabla_3^2 h +F_0\partial_{\rho} (gh)=0,
\end{align}
with any solution to this system giving rise to a solution of massive IIA supergravity, provided any localised source terms are also calibrated.\\
~~\\
In this section we will present a generalised version of this class for which
\begin{equation}
\mathbb{R}^{1,5} \to \text{AdS}_3\times \text{S}^3,~~~~ (F_0,F_2,H_3)\to(F_0,F_2,F_4,H_3),
\end{equation}
giving rise to AdS$_3$ vacua of massive IIA preserving small ${\cal N}=(0,4)$ supersymmetry, as explained in section \ref{sec:SUSY}. We shall construct a system of
intersecting branes in flat space giving rise to these AdS$_3$ vacua in a near horizon limit in section \ref{branepicture}, and finally establish that in fact any solution of minimal ${\cal N}=2$ supergravity in $d=6$ can be embedded into massive IIA with a similar ansatz for the metric and fluxes in section \ref{sec:uplift}.
\subsection{A small ${\cal N}=(0,4)$ AdS$_3$ class with source D8-D6-NS5 branes}\label{sec:SUSY}
In this section we present a new class of AdS$_3$ solutions preserving small ${\cal N}=(0,4)$ supersymmetry with possible D8-D6-NS5 sources.\\
~~\\
The general form of the metric and dilaton of solutions in this class is nothing more than \eqref{eq:massiveclassmetricoriginal} with $\mathbb{R}^{1,5} \to \text{AdS}_3\times \text{S}^3$,
\begin{align}\label{eq:massiveclassmetric}
ds^2=\frac{q}{\sqrt{h}}\bigg[ds^2(\text{AdS}_3)+ds^2(\text{S}^3)\bigg]+ g \bigg[\frac{1}{\sqrt{h}} d\rho^2+\sqrt{h}\bigg(dz_1^2+dz_2^2+dz_3^2\bigg)\bigg],~~~~e^{-\Phi}=\frac{h^{\frac{3}{4}}}{\sqrt{g}},
\end{align}
where AdS$_3$ and S$^3$ both have unit radius and $q$ is a redundant constant we keep to make contact with later sections more smooth. We have introduced $(z_1,z_2,z_3)$ coordinates spanning the $\mathbb{R}^3$ factor for later convenience. The fluxes this solution supports are
\begin{subequations}
\begin{align}
F_0&= \frac{\partial_{\rho}h}{g},~~~~F_4=2\,q\bigg(\text{vol}(\text{AdS}_3)+\text{vol}(\text{S}^3)\bigg)\wedge d\rho,\label{eq:flux1}\\[2mm]
F_2&= -(\partial_{z_1}h dz_2\wedge dz_3+\partial_{z_2}h dz_3\wedge dz_1+\partial_{z_3}h dz_1\wedge dz_2),\label{eq:flux2}\\[2mm]
H_3&=\partial_{\rho}(h g)dz_1\wedge dz_2\wedge dz_3-(\partial_{z_1}g dz_2\wedge dz_3+\partial_{z_2}g dz_3\wedge dz_1+\partial_{z_3}g dz_1\wedge dz_2)\wedge d\rho,\label{eq:flux3}
\end{align}
\end{subequations}
where the additional 4-form with respect to \eqref{eq:massiveclassmetricoriginal} is to be expected given that the external space has been replaced with a curved product space. The Bianchi identities of the fluxes, in regular regions of the internal space, require that $F_0$ is constant and
\begin{align}
&(\partial_{z_1}^2+\partial_{z_2}^2+\partial_{z_3}^2)g +\partial_{\rho}^2 (gh)=0,\nonumber\\[2mm]
&(\partial_{z_1}^2+\partial_{z_2}^2+\partial_{z_3}^2)h +F_0\partial_{\rho} (gh)=0,\label{eq:Bianchiidentities}
\end{align}
which exactly reproduce \eqref{eq:Bianchiidentitiesorg} and define solutions in this class. Actually these constraints give rise to two local classes depending on whether $F_0=0$ or not. As $F_0=0$ demands $\partial_{\rho}h=0$, the governing PDEs reduce to those of a flat space D6-NS5 brane intersection. On the other hand when $F_0\neq 0$, one is free to divide by it and take
\begin{equation}\label{imamuraeq1}
g= \frac{\partial_{\rho}h}{F_0}.
\end{equation}
Given this one can then show that \eqref{eq:Bianchiidentities} reduce to a single PDE
\begin{equation}\label{imamuraeq2}
(\partial_{z_1}^2+\partial_{z_2}^2+\partial_{z_3}^2)h+\frac{1}{2}\partial_{\rho}^2( h^2)=0,
\end{equation}
reproducing the novel behaviour of \cite{Imamura:2001cr} when we impose SO(3) invariance in $(z_1,z_2,z_3)$. Let us now move on to address the amount of supersymmetry that solutions in this class preserve.
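Since this reduction is purely algebraic, it can be verified symbolically; the following sketch (using SymPy; variable names are ours) checks that, with $g=\partial_\rho h/F_0$, the second Bianchi identity in \eqref{eq:Bianchiidentities} coincides with \eqref{imamuraeq2} and the first with its $\rho$-derivative divided by $F_0$:

```python
import sympy as sp

rho, z1, z2, z3, F0 = sp.symbols('rho z1 z2 z3 F0')
h = sp.Function('h')(rho, z1, z2, z3)
g = sp.diff(h, rho) / F0                      # eq. (imamuraeq1): g = d_rho h / F0

lap3 = lambda f: sum(sp.diff(f, z, 2) for z in (z1, z2, z3))

master = lap3(h) + sp.Rational(1, 2) * sp.diff(h**2, rho, 2)   # eq. (imamuraeq2)
bianchi_g = lap3(g) + sp.diff(g * h, rho, 2)   # first line of (eq:Bianchiidentities)
bianchi_h = lap3(h) + F0 * sp.diff(g * h, rho) # second line of (eq:Bianchiidentities)

# the h-equation is the master PDE; the g-equation is its rho-derivative / F0
assert sp.simplify(bianchi_h - master) == 0
assert sp.simplify(bianchi_g - sp.diff(master, rho) / F0) == 0
```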
\subsubsection{Supersymmetry}\label{supersymmetry}
The preservation of supersymmetry for AdS$_3$ vacua in massive IIA can be phrased in terms of differential bi-spinor relations first introduced for ${\cal N}=(0,1)$ in \cite{Dibitetto:2018ftj}. In the conventions of \cite{Macpherson:2021lbr} for a solution decomposing as
\begin{equation}
ds^2= e^{2A}ds^2(\text{AdS}_3)+ ds^2(\text{M}_7),~~~~F= f_++ e^{3A}\text{vol}(\text{AdS}_3)\wedge \star_7 \lambda f,
\end{equation}
with purely magnetic NS flux, dilaton $\Phi$ and where $\lambda f_n= (-1)^{[\frac{n}{2}]}f_n$, these are\footnote{We are also assuming unit radius AdS$_3$ and have fixed an arbitrary constant below. The truly general conditions are given in \cite{Macpherson:2021lbr}. Note that we have inverted what is referred to as ${\cal N}=(1,0)$ and ${\cal N}=(0,1)$ with respect to that reference.}
\begin{align}
&d_{H_3}(e^{A-\Phi}\Psi_-)=0,~~~~d_{H_3}(e^{2A-\Phi}\Psi_+)- 2 e^{A-\Phi}\Psi_{-}=\frac{1}{8}e^{3A}\star_7\lambda(f_{+}),\nonumber\\[2mm]
&(\Psi_{-}\wedge \lambda f_{+})\bigg\lvert_7= - \frac{ 1}{2} e^{-\Phi}\text{vol}(\text{M}_7),\label{eq:BPSgen}
\end{align}
where $\Psi_{\pm}$ can be defined in term of spinors supported by M$_7$. However one does not need to make specific reference to these, it is sufficient that $\Psi_{\pm}$ realises a G$_2\times $G$_2$-structure. For our purposes it will be sufficient to consider a restricted case where the intersection of these two G$_2$'s is a strict SU(3)-structure for which one may parameterise
\begin{equation}
\Psi_+=-\text{Im}\left(e^{-i J}\right)+V\wedge \text{Re}\Omega,~~~~\Psi_-= -\text{Im}\Omega- V\wedge \text{Re}\left(e^{-i J}\right),
\end{equation}
where $V$ is a real 1-form defining a vielbein direction in M$_7$, while $(J,\Omega)$ can be written in terms of a further 3 complex vielbein directions $E_1,E_2,E_3$ as
\begin{equation}
J= E_1\wedge\overline{E}_1+E_2\wedge\overline{E}_2+E_3\wedge\overline{E}_3,~~~~\Omega= E_1\wedge E_2\wedge E_3.
\end{equation}
The class of solutions of the previous section preserves ${\cal N}=(0,4)$ supersymmetry if it preserves 4 independent SU(3)-structures which each obey \eqref{eq:BPSgen}. As the class contains an S$^3$ factor one can define 1-forms $(L_a,R_a)$ for $a=1,2,3$ such that
\begin{equation}
dL_a=\frac{1}{2}\epsilon_{abc}L_b\wedge L_c,~~~~dR_a=-\frac{1}{2}\epsilon_{abc}R_b\wedge R_c,~~~~ ds^2(\text{S}^3)=\frac{1}{4}(L_a)^2=\frac{1}{4}(R_a)^2,
\end{equation}
with $L_a$ a singlet/triplet under the SO(3)$_{L/R}$ subgroup of SO(4) $=$ SO(3)$_L\times $SO(3)$_R$, with the charge of $R_a$ the opposite. It is possible to show that the SU(3)-structure defined through the vielbein
\begin{equation}\label{eq:specificvielnbein}
E_a= -\sqrt{g}h^{\frac{1}{4}}dz_a+ i \frac{1}{2\mu h^{\frac{1}{4}}} L_a,~~~V= \frac{\sqrt{g}}{h^{\frac{1}{4}}}d\rho
\end{equation}
solves \eqref{eq:BPSgen}, realising ${\cal N}=(0,1)$ explicitly. This gets enhanced to ${\cal N}=(0,4)$ because $\Psi_{\pm}$ depend on the 3-sphere through $L_a,dL_a$, which are SO(3)$_R$ triplets, and $\text{vol}(\text{S}^3)$, an SO(4) invariant, with only the latter entering the physical fields. As such, if \eqref{eq:specificvielnbein} solves \eqref{eq:BPSgen} so too does the SU(3)-structure that results after performing a generic constant SO(3) rotation
of $L_a$ in \eqref{eq:specificvielnbein}, which one can exploit to generate another 3 independent SU(3)-structures necessarily solving \eqref{eq:BPSgen} for the same physical fields, for 4 SU(3)-structures in total. That it is specifically small ${\cal N}=(0,4)$ that is realised for this class rather than some other superconformal group is obvious once one notes that any other choice would necessitate additional isometries not present in the class generically. Additionally, through Hopf fiber T-duality, it is possible to map the class of solutions to that of section 3.3 of \cite{Macpherson:2022sbs}, specialised to the case where the local coordinate $x$ there is an isometry, which proves this more rigorously.
Given the round 3-sphere in this class one might wonder whether, or under which conditions, there is an enhancement to ${\cal N}=(4,4)$. This would require a further 4 ${\cal N}=(1,0)$ SU(3)-structures to be supported by the background, which should solve a cousin of \eqref{eq:BPSgen} with $\Psi_-\to -\Psi_-$\footnote{Beware this map does not hold in full generality, only for the restricted case we consider. See \cite{Macpherson:2021lbr} for full details.}. These need to span the 3-sphere in terms of $R_a$ as each ${\cal N}=4$ sub-sector must be a singlet with respect to the R-symmetry of the other. One can show that the vielbein
\begin{equation}
E_a= \sqrt{g}h^{\frac{1}{4}}dz_a+ i \frac{1}{2\mu h^{\frac{1}{4}}} R_a,~~~V= -\frac{\sqrt{g}}{h^{\frac{1}{4}}}d\rho
\end{equation}
does give rise to an SU(3)-structure which solves the ${\cal N}=(1,0)$ conditions, with a further 3 implied by this as before. However, the RR 2-form now changes sign with respect to \eqref{eq:flux2}. The only way to have the same physical fields compatible with both left and right ${\cal N}=4$ sub-sectors is to fix $dh=0$, i.e.
\begin{equation}
h=\text{constant}~~~~\Rightarrow~~~~{\cal N}=(4,4),
\end{equation}
which makes $F_2,F_0$ trivial. Generically, however, just ${\cal N}=(0,4)$ is preserved. Finally we should comment that when $h\neq$ constant one is free to replace S$^3$ by the lens space S$^3/\mathbb{Z}_k$ without breaking any further supersymmetry. Instead, when $h =$ constant the lens space breaks ${\cal N}=(4,4)$ to ${\cal N}=(0,4)$.
\subsection{The brane picture}\label{branepicture}
In this section we show that the class of solutions \eqref{eq:massiveclassmetric} can be obtained as the near-horizon limit of a brane intersection defined by D2-D4 branes ending on D6-NS5-D8 bound states, as depicted in Table \ref{D2D4D6D8NS5_1}.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{| l | c | c | c| c | c | c | c | c| c | c |}
\hline
& $x^0$ & $x^1$ & $r$ & $\varphi^1$ & $\varphi^2$ & $\rho$ & $\zeta$ & $\theta^1$ & $\theta^2$ & $\theta^3$ \\ \hline
D2 & x & x & & & & x & & & & \\ \hline
D4 & x & x & x & x & x & & & & & \\ \hline
NS5 & x & x & & & & &x & x & x & x \\ \hline
D6 & x & x & & & &x& x &x &x &x \\ \hline
D8 & x & x &x & x & x & &x &x & x & x \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection underlying the $\ma N=(0,4)$ AdS$_3$ solutions \eqref{eq:massiveclassmetric}. $(x^0,x^1)$ are the directions where the 2d dual CFT lives, $(r,\varphi^i)$ are spherical coordinates spanning the 3d space previously parameterised by $(z_1,z_2,z_3)$, $\zeta$ is the radial coordinate of AdS$_3$ and $\theta^i$ parameterise the S$^3$.}
\label{D2D4D6D8NS5_1}
\end{table}
Imamura's D6-NS5-D8 flat-space intersection \cite{Imamura:2001cr} is described by the supergravity solution \eqref{eq:massiveclassmetricoriginal}. Adding D2-D4 branes, the 10d metric becomes
\begin{equation}
\label{brane_metric_D2D4NS5D6D8}
\begin{split}
d s^2&=\,h^{-1/2}\,\left[H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}2}^{-1/2}\,ds^2({\mathbb{R}^{1,1}})+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}2}^{1/2} \,(d\zeta^2+\zeta^2ds^2(\text{S}^3))\right] \\
&+h^{-1/2}\,g\,H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}2}^{-1/2}\,d\rho^2+h^{1/2}\,g\,H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}2}^{1/2}(dr^2+r^2 ds^2(\text{S}^2)) \, ,
\end{split}
\end{equation}
where we have parameterised the 2d Minkowski spacetime $\mathbb{R}^{1,1}$ with $(x^0, x^1)$, the 4d space transverse to the D2-D4 branes with coordinates $(\zeta, \theta^i)$ and the 3d space parameterised by $(z_1, z_2, z_3)$ in the previous subsections with spherical coordinates $(r,\varphi^i)$.
We take the D4 and D2 charges completely localised within the worldvolume of the D6-NS5-D8 branes, i.e. $H_{\mathrm{D}4}=H_{\mathrm{D}4}(\zeta)$ and $H_{\mathrm{D}2}=H_{\mathrm{D}2}(\zeta)$, with the functions $h(\rho,r)$ and $g(\rho,r)$ describing the D6-NS5-D8 bound state as in \eqref{eq:massiveclassmetricoriginal}\footnote{In \cite{Imamura:2001cr} the functions $g$ and $h$ are respectively called $S$ and $K$.}. We introduce the following gauge potentials and dilaton,
\begin{equation}
\begin{split}\label{brane_potentials_D2D4NS5D6D8}
&C_{3}=H_{\mathrm{D}2}^{-1}\,\text{vol}(\mathbb{R}^{1,1})\wedge d\rho\,,\\
&C_{5}=H_{\mathrm{D}4}^{-1}\,h\,g\,r^2\,\text{vol}(\mathbb{R}^{1,1})\wedge dr \wedge \text{vol}(\text{S}^2)\,,\\
&C_{7}=H_{\mathrm{D}4}\,h^{-1}\,\zeta^3\,\text{vol}(\mathbb{R}^{1,1})\wedge d\zeta \wedge \text{vol}(\text{S}^3) \wedge d\rho \,,\\
&B_{6}=H_{\mathrm{D}4}\,g^{-1}\,\zeta^3\,\text{vol}(\mathbb{R}^{1,1})\wedge d \zeta \wedge\text{vol}(\text{S}^3)\,,\\ \vspace{0.4cm}
&e^{\Phi}=h^{-3/4}\,g^{1/2}\,H_{\mathrm{D}2}^{1/4}\,H_{\mathrm{D}4}^{-1/4}\,,
\end{split}
\end{equation}
from which the fluxes read
\begin{equation}
\begin{split}\label{brane_fluxes_D2D4NS5D6D8}
& F_{2} = -\partial_r h \,r^2\,\text{vol}(\text{S}^2)\,, \\
& H_{3} =- \partial_r g \, r^2\,d\rho\wedge\text{vol}(\text{S}^2)+H_{\mathrm{D}2}\,H_{\mathrm{D}4}^{-1}\,\partial_\rho \left(h\,g\right)\,r^2\,dr\wedge\text{vol}(\text{S}^2)\,, \\
&F_{4}=\partial_\zeta H_{\mathrm{D}2}^{-1}\,\text{vol}(\mathbb{R}^{1,1})\wedge d\zeta\wedge d\rho-\partial_\zeta H_{\mathrm{D}4}\,\zeta^3\, \text{vol}(\text{S}^3)\wedge d\rho\,,
\end{split}
\end{equation}
plus a Romans' mass $F_{0}$.
The equations of motion and Bianchi identities for the D2-D4 branes and the D6-NS5-D8 branes can then be solved independently, such that
\begin{equation}\label{10d-defectEOM}
H_{\mathrm{D}2}=H_{\mathrm{D}4}\qquad\text{and}\qquad \nabla^2_{\zeta}\,H_{\mathrm{D}4}=0\,,
\end{equation}
for the D2-D4 subsystem, and
\begin{equation}\label{10d-motherbranesEOM_nh}
\partial_\rho h=F_0\,g\qquad \text{and}\qquad \nabla^2_{r}\,h+\frac{1}{2}\partial_\rho^2\,h^2=0\,,
\end{equation}
for the D6-NS5-D8 branes. Here
$\nabla_r^2$ and $\nabla_\zeta^2$ are, respectively, the Laplacians in spherical coordinates on the 3d flat space transverse to the D6-NS5-D8 branes and the 4d space transverse to the D2-D4 branes.
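As a quick consistency check (not part of the original derivation), one can verify symbolically that, once the first equation $\partial_\rho h=F_0\,g$ is imposed with constant $F_0$, the source term $F_0\,\partial_\rho(h\,g)$ appearing in the Bianchi identity $dF_2=F_0\,H_3$ is precisely $\tfrac12\partial_\rho^2 h^2$, as in the second equation:

```python
import sympy as sp

rho, r, F0 = sp.symbols('rho r F_0')
h = sp.Function('h')(rho, r)
# Impose the first equation of motion: d_rho h = F0 * g
g = sp.diff(h, rho) / F0

# The dr ^ vol(S^2) component of dF_2 = F0 H_3 requires
# nabla_r^2 h + F0 * d_rho(h g) = 0; check that the second term
# is exactly (1/2) d_rho^2 (h^2), as stated in the text.
lhs = F0 * sp.diff(h * g, rho)
rhs = sp.Rational(1, 2) * sp.diff(h**2, rho, 2)
assert sp.simplify(lhs - rhs) == 0
```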
The equations in \eqref{10d-motherbranesEOM_nh} coincide with \eqref{eq:Bianchiidentitiesorg} and then \eqref{imamuraeq1}, \eqref{imamuraeq2}. In turn,
the equations in \eqref{10d-defectEOM} can be easily solved for
\begin{equation}
H_{\mathrm{D}4}(\zeta)=H_{\mathrm{D}2}(\zeta)=1+\frac{q}{\zeta^2}\,,
\end{equation}
where $q$ is an integration constant.
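One can check directly (a simple verification, not in the original text) that this profile is harmonic on the 4d space transverse to the D2-D4 branes:

```python
import sympy as sp

zeta, q = sp.symbols('zeta q', positive=True)
H = 1 + q / zeta**2

# Radial Laplacian on 4d flat space (the D2-D4 transverse space):
# nabla_zeta^2 H = zeta^{-3} d/dzeta ( zeta^3 dH/dzeta )
laplacian = sp.diff(zeta**3 * sp.diff(H, zeta), zeta) / zeta**3
assert sp.simplify(laplacian) == 0
```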
Taking the limit $\zeta \rightarrow 0$ the $\zeta$ coordinate becomes the radial coordinate of an AdS$_3$ factor, and the metric \eqref{brane_metric_D2D4NS5D6D8} and the fluxes \eqref{brane_fluxes_D2D4NS5D6D8} take the form\footnote{We redefined the Minkowski coordinates as $(t,x^1)\rightarrow q\,(t,x^1)$.}
\begin{equation}
\label{brane_metric_D2D4NS5D6D8_nh}
\begin{split}
ds_{10}^2 &= q\, h^{-1/2} \left[ds^2(\text{AdS}_3) + ds^2(\text{S}^3) \right] + h^{-1/2} g \, d\rho^2 + h^{1/2} g\left(dr^2 + r^2 ds^2(\text{S}^2)\right) \,, \\
F_{2} &= - \partial_r h \,r^2\,\text{vol}(\text{S}^2) \,, \qquad \qquad e^{\Phi} = h^{-3/4} g^{1/2} \,,\\
H_{3} &= -\partial_r g \, r^2\,d\rho\wedge\text{vol}(\text{S}^2)+\partial_\rho \left(h\,g\right)\,r^2\,dr\wedge\text{vol}( \text{S}^2)\,, \\
F_{4} &= 2q \, \text{vol}(\text{AdS}_3)\wedge d\rho + 2q \, \text{vol}(\text{S}^3) \wedge d\rho \,,\\
\end{split}
\end{equation}
with $(h,g)$ satisfying \eqref{10d-motherbranesEOM_nh}.
Therefore, we recover the AdS$_3\times $S$^3$ backgrounds \eqref{eq:massiveclassmetric} with the 3d transverse space that was parametrised by $(z_1, z_2, z_3)$ now written in spherical coordinates $(r, \varphi^i)$. Our new class of solutions can thus be interpreted as the low-energy regime of D6-NS5-D8 bound states \cite{Imamura:2001cr} wrapping an $\mrm{AdS}_3\times $S$^3$ geometry, with the geometry associated to the bound state uniquely fixed by the functions $h$ and $g$, and the D2-D4 intersection completely resolved into the AdS$_3\times $S$^3$ geometry.
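The emergence of the AdS$_3\times$S$^3$ factor in the $\zeta\to0$ limit can be checked explicitly: with $H_{\mathrm{D}2}=H_{\mathrm{D}4}\to q/\zeta^2$ and the rescaling $(t,x^1)\to q\,(t,x^1)$, the D2-D4 part of the metric becomes $q\,[\zeta^2 ds^2(\mathbb{R}^{1,1})+d\zeta^2/\zeta^2+ds^2(\text{S}^3)]$, i.e. unit-radius AdS$_3$ in Poincar\'e coordinates times a unit S$^3$. A short symbolic sketch of this check (omitting the overall $h^{-1/2}$ factor, which is untouched by the limit):

```python
import sympy as sp

zeta, q = sp.symbols('zeta q', positive=True)
H = q / zeta**2   # zeta -> 0 limit of H = 1 + q/zeta^2

# Metric coefficients of the D2-D4 part, after rescaling (t, x^1) -> q (t, x^1):
g_mink = q**2 * H**-1          # coefficient of ds^2(R^{1,1})
g_zeta = H                     # coefficient of dzeta^2
g_S3   = H * zeta**2           # coefficient of ds^2(S^3)

# Unit-radius AdS_3 in Poincare coordinates:
# q * ( zeta^2 ds^2(R^{1,1}) + dzeta^2/zeta^2 ), plus q * ds^2(S^3)
assert sp.simplify(g_mink - q * zeta**2) == 0
assert sp.simplify(g_zeta - q / zeta**2) == 0
assert sp.simplify(g_S3 - q) == 0
```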
\subsection{An uplift of 6d minimal ${\cal N}=2$ ungauged supergravity}\label{sec:uplift}
The fact that the system of governing PDEs \eqref{eq:Bianchiidentities} supports solutions with both a warped Mink$_6$ and an AdS$_3\times$ S$^3$ factor strongly suggests that the construction should work for any solution to 6d ${\cal N}=2$ ungauged supergravity with SU(2) R-symmetry (see for instance \cite{Hristov:2014eba}). In this subsection we show that this is indeed the case.
The pseudo action of the aforementioned 6d theory is
\begin{equation}
S_6= \int d^6x\sqrt{-g_6}\bigg( R- \frac{1}{3}H^{(6)}_{abc}H^{(6)abc}\bigg),
\end{equation}
where $H^{(6)}$ is a closed self-dual 3-form; the self-duality constraint needs to be imposed after varying the action. It is possible to show that this theory can be embedded into massive IIA as
\begin{align}\label{eq:massiveclassmetric1}
ds^2= \frac{1}{\sqrt{h}}\bigg[c^{-2}ds^2_6+ g d\rho^2\bigg]+ g \sqrt{h}\bigg(dz_1^2+dz_2^2+dz_3^2\bigg),~~~~e^{-\Phi}=\frac{h^{\frac{3}{4}}}{\sqrt{g}},
\end{align}
for fluxes
\begin{align}
F_0&= \frac{\partial_{\rho}h}{g},~~~~F_4=2 c^2 H^{(6)}\wedge d\rho,\\[2mm]
F_2&= -(\partial_{z_1}h dz_2\wedge dz_3+\partial_{z_2}h dz_3\wedge dz_1+\partial_{z_3}h dz_1\wedge dz_2),\\[2mm]
H_3&=\partial_{\rho}(h g)dz_1\wedge dz_2\wedge dz_3-(\partial_{z_1}g dz_2\wedge dz_3+\partial_{z_2}g dz_3\wedge dz_1+\partial_{z_3}g dz_1\wedge dz_2)\wedge d\rho,
\end{align}
where $c$ is an arbitrary constant. We have confirmed that the 10d equations of motion are implied by those following from the 6d action together with \eqref{eq:Bianchiidentities}. Therefore, any solution to the 6d theory gives rise to a solution in massive IIA once \eqref{eq:Bianchiidentities} are imposed. All such supersymmetric solutions were classified some time ago in \cite{Gutowski:2003rg}.
\section{Defects within $\ma N=(1,0)$ 6d CFTs}\label{defectsin6d}
In this section we focus on the particular subclass of solutions featured by a (locally) AdS$_7$ asymptotics, and discuss their dual interpretation as surface defects within the 6d $\ma N=(1,0)$ CFTs dual to the AdS$_7$ solutions of massive Type IIA supergravity constructed in \cite{Apruzzi:2013yva}.
Our first aim will be to derive the particular set of coordinates for which the AdS$_7$ asymptotics is manifest. This can be done by direct calculation in 10d or by making use of the consistent truncation of massive IIA supergravity to minimal 7d $\ma N=1$ gauged supergravity \cite{Passias:2015gya}. From the latter perspective the 10d solutions take the form of a domain wall with AdS$_3\times$S$^3$ worldvolume with a locally AdS$_7$ vacuum at infinity, that arises upon consistent truncation from the AdS$_7\times $S$^2\times I$ solutions of \cite{Apruzzi:2013yva} (see Appendix \ref{appendix}).
In 10d one can see from the brane picture studied in subsection \ref{branepicture} that D2-D4 branes break the isometries of the $\mathbb{R}^{1,5}$ worldvolume common to the D6-NS5-D8 intersection, as
$$
\mathbb{R}^{1,5} \quad \longrightarrow \quad \text{AdS}_3 \times \text{S}^3,
$$
leaving intact the conformal symmetries of $\text{AdS}_3$. In the UV the AdS$_7$ vacuum emerges as a foliation of the $\text{AdS}_3 \times \text{S}^3$ subspace over an interval.
With the insight coming from the supergravity analysis, we will construct 2d $(0,4)$ quiver gauge theories that flow in the IR to the CFTs dual to the AdS$_3$ solutions and show that they can be embedded within the 6d quivers constructed in
\cite{Gaiotto:2014lca,Cremonesi:2015bld}, dual to the AdS$_7$ solutions in \cite{Apruzzi:2013yva}.
\subsection{The AdS$_7$ vacua of massive IIA and their dual 6d CFTs}
We start by briefly reviewing the main properties of the AdS$_7$ solutions of massive IIA supergravity and of their 6d dual CFTs.
The solutions in \cite{Apruzzi:2013yva} are described by AdS$_7\times$S$^2$ foliations over an interval preserving 16 supercharges. They arise in the near horizon limit of a D6-NS5-D8 intersection, constructed in \cite{Bobev:2016phc}.
In the parametrisation of \cite{Cremonesi:2015bld} they take the form
\begin{align}
\label{AdS7vacua}
ds_{10}^2 &= \pi\sqrt{2} \bigg[ 8 \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} ds^2(\text{AdS}_7) + \Bigl(-\frac{\ddot{\alpha}}{\alpha}\Bigr)^{1/2} dy^2 + \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} \frac{(-\alpha\ddot{\alpha})}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}} ds^2(\text{S}^2) \bigg] \,, \\
\label{dilatonAdS7}
e^{2\Phi} &= 3^8 2^{5/2} \pi^5 \frac{(-\alpha/\ddot{\alpha})^{3/2}}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}}\,, \\
\label{B2AdS7}
B_{2} &= \pi \Bigl(-y + \frac{\alpha\dot{\alpha}}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}}\Bigr) \, \text{vol}(\text{S}^2) \,, \\
\label{F2AdS7}
F_{2}&=\Bigl(\frac{\ddot{\alpha}}{162\pi^2}+\frac{\pi F_0\alpha\dot{\alpha}}{\dot{\alpha}^2-2\alpha\ddot{\alpha}}\Bigr)\text{vol}(\text{S}^2).
\end{align}
The solutions are specified by the function $\alpha(y)$, which satisfies the differential equation
\begin{equation}\label{alpha'''1}
\alpha'''=-162\pi^3 F_0.
\end{equation}
Let us now recall the main ingredients of the 6d quivers dual to these solutions. We will follow \cite{Cremonesi:2015bld} and
\cite{Nunez:2018ags}. Equation \eqref{B2AdS7} implies that there are NS5-branes located at given positions in the $y$-direction, that can be labelled by an integer number $k$. Piecewise $\alpha(y)$ functions defined in intervals $[k,k+1]$ between NS5-branes can then be constructed, with continuous first and second derivatives, and third derivative satisfying
\begin{equation}\label{alpha'''2}
\alpha_k'''=-81\pi^2\beta_k,
\end{equation}
at a given $[k,k+1]$ interval. Given that $Q_{D8}=2\pi F_0$ this implies that
\begin{equation}\label{QD8charge}
Q_{D8}^{(k)}=\beta_k,
\end{equation}
at a $[k,k+1]$ interval.
The $\beta_k$ are therefore integers, and $(\beta_{k-1}-\beta_k)$ are the numbers of D8-branes that are introduced at each $y=k$ position.
Integrating \eqref{alpha'''2} one finds
\begin{equation}
\alpha_k(y)= -\frac{27}{2}\pi^2 \beta_k (y-k)^3+\frac12 \gamma_k (y-k)^2+\delta_k (y-k)+\mu_k,\qquad \text{for} \quad y\in [k,k+1],
\end{equation}
where $(\gamma_k, \delta_k, \mu_k)$ are constants that are determined by imposing continuity of $\alpha, \alpha', \alpha''$. The condition that $\alpha_k''=\alpha_{k-1}''$ at $y=k$ imposes that
\begin{equation}\label{gamma}
\gamma_k=-81\pi^2 \beta_{k-1}+\gamma_{k-1}=-81\pi^2 (\beta_0+\beta_1+\dots +\beta_{k-1}).
\end{equation}
This implies that the D6-brane charge at each interval is given by
\begin{equation} \label{QD6charge}
Q_{D6}^{(k)}=\frac{1}{2\pi}\int_{\text{S}^2}\hat{F}_2=-\frac{\gamma_k}{81\pi^2},
\end{equation}
where $\hat{F}_2=F_2-F_0\wedge B_2$ is the Page flux. This charge should be an integer.
In turn, $\alpha_k'=\alpha_{k-1}'$ and $\alpha_k=\alpha_{k-1}$ at $y=k$ determine, respectively,
\begin{equation}
\delta_k=-\frac{81}{2}\pi^2\beta_{k-1}+\gamma_{k-1}+\delta_{k-1}, \qquad
\mu_k=-\frac{27}{2}\pi^2\beta_{k-1}+\frac12 \gamma_{k-1}+\delta_{k-1}+\mu_{k-1}.
\end{equation}
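These recursions follow directly from matching the cubic pieces at $y=k$. A symbolic check (an independent verification, not part of the paper) for the first pair of intervals:

```python
import sympy as sp

y = sp.symbols('y')
beta0, beta1, gamma0, delta0, mu0 = sp.symbols('beta_0 beta_1 gamma_0 delta_0 mu_0')
pi = sp.pi

def alpha_piece(beta, gamma, delta, mu, k):
    # Piecewise cubic alpha_k(y) on [k, k+1], as in the text
    return (-sp.Rational(27, 2) * pi**2 * beta * (y - k)**3
            + sp.Rational(1, 2) * gamma * (y - k)**2
            + delta * (y - k) + mu)

# Constants of alpha_1 fixed by the recursions for gamma_k, delta_k, mu_k
gamma1 = -81 * pi**2 * beta0 + gamma0
delta1 = -sp.Rational(81, 2) * pi**2 * beta0 + gamma0 + delta0
mu1 = -sp.Rational(27, 2) * pi**2 * beta0 + sp.Rational(1, 2) * gamma0 + delta0 + mu0

a0 = alpha_piece(beta0, gamma0, delta0, mu0, 0)
a1 = alpha_piece(beta1, gamma1, delta1, mu1, 1)

# alpha, alpha' and alpha'' are then continuous at y = 1
for n in range(3):
    assert sp.simplify((sp.diff(a0, y, n) - sp.diff(a1, y, n)).subs(y, 1)) == 0
```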
The continuity conditions need to be supplemented by conditions at the boundaries of the $y$-interval. For the solution to be geometrically well-defined, the asymptotic form of the metric needs to approach one of four physical behaviours compatible with the metric factors, namely a regular zero or a singular D6, O6 or D8/O8 behaviour. Two of these arise generically: one can choose the integration constants such that $\alpha=0$ at a boundary of the space, in which case the behaviour corresponds to fully localised D6-branes,
or one can impose that $\alpha''=0$, in which case one finds fully localised O6-planes. The other behaviours require specific tunings of $\alpha$ when $F_0\neq 0$. One can tune $\alpha$ such that in the boundary interval $\alpha=- q_2(y)\, \alpha''$, for $q_n=q_n(y)$ an order $n$ polynomial; then, as long as $q_2$ has non-degenerate zeros, the zero of $\alpha''$ is regular. Likewise, one can simultaneously impose $\alpha''=0$ and $(\alpha')^2-2 \alpha \alpha''= q_3\, \alpha''$; then the behaviour at the zero of $\alpha''$ is that of a localised O8-plane, which may coincide with additional D8-branes.
The D6-NS5-D8 brane set-up associated to the solutions is the one depicted in Table \ref{D6D8NS5}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}
\hline
& $x^0$ & $x^1$ & $x^2$ & $x^3$ & $x^4$ & $x^5$ & $x^6$ & $x^7$ & $x^8$ & $x^9$\\ \hline
D6 & x & x & x & & & &x &x &x &x \\ \hline
D8 & x & x & &x & x & x &x &x & x & x \\ \hline
NS5 & x & x & & & & & x & x & x & x \\ \hline
\end{tabular}
\end{center}
\caption{$\frac14$-BPS brane intersection underlying the 6d $(1,0)$ CFTs living in D6-NS5-D8 brane intersections. The directions $(x^0,x^1,x^6,x^7,x^8,x^9)$ are the directions where the 6d CFT lives. $x^2$ is the field theory direction, along which the D6-branes are stretched. $(x^3, x^4, x^5)$ are the directions realising the SO(3) R-symmetry.}
\label{D6D8NS5}
\end{table}
Here the D6-branes play the role of colour branes while the D8-branes play the role of flavour branes \cite{Brunner:1997gk,Hanany:1997sa}. In 6d language the quantised charges give rise to the quiver depicted in Figure \ref{6dquiver},
\begin{figure}
\centering
\includegraphics[scale=0.65]{Dibujo1}
\caption{Quiver describing the field theory living in D6-NS5-D8 intersections. The circles denote $(1,0)$ vector multiplets and the lines $(1,0)$ bifundamental matter fields. The quiver has been terminated with $(\beta_{P-1}-\beta_P)$ D8-branes at the end of the space, with $\beta_P=\frac{\gamma_P}{81\pi^2}$ and $\gamma_P=-81\pi^2\sum_{l=1}^{P-1}\beta_l$.}
\label{6dquiver}
\end{figure}
as discussed in \cite{Cremonesi:2015bld,Nunez:2018ags}. One can check that 6d anomaly cancellation is fulfilled given that at each gauge node of the quiver
\begin{equation}\label{anomalycan}
2N_k=2Q_{D6}^{(k)}=N_f^k=Q_{D6}^{(k-1)}+Q_{D6}^{(k+1)}+\Delta Q_{D8}^{(k)},
\end{equation}
with $\Delta Q_{D8}^{(k)}=\beta_{k-1}-\beta_k$.
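In fact, \eqref{anomalycan} holds identically for the charges \eqref{QD8charge} and \eqref{QD6charge}, for any choice of the $\beta_k$. A short symbolic check (an illustration, not in the original text), using $Q_{D6}^{(k)}=-\gamma_k/(81\pi^2)=\beta_0+\dots+\beta_{k-1}$:

```python
import sympy as sp

# Symbolic D8 charges beta_k at each interval; the D6 charges follow from
# the recursion for gamma_k: Q_D6^(k) = beta_0 + ... + beta_{k-1}
betas = sp.symbols('beta_0:6')
Q_D6 = [sum(betas[:k]) for k in range(len(betas) + 1)]

# 6d anomaly cancellation 2 N_k = N_f^k at every interior gauge node
for k in range(1, len(betas)):
    Nf = Q_D6[k - 1] + Q_D6[k + 1] + (betas[k - 1] - betas[k])
    assert sp.expand(2 * Q_D6[k] - Nf) == 0
```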
\subsection{The surface defect ansatz}\label{defectAnsatz}
In this subsection we search for a solution within the class constructed in section \ref{eq:the massiveclass} that is asymptotically AdS$_7$. The first step is to decide on the form of the external 7d and internal 3d spaces. We shall assume that the metric takes the form
\begin{align}
\frac{1}{\sqrt{2}\pi}ds^2&= L^2\sqrt{-\frac{\alpha}{\alpha''}}ds^2(\text{M}_{1,6})+ \Delta_1 dy^2+ \Delta_2 ds^2(\text{S}^2),\nonumber\\[2mm]
ds^2(\text{M}_{1,6})&= P^2\bigg[ds^2(\text{AdS}_3)+\frac{1}{m^2}ds^2(\text{S}^3)\bigg]+Q^2 dx^2,\label{eq:defectansatz}
\end{align}
where $P,Q$ are functions of $x$ only and $\Delta_{1,2}$ are functions of $x$ and $y$. We find it convenient to fix $q=1$
in this section, which we are free to do without loss of generality.
The first step is to impose SO(3) symmetry in \eqref{eq:massiveclassmetric}, so that $(z_1,z_2,z_3) ~\to~ (r,\text{S}^2)$. Then we need to arrange for a change of coordinates $(r,\rho)~ \to~(x,y)$ such that \eqref{eq:defectansatz} emerges. Our experience in the previous sections suggests we take
\begin{equation}
r= q_1(x) \alpha,~~~~ \rho= -q_2(x) \alpha'.
\end{equation}
By comparing \eqref{eq:massiveclassmetric} to \eqref{eq:defectansatz} we then see we must fix
\begin{equation}
h= \frac{1}{2P^4 L^4 \pi^2} \left(-\frac{\alpha''}{\alpha}\right),~~~~g=\frac{4 L^8 \pi^4 P^6 q_2^2 Q^2}{(\dot{q}_1)^2(q_1^2 (\alpha')^2-2 L^4\pi^2 P^4 q_2^2 \alpha \alpha'')}
\end{equation}
and solve
\begin{equation}
q_1\dot{q}_1=2 L^4\pi^2 P^4 q_2 \dot{q}_2.
\end{equation}
Turning our attention to the Bianchi identities, we find that requiring $F_0$ to be constant, under the assumption that $\alpha'''=-162 \pi^3F_0$, imposes that
\begin{equation}
4 q_1 \dot{P}=P \dot{q}_1,~~~~ (\dot{q}_1)^2= \frac{2\pi L^8}{3^4} P^6 Q^2 q_2,
\end{equation}
and implies the remaining Bianchi identities. Modulo diffeomorphisms the 3 ODEs we have can be solved without loss of generality as
\begin{equation}
P= 2^{3/2}x,~~~~ Q=-\frac{2^{3/2}}{(c+x^4)^{\frac{1}{4}}},~~~~ q_1=\frac{64L^6}{3^4}x^4,~~~~ q_2=\frac{8L^{4}}{3^4 \pi} \sqrt{c+ x^4},~~~ dc=0.
\end{equation}
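It is straightforward to verify symbolically (a consistency check added here, not part of the paper) that these expressions solve the three ODEs quoted above:

```python
import sympy as sp

x, L, c = sp.symbols('x L c', positive=True)
pi = sp.pi

# The solution quoted in the text
P = 2**sp.Rational(3, 2) * x
Q = -2**sp.Rational(3, 2) / (c + x**4)**sp.Rational(1, 4)
q1 = 64 * L**6 / 3**4 * x**4
q2 = 8 * L**4 / (3**4 * pi) * sp.sqrt(c + x**4)

# The three ODEs: q1 q1' = 2 L^4 pi^2 P^4 q2 q2',
# 4 q1 P' = P q1', and (q1')^2 = (2 pi L^8 / 3^4) P^6 Q^2 q2
eq1 = q1 * sp.diff(q1, x) - 2 * L**4 * pi**2 * P**4 * q2 * sp.diff(q2, x)
eq2 = 4 * q1 * sp.diff(P, x) - P * sp.diff(q1, x)
eq3 = sp.diff(q1, x)**2 - 2 * pi * L**8 / 3**4 * P**6 * Q**2 * q2

for eq in (eq1, eq2, eq3):
    assert sp.simplify(eq) == 0
```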
The NS sector of the solution then takes the form
\begin{align}\label{AdS3sliced}
\frac{ds^2}{8\sqrt{2}\pi L^2}&=\bigg[\sqrt{-\frac{\alpha}{\alpha''}}\bigg(x^2 \big(ds^2(\text{AdS}_3)+ ds^2(\text{S}^3)\big)+ \frac{dx^2}{\sqrt{c+ x^4}}\bigg)+\frac{\sqrt{c+x^4}}{x^2}\sqrt{\frac{-\alpha''}{\alpha}}\bigg(dy^2+\frac{\alpha^2 x^4}{\Delta}ds^2(\text{S}^2)\bigg)\bigg],\nonumber\\[2mm]
e^{-\Phi}&=\frac{L \sqrt{\Delta}}{3^4 2^{\frac{5}{4}}\pi^{\frac{5}{2}} x (c+x^4)^{\frac{1}{4}}}\left(-\frac{\alpha''}{\alpha}\right)^{\frac{3}{4}},~~~~B_2=-L^2\pi\left(-y+ \frac{x^4 \alpha \alpha'}{\Delta}\right)\text{vol}(\text{S}^2),
\end{align}
where we have defined
\begin{equation}
\Delta=x^4\left((\alpha')^2-2 \alpha \alpha''\right)-2 c \alpha \alpha'',
\end{equation}
while the RR fluxes are
\begin{align}\label{AdS3slicedRR}
F_0&=-\frac{1}{162\pi^3}\alpha''',~~~~F_2= F_0 B_2- \frac{L^2}{162\pi^2}(162F_0\pi^3y+\alpha'') \text{vol}(\text{S}^2),\nonumber\\[2mm]
F_4&= -\frac{2^4L^4}{3^4 \pi }d(\sqrt{c+x^4}\alpha')\wedge \bigg( \text{vol}(\text{AdS}_3)+\text{vol}(\text{S}^3)\bigg),\nonumber\\[2mm]
F_6&=F_4\wedge B_2-\frac{2^4 L^6}{3^4}d(\sqrt{c+x^4}(\alpha-y \alpha'))\wedge \bigg( \text{vol}(\text{AdS}_3)+\text{vol}(\text{S}^3)\bigg)\wedge \text{vol}(\text{S}^2)\,.
\end{align}
Notice that as $x \to \infty$, $x^{-4}\Delta \to 1$ and the entire NS sector tends to that of the AdS$_7$ solutions in massive IIA reviewed in the previous subsection, with unit AdS$_7$ radius. The same is true for the RR 0-form and 2-form fluxes; however, the 4-form does not tend to zero in this limit, which reflects the presence of a D2-D4 defect. That the directions $(\text{AdS}_3,\text{S}^3,x)$ tend to AdS$_7$ can be confirmed by computing the Riemann curvature tensor. The solution is bounded from below in a way that depends on the tuning of $c$: when $c\geq 0$, $x$ takes values in the interval $[0,\infty)$. When $c=0$ there is a curvature singularity at the lower bound that we do not recognise as physical, but for $c > 0$, defining $x=z^{\frac{1}{4}}$, the metric at this locus tends to
\begin{equation}
\frac{ds^2}{8\sqrt{2}\pi L^2}= \sqrt{-\frac{\alpha}{\alpha''}}\bigg[\sqrt{z}\bigg(ds^2(\text{AdS}_3)+ds^2(\text{S}^3)\bigg)+ \frac{1}{16 \sqrt{c}z^{\frac{3}{2}}}(dz^2+z^2 ds^2(\text{S}^2))\bigg]+\frac{\sqrt{c}}{8 \sqrt{z}}\sqrt{-\frac{\alpha''}{\alpha}}dy^2\nonumber.
\end{equation}
If we had $-\frac{\alpha}{\alpha''}=1$, this would be the behaviour one expects of a stack of localised D6-branes on $(\text{AdS}_3,\text{S}^3,y)$, with NS5-branes inside them, of worldvolume $(\text{AdS}_3,\text{S}^3)$, smeared along $y$. Since generically $-\frac{\alpha}{\alpha''}\neq 1$, what we actually have is a slight generalisation of this: rather than the NS5-branes being evenly smeared along $y$, such that the direction is an isometry, they form a $y$-dependent distribution. Finally, if $c<0$ we can fix $c=-b^4$, and the metric is bounded below at $x=b$, where one sees the behaviour of ONS5 fixed planes\footnote{The S-dual of O5-planes.} that are smeared along $y$. The most attractive of these three behaviours is the second\footnote{See our discussion on smeared ONS5s below \eqref{eq:refpoint}.}, so from here on we shall assume $c>0$, so that $x\in [0,\infty)$.
In the next subsection we construct the 2d quivers dual to the solutions defined by \eqref{AdS3sliced}-\eqref{AdS3slicedRR}, and show that they can be embedded in the 6d quivers discussed in the previous subsection, dual to the AdS$_7$ solutions. Before we do that we state the value of the holographic central charge computed using the Brown-Henneaux formula \cite{Brown:1986nw} for later comparison with the field theory result,
\begin{equation}
\label{holographic-c-defect}
c_{hol}=\frac{2^6}{3^7\pi^4}\int dx dy\, x^3\, (-\alpha \alpha'').
\end{equation}
\subsection{Surface defect CFTs}\label{surfaceCFT}
In this subsection we construct the 2d quivers that flow in the IR to the CFTs dual to the solutions defined by \eqref{AdS3sliced}-\eqref{AdS3slicedRR}. We show that in a certain limit these quivers can be embedded in the 6d quivers constructed from the D6-NS5-D8 sector of the brane intersection.
We start analysing the brane charges associated to the D2-D4-D6-NS5-D8 brane set-up underlying the solutions. One can see from the expressions for $F_0$ and $F_2$ in \eqref{AdS3slicedRR} that the D8 and D6 quantised charges of the AdS$_3$ solutions coincide with those of the AdS$_7$ backgrounds, given by equations \eqref{QD8charge} and \eqref{QD6charge}. In turn, for finite $x$ there are NS5-branes located at fixed values in $y$ and also in $x$. Since we are interested in embedding the 2d CFT in the 6d CFT associated to the D6-NS5-D8 subsystem, we will take $x$ large enough such that we can neglect the $(H_3)_{x\text{S}^2}$ component of the NS-NS 3-form flux and take the NS5-branes located at fixed positions in $y$, as in the D6-NS5-D8 subsystem. The fluxes associated to the $\text{AdS}_3$ solutions are then compatible with the brane intersection depicted in Table \ref{D2D4D6D8NS5_1}, that we repeat in Table \ref{D2D4D6D8NS5} below in a generic system of coordinates for a better reading.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}
\hline
& $x^0$ & $x^1$ & $x^2$ & $x^3$ & $x^4$ & $x^5$ & $x^6$ & $x^7$ & $x^8$ & $x^9$\\ \hline
D2 & x & x & x & & & & & & & \\ \hline
D4 & x & x & & x & x & x & & & & \\ \hline
D6 & x & x & x & & & &x &x &x &x \\ \hline
D8 & x & x & &x & x & x &x &x & x & x \\ \hline
NS5 & x & x & & & & & x & x & x & x \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection underlying the AdS$_3$ solutions \eqref{AdS3sliced}-\eqref{AdS3slicedRR}. $(x^0,x^1)$ are the directions where the 2d dual CFT lives. $x^2$ is the field theory direction, that we identify with $y$, where the NS5-branes are located (for $x$ sufficiently large). The D2 and D6-branes are stretched in this direction. $(x^3, x^4, x^5)$ are the directions associated to the isometries of the S$^2$ while $(x^6,x^7,x^8,x^9)$ are those associated to the S$^3$.}
\label{D2D4D6D8NS5}
\end{table}
Note that the R-symmetry of the 2d field theory living in the brane set-up is the SO(3)$_R\subset $ SO(4) symmetry group of the S$^3$, while for the 6d field theory living in the D6-D8-NS5 brane intersection it is identified with the SO(3) symmetry group of the S$^2$. This is exactly what happens for 2d $(4,4)$ field theories arising upon compactification from 6d $(1,0)$ CFTs, where the SO(3) R-symmetry of the 6d theory becomes the R-symmetry of the Coulomb branch of the 2d theory, and the SO(3)$_L\times$SO(3)$_R$ R-symmetry of the Higgs branch of the 2d theory arises in the dimensional reduction \cite{Diaconescu:1997gu,Witten:1997yu,Aharony:1999dw}. In our $(0,4)$ theories there is just a Higgs branch, since the Coulomb branch contains no scalars, and the R-symmetry is just the SO(3)$_R$ arising in the dimensional reduction.
The Hanany-Witten brane set-up associated to the brane intersection in Table \ref{D2D4D6D8NS5} is depicted in Figure \ref{brane-setup-defect}.
In this set-up the D2-branes play the role of colour branes. They are stretched in the $y$-direction, which is divided into intervals of length $1$ in our units, where NS5-branes are located. The D6-branes are also stretched in this direction; however, they extend as well along the $x$ direction, which is non-compact, and therefore they become flavour branes. The D4-branes also extend along the $x$ direction, so they are flavour branes too, as are the D8-branes. In order to construct the quiver that lives in this set-up one needs to look at the quantisation of the open strings stretched between the different branes. This has been studied in detail in various references (see for instance \cite{Couzens:2021veb}), to which we refer the reader for more details.
There are three types of massless modes to consider:
\begin{figure}
\centering
\includegraphics[scale=0.75]{BraneSetupPart2-II}
\caption{Hanany-Witten brane set-up associated to the AdS$_3$ solutions \eqref{AdS3sliced}-\eqref{AdS3slicedRR}.}
\label{brane-setup-defect}
\end{figure}
\begin{itemize}
\item D2-D2 strings: There are two cases to consider, depending on whether the two end-points of the string lie on the same stack of D2-branes or on two different stacks, separated by an NS5-brane. Let us consider first the case in which the two end-points lie on the same stack. For D2-branes stretched between NS5-branes there is an $\mathcal{N}=(0,4)$ vector multiplet and an $\mathcal{N}=(0,4)$ adjoint twisted hypermultiplet, coming from the motion of the D2-branes along the $(x^6,x^7,x^8,x^9)$ directions. Since these scalars are charged under the R-symmetry of the solution they combine into a twisted hypermultiplet. The $\mathcal{N}=(0,4)$ vector multiplet and the $\mathcal{N}=(0,4)$ adjoint twisted hypermultiplet then build up an $\mathcal{N}=(4,4)$ vector multiplet.
Let us consider now the case in which the end-points of the string lie on two different stacks of D2-branes, separated by an NS5-brane. The massless modes arise from the intersection of the two stacks of D2-branes and the NS5-brane. This fixes the degrees of freedom moving along the $(x^6,x^7,x^8,x^9)$ directions, leaving behind the scalars associated to the $(x^3,x^4,x^5)$ directions, together with the $A_2$ component of the gauge field. These give rise to a $\mathcal{N}=(4,4)$ hypermultiplet in the bifundamental representation, since the scalars are uncharged under the R-symmetry of the solution.
\item D2-D4 strings: Strings with one end on D2-branes and the other end on orthogonal D4-branes in the same interval between NS5-branes contribute with fundamental $(4,4)$ hypermultiplets, associated to the motion of the strings along the $(x^3,x^4,x^5)$ directions plus the $A_2$ component of the gauge field.
\item D2-D6 strings: Strings with one end on D2-branes and the other end on D6-branes in the same interval between NS5-branes contribute with fundamental $(0,4)$ twisted hypermultiplets, associated to the motion of the string along the $(x^6,x^7,x^8,x^9)$ directions, which are charged under the R-symmetry of the solution. Strings with one end on D2-branes and the other end on D6-branes in adjacent intervals between NS5-branes contribute with fundamental $(0,2)$ Fermi multiplets.
\item D2-D8 strings: Strings with one end on D2-branes and the other end on orthogonal D8-branes in the same interval
contribute with fundamental $(0,2)$ Fermi multiplets.
\end{itemize}
The previous fields give rise to the quivers depicted in Figure \ref{2dquiverdefect}.
\begin{figure}
\centering
\includegraphics[scale=0.75]{Dibujo2}
\caption{2d quivers associated to the AdS$_3$ solutions \eqref{AdS3sliced}-\eqref{AdS3slicedRR}. Circles denote $(4,4)$ vector multiplets, black lines $(4,4)$ bifundamental hypermultiplets, grey lines $(0,4)$ bifundamental twisted hypermultiplets and dashed lines $(0,2)$ bifundamental Fermi multiplets.}
\label{2dquiverdefect}
\end{figure}
In these quivers the D6 and D8-brane charges are the ones given by equations \eqref{QD6charge} and \eqref{QD8charge}, while the D2 and D4 brane charges at each interval are given by
\begin{equation}
Q_{D2}^{(k)}=\frac{1}{(2\pi)^5}\int_{I_x,\text{S}^2,\text{S}^3}\hat{F}_6=\frac{4}{3^4\pi^2}\int_{I_x} dx\, \frac{2x^3}{\sqrt{c+x^4}}\,\alpha_k
\end{equation}
and
\begin{equation}
\Delta Q_{D4}^{(k)}=\frac{1}{(2\pi)^3}\int_{I_y,\text{S}^3}\hat{F}_4=\frac{4}{3^4\pi^2}\sqrt{c+x^4}\int_{k}^{k+1}dy\, \alpha_k''.
\end{equation}
As the $x$-direction is semi-infinite the D2-brane charges diverge, as expected from their defect interpretation. Note that the cancellation of gauge anomalies for the gauge groups associated to them is still given by
\begin{equation}\label{anomalies}
2 Q_{D6}^{(k)}=Q_{D6}^{(k-1)}+Q_{D6}^{(k+1)}+\Delta Q_{D8}^{(k)},
\end{equation}
as for the 6d quivers depicted in Figure \ref{6dquiver}. Here we have taken into account that $(0,4)$ fundamental multiplets contribute $1$ to the gauge anomaly, $(0,2)$ fundamental Fermi multiplets contribute $-1/2$, and the remaining vector and matter fields do not contribute since they are $(4,4)$ (the reader is referred to \cite{Lozano:2019zvg,Couzens:2021veb} for more details).
Next we turn to the computation of the central charge. We show that, as expected, this quantity diverges, as $x$ is not bounded from above.
\vspace{0.5cm}
\noindent {\bf Central charge:}\\
~~\\
The central charge of a 2d $(0,4)$ CFT can be computed away from criticality, since it equals the anomaly in the two-point function of the R-symmetry current. In our normalisation this expression is given by \cite{Witten:1997yu}
\begin{equation}
\label{centralcharge(0,4)}
c_R=3{\rm Tr}[\gamma^3 Q_R^2],
\end{equation}
with $Q_R$ the R-charge under U$(1)_R\subset$ SU$(2)_R$, $\gamma^3$ the chirality matrix in 2d, and the trace taken over all fermions in the theory. In order to compute the R-symmetry anomaly we recall the following well-known facts:
\begin{itemize}
\item $(0,4)$ vector multiplets contain two left-moving fermions with R-charge 1.
\item $(0,4)$ twisted hypermultiplets contain two right-moving fermions with R-charge 0.
\item $(0,4)$ hypermultiplets contain two right-moving fermions with R-charge -1.
\item $(0,2)$ Fermi multiplets contain one left-moving fermion with R-charge 0.
\item $(4,4)$ vector multiplets consist of a $(0,4)$ vector multiplet and a $(0,4)$ adjoint twisted hypermultiplet. Therefore they contribute $2$ to the R-symmetry anomaly.
\item $(4,4)$ hypermultiplets consist of a $(0,4)$ hypermultiplet plus a $(0,4)$ Fermi multiplet. Therefore they contribute $2$ to the R-symmetry anomaly.
\end{itemize}
This gives the well-known expression for the central charge \cite{Witten:1997yu}
\begin{equation} \label{field-theory-c}
c_R=6(n_{hyp}-n_{vec}),
\end{equation}
where $n_{hyp}$ stands for the number of $(0,4)$ (untwisted) hypermultiplets and $n_{vec}$ for the number of $(0,4)$ vector multiplets. In order to compute these numbers we first need to choose the precise way in which we would like to close the $y$ interval. Our choice is to take $\alpha=\alpha'=\alpha''=0$ at both ends of the interval, and to glue the quiver to itself at a given value $y=P+1$, in a continuous way. The resulting quivers are the ones depicted in Figure \ref{2dquiverdefectsym}, where the notation is the same used in Figure \ref{2dquiverdefect}.
\begin{figure}
\centering
\includegraphics[scale=0.75]{Dibujo3}
\caption{2d quivers completed in a symmetric way.}
\label{2dquiverdefectsym}
\end{figure}
This is of course just a possible way to globally define the $y$-direction, and one could consider many others. For the quivers depicted in Figure \ref{2dquiverdefectsym} we have
\begin{equation}
n_{hyp}=2 \sum_{k=1}^{P} Q_{D2}^{(k)}Q_{D4}^{(k)} + Q_{D2}^{(P+1)}Q_{D4}^{(P+1)}+2\sum_{k=1}^{P} Q_{D2}^{(k)}Q_{D2}^{(k+1)},
\end{equation}
and
\begin{equation}
n_{vec}=2\sum_{k=1}^P (Q_{D2}^{(k)})^2+(Q_{D2}^{(P+1)})^2,
\end{equation}
which lead to
\begin{equation}\label{cRcomplete}
c_R=6 \Bigl[\Bigl(2\sum_{k=1}^{P} Q_{D2}^{(k)}Q_{D4}^{(k)}+Q_{D2}^{(P+1)}Q_{D4}^{(P+1)}\Bigr)+ \Bigl(2\sum_{k=1}^{P} Q_{D2}^{(k)}(Q_{D2}^{(k+1)}-Q_{D2}^{(k)})-(Q_{D2}^{(P+1)})^2\Bigr)\Bigr].
\end{equation}
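The rewriting of \eqref{field-theory-c} into \eqref{cRcomplete} is a simple regrouping of the multiplet counts above. A symbolic check of this identity (an added verification, not in the paper), for an arbitrary illustrative quiver length:

```python
import sympy as sp

P = 4  # length of the quiver tail; the identity holds for any value
Q2 = sp.symbols(f'Q2_1:{P + 2}')   # Q_D2^(1), ..., Q_D2^(P+1)
Q4 = sp.symbols(f'Q4_1:{P + 2}')   # Q_D4^(1), ..., Q_D4^(P+1)

# Multiplet counts for the symmetrically completed quivers
n_hyp = (2 * sum(Q2[k] * Q4[k] for k in range(P)) + Q2[P] * Q4[P]
         + 2 * sum(Q2[k] * Q2[k + 1] for k in range(P)))
n_vec = 2 * sum(Q2[k]**2 for k in range(P)) + Q2[P]**2

# The regrouped expression for c_R quoted in the text
cR = 6 * ((2 * sum(Q2[k] * Q4[k] for k in range(P)) + Q2[P] * Q4[P])
          + (2 * sum(Q2[k] * (Q2[k + 1] - Q2[k]) for k in range(P)) - Q2[P]**2))

assert sp.expand(6 * (n_hyp - n_vec) - cR) == 0
```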
Given that the D2-brane charge is infinite we need a prescription to regularise it. We will evaluate all charges at a given value of $x$ and finally sum over all of them. Doing this one can check that the contribution of the second big bracket to \eqref{cRcomplete} is subleading in $x$ compared to that of the first big bracket. We then find an expression that diverges in $x$ in exactly the same way as the holographic central charge computed in \eqref{holographic-c-defect}, and agrees with it to leading order in $P$ (that is, for long quivers). Explicitly, the leading order in $P$ of \eqref{cRcomplete} gives
\begin{equation}\label{cRcompare}
c_R=\frac{2^7}{3^7 \pi^4}\int_{I_x} dx\, x^3 \sum_{k=1}^P \mu_k\gamma_k.
\end{equation}
In order to show the matching with the holographic central charge we should recall that the holographic central charge is to be identified with \cite{Kraus:2005zm}
\begin{equation}
c_{hol}=\frac{c_L+c_R}{2}.
\end{equation}
Therefore we need to compute first $c_L$. In order to do this we can use that
\begin{equation}
c_L-c_R={\rm Tr}\gamma^3,
\end{equation}
which leads to \cite{Couzens:2021veb}
\begin{equation}\label{cL}
c_L-c_R=2 n_H^{(0,4)}-n_F^{(0,2)},
\end{equation}
where $n_H^{(0,4)}$ refers to the number of isolated $(0,4)$ hypermultiplets and $n_F^{(0,2)}$ to the number of isolated $(0,2)$ multiplets. It can be checked that $c_L=c_R$ identically for our quivers due to the condition of anomaly cancellation. Therefore $c_{hol}=c_R$ and both quantities can readily be compared. Indeed, we
find, to leading order in $P$,
\begin{equation}
c_{hol}=\frac{2^6}{3^7 \pi^4}\int dx\, x^3 \Bigl[2\sum_{k=0}^P \int_k^{k+1}dy (-\alpha\alpha'')\Bigr]=
\frac{2^7}{3^7 \pi^4}\int_{I_x} dx\, x^3 \sum_{k=1}^P \mu_k\gamma_k+\dots
\end{equation}
which exactly agrees with \eqref{cRcompare}, to leading order.
As expected, these quantities diverge in $x$ due to its non-compact character. This shows that the 2d quiver CFTs associated to the \eqref{AdS3sliced}-\eqref{AdS3slicedRR} solutions are ill-defined per se, and only find a meaning in the UV when the deconstructed extra dimensions where the 6d CFTs live emerge. Still, our analysis in this section shows that, for $x$ suitably large, we can nicely embed the D2 and D4 defect branes within the 6d quiver theories associated to the D6-NS5-D8 {\it mother} branes to produce non-anomalous, albeit infinitely charged, 2d quivers.
Note that the quivers discussed in this subsection differ from the quivers constructed in \cite{Faedo:2020nol} for D2-D4-NS5-D6 intersections. The main difference is that in that reference it was wrongly stated that the D2-D6 branes were accounting for bifundamental hypermultiplets and the D2-D4 for bifundamental twisted hypermultiplets, while the careful quantisation of open strings carried out in this section shows that these hypermultiplets are in fact interchanged. This explains why in reference \cite{Faedo:2020nol} it was not possible to match the behaviour in $x$ of the field theory and holographic central charges.
\section{$\mathcal{N}=(4,4)$ AdS$_3$ from D2-D4-NS5 branes}\label{D2D4NS5}
In this section we consider the particular limiting case in which the coordinates $(z_1,z_2,z_3)$ of the solutions given by \eqref{eq:massiveclassmetric} span a 3-torus $\mathbb{T}^3$ that the warp factors are independent of. We show that the brane intersection reduces to the $(4,4)$ D2-D4-NS5 Hanany-Witten brane set-ups discussed long ago in \cite{Brodie:1997wn,Alishahiha:1997cm}. These brane set-ups are the two dimensional realisations of the D3-NS5-D5 brane intersections constructed by Hanany and Witten \cite{Hanany:1996ie} and later extended to other dimensions. These D$p$-NS5-D$(p+2)$ brane intersections realise $p$ dimensional field theories with 8 supercharges that flow to CFTs in the IR (for $p<4$), in the UV (for $p>4$), or are conformal per se (for $p=4$). AdS$_{p+1}$ geometries with 16 supercharges dual to these CFTs have been constructed in the literature for $p=6,5,4,3$ (see \cite{Apruzzi:2013yva,Apruzzi:2015wna,Gaiotto:2014lca,Cremonesi:2015bld,Brandhuber:1999np,Bergman:2012kr,DHoker:2016ujz,Lozano:2018pcp,Gaiotto:2009gz,ReidEdwards:2010qs,Aharony:2012tz,Assel:2011xz})\footnote{Also partially for $p=1$ (see \cite{Dibitetto:2019nyz}).}, but the
$p=2$ case remained an open problem\footnote{In \cite{Chen:2006ps} a probe brane analysis revealed an
AdS$_3\times $S$^3$ geometry as gravity dual of a $(4,4)$ CFT.}.
In this section we fill this gap, and construct explicit AdS$_3\times $S$^3$ duals to $(4,4)$ D2-D4-NS5 brane set-ups. In subsection \ref{solutions} we state the main properties of the solutions, consisting of AdS$_3\times $S$^3\times \mathbb{T}^3$ geometries foliated over an interval. In subsection \ref{fieldtheory} we construct the 2d quivers that describe the field theory living in the brane set-ups, and show the agreement between their central charge and the one computed from the supergravity solutions. In subsection \ref{mirror} we discuss the M-theory realisation of these solutions. This allows us to relate them to the AdS$_3\times $S$^2\times \mathbb{T}^4\times I$ solutions of massless Type IIA supergravity constructed in \cite{Lozano:2019emq}. The common M-theory origin of both classes of solutions implies that they flow to the same 2d dual CFT in the IR, that we interpret as a manifestation of mirror symmetry, as discussed in \cite{Brodie:1997wn,Alishahiha:1997cm}. Finally
in subsection \ref{typeIIB} we construct new $\mathcal{N}=(0,4)$ solutions of Type IIB supergravity related by T-dualities to the previous ones. One such class is holographically dual to D3-brane box constructions \cite{Hanany:2018hlz} with small $\mathcal{N}=(0,4)$ supersymmetry.
\subsection{AdS$_3\times $S$^3\times \mathbb{T}^3$ solutions with $(4,4)$ supersymmetries}\label{solutions}
Imposing the condition that the coordinates $(z_1,z_2,z_3)$ of the solutions given by \eqref{eq:massiveclassmetric} span a 3-torus $\mathbb{T}^3$ that the warp factors are independent of one finds the subclass of solutions
\begin{align}\label{eq:massiveclassmetricYolanda}
ds^2&= \frac{q}{\sqrt{h}}\bigg[ds^2(\text{AdS}_3)+ds^2(\text{S}^3)\bigg]+ \frac{g}{\sqrt{h}}\, d\rho^2+ g \sqrt{h}\,ds^2(\mathbb{T}^3),\\[2mm]
F_0&= \frac{\partial_{\rho}h}{g}\,,~~~~e^{-\Phi}=\frac{h^{\frac{3}{4}}}{\sqrt{g}},\\
F_4&= 2 q \, \text{vol}(\text{AdS}_3)\wedge d\rho+2 q \, \text{vol}(\text{S}^3)\wedge d\rho,\label{otroF4}\\[2mm]
H_3&=\partial_{\rho}(h g)\, \text{vol}(\mathbb{T}^3), \label{otroH3}\\[2mm]
F_6&= 2q\, g h\, \text{vol}(\mathbb{T}^3)\wedge (\text{vol}(\text{S}^3) + \text{vol}(\text{AdS}_3)), \label{F6Yolanda}
\end{align}
where $g,h$ are functions of $\rho$ satisfying the Bianchi identities
\begin{equation}\label{bianchisT3}
\partial_\rho\Bigl(\frac{\partial_\rho h}{g}\Bigr)=0, \quad \partial_\rho^2(gh)=0,\quad F_0\partial_\rho(gh)=0.
\end{equation}
The smearing of the functions $g$ and $h$ over the $\mathbb{T}^3$ implies that the underlying brane intersection simplifies. In this section we will focus on the massless limit $F_0=0$, to later analyse the non-vanishing Romans' mass case in section \ref{TypeI'}. When $F_0=0$ we have
\begin{equation}
h= h_0=\text{constant},
\end{equation}
and the Bianchi identities imply that
\begin{equation}
g''=0.
\end{equation}
These assumptions imply the exclusion of D8 and D6 branes from the set-up of Table \ref{D2D4D6D8NS5_1}. Moreover, there is a supersymmetry enhancement to $\ma N=(4,4)$, as discussed in subsection \ref{supersymmetry}. We thus obtain a class of $\ma N=(4,4)$ AdS$_3 \times \text{S}^3 \times \mathbb{T}^3$ backgrounds fibered over an interval whose underlying brane intersection is the one depicted in Table \ref{D2-D4-NS5}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}
\hline
&$x^0$ & $x^1$ & $z_1$ & $z_2$ & $z_3$ & $\rho$ & $\zeta$ & $\theta^1$ & $\theta^2$ & $\theta^3$ \\ \hline
D2 & x & x & & & & x & & & & \\ \hline
D4 & x & x & x &x & x & & & & & \\ \hline
NS5 & x & x & & & & & x & x & x & x \\ \hline
\end{tabular}
\end{center}
\caption{$\frac14$-BPS brane intersection underlying the geometry \eqref{eq:massiveclassmetricYolanda} with $F_0=0$. $(x^0,x^1)$ are the directions where the 2d dual CFT lives, $(z_1, z_2, z_3)$ span the $\mathbb{T}^3$ on which the D4-branes are wrapped, $\zeta$ and $\theta^i$ parameterise respectively the radial coordinate of AdS$_3$ and the S$^3$, and $\rho$ is the field theory direction.
}
\label{D2-D4-NS5}
\end{table}
The quantised charges of the D2-D4-NS5 branes are computed from the $F_4$, $H_3$ and $F_6$ magnetic fluxes, given by \eqref{otroF4}-\eqref{F6Yolanda}. In order to define the Page fluxes one notes however that it is not possible to define a $B_2$ globally, and that the flux that gives rise to quantised D2-brane charges is rather
\begin{equation} \label{newf6}
\hat{f}_6=f_6-C_3\wedge H_3 =2 \,q\, h_0 (g-\rho \,g')\,\text{vol}(\mathbb{T}^3)\wedge \text{vol}(\text{S}^3),
\end{equation}
where $\hat{f}_p$ stands for the magnetic component of $F_p$. We will use this definition of the 6-form RR magnetic flux to compute the charge associated to the D2 branes. We will take $h_0=1$ without loss of generality\footnote{This constant can be absorbed through a rescaling of $\rho$ and the radius of AdS$_3$.}.
The definition given by \eqref{newf6} implies that the Page charge associated to D2-branes is sensitive to gauge transformations of the $C_3$ RR potential. In order to carefully account for these we will take as representative of $C_3$ the one satisfying\footnote{We choose units with $\alpha'=g_s=1$.}
\begin{equation}
\frac{1}{(2\pi)^3}\int_{\text{S}^3}C_3 \in [0,1].
\end{equation}
This is inspired by the more familiar condition that the NS-NS 2-form potential lie in the fundamental region. In order to accomplish this we need to take
\begin{equation}
C_3=-2\,q\,\Bigl(\rho-\frac{2\pi}{q}k\Bigr)\,\text{vol}(\text{S}^3),
\end{equation}
for $\rho\in [\frac{2\pi}{q} k, \frac{2\pi}{q} (k+1)]$.
The D4-brane charge in each interval is obtained by computing
\begin{equation}
Q_{D4}^{(k)}=\frac{1}{(2\pi)^3}\int_{I_{\rho},\text{S}^3} \hat{F}_4,
\end{equation}
which gives $Q_{D4}^{(k)}=1$ for $I_\rho=[\frac{2\pi}{q}\, k,\, \frac{2\pi}{q}\, (k+1)]$. Therefore there is a single D4-brane in each such interval. This clarifies the role played by the large gauge transformations performed between intervals: a D4-brane is localised on the boundaries of the intervals, generating a strong coupling realisation of the Hanany-Witten brane creation effect\footnote{In the usual Hanany-Witten effect NS5-branes are created. Uplifting this phenomenon to M-theory and reducing along a worldvolume direction of the M5-branes one finds the same effect happening for D4-branes.}. Taking the whole interval spanned by $\rho$ to be $[0,\frac{2\pi}{q}(P+1)]$, with $P$ as defined below, we then find a total number of $(P+1)$ D4-branes.
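As a cross-check of this quantisation condition, the integral can be evaluated directly. The following sympy sketch only assumes the unit-radius volume $\text{Vol}(\text{S}^3)=2\pi^2$ and our $\alpha'=g_s=1$ units:

```python
import sympy as sp

q, rho = sp.symbols('q rho', positive=True)
k = sp.symbols('k', integer=True, nonnegative=True)

# Magnetic 4-form flux along rho times the volume of a unit-radius S^3:
# \hat F_4 ~ 2 q drho ^ vol(S^3), with Vol(S^3) = 2 pi^2.
vol_S3 = 2 * sp.pi**2
flux_density = 2 * q * vol_S3

# Integrate over one interval I_rho = [2 pi k / q, 2 pi (k+1) / q] and
# normalise by (2 pi)^3 as in the Page-charge definition.
Q_D4 = sp.integrate(flux_density, (rho, 2*sp.pi*k/q, 2*sp.pi*(k+1)/q)) / (2*sp.pi)**3
print(sp.simplify(Q_D4))  # -> 1
```

The interval length $2\pi/q$ cancels the explicit factor of $q$ in the flux, so the result is one unit of D4-brane charge per interval, independently of $k$ and $q$.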
We proceed by solving the Bianchi identity $g''=0$. The function $g$ must be continuous, but it can have discontinuities in its first derivative at the locations of the D4-branes, at $\rho=\frac{2\pi}{q} k$. The most general solution is then
\begin{equation}
g_k=\alpha_k+\frac{\beta_k}{2\pi}\Bigl(\rho-\frac{2\pi}{q}k\Bigr), \qquad {\rm for} \qquad \rho\in [\frac{2\pi}{q}\,k,\, \frac{2\pi}{q}\, (k+1)].
\end{equation}
Imposing that the space begins and ends at $\rho=0, \frac{2\pi}{q} (P+1)$, where $g$ vanishes, we find
\begin{equation} \label{profileg}
g(\rho) = \left\{ \begin{array}{ccrcl}
\frac{\beta_0 }{2\pi}
\rho\, , & 0\leq \rho\leq \frac{2\pi}{q} \\
\alpha_k + \frac{\beta_k}{2\pi}(\rho-\frac{2\pi}{q} k )\, , &~~ \frac{2\pi}{q} k\leq \rho \leq \frac{2\pi}{q}(k+1),\;\;\;\; k=1,\dots,P-1\\
\alpha_P+ \frac{\beta_P}{2\pi}(\rho-\frac{2\pi}{q} P)\, , & \frac{2\pi}{q} P\leq \rho \leq \frac{2\pi}{q}(P+1).
\end{array}
\right.
\end{equation}
The condition $g\,(\frac{2\pi}{q}(P+1))=0$ implies $\beta_P=-q\,\alpha_P$, while continuity across the different intervals implies the conditions
\begin{equation}\label{eq:refpoint}
\alpha_k=\frac{1}{q}\sum_{j=0}^{k-1}\beta_j, \qquad k=1,\dots, P.
\end{equation}
The behaviour close to the zeros of $g$, which bound the solution, is that of an ONS5 plane (the S-dual of an O5 plane) smeared over the $\mathbb{T}^3$. Of course, for an O-plane in string theory such a smearing is not really physically allowed, as the plane should lie at the fixed point of the orientifold involution. Our solutions here are in supergravity, but as we approach the ONS5 the curvature becomes large and that description should be supplemented by $\alpha'$ corrections. One can hope that such higher order effects conspire to localise the ONS5 behaviour in string theory; indeed, \cite{Baines:2020dmu} argues that smeared O-planes can be a good approximation to localised ones in some instances. However, if one takes the conservative view and insists on fully localised O-planes in supergravity, all is not lost: the compatibility of a class of solutions with smeared O-planes often suggests that it is also compatible with localised planes. Such solutions are harder to construct, but one can view the solution here as a positive first step in that direction. As this subtlety involves the boundaries of the space, we expect such generalisations to exhibit qualitatively similar physical behaviour.
The quantised charges in the different $[\frac{2\pi}{q} k, \frac{2\pi}{q} (k+1)]$ intervals are thus given by
\begin{eqnarray}
&&Q_{D2}^{(k)}=\frac{1}{(2\pi)^5}\int_{\mathbb{T}^3, \text{S}^3}\hat{F}_6=q \Bigl(g-g' (\rho-\frac{2\pi}{q}k)\Bigr)=q\,\alpha_k=\sum_{j=0}^{k-1} \beta_j, \label{chargeD2}\\
&&Q_{NS5}^{(k)}=\frac{1}{(2\pi)^2}\int_{\mathbb{T}^3} H_3=\beta_k,\label{chargeNS5}\\
&&Q_{D4}^{(k)}=\frac{1}{(2\pi)^3}\int_{I_{\rho},\text{S}^3}\hat{F}_4=1\label{chargeD4}.
\end{eqnarray}
This implies that the constants $\beta_k$ must be integer numbers, as they are directly related to the number of branes in the brane set-up. This confirms that the suggested brane configuration is the one given in Table \ref{D2-D4-NS5}. Substituting our expression for $g$ into the Bianchi identities we find
\begin{eqnarray}
&&dH_3= \frac{h_0}{2\pi} \sum_{k=1}^P (\beta_k-\beta_{k-1}) \delta\left(\rho-\frac{2\pi}{q}k\right) \,d\rho \wedge\text{vol}(\mathbb{T}^3)\,,\\
&&d\hat{f}_6= -\frac{q\,h_0}{\pi} \sum_{k=1}^P (\beta_k-\beta_{k-1})\left(\rho-\frac{2\pi}{q}k\right) \delta\left(\rho-\frac{2\pi}{q}k\right) \,\,d\rho \wedge\text{vol}(\mathbb{T}^3)\wedge \text{vol}(\text{S}^3)=0,\nonumber
\end{eqnarray}
where $\hat{f}_6$ denotes the magnetic component of the 6-form Page flux.
They are thus satisfied up to source terms, which indicate the presence of $(\beta_{k-1}-\beta_k)$ NS5-branes at $\rho=\frac{2\pi}{q}k$, where the slope of $g$ changes. These branes are wrapped on the $\text{AdS}_3\times $S$^3$ subspace of the geometry and smeared over the $\mathbb{T}^3$.
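These charge assignments can be verified with a short numerical sketch that builds the piecewise-linear profile \eqref{profileg} and evaluates the charges interval by interval. The values of $q$ and $\beta_k$ below are an arbitrary illustrative choice:

```python
from fractions import Fraction

# Illustrative parameters (a hypothetical choice): q = 2 with slopes beta_0, beta_1.
q = 2
betas = [2, 4]                          # beta_0, ..., beta_{P-1}
P = len(betas)

# Continuity across the intervals fixes alpha_k = (1/q) sum_{j<k} beta_j,
# while g(2*pi*(P+1)/q) = 0 fixes beta_P = -q alpha_P.
alphas = [Fraction(sum(betas[:k]), q) for k in range(P + 1)]
betas.append(int(-q * alphas[P]))

# Work in the rescaled variable u = q*rho/(2*pi), so that interval k is [k, k+1]
# and g(u) = alpha_k + (beta_k/q)*(u - k) there.
def g(u):
    k = min(int(u), P)
    return alphas[k] + Fraction(betas[k], q) * (u - k)

# g vanishes at both ends of the space and is continuous at each boundary:
assert g(Fraction(0)) == 0 and g(Fraction(P + 1)) == 0
for k in range(1, P + 1):
    assert alphas[k - 1] + Fraction(betas[k - 1], q) == alphas[k]

# The Page charge Q_D2 = q*(g - g'*(rho - 2*pi*k/q)) is constant on each interval
# and equals the cumulative NS5 charge sum_{j<k} beta_j, while Q_NS5^{(k)} = beta_k:
D2_charges = [int(q * a) for a in alphas]
assert D2_charges == [sum(betas[:k]) for k in range(P + 1)]
print(D2_charges)  # -> [0, 2, 6]
```

The check confirms that the D2 charge in each interval is the accumulated NS5 charge, as in \eqref{chargeD2}-\eqref{chargeNS5}.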
Finally, the central charge computed with the Brown-Henneaux formula gives, for this class of solutions\footnote{Note that this expression is also valid when $F_0\neq 0$.}
\begin{equation}
\label{holographic-c}
c_{hol}=\frac{3}{\pi}\,q^2 \int d\rho\, h\,g.
\end{equation}
This will be later compared to the corresponding field theory expression.
\subsection{2d dual CFTs}\label{fieldtheory}
In order to extract the quiver QFTs associated to the previous solutions we need to account for the ordering of the NS5-branes along the
$\rho$ direction, together with the net number of D2-branes ending on each of them and the D4-branes orthogonal to both types of branes. The massless modes that give rise to the quiver QFT then come from
strings stretching between the D-branes in the same interval between NS5-branes, or between adjacent intervals. There are three types of massless modes to consider:
\begin{itemize}
\item D2-D2 strings: There are two cases to consider. Open strings with both end points lying on the same stack of D2-branes give rise to $\mathcal{N}=(4,4)$ vector multiplets, while those with end points on two different stacks separated by an NS5-brane give rise to $\mathcal{N}=(4,\,4)$ hypermultiplets in the bifundamental representation.
\item D4-D4 strings: For a large $\mathbb{T}^3$, on which the D4-branes are wrapped, these strings do not contribute massless modes. Given that D4-D4 strings are T-dual to D2-D2 strings, they would contribute a $(4,4)$ vector multiplet for a $\mathbb{T}^3$ of stringy size.
\item D2-D4 strings: Strings with one end on D2-branes and the other end on orthogonal D4-branes in the same interval between NS5-branes contribute with fundamental $(4,4)$ hypermultiplets, associated to the motion of the strings along the $(z_1,z_2,z_3)$ directions plus the $A_5$ component of the gauge field.
\end{itemize}
The relevant data to construct the quivers associated to these massless modes are the linking numbers of the D4-branes and the NS5-branes. To define these we note that the brane set-up depicted in Table \ref{D2-D4-NS5} is T-dual to the Type IIB construction studied in \cite{Hanany:1996ie}, and use the definitions
\begin{eqnarray}
&&l_i=n_i+L_i^{NS5}, \qquad \text{for the D4-branes} \\
&&\hat{l}_j=-\hat{n}_j+R_j^{D4}, \qquad \text{for the NS5-branes,}
\end{eqnarray}
where $n_i$ is the number of D2-branes ending on the $i$th D4-brane from the right minus the number of D2-branes ending on it from the left, $\hat{n}_j$ is the same quantity for the $j$th NS5-brane, $L_i^{NS5}$ is the number of NS5-branes lying on the left of the $i$th D4-brane, and $R_j^{D4}$ is the number of D4-branes lying on the right of the $j$th NS5-brane\footnote{Our conventions are related through T and S dualities to the conventions in \cite{Gaiotto:2008ak}.}. Following \cite{Gaiotto:2008ak} it is then possible to read the data of the QFT living in the brane set-up from the linking numbers, namely, the gauge group $G=U(N_1)\times \dots \times U(N_k)$, the bifundamental fields transforming in the $(N_i,\bar{N}_{i+1})$ representations, and the fundamental matter, transforming under $U(M_i)$ for each group.
The way to proceed is as follows. The linking numbers of both the D4 and NS5 branes define an integer number $N$, as
$N=\sum_{i=1}^p l_i=\sum_{j=1}^{\hat{p}}\hat{l}_j$, where $p$ and $\hat{p}$ are the numbers of D4-branes and NS5-branes, respectively.
This is the number of D2-branes that end on a collection of D4-branes on the left and on a collection of NS5-branes on the right. Any brane configuration can be pictured in this way after suitable Hanany-Witten moves. Now, in order to read off the quiver, we consider the partition $N=\sum_{j=1}^{\hat{p}}\hat{l}_j$, where the NS5-branes are ordered such that $\hat{l}_1\ge \hat{l}_2\ge \dots \ge \hat{l}_{\hat{p}}$, together with a second partition $N=\sum_{s=1}^r M_s q_s$, defined from a list of positive integers satisfying $q_1\ge q_2\ge \dots \ge q_r$, with the numbers $M_s$ indicating how many times the different integers $q_s$ appear in the partition. The set of integers $q_s$ is chosen such that the number of terms in the decomposition that are greater than or equal to a given integer $j$, which we denote by $m_j$, satisfies
\begin{equation} \label{keycondition}
\sum_{j=1}^i m_j \ge \sum_{j=1}^i \hat{l}_j, \quad \forall i=1,\dots, \hat{p}.
\end{equation}
From these two partitions the
ranks of the different $U(N_i)$ gauge groups of the quiver are then computed as
\begin{equation}
N_i=\sum_{j=1}^i (m_j-\hat{l}_j).
\end{equation}
In turn, the numbers $M_s$ appearing in the $N=\sum_{s=1}^r M_s q_s$ decomposition give the ranks of the fundamental matter groups that couple to each of the gauge groups. A detailed account of this construction can be found in \cite{Assel:2011xz}. It will become clearer after we illustrate it with the particular brane set-up that is the subject of our analysis.
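The rank assignment described above can be condensed into a few lines of code. The following Python sketch (the function name is ours, for illustration) computes the ranks $N_i=\sum_{j\le i}(m_j-\hat{l}_j)$ from the two partitions, for a toy configuration with $P=2$ and $(\beta_0,\beta_1,\beta_P)=(2,1,3)$ in units of $q$:

```python
def quiver_ranks(l_hat, ms):
    """Gauge-group ranks N_i = sum_{j<=i} (m_j - l_hat_j), dropping trailing zeros.
    l_hat: NS5 linking numbers (non-increasing); ms: the m_j of the second partition."""
    ms = ms + [0] * (len(l_hat) - len(ms))        # m_j = 0 beyond the largest part
    # the partition condition: partial sums of m_j dominate those of l_hat_j
    assert all(sum(ms[:i]) >= sum(l_hat[:i]) for i in range(1, len(l_hat) + 1))
    ranks, N = [], 0
    for m, lh in zip(ms, l_hat):
        N += m - lh
        ranks.append(N)
    while ranks and ranks[-1] == 0:               # the quiver ends where N_i drops to 0
        ranks.pop()
    return ranks

# Toy example: beta_0 = 2 entries of l_hat equal to P = 2, beta_1 = 1 entry equal
# to 1, and beta_P = 3 final entries equal to 1; the m_j follow from the partition
# N = 8 = 2 + 3 + 3, so m_1 = 3, m_2 = 3, m_3 = 2.
l_hat = [2, 2, 1, 1, 1, 1]
ms = [3, 3, 2]
print(quiver_ranks(l_hat, ms))  # -> [1, 2, 3, 2, 1]
```

The resulting ranks ascend in unit steps to $\alpha_P=3$ and descend back to 1, and the number of gauge nodes equals the total number of NS5-branes minus one, as expected for a linear quiver.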
Let us now apply these rules to the construction of the field theory associated to our solutions, defined by $g(\rho)$ as in \eqref{profileg}. The brane set-up is read from the numbers of branes at each $\rho\in [\frac{2\pi}{q} k, \frac{2\pi}{q} (k+1)]$ interval, determined by equations
\eqref{chargeD2}-\eqref{chargeD4}. Moreover, as discussed below equation \eqref{profileg}, $\beta_P$ anti-NS5-branes must end the space at $\rho=\frac{2\pi}{q}(P+1)$. The resulting brane set-up is then the one depicted in Figure \ref{brane-setup-T3}.
\begin{figure}
\centering
\includegraphics[scale=0.75]{brane_setupT31}
\caption{Brane set-up associated to the quantised charges \eqref{chargeD2}-\eqref{chargeD4}, in units of $q$.}
\label{brane-setup-T3}
\end{figure}
From this brane configuration we can read the linking numbers for the D4-branes
\begin{equation}
l_i=\sum_{r=0}^{i-2}\beta_{r}+2\beta_{i-1},\qquad i=1,\dots, P
\end{equation}
and for the NS5-branes
\begin{eqnarray}
&&\hat{l}_1=\hat{l}_2=\dots =\hat{l}_{\beta_0}=P,\nonumber\\
&&\hat{l}_{\beta_0+1}=\hat{l}_{\beta_0+2}=\dots =\hat{l}_{\beta_0+\beta_1}=P-1,\nonumber\\
&& \hspace{4cm}\vdots\nonumber\\
&&\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-3}+1}=\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-3}+2}=\dots =
\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-2}}=2,\nonumber\\
&&\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-2}+1}=\dots =\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-1}}=1,\nonumber\\
&&\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-1}+1}=\dots =\hat{l}_{\beta_0+\beta_1+\dots +\beta_{P-1}+\beta_P}=1.
\end{eqnarray}
From the linking numbers we construct the total number of D2-branes ending on D4-branes on the left and NS5-branes on the right. This is given by
\begin{equation}
N=\sum_{i=1}^P l_i=\sum_{j=1}^{\beta_0+\dots +\beta_P}\hat{l}_j=\sum_{k=0}^{P-1} (P-k+1)\beta_{k}.
\end{equation}
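The identity $N=\sum_i l_i=\sum_j\hat{l}_j=\sum_{k=0}^{P-1}(P-k+1)\beta_k$ can be stress-tested numerically over random brane data. In the sketch below all quantities are in units of $q$, so that the final stack consists of $\beta_P=\sum_{k=0}^{P-1}\beta_k$ NS5-branes with unit linking number:

```python
import random

random.seed(0)
for _ in range(100):
    P = random.randint(1, 8)
    betas = [random.randint(1, 5) for _ in range(P)]          # beta_0, ..., beta_{P-1}
    beta_P = sum(betas)                                       # final NS5 stack, units of q
    # D4 linking numbers: l_i = sum_{r=0}^{i-2} beta_r + 2*beta_{i-1}
    l = [sum(betas[:i - 1]) + 2 * betas[i - 1] for i in range(1, P + 1)]
    # NS5 linking numbers: beta_k entries equal to P-k, plus beta_P entries equal to 1
    l_hat = [P - k for k in range(P) for _ in range(betas[k])] + [1] * beta_P
    N = sum((P - k + 1) * betas[k] for k in range(P))
    assert sum(l) == sum(l_hat) == N
print("identity verified")
```

Both sides agree for every random choice, confirming the counting of D2-branes ending on the two collections of branes.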
Now, from $N$ we define the two partitions that will allow us to read off the quiver CFT. The NS5-branes in our brane set-up are ordered such that $\hat{l}_1\ge \hat{l}_2\ge \dots \ge \hat{l}_{\beta_0+\dots +\beta_P}$. These linking numbers then define one of the two partitions, $N=\sum_{j=1}^{\beta_0+\dots +\beta_P}\hat{l}_j$.
In turn, for the D4-branes we take
\begin{equation} \label{partition}
N=\underbrace{\beta_0}+\underbrace{\beta_0+\beta_1}+\underbrace{\beta_0+\beta_1+\beta_2}+\dots +\underbrace{\beta_0+\beta_1+\dots +\beta_{P-2}}+2\underbrace{(\beta_0+\beta_1+\dots +\beta_{P-1})}
\end{equation}
from where
\begin{eqnarray}
&&m_1=m_2=\dots =m_{\beta_0}=P+1, \nonumber\\
&&m_{\beta_0+1}=\dots=m_{\beta_0+\beta_1}=P, \nonumber\\
&& \hspace{4cm}\vdots\nonumber\\
&&m_{\beta_0+\beta_1+\dots +\beta_{P-3}+1}=\dots =m_{\beta_0+\beta_1+\dots +\beta_{P-2}}=3, \nonumber\\
&&m_{\beta_0+\beta_1+\dots +\beta_{P-2}+1}=\dots =m_{\beta_0+\beta_1+\dots +\beta_{P-1}}=2.
\end{eqnarray}
These numbers satisfy the condition \eqref{keycondition} $\forall i=1,\dots, (\beta_0+\dots +\beta_P)$.
We then find for the ranks of the gauge groups
\begin{eqnarray}
&&N_1=m_1-\hat{l}_1=P+1-P=1, \quad N_2=N_1+m_2-\hat{l}_2=2, \quad \dots \quad N_{\beta_0}=\beta_0, \nonumber\\
&&N_{\beta_0+1}=\beta_0+1, \quad \dots \quad
N_{\beta_0+\beta_1+\dots + \beta_{P-1}}= \beta_0+\beta_1+\dots + \beta_{P-1},
\end{eqnarray}
to then start decreasing
\begin{equation}
N_{\beta_0+\beta_1+\dots +\beta_{P-1}+1}= \beta_0+\beta_1+\dots + \beta_{P-1}-1, \quad \dots \quad
N_{\beta_0+\beta_1+\dots +\beta_{P-1}+\beta_P-1}=1.
\end{equation}
That is, the ranks of the gauge groups increase in units of 1 until the value $\beta_0+\beta_1+\dots +\beta_{P-1}$ is reached, and then decrease, again in units of one, until the gauge group of rank 1 is reached, corresponding to the D2-branes stretched between the last pair of NS5-branes.
Finally, from the partition \eqref{partition} we have that
\begin{equation}\label{flavourgroups}
M_{\beta_0}=M_{\beta_0+\beta_1}=\dots = M_{\beta_0+\beta_1+\dots +\beta_{P-2}}=1,\quad M_{\beta_0+\beta_1+\dots +\beta_{P-1}}=2.
\end{equation}
This implies that the gauge groups with ranks $\beta_0=q \,\alpha_1$, $\beta_0+\beta_1=q\, \alpha_2$, up to $\beta_0+\dots + \beta_{P-2}=q\,\alpha_{P-1}$, have U(1) flavour groups, while the gauge group with rank $\beta_0+\beta_1+\dots +\beta_{P-1}=q\,\alpha_P$ has flavour group U(2). The remaining gauge groups have no flavour groups attached. The resulting quiver is depicted in Figure \ref{quiverT3_1}.
\begin{figure}
\centering
\includegraphics[scale=0.65]{quiverT3_1_q}
\caption{2d quiver associated to the brane set-up depicted in Figure \ref{brane-setup-T3}. Circles denote (4,4) vector multiplets and black lines (4,4) bifundamental hypermultiplets. The gauge groups with ranks $\alpha_k$, with $k=1,\dots, P-1$, to the left of the gauge group with rank $\alpha_P$ have U(1) flavour symmetries. The gauge group with rank $\alpha_P$ has U(2) flavour symmetry. The remaining gauge groups have no flavours attached.}
\label{quiverT3_1}
\end{figure}
One can check that the number of gauge nodes equals the total number of NS5-branes minus 1, as it should be. In this quiver circles denote $(4,4)$ vector multiplets and black lines $(4,4)$ bifundamental hypermultiplets. Note that we have rescaled it such that each interval has length $2\pi$, as is more standard in the literature, and therefore
\begin{equation}\label{nuevasalphas}
\alpha_k=\sum_{j=0}^{k-1}\beta_j, \qquad \text{for} \qquad k=1,\dots, P.
\end{equation}
Our proposal is that the QFTs defined by these quivers flow in the IR to the 2d CFTs dual to the class of solutions defined by \eqref{eq:massiveclassmetricYolanda}-\eqref{F6Yolanda}, with $h=$ constant and $g$ given by \eqref{profileg}. Next we will provide a non-trivial check of this proposal, consisting in the matching between the field theory and holographic central charges. However, before doing so we should recall that the Higgs and Coulomb branches of 2d $(4,4)$ theories are described by different CFTs, with the different branches having different R-symmetries and usually different central charges \cite{Diaconescu:1997gu,Witten:1997yu}. Thus, the question arises as to which
of these branches of the theory is described holographically by our class of solutions.
The basis of the argument in \cite{Witten:1997yu} is that the scalars should be singlets under the SO(4) R-symmetry of the 2d CFT. In our case this symmetry is associated to the isometry group of the 3-sphere in the internal space. Since the scalars in the Higgs branch are singlets under this group the Higgs branch flows to a CFT with R-symmetry coming from this SO(4). In turn, the scalars in the Coulomb branch transform in the $(\textbf{2},\textbf{2})$ representation of SO(4), so the Coulomb branch must flow to a 2d CFT with R-symmetry coming from the SU(2) associated to the S$^2$ living in the $\mathbb{T}^3$ (this is locally $\mathbb{R}^3$), which should be enhanced to SO(4) at strong coupling (see below). Based on this argument our solutions must be holographically dual to the Higgs branch 2d CFT. Accordingly, the holographic central charge must match the central charge of the Higgs branch.
Given that our theories are $(4,4)$ supersymmetric, we can use the expression that gives the central charge of the left or right-moving SU(2) group of R-symmetries to compute the central charge of the Higgs branch, given by equation \eqref{field-theory-c}, $c=6(n_{hyp}-n_{vec})$,
where $n_{hyp}$ stands for the number of $(0,4)$ hypermultiplets and $n_{vec}$ for the number of $(0,4)$ vector multiplets. Note that they can also stand, respectively, for the number of $(4,4)$ hypermultiplets and $(4,4)$ vector multiplets, more useful for our quiver constructions, since their respective $(0,4)$ Fermi multiplets and $(0,4)$ adjoint twisted hypermultiplets do not contribute to the R-symmetry anomaly. For the quivers depicted in Figure \ref{quiverT3_1} we have
\begin{equation}
n_{hyp}=2 \sum_{k=1}^{\alpha_P-1}k(k+1)+q\,\sum_{k=1}^{P-1}\alpha_k+2q\, \alpha_P \qquad \text{and} \qquad n_{vec}=2\sum_{k=1}^{\alpha_P-1}k^2+ \alpha_P^2.
\end{equation}
This gives
\begin{equation}
\label{result-c}
c=6\,q\,\sum_{k=1}^{P}\alpha_k.
\end{equation}
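This counting can be reproduced by brute force. The sketch below enumerates the multiplets of the quiver, restricting for simplicity to $q=1$, where the ranks ascend in unit steps to $\alpha_P$ and descend back to 1, and checks that $c=6(n_{hyp}-n_{vec})$ reproduces $6\sum_{k=1}^P\alpha_k$:

```python
# Anomaly count c = 6 (n_hyp - n_vec) for the quivers of Figure "quiverT3_1",
# restricted to q = 1 (function name ours, for illustration).
def higgs_central_charge(betas):
    P = len(betas)
    alphas = [sum(betas[:k]) for k in range(1, P + 1)]     # alpha_1, ..., alpha_P
    aP = alphas[-1]
    # ranks ascend 1, 2, ..., alpha_P and then descend back down to 1
    ranks = list(range(1, aP + 1)) + list(range(aP - 1, 0, -1))
    n_vec = sum(r * r for r in ranks)                      # (4,4) vector multiplets
    n_hyp = sum(a * b for a, b in zip(ranks, ranks[1:]))   # bifundamental hypers
    n_hyp += sum(alphas[:-1]) + 2 * aP                     # U(1) and U(2) flavours
    return 6 * (n_hyp - n_vec)

for betas in ([2, 1], [3, 2, 2]):
    alphas = [sum(betas[:k]) for k in range(1, len(betas) + 1)]
    assert higgs_central_charge(betas) == 6 * sum(alphas)
print(higgs_central_charge([2, 1]))  # -> 30
```

The telescoping between the bifundamental and vector contributions leaves precisely the flavour terms, reproducing the closed-form result for every sampled choice of $\beta_k$.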
The holographic central charge was computed in the previous section. It is given by expression \eqref{holographic-c}. Taking $h=1$ and $g$ as defined by \eqref{profileg}, \eqref{nuevasalphas} it reduces to
\begin{equation}\label{holoc}
c_{hol}=6\,q\,\sum_{k=1}^{P}\alpha_k.
\end{equation}
We thus find exact agreement with the field theory calculation.
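The agreement can be made explicit by evaluating the integral in \eqref{holographic-c} exactly with the original parametrisation \eqref{profileg}: since $g$ is piecewise linear, each interval of length $2\pi/q$ contributes a trapezoid, and the factors of $\pi$ cancel. A Python sketch (the parameter choice is illustrative):

```python
from fractions import Fraction

# c_hol = (3/pi) q^2 \int drho h g with h = 1 and g piecewise linear: interval k
# contributes (2*pi/q)*(alpha_k + alpha_{k+1})/2, so that
# c_hol = 3 q sum_{k=0}^{P} (alpha_k + alpha_{k+1}), with alpha_0 = alpha_{P+1} = 0.
def c_hol(betas, q):
    P = len(betas)
    alphas = [Fraction(sum(betas[:k]), q) for k in range(P + 1)] + [Fraction(0)]
    return 3 * q * sum(alphas[k] + alphas[k + 1] for k in range(P + 1))

# With vanishing endpoints the sum telescopes to 6 q sum_{k=1}^P alpha_k:
q, betas = 2, [2, 4]
alphas = [Fraction(sum(betas[:k]), q) for k in range(1, len(betas) + 1)]
assert c_hol(betas, q) == 6 * q * sum(alphas)
print(c_hol(betas, q))  # -> 48
```

Since $\alpha_0=\alpha_{P+1}=0$, each intermediate $\alpha_k$ appears twice in the trapezoid sum, which is the origin of the factor $6q\sum_{k=1}^P\alpha_k$ matching the field theory result.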
A particular example in our class of solutions is when the interval is periodically identified, in which case the function $g$ has to be constant. This gives the quantised charges
$Q_{\text{D}2}=q\, g$, $Q_{\text{D}4}=1$, for $\rho\in [0,\frac{2\pi}{q}]$. For $\rho\in [0,2\pi]$ we have $Q_{\text{D}2}=g$, $Q_{\text{D}4}=q$.
This solution describes the T-dual of the D1-D5 system when the $\text{CY}_2$ is a $\mathbb{T}^4$, and the T-duality takes place along one of the directions of the $\mathbb{T}^4$. The D5-branes become D4-branes smeared on the T-duality direction and the quiver collapses to the one describing the D1-D5 system, depicted in Figure \ref{quiverTD} (for $\rho\in [0,2\pi]$). Equation \eqref{field-theory-c} gives the well-known result $c=6\, Q_{\text{D}2}Q_{\text{D}4}$ for the central charge, in agreement with the holographic result.
\begin{figure}
\centering
\includegraphics[scale=0.45]{quiver1}
\caption{Quiver associated to the solution with $g=\text{constant}$, corresponding to the T-dual of the D1-D5 system.}
\label{quiverTD}
\end{figure}
\subsection{Realisation in M-theory}\label{mirror}
In this subsection we look into the M-theory regime of the brane intersection depicted in Table \ref{D2-D4-NS5}. At strong coupling the D4-branes become M5-branes wrapped on the 11th direction, while the NS5-branes become M5'-branes transverse to it. Thus, the Hanany-Witten configuration consists of M2-branes stretched between M5'-branes with M5-branes orthogonal to them. In M-theory the M5 and the M5' branes are however equally non-perturbative, so one could alternatively consider the configuration in which the M2-branes are stretched between the M5-branes with the M5'-branes orthogonal to them.
In order to read off the field content associated to this configuration in weakly coupled string theory we need to reduce to ten dimensions in a direction in which the M5-branes become NS5-branes. In our set-up this can be achieved reducing along the Hopf-fibre direction of the S$^3$, which is transverse to the M5-branes. This halves the number of supersymmetries and creates a D6-brane. Moreover, in the reduction the $\mathbb{T}^3$ combines with the S$^1$ (which previously played the role of the eleventh direction, and which we denote by $\psi$) to produce a $\mathbb{T}^4$. The resulting brane set-up is the one depicted in Table
\ref{D2-D4-NS5-D6}, which is the one underlying the AdS$_3\times $S$^2\times \mathbb{T}^4\times I$ solutions constructed in \cite{Lozano:2019emq}, restricted to the massless case.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}
\hline
&$x^0$ & $x^1$ & $z_1$ & $z_2$ & $z_3$ & $\psi$ & $\rho$ & $\zeta$ & $\theta^1$ & $\theta^2$ \\ \hline
D2 & x & x & & & & & x & & & \\ \hline
NS5 & x & x & x &x & x & x & & & & \\ \hline
D4 & x & x & & & & & & x & x & x \\ \hline
D6 & x & x & x & x & x & x & x & & & \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection associated to the solutions in \cite{Lozano:2019emq}. $(x^0,x^1)$ are the directions where the 2d dual CFT lives. $(z_1, z_2, z_3, \psi)$ span the $\mathbb{T}^4$, on which the NS5 and D6 branes are wrapped. The coordinates $(\zeta,\theta^1,\theta^2)$ are the transverse directions realising the SO(3)-symmetry associated with the isometries of the S$^2$.}
\label{D2-D4-NS5-D6}
\end{table}
In the particular brane intersection associated to our solutions there are $\alpha_j$ D2-branes\footnote{With the $\alpha_j$ defined as in \eqref{nuevasalphas}.} and a D6-brane wrapped on the $\mathbb{T}^4$ stretched between NS5-branes, that play the role of colour branes.
Note however that in order to have a consistent IIA supergravity background the number of D6-branes should be large, which implies that prior to the reduction the S$^3$ has to be modded by $\mathbb{Z}_k$, such that $k$ D6-branes are obtained upon reduction.
Additional $(\beta_{j-1}-\beta_j)$ orthogonal D4-branes at each interval play the role of flavour branes. The holographic central charge can be obtained from the result in \cite{Lozano:2019zvg}, where this quantity was computed for the general class of solutions in \cite{Lozano:2019emq}. One can check that for our configuration it agrees with the holographic central charge computed in \eqref{holoc}, multiplied by $k$ due to the $\mathbb{Z}_k$ orbifolding of the S$^3$. The field theory living in the brane intersection can also be determined from the general study in \cite{Lozano:2019zvg}\footnote{See also \cite{Couzens:2021veb}, where some corrections to the analysis in \cite{Lozano:2019zvg} were pointed out.}. The result is the quiver gauge theory depicted in Figure \ref{quiverT3_2}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{quiverT3_2_red}
\caption{2d quiver associated to the $\text{AdS}_3\times \text{S}^2\times \mathbb{T}^4\times I$ solutions with $\alpha_k$ D2-branes and $k$ D6-branes wrapped on the $\mathbb{T}^4$. Circles denote $(0,4)$ vector multiplets, blue lines $(4,4)$ twisted hypermultiplets, red lines $(0,4)$ hypermultiplets and dashed lines $(0,2)$ Fermi multiplets.}
\label{quiverT3_2}
\end{figure}
In this figure circles denote $(0,4)$ vector multiplets, blue lines $(4,4)$ twisted hypermultiplets, red lines $(0,4)$ hypermultiplets and dashed lines $(0,2)$ Fermi multiplets. 2d $(0,4)$ theories do not have a Coulomb branch, since $(0,4)$ vector multiplets contain no scalars. In turn, for the Higgs branch one can use expression \eqref{field-theory-c}, which gives rise to\footnote{Here the factor of $q$ arises because the quiver has to be rescaled by $q$ in order to account for the rescaling of the quiver of Figure \ref{quiverT3_1}.}
\begin{equation}
c_R=6\,q\,k\, \sum_{j=1}^P \alpha_j.
\end{equation}
In this case this is the right-moving central charge, since the theory is $(0,4)$ supersymmetric. Moreover, using expression \eqref{cL} one can see that $c_L=c_R$, due to the condition of anomaly cancellation. One can now check that this expression agrees with the central charge of (the Higgs branch of) the quiver depicted in Figure \ref{quiverT3_1}, given by expression
\eqref{result-c}\footnote{Multiplied by $k$ due to the orbifolding by $\mathbb{Z}_k$.}. This result shows that the different light multiplets appearing in the quivers depicted in Figures \ref{quiverT3_1} and \ref{quiverT3_2}, both of which are derived directly from perturbative string theory, lead to the same central charge. Of course the reason for this agreement is the common origin in M-theory of both classes of solutions. Field theoretically, what we find is a realisation of the mirror symmetry of the dual CFT, in the precise sense discussed below.
\subsection{Realisation in Type IIB}\label{typeIIB}
The common M-theory origin of both classes of solutions implies that they are related by S-duality once they are T-dualised onto Type IIB string theory. This is another reason why they should flow to the same CFT in the IR.
Which deformation would be more convenient to use away from the critical point depends as usual on the concrete value of the gauge coupling. At the level of the solutions, once they have been T-dualised to Type IIB supergravity both classes become $(0,4)$ supersymmetric, because the T-duality on the $\text{AdS}_3\times\text{S}^3\times\mathbb{T}^3\times I$ solutions, $(4,4)$ supersymmetric in Type IIA, takes place along the Hopf-fibre of the $\text S^3$, and this halves the supersymmetries to $(0,4)$\footnote{Still, the 2d dual CFT does not change, independently of the number of supersymmetries that are manifest in the UV.}. These Type IIB solutions are interesting on their own, since they provide explicit holographic duals to D3-brane box constructions \cite{Hanany:2018hlz}, realising in this case small $\mathcal{N}=(0,4)$ supersymmetry\footnote{Recall that, instead, the D3-brane boxes constructed in \cite{Hanany:2018hlz} have SO$(4)_R$ symmetry, and should therefore be dual to $\text{AdS}_3$ solutions with large $\mathcal{N}=(0,4)$ supersymmetry.}. In the next subsection we construct these Type IIB backgrounds, and show that they are related by an SL(2,$\mathbb{R}$) transformation to the T-duals (along a circle on the $\mathbb{T}^4$) of the AdS$_3\times $S$^2\times \mathbb{T}^4\times I$ solutions of massless Type IIA constructed in \cite{Lozano:2019emq}.
The brane set-up associated to the T-dual of the AdS$_3\times $S$^3\times \mathbb{T}^3\times I$ solutions studied in section \ref{D2D4NS5} is depicted in Table \ref{D3-D5-NS5-NS5'}, while that associated to the T-dual of the AdS$_3\times $S$^2\times \mathbb{T}^4\times I$ solutions constructed in \cite{Lozano:2019emq} is depicted in Table \ref{D3-NS5-D5-D5'}. One can check that these brane set-ups are S-dual to each other.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}
\hline
&$x^0$ & $x^1$ & $z_1$ & $z_2$ & $z_3$ & $\rho$ & $\zeta$ & $\psi$ & $\theta^1$ & $\theta^2$\\ \hline
D3 & x & x & & & &x & & x & & \\ \hline
D5 & x & x & x &x & x & & & x & & \\ \hline
NS5 & x & x & & & & & x & x & x & x \\ \hline
NS5' & x & x & x & x & x & x & & & & \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection T-dual to the brane intersection depicted in Table \ref{D2-D4-NS5}, realising a D3-brane box model. $(x^0,x^1)$ are the directions where the field theory lives, $(z_1, z_2, z_3)$ span the $\mathbb{T}^3$, $\rho$ is the direction where the NS5-branes are located, $\zeta$ and $\theta^i$ are respectively the radial coordinate of AdS$_3$ and the angles that parameterise the S$^2$, and $\psi$ parameterises the S$^1$ generated upon the dualisation, where the NS5'-branes are located. $(\rho,\psi)$ are thus the two directions of the brane box.}
\label{D3-D5-NS5-NS5'}
\end{table}
\begin{table}[ht!]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}\hline
&$x^0$ & $x^1$ & $z_1$ & $z_2$ & $z_3$ & $\psi$ & $\rho$ & $\zeta$ & $\theta^1$ & $\theta^2$ \\ \hline
D3 & x & x & & & &x & x & & & \\ \hline
NS5 & x & x & x &x & x &x & & & & \\ \hline
D5 & x & x & & & & x & & x & x & x \\ \hline
D5' & x & x & x & x & x & &x & & & \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection T-dual to the brane intersection depicted in Table \ref{D2-D4-NS5-D6}. $(x^0,x^1)$ are the directions where the field theory lives, $(z_1, z_2, z_3)$ span a $\mathbb{T}^3$, $\psi$ is the T-duality circle and $\rho$ is the field theory direction. This configuration is S-dual to the configuration in Table \ref{D3-D5-NS5-NS5'}.}
\label{D3-NS5-D5-D5'}
\end{table}
Furthermore, one can see that the S-duality of Type IIB string theory interchanges the $(0,4)$ hypermultiplets and $(0,4)$ twisted hypermultiplets associated to the massless string modes living in the respective Type IIB configurations.
This is the 2d manifestation of the mirror symmetry present in 3d gauge theories \cite{Intriligator:1996ex,Hanany:1996ie}, which, besides inverting the coupling constant, interchanges the scalars in the hypermultiplets and vector multiplets, and therefore the Higgs and Coulomb branches of the 3d theory. Given that 2d (0,4) field theories do not have a Coulomb branch, since (0,4) vector multiplets contain no scalars, 2d mirror symmetry cannot be realised as the interchange between the Higgs and Coulomb branches. Remarkably, mirror symmetry is realised in this set-up as the interchange of the scalars transforming under the SU$(2)_R$ symmetry, i.e.~those belonging to the twisted hypermultiplets, with those that are singlets under the SU(2)$_R$, i.e.~the ones belonging to the untwisted hypermultiplets. This extends very naturally the mirror symmetry present in 3d gauge theories to these 2d theories, and parallels the interchange between chiral and twisted chiral superfields inherent to mirror symmetry in supersymmetric sigma models.
\subsubsection{Solutions of Type IIB supergravity}
In this subsection we complement the above holographic discussion with the explicit construction of the Type IIB supergravity solutions.
We start presenting the T-dual of the solutions studied in section \ref{D2D4NS5}. T-dualising along the Hopf fiber of the S$^3$ of the $\text{AdS}_3\times\text{S}^3\times\mathbb{T}^3$ solutions given by \eqref{eq:massiveclassmetricYolanda}-\eqref{F6Yolanda} we obtain the Type IIB backgrounds
\begin{align}\label{eq:solD3D5NS5}
ds^2&= q\,h^{-1/2}\left[ds^2(\text{AdS}_3)+4^{-1}\,ds^2(\text{S}^2)\right]+q^{-1}\,h^{1/2}\,d\psi^2+ g\,\left[h^{-1/2} d\rho^2+ h^{1/2}\,ds^2(\mathbb{T}^3)\right]\,,\nonumber\\[2mm]
e^{-\Phi}&=(q\,h)^{1/2}\, g^{-1/2}\,,\qquad \qquad H_3=\partial_{\rho}\,(hg)\, \text{vol}(\mathbb{T}^3)-2^{-1}\,\text{vol}(\text{S}^2)\wedge d\psi,\nonumber \\[2mm]
F_1&=g^{-1}\,\partial_\rho\,h\,d\psi\,,\nonumber\\[2mm]
F_3&=-2^{-1} q \, \text{vol}(\text{S}^2)\wedge d\rho,\nonumber\\[2mm]
F_5&= 2\, q \, \text{vol}(\text{AdS}_3)\wedge d\rho\wedge d\psi+2^{-1}\,q\,g \,h\,\text{vol}(\mathbb{T}^3)\wedge \text{vol}(\text{S}^2)\,,
\end{align}
where $\psi$ parameterises the T-duality circle\footnote{This solution is an example contained in the class of \cite{Macpherson:2022sbs} section 3.1: one should identify $(h,g)$ and $(P,G)$ there, restrict $u'=0$ and impose that $\partial_{z_i}$ are all isometries.}.
In order to provide the local representation of the brane set-ups of Tables \ref{D3-D5-NS5-NS5'} and \ref{D3-NS5-D5-D5'} we need to focus on the particular situation
\begin{equation}
h=\text{constant}, \qquad g''=0,
\end{equation}
corresponding to the massless solutions in Type IIA. In this case the metric exhibits the characteristic behaviour of NS5-branes wrapped on an AdS$_3\times$S$^2\times$S$^1$ geometry. Indeed, it
can be verified that these solutions arise in the near horizon limit of a D3-D5-NS5-NS5' brane solution representing the bound state of Table \ref{D3-D5-NS5-NS5'}, where the D3-D5-NS5' branes have been fully localised within the worldvolume of the NS5 branes (as was done for the $H_{\text D 2}(\zeta)$ and $H_{\text D 4}(\zeta)$ harmonic functions in section \ref{branepicture}). We will restrict to this subclass of solutions, characterised by a vanishing axion, in the remainder of this section.
Let us now perform an $\mrm{SL}(2,\mathbb R)$ rotation parameterised by an angle $\xi \in[0,\frac{\pi}{2}]$,\begin{equation}
R=
\left(\begin{array}{cc}
\cos \xi & - \sin \xi \\
\sin\xi & \cos\xi \\
\end{array}\right)\,,
\end{equation}
in this subclass of solutions.
Starting with a ``seed'' background described by fluxes, dilaton, metric and axio-dilaton $F_{(n),s}$, $\Phi_{s}$, $ds^2_{10,s}$ and $\tau_s=C_{0,s}+ie^{-\Phi_s}$, $R$ acts as usual,
\begin{equation}
\begin{split}\label{Srotation_fluxes}
&\left(\begin{array}{c}
\tilde F_{3} \\
H_{3}\\
\end{array}\right)=\left(\begin{array}{cc}
\cos \xi & - \sin \xi \\
\sin\xi & \cos\xi \\
\end{array}\right)\left(\begin{array}{c}
F_{3,s} \\
H_{3,s}\\
\end{array}\right)\,,\\
& \tau=\frac{\cos\xi\,\tau_s-\sin\xi}{\sin\xi\,\tau_s+\cos\xi}\,, \qquad F_{5}=F_{5,s}\,.\\
\end{split}
\end{equation}
Note that even though our seed solutions are characterised by a vanishing axion, this transformation generates a non-trivial profile for $C_{0}$. This implies that the 3-form flux associated to the rotated solution is given by $F_{3}=\tilde F_{3}-C_{0}H_{3}$.
Finally, the metric in the string frame transforms as $ds^2_{10}=|\cos\xi+\sin\xi\,\tau|\,ds^2_{10,s}$. Applying these rules to the backgrounds given by \eqref{eq:solD3D5NS5} the following one-parameter family of solutions is obtained,
\begin{align}\label{eq:solD3D5NS5_SLrotated}
ds^2&=\Delta^{1/2}\left[q\,h^{-1/2}\left[ds^2(\text{AdS}_3)+4^{-1}\,ds^2(\text{S}^2)\right]+q^{-1}\,h^{1/2}\,d\psi^2+ g\,\left[h^{-1/2} d\rho^2+ h^{1/2}\,ds^2(\mathbb{T}^3)\right]\right]\,,\nonumber\\[2mm]
\Delta&=c^2+q\,h\,g^{-1}\,s^2\,,\nonumber\\[2mm]
e^{-\Phi}&=\Delta^{-1}(h\,q)^{1/2}\,g^{-1/2}\,,\qquad \qquad C_0=sc\,\Delta^{-1}\left(h\,q\,g^{-1}-1 \right)\,,\nonumber\\[2mm]
H_3&=c\,h\,\partial_{\rho}\,g\, \text{vol}(\mathbb{T}^3)-2^{-1}\,c\text{vol}(\text{S}^2)\wedge d\psi-2^{-1}\,s\,q \, \text{vol}(\text{S}^2)\wedge d\rho, \nonumber\\[2mm]
F_3&= -2^{-1}\,q \,c\,\Delta^{-1} \text{vol}(\text{S}^2)\wedge d\rho-s\,q\,h^2g^{-1}\Delta^{-1}\partial_{\rho}\,g\, \text{vol}(\mathbb{T}^3)+2^{-1}s\,q\,h\,g^{-1}\,\Delta^{-1}\text{vol}(\text{S}^2)\wedge d\psi,\nonumber\\[2mm]
F_5&= 2\, q \text{vol}(\text{AdS}_3)\wedge d\rho\wedge d\psi+2^{-1}\,q\,g \,h\,\text{vol}(\mathbb{T}^3)\wedge \text{vol}(\text{S}^2)\,,
\end{align}
where $s=\sin\xi$ and $c=\cos\xi$\footnote{This generalised solution is an example contained in the class of \cite{Macpherson:2022sbs} section 3.2: again one should identify $(h,g)$ with $(P,G)$ there, restrict $u'=0$ and impose that $\partial_{z_i}$ are all isometries.}. In particular, the family of S-dual solutions is obtained by setting $\xi=\frac{\pi}{2}$ in the above class, giving rise to
\begin{align}\label{eq:solD3D5NS5_Sdual}
ds^2&= q^{3/2}g^{-1/2}\left(ds^2(\text{AdS}_3)+4^{-1}ds^2(\text{S}^2)\right)+q^{-1/2}h\,g^{-1/2}d\psi^2\nonumber\\
&+q^{1/2} g^{1/2} d\rho^2+ q^{1/2} g^{1/2}\,h\,ds^2(\mathbb{T}^3),\nonumber\\[2mm]
e^{-\Phi}&=(q\,h)^{-1/2}g^{1/2}\,,\qquad \qquad H_3=-2^{-1}q \, \text{vol}(\text{S}^2)\wedge d\rho\,\nonumber\\[2mm]
F_3&=-h\, \partial_{\rho}\,g\, \text{vol}(\mathbb{T}^3)+2^{-1}\text{vol}(\text{S}^2)\wedge d\psi,\nonumber\\[2mm]
F_5&= 2\, q \text{vol}(\text{AdS}_3)\wedge d\rho\wedge d\psi+2^{-1}\,q\,g \,h\,\text{vol}(\mathbb{T}^3)\wedge \text{vol}(\text{S}^2)\,.
\end{align}
One can observe that, as expected, the 5-branes exchange their roles, with the metric now exhibiting the characteristic behaviour of D5-branes wrapped on an AdS$_3\times$S$^2\times$S$^1$ geometry, originating from a fully-backreacted D3-D5'-NS5 intersection. One can also verify that these solutions arise in the near horizon limit of the brane
intersection depicted in Table \ref{D3-NS5-D5-D5'}, where the D3-D5'-NS5 branes are fully localised within the worldvolume of the D5-branes.
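As a basic numerical sanity check of the above transformation rules, one can verify that the M\"obius action on the axio-dilaton reproduces the expressions for $C_0$ and $e^{-\Phi}$ in \eqref{eq:solD3D5NS5_SLrotated}, and that $\xi=\frac{\pi}{2}$ implements $\tau\to -1/\tau$ together with the dilaton and metric factors of \eqref{eq:solD3D5NS5_Sdual}. The sketch below is purely illustrative, with arbitrary placeholder values for $q$, $h$ and $g$:

```python
import math

def rotate_tau(tau_s, xi):
    # Mobius action of the SL(2,R) matrix R on the axio-dilaton
    c, s = math.cos(xi), math.sin(xi)
    return (c * tau_s - s) / (s * tau_s + c)

q, h, g = 2.0, 3.0, 5.0             # arbitrary placeholder values
tau_s = 1j * math.sqrt(q * h / g)   # seed with vanishing axion

xi = 0.7                            # generic rotation angle
c, s = math.cos(xi), math.sin(xi)
Delta = c**2 + s**2 * q * h / g
tau = rotate_tau(tau_s, xi)

# closed-form C_0 and e^{-Phi} of the rotated family
assert abs(tau.real - s * c * (q * h / g - 1) / Delta) < 1e-12
assert abs(tau.imag - math.sqrt(q * h / g) / Delta) < 1e-12

# xi = pi/2 is S-duality: tau -> -1/tau, and the AdS3 metric factor
# Delta^{1/2} q h^{-1/2} reduces to q^{3/2} g^{-1/2}
assert abs(rotate_tau(tau_s, math.pi / 2) + 1 / tau_s) < 1e-10
c2 = math.cos(math.pi / 2)
Delta2 = c2**2 + q * h / g
assert abs(math.sqrt(Delta2) * q / math.sqrt(h) - q**1.5 / math.sqrt(g)) < 1e-10
```

The same elementary checks go through for any positive values of the defining functions.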
The class of solutions presented in this section can be related to the Type IIB $\ma N=(0,4)$ AdS$_3$ solutions constructed in \cite{Faedo:2020lyw}, arising from slightly more general D3-D5-NS5-D5'-NS5' brane set-ups. The easiest way to show this is by relating the solution with $\xi=\frac{\pi}{2}$ given in \eqref{eq:solD3D5NS5_Sdual} with equation (2.7) in \cite{Faedo:2020lyw}. One needs to impose $H_{\text{NS5}'}=1$, rename $H_{\text{D5}'}=g$, and smear the solution in \cite{Faedo:2020lyw} in such a way that $H_{\text{D5}'}$ is delocalised over the internal $\mathbb R^3$, which can then be replaced by a $\mathbb{T}^3$.
\section{AdS$_3\times$S$^3\times \mathbb{T}^3$ in Type I'}\label{TypeI'}
In this section we return to the solutions constructed in section \ref{D2D4NS5} but we now focus on the massive case
$F_0\neq 0$. Recall that we had the AdS$_3\times$S$^3\times \mathbb{T}^3$ geometries fibered over an interval given by \eqref{eq:massiveclassmetricYolanda}-\eqref{F6Yolanda}, with defining functions satisfying the Bianchi identities \eqref{bianchisT3}.
In the massive case we choose to write $(g,h)$ in terms of a function $u$ as
\begin{equation} \label{massiveT3}
h=\sqrt{u} ,~~~~g=\frac{c}{\sqrt{u}},
\end{equation}
such that the Bianchi identities are satisfied with $c$ constant and $u$ a linear function.
The solutions then take the form
\begin{align}\label{eq:massiveclassmetricmassive}
&ds^2= \frac{q}{u^{\frac{1}{4}}}\bigg[ds^2(\text{AdS}_3)+ds^2(\text{S}^3)\bigg]+ \frac{c}{u^{\frac{1}{4}}}\bigg[ds^2(\mathbb{T}^3)+\frac{1}{\sqrt{u}}d\rho^2\bigg],~~~~e^{-\Phi}=\frac{u^{\frac{5}{8}}}{\sqrt{c}},\\
&F_0=\frac{u'}{2c}, \qquad F_4=2\,q\Bigl(\text{vol}(\text{AdS}_3)+\text{vol}(\text{S}^3)\Bigr)\wedge d\rho, \label{F0F4}\\
&F_6= 2\,q\,c\text{vol}(\mathbb{T}^3)\wedge (\text{vol}(\text{S}^3) + \text{vol}(\text{AdS}_3)) \label{massiveclasslast}
\end{align}
The underlying brane set-up is the one depicted in Table \ref{D2D4D8}.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{| l | c | c | c | c| c | c| c | c| c | c |}\hline
&$x^0$ & $x^1$ & $z_1$ & $z_2$ & $z_3$ & $\rho$ & $\zeta$ & $\theta^1$ & $\theta^2$ & $\theta^3$ \\ \hline
D2 & x & x & & & &x & & & & \\ \hline
D4 & x & x &x &x &x & & & & & \\ \hline
D8 & x & x & x & x & x & &x &x &x &x \\ \hline
\end{tabular}
\end{center}
\caption{$\frac18$-BPS brane intersection underlying the geometry \eqref{eq:massiveclassmetricmassive}-\eqref{massiveclasslast}. $(x^0,x^1)$ are the directions where the 2d dual CFT lives, $(z_1, z_2, z_3)$ span the $\mathbb{T}^3$, where the D4's and the D8's are wrapped, $\rho$ is the field theory direction, where the D2 branes are stretched, and $\theta^i$ parameterise the S$^3$.}
\label{D2D4D8}
\end{table}
As mentioned above, $u$ has to be a linear function in order to satisfy the Bianchi identities. We will take it to be piecewise linear, such that D8-branes can be introduced at the points where its derivative jumps, according to the expression for $F_0$ in \eqref{F0F4}.
We take the space to begin at $\rho=0$ and end at $\rho_P$, where $u$ vanishes. At the zeros of $u$ the solutions behave as
\begin{align}\label{eq:massiveclassmetricmassiveII}
ds^2&= \frac{q}{\sqrt{x}}\bigg[ds^2(\text{AdS}_3)+ds^2(\text{S}^3)+c \,ds^2(\mathbb{T}^3) \bigg]+4c \sqrt{x}dx^2,~~~~e^{-\Phi}=\frac{x^{\frac{5}{4}}}{\sqrt{c}},
\end{align}
where $\rho=x^2$, which is the behaviour of a localised D8/O8 system on AdS$_3\times $S$^3\times \mathbb{T}^3$.
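For illustration, the scaling behind the change of variables $\rho=x^2$ can be checked numerically; in the sketch below the slope of $u$ near its zero is set to one, a choice made purely for illustration:

```python
import math

c_const = 2.0   # placeholder value for the constant c in g = c/sqrt(u)
x = 0.37        # arbitrary point near the zero of u; rho = x^2
rho = x**2
u = rho         # near a zero, u is linear; slope set to 1 here

# radial term of the metric: (c/u^{1/4})(1/sqrt(u)) drho^2 with drho^2 = 4 x^2 dx^2
coeff_dx2 = c_const * u**(-0.75) * 4 * x**2
assert abs(coeff_dx2 - 4 * c_const * math.sqrt(x)) < 1e-12

# dilaton: e^{-Phi} = u^{5/8}/sqrt(c) = x^{5/4}/sqrt(c)
assert abs(u**0.625 - x**1.25) < 1e-12
```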
We will then define the solutions globally by embedding them into Type I' string theory, that is, introducing O8 orientifold fixed points at both ends of the space and 16 D8-branes (together with their mirrors under $\mathbb{Z}_2$) at arbitrary positions in $\rho$. Taking $\rho_P=\rho_{17}=\pi$ and the 16 D8-branes located at arbitrary points $\rho_1,\dots, \rho_{16}$ between $\rho=0$ and $\rho_{17}=\pi$, we have that $u(\rho)$ is given by
\begin{equation} \label{profileu}
u(\rho) = \left\{ \begin{array}{ccrcl}
- \frac{16c}{2\pi}\rho , &0\leq \rho\leq \rho_1 \\
\alpha_1- \frac{14c}{2\pi}(\rho-\rho_1), &\rho_1\leq \rho \leq \rho_2 \\
\vdots&\\
\alpha_k+ \frac{2c(k-8)}{2\pi}(\rho-\rho_k), &\rho_k\leq \rho \leq \rho_{k+1} \\
\vdots&\\
\alpha_{15}+\frac{14c}{2\pi}(\rho-\rho_{15}), &\rho_{15}\leq \rho\leq \rho_{16}\\
\alpha_{16}+ \frac{16c}{2\pi}(\rho-\rho_{16}), &\rho_{16}\leq \rho\leq \pi ,\\
\end{array}
\right.
\end{equation}
where, for continuity, the $\alpha_k$ must satisfy
\begin{equation}
\alpha_k=\alpha_{k-1}-\frac{2c}{2\pi}(9-k)(\rho_k-\rho_{k-1}), \qquad \text{for} \qquad k=1,\dots, 16.
\end{equation}
In turn, in order to satisfy the condition $u(\pi)=0$ the positions of the D8-branes must be such that
\begin{equation}
\sum_{k=1}^{17}(9-k)(\rho_k-\rho_{k-1})=0.
\end{equation}
Note that this is trivially satisfied when $\rho_{17-k}=\pi-\rho_k$, with $k=1,\dots, 8$, i.e.~when the D8-branes are symmetrically distributed along the interval, and also when the D8-branes are equally spaced, such that $\rho_k-\rho_{k-1}=\pi/17$ for all $k$.
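These special cases are easy to check numerically. The sketch below evaluates the constraint for an equal spacing of the 17 intervals, for a symmetric but unevenly spaced distribution, and for a generic asymmetric one; the explicit positions are illustrative choices:

```python
import math

pi = math.pi

def constraint(rho):
    # rho = [rho_0, ..., rho_17] with rho_0 = 0 and rho_17 = pi;
    # returns sum_{k=1}^{17} (9 - k)(rho_k - rho_{k-1})
    return sum((9 - k) * (rho[k] - rho[k - 1]) for k in range(1, 18))

# equally spaced D8-branes: 17 intervals of length pi/17
equal = [k * pi / 17 for k in range(18)]
assert abs(constraint(equal)) < 1e-12

# symmetric but unevenly spaced: rho_{17-k} = pi - rho_k
half = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]   # rho_1, ..., rho_8
sym = [0.0] + half + [pi - r for r in reversed(half)] + [pi]
assert abs(constraint(sym)) < 1e-12

# a generic asymmetric distribution violates the condition
generic = [k * pi / 20 for k in range(17)] + [pi]
assert abs(constraint(generic)) > 1.0
```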
Besides the D8-brane charge jumping by +1 at the position of each D8-brane, we have the quantised charges
\begin{eqnarray}
&&Q_{D2}^{(k)}=\frac{1}{(2\pi)^5}\int_{\mathbb{T}^3,\text{S}^3}F_6= c\, q\,, \label{QD2}\\
&&Q_{D4}^{(k)}=\frac{1}{(2\pi)^3}\int_{I_\rho,\text{S}^3}F_4=\frac{q}{2\pi}(\rho_{k+1}-\rho_k). \label{QD4}
\end{eqnarray}
The number of D2-branes must thus be the same in all intervals, with $c=Q_{D2}/q$, while the jump in the D4-brane charge must be given by \eqref{QD4}.
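These charges follow from elementary volume factors. Assuming unit radii, so that $\text{Vol}(\mathbb{T}^3)=(2\pi)^3$ and $\text{Vol}(\text{S}^3)=2\pi^2$, the flux integrals reduce to the arithmetic below (with placeholder values for $q$, $c$ and the interval endpoints):

```python
import math

q, c = 3.0, 1.5                 # placeholder values of the flux parameters
vol_T3 = (2 * math.pi) ** 3     # unit-radius T^3
vol_S3 = 2 * math.pi ** 2       # unit-radius S^3

# Q_D2 = (1/(2 pi)^5) int F_6 over T^3 x S^3, with F_6 = 2 q c vol(T^3)^vol(S^3)+...
QD2 = 2 * q * c * vol_T3 * vol_S3 / (2 * math.pi) ** 5
assert abs(QD2 - c * q) < 1e-12

# Q_D4 in [rho_k, rho_{k+1}]: (1/(2 pi)^3) int F_4 with F_4 = 2 q vol(S^3)^drho
rho_k, rho_k1 = 0.2, 0.7
QD4 = 2 * q * vol_S3 * (rho_k1 - rho_k) / (2 * math.pi) ** 3
assert abs(QD4 - q * (rho_k1 - rho_k) / (2 * math.pi)) < 1e-12
```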
With these ingredients we can proceed to construct the quiver gauge theories that flow in the IR to the CFTs dual to our solutions. In order to account for the different massless fields that build the quivers we look at the quantisation of the open strings stretched between the different branes in the brane set-up depicted in Table \ref{D2D4D8}. Following \cite{Douglas:1996uz}\footnote{In this reference the projection induced by the orientifold fixed points was carefully analysed for the Type I D1-D5 system, T-dual to our D2-D4-D8 brane set-up.} we find:
\begin{itemize}
\item D2-D2 strings: Open strings with both ends on the same stack of D2-branes give rise to $(0,4)$ SO($Q_{D2}$) vector multiplets and $(0,4)$ hypermultiplets in the symmetric representation of SO($Q_{D2}$).
\item D2-D4 strings: Open strings stretched between D2 and D4 branes give rise to $\mathcal{N}=(0,4)$ hypermultiplets in the bifundamental representation of SO($Q_{D2}) \times$ Sp($2Q_{D4}$).
\item D2-D8 strings: Open strings stretched between D2 and D8 branes give rise to $(0,2)$ Fermi multiplets in the bifundamental representation of SO($Q_{D2}) \times$ SO($Q_{D8}$).
\end{itemize}
These massless modes give rise to the $(0,4)$ disconnected quivers depicted in Figure \ref{quiverTypeI'}.
\begin{figure}
\centering
\includegraphics[scale=0.45]{Dibujo4}
\caption{Quiver associated to the AdS$_3\times $S$^3\times \mathbb{T}^3$ solutions in Type I'.}
\label{quiverTypeI'}
\end{figure}
In these quivers anomaly cancellation imposes that
\begin{equation}
2Q_{D4}^{(k)}=\Delta Q_{D8}^{(k)}=1,
\end{equation}
as explained below equation \eqref{anomalies}. Given that D4-branes in Type I' carry $1/2$ units of charge \cite{Witten:1995gx}, in order to obtain a consistent CFT in the IR the D4-branes must be located in exactly the same positions in $\rho$ as the D8-branes. This fixes the total number of D4-branes to 16\footnote{Note that it is possible to consider the situation in which some of the $\rho_k$ coincide, such that a group of D8-D4 branes is added at that position.}.
This condition needs to be imposed on the supergravity solution in order to describe a proper Type I' background with a well-defined 2d dual CFT. It is likely that this condition arises as a consistency condition of the supergravity solution itself; however, we leave confirmation of this for future work.
Finally, substituting \eqref{massiveT3} in \eqref{holographic-c} it is straightforward to see that the holographic central charge for this class of solutions is given by
\begin{equation}
c_{hol}= 48\, Q_{D2},
\end{equation}
and that this matches exactly the field theory result, obtained from \eqref{field-theory-c}, which gives in this case
\begin{equation}
c_R=c_L=6\sum_{k=1}^{16} Q_{D2}Q_{D4}^{(k)}= 48\, Q_{D2}.
\end{equation}
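The match can be traced to simple counting: anomaly cancellation fixes $Q_{D4}^{(k)}=1/2$ in each of the 16 intervals, so the field theory sum collapses to $48\,Q_{D2}$ for any number of D2-branes:

```python
QD2 = 7                    # number of D2-branes per interval (any integer)
QD4 = [0.5] * 16           # anomaly cancellation: 2 Q_D4^{(k)} = 1
c_R = 6 * sum(QD2 * q4 for q4 in QD4)
assert c_R == 48 * QD2     # agrees with the holographic value c_hol = 48 Q_D2
```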
\section{Conclusions}\label{conclusions}
In this paper we have constructed a new class of AdS$_3\times $S$^3\times $M$_4$ solutions of massive Type IIA supergravity with $\mathcal{N}=(0,4)$ supersymmetries and SU(3) structure. We have then analysed separately two interesting subclasses of solutions. The first one is when M$_4=~$S$^2\times \Sigma_2$, with $\Sigma_2$ a 2d Riemann surface, and the geometry is foliated over the $\Sigma_2$. We have shown that the AdS$_3\times $S$^3\times $S$^2\times \Sigma_2$ geometries flow in the UV, asymptotically locally, to the AdS$_7\times $S$^2\times I$ geometries constructed in \cite{Apruzzi:2013yva}. This points to a possible interpretation of the solutions as describing surface defect CFTs within the 6d $(1,0)$ CFTs dual to the AdS$_7$ solutions. We have checked that this interpretation is correct by explicitly embedding the 2d $(0,4)$ quivers associated to the AdS$_3$ solutions into the 6d quivers that describe the 6d $(1,0)$ CFTs dual to the AdS$_7$ spaces. Our analysis extends\footnote{And corrects, as explained in section \ref{surfaceCFT}.} the results in \cite{Faedo:2020nol}, where AdS$_3$ solutions dual to surface defect CFTs embedded in the 6d $(1,0)$ CFT dual to the AdS$_7$ solution to massless Type IIA supergravity \cite{Cvetic:2000cj} were constructed, allowing now for $F_0\neq0$. In our analysis we have been able to show the exact agreement between the field theory and holographic central charges, even though both quantities are divergent due to the existence of the non-compact direction inherent to the defect. Indeed, the whole point of the defect interpretation is that the presence of the non-compact direction allows one to build up the AdS$_7$ geometry asymptotically and therefore to complete the non-compact AdS$_3$ solutions in the UV.
The second case that we have addressed in detail is when M$_4=\mathbb{T}^3\times I$ and the AdS$_3\times $S$^3\times \mathbb{T}^3$ geometry is foliated over the interval. We have studied separately the massless and massive cases, starting with the former. We have shown that in this case there is a supersymmetry enhancement to $(4,4)$, and that the solutions are holographically dual to 2d CFTs with 8 supercharges living in D2-D4-NS5 Hanany-Witten brane set-ups. This is the trivial extension to 2d of the 3d Hanany-Witten brane set-ups constructed in \cite{Hanany:1996ie}, and even though they were studied long ago \cite{Brodie:1997wn,Alishahiha:1997cm} the holographic duals were still missing in the literature. In this paper we have taken the first step towards filling this gap. Our only point of concern is that the global completion that we have found for our AdS$_3$ constructions is in terms of smeared ONS5 orientifold fixed planes. ONS5 orientifold fixed planes are perfectly well-defined objects in string theory (one way of defining them is as S-duals of O5-planes), but in our construction they are smeared on the $\mathbb{T}^3$. As we discuss below \eqref{eq:refpoint}, it is possible that the smearing of the ONS5s is an artifact of the supergravity approximation and is resolved in string theory. However if one is to take a more conservative view, the existence of our solutions with smeared ONS5s also suggests the existence of similar solutions in supergravity with localised ONS5s, as is often the case in constructions involving O-planes. Such solutions are generically far harder to construct, and lie outside the scope of this work, but one can view our solutions as an important first step in this direction. We have shown that the embedding of this class of solutions within M-theory and Type IIB supergravity sheds light on some of their properties.
The first realisation relates them to the AdS$_3\times $S$^2\times \mathbb{T}^4\times I$ solutions of massless Type IIA supergravity constructed in \cite{Lozano:2019emq}\footnote{The class of solutions in \cite{Lozano:2019emq} is more general, since they allow for a non-vanishing Romans' mass. Here we come across the massless subclass due to the connection via M-theory.}. This realisation allows one to interpret the quiver CFTs dual to the solutions studied in this paper, which exhibit $(4,4)$ supersymmetry, and the quiver CFTs associated to the massless solutions in \cite{Lozano:2019emq}, $(0,4)$ supersymmetric, as deformations of a unique 2d $(4,4)$ CFT, which exhibits different supersymmetries depending on how it is deformed in the UV. We have completed our analysis with a study in Type IIB string theory, where both AdS$_3$/CFT$_2$ pairs are related by S-duality. The realisation in Type IIB shows that mirror symmetry in 2d interchanges the scalars in the hypermultiplets and twisted hypermultiplets, instead of the scalars in the vector multiplets and hypermultiplets (and therefore the Coulomb and Higgs branches) as in 3d \cite{Intriligator:1996ex,Hanany:1996ie}. That mirror symmetry can still be realised in this way in theories without a Coulomb branch is a remarkable output of our analysis. These AdS$_3$ solutions in Type IIB provide concrete examples within the broad classification of AdS$_3\times$S$^2\times$M$_5$ vacua with M$_5$ supporting an identity-structure derived in \cite{Macpherson:2022sbs}.
Finally, we have extended our study of the AdS$_3\times $S$^3\times \mathbb{T}^3\times I$ solutions by turning on a Romans' mass. We find solutions with local non-compact parts glued together with localised D8-branes, bounded between D8/O8s. The solutions so constructed can be globally embedded within Type I' string theory allowing us to propose a dual AdS/CFT pair: We provide evidence for our proposal by comparing the central charges of the two theories, finding exact agreement. In this case the condition for anomaly cancellation of the 2d quivers required that we impose an additional constraint on the dual supergravity background by hand; it would be interesting to reproduce this condition with a gravity computation. The solutions constructed in this section are the small ${\cal N}=(0,4)$ analogues of a similar class of geometries on AdS$_3\times $S$^3\times $S$^3\times I$ constructed in \cite{Macpherson:2018mif}. It would also be interesting to explore what the CFT duals of these solutions are, and to what extent they are similar to our proposal here.
\section*{Acknowledgements}
YL and CR are partially supported by the AEI through the Spanish grant PGC2018-096894-B-100 and by the FICYT through the Asturian grant SV-PA-21-AYUD/2021/52177. CR is supported by a Severo Ochoa Fellowship by the Principality of Asturias (Spain). NM is also supported by the AEI through the project PID2020-114157GB-I00 and the Unidad de Excelencia Mar\'\i a de Maeztu MDM-2016-0692, and by Xunta de Galicia-Conseller\'\i a de Educaci\'on (Centro singular de investigaci\'on de Galicia accreditation 2019-2022, and project ED431C-2021/14), and the European Union FEDER. NP is supported by the Israel Science Foundation (grant No. 741/20) and by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant ``Holography and the Swampland''.
\section{Introduction}
This paper establishes strong convergence rates for certain numerical approximations of fractional Brownian motion.
These approximations are inspired by Markovian representations of fractional Brownian motion \cite{carmona1998fractional, carmona2000approximation, muravlev2011representation, harms2016affine} and of more general Volterra processes with singular kernels \cite{mytnik2015uniqueness, abijaber2017affine, abijaber2018markovian, abijaber2018multi, cuchiero2018generalized}.
The motivation is to develop efficient Monte Carlo methods for fractional (or rough) volatility models \cite{gatheral2014vola, bayer2016pricing, bennedsen2016decoupling, bayer2017regularity, horvath2017functional}, which have been introduced on the grounds of extensive empirical evidence \cite{gatheral2014vola, bayer2016pricing, bennedsen2016decoupling} and theoretical results \cite{alos2007short, fukasawa2011asymptotic, forde2017asymptotics, bayer2018short}.
The main result is the following.
\begin{theorem}\label{thm:main}
For any time horizon $T\in(0,\infty)$,
Hurst index $H \in (0,1/2)$,
and desired convergence rate $r \in (0,\infty)$,
the following statements hold:
\begin{enumerate}
\item\label{thm:main1}
Volterra Brownian motion can be approximated at rate $n^{-r}$ by a sum of $n$ Ornstein--Uhlenbeck processes.
More precisely, there are speeds of mean reversion $x_{n,i}\in(0,\infty)$ and weights $w_{n,i}\in(0,\infty)$, $1\leq i\leq n$, such that for any Brownian motion $W$, the continuous versions $W^H$ and $W^{H,n}$ of the stochastic integrals
\begin{align*}
W^H_t&:=\int_0^t (t-s)^{H-1/2} dW_s,
&
W^{H,n}_t&:=\sum_{i=1}^n w_{n,i}\int_0^t e^{-(t-s)x_{n,i}} dW_s,
&&
t \in [0,T],
\end{align*}
satisfy
\begin{equation*}
\forall p\in[1,\infty):
\qquad
\sup_{n \in \mathbb N} n^r \left\| W^H-W^{H,n}\right\|_{L^p(\Omega,C([0,T],\mathbb R))}<\infty.
\end{equation*}
\item\label{thm:main2}
Under the above approximation, put prices in the rough Bergomi model converge at rate $n^{-r}$.
More precisely, for any Brownian motion $B$, the stochastic exponentials
\begin{align*}
S_t &:= 1 + \int_0^t S_s \exp(W^H_s) dB_s,
&
S^{n}_t &:= 1 + \int_0^t S^n_s \exp(W^{H,n}_s) dB_s,
&&
t \in [0,T],
\end{align*}
satisfy for all strikes $K \in [0,\infty)$ that
\begin{equation*}
\sup_{n \in \mathbb N} n^r \left| \mathbb E\left[(K-S_T)_+\right]-\mathbb E\left[(K-S^n_T)_+\right]\right|<\infty.
\end{equation*}
\end{enumerate}
\end{theorem}
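The approximation in part \ref{thm:main1} is straightforward to implement. The sketch below uses a simple midpoint rule on a geometric grid for the nodes and weights (standing in for the optimised quadrature of \autoref{lem:dis}) and a left-point discretisation of the stochastic integrals; the grid bounds and step counts are illustrative choices, not the ones analysed in this paper:

```python
import math, random

def midpoint_nodes(H, a=1e-6, b=1e4, n=200):
    # midpoint rule in log-space for the Laplace representation
    # t^{H-1/2} = (1/Gamma(1/2-H)) * int_0^inf e^{-x t} x^{-1/2-H} dx
    norm = 1.0 / math.gamma(0.5 - H)
    du = (math.log(b) - math.log(a)) / n
    nodes = [math.exp(math.log(a) + (j + 0.5) * du) for j in range(n)]
    weights = [norm * x ** (0.5 - H) * du for x in nodes]
    return nodes, weights

def ou_sum_path(T, n_steps, nodes, weights, rng):
    # W^{H,n}_t = sum_i w_i Y^{x_i}_t, with the OU factors Y^{x_i} driven by
    # a common Brownian motion; left-point rule for the stochastic increments
    dt = T / n_steps
    y = [0.0] * len(nodes)
    path = [0.0]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        y = [math.exp(-x * dt) * (yi + dW) for x, yi in zip(nodes, y)]
        path.append(sum(w * yi for w, yi in zip(weights, y)))
    return path

nodes, weights = midpoint_nodes(H=0.1)
# the weights reproduce the singular kernel: sum_i w_i e^{-x_i t} ~ t^{H-1/2}
for t in (0.1, 0.5, 1.0):
    approx = sum(w * math.exp(-x * t) for x, w in zip(nodes, weights))
    assert abs(approx / t ** (0.1 - 0.5) - 1.0) < 0.02

path = ou_sum_path(T=1.0, n_steps=500, nodes=nodes, weights=weights,
                   rng=random.Random(42))
```

With this crude rule, 200 nodes reproduce the kernel to well below a percent on $t\in[0.1,1]$; the content of \autoref{thm:main} is that suitably chosen nodes and weights achieve any prescribed polynomial rate in $n$.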
\begin{figure}[h]%
\centering
\tiny
\begin{psfrags}%
\psfrag{a00}[Bc][Bc]{$0.0$}%
\psfrag{a02}[Bc][Bc]{$0.2$}%
\psfrag{a04}[Bc][Bc]{$0.4$}%
\psfrag{a06}[Bc][Bc]{$0.6$}%
\psfrag{a08}[Bc][Bc]{$0.8$}%
\psfrag{a0}{$\text{ 0}$}%
\psfrag{a10A}{$10$}%
\psfrag{a10}[Bc][Bc]{$1.0$}%
\psfrag{a5}{$\text{ 5}$}%
\psfrag{t}[Bc][Bc]{$\text{t}$}%
\psfrag{x0}[tc][tc]{$0$}%
\psfrag{x11}[tc][tc]{$1$}%
\psfrag{x2}[tc][tc]{$0.2$}%
\psfrag{x4}[tc][tc]{$0.4$}%
\psfrag{x6}[tc][tc]{$0.6$}%
\psfrag{x8}[tc][tc]{$0.8$}%
\psfrag{x}[Bc][Bc]{$\text{x}$}%
\psfrag{y0}[cr][cr]{$0$}%
\psfrag{y11}[cr][cr]{$1$}%
\psfrag{y2}[cr][cr]{$0.2$}%
\psfrag{y4}[cr][cr]{$0.4$}%
\psfrag{y6}[cr][cr]{$0.6$}%
\psfrag{y8}[cr][cr]{$0.8$}%
\psfrag{Yxt}[Bc][Bc]{$Y^x_t$}%
\psfrag{AAA}[tc][tc]{$0.2$}%
\includegraphics[width=0.7\textwidth]{RandomField}
\end{psfrags}
\caption{Volterra Brownian motion of Hurst index $H\in(0,1/2)$ can be represented as an integral $W^H_t=\frac{1}{\Gamma(1/2-H)}\int_0^\infty Y^x_t\, x^{-1/2-H}\,dx$ over a Gaussian random field $Y^x_t$. The smoothness of the random field in the spatial dimension $x$ allows one to approximate this integral efficiently using high order quadrature rules.}
\label{fig:random_field}
\end{figure}
\begin{proof}
\ref{thm:main1} follows from the integral representation in \autoref{lem:vol} and its discretization in \autoref{lem:dis}.
More precisely, the $m$-point quadrature rule in \autoref{lem:dis} converges at any rate $r<\delta m/(1-\alpha-\beta+\delta+m)=2Hm/3$, where the parameters $\alpha=H+1/2$, $\beta=m-1$, $\gamma=1/2-H$, and $\delta=H$ are given by \autoref{lem:vol}.
Moreover, \ref{thm:main2} follows from \ref{thm:main1} and \autoref{lem:bergomi}.
\end{proof}
The idea behind \autoref{thm:main} is to represent Volterra Brownian motion as an integral over a Gaussian random field, as described in \autoref{lem:vol} and \autoref{fig:random_field}.
Thanks to the spatial smoothness of the random field, the integral can be approximated efficiently using high order quadrature rules, following and extending \cite{carmona2000approximation, harms2016affine, abijaber2018lifting, abijaber2018multi}.
A visual impression of the quality of this approximation can be obtained from \autoref{fig:paths}.
The predicted convergence rate $r\approx 2Hm/3$ using $m$-point interpolatory quadrature closely matches the numerically observed one; see \autoref{fig:errors}.
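For readers who wish to experiment, the approximation can be sketched in a few lines of Python. Everything below is illustrative only: the geometric midpoint nodes and plain cell masses stand in for the interpolatory rules analyzed later, the one-step Ornstein--Uhlenbeck update is a first-order simplification rather than an exact discretization, and all function names are our own.

```python
import numpy as np
from math import gamma

def simulate_markovian_fbm(H, T, n_steps, nodes, weights, rng):
    """Approximate W^H_t = int_0^t (t-s)^{H-1/2} dW_s by a weighted sum of
    Ornstein-Uhlenbeck factors Z_j(t) = int_0^t exp(-(t-s) x_j) dW_s, all
    driven by the same Brownian increments (the random-field picture)."""
    dt = T / n_steps
    dW = rng.standard_normal(n_steps) * np.sqrt(dt)
    decay = np.exp(-nodes * dt)        # per-step mean reversion exp(-x_j dt)
    Z = np.zeros_like(nodes)           # OU states at the current time
    path = np.zeros(n_steps + 1)
    c = 1.0 / gamma(0.5 - H)           # kernel normalization 1/Gamma(1/2-H)
    for k in range(n_steps):
        Z = decay * Z + dW[k]          # first-order OU update (sketch only)
        path[k + 1] = c * np.dot(weights, Z)
    return path

# crude quadrature for the measure x^{-alpha} dx with alpha = H + 1/2:
# geometric midpoints as nodes, exact cell masses as weights
H, alpha = 0.1, 0.6
xi = np.geomspace(1e-2, 1e3, 41)
nodes = np.sqrt(xi[:-1] * xi[1:])
weights = (xi[1:]**(1 - alpha) - xi[:-1]**(1 - alpha)) / (1 - alpha)
path = simulate_markovian_fbm(H, 1.0, 500, nodes, weights,
                              np.random.default_rng(0))
```

Replacing the midpoint nodes and cell masses by $m$-point Gauss rules per cell recovers the type of scheme whose rate is analyzed in the proofs.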
\begin{figure}[h]%
\begin{minipage}{0.5\textwidth}
\tiny
\begin{psfrags}%
\psfrag{AAA}[tc][tc]{$0.2$}%
\psfrag{AAB}[tc][tc]{$0.4$}%
\psfrag{AAC}[tc][tc]{$0.6$}%
\psfrag{AAD}[tc][tc]{$0.8$}%
\psfrag{AAE}[tc][tc]{$1.0$}%
\psfrag{AA}[tc][tc]{$0.0$}%
\psfrag{B}[tc][tc]{$W^{H,n}_t$}%
\psfrag{TimeT}[bc][Bc]{$\text{time ($t$)}$}%
\psfrag{VVVA}[cr][cr]{$-2$}%
\psfrag{VVVB}[cr][cr]{$0$}%
\psfrag{VVVC}[cr][cr]{$2$}%
\psfrag{VVV}[cr][cr]{$-4$}%
\psfrag{VVVD}[cr][cr]{$4$}%
\includegraphics[width=\textwidth]{SamplePaths}
\end{psfrags}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\tiny
\begin{psfrags}%
\psfrag{AAA}[tc][tc]{$0.2$}%
\psfrag{AAB}[tc][tc]{$0.4$}%
\psfrag{AAC}[tc][tc]{$0.6$}%
\psfrag{AAD}[tc][tc]{$0.8$}%
\psfrag{AAE}[tc][tc]{$1.0$}%
\psfrag{AA}[tc][tc]{$0.0$}%
\psfrag{B}[tc][tc]{$W^{H,n}_t$}%
\psfrag{TimeT}[bc][Bc]{$\text{time ($t$)}$}%
\psfrag{VVVA}[cr][cr]{$ -5$}%
\psfrag{VVVB}[cr][cr]{$ 0$}%
\psfrag{VVVC}[cr][cr]{$ 5$}%
\psfrag{VVV}[cr][cr]{$-10$}%
\psfrag{VVVD}[cr][cr]{$10$}%
\includegraphics[width=\textwidth]{HurstIndices}%
\end{psfrags}
\end{minipage}
\caption{%
Dependence of the approximations on the number $n$ of quadrature intervals and the Hurst index $H$.
Left: varying the number
$n\in\{2,5,10,20,40\}
=
\{\textcolor[rgb]{0.368417, 0.506779, 0.709798}{\blacksquare},
\textcolor[rgb]{0.880722, 0.611041, 0.142051}{\blacksquare},
\textcolor[rgb]{0.560181, 0.691569, 0.194885}{\blacksquare},
\textcolor[rgb]{0.922526, 0.385626, 0.209179}{\blacksquare},
\textcolor[rgb]{0.528488, 0.470624, 0.701351}{\blacksquare}\}$
of quadrature intervals with fixed parameters $H=0.1$, $m=5$.
Right:
varying the Hurst index
$H\in\{0.1,0.2,0.3,0.4\}
=
\{\textcolor[rgb]{0.368417, 0.506779, 0.709798}{\blacksquare},
\textcolor[rgb]{0.880722, 0.611041, 0.142051}{\blacksquare},
\textcolor[rgb]{0.560181, 0.691569, 0.194885}{\blacksquare},
\textcolor[rgb]{0.922526, 0.385626, 0.209179}{\blacksquare}\}$ with fixed parameters $n=40$, $m=5$.}
\label{fig:paths}
\end{figure}
\begin{figure}[h]%
\begin{minipage}{0.5\textwidth}
\tiny
\begin{psfrags}%
\psfrag{AAAAAA}[cr][cr]{$10^{-2}$}%
\psfrag{AAAAAB}[cr][cr]{$10^{-1}$}%
\psfrag{AAAAAC}[cr][cr]{$10^{0}$}%
\psfrag{AAAAA}[cr][cr]{$10^{-3}$}%
\psfrag{AAA}[tc][tc]{$10^{1}$}%
\psfrag{AAB}[tc][tc]{$10^{2}$}%
\psfrag{AAC}[tc][tc]{$10^{3}$}%
\psfrag{AA}[tc][tc]{$10^{0}$}%
\psfrag{B}[tc][tc]{$\text{relative error ($e$)}$}%
\psfrag{NumberpOfInt}[bc][Bc]{$\text{number of intervals ($n$)}$}%
\includegraphics[width=\textwidth]{strongErrors}
\end{psfrags}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\tiny
\begin{psfrags}%
\psfrag{AAA}[tc][tc]{$ 5$}%
\psfrag{AAB}[tc][tc]{$10$}%
\psfrag{AAC}[tc][tc]{$15$}%
\psfrag{AAD}[tc][tc]{$20$}%
\psfrag{AA}[tc][tc]{$ 0$}%
\psfrag{B}[tc][tc]{$\text{rate ($r$)}$}%
\psfrag{InterpolOrd}[bc][Bc]{$\text{interpolation order ($m$)}$}%
\psfrag{VVVA}[cr][cr]{$0.2$}%
\psfrag{VVVB}[cr][cr]{$0.4$}%
\psfrag{VVVC}[cr][cr]{$0.6$}%
\psfrag{VVV}[cr][cr]{$0.0$}%
\psfrag{VVVD}[cr][cr]{$0.8$}%
\psfrag{VVVE}[cr][cr]{$1.0$}%
\psfrag{VVVF}[cr][cr]{$1.2$}%
\psfrag{VVVG}[cr][cr]{$1.4$}%
\hspace{0.85em}\includegraphics[width=\textwidth-0.85em]{rates}%
\end{psfrags}
\end{minipage}
\caption{
The predicted convergence rate $r\approx2Hm/3$ with $m$-point interpolatory quadrature closely matches the numerically observed one.
Left: relative error $e=\|W^H_1-W^{H,n}_1\|_{L^2(\Omega)}/\|W^H_1\|_{L^2(\Omega)}$ for $m\in\{2,3,\dots,20\}=\{\textcolor[rgb]{0.368417, 0.506779, 0.709798}{\blacksquare},\textcolor[rgb]{0.880722, 0.611041, 0.142051}{\blacksquare},\dots,\textcolor[rgb]{0.8439466852489265, 0.3467106629502147, 0.3309221912517893}{\blacksquare}\}$.
Right: slopes of the lines in the left plot (dots) and predicted convergence rate (line).}
\label{fig:errors}
\end{figure}
A comparison to several alternative methods \cite{hosking1984modeling, dieker2004simulation, bennedsen2017hybrid, carmona2000approximation} exhibits the potential, but also the limitations of integral representations as a basis for numerical simulation schemes.
The ranking of these methods in terms of overall complexity depends on the desired accuracy and number of time points as shown in \autoref{tab:comparison}.
Our scheme outperforms the others in situations where accuracy $n^{-1}$ on a time grid of step size $\gg n^{-1/H}$ is desired.
However, in fractional volatility modeling one typically wants accuracy $n^{-1}$ on finer time grids of step size $\approx n^{-1/H}$ because this leads via piecewise constant interpolation to the same accuracy in the supremum norm.
On these finer time grids our scheme achieves accuracy $n^{-1}$ at complexity $n^{1/H+r}$ for arbitrarily small $r$.
Using exponentially converging quadrature rules such as Chebychev \cite{gass2016magic, gass2016chebychev}, one could at best hope to reduce this complexity down to $n^{1/H}\log n$.
This is exactly the complexity of the hybrid scheme \cite{bennedsen2017hybrid} and the circulant embedding method \cite{dietrich1997fast}.
This complexity is optimal because it coincides with the complexity of convolution of $n^{1/H}$ numbers using the fast Fourier transform.
\begin{table}[h]
\centering
\begin{tabular}{lccc}
\toprule
Method & Structure & Error & Complexity \\
\midrule
Cholesky & Static & 0 & $k^3$ \\
Hosking, Dieker \cite{hosking1984modeling, dieker2004simulation} & Recursive & 0 & $k^2$ \\
Dietrich, Newsam \cite{dietrich1997fast} & Static & 0 & $k\log k$ \\
Bennedsen, Lunde and Pakkanen \cite{bennedsen2017hybrid} & Recursive & $k^{-H}$ & $k \log k$ \\
Carmona, Coutin, Montseny \cite{carmona2000approximation} & Recursive & $n^{-1}$ & $kn^{9/(4H)}$ \\
This paper & Recursive & $n^{-1}$ & $kn^r$ for all $r$ \\
\bottomrule
\end{tabular}
\medskip
\caption{Complexity of several numerical methods for sampling fractional Brownian motion $(W^H_{i/k})_{i\in\{1,\dots,k\}}$ with Hurst index $H\in(0,1/2)$ at $k$ equidistant time points.}
\label{tab:comparison}
\end{table}
Our result has applications to fractional volatility modeling.
One implication, which is spelled out in \autoref{thm:main}, is that put prices in the rough Bergomi model converge at the same rate as the underlying fractional volatility process.
By put-call parity, this extends to call prices if the Brownian motions $B$ and $W$ are negatively correlated, as explained in \autoref{rem:putcall}.
A fully discrete Monte Carlo scheme for the rough Bergomi model can be obtained by discretizing the Ornstein--Uhlenbeck processes of \autoref{thm:main} in time.
This can be done efficiently because the covariance matrix of the Ornstein--Uhlenbeck increments has low numerical rank if the time steps are small.
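To illustrate the last point: over one time step of size $h$, the exact Ornstein--Uhlenbeck increments $\epsilon_j=\int_t^{t+h}e^{-(t+h-s)x_j}dW_s$ are jointly Gaussian with covariance $C_{ij}=(1-e^{-(x_i+x_j)h})/(x_i+x_j)$, and truncating an eigendecomposition of $C$ exposes its low numerical rank. The Python sketch below is our own illustration of this observation; the truncation tolerance is an arbitrary choice.

```python
import numpy as np

def ou_increment_factor(nodes, h, tol=1e-12):
    """Covariance C of the exact OU increments over one step of size h,
    C[i, j] = (1 - exp(-(x_i + x_j) h)) / (x_i + x_j), together with a
    low-rank factor F (C ~= F F^T) obtained by dropping tiny eigenvalues."""
    X = nodes[:, None] + nodes[None, :]
    C = (1.0 - np.exp(-X * h)) / X
    lam, U = np.linalg.eigh(C)
    keep = lam > tol * lam.max()          # numerical-rank truncation
    F = U[:, keep] * np.sqrt(lam[keep])
    return C, F

nodes = np.geomspace(1e-1, 1e3, 20)
C, F = ou_increment_factor(nodes, h=1e-3)
increment = F @ np.random.default_rng(1).standard_normal(F.shape[1])
```

Sampling `F @ rng.standard_normal(F.shape[1])` then costs only rank-many Gaussian draws per time step.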
Several directions for future generalization and improvement come to mind.
\autoref{thm:main} is proved by approximation in the Laplace domain, which implies convergence in the time domain by the continuity of the Laplace transform.
As Volterra processes with Lipschitz drift and volatility coefficients depend continuously on the kernel in the $L^2$ norm, it would be interesting to check if similar convergence results hold also in this more general setting.
The rate of convergence could potentially be improved using Chebychev quadrature, taking advantage of the real analyticity of the random field $Y^x_t$ in the spatial variable $x$.
Finally, following \cite{bennedsen2017hybrid, mccrickerd2018turbocharging}, one could aim for more careful treatments of the singularity of the kernel near the diagonal and apply some variance reduction techniques.
\section{Setting and notation}\label{sec:set}
We will frequently make the following assumptions.
Let $H\in (0,1/2)$,
let $\alpha=H+1/2$,
let $\mu$ be the sigma-finite measure $x^{-\alpha}dx$ on the interval $(0,\infty)$,
let $p \in [1,\infty)$,
let $T \in (0,\infty)$,
let $(\Omega,\mathcal F,\mathbb P,(\mathcal F_t)_{t \in [0,T]})$ be a stochastic basis,
and let $W,B\colon [0,T]\times\Omega\to\mathbb R$ be $(\mathcal F_t)_{t \in [0,T]}$-Brownian motions.
Moreover, we will use the following notation.
The space $C^{0,\infty}([0,T]\times(0,\infty),\mathbb R)$ carries the initial topology and differential structure with respect to the derivatives
\begin{equation*}
\partial_x^k\colon C^{0,\infty}([0,T]\times(0,\infty),\mathbb R) \to C([0,T]\times K,\mathbb R), \qquad k \in \mathbb N, K \subset (0,\infty) \text{ compact},
\end{equation*}
and the spaces $C([0,T]\times K,\mathbb R)$, $C([0,T],L^1(\mu))$, etc.\ carry the supremum norm.
The space of real-valued Lipschitz functions $f\colon\mathbb R\to\mathbb R$ is denoted by $\operatorname{Lip}(\mathbb R)$ and carries the norm $\|f\|_{\operatorname{Lip}(\mathbb R)}=|f(0)|+\sup_{x\neq y}|f(y)-f(x)| |y-x|^{-1}$.
\section{Integral representation}\label{sec:int}
This section establishes bounds on the tails and derivatives of the Markovian lift of Volterra Brownian motion \cite{carmona1998fractional, carmona2000approximation, muravlev2011representation, harms2016affine}.
These bounds are used in the error analysis in \autoref{sec:dis}.
The meaning of the constants $\alpha,\beta,\gamma,\delta,m$ below is consistent throughout the paper.
\begin{lemma}\label{lem:vol}
Assume the setting of \autoref{sec:set}.
Then there exists a measurable mapping
\begin{equation*}
Y \colon \Omega \to C^{0,\infty}([0,T]\times(0,\infty),\mathbb R) \cap C([0,T],L^1(\mu))
\end{equation*}
with the following properties:
\begin{enumerate}
\item\label{lem:vol1} Volterra Brownian motion is a linear functional of $Y$ in the sense that
\begin{align*}
\forall t \in [0,T]: \quad \mathbb P\left[\int_0^t (t-s)^{\alpha-1} dW_s=\int_0^\infty Y_t(x) \frac{dx}{x^\alpha}\right] = 1.
\end{align*}
\item\label{lem:vol2} The following integrability conditions hold: for all $m \in \mathbb N_{>0}$, $\beta = m-1$, $\gamma = 1-\alpha$, and $\delta \in [0,\alpha-1/2)$,
\begin{align*}
\left\|\sup_{t \in [0,T]}\sup_{x \in (0,\infty)}\left|x^\beta \partial_x^m Y_t(x)\right|\right\|_{L^p(\Omega)} &<\infty,
\\
\sup_{x_0\in [0,1]} x_0^{-\gamma} \left\| \sup_{t \in [0,T]} \left|\int_0^{x_0} Y_t(x)\frac{dx}{x^\alpha}\right|\right\|_{L^p(\Omega)} &< \infty,
\\
\sup_{x_1\in [1,\infty)} x_1^{\delta} \left\| \sup_{t \in [0,T]} \left|\int_{x_1}^\infty Y_t(x)\frac{dx}{x^\alpha}\right|\right\|_{L^p(\Omega)} &< \infty.
\end{align*}
\item\label{lem:vol3} The following integrability condition holds: for each $\beta \in (0,1/2)$,
\begin{align*}
\left\|\sup_{t\in[0,T]}\sup_{x\in(0,\infty)} x^\beta |Y_t(x)|\right\|_{L^p(\Omega)} < \infty.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $Y\colon [0,T]\times (0,\infty)\times \Omega \to \mathbb R$ satisfy for each $t \in [0,T]$ and $x \in (0,\infty)$ that
\begin{equation*}
Y_t(x) = \frac{1}{\Gamma(\frac12-H)}\left(W_t - \int_0^t W_s x e^{-(t-s)x}ds\right).
\end{equation*}
Then $Y\colon \Omega\to C^{0,\infty}([0,T]\times(0,\infty),\mathbb R)$ is well-defined and measurable because the right-hand side above is a smooth function of the sample paths of $W$, i.e., the following mapping is smooth:
\begin{equation*}
C([0,T]) \ni w \mapsto \Bigg((t,x)\mapsto w_t - \int_0^t w_s x e^{-(t-s)x}ds\Bigg) \in C^{0,\infty}([0,T]\times(0,\infty),\mathbb R).
\end{equation*}
Moreover, $Y\colon \Omega\to C([0,T],L^1(\mu))$ is well-defined and measurable by \cite[Theorem~2.11]{harms2016affine}. We briefly reproduce the argument here because it will be needed in the sequel. The expression
\begin{align*}\tag{$*$}\label{equ:vol*}
\mathbb E\left[\int_0^\infty \sup_{t \in [0,T]} |Y_t(x)|\frac{dx}{x^\alpha}\right]
=
\int_0^\infty \mathbb E\left[\sup_{t \in [0,T]} |Y_t(x)|\right]\frac{dx}{x^\alpha}
\end{align*}
is well-defined because the supremum is measurable by the continuity in $t$ of $Y_t(x)$. Integration by parts and a continuity argument show that
\begin{equation*}
\forall x \in (0,\infty):
\quad
\mathbb P\left[\forall t \in [0,T]: W_t - \int_0^t W_s x e^{-(t-s)x}ds = \int_0^t e^{-(t-s)x}dW_s\right]=1,
\end{equation*}
which implies that
\begin{align*}
\eqref{equ:vol*}
=
\frac{1}{\Gamma(\frac12-H)} \int_0^\infty \mathbb E\left[\sup_{t \in [0,T]} \left|\int_0^t e^{-(t-s)x}dW_s\right|\right]\frac{dx}{x^\alpha}.
\end{align*}
This can be bounded using the maximal inequality for Ornstein--Uhlenbeck processes of \cite[Theorem~2.5 and Remark~2.6]{graversen2000maximal}: there is $C_1 \in (0,4)$ such that
\begin{align*}
\eqref{equ:vol*}
\leq
\frac{C_1}{\Gamma(\frac12-H)}
\int_0^\infty \sqrt{\frac{\log(1+Tx)}{x}}\frac{dx}{x^\alpha}<\infty.
\end{align*}
Thus, $Y$ has continuous sample paths in $L^1(\mu)$ by the dominated convergence theorem.
To summarize, we have shown that the following mapping is well-defined and measurable,
\begin{equation*}
Y \colon \Omega \to C^{0,\infty}([0,T]\times(0,\infty),\mathbb R) \cap C([0,T],L^1(\mu)),
\end{equation*}
where the intersection of the two spaces above carries the initial sigma algebra with respect to the inclusions.
\ref{lem:vol1} follows from the above and the stochastic Fubini theorem: for each $t \in [0,T]$, one has almost surely that
\begin{align*}
\int_0^\infty Y_t(x)\frac{dx}{x^\alpha}
&=
\frac{1}{\Gamma(\frac12-H)}\int_0^\infty \int_0^t e^{-(t-s)x} dW_s \frac{dx}{x^\alpha}
\\&=
\frac{1}{\Gamma(\frac12-H)} \int_0^t \int_0^\infty e^{-(t-s)x} \frac{dx}{x^\alpha} dW_s
=
\int_0^t (t-s)^{\alpha-1} dW_s.
\end{align*}
\ref{lem:vol2} can be seen as follows.
Recall that $\beta=m-1$ and let
\begin{align*}
C_2 &=
\sup_{\substack{t \in (-\infty,0]\\x \in (0,\infty)}} |x^\beta\partial_x^m (x e^{tx})|
=
\sup_{\substack{t \in (-\infty,0]\\x \in (0,\infty)}} |x^{m-1}\partial_x^m \partial_t e^{tx}|
\\&=
\sup_{\substack{t \in (-\infty,0]\\x \in (0,\infty)}} |x^{m-1}\partial_t(t^m e^{tx})|
=
\sup_{y \in (-\infty,0]} \left|m y^{m-1} + y^m\right| e^{y}<\infty,
\\
C_3 &= \sup_{x \in (0,\infty)} x^{-(\alpha-\frac12-\delta)} \sqrt{\log(1+Tx)}<\infty.
\end{align*}
Using again the maximal inequality for Ornstein--Uhlenbeck processes of \cite[Theorem~2.5 and Remark~2.6]{graversen2000maximal} and noting that $\log(1+Tx)\leq Tx$, one obtains the following three estimates:
\begin{align*}
\hspace{2em}&\hspace{-2em}
\mathbb E\left[\sup_{t \in [0,T]}\sup_{x \in (0,\infty)}\left|x^\beta \partial_x^m Y_t(x)\right|\right]
\\&=
\mathbb E\left[\sup_{t \in [0,T]} \sup_{x \in (0,\infty)}\left| \int_0^t W_s x^{m-1}\partial_x^m (x e^{-(t-s)x}) ds \right| \right]
\\&\leq
C_2 T\, \mathbb E\left[\sup_{t \in [0,T]} |W_t| \right]<\infty,
\\
\hspace{2em}&\hspace{-2em}
\sup_{x_0\in [0,1]} x_0^{-\gamma} \mathbb E\left[ \sup_{t \in [0,T]} \left|\int_0^{x_0} Y_t(x)x^{-\alpha}dx\right|\right]
\\&\leq
C_1 \sup_{x_0\in [0,1]} x_0^{-\gamma} \int_0^{x_0} \sqrt{\frac{\log(1+Tx)}{x}} x^{-\alpha} dx
\\&\leq
C_1 \sup_{x_0\in [0,1]} x_0^{-\gamma} \int_0^{x_0} \sqrt{T} x^{-\alpha} dx
=
C_1 \sqrt{T} \gamma^{-1}
< \infty,
\\
\hspace{2em}&\hspace{-2em}
\sup_{x_1\in [1,\infty)} x_1^{\delta} \mathbb E\left[ \sup_{t \in [0,T]} \left|\int_{x_1}^\infty Y_t(x)x^{-\alpha}dx\right|\right]
\\&\leq
C_1 \sup_{x_1\in [1,\infty)} x_1^{\delta} \int_{x_1}^\infty \sqrt{\frac{\log(1+Tx)}{x}} x^{-\alpha} dx
\\&\leq
C_1 C_3 \sup_{x_1\in [1,\infty)} x_1^{\delta} \int_{x_1}^\infty x^{-1-\delta} dx
=
C_1 C_3 \delta^{-1}
< \infty.
\end{align*}
This shows \ref{lem:vol2} for $p=1$. The generalization to $p \in [1,\infty)$ follows from the Kahane--Khintchine inequality applied to the Gaussian process $Y$.
\ref{lem:vol3} can be seen as follows. Let
\begin{align*}
C_4 = \mathbb E\left[\sup_{s,t\in[0,T]} \frac{|W_t-W_s|}{|t-s|^\beta}\right],
\end{align*}
which is finite by the H\"older continuity of Brownian motion. Note that
\begin{align*}
\Gamma(\tfrac12-H)\, Y_t(x)
&=
W_t - \int_0^t W_t x e^{-(t-s)x} ds + \int_0^t (W_t-W_s) x e^{-(t-s)x} ds
\\&=
(W_t-W_0) e^{-tx} + \int_0^t (W_t-W_s)xe^{-(t-s)x}ds.
\end{align*}
As $\Gamma(\tfrac12-H)\geq\Gamma(\tfrac12)=\sqrt{\pi}>1$, the prefactor may be dropped in the following estimate.
Therefore,
\begin{align*}
\hspace{2em}&\hspace{-2em}
\mathbb E\left[\sup_{t\in[0,T]}\sup_{x\in(0,\infty)} x^\beta |Y_t(x)|\right]
\\&\leq
C_4 \sup_{t\in[0,T]}\sup_{x\in(0,\infty)}\left( (tx)^\beta e^{-tx} + \int_0^t (t-s)^\beta x^\beta xe^{-(t-s)x}ds \right)
\\&=
C_4 \sup_{y\in(0,\infty)}\left( y^\beta e^{-y} + \int_0^y z^\beta e^{-z}dz \right)
\leq 2 C_4.
\end{align*}
This shows \ref{lem:vol3} for $p=1$. The generalization to $p \in [1,\infty)$ follows from the Kahane--Khintchine inequality applied to the Gaussian process $Y$.
\end{proof}
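The Laplace-transform identity $\int_0^\infty e^{-ux}x^{-\alpha}\,dx=\Gamma(1-\alpha)u^{\alpha-1}$ underlying \ref{lem:vol1} is easy to check numerically. The Python sketch below uses an ad hoc geometric grid and trapezoidal sums; the grid bounds and tolerance are arbitrary choices.

```python
import numpy as np
from math import gamma

# numerically verify int_0^infty exp(-u x) x^{-alpha} dx = Gamma(1-alpha) u^{alpha-1}
H, u = 0.3, 0.7
alpha = H + 0.5
x = np.geomspace(1e-20, 1e4, 400001)     # geometric grid resolves x ~ 0
f = np.exp(-u * x) * x**(-alpha)
val = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal sum
ref = gamma(1.0 - alpha) * u**(alpha - 1.0)         # Gamma(1/2-H) u^{alpha-1}
```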
\section{Discretization}\label{sec:dis}
In this section, the measure $\mu$ in the integral representation of Volterra Brownian motion is approximated by a weighted sum of Dirac measures.
More specifically, for each $n\in\mathbb N$, the positive half line is truncated to a finite interval $[\xi_{n,0},\xi_{n,n}]$.
This interval is then split into subintervals by a geometric sequence $(\xi_{n,i})_{i\in\{1,\dots,n\}}$,
and on each subinterval $[\xi_{n,i},\xi_{n,i+1}]$ the measure $\mu$ is approximated by an $m$-point interpolatory quadrature rule, such as the Gauss rule.
Classical error analysis for interpolatory quadrature rules (see e.g.~\cite{brass2011quadrature}) then yields the desired convergence result.
\begin{definition}\label{def:qua}
Let $a,b \in \mathbb R$ satisfy $a<b$, let $w\colon [a,b]\to [0,\infty)$ be a continuous function such that $\int_a^b w(x)dx>0$, and let $m \in \mathbb N_{>0}$. Then a measure $\mu$ on $[a,b]$ is called a non-negative $m$-point interpolatory quadrature rule on $[a,b]$ with respect to the weight function $w$ if there are grid points $x_1,\dots,x_m \in [a,b]$ and weights $w_1,\dots,w_m \in [0,\infty)$ such that $\mu = \sum_{j=1}^m w_j \delta_{x_j}$ and
\begin{equation*}
\forall k \in \{0,\dots,m-1\}:
\quad
\int_a^b x^k w(x) \mu(dx) = \int_a^b x^k w(x) dx.
\end{equation*}
\end{definition}
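Concretely, for the weight $w(x)=x^{-\alpha}$ the combined weights $\omega_j=w_j x_j^{-\alpha}$ of an $m$-point interpolatory rule with prescribed nodes solve an $m\times m$ Vandermonde system against the moments of $x^{-\alpha}dx$. The Python sketch below uses equispaced nodes purely for illustration (not the Gauss rule of the experiments), and the non-negativity required by the definition must be checked separately.

```python
import numpy as np

def interpolatory_weights(nodes, a, b, alpha):
    """Return omega with sum_j omega[j] * f(nodes[j]) = int_a^b f(x) x^{-alpha} dx
    for every polynomial f of degree < len(nodes)."""
    m = len(nodes)
    k = np.arange(m)
    # moments of the weight x^{-alpha} on [a, b]
    moments = (b**(k + 1 - alpha) - a**(k + 1 - alpha)) / (k + 1 - alpha)
    V = np.vander(nodes, m, increasing=True).T   # V[k, j] = nodes[j]**k
    return np.linalg.solve(V, moments)

a, b, alpha = 1.0, 2.0, 0.6
omega = interpolatory_weights(np.linspace(a, b, 3), a, b, alpha)
```

By construction the rule is exact on polynomials of degree below the number of nodes, which is the property exploited in the Peano-kernel error analysis below.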
The assumptions of the following lemma are satisfied thanks to the bounds of \autoref{lem:vol}, where the same constants $\alpha,\beta,\gamma,\delta,m$ are used.
\begin{lemma}
\label{lem:dis}
Assume the setting of \autoref{sec:set},
let $m \in \mathbb N_{>0}$ and $\alpha,\beta,\gamma,\delta \in (0,\infty)$ satisfy $1-\alpha-\beta+m > 0$,
let
\begin{equation*}
Y\colon \Omega\to C^{0,m}([0,T]\times(0,\infty),\mathbb R) \cap C([0,T],L^1(\mu))
\end{equation*}
be a measurable function which satisfies the integrability conditions
\begin{align*}
\left\|\sup_{t \in [0,T]}\sup_{x \in (0,\infty)}\left|x^\beta \partial_x^m Y_t(x)\right|\right\|_{L^p(\Omega)} &< \infty,
\\
\limsup_{x_0\downarrow 0} x_0^{-\gamma} \left\| \sup_{t \in [0,T]} \left|\int_0^{x_0} Y_t(x)x^{-\alpha}dx\right|\right\|_{L^p(\Omega)} &< \infty,
\\
\limsup_{x_1\uparrow \infty} x_1^{\delta} \left\| \sup_{t \in [0,T]} \left|\int_{x_1}^\infty Y_t(x)x^{-\alpha}dx\right| \right\|_{L^p(\Omega)} &< \infty,
\end{align*}
let $r \in (0,\delta m/(1-\alpha-\beta+\delta+m))$,
for each $n \in \mathbb N$ and $i \in \{0,\dots,n-1\}$ let
\begin{equation*}
\xi_{n,0}=n^{-r/\gamma},
\qquad
\xi_{n,n}=n^{r/\delta},
\qquad
\xi_{n,i}=\xi_{n,0}(\xi_{n,n}/\xi_{n,0})^{i/n},
\end{equation*}
let $\mu_{n,i}$ be a non-negative $m$-point interpolatory quadrature rule on $[\xi_{n,i},\xi_{n,i+1}]$ with respect to the weight function $x\mapsto x^{-\alpha}$,
and let $\mu_n=\sum_{i=0}^{n-1}\mu_{n,i}$.
Then
\begin{equation*}
\sup_{n\in\mathbb N} n^r \left\|\sup_{t \in [0,T]} \left|\int_0^\infty Y_t(x) x^{-\alpha} (\mu_n(dx)-dx)\right|\right\|_{L^p(\Omega)}<\infty.
\end{equation*}
\end{lemma}
\begin{proof} We define the constants
\begin{align*}
\eta &= \left(\frac{1}{r}-\frac{1-\alpha-\beta+m+\delta}{\delta m}\middle)\middle\slash\middle(\frac{1}{\gamma}+\frac{1}{\delta}\right) \in (0,\infty),
\\
C_1
&=
\frac{\pi^m}{m!2^m} \left\|\sup_{t \in [0,T]}\sup_{x \in (0,\infty)}|x^\beta Y^{(m)}_t(x)|\right\|_{L^p(\Omega)}
\in (0,\infty),
\\
C_2
&=
\sup_{\lambda \in (1,\infty)} \frac{\lambda-1}{\lambda^{1-\alpha-\beta+m}-1}
\in
(0,1/(1-\alpha-\beta+m)],
\\
C_3
&=
\sup_{\xi \in [1,\infty)} \sup_{n \in [\log \xi,\infty)} \left(\xi^{1/n}-1\right) n \xi^{-\eta}
\in (0,\infty),
\\
C_4
&=
\min \left\{n \in \mathbb N; n\geq \log(\xi_{n,n}/\xi_{n,0})=\left(\frac{r}{\gamma}+\frac{r}{\delta}\right)\log(n)\right\}
< \infty,
\end{align*}
where the upper bound on $C_2$ follows from Bernoulli's inequality
\begin{equation*}
\forall \lambda \in [0,\infty):
\quad
\lambda^{1-\alpha-\beta+m}=(1+(\lambda-1))^{1-\alpha-\beta+m} \geq 1 +(1-\alpha-\beta+m)(\lambda-1),
\end{equation*}
and the upper bound on $C_3$ follows from the inequality
\begin{align*}
\forall \xi \in [1,\infty)\forall n \in [\log(\xi),\infty):
\quad
\xi^{1/n}-1
&=
\exp(\log(\xi)/n)-1
\leq
e \log(\xi)/n.
\end{align*}
By \cite[Theorem~4.2.3]{brass2011quadrature} one has for each $t \in [0,T]$, $n \in \mathbb N$, and $i \in \{0,\dots,n-1\}$ that
\begin{align*}
\hspace{2em}&\hspace{-2em}
\int_{\xi_{n,i}}^{\xi_{n,i+1}} Y_t(x) x^{-\alpha} (\mu_n(dx)-dx)
=
\int_{\xi_{n,i}}^{\xi_{n,i+1}} \partial_x^m Y_t(x) K_{n,i}(x)dx,
\end{align*}
where the Peano kernel $K_{n,i} \colon [\xi_{n,i},\xi_{n,i+1}]\to\mathbb R$ is a measurable function which satisfies \cite[Theorem~5.7.1]{brass2011quadrature}
\begin{align*}
\sup_{x \in [\xi_{n,i},\xi_{n,i+1}]} |K_{n,i}(x)|
\leq
\frac{\pi^m}{m!} \left(\frac{\xi_{n,i+1}-\xi_{n,i}}{2}\right)^m \sup_{x \in [\xi_{n,i},\xi_{n,i+1}]} x^{-\alpha}.
\end{align*}
Thus, one has for each $n \in \mathbb N$ that
\begin{align*}\tag{$*$}\label{equ:dis*}
\hspace{2em}&\hspace{-2em}
\left\|\sup_{t \in [0,T]}\left|\int_{\xi_{n,0}}^{\xi_{n,n}} Y_t(x) x^{-\alpha} (\mu_n(dx)-dx)\right|\right\|_{L^p(\Omega)}
\\&\leq
\sum_{i=0}^{n-1}\left\|\sup_{t \in [0,T]}\left|\int_{\xi_{n,i}}^{\xi_{n,i+1}} Y_t(x) K_{n,i}(x) dx\right|\right\|_{L^p(\Omega)}
\\&\leq
\sum_{i=0}^{n-1} \frac{\pi^m}{m!2^m} \left\|\sup_{\substack{t \in [0,T]\\x \in [\xi_{n,i},\xi_{n,i+1}]}}|x^\beta Y^{(m)}_t(x)|\right\|_{L^p(\Omega)}\!\! \xi_{n,i}^{-\alpha-\beta} (\xi_{n,i+1}-\xi_{n,i})^{m+1}
\\&\leq
C_1 \sum_{i=0}^{n-1} \xi_{n,i}^{-\alpha-\beta} (\xi_{n,i+1}-\xi_{n,i})^{m+1}.
\end{align*}
This can be expressed as a geometric series: letting $\lambda_n = (\xi_{n,n}/\xi_{n,0})^{1/n}$, one has for each $n \in \mathbb N$ that
\begin{align*}
\eqref{equ:dis*} &=
C_1 \xi_{n,0}^{1-\alpha-\beta+m} (\lambda_n-1)^{m+1} \sum_{i=0}^{n-1} \lambda_n^{i(1-\alpha-\beta+m)}
\\&=
C_1 \xi_{n,0}^{1-\alpha-\beta+m} (\lambda_n-1)^{m+1} \frac{\lambda_n^{n(1-\alpha-\beta+m)}-1}{\lambda_n^{1-\alpha-\beta+m}-1}
\\&=
C_1 (\lambda_n-1)^{m+1} \frac{\xi_{n,n}^{1-\alpha-\beta+m}-\xi_{n,0}^{1-\alpha-\beta+m}}{\lambda_n^{1-\alpha-\beta+m}-1}.
\end{align*}
Absorbing the denominator into one of the factors $(\lambda_n-1)$ and discarding the term $\xi_{n,0}$ yields for each $n \in \mathbb N$ that
\begin{align*}
\eqref{equ:dis*}&\leq
C_1 C_2(\lambda_n-1)^m \xi_{n,n}^{1-\alpha-\beta+m}
=
C_1 C_2((\xi_{n,n}/\xi_{n,0})^{1/n}-1)^m \xi_{n,n}^{1-\alpha-\beta+m}.
\end{align*}
For each $n \in \mathbb N \cap [C_4,\infty)$, this can be estimated by
\begin{align*}
\eqref{equ:dis*}&\leq
C_1 C_2 C_3^m n^{-m} (\xi_{n,n}/\xi_{n,0})^{\eta m} \xi_{n,n}^{1-\alpha-\beta+m}
\\&=
C_1 C_2 C_3^m n^{-m+\eta m r (1/\gamma+1/\delta)+(1-\alpha-\beta+m)r/\delta}
=
C_1 C_2 C_3^m n^{-r}.
\end{align*}
Therefore, noting that $n^r = \xi_{n,0}^{-\gamma}=\xi_{n,n}^{\delta}$, one has
\begin{align*}
\hspace{2em}&\hspace{-2em}
\limsup_{n\to\infty} n^r \mathbb E\left[\sup_{t \in [0,T]} \left|\int_0^\infty Y_t(x) x^{-\alpha} (\mu_n(dx)-dx)\right|\right]
\\&\leq
\limsup_{n\to\infty} \xi_{n,0}^{-\gamma}\ \mathbb E\left[\sup_{t \in [0,T]} \left|\int_{(0,\xi_{n,0}]} Y_t(x) x^{-\alpha} dx\right|\right]
\\&\qquad+
\limsup_{n\to\infty} \xi_{n,n}^{\delta}\ \mathbb E\left[\sup_{t \in [0,T]} \left|\int_{[\xi_{n,n},\infty)} Y_t(x) x^{-\alpha} dx\right|\right]
\\&\qquad+
\sup_{n\in\mathbb N} n^r \mathbb E\left[\sup_{t \in [0,T]}\left|\int_{\xi_{n,0}}^{\xi_{n,n}} Y_t(x) x^{-\alpha} (\mu_n(dx)-dx)\right|\right]
<\infty.
\qedhere
\end{align*}
\end{proof}
\begin{remark}
The choice of the quadrature rule in \autoref{lem:dis} is admittedly somewhat arbitrary but produces good results.
The use of the geometric grid $\xi_{n,i}$ goes back to \cite{carmona2000approximation} and simplifies the error analysis compared to more complex subdivisions which distribute the error more equally.
It would be interesting to explore if the holomorphicity of $x\mapsto Y_t(x)$ permits the use of quadrature rules with exponential convergence rates such as Chebychev quadrature; see the discussion in \autoref{sec:int}.
\end{remark}
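As a deterministic sanity check of the truncated geometric-grid quadrature, one can integrate $f(x)=e^{-x}$ against $x^{-\alpha}dx$, for which $\int_0^\infty e^{-x}x^{-\alpha}dx=\Gamma(1-\alpha)$ is known in closed form. In the Python sketch below, the truncation exponents and the one-point midpoint rule per cell are ad hoc simplifications of the scheme in \autoref{lem:dis}.

```python
import numpy as np
from math import gamma

def geometric_grid_quadrature(n, alpha, lo, hi):
    """Approximate int_0^infty exp(-x) x^{-alpha} dx: truncate to [lo, hi],
    subdivide geometrically into n cells, and use the exact x^{-alpha} mass
    of each cell at its geometric midpoint (a one-point rule, m = 1)."""
    xi = np.geomspace(lo, hi, n + 1)
    nodes = np.sqrt(xi[:-1] * xi[1:])
    mass = (xi[1:]**(1 - alpha) - xi[:-1]**(1 - alpha)) / (1 - alpha)
    return np.dot(mass, np.exp(-nodes))

alpha = 0.8                              # corresponds to H = 0.3
exact = gamma(1.0 - alpha)
errs = [abs(geometric_grid_quadrature(n, alpha, n**-2.0, n**2.0) - exact)
        for n in (10, 40, 160)]
```

The error decreases monotonically in $n$; for a non-stochastic integrand that does not vanish at $0$ it is dominated by the truncation at $\xi_{n,0}$, which is why the stochastic analysis above exploits the decay of $Y_t(x)$ near $x=0$.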
\section{Rough Bergomi model}
The prices of put options in the rough Bergomi model converge at the same rate as the approximated Volterra processes.
This holds not only for the Ornstein--Uhlenbeck approximations of \autoref{lem:dis}, but in full generality for any approximations in the $L^2([0,T]\times\Omega)$ norm with exponential moment bounds.
\begin{lemma}\label{lem:bergomi}
Assume the setting of \autoref{sec:set},
let $V,\smashedtilde V,S,\smashedtilde S\colon [0,T]\times\Omega\to\mathbb R$ be continuous stochastic processes with $V_0=\smashedtilde V_0=0$ and
\begin{equation*}
\forall t \in [0,T]:
\quad
S_t = 1 + \int_0^t S_s \exp(V_s)dW_s,
\quad
\smashedtilde S_t = 1 + \int_0^t \smashedtilde S_s \exp(\smashedtilde V_s)dW_s,
\end{equation*}
and let $f\colon(0,\infty)\to\mathbb R$ be a measurable function such that $f \circ \exp \in \operatorname{Lip}(\mathbb R)$.
Then
\begin{multline*}
\big|\mathbb E[f(S_T)]-\mathbb E[f(\smashedtilde S_T)]\big|
\leq
\|f\circ\exp\|_{\operatorname{Lip}(\mathbb R)}\big(\sqrt{T}+6\big)
\\
\times\left\|\exp(2|V|)+\exp(2|\smashedtilde V|)\right\|_{L^2(\Omega,C([0,T]))}\|V-\smashedtilde V\|_{L^2([0,T]\times\Omega)}.
\end{multline*}
\end{lemma}
\begin{proof}
It is sufficient to control the log prices in $L^1$ because
\begin{equation*}
\big|\mathbb E[f(S_T)]-\mathbb E[f(\smashedtilde S_T)]\big|
\leq
\|f\circ\exp\|_{\operatorname{Lip}(\mathbb R)} \|\log(S_T)-\log(\smashedtilde S_T)\|_{L^1(\Omega)}.
\end{equation*}
The basic inequality
\begin{equation*}
\forall x,y \in \mathbb R:
\quad
\big|\exp(x)-\exp(y)\big|
\leq
\big(\exp(x)+\exp(y)\big) |x-y|
\end{equation*}
and the Burkholder--Davis--Gundy inequality imply that
\begin{align*}
\hspace{2em}&\hspace{-2em}
\|\log(S_T)-\log(\smashedtilde S_T)\|_{L^1(\Omega)}
\\&=
\left\|-\frac12\int_0^T\big(\exp(2V_t)-\exp(2\smashedtilde V_t)\big)dt+\int_0^T\big(\exp(V_t)-\exp(\smashedtilde V_t)\big)dW_t\right\|_{L^1(\Omega)}
\\&\leq
\left\|\frac12\int_0^T\big(\exp(2V_t)+\exp(2\smashedtilde V_t)\big)(2V_t-2\smashedtilde V_t)dt\right\|_{L^1(\Omega)}
\\&\qquad
+6\left\|\sqrt{\int_0^T\big(\exp(V_t)+\exp(\smashedtilde V_t)\big)^2(V_t-\smashedtilde V_t)^2dt}\right\|_{L^1(\Omega)}
\\&\leq
\left\|\exp(2|V|)+\exp(2|\smashedtilde V|)\right\|_{L^2(\Omega,C([0,T]))}
\\&\qquad\times
\left(\left\|\int_0^T(V_t-\smashedtilde V_t)dt\right\|_{L^2(\Omega)}+6\left\|\sqrt{\int_0^T(V_t-\smashedtilde V_t)^2dt}\right\|_{L^2(\Omega)}\right)
\\&\leq
\left\|\exp(2|V|)+\exp(2|\smashedtilde V|)\right\|_{L^2(\Omega,C([0,T]))}
\big(\sqrt{T}+6\big) \left\|V-\smashedtilde V\right\|_{L^2([0,T]\times\Omega)}.
\qedhere
\end{align*}
\end{proof}
\begin{remark}\label{rem:putcall}
For each $K \in (0,\infty)$ the put-option payoff
\begin{equation*}
f\colon (0,\infty) \to \mathbb R, \qquad x \mapsto (K-x)_+,
\end{equation*}
satisfies the assumption of \autoref{lem:bergomi} that $f\circ\exp\in\operatorname{Lip}(\mathbb R)$ because
\begin{equation*}
\sup_{\substack{x,y\in\mathbb R\\x\neq y}}\frac{|f(e^y)-f(e^x)|}{|y-x|}\leq e^K<\infty.
\end{equation*}
The call-option payoff does not have this property, but the prices of call options can be obtained by put-call parity if $W$ and $B$ are negatively correlated because this implies that $S$ is a martingale \cite{gassiat2018martingale}.
\end{remark}
\printbibliography
\end{document}
\section{\scshape Introduction}
Let $\mathbb{F}$ be an algebraically closed field of any characteristic and
$\mathds{A}_{\mathbb{F}}^{n}$ be the $n$-dimensional affine space over $\mathbb{F}$.
Suppose a finite group $G$ acts on $\mathds{A}_{\mathbb{F}}^{n}$, linearly but not necessarily faithfully.
Then the coordinate ring of the quotient variety $\mathds{A}_{\mathbb{F}}^{n}/G$ is isomorphic to the invariant ring $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})^{G}$, which is the main object of study in Algebraic Invariant Theory. The groundbreaking theorem that $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})^{G}$ is always a finitely generated $\mathbb{F}$-algebra was proved by Emmy Noether in 1926. Since then, the study of relations between
the algebraic structure of $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})^{G}$ and the geometry of the action of $G$ on $\mathds{A}_{\mathbb{F}}^{n}$ has occupied a central position in the invariant theory of finite groups; see Campbell-Wehlau \cite{CW2011}, Derksen-Kemper \cite{DK2015} and Neusel-Smith \cite{NS2002} for general references.
We say that the quotient variety $\mathds{A}_{\mathbb{F}}^{n}/G$ is \textit{modular} if the characteristic of $\mathbb{F}$ divides the order of $G$, and \textit{nonmodular} otherwise. The affine space $\mathds{A}_{\mathbb{F}}^{n}$ can be regarded as a linear representation of $G$, and we call $\mathds{A}_{\mathbb{F}}^{n}/G$ \textit{reduced} if no direct summand of $\mathds{A}_{\mathbb{F}}^{n}$ is isomorphic to the one-dimensional trivial representation of $G$.
We also say that $\mathds{A}_{\mathbb{F}}^{n}/G$ is a \textit{Cohen-Macaulay} (\textit{Gorenstein, complete intersection, hypersurface}) quotient if $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})^{G}$ is \textit{Cohen-Macaulay} (\textit{Gorenstein, complete intersection, hypersurface} respectively). We use ${\rm Sing}_{G}(\mathds{A}_{\mathbb{F}}^{n})$ to denote the set of all singularities in $\mathds{A}_{\mathbb{F}}^{n}/G$.
Suppose $\mathds{A}_{\mathbb{F}}^{n}/G$ is nonmodular, i.e., the characteristic of $\mathbb{F}$ is zero or a prime $p$ that does not divide the order of $G$. The famous Shephard-Todd-Chevalley theorem says that ${\rm Sing}_{G}(\mathds{A}_{\mathbb{F}}^{n})$ is empty (i.e., $\mathds{A}_{\mathbb{F}}^{n}/G$ is smooth) if and only if $G$ acts on $\mathds{A}_{\mathbb{F}}^{n}$ as a group generated by reflections; see Shephard-Todd \cite{ST1954} and Chevalley \cite{Che1955}. Noether's bound theorem shows that any invariant ring $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})^{G}$ can be generated by polynomial invariants of degree less than or equal to the order of $G$; see for example Derksen-Kemper \cite[Corollary 3.2.4]{DK2015}.
Another important result due to Hochster and Eagon asserts that
$\mathds{A}_{\mathbb{F}}^{n}/G$ is always Cohen-Macaulay; see Hochster-Eagon \cite{HE1971}.
For modular quotient varieties $\mathds{A}_{\mathbb{F}}^{n}/G$, the situation becomes worse and more complicated.
A theorem of Serre shows that if $\mathds{A}_{\mathbb{F}}^{n}/G$ is smooth, then $G$ acts on $\mathds{A}_{\mathbb{F}}^{n}$ as a group generated by reflections; see \cite{Ser1968}. However, a counterexample demonstrates that the converse of Serre's theorem fails in general; see Campbell-Wehlau \cite[Section 8.2]{CW2011}. Noether's bound theorem and Hochster-Eagon's theorem also fail to hold in the modular case; see Richman \cite{Ric1990} and Campbell-Wehlau \cite[Example 8.0.9]{CW2011}.
For the low-dimensional case, however, if $n\leqslant3$ then $\mathds{A}_{\mathbb{F}}^{n}/G$ is always Cohen-Macaulay; see Smith \cite{Smi1996}.
Further, for $G=C_{4}$ acting on $\mathds{A}_{\mathbb{F}}^{4}$ via the regular representation with ${\rm char}(\mathbb{F})=2$, Bertin proved that
$\mathds{A}_{\mathbb{F}}^{4}/C_{4}$ is factorial but not Cohen-Macaulay, answering a question of Pierre Samuel; see \cite{Ber1967}.
For recent explicit computations of modular Cohen-Macaulay quotient varieties, see Chen \cite{Che2014}, Chen-Wehlau \cite{CW2017} and references therein.
Motivated by connecting representation theory with crepant resolutions of quotient varieties, the McKay correspondence (which, roughly speaking, reveals an equality between an invariant of representations of a finite subgroup $G\subset {\rm SL}_{n}(\mathbb{C})$ and an invariant of a crepant resolution of the quotient variety $\mathbb{C}^n/G$) in characteristic 0 has been studied extensively; for example, see
Batyrev \cite{Bat1999} and Batyrev-Dais \cite{BD1996} in terms of stringy invariants, Denef-Loeser \cite{DL2002} from the theory of motivic integration, and Reid \cite{Rei2002}. In 2014, Yasuda \cite{Yas2014} studied the McKay correspondence for modular representations of the cyclic group $C_{p}$ of order $p$ and calculated the stringy invariant of the quotient variety via the motivic integration generalized to quotient stacks associated to representations, leading to a conjecture of Yasuda \cite[Conjecture 1.1]{Yas2015} with a positive answer for the modular cases of $C_{p}$. In particular, it was proved that the Euler characteristic of the crepant resolution variety is equal to the number of the conjugacy class of $C_{p}$. In his notes, Yasuda \cite[Problem 6.6]{Yas2015} asks whether this statement remains valid for
any small subgroup $G\subset {\rm GL}_{n}(\mathbb{F})$ in the modular cases. Here a ``small'' subgroup $G$ means that $G$ contains no non-identity reflections.
\begin{prob}\label{prob1.1}
{\rm
Let $G\subset{\rm GL}_n({\mathbb{F}})$ be a small subgroup and $Y\longrightarrow X=\mathds{A}_{\mathbb{F}}^{n}/G$ be a crepant resolution. Suppose ${\rm char}(\mathbb{F})=p>0$ divides the order of $G$. Is the Euler characteristic of $Y$ equal to the number of the conjugacy classes of $G$?
}\end{prob}
There are some works done on the McKay correspondence in the modular cases and for different purposes; see Gonzalez-Verdier \cite{GV1985}, Schr\"oer \cite{Sch2009}, and Yasuda \cite{Yas2017}. However, to the best of our knowledge, no more results (even for finite cyclic groups) have appeared to answer Problem \ref{prob1.1} except Yasuda's work \cite{Yas2014} on the cyclic group $C_{p}$.
The purpose of this paper is to study singularities in $n$-dimensional Cohen-Macaulay modular quotient variety $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$, where $p$ is a prime number and $C_{m}$ denotes the cyclic group of order $m\in\mathbb{N}^{+}$. In particular, we will provide an example that demonstrates that Problem \ref{prob1.1} has a negative answer in the modular cases of $C_{2p}$
if the condition that ``$G$ is a small subgroup'' is dropped.
For $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ to be a modular quotient variety, there are three situations:
(1) char$(\mathbb{F})=p=2$; (2) char$(\mathbb{F})=p>2$; and (3) char$(\mathbb{F})=2$ and $p>2$.
For the first case we prove the following result.
\begin{thm}\label{thm1.2}
Let $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ be an $n$-dimensional, reduced Cohen-Macaulay quotient variety in characteristic 2. Then
\begin{enumerate}
\item $n\leqslant 4$ and $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ is always a hypersurface;
\item $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ is smooth if and only if $n=2$; in this case, $\mathds{A}_{\mathbb{F}}^{2}/C_{4}$ is isomorphic to $\mathds{A}_{\mathbb{F}}^{2}/C_{2}$;
\item ${\rm Sing}_{C_{4}}(\mathds{A}_{\mathbb{F}}^{3})\cong \mathds{A}_{\mathbb{F}}^{1}$ and ${\rm Sing}_{C_{4}}(\mathds{A}_{\mathbb{F}}^{4})\cong \mathds{A}_{\mathbb{F}}^{2}$.
\end{enumerate}
\end{thm}
For the second case, we prove that there always exists a reduced modular Cohen-Macaulay quotient variety in each dimension $n$; and all reduced modular Cohen-Macaulay quotient varieties can be realized via one of seven families of modular representations
of $C_{2p}$; see Proposition \ref{prop3.1}.
This is the key ingredient in classifying all two- and three-dimensional reduced modular Cohen-Macaulay quotient varieties; see Corollary \ref{coro3.2}. Furthermore, we describe all quotient singularities and varieties in dimensions two and three as follows.
\begin{thm}\label{thm1.3}
Let $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ be an $n$-dimensional, reduced Cohen-Macaulay quotient variety in characteristic $p>2$.
\begin{enumerate}
\item $\mathds{A}_{\mathbb{F}}^{2}/C_{2p}$ is always a hypersurface and ${\rm Sing}_{C_{2p}}(\mathds{A}_{\mathbb{F}}^{2})$ is either empty or consists of the zero point;
\item $\mathds{A}_{\mathbb{F}}^{3}/C_{2p}$ is either a hypersurface or not Gorenstein.
\end{enumerate}
\end{thm}
\noindent We also prove that if $n\geqslant 4$ and $\mathds{A}_{\mathbb{F}}^{n}$ is isomorphic to an $n$-dimensional indecomposable
modular representation of $C_{2p}$, then $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ is never Cohen-Macaulay; see Proposition \ref{prop3.5}.
For the third case, we prove that all reduced modular Cohen-Macaulay quotient varieties can be realized through
direct sums of finitely many 1-dimensional and at most two 2-dimensional indecomposable representations; see Proposition \ref{prop4.1}. We calculate the invariant ring $\mathcal{O}(\mathds{A}_{\mathbb{F}}^{2})^{C_{2p}}$ for any reduced quotient variety $\mathds{A}_{\mathbb{F}}^{2}/C_{2p}$
by giving an explicit set of generators and the relations among these generators; see Propositions \ref{HWp} and \ref{prop4.6}.
As a consequence, the following result
\begin{thm} \label{thm1.4}
For any $1\leqslant i,j\leqslant p-1$, $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is Gorenstein if and only if $i+j=p$.
\end{thm}
\noindent together with Remark \ref{rem4.8} gives rise to a classification of all reduced, two-dimensional modular Gorenstein quotient varieties.
In Section 2, we consider the modular quotient varieties of $C_{2p}$ in characteristic $p$ and give the proofs of Theorem \ref{thm1.2} and Theorem \ref{thm1.3}. In Section 3, we study the reduced modular quotient varieties of $C_{2p}$ ($p>2$) in characteristic $2$, emphasizing a characterization of $2$-dimensional modular Gorenstein quotient varieties
$\mathds{A}^2_{\mathbb{F}}/C_{2p}$ in Theorem \ref{thm1.4}. Section 4 contains a discussion on the McKay correspondence in the modular cases (or wild McKay correspondence), and we present an example to show the necessity of the assumption that $G$ is a small subgroup in Problem \ref{prob1.1}.
\section{\scshape Modular Quotient Varieties of $C_{2p}$ in Characteristic $p$} \label{section2}
Let $W$ be a finite-dimensional representation of a group $G$ over $\mathbb{F}$. Recall that a non-identity invertible linear map $\sigma: W\longrightarrow W$ is called a \textbf{bi-reflection} if the dimension of the image of $\sigma-1$
is less than or equal to 2. We say that $G$ is a \textbf{bi-reflection group} on its representation $W$ if the image of $G$ under the group homomorphism $G\longrightarrow {\rm GL}(W)$ is a group generated by bi-reflections.
Let $\mathbb{F}[W]$ denote the symmetric algebra on the dual space $W^{*}$ of $W$. Choose a basis $e_{1},\dots,e_{n}$ for $W$ and a dual basis $x_{1},\dots,x_{n}$ in $W^{*}$. Then $\mathbb{F}[W]$ can be identified with the polynomial algebra $\mathbb{F}[x_{1},\dots,x_{n}]\cong
\mathcal{O}(\mathds{A}_{\mathbb{F}}^{n})$.
For a finitely generated graded subalgebra $A\subseteq \mathbb{F}[W]$, let $A_{+}$ denote its maximal homogeneous ideal.
The depth of a homogeneous ideal $I\subseteq A_{+}$, written ${\rm depth}_{A}(I)$, is the maximal length of a regular sequence in $I$.
In particular, we write ${\rm depth}(A)={\rm depth}_{A}(A_{+})$.
We define ${\rm def}(A):=\dim(A)-{\rm depth}(A)$ to be the \textbf{Cohen-Macaulay defect} of $A$. Clearly, $A$ is Cohen-Macaulay if and only if
${\rm def}(A)=0$. In fact, the Cohen-Macaulay defect of $A$ measures how far $A$ is from being Cohen-Macaulay.
The following two results are extremely useful for deciding whether a modular invariant ring is a Cohen-Macaulay algebra.
\begin{thm}[Kemper \cite{Kem1999}] \label{kemper99}
Let $W$ be a finite-dimensional modular representation of a finite $p$-group $P$ over $\mathbb{F}$ of characteristic $p>0$. If the invariant ring $\mathbb{F}[W]^{P}$ is a Cohen-Macaulay algebra, then $P$ is a bi-reflection group on $W$.
\end{thm}
\begin{thm}[Ellingsrud-Skjelbred \cite{ES1980} or Kemper \cite{Kem2012}] \label{ES}
Let $G$ be a cyclic group with Sylow $p$-subgroup $P$ and $W$ be any finite-dimensional modular representation of $G$ over a field $\mathbb{F}$ of characteristic $p>0$. Then ${\rm def}(\mathbb{F}[W]^G)=\max\{\textrm{codim}(W^P)-2,0\}$.
\end{thm}
Now we give a proof of Theorem \ref{thm1.2}.
\begin{proof}[Proof of Theorem \ref{thm1.2}] Let $C_{4}$ be generated by $\sigma$ and let $V$ be any finite-dimensional indecomposable representation of $C_{4}$ over $\mathbb{F}$. It is well-known that $V$ must be isomorphic to one of $\{V_{k}\mid 1\leqslant k\leqslant 4\}$, where $\dim(V_{k})=k$ and $V_{k}$ corresponds to the group homomorphism $C_{4}\longrightarrow {\rm GL}(V_{k})$ defined by carrying $\sigma$ to the $k\times k$ Jordan block with 1's on the diagonal; see for example Campbell-Wehlau \cite[Lemma 7.1.3]{CW2011}. More precisely, $V_{1}$ is the trivial representation and $V_{4}$ is the regular one; $V_{1}$ and $V_{2}$ are not faithful, while $V_{3}$ and $V_{4}$ are faithful.
As $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ is reduced, $\mathds{A}_{\mathbb{F}}^{n}$ contains no copy of the trivial representation $V_{1}$ as a direct summand. Thus there exist $a,b,c\in\mathbb{N}$ such that
$\mathds{A}_{\mathbb{F}}^{n}\cong aV_{2}\oplus b V_{3}\oplus c V_{4}.$
This implies that the dimension of the image of $\sigma-1$ is $a+2b+3c$.
The fact that $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ is Cohen-Macaulay, together with Theorem \ref{kemper99}, forces $a+2b+3c$ to be less than or equal to 2.
Hence, $c$ must be zero and clearly, there are only three possibilities: (i) $b=1$ and $a=0$; (ii) $b=0$ and $a=1$; and (iii)
$b=0$ and $a=2$. This proves $n\leqslant 4$ and $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ must be isomorphic to one of $\{V_{3}/C_{4},V_{2}/C_{4},
2V_{2}/C_{4}\}$.
Magma calculations show that the invariant ring $\mathcal{O}(V_{3})^{C_{4}}=\mathbb{F}[x,y,z]^{C_{4}}$ is a hypersurface, minimally generated by
$f_{1} := x, f_{2} := xy+y^{2}, f_{3} := x^2 yz + x^2 z^2 + xy^2 z + xyz^2 + y^2 z^2 + z^4$, and $h := x^2 z + xy^2 + xz^2 + y^3
$ subject to the unique relation: $f_{1}^2 f_{3} + f_{1}f_{2}h + f_{2}^3 + h^2=0.$ Hence
$\mathcal{O}(V_{3}/C_{4})\cong \mathbb{F}[x_{1},\dots,x_{4}]/(f)$ where $f:=x_{1}^2 x_{3} + x_{1}x_{2}x_{4} + x_{2}^3 + x_{4}^2.$
Note that the characteristic of $\mathbb{F}$ is 2 and ${\rm Sing}_{C_{4}}(V_{3})=\{e\in \mathds{A}_{\mathbb{F}}^{4}\mid f(e)=0\textrm{ and } {\rm rank}(\frac{\partial f}{\partial x_{1}}(e),\dots,
\frac{\partial f}{\partial x_{4}}(e))=0\}$. A direct calculation shows that
${\rm Sing}_{C_{4}}(V_{3})=\{(0,0,a,0)\in \mathds{A}_{\mathbb{F}}^{4}\mid a\in\mathbb{F}\} \cong \mathds{A}_{\mathbb{F}}^{1}$.
Similarly, $V_{2}$ is not a faithful representation of $C_{4}$, and the image of $C_{4}$ on $V_{2}$ is isomorphic to the image of $C_{2}$ acting faithfully on $V_{2}$. Thus $\mathcal{O}(V_{2}/C_{4})=\mathcal{O}(V_{2}/C_{2})$ and
$\mathcal{O}(2V_{2}/C_{4})=\mathcal{O}(2V_{2}/C_{2})$. These two invariant rings have already been calculated in Campbell-Wehlau \cite[Theorem 1.11.2]{CW2011}
and \cite[Theorem 1.12.1]{CW2011} respectively. Indeed, $\mathcal{O}(V_{2}/C_{4})=\mathbb{F}[x,y]^{C_{2}}=\mathbb{F}[x,xy+y^{2}]$ is a polynomial algebra
and $\mathcal{O}(2V_{2}/C_{4})=\mathbb{F}[x_{1},y_{1},x_{2},y_{2}]^{C_{2}}$ is a hypersurface, minimally generated by
$\{x_{i},N_{i}:=x_{i}y_{i}+y_{i}^{2},u:=x_{1}y_{2}+x_{2}y_{1}\mid i=1,2\},$
subject to the unique relation: $u^{2}=x_{1}^{2}N_{2}+x_{2}^{2}N_{1}+x_{1}x_{2}u.$
This yields that $\mathds{A}_{\mathbb{F}}^{n}/C_{4}$ is always a hypersurface. Clearly, $V_{2}/C_{4}$ is smooth and
a direct calculation similar to that for $V_{3}$ applies to the case $2V_{2}$. Eventually we derive
${\rm Sing}_{C_{4}}(2V_{2})\cong \mathds{A}_{\mathbb{F}}^{2}$. This completes the proof.
\end{proof}
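The Magma computations quoted in this proof are easy to verify independently. The following sketch (an auxiliary check of ours, not part of the original Magma session) confirms, over $\mathbb{F}_{2}$, the hypersurface relation for $\mathcal{O}(V_{3})^{C_{4}}$, the singular locus of the hypersurface $f$, and the relation for $\mathcal{O}(2V_{2})^{C_{4}}$:

```python
from sympy import symbols, expand, Poly, diff

x, y, z = symbols('x y z')

# Generators of O(V_3)^{C_4} in characteristic 2, as quoted from Magma
f1 = x
f2 = x*y + y**2
f3 = x**2*y*z + x**2*z**2 + x*y**2*z + x*y*z**2 + y**2*z**2 + z**4
h  = x**2*z + x*y**2 + x*z**2 + y**3

def zero_mod2(e, *gens):
    # expression e is the zero polynomial over GF(2)
    return Poly(expand(e), *gens, modulus=2).is_zero

# the unique relation f1^2*f3 + f1*f2*h + f2^3 + h^2 = 0 over GF(2)
assert zero_mod2(f1**2*f3 + f1*f2*h + f2**3 + h**2, x, y, z)

# Singular locus of the hypersurface f = x1^2*x3 + x1*x2*x4 + x2^3 + x4^2
x1, x2, x3, x4, t = symbols('x1 x2 x3 x4 t')
f = x1**2*x3 + x1*x2*x4 + x2**3 + x4**2
grad = [Poly(diff(f, v), x1, x2, x3, x4, modulus=2).as_expr()
        for v in (x1, x2, x3, x4)]
pt = {x1: 0, x2: 0, x3: t, x4: 0}   # a point of the line {(0,0,a,0)}
assert f.subs(pt) == 0 and all(g.subs(pt) == 0 for g in grad)

# the relation u^2 = x1^2*N2 + x2^2*N1 + x1*x2*u for 2V_2/C_4 (char 2: - = +)
y1, y2 = symbols('y1 y2')
N1, N2, u = x1*y1 + y1**2, x2*y2 + y2**2, x1*y2 + x2*y1
assert zero_mod2(u**2 + x1**2*N2 + x2**2*N1 + x1*x2*u, x1, x2, y1, y2)
```

In particular, the gradient of $f$ reduces mod 2 to $(x_{2}x_{4},\,x_{1}x_{4}+x_{2}^{2},\,x_{1}^{2},\,x_{1}x_{2})$, which vanishes on the hypersurface exactly along the line $\{(0,0,a,0)\}$, as asserted in the proof.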
To accomplish the proof of Theorem \ref{thm1.3}, we suppose char$(\mathbb{F})=p>2$ throughout the rest of this section. Then $C_{2p}$ is isomorphic to the direct product of $C_{2}$ and
$C_{p}$.
As shown in Alperin \cite[pages 24--25]{Alp1993}, up to equivalence there are only $2p$ indecomposable modular representations $\{V_{k}^{\pm} \mid 1\leqslant k\leqslant p\}$ of $C_{2p}$, where $\dim(V_{k}^{\pm})=k$ for all $k$. More precisely, for $1\leqslant k\leqslant p$, $V_{k}^+$ corresponds to the group homomorphism carrying $\sigma$ to $T^+_k$, the $k\times k$ Jordan block with 1's on the diagonal; and $V_{k}^-$ corresponds to the group homomorphism carrying $\sigma$ to $T^-_k$, the $k\times k$ Jordan block with $-1$'s on the diagonal.
Note that the orders of $T^+_k$ and $T^-_k$
are $p$ and $2p$ respectively for $k\geq 2$. Thus no $V_k^+$ is faithful, and every $V_k^-$ is faithful except for $k=1$.
Further, $(T^-_k)^2$ and $T^+_k$ generate the same Sylow $p$-subgroup $P$ of $C_{2p}$, where $|P|=p$.
Hence, the invariant ring $\mathbb{F}[V_k^-]^{C_{2p}}$ can be viewed as a subring of $\mathbb{F}[V_k^+]^{C_{2p}}$ via $\mathbb{F}[V_k^-]^{C_{2p}}\subseteq \mathbb{F}[V_k^-]^{P}\cong\mathbb{F}[V_k^+]^{C_{2p}}$.
\begin{proposition}\label{prop3.1}
Suppose char$(\mathbb{F})=p>2$ and $W$ is a finite-dimensional reduced modular representation of $C_{2p}$ over $\mathbb{F}$ such that
$\mathbb{F}[W]^{C_{2p}}$ is Cohen-Macaulay. If $W$ is faithful, then it must be isomorphic to one of $\{V_2^+\oplus b V_1^-, 2V_2^+\oplus b V_1^-, V_3^+\oplus b V_1^-, b_1 V_1^-\oplus V_2^-, b_1 V_1^-\oplus 2V_2^-, b_1 V_1^-\oplus V_3^-, V_2^+\oplus V_2^-\oplus b_{1}V_1^-\}$ where $b\in\mathbb{N}^+$ and $b_1\in \mathbb{N}$; if $W$ is not faithful, then it must be isomorphic to one of $\{b V_1^-, V_2^+, 2V_2^+, V_3^+\}$
where $b\in\mathbb{N}^+$.
\end{proposition}
\begin{proof}
As $W$ is reduced, we may suppose $a_2,\dots,a_p,b_1,\dots,b_p\in\mathbb{N}$ such that
$W\cong a_2V_2^+\oplus \cdots \oplus a_pV_p^+\oplus b_1V_1^-\oplus b_2V_2^-\oplus\cdots \oplus b_pV_p^-.$
Note that $C_{2p}$ is generated by $\sigma$ and $P$ is generated by $\sigma^2$, thus
$\textrm{codim}(W^P)=\dim(W)-\dim(W^P)$, where
$W^P=\{w\in W\mid \delta(w)=w,\textrm{ for all }\delta\in P\}=\{w\in W\mid \sigma^2(w)=w\}$.
Hence, $\textrm{codim}(W^P)={\rm rank}(\sigma^2-1)$.
Since the rank of $\sigma^2-1$ on $V_k^{\pm}$ is $k-1$, it follows that the rank of $\sigma^2-1$ on $W$
is equal to $a_2+a_3 \cdot 2+\cdots+a_p\cdot (p-1)+b_1\cdot 0+b_2+b_3\cdot 2+\cdots+b_p\cdot (p-1)$.
Since $\mathbb{F}[W]^{C_{2p}}$ is Cohen-Macaulay, ${\rm def}(\mathbb{F}[W]^{C_{2p}})$ must be zero. It follows from Theorem \ref{ES} that $\textrm{codim}(W^P)={\rm rank}(\sigma^2-1)\leq 2$, i.e., $$\sum_{k=2}^{p} (a_{k}+b_{k})(k-1)\leqslant 2.$$
This implies that $a_{k}=b_{k}=0$ for $k=4,5,\dots,p$; and so $a_{2}+b_{2}+2a_{3}+2b_{3}\leqslant 2.$
Clearly, $(a_{2}, b_{2}, a_{3}, b_{3})$ must be one of $$\{(1,0,0,0),(0,1,0,0), (1,1,0,0),(2,0,0,0), (0,2,0,0),(0,0,1,0),(0,0,0,1)\}.$$
Note that $b_{1}$ is arbitrary; so if $W$ is faithful, then it must be isomorphic to one of $\{V_2^+\oplus b V_1^-, 2V_2^+\oplus b V_1^-, V_3^+\oplus b V_1^-, b_1 V_1^-\oplus V_2^-, b_1 V_1^-\oplus 2V_2^-, b_1 V_1^-\oplus V_3^-, V_2^+\oplus V_2^-\oplus b_{1}V_1^-\}$ for some $b\in\mathbb{N}^+$ and $b_1\in \mathbb{N}$; if $W$ is not faithful, then it must be isomorphic to one of $\{b V_1^-, V_2^+, 2V_2^+, V_3^+\}$ for some $b\in\mathbb{N}^+$.
This completes the proof.
\end{proof}
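The final enumeration in the proof can be reproduced mechanically. A brief script (ours, for illustration) lists all nonzero solutions of $a_{2}+b_{2}+2(a_{3}+b_{3})\leqslant 2$; the zero tuple corresponds to the non-faithful case $W\cong bV_{1}^{-}$, which is listed separately:

```python
from itertools import product

# Enumerate (a2, b2, a3, b3) with a2 + b2 + 2*(a3 + b3) <= 2, as in the proof;
# each coordinate is at most 2, so range(3) suffices.
sols = sorted(t for t in product(range(3), repeat=4)
              if t[0] + t[1] + 2*(t[2] + t[3]) <= 2 and t != (0, 0, 0, 0))
assert len(sols) == 7
assert set(sols) == {(1,0,0,0), (0,1,0,0), (1,1,0,0), (2,0,0,0),
                     (0,2,0,0), (0,0,1,0), (0,0,0,1)}
```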
\begin{corollary}\label{coro3.2}
Let $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ be an $n$-dimensional, reduced Cohen-Macaulay quotient variety in characteristic $p>2$.
Then $\mathds{A}_{\mathbb{F}}^{2}/C_{2p}$ is isomorphic to the quotient obtained from one of $\{V_2^+,2 V_1^-, V_2^-\}$; and $\mathds{A}_{\mathbb{F}}^{3}/C_{2p}$ is isomorphic to the quotient obtained from one of $$\{V_2^+\oplus V_1^-,V_{3}^{+},3V_1^-,V_3^-,V_2^-\oplus V_1^-\}.$$
\end{corollary}
\begin{lemma}\label{lem3.3}
Let $V_{1}$ be the one-dimensional irreducible nontrivial representation of $C_{2}$ over $\mathbb{F}$ of characteristic $p>2$.
Then $\mathbb{F}[3V_{1}]^{C_{2}}=\mathbb{F}[x,y,z]^{C_{2}}$ is Cohen-Macaulay but not Gorenstein, minimally generated by $\{x^{2},y^{2},z^{2},xy,xz,yz\}$. Moreover, $\mathbb{F}[3V_{1}]^{C_{2}}$ is isomorphic to the quotient algebra $\mathbb{F}[x_{1},x_{2},\dots,x_{6}]/(r_{1},r_{2},\dots,r_{6})$ where
$r_{1}=x_{4}^{2}-x_{1}x_{2}, r_{2}=x_{5}^{2}-x_{1}x_{3}, r_{3}=x_{6}^{2}-x_{2}x_{3}, r_{4}=x_{4}x_{5}-x_{1}x_{6},
r_{5}=x_{4}x_{6}-x_{2}x_{5}$ and $r_{6}=x_{5}x_{6}-x_{3}x_{4}$.
\end{lemma}
\begin{proof}
Note that the generator of $C_{2}$ sends $x,y$ and $z$ to $-x,-y$ and $-z$ respectively. Thus there are no invariants of degree 1.
By Noether's bound theorem, $\mathbb{F}[3V_{1}]^{C_{2}}$ can be generated by invariants of degree 2.
Since $x^{2},y^{2},z^{2},xy,xz,yz$ form a basis of $\mathbb{F}[3V_{1}]_{2}$ and all are invariants, they minimally generate
$\mathbb{F}[3V_{1}]^{C_{2}}$. By Hochster-Eagon \cite[Proposition 13]{HE1971} and Watanabe \cite[Theorem 1]{Wat1974b} we see that
$\mathbb{F}[3V_{1}]^{C_{2}}$ is Cohen-Macaulay but not Gorenstein. A Magma calculation proves the second statement.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1.3}]
We first consider the two-dimensional case, assuming $\mathbb{F}[V_2^+]=\mathbb{F}[2 V_1^-]=\mathbb{F}[V_2^-]=\mathbb{F}[x,y]$. Since the image of $C_{2p}$ on $V_{2}^{+}$ is $P$, it follows that
$\mathbb{F}[V_{2}^{+}]^{C_{2p}}=\mathbb{F}[x,y]^{P}=\mathbb{F}[x,N(y)]$ is a polynomial algebra, where $N(y):=y^{p}-yx^{p-1}$.
Hence, $V_{2}^{+}/C_{2p}$ is smooth. We observe that $\mathbb{F}[2V_{1}^{-}]^{C_{2p}}=\mathbb{F}[x^{2},xy,y^{2}]\cong \mathbb{F}[z_{1},z_{2},z_{3}]/(z_{2}^{2}-z_{1}z_{3})$ and thus ${\rm Sing}_{C_{2p}}(2V_{1}^{-})$ consists of the zero point in $\mathds{A}_{\mathbb{F}}^{3}$.
Moreover, $\mathbb{F}[V_2^-]^{C_{2p}}=(\mathbb{F}[V_2^-]^{P})^{C_{2p}/P}\cong\mathbb{F}[x,N(y)]^{C_{2}}$ where the generator of $C_{2}$ sends
$x$ to $-x$ and $N(y)$ to $-N(y)$ respectively. Thus $\mathbb{F}[V_2^-]^{C_{2p}}=\mathbb{F}[x^{2},x\cdot N(y),N(y)^{2}]\cong \mathbb{F}[2V_{1}^{-}]^{C_{2p}}$. Hence, ${\rm Sing}_{C_{2p}}(V_{2}^{-})$ also consists of the zero point in $\mathds{A}_{\mathbb{F}}^{3}$. This proves the first statement.
For the three-dimensional case we suppose $\{x,y,z\}$ is the dual basis for the representation.
By Kemper \cite[Proposition 16]{Kem1996} we see that $\mathbb{F}[V_2^+\oplus V_1^-]^{C_{2p}}=\mathbb{F}[x,N(y),z^{2}]$.
Note that $\mathbb{F}[V_{3}^{+}]^{C_{2p}}=\mathbb{F}[V_{3}^{+}]^{C_{p}}$ and the latter has been calculated by Campbell-Wehlau
\cite[Section 4.10]{CW2011}. Thus $\mathbb{F}[V_{3}^{+}]^{C_{2p}}$ is a hypersurface, generated by $\{x,d,N^{P}(y),N^{P}(z)\}$
where $d=y^{2}-2xz-xy$ and $N^{P}(*)$ denotes the norm of $*$. As $\mathbb{F}[3V_{1}^{-}]^{C_{2p}}\cong \mathbb{F}[3V_{1}]^{C_{2}}$, it follows from Lemma \ref{lem3.3} that $\mathbb{F}[3V_{1}^{-}]^{C_{2p}}$ is Cohen-Macaulay but not Gorenstein, minimally generated by six invariants subject to six relations among these generators.
To calculate $\mathbb{F}[V_{3}^{-}]^{C_{2p}}$, we consider the subgroup $P$ of $C_{2p}$ and the quotient group $C_{2p}/P\cong C_{2}$. We have seen that $\mathbb{F}[V_{3}^{-}]^{P}=\mathbb{F}[V_{3}^{+}]^{C_{2p}}=\mathbb{F}[x,d,N^{P}(y),N^{P}(z)]$. Since
the generator of $C_{2}$ fixes $d$, and sends $x, N^{P}(y),N^{P}(z)$ to $-x, -N^{P}(y),-N^{P}(z)$ respectively, there exists a natural $C_{2}$-equivariant algebra surjection:
$$\rho:\mathbb{F}[x_{1},x_{2},x_{3},x_{4}]\longrightarrow \mathbb{F}[x,N^{P}(y),N^{P}(z),d]=\mathbb{F}[V_{3}^{-}]^{P}$$
carrying $x_{1}$ to $x$, $x_{2}$ to $N^{P}(y)$, $x_{3}$ to $N^{P}(z)$, and $x_{4}$ to $d$. Here
$\mathbb{F}[x_{1},x_{2},x_{3},x_{4}]^{C_{2}}\cong \mathbb{F}[3V_{1}]^{C_{2}}\otimes_{\mathbb{F}}\mathbb{F}[x_{4}]$ and we take $x_{1},x_{2},x_{3}$
as a basis of $(3V_{1})^{*}$ dual to the standard basis of $3V_{1}$. Since $C_{2}$ is linearly reductive,
$\rho$ restricts to the following $\mathbb{F}$-algebra surjection:
$$\rho^{C_{2}}:\mathbb{F}[x_{1},x_{2},x_{3},x_{4}]^{C_{2}}\longrightarrow \mathbb{F}[x,N^{P}(y),N^{P}(z),d]^{C_{2}}=(\mathbb{F}[V_{3}^{-}]^{P})^{C_{2}}\cong
\mathbb{F}[V_{3}^{-}]^{C_{2p}}.$$
By Lemma \ref{lem3.3}, we see that $\mathbb{F}[V_{3}^{-}]^{C_{2p}}$ is generated by
$$\{d,x^{2},N^{P}(y)^{2},N^{P}(z)^{2}, xN^{P}(y),xN^{P}(z), N^{P}(y)\cdot N^{P}(z)\}.$$
Recall that $N^{P}(y)^{2}\in \mathbb{F}[x,d,N^{P}(z)]\oplus N^{P}(y)\cdot \mathbb{F}[x,d,N^{P}(z)]$; see Campbell-Wehlau \cite[page 77]{CW2011}.
Hence, $\mathbb{F}[V_{3}^{-}]^{C_{2p}}$ is minimally generated by
$\{d,x^{2},N^{P}(z)^{2}, xN^{P}(y),xN^{P}(z), N^{P}(y)\cdot N^{P}(z)\}.$
A similar argument applies to the last subcase. Note that $\mathbb{F}[V_2^-\oplus V_1^-]^{P}=\mathbb{F}[x,N(y),z]$
and the generator of $C_{2p}/P$ carries $x,N(y),z$ to $-x,-N(y),-z$ respectively. Hence,
$\mathbb{F}[V_2^-\oplus V_1^-]^{C_{2p}}$ is minimally generated by $\{x^{2},N(y)^{2},z^{2},x\cdot N(y),xz,z\cdot N(y)\}$. Therefore, Smith
\cite[Theorem 1.2]{Smi1996}, together with Watanabe \cite[Theorem 1]{Wat1974b}, implies that $\mathbb{F}[V_{3}^{-}]^{C_{2p}}$ and $\mathbb{F}[V_2^-\oplus V_1^-]^{C_{2p}} (\cong \mathbb{F}[3V_{1}]^{C_{2}})$ are Cohen-Macaulay but not Gorenstein.
\end{proof}
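As a quick sanity check of the two-dimensional computations above, one can verify symbolically that $N(y)=y^{p}-yx^{p-1}$ is $P$-invariant and that the $C_{2}$-invariants $x^{2}$, $x\cdot N(y)$, $N(y)^{2}$ satisfy the cone relation $z_{2}^{2}-z_{1}z_{3}=0$. The sketch below takes $p=5$ and assumes the standard convention that a generator of $P$ acts on the dual basis by $x\mapsto x$, $y\mapsto y+x$ (any other choice of coordinates would work similarly):

```python
from sympy import symbols, expand, Poly

x, y = symbols('x y')
p = 5  # any odd prime works; we check p = 5

# N(y) = y^p - y*x^(p-1) is fixed by the substitution y -> y + x modulo p,
# since (y + x)^p = y^p + x^p in characteristic p (Frobenius)
N = y**p - y*x**(p - 1)
shifted = N.subs(y, y + x)
assert Poly(expand(shifted - N), x, y, modulus=p).is_zero

# The C_2-invariants z1 = x^2, z2 = x*N, z3 = N^2 of V_2^- satisfy the
# cone relation z2^2 - z1*z3 = 0, matching F[2V_1^-]^{C_2p}
z1, z2, z3 = x**2, x*N, N**2
assert expand(z2**2 - z1*z3) == 0
```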
We are currently unable to characterize all singularities in ${\rm Sing}_{C_{2p}}(V_{3}^{-})$, even for the case $p=3$, but we can exhibit a partial subset of singularities, as the following example shows.
\begin{example}{\rm Let $p=3$ and $v\in {\rm Sing}_{C_{6}}(V_{3}^{-})$. A Magma calculation shows that the invariant ring $\mathbb{F}[V_{3}^{-}]^{C_{6}}=\mathbb{F}[x,y,z]^{C_{6}}$ is minimally generated by the primary invariants
$f_{1}:=x^2, f_{2}:=xy + xz + y^2, f_{3}:= x^2 y^2 z^2 + x^2 y z^3 + x^2 z^4 + 2x y^3 z^2 + xy^2 z^3 + xyz^4 +
2xz^5 + y^4z^2 + y^2z^4 + z^6$ and the secondary invariants
$ h_{1}:=x^3 z + x^2 y^2 + xy^3, h_{2}:=x^2 yz + 2x^2 z^2 + xy^2 z + 2xz^3,
h_{3}:= x^4 yz + 2x^4 z^2 + x^3 y^2 z + x^3 y z^2 + x^3 z^3 + x^2y^3z +
2x^2 z^4 + 2xy^4 z + 2xy^3 z^2 + 2xy^2 z^3 + y^5 z + 2y^3z^3$, subject to the following relations:
$r_{1}:=f_{1}^2 h_{2} + f_{1}f_{2}^3 + 2f_{1}f_{2}h_{1} + 2h_{1}^2,
r_{2}:= 2f_{1}^2h_{2} + f_{1}h_{3} + 2h_{1}h_{2},
r_{3}:=f_{1}f_{3} + 2h_{2}^2,
r_{4}:=2f_{1}^3h_{2} + f_{1}^2f_{2}h_{2} + f_{1}^2 f_{3} + f_{2}^2h_{3} + 2f_{1}f_{2}h_{3} + f_{2}^3h_{2} + 2h_{1}h_{3},
r_{5}:=f_{1}^2f_{3} + f_{3}h_{1} + 2h_{2}h_{3},
r_{6}:=f_{1}^3 f_{3} + 2f_{1}f_{3}h_{1} + f_{1}f_{3}h_{2} + f_{2}^3f_{3} + 2f_{2}f_{3}h_{1} + 2h_{3}^2.$
Thus there exists an $\mathbb{F}$-algebra isomorphism from $\mathbb{F}[x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}]/(R_{1},\dots,R_{6})$ to $\mathbb{F}[V_{3}^{-}]^{C_{6}}$, which carries $x_{i}$ to $f_{i}$, $y_{i}$ to $h_{i}$, and $R_{j}$ to $r_{j}$, respectively.
We may assume that $v=(a_{1},\dots,a_{6})$ for some $a_{i}\in \mathbb{F}$, and suppose $a_{1}=0$. As $r_{1}(v)=r_{3}(v)=0$, we get $a_{4}=a_{5}=0$. If $a_{2}=0$, then the fact that $r_{4}(v)=0$ implies $a_{6}=0$; if $a_{2}\neq 0$, then the fact that $r_{6}(v)=0$ also implies $a_{6}=0$. Thus $v=(0,a_{2},a_{3},0,0,0)$. Since $v$ is a singular point, the rank of the Jacobian matrix of all the $r_{i}$ at $v$ is less than
3, which forces $a_{2}=0$. Thus $\{v=(0,0,a,0,0,0)\mid a\in \mathbb{F}\}$ is a subset of singularities.
$\hbo$}\end{example}
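The relations in this example can be spot-checked symbolically. The following sketch (our own verification, independent of the Magma session) confirms that the first three relations vanish identically over $\mathbb{F}_{3}$:

```python
from sympy import symbols, expand, Poly

x, y, z = symbols('x y z')

# Generators of F[V_3^-]^{C_6} quoted in the example (p = 3)
f1 = x**2
f2 = x*y + x*z + y**2
f3 = (x**2*y**2*z**2 + x**2*y*z**3 + x**2*z**4 + 2*x*y**3*z**2 + x*y**2*z**3
      + x*y*z**4 + 2*x*z**5 + y**4*z**2 + y**2*z**4 + z**6)
h1 = x**3*z + x**2*y**2 + x*y**3
h2 = x**2*y*z + 2*x**2*z**2 + x*y**2*z + 2*x*z**3
h3 = (x**4*y*z + 2*x**4*z**2 + x**3*y**2*z + x**3*y*z**2 + x**3*z**3
      + x**2*y**3*z + 2*x**2*z**4 + 2*x*y**4*z + 2*x*y**3*z**2
      + 2*x*y**2*z**3 + y**5*z + 2*y**3*z**3)

def zero_mod3(e):
    # expression e is the zero polynomial over GF(3)
    return Poly(expand(e), x, y, z, modulus=3).is_zero

# spot-check the relations r1, r2, r3 over GF(3)
assert zero_mod3(f1**2*h2 + f1*f2**3 + 2*f1*f2*h1 + 2*h1**2)   # r1
assert zero_mod3(2*f1**2*h2 + f1*h3 + 2*h1*h2)                  # r2
assert zero_mod3(f1*f3 + 2*h2**2)                               # r3
```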
We close this section by proving the following result.
\begin{proposition}\label{prop3.5}
For $k\geq 4$, $V_k^{\pm}/C_{2p}$ are not Cohen-Macaulay.
\end{proposition}
\begin{lemma}\label{lem3.6}
If $k\geq 2$ and $f\in \mathbb{F}[V_k^+]^{C_{2p}}$ is any homogeneous polynomial of positive degree, then
$T^-_k(f)=(-1)^{\deg(f)}f$.
\end{lemma}
\begin{proof}
Let $\alpha=\textrm{diag}\{-1,-1,\dots,-1\}$ be the $k\times k$ diagonal matrix. Then
$T^-_k=\alpha\circ T^+_k$. Note that $\alpha(x_i)=-x_i$ for $1\leq i\leq k$. Thus
$\alpha(x_1^{a_1}\cdots x_k^{a_k})=(-1)^{a_1+\dots+a_k}x_1^{a_1}\cdots x_k^{a_k}$ for any monomial $x_1^{a_1}\cdots x_k^{a_k}$. As $f$ is homogeneous and $T^+_k(f)=f$, we get $T^-_k(f)=(\alpha\circ T^+_k)(f)=\alpha(f)=(-1)^{\deg(f)}f$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop3.5}]
We suppose $\mathbb{F}[V_k^+]\cong\mathbb{F}[x_1,x_2,\dots,x_k]\cong \mathbb{F}[V_k^-]$. Note that $\mathbb{F}[V_k^+]^{C_{2p}}=\mathbb{F}[V_k^+]^{P}$, where $P$ is the cyclic group of order $p$, generated by $T_k^+$. Since the dimension of the image of $T_k^+-1$ is equal to $k-1$, it follows from Theorem \ref{kemper99} that $\mathbb{F}[V_k^+]^{P}$ is not Cohen-Macaulay for $k\geq 4$. Hence, $V_k^{+}/C_{2p}$ is not Cohen-Macaulay.
To show $V_k^{-}/C_{2p}$ is not Cohen-Macaulay, we let
$\ell_1:=x_1,\ell_2:=x_2^2-x_1(x_2+2x_3)$, $\ell_3:=x_2^3+x_1^2(3x_4-x_2)-3x_1x_2x_3$, and $N(x_2):=x_2^p-x_1^{p-1}x_2$. By Shank \cite[Theorem 4.1 and Section 2]{Sha1998}, we see that $\ell_1,\ell_2,\ell_3,N(x_{2})\in \mathbb{F}[V_k^+]^{P}$ and they are algebraically independent over $\mathbb{F}$.
Thus $\{\ell_1,\ell_2,N(x_2)\}$ is part of a homogeneous system of parameters for $\mathbb{F}[V_k^+]^{C_{2p}}$. Moreover, we \textit{claim} that $\{\ell_1,\ell_2,N(x_2)\}$ is not a regular sequence in $\mathbb{F}[V_k^+]^{C_{2p}}$. In fact, note that $p>2$ and $\ell_3\cdot N(x_2)-\ell_2^{\frac{p+3}{2}}\in (\ell_1)\mathbb{F}[V_k^+].$ We may assume that
$\ell_3\cdot N(x_2)-\ell_2^{\frac{p+3}{2}}=\ell_1\cdot\ell$ for some $\ell\in \mathbb{F}[V_k^+]$ with $\deg(\ell)>0$.
Thus $\ell\in \mathbb{F}(V_k^+)^{C_{2p}}\cap \mathbb{F}[V_k^+]=\mathbb{F}[V_k^+]^{C_{2p}}$, where $\mathbb{F}(V_k^+)^{C_{2p}}$ denotes the invariant field. This means $\ell_3\cdot N(x_2)-\ell_2^{\frac{p+3}{2}}\in (\ell_1)\mathbb{F}[V_k^+]^{C_{2p}}$, and
\begin{equation}\label{e3.1}
\ell_3\cdot N(x_2)\in (\ell_1,\ell_2)\mathbb{F}[V_k^+]^{C_{2p}}.
\end{equation}
To accomplish the proof of the claim, we need to show that $\ell_{3}$ is not in the ideal $(\ell_1,\ell_2)\mathbb{F}[V_k^+]^{C_{2p}}$, i.e., the image of $\ell_{3}$ in the quotient ring
$\mathbb{F}[V_k^+]^{C_{2p}}/(\ell_1,\ell_2)\mathbb{F}[V_k^+]^{C_{2p}}$ is not zero, which together with (\ref{e3.1}) will imply that the image of $N(x_{2})$
in $\mathbb{F}[V_k^+]^{C_{2p}}/(\ell_1,\ell_2)\mathbb{F}[V_k^+]^{C_{2p}}$ is a zerodivisor.
We use the graded lexicographic monomial order with $x_{2}>x_{1}>x_{3}>x_{4}>\cdots>x_{k}$ and use ${\rm LM}(f)$ to denote the leading monomial of a polynomial $f$. Assume by way of contradiction that $\ell_{3}=\ell_{1}f_{1}+\ell_{2}f_{2}$ for some $f_{1},f_{2}\in\mathbb{F}[V_k^+]^{C_{2p}}$.
Since ${\rm LM}(\ell_{3})=x_{2}^{3}>{\rm LM}(\ell_{1}f_{1})$, we have ${\rm LM}(\ell_{3})={\rm LM}(\ell_{2}f_{2})$. Since ${\rm LM}(\ell_2)=x_2^2$, we have ${\rm LM}(f_{2})=x_{2}$.
However, no element of $\mathbb{F}[V_k^+]^{C_{2p}}$ has leading monomial $x_{2}$: any invariant of degree 1 is a scalar multiple of $x_{1}$.
Therefore,
$\{\ell_1,\ell_2,N(x_{2})\}$ is not a regular sequence in $\mathbb{F}[V_k^+]^{C_{2p}}$.
Now we are ready to show that $V_k^{-}/C_{2p}$ is not Cohen-Macaulay.
We define $ \widetilde{\ell_1}:=\ell_1^2, \widetilde{\ell_3}:=\ell_{1}\cdot\ell_3$ and $\widetilde{N}:=\ell_{1}\cdot N(x_{2})$.
By Lemma \ref{lem3.6} we see that $\widetilde{\ell_1},\ell_{2},\widetilde{\ell_3},\widetilde{N}\in \mathbb{F}[V_k^-]^{C_{2p}}$.
As $\widetilde{\ell_1},\ell_2,\widetilde{N}$ are algebraically independent over $\mathbb{F}$, they can be extended to
a homogeneous system of parameters for $\mathbb{F}[V_k^-]^{C_{2p}}$. To show $\mathbb{F}[V_k^-]^{C_{2p}}$
is not Cohen-Macaulay, it suffices to show that $\{\widetilde{\ell_1},\ell_2,\widetilde{N}\}$ is not a regular sequence in
$\mathbb{F}[V_k^-]^{C_{2p}}$. We have seen that $\ell_3\cdot N(x_2)-\ell_2^{\frac{p+3}{2}}\in (\ell_1)\mathbb{F}[V_k^+]^{C_{2p}}$, so
$$\widetilde{\ell_3}\cdot \widetilde{N}-\ell_2^{\frac{p+3}{2}}\cdot\widetilde{\ell_{1}}\in (\widetilde{\ell_1})\mathbb{F}[V_k^-]^{C_{2p}}.$$
Namely,
$
\widetilde{\ell_3}\cdot \widetilde{N}\in (\widetilde{\ell_1},\ell_2)\mathbb{F}[V_k^-]^{C_{2p}}.
$
As in the previous paragraph, it is sufficient to show that $\widetilde{\ell_{3}}$ is not in the ideal $(\widetilde{\ell_1},\ell_2)\mathbb{F}[V_k^-]^{C_{2p}}$.
Assume by way of contradiction that $\widetilde{\ell_{3}}=\widetilde{\ell_1}\cdot g_{1}+\ell_2\cdot g_{2}$ for $g_{1},g_{2}\in \mathbb{F}[V_k^-]^{C_{2p}}$. Note that $\mathbb{F}[V_k^-]^{C_{2p}}\subseteq \mathbb{F}[V_k^-]^{C_{p}}\cong\mathbb{F}[V_k^+]^{C_{p}}=\mathbb{F}[V_k^+]^{C_{2p}}$, so we could work in $\mathbb{F}[V_k^+]^{C_{p}}$, which is factorial; see for example Campbell-Wehlau \cite[Theorem 3.8.1]{CW2011}. Since $\ell_{1}\cdot \ell_{3}=\ell_{1}\cdot\ell_{1}\cdot g_{1}+\ell_{2}\cdot g_{2}$, it follows that $\ell_{1}$ divides $\ell_{2}\cdot g_{2}$.
As $\ell_{1}$ does not divide $\ell_{2}$, we see that $g_{2}$ must be divisible by $\ell_{1}$.
Thus $\ell_{3}=\ell_{1}\cdot g_{1}+\ell_{2}\cdot g_{2}'$ for some $g_{2}'\in \mathbb{F}[V_k^+]^{P}=\mathbb{F}[V_k^+]^{C_{2p}}$.
This contradicts the fact that $\ell_{3}$ is not in the ideal $(\ell_1,\ell_2)\mathbb{F}[V_k^+]^{C_{2p}}$.
Therefore, $\mathbb{F}[V_k^-]^{C_{2p}}$ is not Cohen-Macaulay, and the proof is completed.
\end{proof}
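For the smallest case $p=3$ (where $\ell_{2}^{(p+3)/2}=\ell_{2}^{3}$), the ideal membership $\ell_{3}\cdot N(x_{2})-\ell_{2}^{(p+3)/2}\in(\ell_{1})\mathbb{F}[V_k^+]$ used in the proof can be checked directly, since membership in the principal ideal $(x_{1})$ of the polynomial ring amounts to vanishing after the substitution $x_{1}=0$:

```python
from sympy import symbols, expand

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
p = 3  # smallest admissible prime; exponent (p+3)/2 = 3

# the elements from Shank's generating set used in the proof
l1 = x1
l2 = x2**2 - x1*(x2 + 2*x3)
l3 = x2**3 + x1**2*(3*x4 - x2) - 3*x1*x2*x3
N  = x2**p - x1**(p - 1)*x2

# l3*N - l2^((p+3)/2) lies in (l1): setting x1 = 0 kills every term,
# since modulo x1 we have l2 = x2^2, l3 = x2^3, N = x2^3
delta = expand(l3*N - l2**((p + 3)//2))
assert delta != 0                    # the combination itself is nontrivial
assert delta.subs(x1, 0) == 0        # but it vanishes modulo (x1)
```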
\section{\scshape Modular Quotient Varieties of $C_{2p}(p>2)$ in Characteristic 2}
In this section we suppose $p>2$, char$(\mathbb{F})=2$, and $C_{2p}$ is generated by $\sigma$.
It is well-known that up to equivalence, there are only $2p$ indecomposable modular representations $\{V_{i},W_{i}\mid 0\leqslant i\leqslant p-1\}$ of $C_{2p}$, where $\dim(V_{i})=1$ and $\dim(W_{i})=2$ for all $i$. Moreover, let $\{1=\xi^{0},\xi,\xi^{2},\dots,\xi^{p-1}\}$ denote the set of all roots of the polynomial $X^{p}-1$ in $\mathbb{F}$. Then as a $C_{2p}$-representation, $V_{i}$ corresponds to the group homomorphism defined by $\sigma\mapsto \xi^{i}$. Note that $V_{0}$ is the trivial representation and these $V_{i}$ are exactly all irreducible representations of $C_{2p}$ over $\mathbb{F}$. For $0\leqslant i\leqslant p-1$, $W_{i}$ corresponds to the group homomorphism defined by $$\sigma\mapsto \begin{pmatrix}
\xi^{i} & 0\\
1& \xi^{i}
\end{pmatrix};$$
see Alperin \cite[pages 24--25]{Alp1993} for details.
\begin{proposition}\label{prop4.1}
Let $p$ be a prime with $p>{\rm char}(\mathbb{F})=2$, and let $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ be a reduced Cohen-Macaulay modular quotient variety.
Then there exist $a_{1},\dots,a_{p-1}\in\mathbb{N}$ and $b,c\in\{0,1\}$ such that
$$\mathds{A}_{\mathbb{F}}^{n}/C_{2p}\cong (\oplus_{i=1}^{p-1} a_{i}V_{i}\oplus bW_{k}\oplus cW_{s})/C_{2p}$$
where $0\leqslant k,s\leqslant p-1$.
\end{proposition}
\begin{proof}
As a reduced modular representation of $C_{2p}$, $\mathds{A}_{\mathbb{F}}^{n}$ is isomorphic to
$(\oplus_{i=1}^{p-1} a_{i}V_{i})\oplus (\oplus_{j=0}^{p-1} b_{j}W_{j})$ where
$a_{i},b_{j}\in\mathbb{N}$. Let $P$ be the Sylow 2-subgroup of $C_{2p}$, generated by $\sigma^{p}$. Since $\mathds{A}_{\mathbb{F}}^{n}/C_{2p}$ is Cohen-Macaulay, it follows from Theorem \ref{ES} that codim$((\mathds{A}_{\mathbb{F}}^{n})^{P})\leqslant 2$, i.e., the rank of $\sigma^{p}-1$ is less than or equal to 2. Note that the rank of $\sigma^{p}-1$ on each $V_{i}$ and $W_{j}$ is zero and one, respectively.
Hence, $\mathds{A}_{\mathbb{F}}^{n}$ is isomorphic to $\oplus_{i=1}^{p-1} a_{i}V_{i}\oplus bW_{k}\oplus cW_{s}$ for some $b,c\in\{0,1\}$ and $0\leqslant k,s\leqslant p-1$, as desired.
\end{proof}
The main purpose of this section is to study 2-dimensional reduced modular quotient varieties $\mathds{A}_{\mathbb{F}}^{2}/C_{2p}$ and their singularities over $\mathbb{F}$. By Proposition \ref{prop4.1} we see that $\mathds{A}_{\mathbb{F}}^{2}$ is isomorphic to either $V_{i}\oplus V_{j}$ or $W_{k}$ for some $1\leqslant i,j\leqslant p-1$ and $0\leqslant k\leqslant p-1$.
This leads us to calculate two families of invariant rings $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ and $\mathbb{F}[W_k]^{C_{2p}}$.
The following result, due to Harris-Wehlau \cite[Section 3]{HW2013}, exhibits a minimal generating set for the invariant ring $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ together with all relations among these generators.
\begin{proposition}\label{HWp}
For any $1\leqslant i,j\leqslant p-1$, there exists $m\in\mathbb{N}^+$ such that the invariant ring $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}=\mathbb{F}[x,y]^{C_{p}}$ is minimally generated by
$\{u_{k}:=x^{a_{k}}y^{b_{k}}\mid k=1,2,\dots,m\}$
where $p=a_{1}>a_{2}>\cdots>a_{m}=0$ and $0=b_{1}<b_{2}<\cdots <b_{m}=p$. Moreover, let
$$\pi:\mathbb{F}[U_{1},U_{2},\dots,U_{m}]\longrightarrow\mathbb{F}[x,y]^{C_{p}},\quad\textrm{ by~~ } U_{k}\mapsto u_{k}$$
be the standard, surjective $\mathbb{F}$-algebra homomorphism, where $\mathbb{F}[U_{1},U_{2},\dots,U_{m}]$ denotes a polynomial algebra. Then there exist $m-1\choose 2$ elements
$$\{R_{kt}\mid 1\leqslant k,t\leqslant m\textrm{ and }t-k\geqslant 2\}$$
that minimally generate $\ker(\pi)$, where
$R_{kt}:=U_{k}U_{t}-U_{k+1}\prod_{s=k+1}^{t-1} U_{s}^{d_{kts}}$
for some $d_{kts}\in\mathbb{N}^{+}$.
\end{proposition}
\begin{remark}\label{rem4.3}
{\rm
The invariant ring $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is not necessarily a complete intersection. For example,
if $p=5$, Magma calculation shows that $\mathbb{F}[V_{1}\oplus V_{2}]^{C_{10}}$ is minimally generated by four elements $\{xy^{2},x^{3}y,x^{5},y^{5}\}$, subject to three relations $\{R_{13},R_{14},R_{24}\}$.
$\hbo$}\end{remark}
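As a quick numerical sanity check of this example (our addition, independent of the Magma computation): with the generator acting as $\operatorname{diag}(\xi,\xi^{2})$ for $\xi$ a primitive 5th root of unity, invariance of a monomial $x^{a}y^{b}$ amounts to the congruence $a+2b\equiv 0\pmod 5$ on its exponents, which can be tested over $\mathbb{C}$:

```python
import cmath

# Primitive 5th root of unity; on V_1 (+) V_2 the generator acts as
# diag(xi, xi^2), i.e. x -> xi * x and y -> xi^2 * y (case p = 5).
xi = cmath.exp(2j * cmath.pi / 5)

# Exponent pairs (a, b) of the claimed generators x^a y^b.
generators = {"x*y^2": (1, 2), "x^3*y": (3, 1), "x^5": (5, 0), "y^5": (0, 5)}

for name, (a, b) in generators.items():
    # x^a y^b picks up the factor xi^(a + 2b); invariance is exactly
    # xi^(a + 2b) = 1, i.e. a + 2b = 0 (mod 5).
    assert abs(xi ** (a + 2 * b) - 1) < 1e-9, name

# Contrast: x*y picks up the factor xi^3 != 1, so it is not invariant.
assert abs(xi ** 3 - 1) > 0.1
print("all four generators are invariant")
```

The congruence condition is purely combinatorial, so checking it over $\mathbb{C}$ is legitimate even though the field in the text has characteristic 2.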
\begin{remark}\label{rem4.4}
{\rm
The invariant ring $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is not necessarily Gorenstein in general. For example,
if $p=5$, then the image of $C_{10}$ on $V_{1}\oplus V_{2}$ consists of $$\left\{\begin{pmatrix}
1 & 0 \\
0&1
\end{pmatrix},\begin{pmatrix}
\xi & 0 \\
0 & \xi^{2}
\end{pmatrix},\begin{pmatrix}
\xi^{2} & 0 \\
0 & \xi^{4}
\end{pmatrix}, \begin{pmatrix}
\xi^{3} & 0 \\
0 & \xi
\end{pmatrix},\begin{pmatrix}
\xi^{4} & 0 \\
0 & \xi^{3}
\end{pmatrix}\right\}$$
in which there are no reflections. Assume that $\mathbb{F}[V_{1}\oplus V_{2}]^{C_{10}}$ is Gorenstein. By Watanabe \cite[Theorem 1]{Wat1974b} we see that the image of $C_{10}$ on $V_{1}\oplus V_{2}$ is contained in ${\rm SL}(2,\mathbb{F})$. This is a contradiction.
$\hbo$}\end{remark}
Recall that $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is always Cohen-Macaulay; see Smith \cite[Theorem 1.2]{Smi1996}. The example in Remark \ref{rem4.4} leads us to consider when $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is Gorenstein. Here we prove Theorem \ref{thm1.4}.
\begin{proof}[Proof of Theorem \ref{thm1.4}]
$(\Longrightarrow)$ Assume by way of contradiction that $i+j\neq p$. The matrix corresponding to the generator $\sigma$ of $C_{2p}$ on
$V_{i}\oplus V_{j}$ is $\begin{pmatrix}
\xi^{i} & 0\\
0 & \xi^{j}
\end{pmatrix}$, which has determinant $\xi^{i+j}\neq 1$. This indicates that the image of $C_{2p}$ on $V_{i}\oplus V_{j}$ is not contained in ${\rm SL}(2,\mathbb{F})$. By Watanabe \cite[Theorem 1]{Wat1974b}, the fact that the image of $C_{2p}$ contains no reflections, together with the assumption that $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is Gorenstein, implies that it must be contained in ${\rm SL}(2,\mathbb{F})$. This contradiction shows that $i+j=p$.
$(\Longleftarrow)$ Assume that $i+j=p$. Then $\det(\sigma)=\xi^{i+j}=\xi^{p}=1$. Thus the image of $C_{2p}$ is contained in ${\rm SL}(2,\mathbb{F})$.
By Watanabe \cite[Theorem 1]{Wat1974a} (or Braun \cite[Theorem]{Bra2011}) we see that $\mathbb{F}[V_{i}\oplus V_{j}]^{C_{2p}}$ is Gorenstein.
\end{proof}
Now we consider the invariant ring $\mathbb{F}[W_{k}]^{C_{2p}}$ for $k=0,1,\dots,p-1$. Clearly, the action of $C_{2p}$ on $W_{0}$ is not faithful, and actually, in this case, $\mathbb{F}[W_{0}]^{C_{2p}}=\mathbb{F}[x,y]^{C_{2}}=\mathbb{F}[x,xy+y^{2}]$ is a polynomial algebra. Note that the actions of $C_{2p}$ on $W_{k}$ are faithful for $k=1,2,\dots,p-1$.
Fix $k\in \{1,2,\dots,p-1\}$. Suppose $\sigma=\begin{pmatrix}
\xi^{k} & 0 \\
1 & \xi^{k}
\end{pmatrix}$ denotes a generator of $C_{2p}$ on $W_{k}$. Then $\sigma^{p}$ generates a subgroup of $C_{2p}$ of order 2, say $H$. Further, we have seen that $\mathbb{F}[W_{k}]^{H}=\mathbb{F}[x,y]^{C_{2}}=\mathbb{F}[h_1,h_2]$ is a polynomial algebra, where $h_1:=x$ and $h_2:=xy+y^{2}$. Thus,
$$\mathbb{F}[W_{k}]^{C_{2p}}=(\mathbb{F}[W_{k}]^{H})^{C_p}\cong \mathbb{F}[h_1,h_2]^{C_p}$$
where $C_p$ can be generated by $\tau:=\begin{pmatrix}
\xi & 0 \\
0 & \xi
\end{pmatrix}$, i.e., $\tau(h_1)=\xi^{-1}\cdot h_1$ and $\tau(h_2)=\xi^{-2}\cdot h_2$.
Therefore, $\mathbb{F}[W_{k}]^{C_{2p}}\cong\mathbb{F}[h_1,h_2]^{C_p}\cong \mathbb{F}[V_1\oplus V_2]^{C_p}\cong \mathbb{F}[V_1\oplus V_2]^{C_{2p}}.$
Applying Proposition \ref{HWp}, we derive
\begin{proposition}\label{prop4.6}
For $1\leqslant k\leqslant p-1$, there exists $m\in \mathbb{N}^+$ such that $\mathbb{F}[W_k]^{C_{2p}}=\mathbb{F}[x,y]^{C_{2p}}$ is minimally generated by
$\{f_{i}:=h_1^{a_{i}}h_2^{b_{i}}\mid i=1,2,\dots,m\}$
where $p=a_{1}>a_{2}>\cdots>a_{m}=0$ and $0=b_{1}<b_{2}<\cdots <b_{m}=p$. Moreover, let
$$\pi:\mathbb{F}[U_{1},U_{2},\dots,U_{m}]\longrightarrow\mathbb{F}[x,y]^{C_{2p}},\quad \textrm{ by }U_{i}\mapsto f_{i}$$
be the standard, surjective $\mathbb{F}$-algebra homomorphism, where $\mathbb{F}[U_{1},U_{2},\dots,U_{m}]$ denotes a polynomial algebra. Then there exist $m-1\choose 2$ elements
$$\{R_{ij}\mid 1\leqslant i,j\leqslant m\textrm{ and }j-i\geqslant 2\}$$
that minimally generate $\ker(\pi)$, where
$R_{ij}:=U_{i}U_{j}-U_{i+1}\prod_{s=i+1}^{j-1} U_{s}^{d_{ijs}}$
for some $d_{ijs}\in\mathbb{N}^{+}$.
\end{proposition}
\begin{example}\label{counter}
{\rm For $p=3$, the invariant ring $\mathbb{F}[W_k]^{C_{6}}$ is minimally generated by
$$\{f_1=x^3,f_2=x^2y+xy^2,f_3=(xy+y^2)^3\}$$
subject to one relation: $f_2^3+f_1f_3=0.$ Thus $\mathbb{F}[W_k]^{C_{6}}$ is a hypersurface.
$\hbo$}\end{example}
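The relation in this example can be verified mechanically. A small sympy sketch (our addition, not the Magma computation): over $\mathbb{Z}$ the combination $f_2^3+f_1f_3$ equals $2x^3y^3(x+y)^3$, so it vanishes exactly in characteristic 2.

```python
from sympy import symbols, expand, Poly

x, y = symbols("x y")

f1 = x**3
f2 = x**2 * y + x * y**2
f3 = (x * y + y**2) ** 3

# Over the integers the combination is 2*x^3*y^3*(x + y)^3 ...
s = expand(f2**3 + f1 * f3)
assert s == expand(2 * x**3 * y**3 * (x + y) ** 3)

# ... so all coefficients are even and the relation holds mod 2.
assert Poly(s, x, y, modulus=2).is_zero
print("f2^3 + f1*f3 = 0 in characteristic 2")
```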
\begin{remark}\label{rem4.8}
{\rm
By Remarks \ref{rem4.3} and \ref{rem4.4} we see that $\mathbb{F}[W_k]^{C_{2p}}$ is not necessarily a complete intersection or Gorenstein. Further, it follows from Theorem \ref{thm1.4} that
$\mathbb{F}[W_k]^{C_{2p}}$ is never Gorenstein unless $p=3$.
$\hbo$}\end{remark}
\section{\scshape Wild McKay Correspondence}
In this last section we will show that Problem \ref{prob1.1} has a negative answer for the modular cases of $C_{2p}$ if the condition that ``$G$ is a small subgroup'' is dropped.
To begin with, we first recall the definition of the Grothendieck ring of varieties.
\begin{definition}{\rm
Let ${\rm Var}_{\mathbb{F}}$ be the set of isomorphism classes of $\mathbb{F}$-varieties. The \textbf{Grothendieck ring of varieties} over $\mathbb{F}$, written $K_0({\rm Var}_{\mathbb{F}})$, is the abelian group generated by $[Y]\in{\rm Var}_{\mathbb{F}}$ subject to the following relation: If $Z$ is a closed subvariety of $Y$, then $[Y]=[Y\backslash Z]+[Z]$. It has a ring structure where the product is defined by $[Y]\cdot[Z]:=[Y\times Z]$. We denote by $\mathbb{L}$ the class [$\mathds{A}_{\mathbb{F}}^1$] of the affine line.
}\end{definition}
The McKay correspondence has many versions for different subjects of study in the literature.
Let's recall the version of Batyrev \cite{Bat1999} in terms of stringy invariants and over $\mathbb{C}$. Suppose $G$ is a finite subgroup of ${\rm SL}_n(\mathbb{C})$ and $X$ denotes the associated quotient variety $\mathds{A}_{\mathbb{C}}^n/G$.
The McKay correspondence reveals the following equality in a certain modification of $K_0({\rm Var}_{\mathbb{C}})$:
\begin{equation}\label{McKayC}
M_{{\rm st}}(X)=\sum_{g\in {\rm Conj}(G)}\mathbb{L}^{{\rm age}(g)},
\end{equation}
where $M_{{\rm st}}(X)$ denotes the stringy motivic invariant of $X$ defined by a resolution, ${\rm Conj}(G)$ denotes the set of conjugacy classes of $G$ and ${\rm age}(g)$ denotes the \textbf{age} of $g$. Here we note that ${\rm age}(g)$ is determined by eigenvalues of elements in $g$ and thus it is independent of the choice of the representative in $g$. Moreover, if $Y\longrightarrow X$ is a crepant resolution, then $M_{{\rm st}}(X)=[Y]$; see Reid \cite{Rei2002} for the details.
Without using resolutions of $X$, Denef and Loeser \cite{DL2002} also redefined $M_{{\rm st}}(X)$ by the theory of motivic integration over the arc space of $X$, which is workable not only over $\mathbb{C}$ but also over arbitrary algebraically closed fields.
Yasuda \cite{Yas2014} conjectured and proved for the case of $C_{p}$ that the right hand side of (\ref{McKayC}) could be changed to some integration over the moduli space of $G$-covers. As a consequence, Problem \ref{prob1.1} has a positive answer for the case of $C_{p}$.
To calculate the element $[Y]$ in the Grothendieck ring, we need the following result whose proof can be found in Hartmann
\cite[Proposition 6.2]{Har2016}.
\begin{proposition}\label{WQ}
Let $G$ be a finite abelian group with quotient $G\longrightarrow \Gamma$. Let $k$ be
a field of characteristic $p$, $q$ be the greatest divisor of $|G|$ prime to $p$, and let
$K/k$ be a Galois extension with Galois group $\Gamma$. Assume that the Galois action on
$K$ lifts to a $k$-linear action of $G$ on a finite dimensional $K$-vector space $V$. If $k$
contains all $q$-th roots of unity, then
$$[V/G]=\mathbb{L}_k^{\dim_K V}\in K_0({\rm Var}_k).$$
\end{proposition}
We conclude this paper with the following answer to Problem \ref{prob1.1} when the condition that ``$G$ is a small subgroup'' is dropped.
\begin{proof}[An Example Attached to Problem \ref{prob1.1}]
Let's go back to Example \ref{counter}, where we take the ground field $\mathbb{F}$ to be
an algebraically closed field of characteristic $2$, which contains all third roots of unity.
We observe that the origin is the unique
singular point of the quotient variety. Blowing up the origin, we obtain a crepant resolution $Y\longrightarrow X$. The exceptional set is a simple normal crossing divisor with two irreducible components, say $E_1\cup E_2$, where both $E_1$ and $E_2$ are $\mathbb{P}^1_{\mathbb{F}}$. Hence,
\begin{eqnarray*}
[Y]&= &[Y\backslash \{E_1\cup E_2\}]+[E_1\cup E_2] \\
&= & [X\backslash\{0\}]+2(\mathbb{L}+1)-1\\
&= & [X]+2\mathbb{L}\\
&= & \mathbb{L}^2+2\mathbb{L},
\end{eqnarray*}
where the last equality follows from Proposition \ref{WQ}. Therefore, the Euler characteristic of $Y$ is equal to 3 while the number of conjugacy classes of $C_{6}$ is 6. This shows that Problem \ref{prob1.1} fails to be valid for the case of $C_{6}$.
\end{proof}
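The bracket arithmetic in this computation, and the resulting Euler characteristics, can be replayed symbolically (an illustrative sketch in which the symbol $L$ stands for the class $\mathbb{L}$):

```python
from sympy import symbols, expand

L = symbols("L")  # stands for the class of the affine line

# [X] = L^2 by Proposition (WQ), so removing the origin gives L^2 - 1;
# the exceptional set is two P^1's meeting in one point: 2*(L + 1) - 1.
Y = expand((L**2 - 1) + (2 * (L + 1) - 1))
assert Y == L**2 + 2 * L

# Euler characteristic: specialize L -> 1.
assert Y.subs(L, 1) == 3
# C_6 is abelian, hence has 6 conjugacy classes; 3 != 6.
assert Y.subs(L, 1) != 6
```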
\section*{Acknowledgments}
This research is supported by the National Natural Science Foundation of China (Grant No. 11471116, 11531007), Science and Technology Commission of Shanghai Municipality (Grant No. 18dz2271000) and SAFEA (P182009020).
The authors would like to thank Liang Chen and Yongjiang Duan for their support.
\section{Introduction}
The flat rotation curve seen in many
galaxies cannot be explained by the visible
matter and a Newtonian potential. Often this
is taken as evidence for non-visible, i.e.,
``dark,'' matter in the galaxies. There are
many candidates for what the dark matter could be (see for
example Griest (1995)). Indeed, one of the remarkable
successes in the past year has been the observation of one
category of these candidates, the MACHOs or massive compact
halo objects by the EROS (Aubourg {\it et al.}\ 1993, 1995;
Beaulieu {\it et al.}\ 1995), OGLE (Udalski {\it et al.}\
1993, 1994a,b,c), and MACHO (Alcock {\it et al.}\ 1993,
1995a,b,c; Bennett {\it et al.}\ 1994) collaborations.
Unfortunately, these were discovered to be insufficiently
numerous to explain the flat rotation curves.
Whatever one's hopes or prejudices,
it is important to consider alternatives. In the present
case, one alternative is that dark matter is not the cause
of the rotation curve problem, but rather that the
gravitational forces are not Newtonian at long distances.
For example, Milgrom (1983, 1986, 1988, 1994) has suggested a
modified Newtonian dynamics wherein the gravitational
acceleration is the usually calculated Newtonian
gravitational acceleration if that acceleration is large
compared to some critical acceleration, but is the
geometrical mean of the critical acceleration and the
calculated Newtonian acceleration if the Newtonian
acceleration is very small. Fits to galactic data, after
some early doubts, seem possible using the same critical
acceleration for each galaxy (Begeman, Broeils, \& Sanders
1991). Milgrom's idea gives an acceleration like what one
would get from a $1/r$ force, proportional to the square
root of the mass, at very long range.
Here we shall examine a different alternative,
namely that, in addition to the Newtonian term, the potential
also contains a linear term. This has been suggested on the
basis of conformal gravity (Mannheim \& Kazanas 1989, 1991;
Mannheim 1993), but one does not need to subscribe to this
viewpoint in order to evaluate the result. For a point mass
$M$, the potential (potential energy per unit
test mass) is
\begin{equation}
V = -{GM\over r} + \Gamma M r .
\end{equation}
\noindent The linear term must have a very small
coefficient so that it will contribute
noticeably only at very long distances. The
coefficient of the linear potential,
$\Gamma$, should, like $G$, be a universal
coefficient if this potential correctly describes
nature.
We shall select a group of galaxies
for which a rotation curve has been well
measured, and for which sufficient other data
is available that one can calculate a rotation
curve based on the directly observed matter.
We shall, at least at the outset, allow ourselves to
vary only the mass to light ratio of the
luminous disk and the value of the linear
potential coefficient $\Gamma$. The first goal
is to see if good fits to the rotation
curve can be obtained. If the answer is
generally yes, then the further question will be
whether good fits can be obtained with the same
value of $\Gamma$ for each galaxy. The answer
will be seen to be that good fits can be
obtained, but not with a universal value
of~$\Gamma$. However, we will notice that a
fairly but not absolutely consistent value of
$\Gamma \times M_{galaxy}$ does emerge. The latter is
equivalent to saying that when the gravitational forces get
very small, one does not get the Newtonian gravitational
acceleration but rather some small limiting value of
acceleration. Finally, there will be some short discussion
of what is possible if some other parameters, such as the
distance to the galaxy, are allowed to vary.
\section{Testing the idea}
The galaxies we use are the ten used by Begeman {\it et
al.}\ (1991).
There are optical measurements giving
the luminosity, scale length, and distance of each of these
galaxies, and also radio measurements of the rotation curve
many scale lengths out from the center of the galaxy. The
number of galaxies is restricted (Begeman {\it et al.}\ 1991) by
a requirement that they have reasonable azimuthal symmetry so
that the rotation curves accurately trace the overall mass
distribution of the galaxies. The galaxies we use, as well as
their distances, luminosities, and scale lengths, are listed in
Table~\ref{table}. The galaxies differ by a factor of 1000 in
luminosity and a factor 10 in size (scale length). We will
not, at least for now, vary the distances and scale lengths
found in the literature.
We give a few details of how we proceed.
The visible mass of each galaxy includes luminous matter
and H{\sc i} gas. The luminosity area density of the
galactic disk is generally well represented by a
falloff that is exponential in distance from
the galactic center. The scale lengths $R_D$ are given in
Table~\ref{table}. We take the mass to light ratio for
the luminous matter in the disk,
$M/L$, to be constant within a given galaxy although we
allow it to vary from galaxy to galaxy. The mass area
density of the luminous disk, $\sigma_D$, is then
\begin{equation}
\sigma_D
= \left( M_D \over 2 \pi R_D^2 \right)
e^{-r/R_D}.
\end{equation}
\noindent where $M_D$ is the mass of the luminous matter
in the disk.
The point mass Newtonian plus linear potential needs
to be integrated over the exponential disk mass
distribution to get the potential for a galaxy. Mannheim
(1993, 1995) has given the result extending earlier work by
others for the Newtonian case. The Newtonian force per unit
test mass (acceleration) is
\begin{equation}
g_{ND} = {G M_D \over R_D^2} \alpha
\left[ I_0(\alpha) K_0(\alpha)
- I_1(\alpha) K_1(\alpha) \right]
\end{equation}
\noindent where
\begin{equation}
\alpha \equiv r / 2 R_D .
\end{equation}
\noindent and $I_\nu$ and $K_\nu$ are Bessel functions.
The corresponding result for the linear potential is
\begin{equation}
g_{LD} = 2 \Gamma M_D
\alpha I_1(\alpha) K_1(\alpha) .
\end{equation}
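As a numerical cross-check of the two disk fields above (our illustration, not the fitting code used in this paper), they can be evaluated with standard Bessel routines; far outside the disk $g_{ND}$ must approach the point-mass value $GM_D/r^2$, while $g_{LD}$ approaches the constant $\Gamma M_D$:

```python
from scipy.special import ive, kve

def g_newton_disk(r, GM, RD):
    """Newtonian field of an exponential disk (the g_ND formula above)."""
    a = r / (2.0 * RD)
    # ive/kve are exponentially scaled Bessel functions; the exponential
    # factors cancel in the products, so ive*kve == I_nu*K_nu, overflow-free.
    return GM / RD**2 * a * (ive(0, a) * kve(0, a) - ive(1, a) * kve(1, a))

def g_linear_disk(r, GammaM, RD):
    """Linear-potential field of an exponential disk (the g_LD formula)."""
    a = r / (2.0 * RD)
    return 2.0 * GammaM * a * ive(1, a) * kve(1, a)

# Point-mass limits far outside the disk (illustrative units GM = GammaM = RD = 1).
r = 50.0
assert abs(g_newton_disk(r, 1.0, 1.0) * r**2 - 1.0) < 0.01
assert abs(g_linear_disk(r, 1.0, 1.0) - 1.0) < 0.01
```

The constant far-field value of $g_{LD}$ is the distinctive signature of the linear term.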
Two of the galaxies in our sample, the two biggest, have
central bulges which are not well included by a single
exponential falloff. For these galaxies the luminosity
curve is fit with a sum of two exponentials. The bulge
and (main) disk components are allowed different $M/L$
ratios, and in Table~\ref{table} we list the scale
lengths, fitted $M/L$ ratios, and total mass of the two
components separately.
In addition, the H{\sc i} gas in the galaxies is visible
through its 21 cm radiation. If the distance to the
galaxy is correctly known, the measurements give the mass
of the H{\sc i} directly, and we increase this mass by a
factor 4/3 to account for heavier material, mainly
primordial He. The gas contribution is dynamically
significant only for the three lightest galaxies in our
sample, and for each of these the gas mass area density
is well fit by a gaussian,
\begin{equation}
\sigma_G = {M_G \over \pi R_G^2 } e^{-r^2/R_G^2} ,
\end{equation}
\noindent and the total gas mass $M_G$ and gaussian
scale length for the gas $R_G$ are listed
in Table~\ref{table2}. We here record the acceleration fields
due to the gas and the Newtonian and linear potentials
(Mannheim 1995):
\begin{equation}
g_{NG} = {G M_G r \over R_G^3 } \sqrt{\pi} e^{-\beta}
\left( I_0(\beta) - I_1(\beta) \right)
\end{equation}
\noindent where
\begin{equation}
\beta \equiv r^2/2 R_G^2
\end{equation}
\noindent and
\begin{equation}
g_{LG} = {\Gamma M_G r \over 2 R_G} \sqrt{\pi} e^{-\beta}
\left( I_0(\beta) + I_1(\beta) \right).
\end{equation}
Thus in a case where we include gas and a luminous disk
described by a single exponential we have
\begin{equation}
g(r) = {v^2(r) \over r} = g_{ND}+g_{NG}+g_{LD}+g_{LG},
\end{equation}
\noindent where $v(r)$ is the rotation velocity of the
galaxy.
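A minimal numerical sketch of evaluating the full rotation law above (our illustration; the parameter values in the check are arbitrary unit choices, not the fitted values of Table~\ref{table}):

```python
import numpy as np
from scipy.special import ive, kve

def v_rotation(r, GMD, GammaMD, RD, GMG, GammaMG, RG):
    """v(r) from v^2/r = g_ND + g_NG + g_LD + g_LG for disk plus gas."""
    a = r / (2.0 * RD)            # alpha
    b = r**2 / (2.0 * RG**2)      # beta
    g_nd = GMD / RD**2 * a * (ive(0, a) * kve(0, a) - ive(1, a) * kve(1, a))
    g_ld = 2.0 * GammaMD * a * ive(1, a) * kve(1, a)
    # e^{-beta} * I_nu(beta) is exactly the scaled function ive(nu, beta).
    g_ng = GMG * r / RG**3 * np.sqrt(np.pi) * (ive(0, b) - ive(1, b))
    g_lg = GammaMG * r / (2.0 * RG) * np.sqrt(np.pi) * (ive(0, b) + ive(1, b))
    return np.sqrt(r * (g_nd + g_ng + g_ld + g_lg))

# Far out, both components act like point masses:
# v^2 -> G(M_D + M_G)/r + Gamma(M_D + M_G)*r  (illustrative unit values).
v = v_rotation(50.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
assert abs(v**2 / (2.0 / 50.0 + 2.0 * 50.0) - 1.0) < 0.01
```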
We then fit to the galactic rotation curves, given the
distribution and luminosity of the luminous matter and the
distribution and actual mass of the gas. To repeat, the
unknown quantities for each galaxy are $M/L$ for the
luminous matter and $\Gamma$. The rise of the rotation curve
to its peak is mainly determined by the Newtonian part of
the gravitational force (the contribution from the linear
potential is numerically small in this region), and this
rise determines
$M/L$. Conversely, the contribution of the Newtonian
potential to the far out part of the rotation curve is small,
and this part of the curve basically determines $\Gamma$ for
each galaxy.
Our fits to the rotation curves are shown in
Fig.~\ref{rotationcurves}. One sees that the
rotation curve fits, based on two parameters (three for
NGC 2841 and 7331), are tolerably good. The curves in
Fig.~\ref{rotationcurves} also show a clear feature of a
linear plus Newtonian potential in that the observed
near-flatness depends upon interplay between the two
contributions, and if the rotation curve were measured
farther out the curve would rise.
The values of $M/L$ and $\Gamma$ we get for each galaxy
are shown in Table~\ref{table}. Although we have good
fits, the price is that the largest and smallest values of
$\Gamma$ in the Table differ by two orders of magnitude.
We can get an idea of the play in
$\Gamma$ by asking what values we get if we set $M/L$ to
zero (which gives maximum possible $\Gamma$ at the expense
of a poor small radius fit), or set $M/L$ to twice its
best value (which also gives a poor small radius fit).
One gets changes of about $\pm 5\%$ in $\Gamma$ for the gas
dominated galaxy DDO 154 to typically $\pm 30\%$ for a
galaxy where the known gas plays little dynamical
r\^ole. Thus the fits cannot be modified to get the same
$\Gamma$ for each galaxy and we conclude that the values of
$\Gamma$ are not universal.
\subsection{Allowing the distance to vary}
Distances to galaxies are of course not perfectly measured.
Indeed, for DDO 154 there is discussion (Carignan \& Beaulieu
1989) of whether it is part of the Canes Venatici I cluster at
4 Mpc or really part of the Coma I cluster beyond it at 10
Mpc. So we may consider how changes in the measured distance
will affect the values of $\Gamma$ and $M_{gal}\Gamma$. For
galaxies dominated by luminous matter, when the distance
scales like
$d\rightarrow \eta d$, then $r \rightarrow \eta r$ and
$L\rightarrow \eta^2 L$. Then choosing
$(M/L)\rightarrow \eta^{-1} (M/L)$ leads to unchanged
rotation curves provided
\begin{equation}
\Gamma_{new} = \eta^{-2} \Gamma_{old} =
\left(d_{old}\over d_{new}\right)^2 \Gamma_{old} .
\end{equation}
For gas dominated galaxies, we have directly
$M\rightarrow \eta^2 M$ and examining the far
out part of the rotation curve then leads to
\begin{equation}
\Gamma_{new} =
\left(d_{old}\over d_{new} \right)^3 \Gamma_{old} .
\end{equation}
For either case,
\begin{equation}
\left( M_{gal} \Gamma \right)_{new} =
{d_{old}\over d_{new}}
\left( M_{gal} \Gamma \right)_{old} .
\end{equation}
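These scalings can be checked numerically for a pure luminous disk (a sketch with arbitrary illustrative values of $\eta$ and the disk parameters): rescaling $r$, $R_D$, $M$, and $\Gamma$ as above leaves $v^2(r)$ unchanged.

```python
from scipy.special import ive, kve

def v_sq_disk(r, GM, GammaM, RD):
    """v^2 = r * (g_ND + g_LD) for a pure exponential disk."""
    a = r / (2.0 * RD)
    g_nd = GM / RD**2 * a * (ive(0, a) * kve(0, a) - ive(1, a) * kve(1, a))
    g_ld = 2.0 * GammaM * a * ive(1, a) * kve(1, a)
    return r * (g_nd + g_ld)

# Distance rescale d -> eta*d: r, R_D -> eta*(r, R_D); M -> eta*M
# (since L -> eta^2 L and M/L -> M/L / eta); Gamma -> Gamma / eta^2.
# Hence G*M -> eta*(G*M) and Gamma*M -> (Gamma*M)/eta, matching the last
# displayed scaling relation above.
eta = 1.7
r, GM, GammaM, RD = 3.0, 1.0, 0.5, 1.0
v2_old = v_sq_disk(r, GM, GammaM, RD)
v2_new = v_sq_disk(eta * r, eta * GM, GammaM / eta, eta * RD)
assert abs(v2_new / v2_old - 1.0) < 1e-9
```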
To reconcile all the values of $\Gamma$, that is, to make them the same, would
involve moving galactic distances so that the smaller galaxies
were systematically moved out by a factor of 5 to 10 compared
to the larger. We will not entertain this idea. Reconciling
$M_{gal} \Gamma$ is less motivated theoretically, but it is
relatively easy to do. Choosing to set $M_{gal}
\Gamma = 2.42$ (the geometric mean of the relevant numbers in
Table~\ref{table}), a two parameter fit varying $d$ and $M/L$
produces rotation curves like the ones we have already shown,
with the scaling of $M_{gal}\Gamma$ working as suggested
above even for the cases where luminous matter and gas are
both important. Reconciling $M_{gal}\Gamma$ in this way
requires, in the extreme cases, having DDO 154 at 2.4 Mpc
instead of 4 Mpc and NGC 2841 at 22.5 Mpc instead of 9.46
Mpc. In fact, for NGC 2841 the distance is already in dispute
(Sanders \& Begeman 1994) since the distance derived from the
Tully-Fisher relation is about twice 9.46 Mpc obtained from
Hubble's law and used here. Sanders \& Begeman (1994) suggest
2841's recession speed is greatly affected by proximity to the
Virgo Cluster, and that the larger distance is more likely
correct.
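For reference, the quoted value 2.42 is just the geometric mean of the $\Gamma M_{gal}$ column of Table~\ref{table} (a trivial check of the arithmetic):

```python
import math

# Gamma * M_gal values, in units of 1e-11 m/s^2, from the last column of Table 1.
gamma_M = [1.37, 1.46, 2.27, 2.38, 1.67, 2.49, 2.14, 3.31, 5.75, 3.71]

gmean = math.exp(sum(math.log(v) for v in gamma_M) / len(gamma_M))
assert abs(gmean - 2.42) < 0.01  # reproduces the quoted geometric mean
print(f"geometric mean = {gmean:.2f} x 10^-11 m/s^2")
```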
\section{Conclusion}
We have attempted to confirm or disconfirm the idea that the
far out part of the galactic rotation curves, usually taken
as evidence of dark matter, may be well described by a linear
add-on to the Newtonian potential. The original suggestion
was that the coefficient of the linear potential be a
universal constant $\Gamma$ times the mass of the source,
just as the coefficient in the Newtonian potential is a
universal constant $G$ times the mass of the source. This is
what one would expect if gravity theory, even with
modifications, were a metric theory driven by the
energy-momentum tensor of the source.
The original suggestion works poorly. Decent fits can be
gotten for the rotation curves of a selection of smooth and
azimuthally symmetric galaxies, but the fits require very
different values of $\Gamma$ for different size galaxies.
The galactic data, despite lacking {\it a priori} theoretical
motivation, do seem to give $M_{gal}\times\Gamma$ nearly the
same for each galaxy. In other words, the centripetal
acceleration of matter near the galactic edges approaches a
limiting value which is about the same for any galaxy.
Numerically, the value $g_0 = M_{gal}\Gamma$ is about
$2\frac{1}{2}\times10^{-11} m/s^2$. Thus there is some
systematic and reproducible feature of the mysterious
galactic rotation curves. Such things have also been noted in
the context of the dark matter
explanation (Bahcall \& Casertano 1985; van Albada \&
Sancisi 1986), and may be a clue to an underlying
understanding of galactic structure or binding.
\section*{Acknowledgment}
Mannheim and Kmetko (1996) have studied the galactic rotation
curves from the same viewpoint, and have come to the same
conclusions. We thank Philip Mannheim for much friendly
communication while this work was progressing. CEC thanks the
National Science Foundation for support under Grant
PHY-9306141.
\newpage
\begin{table}[h]
\hglue -0.35in
\begin{tabular}{lccccccc}\hline
\ & $d$ &
$L$ &
$R_D$ &
$M/L $ &
$ \Gamma $ &
$ M_{gal} $ &
$ \Gamma M_{gal} $ \rule{0pt}{13pt}\\%[2pt]
Galaxy & Mpc & $10^9 L_\odot$ & kpc & $M_\odot/L_\odot$
& $10^{-52}N/kg^2$ & $10^9 M_\odot$
& $10^{-11} m/s^2$ \rule{0pt}{13pt} \\[2pt]
\hline
DDO 154 & \ 4.00 & \ 0.05 & 0.50 & 1.2 & 164 &0.42 & 1.37 \\
DDO 170 & 12.01 & \ 0.16 & 1.28 & 4.5 & 46 & 1.6 & 1.46 \\
NGC 1560 & \ 3.00 & \ 0.35 & 1.30 & 2.3 & 60 & 1.9 & 2.27 \\
UGC 2259 & \ 9.80 & \ 1.02 & 1.33 & 4.1 & 29 & 4.2 & 2.38 \\
NGC 6503 & \ 5.94 & \ 4.80 & 1.73 & 3.0 & 5.8 & 14 & 1.67 \\
NGC 2403 & \ 3.25 & \ 7.90 & 2.05 & 2.3 & 6.9 & 18 & 2.49 \\
NGC 3198 & \ 9.36 & \ 9.00 & 2.63 & 3.8 & 3.1 & 34 & 2.14 \\
NGC 2903 & \ 6.40 & 15.30 & 2.02 & 3.6 & 3.0 & 55 & 3.31 \\
NGC 2841 & \ 9.46 & 20.50 & 0.50 & 3.5 & 1.9 & 150 & 5.75 \\
& &(5.9+14.6)& 2.38 & 9.0 & & & \\
NGC 7331 & 14.90 & 54.00 & 1.20 & 0.75& 1.3 & 140 & 3.71 \\
& &(31.5+22.5)& 4.48 & 5.2 &&& \\
\hline
\end{tabular}
\caption{The galaxies. The two brightest galaxies have
two component exponentials describing their luminosity
profiles. Both scale lengths are given, and the
luminosities of the inner (``bulge'') and outer parts of the
disk are given, in that order, parenthetically in the
luminosity column.}
\label{table}
\end{table}
\begin{table}
\centering
\begin{tabular}{cccc}\hline
Galaxy & $M_{\rm H{\scriptscriptstyle I}} (M_\odot)$
& $M_G (M_\odot)$ & $R_G$
\\
\hline
DDO 154 & $2.7 \times 10^8$ & $3.6 \times 10^8$
&$3.3'=3.8 {\rm kpc}$\\
DDO 170 & $6.6 \times 10^8$ & $8.8 \times 10^8$
& $95''=6.7 {\rm kpc}$\\
NGC 1560 & $8.2 \times 10^8$ & $10.9 \times 10^8$
& $5.6'=4.85 {\rm kpc}$ \\
\hline
\end{tabular}
\caption{Parameters for gas in three galaxies.}
\label{table2}
\end{table}
\newpage
\section*{References}
\parindent 0 pt
Alcock, C., {\it et al.}\ 1993, Nature, 365, 621
------. 1995a, Phys. Rev. Lett., 74, 2867
------. 1995b, ApJ, 445, 133
------. 1995c, ApJ, 449, 28
Aubourg, E. {\it et al.}\ 1993, Nature, 365, 623
------. 1995, A \& A, 301, 1
Bahcall, J. N. \& Casertano, S. 1985, ApJ, 293, L7
Beaulieu J.P., {\it et al.}\ 1995, A \& A, 299, 168
Begeman, K. G., Broeils, A. H., \& Sanders, R. H. 1991,
MNRAS, 249, 523
\parindent 20 pt \hang \noindent
Bennett, D.P., {\it et al.} 1994, Proceedings of the 5th
Astrophysics Conference in Maryland: Dark Matter
\parindent 0 pt
Carignan, C. \& Beaulieu, S. 1989, ApJ, 347, 760
\parindent 20 pt \hang \noindent
Griest, K., Lectures presented at
the International School of Physics ``Enrico Fermi" Course
``Dark Matter in the Universe", Varenna, 25 July - 4 August,
1995 \parindent 0 pt
Mannheim, P. D. \& Kazanas, D. 1989, ApJ, 342, 635
------. 1991, Phys. Rev. D, 44, 417
Mannheim, P. D. 1993, ApJ, 419, 150
------. 1995, preprint UCONN 95-02; astro-ph/9504022
Mannheim, P. D., \& Kmetko, J. 1996, preprint UCONN 96-02
Milgrom, M. 1983, ApJ, 270, 365
------. 1986, ApJ, 302, 617
------. 1988, ApJ, 333, 689
------. 1994, Ann. Physics (N. Y.), 229, 384
Sanders, R. H., \& Begeman, K. G. 1994, MNRAS, 266, 360
Udalski, A., {\it et al.}\ 1993, Acta Astronomica, 43, 289
------. 1994a, Acta Astronomica, 44, 165
------. 1994b, Acta Astronomica, 44, 227
------. 1994c, ApJ Lett., 436, L103
\parindent 20 pt \hang \noindent
van Albada, T. S. \& Sancisi, R. 1986,
Phil.\ Trans.\ R. Soc.\ Lond.\ A 320, 447
\newpage
\begin{figure}
\caption{\protect\small Rotation curves. In all
cases, $v$ is in km/sec and $r$ is in kpc. The heavy line is
the full result. The dotted line would be the result from the
linear potential alone, the solid line would be the result
from the Newtonian potential alone. In the cases where there
is a two component fit to the mass distribution, we have
indicated the separate contributions to the Newtonian result
with dashed lines. For the lighter galaxies, the two
components are luminous matter and gas, with the gas
contribution peaking farther out; for the heavier galaxies,
both components represent luminous matter.}
\label{rotationcurves}
\end{figure}
\end{document}
\section{INTRODUCTION}
\label{sec:intro}
SOXS, the two-channel, single object, medium resolution spectrograph, is designed to observe transient events and variable sources\cite{Schipani16,Schipani18,Sanchez18,Aliverti18, Schipani20}. The SOXS instrument consists of five sub-systems, namely Common Path (CP), Calibration unit (CU), Acquisition Camera (AC), UV-VIS spectrograph, and the NIR spectrograph. CP is the backbone of the instrument\cite{Claudi18, Claudi22}. It has opto-mechanical interfaces to the other four sub-systems and the NTT telescope. During observation, CP receives the $F/11$ beam from the telescope, and the dichroic onboard splits the incoming beam, sending 350-850~$nm$ to the UV-VIS spectrograph and 800-2000~$nm$ to the NIR spectrograph. Both the spectrographs will receive an $F/6.5$ beam from the CP. For acquisition and imaging, the CP drives the light to the 3.5’ x 3.5’ Andor Camera. For wavelength and flux calibration purposes during the daytime, the CP can also direct the light from the CU to the spectrographs.
For the assembly, alignment, integration, and verification of the CP sub-system, we have exploited an opto-mechanical approach. We have used a portable Coordinate Measuring Machine (pCMM) to place the opto-mechanical components onto the CP bench. In addition to that, optical feedback using an on-axis laser source and a telescope simulator was used to fine-tune and validate the goodness of the alignment of the individual components. The $F/\#$ of the CP exiting beam to the spectrographs, their positions and tilts, and the on- and off-axis PSF optical quality were verified after the integration.
\subsection{The Common Path}\label{cp}
The CP consists of a dichroic, 2 folding mirrors (FM), 2 tip-tilt (TT) mirrors (envisaged to compensate flexures), ADC in the UVVIS arm, refocuser mounted on a linear stage in the NIR arm, a linear stage to which the pierced mirror and pellicle are mounted (directing some or all light to the AC), a linear stage to which a folding mirror is mounted (directing the light from the CU towards the spectrographs), a PT100 temperature probe, and the instrument shutter. Figure~\ref{fig1} shows the CP, its components, and the light path.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5.5cm]{fig1.png}
\end{tabular}
\end{center}
\caption[fig1]
{ \label{fig1}
\textit{Left panel}: Common Path CAD image displaying its components. \textit{Right panel}: The CP light path.}
\end{figure}
The optical components UVVIS field lens, NIR window, and NIR field lens (marked within red circles in Figure~\ref{fig1}) are formally part of the CP, but are physically located within the spectrographs. Without these components, the CP produces an $F/6.91$ beam at the UVVIS CP exit and an $F/6.8$ beam at the NIR CP exit.
\section{Alignment Strategy}
As mentioned earlier, the CP alignment follows an opto-mechanical approach. We used a pCMM (see Figure~\ref{fig2}), which has a measurement accuracy of about $30~\mu m$, to position the optical components. We have therefore used large mirrors (e.g., $10~cm$ diameter) to minimize the error due to the pCMM whenever possible (see Figure~\ref{fig2}).
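A back-of-the-envelope estimate illustrates why larger optics reduce the alignment error (our sketch; the $25~mm$ comparison baseline is purely hypothetical, not an actual SOXS component). Probing two points separated by a baseline $B$ with a point accuracy $\delta\approx 30~\mu m$ gives a worst-case tilt uncertainty of roughly $\delta/B$:

```python
import math

PCMM_ACCURACY_M = 30e-6  # single-point measurement accuracy, ~30 um

def tilt_error_arcsec(baseline_m, accuracy_m=PCMM_ACCURACY_M):
    """Tilt uncertainty from a probing error over a given baseline."""
    return math.degrees(math.atan(accuracy_m / baseline_m)) * 3600.0

err_10cm = tilt_error_arcsec(0.10)    # the 10 cm mirrors used here
err_25mm = tilt_error_arcsec(0.025)   # hypothetical smaller optic
# The 10 cm baseline is ~4x less sensitive to the same probing error.
assert abs(err_25mm / err_10cm - 4.0) < 0.05
print(f"10 cm: {err_10cm:.0f} arcsec, 2.5 cm: {err_25mm:.0f} arcsec")
```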
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=17cm]{fig2.png}
\end{tabular}
\end{center}
\caption[fig2]
{ \label{fig2}
\textit{Left}: A portable Coordinate Measuring Machine (pCMM). \textit{Middle}: Characterized CMOS detector mounted on XYZ-tip-tilt-rotation stage. \textit{Right}: $10~cm$ mirror used for alignment purposes.}
\end{figure}
\subsection{Characterized CMOS Detector}
To accurately position the opto-mechanical components in decenter and focus, and to estimate the tilt of the beam after each component, we characterized a CMOS detector mounted in a robust mount with 6 degrees of freedom (XYZ, tip, tilt, and rotation), as displayed in Figure~\ref{fig2}.
The characterization was done using a converging beam coming from a 4D interferometer, with the pCMM providing the mechanical measurements. We were able to define a reference pixel and its distance from the intersection of three planes. This way we can position (within the pCMM errors) the reference pixel of the CMOS detector on the nominal beam, checking for decenter and angles.
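The intersection of the three measured planes is a small linear-algebra step: with plane equations $\bm{n}_i\cdot\bm{x}=d_i$, the intersection point is the solution of a $3\times 3$ linear system. A minimal sketch (the planes below are illustrative placeholders, not the actual pCMM data):

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Intersection point of three planes n_i . x = d_i.

    normals : (3, 3) array-like, one plane normal per row
    offsets : (3,) array-like of plane offsets d_i
    Raises numpy.linalg.LinAlgError if the planes do not meet
    in a single point (e.g., two planes are parallel).
    """
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    return np.linalg.solve(N, d)

# Illustrative example: three mutually perpendicular planes
# (like CP-base, CP-side, and CP-front) meeting at the origin
# of the CP coordinate system.
normals = [[0.0, 0.0, 1.0],   # base:  z = 0
           [1.0, 0.0, 0.0],   # side:  x = 0
           [0.0, 1.0, 0.0]]   # front: y = 0
print(plane_intersection(normals, [0.0, 0.0, 0.0]))  # -> [0. 0. 0.]
```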
\subsection{Alignment Set Up and Steps}
A schematic representation of the optical setup used to align the CP can be seen in Figure~\ref{fig3}. The CP is fixed to the optical bench using the kinematic mounts that will later be used to mount the subsystem to the SOXS flange. The details of the kinematic mounts and an actual image of the setup can be found in Biondi et al.\cite{Biondi20}
We could feed the CP either with a bright laser source or with an $F/11$ Telescope Simulator (TelSim). The details of the TelSim alignment are described in Biondi et al.\cite{Biondi20} The TelSim lenses ($L750$ and $L300$) were aligned to the nominal laser beam with decenter $<40~\mu m$ and tip \& tilt of $\sim$8” \& $\sim$4”, respectively. We achieved a TelSim PSF FWHM of $22.83~\mu m$ (nominal value $22.62~\mu m$) and an $F/\#$ of 10.94 (nominal value 11.00). The TelSim lenses can be moved out of the way to let the laser beam reach the CP directly.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=15cm]{fig3.png}
\end{tabular}
\end{center}
\caption[fig3]
{ \label{fig3}
Schematic representation of the CP alignment setup (not to scale).}
\end{figure}
Shims were used to adjust the decenter, tip and tilt of the opto-mechanical components. Optical feedback was always used to confirm if the shims produced the right results.
The pCMM used the CP coordinate system for all its measurements. We used the CP-base, CP-side, and CP-front to create the CP coordinate system, with the origin matching the SOXS input focal position.
The alignment steps are enumerated below.
\begin{enumerate}
\item The correct position of the dichroic with respect to the CP coordinate system was earlier determined using a CMM (which has $< 10~\mu m$ measurement precision). We used our pCMM to position the dichroic to the same location.
\item Start with UVVIS arm or NIR arm.
\item Align the FM. See Figure~\ref{fig4} for details of the procedure. We used the 10~$cm$ mirror, placing the mirror surface perpendicular to the nominal beam (using the pCMM) so as to send the reflected beam backwards, eventually reaching the 'reference detector' (see Figure~\ref{fig3}). The tilt of the beam is also verified in this way.
\item Align the TT mirror. See Figure~\ref{fig4} for details of the procedure. We used the 10~$cm$ mirror, placing the mirror surface perpendicular to the nominal beam (using the pCMM) so as to send the reflected beam backwards, eventually reaching the 'reference detector' (see Figure~\ref{fig3}). The tilt of the beam is also verified in this way.
\item Align the Refocuser / ADC. See Figure~\ref{fig4} for details of the procedure. Note that the ADC here should be internally aligned using a separate setup before placing onto the CP. Check Battaini et al.\cite{Battaini22} for the details of the ADC internal alignment and results.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=11cm]{fig4.png}
\end{tabular}
\end{center}
\caption[fig4]
{ \label{fig4}
\textit{Top}: Schematic representation of the alignment procedure of the CP-UVVIS-FM and CP-NIR-FM, \textit{Bottom Left}: Schematic representation of the alignment procedure of the CP-UVVIS-TT and CP-NIR-TT, \textit{Bottom Right}: Schematic representation of the alignment procedure of the CP-UVVIS-ADC and CP-NIR-Refocuser.}
\end{figure}
\item With the help of the pCMM and the characterized CMOS detector, check the PSF position, tilt of the exiting beam, PSF FWHM, exit $F/\#$, on- \& off-axis PSF quality, and the throughput for various narrow-band filters.
\item Repeat the same for the other arm.
\item The pierced mirror (which is a part of the AC, but physically present within the CP) and the CU folding mirror (which is a part of the CU, but physically present within the CP) are positioned using the pCMM. Measurements are taken to confirm that the mirror surfaces are at correct angles to the CP-base.
\item The instrument shutter is installed and its functionality is checked.
\item The PT100 temperature probe is installed onto the CP bench. Its functionality is verified using the instrument software.
\item Cabling of all the wires within the CP is performed.
\item All the linear stages with their final loads are tuned to minimize vibrations, and named locations are found and registered in the configuration files of the instrument software.
\end{enumerate}
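In steps 3 and 4 above, the tilt check works by autocollimation: a residual mirror tilt of $\theta$ deviates the retro-reflected beam by $2\theta$, so the spot displacement observed at the reference detector over the return path length gives the tilt directly. A minimal sketch of this relation (the numbers are illustrative, not the actual CP geometry):

```python
import math

def mirror_tilt_arcsec(spot_shift_mm, path_length_mm):
    """Mirror tilt inferred from the retro-reflected spot displacement.

    A mirror tilt of theta deviates the reflected beam by 2 * theta,
    so a spot shift s observed after a return path L corresponds to
    theta = (1/2) * atan(s / L)  (~ s / (2 L) for small angles).
    """
    theta_rad = 0.5 * math.atan2(spot_shift_mm, path_length_mm)
    return math.degrees(theta_rad) * 3600.0

# Illustrative numbers: a 0.5 mm spot shift over a 2 m return path
# corresponds to roughly 26 arcsec of mirror tilt.
print(round(mirror_tilt_arcsec(0.5, 2000.0), 1))  # -> 25.8
```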
\section{Alignment Verification}
After the alignment and integration of the individual opto-mechanical components, the verification of the subsystem is performed. The verification included the following tests.
\begin{enumerate}
\item Decenter, focus and tilt of the exiting beam: The characterized CMOS detector is positioned so that the reference pixel is at the nominal, on-axis, focus position of the CP UVVIS/NIR exit focal position. At this point, the decenter is measured using the actual position of the PSF. The tilt is measured by taking multiple images at different optical axis positions and compensating for the tilt of the linear stage measured by the pCMM.
\begin{table}[ht]
\caption{CP UVVIS and NIR exit beam results. Note that the nominal $F/\#$ values are mentioned in Section~\ref{cp}. For the reported values here, we assumed a conservative estimation on uncertainty for the tilt because of the measuring conditions and technique.}
\label{table1}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} & Decenter–X ($\mu m$) & Decenter–Y ($\mu m$) & Decenter–Z ($\mu m$) & Tilt-X (") & Tilt-Y (") & $F/\#$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} UVVIS exit beam & 230 $\pm$ 60 & -145 $\pm$ 60 & $<$30 $\pm$ 60 & -413 $\pm$ 100 & 37 $\pm$ 100 & 6.93 \\
\hline
\rule[-1ex]{0pt}{3.5ex} NIR exit beam & 45 $\pm$ 60 & 3 $\pm$ 60 & $<$30 $\pm$ 60 & -250 $\pm$ 32 & -255 $\pm$ 32 & 6.94 \\
\hline
\end{tabular}
\end{table}
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=17cm]{fig5.png}
\end{tabular}
\end{center}
\caption[fig5]
{ \label{fig5}
\textit{Left}: Plot showing the tilt of the UVVIS PSF and that of the linear stage, \textit{Middle}: Finding the center of the PSF image, \textit{Right}: $F/\#$ measurement images for the UVVIS side.}
\end{figure}
\item $F/\#$ of the exiting beam: A mask with three holes of 1~$mm$ diameter is positioned in the collimated beam (between the $L750$ and $L300$ lenses of the TelSim). By placing a detector at two different locations along the exiting beam, and knowing the distance between the two positions, the $F/\#$ of the exiting beam is estimated.
\item On- and off-axis PSF quality: Using the characterized CMOS detector, the PSF of the TelSim is taken at the CP UVVIS/NIR exit focal position. For the off-axis positions, the fiber source of the TelSim is moved by a known amount. The numbers obtained are compared with ZEMAX-simulated values.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=12cm]{fig6.png}
\end{tabular}
\end{center}
\caption[fig6]
{ \label{fig6}
\textit{Left}: UVVIS PSF, \textit{Right}: UVVIS on- and off-axis PSFs.}
\end{figure}
\item Transmission efficiency: Using the flexOptometer, a multi-channel radiometer, the optical efficiency of the CP is estimated for both the UVVIS and NIR arms. The input (or reference) value is taken at the SOXS focal position (the same location as the telescope focus, as depicted in Figure~\ref{fig1}) and the output measurement is taken at the CP UVVIS/NIR focal position. A white light source and various narrow-band filters are used for estimating the optical efficiency at different wavelengths.
\end{enumerate}
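The decenter, tilt, and $F/\#$ measurements above reduce to a few simple computations: a center-of-mass centroid of the PSF image, a linear fit of the centroid position against the optical-axis position (compensated for the measured stage tilt), and a similar-triangles relation between the beam footprints at two planes. A minimal sketch of this analysis (the pixel pitch and all numbers are illustrative placeholders, not the actual CMOS or CP values):

```python
import numpy as np

PIXEL_PITCH_UM = 5.0  # assumed pixel pitch for illustration only

def psf_centroid(img):
    """Center-of-mass centroid (x_px, y_px) of a background-subtracted PSF image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return xs.ravel() @ img.ravel() / total, ys.ravel() @ img.ravel() / total

def decenter_um(centroid_px, reference_px, pitch_um=PIXEL_PITCH_UM):
    """Decenter of the PSF from the reference pixel, in microns."""
    return tuple((c - r) * pitch_um for c, r in zip(centroid_px, reference_px))

def beam_tilt_arcsec(z_mm, centroid_um, stage_tilt_arcsec=0.0):
    """Beam tilt from a linear fit of the PSF centroid vs. optical-axis
    position, compensated for the measured tilt of the linear stage."""
    slope = np.polyfit(np.asarray(z_mm, dtype=float) * 1e3, centroid_um, 1)[0]
    return np.degrees(np.arctan(slope)) * 3600.0 - stage_tilt_arcsec

def f_number(delta_z_mm, d_far_mm, d_near_mm):
    """F/# of a converging beam from footprint diameters at two planes
    separated by delta_z (similar triangles)."""
    return delta_z_mm / (d_far_mm - d_near_mm)

# Illustrative use with synthetic numbers:
print(f_number(100.0, 20.0, 10.0))  # an F/10 beam
```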
The results are tabulated in Table~\ref{table1} and displayed in Figures~\ref{fig5},~\ref{fig6}, and~\ref{fig7}.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=10cm]{fig7.png}
\end{tabular}
\end{center}
\caption[fig7]
{ \label{fig7}
Transmission is always above 80\% within the operational limits of our instrument for the wavelength range from 550~$nm$ to 1900~$nm$. For the measurement, we have used a set of narrow-band filters of 10~$nm$ width. For the 800~$nm$ wavelength, the efficiency is 76.7\%; this measurement comes only from the UVVIS side, and the dichroic transition region spans 800--850~$nm$.
}
\end{figure}
\section{Conclusion}
The SOXS Common Path is aligned and integrated, and it has passed all the verification tests. In addition to the optical verification, it has also passed the internal software and electrical tests with the instrument software and final electrical configurations, respectively.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=10cm]{fig8.JPG}
\end{tabular}
\end{center}
\caption[fig8]
{ \label{fig8}
SOXS Common Path mounted onto the SOXS flange at the INAF-Padova laboratories. The kinematic mount stability between the CP and the flange is also verified and found to be well within requirements.}
\end{figure}
The sub-system is completed. We also verified the mechanical interfaces (using the kinematic mounts) with the SOXS flange. Figure~\ref{fig8} shows the common path mounted onto the SOXS flange. For more details about the integration of the CP (and other sub-systems) to the SOXS flange, refer to Aliverti et al.\cite{Aliverti22}
\acknowledgments
KKRS, FB, and RC express their gratitude to the colleagues, INAF-PD administrative staff, and SOXS team for the continued support which facilitated the smooth AIVT process of the SOXS CP in spite of the COVID-19 pandemic in the last two years.
\section{Introduction}
Random numbers are an essential resource in the information processing
era, finding applications in gaming, simulations and
cryptography. Cryptographic protocols are frequently built upon an
assumption of access to a private random seed. Using poor-quality
randomness can be fatal to the security of the protocol (see,
e.g.,~\cite{weakkeys12}). Thus, in order to adhere to these standard
protocol assumptions, it is imperative that we are able to certify the
generation of private random numbers.
The intrinsic randomness of quantum theory provides a
natural mechanism with which we can generate random numbers: a simple
source of perfectly random bits could be a device that prepares a
$\sigma_x$ eigenstate and then measures $\sigma_z$. However, the use
of such a source comes with a significant caveat: the internal
mechanisms of the preparation and measurement devices must be
well-characterized and kept stable throughout their use. Any mismatch
between the characterization and how the device operates in practice
may be an exploitable weakness in the hands of a smart enough
adversary; such mismatches have been used to compromise commercially
available quantum key distribution (QKD)
systems (see e.g.,~\cite{Mak17-qkd_attack}).
While weaknesses caused by mismatches may be mitigated by increasingly
detailed descriptions of the quantum devices, generating such
descriptions rapidly becomes unwieldy and remaining vulnerabilities
can be difficult to detect. This is reminiscent of the situation in
modern software engineering where security flaws are frequently
discovered and patched. Fixing hardware vulnerabilities, such as those
exploited in the aforementioned QKD attacks, can be more difficult
logistically and economically.
Fortunately, quantum theory provides a means to address this
problem. Going back to~\cite{MayersYao} and using an important insight
of~\cite{Ekert}, device-independent quantum cryptography establishes security independently of the devices involved within a protocol, relying only on the validity of quantum theory and the imposition of certain no-signalling constraints between devices. Security is subsequently verified through the observation of non-local output statistics, which in turn act as witnesses to the inner workings of the devices. Limiting the number of initial assumptions greatly reduces the threat of side-channel attacks.
In this work we focus on the task of randomness expansion: a procedure wherein one attempts to transform a short private seed into a much larger (still private) source of uniform random bits. Randomness expansion in a device-independent setting was proposed
in~\cite{ColbeckThesis,CK2} with further development and experimental
testing following shortly after~\cite{PAMBMMOHLMM}. Subsequent work
provided security proofs against classical
adversaries~\cite{PM,FGS}. Security against quantum adversaries---who
may share entanglement with the internal state of the device---came
later~\cite{MS2,MS1,VV}, progressively increasing in noise-tolerance
and generality, with the recently introduced entropy accumulation
theorem (EAT)~\cite{DFR,DF}, on which our work is based, providing
asymptotically optimal rates~\cite{ARV,arnon2018practical}. A new
proof technique that is also asymptotically optimal has recently
appeared~\cite{KZF}.
In~\cite{ARV} the EAT was
applied to the task of randomness expansion and a general entropy
accumulation procedure was detailed. The security of the resulting
randomness expansion protocol relies on the construction of a
randomness bounding function (known as a min-tradeoff function) that
characterizes the average entropy gain during the protocol. Unfortunately,
the analysis in~\cite{ARV} applies only to protocols based on the CHSH
inequality, and relies on some analytic steps that do not directly
generalize to arbitrary protocols\footnote{In particular,
simplifications that arise due to the two party, two input, two
output scenario being reducible to qubits.}. However, as was also noted in \cite{ARV}, one could look to use the device-independent guessing probability (DIGP)~\cite{NPS14,
BSS14,KRS} in
conjunction with the semidefinite hierarchy~\cite{NPA,NPA2} to obtain
computational constructions of the required min-tradeoff functions.
Here we detail a precise method for combining these semidefinite
programming tools with the EAT to construct min-tradeoff functions. We
then apply this construction to the task of randomness expansion to
prove security of protocols based upon arbitrary nonlocal games. This
includes protocols with arbitrary (but finite) numbers of
inputs-outputs, as well as protocols based upon multiple
Bell-inequalities~\cite{NBS16}. It is worth noting that this
construction could also be readily extended to multipartite scenarios
although we do not discuss these in this work. Moreover, as this
computational method takes the form of a semidefinite program these
constructions are both computationally efficient and reliable,
although at the cost of producing potentially suboptimal bounds. To
accompany this work, we provide a code package (available
at~\cite{dirng-github}) for the construction and analysis of these
randomness expansion protocols.
In more detail, we give a template protocol, Protocol~QRE, from
which a user can develop their randomness expansion protocol. Given
certain parameters chosen by a user, e.g., time constraints, choice of
non-locality tests, and security tolerances, the projected randomness
expansion rates can be calculated. If these rates are unsatisfactory,
then modifications to the protocol's design can be made and the
rates recalculated. As these computations can be performed efficiently, the user can optimize
their protocol parameters to best fit their experimental
setup. Once a choice of experimental design has been made, the
resulting randomness expansion procedure can be performed. Subject to
the protocol not aborting, this gives a certifiably private random
bit-string.
We apply our technique to several example protocols. In particular, we
look at randomness expansion using the complete empirical distribution
as well as a simple extension of the CHSH protocol, showing
noise-tolerant rates of up to two bits per entangled qubit pair,
secure against quantum adversaries. Although means of generating two
bits of randomness per entangled qubit pair have been considered
before~\cite{MP13} to the best of our knowledge our work is the first
to present a full protocol and prove that this rate can be robustly
achieved taking into account finite statistics. The nonlocal game we
use for this is related to that in~\cite{MP13}. We also compare the
achievable rates for these protocols to the protocol presented
in~\cite{ARV} which is based upon a direct von Neumann entropy bound.
Our comparison demonstrates that some of the protocols from the
framework are capable of achieving higher rates than the protocol
of~\cite{ARV}, in both the low and high noise regimes. Improved rates
in the high noise regime are of particular importance when considering
current experimental implementations, because of the difficulty of
significantly violating the CHSH inequality while closing the
detection loophole~\cite{Giustina,Shalm,Hensen}. Additionally, we include
in the appendices a full non-asymptotic account of input randomness
necessary for running the protocols.
The paper is structured as follows: in \sec\ref{sec:prelim} we
introduce the material relevant for our construction. In
\sec\ref{sec:adaptive-framework} we detail the various components of
our framework and present the template protocol with full security statements and proofs. We provide examples of several randomness expansion protocols built within our framework in \sec\ref{sec:examples} before concluding with some open problems in \sec\ref{sec:conclusion}.
\section{Preliminaries}\label{sec:prelim}
\subsection{General notation}\label{sec:notation}
Throughout this work, the calligraphic symbols $\mathcal{A}$, $\mathcal{B}$, $\mathcal{X}$ and
$\mathcal{Y}$ denote finite alphabets and we use the notational shorthand
$\mathcal{A}\mathcal{B}$ to denote the Cartesian product alphabet $\mathcal{A}\times\mathcal{B}$. We refer
to a \emph{behaviour} (or \emph{strategy}) on these alphabets as some
conditional probability distribution, $(p(a,b|x,y))_{ab|xy}$ with
$abxy \in \A\B\X\Y$, which we view as a vector $\bm{p}\in\mathbb R^{|\A\B\X\Y|}$. That is, by denoting the set of canonical bases vectors of $\mathbb R^{|\A\B\X\Y|}$ by $\{\bm{\mathrm{e}}_{ab|xy}\}_{abxy}$, we write $\bm{p} = \sum_{abxy} p(a,b|x,y) \bm{\mathrm{e}}_{ab|xy}$. We make the distinction between the vector and its elements through the use of boldface, i.e., $p(a,b|x,y) = \bm p \cdot \bm{\mathrm{e}}_{ab|xy}$. Throughout this work we assume that all
conditional distributions obey the no-signalling constraints that
$\sum_{a\in \mathcal{A}} p(a,b|x,y)$ is independent of $x$ and hence can be written $p(b|y)$ and similarly
$\sum_{b\in\mathcal{B}} p(a,b|x,y) = p(a|x)$. We denote the set of all no-signalling
behaviours by $\P_{\mathcal{A}\mathcal{B} |\mathcal{X}\mathcal{Y}} \subset \mathbb R^{|\A\B\X\Y|}$. Given an
alphabet $\mathcal{C}$ we denote the set of all distributions over $\mathcal{C}$ by
$\P_\mathcal{C}$, and given a
sequence $\bm{\mathrm{C}} = (c_i)_{i=1}^n$, with $c_i \in \mathcal{C}$ for each
$i=1,\dots,n$, we denote the frequency distribution induced by $\bm{\mathrm{C}}$ by
\begin{equation}
F_{\CC}(x) = \frac{\sum_{i=1}^n \delta_{x c_i}}{n},
\end{equation}
where $\delta_{xc_i}$ is the Kronecker delta on the set $\mathcal{C}$.
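The frequency distribution induced by a sequence can be computed directly; a minimal sketch (symbols of $\mathcal{C}$ that never occur in $\bm{\mathrm{C}}$ have frequency zero and are simply omitted from the returned dictionary):

```python
from collections import Counter

def freq_dist(seq):
    """Empirical frequency distribution F_C induced by a sequence.

    Symbols that never occur have frequency zero and are omitted.
    """
    n = len(seq)
    counts = Counter(seq)
    return {x: counts[x] / n for x in counts}

print(freq_dist(['a', 'b', 'a', 'a']))  # -> {'a': 0.75, 'b': 0.25}
```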
We use the symbol $\H$ to denote a Hilbert space, subscripting with
system labels when helpful. For a system $E$, we denote the set of
positive semidefinite operators with unit trace acting on $\H_E$ by
$\S(E)$ and its subnormalized extension (i.e., the set that arises
when the trace is restricted to be in the interval $[0,1]$) by
$\tilde\S(E)$ (we extend the use of tildes to other sets to denote
their subnormalized extensions). We refer to a state
$\rho_{XE}\in\S(XE)$ as a \emph{classical-quantum state} (cq-state) on
the joint system $XE$ if it can be written in the form
$\rho_{XE}=\sum_x p(x)\ketbra{x}\otimes\rho_E^x$ where $\{\ket{x}\}_x$
is a set of orthonormal vectors in $\H_X$. Letting $\Omega\subseteq\mathcal{X}$
be an event on the alphabet $\mathcal{X}$, we define the \emph{conditional
state} (conditioned on the event $\Omega$) by
\begin{equation}
\rho_{XE|{\Omega}}=\frac{1}{\pr{\Omega}}\sum_{x\in\Omega}p(x)\ketbra{x}\otimes\rho_E^x.
\end{equation} We denote the identity operator of a system $E$ by
$\mathbb{1}_E$. We write the natural logarithm as $\ln(\cdot)$ and the
logarithm base $2$ as $\log(\cdot)$. The function
$\mathrm{sgn}:\mathbb R\rightarrow\{-1,0,1\}$ is the sign function, mapping all
positive numbers to $1$, negative numbers to $-1$ and $0$ to $0$.
We say that a behaviour $\bm{p}\in\P_{\A\B|\X\Y}$ is \emph{quantum} if its
elements can be written in the form
$p(a,b|x,y)=\tr{\rho_{AB}(N_{a|x}\otimes M_{b|y})}$ where
$\rho_{AB}\in\S(\H_A\otimes\H_B)$ and $\{\{N_{a|x}\}_{a\in\mathcal{A}}\}_{x\in\mathcal{X}}$,
$\{\{M_{b|y}\}_{b\in\mathcal{B}}\}_{y\in\mathcal{Y}}$ are sets of POVMs; we denote the
set of all quantum behaviours by $\mathcal{Q}$. Additionally, we use $\tilde{\Q}$ to
denote the subnormalized extension of this set.
Note that randomness expansion is a single-party protocol; there is
one user who wishes to expand an initial private random
string. However, that user may work with a bipartite setup in which
they use two devices that are prevented from signalling to one
another; in such a case we sometimes refer to Alice and Bob as the
users of each device. Note though that, unlike in QKD, Alice and Bob
are agents of the same party and are within the same laboratory.
There may also be a dishonest party, Eve, trying to gain information
about the random outputs.
\subsection{Entropies and SDPs}\label{sec:entropies-and-sdps}
The von Neumann entropy of $\rho\in\S(A)$ is
\begin{equation}
H(A)_{\rho}:=-\tr{\rho\log(\rho)}.
\end{equation}
For a bipartite state $\rho_{AE}\in\S(AE)$, we use the notation
$\rho_{E}$ for $\ptr{A}{\rho_{AE}}$ and define the conditional von Neumann entropy of system $A$ given system $E$ when the joint system is in state $\rho_{AE}$ by
\begin{equation}
H(A|E)_{\rho}:=H(AE)_{\rho}-H(E)_{\rho}\,.
\end{equation}
In addition, for a tripartite system $\rho_{ABE}\in\S(ABE)$, the
conditional mutual information between $A$ and $B$ given $E$ is
defined by
$$I(A:B|E)_{\rho}=H(A|BE)_{\rho}-H(A|E)_{\rho}\,.$$
We drop the state subscript whenever the state is clear from the
context.
In this work it is useful to consider the conditional
min-entropy~\cite{Renner} in its operational
formulation~\cite{KRS}. Given a cq-state $\rho_{XE}=\sum_x p(x)\ketbra{x}\otimes \rho_E^x$, the maximum probability with which an agent holding system $E$ can guess the outcome of a measurement on $X$ is
\begin{equation}\label{eq:g-prob}
p_{\mathrm{guess}}(X|E):=\max_{\{M_x\}_x}\sum_x p(x)\tr{M_x\rho_E^x},
\end{equation}
where the maximum is taken over all POVMs $\{M_x\}_x$ on system $E$. Using this we can define the min-entropy of a classical system given quantum side information as
\begin{equation}\label{eq:hmin}
H_{\min}(X|E):=-\log\left(p_{\mathrm{guess}}(X|E)\right).
\end{equation}
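As a concrete (device-dependent) instance of Eqs.~\eqref{eq:g-prob} and~\eqref{eq:hmin}: when $X$ is a uniform bit and the agent holds one of two states $\rho_0,\rho_1$, the optimal measurement achieves the Helstrom bound $p_{\mathrm{guess}}=\tfrac12+\tfrac14\|\rho_0-\rho_1\|_1$. A minimal numerical sketch (illustrative only, and not part of the device-independent analysis that follows):

```python
import numpy as np

def trace_norm(M):
    """Trace norm of a Hermitian matrix (sum of absolute eigenvalues)."""
    return np.abs(np.linalg.eigvalsh(M)).sum()

def h_min_bit(rho0, rho1):
    """H_min(X|E) for a uniform bit X with side information rho0 / rho1.

    For two hypotheses with equal priors the optimal guessing
    probability is the Helstrom bound
        p_guess = 1/2 + (1/4) * || rho0 - rho1 ||_1,
    and H_min(X|E) = -log2(p_guess).
    """
    p_guess = 0.5 + 0.25 * trace_norm(np.asarray(rho0) - np.asarray(rho1))
    return -np.log2(p_guess)

ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
rho0, rhop = ket0 @ ket0.T, ketp @ ketp.T

print(round(h_min_bit(rho0, rho0), 4))  # -> 1.0 : E learns nothing
print(round(h_min_bit(rho0, rhop), 4))  # -> 0.2284 : |0> vs |+>
```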
The final entropic quantity we consider is the $\epsilon$\emph{-smooth min-entropy}~\cite{RenWolf04-smooth_entropies}. Given some $\epsilon\geq 0$ and $\rho_{XE}\in\S(XE)$, the $\epsilon$-smooth min-entropy $H_{\min}^\epsilon$ is defined as the supremum of the min-entropy over all states $\epsilon$-close to $\rho_{XE}$,
\begin{equation} \label{eq:smooth-entropy}
H_{\min}^\epsilon(X|E)_\rho:=\sup_{\rho'\in B_{\epsilon}(\rho)}H_{\min}(X|E)_{\rho'},
\end{equation}
where $B_{\epsilon}(\rho)$ is the $\epsilon$-ball centred at $\rho$ defined with respect to the purified trace distance~\cite{TCR2}. For a thorough overview of smooth entropies and their properties we refer the reader to~\cite{Tom15-smooth_entropies_book}.
In the device-independent scenario we do not know the quantum states
or measurements being performed. Instead, our entire knowledge about these must be inferred
from the observed input-output behaviour of the devices
used. In particular, observing correlations that violate a Bell
inequality provides a coarse-grained characterization of the
underlying system. In a device-independent protocol, the idea is to
use only this to infer bounds on particular system quantities, e.g.,
the randomness present in the outputs. As formulated above, the
guessing probability~\eqref{eq:g-prob} is not a device-independent
quantity because its computation requires knowing $\rho^x_E$. However,
the guessing probability can be reformulated in a device-independent
way~\cite{HR,NPS14,BSS14,NBS16} as we now explain.
Consider a tripartite system $\rho_{ABE}$ shared between two devices
in the user's lab and Eve. Because we are assuming an adversary
limited by quantum theory, we can suppose that, upon receiving some inputs $(x,y) \in \mathcal{X}\mathcal{Y}$, the devices
work by performing measurements $\{E_{a|x}\}_a$ and $\{F_{b|y}\}_b$
respectively, which give rise to some probability distribution $\bm{p} \in \mathcal{Q}_{\mathcal{A}\mathcal{B}|xy}$,
and overall state
$$\sigma^{x,y}_{ABE}=\sum_{ab}\ketbra{a}\otimes\ketbra{b}\otimes\tilde{\rho}_E^{abxy}\,,$$
where
$\ptr{AB}{(E_{a|x} \otimes F_{b|y}\otimes\mathbb{1}_E)\rho_{ABE}}=\tilde{\rho}_E^{abxy}$,
and $p(a,b|x,y)=\tr{\tilde{\rho}_E^{abxy}}$.
Note that the user of the protocol is not aware of what the devices
are doing.
Consider the best strategy for Eve to guess the value of $AB$ using her system
$E$. She can perform a measurement on her system to try to
distinguish $\{\rho_E^{abxy}\}_{ab}$ (occurring with probability
$p(a,b|x,y)$).
Denoting Eve's POVM $\{M_c\}_c$
with outcomes in one-to-one correspondence with the values $AB$ can take
(say $c_{ab}$
being the value corresponding to a best guess of
$AB = (a,b)$)\footnote{Without
loss of generality we can assume Eve's measurement has as many
outcomes as what she is trying to guess.}, then given some values of
$a,b,x$ and $y$,
Eve's outcomes are distributed as
$p(c_{a'b'}|a,b,x,y)=\tr{M_{c_{a'b'}}\rho_E^{abxy}}$,
and her probability of guessing
correctly is
$p(c_{ab}|a,b,x,y)=\tr{M_{c_{ab}}\rho_E^{abxy}}$.
Hence, the overall probability of guessing $AB$
correctly given $E$
and $XY=(x,y)$
for the quantum realisation of the statistics, $q=\{\rho_{ABE},\{E_{a|x}\},\{F_{b|y}\}\}$,
is
\begin{align*}
p_{\mathrm{guess}}(AB|x,y,E,q) &= \sup_{\{M_c\}_c}\sum_{ab} \tr{(E_{a|x}\otimes F_{b|y}\otimes M_{c_{ab}} )\rho_{ABE}} \\
&= \sup_{\{M_c\}_c} \sum_{ab} p(a,b,c_{ab}|x,y,q)\\
&=\sup_{\{M_c\}_c}\sum_{ab}p(c_{ab}|a,b,x,y,q)p(a,b|x,y,q)\,.
\end{align*}
Note that the guessing probability depends on the inputs $x,y$. In the protocols
we consider later, there will only be one pair of inputs for which Eve is
interested in guessing the outputs. We denote these inputs by $\tilde{x}$ and $\tilde{y}$.
In the device-independent scenario, Eve can also optimize over all
quantum states and measurements that could be used by the devices.
However, she wants to do so while restricting the devices to obey
certain relations which depend on the protocol (for example, the CHSH
violation that could be observed by the user). For the moment,
without specifying these relations precisely, call the set of quantum
states and measurements obeying these relations $\mathcal{R}$. Hence, we seek
$$p_{\mathrm{guess}}(AB|\tilde{x},\tilde{y},E)=\sup_{q\in\mathcal{R},\{M_c\}_c}\sum_{ab}p(a,b|\tilde{x},\tilde{y},q)p(c_{ab}|a,b,\tilde{x},\tilde{y},q)\,.$$
Because Eve's measurement commutes with those of the devices, due to
no signalling we can use Bayes' rule to rewrite the optimization
as\footnote{This rewriting makes sense provided no information leaks
to Eve during the protocol, which is reasonable for randomness expansion
since it takes place in one secure lab.}
\begin{eqnarray*}
\sup_{q\in\mathcal{R},\{M_c\}_c}\sum_{ab}p(c_{ab}|\tilde{x},\tilde{y},q)p(a,b|c_{ab},\tilde{x},\tilde{y},q)\,.
\end{eqnarray*}
With this rewriting it is evident that we can think about Eve's
strategy as follows: Eve randomly chooses a value of $C$ and then
prepares the device according to that choice, i.e., trying to bias
$A,B$ towards the values $a, b$ corresponding to the chosen $c$.
We can hence write
$$p_{\mathrm{guess}}(AB|\tilde{x},\tilde{y},E)=\sup_{\{\bm{p}_c\}_c}\sum_{ab}\pr{C=c_{ab}}p_{c_{ab}}(a,b|\tilde{x},\tilde{y},q)\,,$$
where $\sum_cp(c)\bm{p}_{c}$ satisfies some relations (equivalent to
the restriction to the set $\mathcal{R}$) and $\bm{p}_{c}\in\mathcal{Q}_{\mathcal{A}\mathcal{B}|\mathcal{X}\mathcal{Y}}$ for each $c$.
Provided the relations satisfied are linear, which we will henceforth
assume, they can be expressed as a matrix equation $\bm{W}\bm{p} = \bm{\omega}$ and the whole optimization is a conic program (the set of un-normalized
quantum-realisable distributions forms a convex cone). By writing
$\pr{C=c} \bm{p}_{c}$ as the subnormalized distribution
$\tilde{\bm{p}}_{c}$ the problem can be expressed as
\begin{equation}\label{prog:digp}
\begin{aligned}
\sup_{\{\tilde{\bm{p}}_{c}\}_c} &&&\sum_{ab}\tilde{p}_{c_{ab}}(a,b|\tilde{x},\tilde{y})\\
\text{subj.\
to}&&&{\sum_c\bm{W}\tilde \bm{p}_{c}}=\bm{\omega}\\
&&&\tilde{\bm{p}}_{c}\in\tilde{\Q}_{\mathcal{A}\mathcal{B}|\mathcal{X}\mathcal{Y}}\quad\forall\ c\,.
\end{aligned}
\end{equation}
Note that the normalisation condition, $\sum_{abc}\tilde{p}_c(a,b|\tilde{x},\tilde{y})=1$, is assumed to be contained within (or a consequence of) the conditions imposed by $\bm{W}$. For the particular sets of conditions that we impose later, normalization always follows.
Optimizing over the set of quantum correlations is a difficult
problem, in part because the dimension of the quantum system achieving
the optimum could be arbitrarily large. Because of this, we consider
a computationally tractable relaxation of the problem, by instead
optimizing over distributions within some level of the
semidefinite hierarchy~\cite{NPA,NPA2}. We denote the $k^{\text{th}}$ level
by $\tilde{\Q}^{(k)}$. This relaxation of the problem
takes the form of a semidefinite program that can be solved in an
efficient manner, at the expense of possibly not obtaining the same
optimum value. The corresponding relaxed program is
\begin{equation}\label{prog:relaxed-primal}
\begin{aligned}
p^{(k)}_{\text{\scriptsize \selectfont \hspace{-0.5pt}guess}}(\bm{\omega}):=&\sup_{\{\tilde{\bm{p}}_{c}\}_c} &&\sum_{ab}\tilde{p}_{ c_{ab}}(a,b|\tilde{x},\tilde{y})\\
&\text{subj.\
to}&&{\sum_c\bm{W}\tilde \bm{p}_{c}}=\bm{\omega}\\
&&&\tilde{\bm{p}}_{c}\in\tilde{\Q}^{(k)}\quad\forall\ c\,.
\end{aligned}
\end{equation}
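Since every level of the hierarchy satisfies no-signalling, replacing $\tilde{\Q}^{(k)}$ in~\eqref{prog:relaxed-primal} by the subnormalized no-signalling polytope yields an even coarser relaxation that is a linear program and still upper-bounds the guessing probability. The sketch below implements this for a toy CHSH instance with binary alphabets, $\bm{W}$ encoding normalization and the CHSH value, and $\tilde{x}=\tilde{y}=0$ (an illustration using \texttt{scipy}, not the code package accompanying this work):

```python
import numpy as np
from scipy.optimize import linprog

def ns_guessing_probability(chsh_value):
    """LP upper bound on p_guess(AB | x=y=0) over no-signalling boxes
    with a fixed CHSH value.  Variables: subnormalized boxes
    p~_c(a,b|x,y), with c in {0,..,3} labelling Eve's guess (a*, b*)."""
    def idx(c, a, b, x, y):
        return (((c * 2 + a) * 2 + b) * 2 + x) * 2 + y

    n = 64
    A_eq, b_eq = [], []

    # No-signalling constraints, for each subnormalized box c.
    for c in range(4):
        for b in range(2):
            for y in range(2):
                row = np.zeros(n)
                for a in range(2):
                    row[idx(c, a, b, 0, y)] += 1
                    row[idx(c, a, b, 1, y)] -= 1
                A_eq.append(row); b_eq.append(0.0)
        for a in range(2):
            for x in range(2):
                row = np.zeros(n)
                for b in range(2):
                    row[idx(c, a, b, x, 0)] += 1
                    row[idx(c, a, b, x, 1)] -= 1
                A_eq.append(row); b_eq.append(0.0)

    # W p = omega: normalization and CHSH value of the total box.
    row = np.zeros(n)
    for c in range(4):
        for a in range(2):
            for b in range(2):
                row[idx(c, a, b, 0, 0)] = 1.0
    A_eq.append(row); b_eq.append(1.0)

    row = np.zeros(n)
    for c in range(4):
        for a in range(2):
            for b in range(2):
                for x in range(2):
                    for y in range(2):
                        row[idx(c, a, b, x, y)] = \
                            (-1) ** (a + b) * (-1 if x == y == 1 else 1)
    A_eq.append(row); b_eq.append(chsh_value)

    # Objective: Eve guesses correctly, i.e. c = (a, b), on inputs (0, 0).
    obj = np.zeros(n)
    for a in range(2):
        for b in range(2):
            obj[idx(2 * a + b, a, b, 0, 0)] = -1.0  # linprog minimizes

    res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n, method="highs")
    return -res.fun

print(round(ns_guessing_probability(2.0), 6))  # -> 1.0 (classical value)
print(round(ns_guessing_probability(4.0), 6))  # -> 0.5 (the PR box)
```

At the classical value $S=2$ a deterministic box lets Eve predict both outputs, while at $S=4$ the PR box is the unique no-signalling point and its outputs on $(0,0)$ are uniformly correlated, so the bound drops to $1/2$; the quantum SDP relaxations of~\eqref{prog:relaxed-primal} tighten this LP bound at intermediate violations.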
This program has a dual. In Appendix~\ref{app:cones} we show that
there is an alternative program with the same properties\footnote{In
particular, the weak duality statement holds.} as the standard dual.
To specify this, we define the set $\mathcal{V}^{(k)}$ of \emph{valid constraint
vectors at level $k$} by the set of vectors $\v$ for which there exists
${\bm{p}}\in\mathcal{Q}^{(k)}$ such that $\bm{W}{\bm{p}}=\v$.
The alternative dual then takes the form
\begin{equation}\label{prog:relaxed-dual}
\begin{aligned}
d^{(k)}_{\text{\scriptsize \selectfont \hspace{-0pt}guess}}(\bm{\omega}):=&\inf_{\l} &&\l\cdot\bm{\omega}\\
&\text{subj.\
to}&
&p^{(k)}_{\text{\scriptsize \selectfont \hspace{-0.5pt}guess}}(\v)\leq \l\cdot\v,\ \ \forall\ \v\in\mathcal{V}^{(k)},
\end{aligned}
\end{equation}
with $\l\in\mathbb R^{\|\bm{\omega}\|_0}$. Since the NPA hierarchy forms a sequence of outer
approximations to the set of quantum correlations,
$\mathcal{Q}^{(1)}\supseteq\mathcal{Q}^{(2)}\supseteq\dots\supseteq\mathcal{Q}$, the relaxed
guessing probability provides an upper bound on the true guessing
probability, i.e., $p_{\mathrm{guess}}(\bm{\omega})\leq p^{(k)}_{\text{\scriptsize \selectfont \hspace{-0.5pt}guess}}(\bm{\omega})$. Combined
with~\eqref{eq:hmin}, one can use the relaxed programs to compute
valid device-independent lower bounds on $H_{\min}$.
Programs~\eqref{prog:relaxed-primal} and~\eqref{prog:relaxed-dual} are
parameterized by a vector $\bm{\omega}$. We denote a feasible point of the dual
program parameterized by $\bm{\omega}$ by $\l_{\w}$. Note that for our later
analysis we only need $\l_{\w}$ to be a feasible point of the dual
program; we do not require it to be optimal.\footnote{An optimal choice of $\l$ for \eqref{prog:relaxed-dual} may not even exist.}
\subsection{Devices and nonlocal games}\label{sec:devices-and-games}
Device-independent protocols involve a series
of interactions with some \emph{untrusted devices}. A \emph{device}
$\mathcal{D}$ refers to some physical system that receives classical inputs
and produces classical outputs. Furthermore, we say that $\mathcal{D}$ is
\emph{untrusted} if the mechanism by which $\mathcal{D}$ produces the outputs
from the inputs need not be characterized. During the protocol, the user interacts with their untrusted devices within the following scenario:\footnote{One does not have to recreate this scenario exactly in order to perform the protocol. Instead, the given scenario establishes one situation in which the protocol remains secure (see Def$.$~\ref{def:security} for a precise definition of security).}
\begin{enumerate}
\item The protocol is performed within a secure lab from which
information can be prevented from leaking.
\item This lab can be partitioned into disconnected sites
(one controlled by Alice and one by Bob).
\item The user can send information freely between these sites without
being overheard, while at the same time, they can prevent
unwanted information transfer between the sites.\footnote{In this
work we need to ensure that the user's devices are unable
to communicate at certain points of the protocol (when Bell tests
are being done), but not at others (e.g., when entanglement is
being distributed). However, they should never be allowed to
send any information outside the lab after the protocol begins.}
\item The user has two devices to which they can provide inputs (taken
from alphabets $\mathcal{X}$ and $\mathcal{Y}$) and receive outputs (from alphabets
$\mathcal{A}$ and $\mathcal{B}$).
\item These devices operate according to quantum theory, i.e.,
$\bm{p}_{AB|XY}\in\Q_{\A\B|\X\Y}$. Any eavesdropper is also limited by
quantum theory\footnote{In parts of this paper we allow the
eavesdropper limited additional power---the bounds will then still
apply if the eavesdropper is limited by quantum theory.}. We use
$\mathfrak{D}_{ABE}$ to denote the collection of devices (including any held
by an eavesdropper) and refer to this as an \emph{untrusted device
network}.
\item The user has an initial source of private random numbers and a
trusted device for classical information processing.
\end{enumerate}
One of the key advantages of a device-independent protocol is that,
since no assumptions are made about the inner workings of the devices
used, the protocol checks on-the-fly that the devices are working
sufficiently well. Such protocols hence remain resilient to many
side-channel attacks, malfunctioning devices and prior tampering. The idea behind their security is that by testing that
the devices exhibit `nonlocal' correlations, their internal workings
are sufficiently restricted to enable the task at hand.
In this work, we formulate the testing of the devices through
\emph{nonlocal} games. A nonlocal game is initiated by a referee who
sends the two players their own question chosen according to some
distribution, $\mu$. The players then respond with their answers
chosen from $\mathcal{A}$ and $\mathcal{B}$ respectively. Using the predefined scoring
rule $V$, the referee then announces whether or not they won the
game. The game is referred to as \emph{nonlocal} because prior to
receiving their questions, the players are separated and unable to
communicate until they have given their answers. The question sets,
answer sets, distribution $\mu$ and the scoring rule $V$ are all
public knowledge. Moreover, the players are allowed to confer prior to
the start of the game.
\begin{definition}\label{def:nonlocalgame}
Let $\mathcal{A}, \mathcal{B}, \mathcal{X}, \mathcal{Y}$ and $\mathcal{V}$ be finite sets. A (two-player) \textit{nonlocal game} $\mathcal{G}=(\mu,V)$ (on $\A\B\X\Y$) consists of a set of question
pairs $(x,y) \in \mathcal{X}\mathcal{Y}$ chosen according to some probability
distribution $\mu:\mathcal{X}\mathcal{Y} \rightarrow [0,1]$, a set of answer pairs
$(a,b) \in \mathcal{A}\mathcal{B}$ and a scoring function
$V:\mathcal{A}\mathcal{B}\mathcal{X}\mathcal{Y}\rightarrow \mathcal{V}$. A \emph{strategy} for $\mathcal{G}$ is a conditional distribution
$\bm{p}\in \P_{\A\B|\X\Y}$ defined on the question and answer sets.
\end{definition}
\begin{remark}
We will abuse notation and use the symbol $\mathcal{G}$ to refer to both the nonlocal game and the set of possible scores. I.e., we may refer to the players receiving a score $c \in \mathcal{G}$. Furthermore, we denote the number of different scores by $|\mathcal{G}|$.
\end{remark}
If the players play $\mathcal{G}$ using the strategy $\bm{p}$, then this induces an expected frequency distribution $\bm{\omega}_{\mathcal{G}}$ over the set of possible scores. That is,
\begin{equation}\label{eq:expected-freq-dist}
\omega_{\mathcal{G}}(c) = \sum_{abxy} \mu(x,y)p(a,b|x,y)\,\delta_{V(a,b,x,y), c}
\end{equation}
for each $c \in \mathcal{G}$. The expected frequency distribution, $\bm{\omega}_{\mathcal{G}}$, will be the figure of merit by which we evaluate the performance of our untrusted devices. We denote the set of possible frequency distributions achievable by the agents whilst playing according to quantum strategies by $\mathcal{Q}_\mathcal{G}$.
\begin{example}[Extended CHSH game $(\mathcal{G}_{\mathrm{CHSH}})$]\label{ex:chsh}
The \emph{extended CHSH game} has appeared already in the device-independent literature in the context of QKD (see, e.g.,~\cite{ABGMPS}). It extends the standard CHSH game to include a correlation check between one of Alice's CHSH inputs and an additional input from Bob. It is defined by the question-answer sets $\mathcal{X}=\{0,1\}$, $\mathcal{Y}=\{0,1,2\}$ and $\mathcal{A}=\mathcal{B}=\{0,1\}$, the scoring set $\mathcal{V} = \{c_{\mathrm{CHSH}}, c_{\mathrm{align}},0\}$ and the scoring rule
\begin{align}
V_{\mathrm{CHSH}}(a,b,x,y) &:=
\begin{cases}
c_{\mathrm{CHSH}} \qquad \qquad &\text{if } x\cdot y = a\oplus b \text{ and } y\neq2 \\
c_{\mathrm{align}} \qquad \qquad &\text{if } (x,y) = (0,2) \text{ and } a \oplus b = 0 \\
0 \qquad \qquad &\text{otherwise}.
\end{cases}
\end{align}
The input distribution we consider is defined by $\mu_{\mathrm{CHSH}}(x,y)=\frac18$ for
$(x,y) \in\{0,1\}^2$, $\mu_{\mathrm{CHSH}}(0,2)=\frac12$ and $\mu_{\mathrm{CHSH}}(x,y)=0$
otherwise. This game is equivalent to choosing uniformly at random either to play the $\mathrm{CHSH}$ game or
to play the game corresponding to checking the alignment of the inputs $(0,2)$, and then proceeding with the chosen
game. The frequency distribution then tells us the relative frequencies with which we win each game. The motivation behind $\mathcal{G}_{\mathrm{CHSH}}$ can be understood by considering a schematic of an ideal implementation on a bipartite qubit system as given in Fig$.$~\ref{fig:EkertDiag}. If we observe the maximum winning probability for the CHSH game, as well as perfect alignment for the inputs $(0,2)$, then the inputs $(\tilde{x},\tilde{y}) = (1,2)$ should produce two perfectly uniform bits.
\begin{figure}[t!]
\begin{center}
\begin{tikzpicture}[scale=1]
\begin{scope}[shift={(0.5,0)}]%
\draw[thick, rounded corners = 2, fill = lightgray] (-1,1,1) rectangle (2,5.5,1);
\draw[thick, rounded corners = 2, fill = lightgray] (-1,5.5,1) -- (-1,5.5,0) -- (2,5.5,0) -- (2,5.5,1) -- cycle;
\draw[thick, rounded corners = 2, fill = lightgray] (2,5.5,1) -- (2,5.5,0) -- (2,1,0) -- (2,1,1) -- cycle;
\draw[thick, rounded corners = 2, fill = lightgray] (9.75,1,1) rectangle (12.75,5.5,1);
\draw[thick, rounded corners = 2, fill = lightgray] (9.75,5.5,1) -- (9.75,5.5,0) -- (12.75,5.5,0) -- (12.75,5.5,1) -- cycle;
\draw[thick, rounded corners = 2, fill = lightgray] (12.75,5.5,1) -- (12.75,5.5,0) -- (12.75,1,0) -- (12.75,1,1) -- cycle;
\draw (0.25,5.3) node {\tbf{\textit{Alice's device}}};
\draw (11,5.3) node {\tbf{\textit{Bob's device}}};
\end{scope}
\def\cx{0}
\def\cy{4}
\draw[fill=gray!25] (\cx,\cy) circle (.5cm);
\draw[dotted] (\cx-0.5,\cy) -- (\cx+.5,\cy);
\draw[dotted] (\cx,\cy-.5) -- (\cx,\cy+.5);
\draw (\cx+1.2,\cy) node {$X=0$};
\draw (\cx,\cy-.7) node {$\sigma_z$};
\draw [<->,thick] (\cx,\cy+.5) -- (\cx,\cy-.5);
\def\cx{0}
\def\cy{2}
\draw[fill=gray!25] (\cx,\cy) circle (.5cm);
\draw[dotted] (\cx-0.5,\cy) -- (\cx+.5,\cy);
\draw[dotted] (\cx,\cy-.5) -- (\cx,\cy+.5);
\draw (\cx+1.2,\cy) node {$X=1$};
\draw (\cx,\cy-.7) node {$\sigma_x$};
\draw [<->,thick] (\cx+.5,\cy) -- (\cx-.5,\cy);
\def\cx{12}
\def\cy{4.5}
\draw[fill=gray!25] (\cx,\cy) circle (.5cm);
\draw[dotted] (\cx-0.5,\cy) -- (\cx+.5,\cy);
\draw[dotted] (\cx,\cy-.5) -- (\cx,\cy+.5);
\draw (\cx-1.2,\cy) node {$Y=0$};
\draw (\cx,\cy-.7) node {$\sigma_{\pi/4}$};
\draw [<->,thick] (\cx+0.35,\cy+0.35) -- (\cx-0.35,\cy-0.35);
\def\cx{12}
\def\cy{3}
\draw[fill=gray!25] (\cx,\cy) circle (.5cm);
\draw[dotted] (\cx-0.5,\cy) -- (\cx+.5,\cy);
\draw[dotted] (\cx,\cy-.5) -- (\cx,\cy+.5);
\draw (\cx-1.2,\cy) node {$Y=1$};
\draw (\cx,\cy-.7) node {$\sigma_{-\pi/4}$};
\draw [<->,thick] (\cx-0.35,\cy+0.35) -- (\cx+0.35,\cy-0.35);
\def\cx{12}
\def\cy{1.5}
\draw[fill=gray!25] (\cx,\cy) circle (.5cm);
\draw[dotted] (\cx-0.5,\cy) -- (\cx+.5,\cy);
\draw[dotted] (\cx,\cy-.5) -- (\cx,\cy+.5);
\draw (\cx-1.2,\cy) node {$Y=2$};
\draw (\cx,\cy-.7) node {$\sigma_{z}$};
\draw [<->,thick] (\cx,\cy+.5) -- (\cx,\cy-.5);
\draw (6,-0.5) node (state) {$\ket{\psi}_{AB} = \frac{1}{\sqrt{2}} (\ket{00} +\ket{11})$};
\draw[->, line width=0.4mm, dotted] (state) .. controls (1, -.5) and (0,-.5) .. (0.5,0.5);
\draw[->, line width=0.4mm, dotted] (state) .. controls (10, -.5) and (12,-.5) .. (11.5,0.5);
\node[scale = 0.75] (table) at (6.2,4.5) {%
\begin{tabular}{c c|c|c|c|c|c|c|}
\multicolumn{2}{l|}{\parbox[t][4mm]{4mm}{\multirow{2}{*}{\rotatebox[origin=l]{45}{$P_{AB|XY}$\,}}}} & \multicolumn{2}{c|}{$Y=0$} & \multicolumn{2}{c|}{$Y=1$} & \multicolumn{2}{c|}{$Y=2$} \tabularnewline
\hhline{~~------}
&& $B=0$ & $B=1$ & $B=0$ & $B=1$ & $B=0$ & $B=1$ \tabularnewline \hline
\multicolumn{1}{c|}{\bigstrut[1] \multirow{2}{*}{\rotatebox[origin=c]{90}{$X=0$\,\,\,\,\,}}} &\parbox[c][12mm]{1mm}{\rotatebox[origin=b]{90}{$A = 0$}}&
\cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$ }& {\large$\frac{2 - \sqrt{2}}{8}$ }& \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$}& {\large$\frac{2 - \sqrt{2}}{8}$} & \cellcolor{blue!10}{\large$\frac{1}{2}$ }&{\large $0$}\tabularnewline
\hhline{~-------}
\multicolumn{1}{c|}{~} & \parbox[c][12mm]{1mm}{\rotatebox[origin=c]{90}{$A = 1$}}
&{\large $\frac{2 - \sqrt{2}}{8}$} & {\cellcolor{green!10}}{\large$\frac{2 + \sqrt{2}}{8}$} & {\large$\frac{2 - \sqrt{2}}{8}$} & \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$} & {\large$0$} & \cellcolor{blue!10}{\large$\frac{1}{2}$}\tabularnewline
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{\rotatebox[origin=c]{90}{$X=1$\,\,\,\,\,}}} &\parbox[c][12mm]{1mm}{\rotatebox[origin=b]{90}{$A = 0$}}
& \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$} & {\large$\frac{2 - \sqrt{2}}{8}$} & {\large$\frac{2 - \sqrt{2}}{8}$ }& \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$} & \cellcolor{red!10}{\large$\frac{1}{4}$} & \cellcolor{red!10}{\large$\frac{1}{4}$}\tabularnewline
\hhline{~-------}
\multicolumn{1}{c|}{~} & \parbox[c][12mm]{1mm}{\rotatebox[origin=c]{90}{$A = 1$}}
& {\large$\frac{2 - \sqrt{2}}{8}$} & \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$ }& \cellcolor{green!10}{\large$\frac{2 + \sqrt{2}}{8}$} & {\large$\frac{2 - \sqrt{2}}{8}$} & \cellcolor{red!10}{\large$\frac{1}{4}$ }& \cellcolor{red!10}{\large$\frac{1}{4}$}\tabularnewline
\hline
\end{tabular}
};
\node(CHSH) [below left =0.cm and -2.1cm of table,align=left] {CHSH};
\node(chshbox)[draw, left=.1cm of CHSH, fill=green!10] {};
\node(align) [below right =0.cm and -1.31cm of CHSH, align=left] {Alignment};
\node[draw, left=.12cm of align, fill=blue!10] {};
\node(gen) [below right =-0.cm and -2.1cm of table] {Generation};
\node[draw, left=.12cm of gen, fill=red!10] {};
\end{tikzpicture}
\end{center}
\caption[Qubit implementation of extended CHSH game]{A measurement schematic for a qubit implementation of $\mathcal{G}_{\mathrm{CHSH}}$. Measurements are depicted in the $x$-$z$ plane of the
Bloch-sphere with
$\sigma_{\varphi} = \cos(\varphi) \sigma_z + \sin(\varphi) \sigma_x$
for $\varphi \in (-\pi,\pi]$. Using the maximally entangled state
$\ket{\psi}_{AB}=(\ket{00}+\ket{11})/\sqrt{2}$ with the measurements
depicted, one has a frequency distribution of
$\bm{\omega}_{\mathcal{G}} = \tfrac12~\left(\tfrac12 + \tfrac{\sqrt{2}}{4}, 1, \tfrac12 - \tfrac{\sqrt{2}}{4}\right)$, where the scores are ordered $(c_{\mathrm{CHSH}}, c_{\mathrm{align}}, 0)$. The setup achieves Tsirelson's bound for the CHSH game as well as
perfect correlations for the $X=0$ and $Y=2$ inputs. In addition,
self-testing results~\cite{PopescuRohrlich} give a converse result:
these scores completely characterize the devices up to local
isometries. This implies that the state used by the devices is
uncorrelated with an adversary and that the measurement pair
$(X,Y)=(1,2)$ yields uniformly random results, certifying the
presence of $2$ bits of private randomness in the outputs.}
\label{fig:EkertDiag}
\end{figure}
\end{example}
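The ideal frequency distribution quoted in the caption of Fig$.$~\ref{fig:EkertDiag} can be verified numerically by evaluating~\eqref{eq:expected-freq-dist} for the depicted qubit strategy. The following Python sketch is purely illustrative; the labels `chsh`, `align` and `zero` stand for the scores $c_{\mathrm{CHSH}}$, $c_{\mathrm{align}}$ and $0$.

```python
import numpy as np

# Pauli matrices and the maximally entangled state |phi+>.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def proj(obs, a):
    """Projector onto the (-1)**a eigenspace of a +-1-valued observable."""
    return (I2 + (-1) ** a * obs) / 2

def sigma(angle):
    """sigma_phi = cos(phi) sigma_z + sin(phi) sigma_x."""
    return np.cos(angle) * sz + np.sin(angle) * sx

A_meas = {0: sz, 1: sx}                               # Alice: X = 0, 1
B_meas = {0: sigma(np.pi / 4), 1: sigma(-np.pi / 4),  # Bob: Y = 0, 1
          2: sz}                                      #      Y = 2

def p(a, b, x, y):
    """Born-rule probability p(a, b | x, y) on |phi+>."""
    M = np.kron(proj(A_meas[x], a), proj(B_meas[y], b))
    return float(np.real(phi @ M @ phi))

# Input distribution mu_CHSH and the scoring rule V_CHSH.
mu = {(x, y): 1 / 8 for x in (0, 1) for y in (0, 1)}
mu[(0, 2)] = 1 / 2

def score(a, b, x, y):
    if y != 2 and x * y == (a ^ b):
        return "chsh"
    if (x, y) == (0, 2) and (a ^ b) == 0:
        return "align"
    return "zero"

omega = {"chsh": 0.0, "align": 0.0, "zero": 0.0}
for (x, y), w in mu.items():
    for a in (0, 1):
        for b in (0, 1):
            omega[score(a, b, x, y)] += w * p(a, b, x, y)

print(omega)
```

The printed values match $\big(\tfrac{2+\sqrt{2}}{8},\ \tfrac12,\ \tfrac{2-\sqrt{2}}{8}\big)$, i.e., the distribution $\bm{\omega}_{\mathcal{G}}$ given in the figure caption.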
\subsection{Device-independent randomness expansion protocols and their security}
\label{sec:re-protocols}
A device-independent randomness expansion protocol is a procedure by
which one attempts to use a uniform, trusted seed, $\bm{\mathrm{D}}$, to
produce a longer uniform bit-string, $\bm{\mathrm{Z}}$, through repeated
interactions with some untrusted devices. We consider so-called
\textit{spot-checking} protocols, which involve two round types:
test-rounds, during which one attempts to produce certificates of
nonlocality, and generation rounds in which a fixed input is given to
the devices and the outputs are recorded. By choosing the rounds
randomly according to a distribution heavily favouring generation
rounds, we are able to reduce the size of the seed whilst sufficiently
constraining the devices' behaviour, guaranteeing the presence of
randomness in the outcomes (except with some small probability).
Using the setup described in \sec\ref{sec:devices-and-games}, our template randomness expansion protocol consists of two main steps.
\begin{enumerate}
\item \textbf{Accumulation}: In this phase the user repeatedly
interacts with the separated devices. Each interaction is randomly
chosen to be a generation round or a test round in a coordinated way
using the initial random seed.\footnote{For example, a central source of randomness could be used
to choose the round type. This information could then be communicated to each party (in such a way that
the devices do not learn this choice).} On generation rounds the devices are
provided with some fixed inputs $(\tilde{x},\tilde{y})\in\mathcal{X}\mathcal{Y}$, whereas during test rounds,
the testing procedure specific to the protocol is followed. After many
interactions, the recorded outputs are concatenated to give
$\AA\bm{\mathrm{B}}$. Using the statistics collected during test rounds, a
decision is made about whether to abort or not based on how close
the observations are to some pre-defined expected device behaviour.
\item \textbf{Extraction}: Subject to the protocol not aborting in the
accumulation step, a quantum-proof randomness extractor is applied
to $\AA\BB$. This maps the partially random $\AA\BB$ to a shorter
string $\bm{\mathrm{Z}}$ that is the output of the protocol.
\end{enumerate}
We define security of a randomness expansion protocol according to a
composable
definition~\cite{Canetti,Can01,PW,Ben-OrMayers,PR2014}. Using
composable security ensures that the output randomness can be used in
any other application with only an arbitrarily small probability of it
being distinguishable from perfect randomness. To make this more
precise, consider a hypothetical device that outputs a string $\bm{\mathrm{Z}}$ that
is uniform and uncorrelated with any information held by an
eavesdropper. In other words, it outputs $\tau_m\otimes\rho_E$, where
$\tau_m$ is the maximally mixed state on $m$ qubits. The ideal
protocol is defined as the protocol that involves first doing the real
protocol, then, in the case of no abort, replacing the output with a
string of the same length taken from the hypothetical device. The
protocol is then said to be $\varepsilon_{\mathrm{sound}}$-secure ($\varepsilon_{\mathrm{sound}}$ is called
the \emph{soundness error}) if, when the user either implements the
real or ideal protocol with probability $\frac{1}{2}$, the maximum
probability that a distinguisher can guess which is being implemented
is at most $\frac{1+\varepsilon_{\mathrm{sound}}}{2}$. If $\varepsilon_{\mathrm{sound}}$ is small, then the
real and ideal protocols are virtually indistinguishable. Defining
the ideal as above ensures that the real and ideal protocols can never
be distinguished by whether or not they abort. We refer
to~\cite{PR2014} for further discussion of composability in a related
context (that of QKD).
There is a second important parameter of any protocol, its
\emph{completeness error}, which is the probability that an ideal
implementation of the protocol leads to an abort. It is important for
a protocol to have a low completeness error in addition to a low soundness error since a protocol that always aborts is vacuously secure.
\begin{definition}\label{def:security}
Consider a randomness expansion protocol whose output is denoted by
$Z$. Let $\Omega$ be the event that the protocol does not
abort. The protocol is an $(\varepsilon_{\mathrm{sound}},\varepsilon_{\mathrm{comp}})$-randomness expansion
protocol if it satisfies the following two conditions.
\begin{enumerate}
\item \textbf{Soundness}:
\begin{equation}\label{eq:sound_def}
\frac{1}{2}\mathrm{Pr}[\Omega]\cdot\|\rho_{ZE}-\tau_m\otimes\rho_E\|_1\leq \varepsilon_{\mathrm{sound}},
\end{equation}
where $E$ is an arbitrary quantum register (which could have been
entangled with the devices used at the start of the protocol), $m$
is the length of the output string $Z$ and $\tau_m$ is the maximally mixed state on a system of dimension $2^m$.
\item \textbf{Completeness}: There exists a set of quantum states and
  measurements such that, if the protocol is implemented using them, then
\begin{equation}
\mathrm{Pr}[\Omega]\geq 1-\varepsilon_{\mathrm{comp}}.
\end{equation}
\end{enumerate}
\end{definition}
Although we use a composable security definition to ensure that any
randomness output by the protocol can be used in any scenario,
importantly, this may not apply if the devices used in the protocol
are subsequently reused~\cite{bckone}. Thus, after the protocol the
devices should be kept shielded and not reused until such time as the
randomness generated no longer needs to be kept secure. How best to
resolve this remains an open problem: the Supplemental Material
of~\cite{bckone} presents candidate protocol modifications (and
modifications to the notion of composability) that may circumvent such
problems.
\subsection{Entropy accumulation}
In order to bound the smooth min entropy
$H_{\min}^{\epsilon}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}} E)$ accumulated during the protocol
we employ the EAT~\cite{DFR, DF}. Roughly speaking, the EAT says that
this min-entropy is proportional to the number of rounds, up to square
root correction factors. The proportionality constant is the
single-round conditional von Neumann entropy optimized over all states
that can give rise to the observed scores. In its full form, the EAT
is an extension of the asymptotic equipartition property~\cite{TCR} to
a particular non-i$.$i$.$d$.$ regime. For the purposes of randomness expansion
we only require a special case of the EAT, which we detail later in
this section.
With the goal of maximising our entropic yield, we use the recently
improved statement of the entropy accumulation
theorem~\cite{DF}.\footnote{We discuss this EAT statement and compare
it to alternatives in Appendix~\ref{app:EAT}.} For completeness we present
the relevant statements including the accumulation procedure (see also~\cite{arnon2018practical}).
\subsubsection{The entropy accumulation procedure}
\label{sec:accumulation-procedure}
The entropy accumulation procedure prescribes how the user interacts
with their untrusted devices and collects data from them. Before
beginning this procedure a nonlocal game $\mathcal{G}=(\mu,V)$ that is compatible with
the alphabets of the devices is selected.
A \emph{round} within the entropy accumulation procedure consists of
the user giving an input to each of their devices and recording the
outputs. We use subscripts on random variables to indicate the round
that they are associated with, i.e., $X_iY_i$ are the random variables
describing the joint device inputs for the $i^{\text{th}}$ round. In
addition, boldface will be used to indicate that a random variable
represents the concatenation over all $n$ rounds of the protocol,
$\bm{\mathrm{X}} = X_1X_2\dots X_n$.
The accumulation procedure consists of $n \in \mathbb{N}$ separate rounds of interaction with the untrusted devices. During the $i^{\text{th}}$ round, a random variable $T_i \sim \text{Bernoulli}(\gamma)$ is sampled, for some fixed $\gamma \in (0,1)$, indicating whether the round will be a \emph{generation round} or a \emph{test round}.

With probability $1-\gamma$ we have $T_i = 0$ and the round is a generation round. During a generation round, the user supplies the respective devices with the fixed generation inputs $(\tilde{x},\tilde{y}) \in \mathcal{X}\mathcal{Y}$, recording $X_iY_i = (\tilde{x},\tilde{y})$. They record the outputs received from the devices as $A_i$ and $B_i$ respectively and record the round's score as $C_i = \perp$. With probability $\gamma$ we have $T_i = 1$ and the round is a test round. During a test round, the inputs $X_iY_i$ are sampled according to the distribution specified by the chosen nonlocal game, the sampled inputs are fed to their respective devices and the outputs received are recorded as $A_iB_i$. The score is computed and recorded as $C_i = V(A_i,B_i,X_i,Y_i)$.

The \emph{transcript for round $i$} is the tuple $(A_i,B_i,X_i,Y_i,T_i,C_i)$. After $n$ rounds, the complete transcript for the accumulation procedure is $(\AA,\bm{\mathrm{B}},\bm{\mathrm{X}},\bm{\mathrm{Y}},\bm{\mathrm{T}},\bm{\mathrm{C}})$. We denote by $\mathcal{C}$ the set of possible values that $C_i$ can take, i.e., $\mathcal{C} = \mathcal{G} \cup \{\perp\}$.
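The round-by-round bookkeeping just described can be sketched in code. The following Python fragment is illustrative only: the device is treated as an arbitrary black box, the string `"perp"` stands for $\perp$, and all names are ours.

```python
import random

def accumulate(device, mu_pairs, V, n, gamma, gen_inputs, seed=0):
    """Spot-checking accumulation: n rounds of interaction with an
    untrusted black box device(x, y) -> (a, b).

    mu_pairs      -- list of ((x, y), weight) pairs describing mu
    V(a, b, x, y) -- scoring rule of the chosen nonlocal game
    Returns the transcript (A, B, X, Y, T, C), where C_i = "perp" on
    generation rounds."""
    rng = random.Random(seed)
    pairs, weights = zip(*mu_pairs)
    A, B, X, Y, T, C = [], [], [], [], [], []
    for _ in range(n):
        t = 1 if rng.random() < gamma else 0       # T_i ~ Bernoulli(gamma)
        if t == 1:
            x, y = rng.choices(pairs, weights)[0]  # test round: sample from mu
        else:
            x, y = gen_inputs                      # generation round: fixed inputs
        a, b = device(x, y)
        c = V(a, b, x, y) if t == 1 else "perp"
        A.append(a); B.append(b); X.append(x); Y.append(y)
        T.append(t); C.append(c)
    return A, B, X, Y, T, C
```

For instance, calling `accumulate` with a trivial deterministic device `lambda x, y: (0, 0)` and the uniform CHSH input distribution produces an $n$-round transcript in which every generation round carries the fixed inputs and the score $\perp$; note this toy device is not the quantum strategy of Fig$.$~\ref{fig:EkertDiag}.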
After the $n$-round transcript has been obtained, the user looks to determine the performance of the untrusted devices and, in turn, certify a lower bound on the total entropy produced, $H_{\min}^{\epsilon}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})$. To this end, the user computes the \emph{empirical frequency distribution}
\begin{equation}\label{eq:frequency-distribution-scores}
F_{\CC}(c) = \frac{1}{n}\sum_{i=1}^{n} \delta_{c,C_i}.
\end{equation}
Prior to the accumulation step, the user fixes some frequency
distribution $\w$ corresponding to an expected (or hoped-for) behaviour. Should the devices behave in an i$.$i$.$d$.$ manner according to $\w$, concentration bounds tell us that the empirical frequency distribution $\bm{F}_{\CC}$ should be close to $\bm{\omega}$. With this in mind, we define the event that the protocol does not abort by
\begin{equation}\label{eq:success-event}
\Omega = \{\bm{\mathrm{C}} \mid \gamma (\bm\w(\mathcal{G}) - \d) < \bm{F}_{\CC}(\mathcal{G}) < \gamma (\bm\w(\mathcal{G}) + \d) \},
\end{equation}
where $\d \in (0,1)^{|\mathcal{G}|}$ is a vector of confidence-interval widths satisfying $\bm 0 < \d < \bm\w(\mathcal{G})$, with all vector inequalities interpreted element-wise.
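The empirical frequency distribution~\eqref{eq:frequency-distribution-scores} and the non-abort event $\Omega$ of~\eqref{eq:success-event} translate directly into code. The Python sketch below is illustrative only; the string `"perp"` stands for $\perp$ and all names are ours.

```python
from collections import Counter

def empirical_frequencies(C, score_set):
    """F_C(c): the fraction of the n rounds whose recorded score is c."""
    counts, n = Counter(C), len(C)
    return {c: counts[c] / n for c in score_set}

def accepts(C, score_set, omega, gamma, delta):
    """The non-abort event Omega: every game score c (i.e., c != perp)
    must have empirical frequency strictly inside
    gamma * (omega(c) - delta(c), omega(c) + delta(c))."""
    F = empirical_frequencies(C, score_set)
    return all(
        gamma * (omega[c] - delta[c]) < F[c] < gamma * (omega[c] + delta[c])
        for c in score_set if c != "perp"
    )
```

Note the factor of $\gamma$: because only a $\gamma$-fraction of rounds are tests, the empirical frequency of each game score is compared against $\gamma$ times the target distribution.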
\subsubsection{The entropy accumulation theorem}\label{sec:EAT-defns}
To complete the protocol, uniform randomness needs to be extracted
from the partially random outputs. Doing so requires the user to
assume a lower bound on the smooth min-entropy (conditioned on any side
information held by an adversary) contained in the devices' outputs
when the protocol does not abort. If $\varepsilon_{\mathrm{sound}}$ is very small, then
the assumption must be correct with near certainty. The EAT
provides a method by which one can compute such a lower
bound. Loosely, the EAT states that if the interaction between the
honest parties occurs in a sequential manner (as described in
\sec\ref{sec:accumulation-procedure}), then with high probability the
uncertainty an adversary has about the outputs is \emph{close} to
their total average uncertainty. As a mathematical statement it is a
particular example of the more general phenomenon of
\emph{concentration of measure} (see~\cite{ledoux2005concentration}
for a general overview). In order to state the EAT precisely, we first
require a few definitions.
\begin{definition}[EAT channels]\label{def:eat-channels}
A set of \emph{EAT channels} $\{\mathcal{N}_i\}_{i=1}^n$ is a collection of trace-preserving and completely-positive maps $\mathcal{N}_i:\S(R_{i-1})\rightarrow\S(A_iB_iX_iY_iC_i R_i)$ such that for every $i \in [n]$:
\begin{enumerate}
\item $A_i,B_i,X_i,Y_i$ and $C_i$ are finite dimensional classical
systems, $R_i$ is an arbitrary quantum system and $C_i$ is the
output of a deterministic function of the classical registers
$A_i,B_i,X_i$ and $Y_i$.
\item For any initial state $\rho_{R_0 E}$, the final state $\rho_{\AA\bm{\mathrm{B}}\bm{\mathrm{X}}\bm{\mathrm{Y}}\bm{\mathrm{C}} E} = \ptr{R_n}{((\mathcal{N}_n \circ \dots \circ \mathcal{N}_1)\otimes \mathcal{I}_{E}) \rho_{R_0 E}}$ fulfils the Markov chain condition $I(A^{i-1}B^{i-1}\!\! :\! X_iY_i | X^{i-1}Y^{i-1}E ) = 0$ for every $i \in [n]$.
\end{enumerate}
\end{definition}
The EAT channels formalise the notion of
interaction within the protocol. The first condition in
Def$.$~\ref{def:eat-channels} specifies the nature of the information
present within the protocol and, in particular, it restricts the honest
parties' inputs to their devices to be classical in nature. The
arbitrary quantum register $R_i$ represents the quantum
state stored by the separate devices after the $i^{\text{th}}$ round. The second condition
specifies the sequential nature of the protocol. The channels $\mathcal{N}_i$
describe the joint action of both devices and include the generation
of the randomness needed to choose the settings. The Markov chain
condition implies that the inputs to the devices presented by the
honest parties are conditionally independent of the previous outputs
they have received. Note that by using a trusted private seed to
choose the inputs and supplying the inputs sequentially (as is
done in \sec\ref{sec:accumulation-procedure}), this condition will be
satisfied.\footnote{A public seed can also be used if the Markov chain conditions can be shown to hold. However, one may need to be more careful when dealing with such a scenario. For example, if the entangled states distributed to the devices come from some third-party source, then it should be clear that the state used within the $i^{\text{th}}$ round was prepared independently of the seed used to generate the inputs $X_{i+1}^nY_{i+1}^n$. This could be achieved by choosing inputs $X_{i+1}Y_{i+1}$ using a public seed that was generated after the $i^{\text{th}}$ entangled state has been distributed.} Finally, the adversary is permitted to hold a purification,
$E$, of the initial state shared by the devices, and the state
evolves with the sequential interaction through the application of the
sequence of EAT channels.
As explained above, the EAT allows the elevation of i$.$i$.$d$.$ analyses to
the non-i$.$i$.$d$.$ setting. To do so requires a so-called \emph{min-tradeoff
function} which, roughly speaking, gives a lower bound on the
single-round von Neumann entropy produced by any devices that, in
expectation, produce some statistics. In the case of the EAT these
distributions are $\{F_{\CC}\}_{\bm{\mathrm{C}}\in\Omega}$, i.e., all frequency
distributions induced from score transcripts $\bm{\mathrm{C}}$ that do not lead to
an aborted protocol. The EAT asserts that, under sequential
interaction, an adversary's uncertainty about the outputs of the
non-i$.$i$.$d$.$ device will (with high probability) be concentrated within
some interval about the uncertainty produced by these i$.$i$.$d$.$ devices. In
particular, a lower bound on this uncertainty can be found by
considering the worst-case i$.$i$.$d$.$ device.
\begin{definition}[Min-tradeoff functions]\label{def:fmin}
Let $\{\mathcal{N}_i\}_{i=1}^n$ be a collection of EAT channels and let
$\mathcal{C}$ denote the common alphabet of the systems
$C_1,\dots,C_n$. An affine function $f: \P_\mathcal{C} \rightarrow \mathbb R$ is a
\emph{min-tradeoff function} for the EAT channels
$\{\mathcal{N}_i\}_{i=1}^n$ if for each $i \in [n]$ it satisfies
\begin{equation}\label{eq:fmin-definition}
f(\bm{p}) \leq \inf_{\sigma_{R_{i-1}R'}: \mathcal{N}_i(\sigma)_{C_i}=\tau_{\bm{p}}} H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)},
\end{equation}
where $\tau_{\bm{p}}:=\sum_{c\in\mathcal{C}}p(c)\ketbra{c}$, $R'$ is a
register isomorphic to $R_{i-1}$ and the infimum over the empty set is
taken to be $+\infty$.
\end{definition}
\begin{remark}\label{rem:protocol-respecting-distribution}
As the probability of testing during the protocol is fixed, the expected frequency distributions will always take the form
\begin{equation}\label{eq:protocol-respecting-distribution-extended}
\bm{p} = \begin{pmatrix}
\gamma \bm{\omega} \\
(1-\gamma)
\end{pmatrix}
\end{equation}
for some $\bm{\omega} \in \Q_{\G}$, where $p(\perp)$ is the final element of $\bm{p}$. Furthermore, the fixed testing probability means that any distribution that results in a finite infimum in~\eqref{eq:fmin-definition} necessarily takes this form. We shall refer to a distribution of the form~\eqref{eq:protocol-respecting-distribution-extended} as a \emph{protocol-respecting} distribution, denoting the set of all such distributions by $\respecting$.
\end{remark}
Particular properties of the min-tradeoff function that appear within the error terms of the EAT are:
\begin{itemize}
\item The maximum value attainable on $\P_{\mathcal{C}}$,
\begin{equation}
\Max{f} := \max_{\bm{p} \in \P_\mathcal{C}} f(\bm{p}).
\end{equation}
\item The minimum value over protocol-respecting distributions,
\begin{equation}
\Min{\left.f\right|_{\respecting}} := \min_{\bm{p} \in \respecting} f(\bm{p}).
\end{equation}
\item The maximum variance over all protocol-respecting distributions,
\begin{equation}
\Var{\left.f\right|_{\respecting}} := \max_{\bm{p} \in \respecting} \sum_{c \in \mathcal{C}} p(c) \left(f(\bm{\mathrm{e}}_c)- f(\bm{p})\right)^2.
\end{equation}
\end{itemize}
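For an affine $f$, these quantities are straightforward to evaluate numerically from its values on the trivial distributions $\bm{\mathrm{e}}_c$. The sketch below (plain Python with hypothetical numbers, not tied to any particular game) evaluates them over a finite sample of protocol-respecting distributions; the true minimum and variance would require optimizing over all of $\respecting$.

```python
import numpy as np

def affine_f(p, f_on_trivial):
    """Evaluate an affine f at the distribution p, given the values
    f(e_c) it takes on the trivial (point-mass) distributions."""
    return float(np.dot(p, f_on_trivial))

def max_f(f_on_trivial):
    # An affine function on a probability simplex is maximized at a vertex.
    return float(np.max(f_on_trivial))

def min_over(f_on_trivial, dists):
    # Minimum of f over a finite sample of protocol-respecting distributions.
    return min(affine_f(p, f_on_trivial) for p in dists)

def var_over(f_on_trivial, dists):
    # max_p sum_c p(c) (f(e_c) - f(p))^2 over the sampled distributions.
    best = 0.0
    for p in dists:
        fp = affine_f(p, f_on_trivial)
        best = max(best, float(np.dot(p, (np.asarray(f_on_trivial) - fp) ** 2)))
    return best

# Hypothetical 3-symbol score alphabet {c1, c2, perp} with gamma = 0.01:
# protocol-respecting distributions have the form (gamma * omega, 1 - gamma).
gamma = 0.01
f_vals = [5.0, -1.0, 0.05]
omegas = [np.array([0.8, 0.2]), np.array([0.5, 0.5])]
dists = [np.concatenate([gamma * w, [1 - gamma]]) for w in omegas]
print(max_f(f_vals), min_over(f_vals, dists), var_over(f_vals, dists))
```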
\begin{theorem}[EAT~\cite{DF}]
\label{thm:EAT}~\newline
Let $\{\mathcal{N}_i\}_{i=1}^n$ be a collection of EAT channels and let
$\rho_{\AA\bm{\mathrm{B}}\bm{\mathrm{I}}\bm{\mathrm{C}} E} = \ptr{R_n}{(( \mathcal{N}_n \circ \dots \circ
\mathcal{N}_1)\otimes \mathcal{I}_{E})\rho_{R_0 E}}$ be the output state after the
sequential application of the channels $\{\mathcal{N}_i\otimes\mathcal{I}_{E}\}_i$ to
some input state $\rho_{R_0 E}$. Let $\Omega \subseteq \mathcal{C}^n$
be some event that occurs with probability $p_{\Omega}$ and let
$\rho_{|_\Omega}$ be the state conditioned on $\Omega$ occurring. Finally let $\epsilon_{s} \in (0,1)$ and $f$ be a valid min-tradeoff function for $\{\mathcal{N}_i\}_i$.
If for all $\bm{\mathrm{C}}\in\Omega$, with $\pr{\bm{\mathrm{C}}}>0$, there is some $t \in \mathbb R$
for which $f(\bm{F}_{\CC})\geq t$, then for any $\beta \in (0,1)$
\begin{equation}\label{eq:EAT-bound}
H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})_{\rho_{|_{\Omega}}} > n t - n (\epsilon_V + \epsilon_K) - \epsilon_\Omega,
\end{equation}
where
\begin{equation}\label{eq:errV}
\epsilon_V := \frac{\beta \ln 2}{2} \left(\log\left(2 |\mathcal{A}\mathcal{B}|^2 +1\right) + \sqrt{ \Var{\left.f\right|_{\respecting}} + 2} \right)^2,
\end{equation}
\begin{equation}\label{eq:errK}
\epsilon_K := \frac{\beta^2}{6(1-\beta)^3\ln 2}\,
2^{\beta(\log |\mathcal{A}\mathcal{B}| + \Max{f} - \Min{\left.f\right|_{\respecting}})}
\ln^3\left(2^{\log |\mathcal{A}\mathcal{B}| + \Max{f} - \Min{\left.f\right|_{\respecting}}} + \mathrm{e}^2\right)
\end{equation}
and
\begin{equation}\label{eq:errW}
\epsilon_\Omega := \frac{1}{\beta}\left(1 - 2 \log(p_{\Omega}\,\epsilon_{s})\right).
\end{equation}
\end{theorem}
\begin{remark}\label{rem:alpha-choice}
As the EAT holds for all $\beta \in (0,1)$ we can numerically optimize
our choice of $\beta$ once we know the values of the other protocol
parameters. However, for large $n$ and small $\gamma$, a short
calculation shows that choosing $\beta \in O(\sqrt{\gamma/n})$ keeps
all the error terms at approximately the same magnitude. In
particular, this choice results in the error scalings: $n\epsilon_V\in O(\sqrt{n/\gamma})$, $n\epsilon_K\in O(1)$ and $\epsilon_\Omega\in O(\sqrt{n/\gamma})$.
\end{remark}
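This optimization is simple to carry out in practice. The following sketch (plain Python, with hypothetical placeholder values for the protocol parameters and min-tradeoff data) evaluates the error terms \eqref{eq:errV}--\eqref{eq:errW} as functions of $\beta$ and scans a logarithmic grid for the choice minimizing the total penalty $n(\epsilon_V+\epsilon_K)+\epsilon_\Omega$:

```python
import math

def eat_penalty(beta, n, log_dim_AB, max_f, min_f, var_f, p_omega, eps_s):
    """Total EAT penalty n*(eps_V + eps_K) + eps_Omega for beta in (0,1).

    log_dim_AB = log2 |AB|; max_f, min_f, var_f are the min-tradeoff
    properties.  All concrete values below are hypothetical placeholders.
    """
    dim_AB = 2 ** log_dim_AB
    eps_V = (beta * math.log(2) / 2) * (
        math.log2(2 * dim_AB ** 2 + 1) + math.sqrt(var_f + 2)) ** 2
    spread = log_dim_AB + max_f - min_f
    eps_K = (beta ** 2 / (6 * (1 - beta) ** 3 * math.log(2))) \
        * 2 ** (beta * spread) * math.log(2 ** spread + math.e ** 2) ** 3
    eps_Om = (1 - 2 * math.log2(p_omega * eps_s)) / beta
    return n * (eps_V + eps_K) + eps_Om

n, gamma = 10 ** 10, 5e-3
params = dict(n=n, log_dim_AB=2, max_f=2.0, min_f=-5.0,
              var_f=(2.0 - (-5.0)) ** 2 / gamma,  # variance bound, see below
              p_omega=1e-8, eps_s=1e-8)

# Logarithmic grid scan around the O(sqrt(gamma/n)) heuristic.
betas = [10 ** (x / 10) for x in range(-80, -20)]
best = min(betas, key=lambda b: eat_penalty(b, **params))
print(best, eat_penalty(best, **params))
```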
\subsection{Randomness extractors}\label{sec:extractors}
Subject to the protocol not aborting, the entropy accumulation
sub-procedure detailed in \sec\ref{sec:accumulation-procedure} will
result in the production of some bit string $\AA\bm{\mathrm{B}} \in \{0,1\}^{2n}$ with $H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E}) > k$ for
some $k \in \mathbb R$. In order to `compress' this randomness into a shorter
but almost uniform random string a \emph{seeded, quantum-proof
randomness extractor} can be used. This is a function
$R_{\mathrm{ext}}:\AA\BB\times\bm{\mathrm{D}}\rightarrow\bm{\mathrm{Z}}$, such that if $\bm{\mathrm{D}}$ is a
uniformly distributed bit-string, the resultant bit-string
$\bm{\mathrm{Z}}$ is $\epsilon$-close to uniformly distributed, even from the
perspective of an adversary with quantum side-information $E$ about
$\AA\BB$. More formally, combining~\cite[Lemma~$3.5$]{DPVR} with the
standard definition for a quantum-proof randomness
extractor~\cite{KR2011} gives the following definition.
\begin{definition}[Quantum-proof strong extractor]\label{def:extractor}
A function
$R_{\mathrm{ext}}:\{0,1\}^{|\AA\BB|}\times\{0,1\}^{|\bm{\mathrm{D}}|}\rightarrow\{0,1\}^{|\bm{\mathrm{Z}}|}$
is a
\emph{quantum-proof $(k,\epsilon_{\mathrm{ext}}+2\epsilon_{s})$-strong extractor}, if for all
cq-states $\rho_{\AA\bm{\mathrm{B}} \mathrm{E}}$ with $H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\mathrm{E})_{\rho} \geq
k$ and for some $\epsilon_{s}>0$ it maps $\rho_{\AA\bm{\mathrm{B}} \mathrm{E}}\otimes\tau_{\bm{\mathrm{D}}}$
to $\rho'_{R_{\mathrm{ext}}(\AA\bm{\mathrm{B}},\bm{\mathrm{D}})\bm{\mathrm{D}} \mathrm{E}}$ where
\begin{equation}
\frac12 \|\rho'_{R_{\mathrm{ext}}(\AA\bm{\mathrm{B}},\bm{\mathrm{D}})\bm{\mathrm{D}} \mathrm{E}}-\tau_m\otimes \tau_{|\bm{\mathrm{D}}|}\otimes\rho_\mathrm{E}\|_1 \leq \epsilon_{\mathrm{ext}} + 2\epsilon_{s}\,.
\end{equation}
(Recall that $\tau_m$ is the maximally mixed state on a system of dimension $m$.)
\end{definition}
Although in general the amount of randomness extracted will depend on
the extractor, $H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\mathrm{E})$ provides an upper bound on
the total number of $\epsilon_{s}$-close to uniform bits that can be
extracted from $\AA\bm{\mathrm{B}}$ and a well-chosen extractor will result in a
final output bit-string with
$|\bm{\mathrm{Z}}|\approx H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\mathrm{E})$. We denote any loss of
entropy incurred by the extractor by $\ell_{\mathrm{ext}}=k-|\bm{\mathrm{Z}}|$. Entropy
loss will differ between extractors but in general it will be some
function of the extractor error, the seed length and
the initial quantity of entropy. The extractor literature is rich with
explicit constructions, with many following Trevisan's
framework~\cite{trevisan}. For an in-depth overview of randomness
extraction, we refer the reader to~\cite{nisan1999extracting} and references therein.
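As a concrete illustration, two-universal (Toeplitz) hashing is a standard construction that is both strong and quantum-proof via the quantum leftover hash lemma. The sketch below implements only the hashing map itself, with no accounting of security parameters or output lengths:

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, out_len):
    """Two-universal (Toeplitz) hashing over GF(2).

    The seed of length len(raw_bits) + out_len - 1 defines a Toeplitz
    matrix T with entries T[i, j] = seed[i - j + n - 1]; the output is
    T @ raw mod 2.  No security-parameter bookkeeping is done here.
    """
    n, m = len(raw_bits), out_len
    assert len(seed_bits) == n + m - 1
    x = np.asarray(raw_bits, dtype=np.uint8)
    s = np.asarray(seed_bits, dtype=np.uint8)
    # Build T by fancy indexing: constant along each diagonal.
    T = s[(np.arange(m)[:, None] - np.arange(n)[None, :]) + (n - 1)]
    return (T.astype(int) @ x.astype(int) % 2).astype(np.uint8)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=64)
seed = rng.integers(0, 2, size=64 + 16 - 1)
z = toeplitz_extract(raw, seed, 16)
print(z)
```

Since the map is linear over GF(2), hashing the XOR of two raw strings gives the XOR of their hashes, which is a convenient correctness check.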
\begin{remark}
By using a \emph{strong} quantum-proof extractor, the output of the
extractor will remain uncorrelated with the string used to seed
it. Since the seed acts like a catalyst, we need not be overly
concerned with the amount required. Furthermore, if available, it
could just be acquired from a trusted public source immediately
prior to extraction without compromising security. However, if a
public source is used, it is important that it is not available to
Eve too early in the protocol as this could allow Eve to create
correlations between the outputs of the devices and the extractor
seed.
\end{remark}
\begin{remark}
Related to the previous remark is the question of whether the
quantity we are interested in is
$H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})$, rather than
$H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\mathrm{E})$ or
$H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}\bm{\mathrm{X}}\bm{\mathrm{Y}}|\mathrm{E})$. In common QKD protocols
(such as BB84), the first of these is the only reasonable choice
because the information $\bm{\mathrm{X}}\bm{\mathrm{Y}}$ is communicated between the two
parties over an insecure channel and hence could become known by
Eve. For randomness expansion, this is no longer the case: this
communication can all be kept secure within one lab. Whether the
alternative quantities can be used then depends on where the seed
randomness comes from. If a trusted beacon is used then the first
case is needed. If the seed randomness can be kept secure until such
time that the random numbers need no longer be kept random then the
second quantity could be used\footnote{This is a reasonable
requirement, because there are other strings that have to be kept
secure in the same way, e.g., the raw string $\AA$.}. If it is
also desirable to extract as much randomness as possible, then the
third quantity could be used instead. However, in many protocols the
amount of seed required to choose $X$ and $Y$ in the entropy
accumulation procedure is small enough that extracting randomness
from this will not significantly increase the rate (see, e.g., our discussion in
Appendix~\ref{sec:input-rand}).
\end{remark}
\section{A template protocol for randomness expansion}\label{sec:adaptive-framework}
The primary purpose of this work is to provide a method whereby one
can construct tailored randomness expansion protocols, with a
guarantee of security and easily calculable generation rates. We
achieve this by providing a template protocol (Protocol~QRE),
for which we have explicit security statements in terms of the
protocol parameters as well as the outputs of some SDPs. Our framework
is divided into three sub-procedures: preparation, accumulation
and extraction.
The preparation procedure consists of assigning values to the various
parameters of the protocol; this includes choosing a nonlocal game to
act as the nonlocality test. At the end of the preparation one would
have turned the protocol template into a precise protocol, constructed
a min-tradeoff function and would be able to calculate the relevant security
quantities. Note that once a specific protocol has been decided, it is
not necessary to perform this step. Furthermore, the manufacturer may
already specify the entire protocol to use with their devices, in which
case this step can be skipped. Nevertheless, the fact that the
protocol can be tuned to the devices at hand enables the
user to optimize their randomness output.
The final two parts of the framework
form the process described in Protocol~QRE. The accumulation
step follows the entropy accumulation procedure detailed in
\sec\ref{sec:accumulation-procedure} wherein the user interacts with
their devices using the chosen protocol parameters. After the device
interaction phase has finished, the user implicitly evaluates the
quality of their devices by testing whether the observed inputs and
outputs satisfy the condition \eqref{eq:success-event}. Subject to the
protocol not aborting, a reliable lower bound on the min-entropy of
the total output string is calculated through the EAT
\eqref{eq:EAT-bound}. With this bound, the protocol can be completed
by applying an appropriate quantum-proof randomness extractor to the
devices' raw output strings.
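The accumulation loop is easy to mimic in software. The following sketch (plain Python, with a simplified two-symbol test score $\{$win, lose$\}$ standing in for a game's full score alphabet, and an honest device assumed to win each test round independently with probability $p_{\mathrm{win}}$) simulates the spot-checking structure and the accept condition:

```python
import random

def run_accumulation(n, gamma, p_win, rng):
    """Simulate the spot-checking loop: each round is a test round with
    probability gamma; test rounds score win/lose, generation rounds
    score the dummy symbol perp.  Returns empirical frequencies."""
    counts = {"win": 0, "lose": 0, "perp": 0}
    for _ in range(n):
        if rng.random() < gamma:                       # test round
            counts["win" if rng.random() < p_win else "lose"] += 1
        else:                                          # generation round
            counts["perp"] += 1
    return {k: v / n for k, v in counts.items()}

def accepts(freq, gamma, omega_win, delta):
    """Accept iff the win frequency lies inside the confidence band
    gamma * (omega_win - delta, omega_win + delta)."""
    return gamma * (omega_win - delta) < freq["win"] < gamma * (omega_win + delta)

rng = random.Random(42)
freq = run_accumulation(n=200_000, gamma=0.05, p_win=0.85, rng=rng)
print(freq, accepts(freq, gamma=0.05, omega_win=0.85, delta=0.05))
```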
\begin{figure}[t!]
\begin{center}
\fbox{
\begin{minipage}[c]{16cm}
\procedure[mode=text]{Protocol~QRE}{%
\tbf{Parameters and notation}: \\
\ind 1 $\mathfrak{D}_{AB}$ -- a pair of untrusted devices taking
inputs from $\mathcal{X}$, $\mathcal{Y}$ and giving outputs from $\mathcal{A}$, $\mathcal{B}$ \\
\ind 1 $\mathcal{G} = (\mu,V)$ -- a nonlocal game compatible with $\mathfrak{D}_{AB}$ \\
\ind 1 $\bm{\omega} \in \mathcal{Q}_{\mathcal{G}}$ -- an expected frequency distribution for $\mathcal{G}$ \\
\ind 1 $\d$ -- vector of confidence interval widths (satisfies $0\leq\delta_k\leq\omega_k$ for all $k \in [|\mathcal{G}|]$) \\
\ind 1 $n \in \mathbb N$ -- number of rounds \\
\ind 1 $\gamma \in (0,1)$ -- probability of a test round \\
\ind 1 $(\tilde{x},\tilde{y})$ -- distinguished inputs for generation rounds \\
\ind 1 $f_{\min}$ -- min-tradeoff function \\
\ind 1 $\epsilon_{\mathrm{ext}} > 0$ -- extractor error \\
\ind 1 $\epsilon_{s} \in (0,1)$ -- smoothing parameter \\
\ind 1 $\epsilon_{\mathrm{EAT}} \in (0,1)$ -- entropy accumulation error \\
\ind 1 $R_{\mathrm{ext}}$ -- quantum-proof $(k,\epsilon_{\mathrm{ext}}+2\epsilon_{s})$-strong extractor \\
\ind 1 $\ell_{\mathrm{ext}}$ -- entropy loss induced by $R_{\mathrm{ext}}$ \\
[][\hline\hline]
\tbf{Procedure}: \\
\ind 1 \nln{1} Set $i=1$. \\
\ind 1 \nln{2} \tbf{While} $i \leq n$: \\
\ind 4 Choose $T_i=0$ with probability $1-\gamma$ and otherwise $T_i=1$. \\
\ind 4 \,\tbf{If} $T_i=0$: \\
\ind 6 \nln{Gen} Input $(\tilde{x},\tilde{y})$ into the respective devices, recording the inputs $X_iY_i$ and outputs $A_iB_i$.
\\
\ind 8 \,\,Set $C_i= \perp$ and $i = i+1$. \\
\ind 4 \,\tbf{Else}: \\
\ind 6 \nln{Test} Play a single round of $\mathcal{G}$ on $\mathfrak{D}_{AB}$ using inputs sampled
from $\mu$, recording the inputs $X_iY_i$ and\\
\ind 8 \,\,outputs $A_i B_{i}$. Set $C_i = V(A_{i},B_{i},X_{i},Y_{i})$ and $i=i+1$.\\
\ind 1 \nln{3} Compute the empirical frequency distribution $\bm{F}_{\CC}$. \\
\ind 2 \,\tbf{If} $ \gamma (\bm{\omega}-\d) < \bm{F}_{\CC}(\mathcal{G}) < \gamma (\bm{\omega}+\d)$: \\
\ind 4 \nln{Ext} Apply a strong quantum-proof randomness extractor $R_{\mathrm{ext}}$ to the raw output string $\AA\bm{\mathrm{B}}$\\
\ind 6 producing $n f_{\min}(\bm{\omega} -\d_{\mathrm{sgn}}) - \ell_{\mathrm{ext}}$ bits $(\epsilon_{\mathrm{ext}}+2\epsilon_{s})$-close to uniformly distributed. \\
\ind 2 \,\tbf{Else}: \\
\ind 3 \,\nln{Abort} Abort the protocol.
}
\end{minipage}
}
\end{center}
\caption{The template quantum-secure device-independent randomness
expansion protocol.}
\label{fig:full-protocol}
\end{figure}
The next three subsections are dedicated to explaining these three sub-procedures in detail. In particular, \sec\ref{sec:subproc-prep} outlines the min-tradeoff function construction. A bound on the total entropy accumulated in terms of the various protocol parameters is then provided in \sec\ref{sec:subproc-execution} and finally, in \sec\ref{sec:template-protocol}, the security statements for the template protocol are presented.
\subsection{Preparation}\label{sec:subproc-prep}
Before interacting with their devices the user must select appropriate
protocol parameters (see Fig$.$~\ref{fig:full-protocol} for a full list
of parameters). In particular, they must choose a nonlocal game to use
during the test rounds and construct a corresponding min-tradeoff
function. This step enables these choices to be made if they have not
already been specified.
The parameter values chosen will largely be dictated by situational
constraints; e.g., runtime, seed length and the expected
performance of the untrusted devices.\footnote{At first this may
seem to conflict with the ethos of device-independence. The point is that
although the user of the protocol relies on an expected behaviour to
set-up their devices, they do not rely on this expected behaviour
being an accurate reflection of the devices for security. This also
means that the expected behaviour could be that claimed by the
device manufacturer. Using an inaccurate estimate of the devices'
behaviour will not compromise security, but may lead to a
different abort probability.} The user's choice of parameters, in
particular the choice of nonlocal game, will affect the form of the
min-tradeoff function derived and, in turn, their projected total
accumulated entropy. Before moving to the accumulation step of the
protocol the user can try to optimize their chosen parameters by computing the entropy rates for many different
choices. This allows them to adapt their protocol
to the projected performance of their devices.
We now present a constructible family of min-tradeoff functions for a
general instance of Protocol~QRE. This construction is based on
the following idea. As noted in \sec\ref{sec:entropies-and-sdps} one
can numerically calculate a lower bound on the min-entropy of a system
based on its observed statistics. Pairing this with the relation
$H_{\min}(X|E)\leq H(X|E)$, we have access to numerical bounds on the von Neumann
entropy. In particular, we can use the affine function
$g(\bm q)=\l\cdot \bm q$, where $\l$ is a feasible point
of the dual program~\eqref{prog:relaxed-dual}, in order to build a min-tradeoff function for the protocol.\footnote{In fact, by relaxing the dual program
to the NPA hierarchy, the single round bound is valid against
super-quantum adversaries. However, the full protocol is not
necessarily secure more widely: to show that we would need to
generalise the EAT and the extractor.} In order for $g$ to meet the requirements of a min-tradeoff function, its domain must be extended to include the symbol $\perp$. To perform this extension we use the method presented in~\cite[Section~5.1]{DF}. As the rounds are split into testing and generation rounds, we may decompose the EAT-channel for the $i^{\text{th}}$ round as $\mathcal{N}_i = \gamma\mathcal{N}^{\mathrm{test}}_i + (1-\gamma)\mathcal{N}_i^\mathrm{gen}$, where $\mathcal{N}_i^\mathrm{test}$ is the channel that would be applied if the round were a test round and $\mathcal{N}_i^{\mathrm{gen}}$ if the round were a generation round. Importantly, this splitting separates $\perp$ from the nonlocal game scores. That is, if $\mathcal{N}_i^{\mathrm{test}}$ is the channel applied then $\pr{C_i = \perp} = 0$ whereas if $\mathcal{N}_i^{\mathrm{gen}}$ is applied then $\pr{C_i=\perp} = 1$.
\begin{lemma}[Min-tradeoff extension~{\cite[Lemma~5.5]{DF}} ]\label{lem:extensionlemma}
Let $g: \P_{\mathcal{G}} \rightarrow \mathbb R$ be an affine function satisfying
\begin{equation}\label{eq:entropy-bounding-OVG}
g(\bm{p}) \leq \inf_{\sigma_{R_{i-1}R'}: \mathcal{N}^{\mathrm{test}}_i(\sigma)_{C_i}=\tau_{\bm{p}}} H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)}
\end{equation}
for all $\bm{p}\in\mathcal{Q}_{\mathcal{G}}$. Then, the function $f: \P_{\mathcal{G}\cup\{\perp\}}\rightarrow \mathbb R$, defined by its action on trivial distributions
\begin{align*}
f(\bm{\mathrm{e}}_c) &= \Max{g} + \frac{g(\bm{\mathrm{e}}_c) - \Max{g}}{\gamma}, \qquad \forall c \in \mathcal{G}, \\
f(\bm{\mathrm{e}}_\perp) &= \Max{g},
\end{align*}
is a min-tradeoff function for the EAT-channels $\{\mathcal{N}_i\}_i$. Furthermore, $f$ satisfies the following properties:
\begin{align*}
\Max{f} &= \Max{g}, \\
\Min{\left.f\right|_{\respecting}} &\geq \Min{g}, \\
\Var{\left.f\right|_{\respecting}} &\leq \frac{(\Max{g} - \Min{g})^2}{\gamma}.
\end{align*}
\end{lemma}
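The extension rule of the lemma can be written down directly. A minimal sketch, assuming $g$ is specified by its values $g(\bm{\mathrm{e}}_c)$ on the game's score alphabet (the concrete numbers below are hypothetical):

```python
def extend_min_tradeoff(g_vals, gamma):
    """Extend an affine g on the game scores to f on scores + perp,
    following f(e_c) = Max[g] + (g(e_c) - Max[g]) / gamma and
    f(e_perp) = Max[g].  Returns f's values (perp last) together with
    the Max / Min / Var bounds stated in the lemma."""
    g_max, g_min = max(g_vals), min(g_vals)
    f_vals = [g_max + (g - g_max) / gamma for g in g_vals] + [g_max]
    return {
        "f_on_trivial": f_vals,
        "max_f": g_max,                               # Max[f]
        "min_f_bound": g_min,                         # lower bound on the minimum
        "var_f_bound": (g_max - g_min) ** 2 / gamma,  # upper bound on the variance
    }

ext = extend_min_tradeoff([1.2, 0.3], gamma=0.01)
print(ext)
```

A useful sanity check is that on protocol-respecting distributions $\bm{p}=(\gamma\bm{\omega},1-\gamma)$ the extension satisfies $f(\bm{p})=g(\bm{\omega})$.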
\begin{lemma}[Min-tradeoff construction]\label{lem:fmin}
Let $\mathcal{G}$ be a nonlocal game and $k \in \mathbb N$. For each $\v\in\Q^{(k)}_\mathcal{G}$,
let $ \l_{\v}$ be a feasible point of
Prog$.$~\eqref{prog:relaxed-dual} when parameterized by
$\v$. Furthermore, let $\lambda_{\max} =
\max_{c \in \mathcal{G}}\lambda_{\v}(c)$ and $\lambda_{\min} =
\min_{c \in \mathcal{G}}\lambda_{\v}(c)$. Then, for any set of EAT channels
$\{\mathcal{N}_i\}_{i=1}^n$ implementing an instance of Protocol~QRE\ with the nonlocal game $\mathcal{G}$, the set of functionals $\mathcal{F}_{\min}(\mathcal{G}) = \{f_{\v}(\cdot)\mid \v\in \Q^{(k)}_\mathcal{G}\}$ forms a family of min-tradeoff functions, where $f_{\v}: \P_\mathcal{C} \rightarrow \mathbb R$ are defined by their actions on trivial distributions
\begin{align}\label{eq:fmin-c}
\hspace{4cm}f_{\v}(\bm{\mathrm{e}}_{c}) &:= (1-\gamma)\left(A_{\v} - B_{\v} \frac{\l_{\v}\cdot\bm{\mathrm{e}}_c - (1-\gamma)\lambda_{\min}}{\gamma} \right) \hspace{0.5cm}\text{for } c \in \mathcal{G},\intertext{and}
f_{\v}(\bm{\mathrm{e}}_\perp) &:= (1-\gamma)\left(A_{\v} - B_{\v}\, \lambda_{\min} \right), &~
\end{align}
where $A_{\v} = \tfrac{1}{\ln 2} - \log (\l_{\v}\cdot\v)$ and $B_{\v} = \frac{1}{\l_{\v}\cdot\v \ln 2}$.
Moreover, these min-tradeoff functions satisfy the following relations.
\begin{itemize}
\item Maximum:
\begin{equation}\label{eq:fv-max}
\Max{f_{\v}} = (1-\gamma) (A_{\v} - B_{\v} \,\lambda_{\min})
\end{equation}
\item $\respecting$-Minimum:
\begin{equation}\label{eq:fv-min}
\Min{\left.f_{\v}\right|_{\respecting}} \geq (1-\gamma)(A_{\v} - B_{\v} \,\lambda_{\max})
\end{equation}
\item $\respecting$-Variance:
\begin{equation}\label{eq:fv-var}
\Var{\left.f_{\v}\right|_{\respecting}} \leq \frac{(1-\gamma)^2 B_{\v}^2 (\lambda_{\max} - \lambda_{\min})^2}{\gamma}
\end{equation}
\end{itemize}
\begin{proof}
Consider the entropy bounding property \eqref{eq:entropy-bounding-OVG} but with $\mathcal{C}$ restricted to the scoring alphabet of $\mathcal{G}$, i.e., we have an affine function $g_{\v}: \P_\mathcal{G} \rightarrow \mathbb{R}$ such that
$$
g_{\v}(\bm q) \leq \inf_{\sigma_{R_{i-1}R'}: \mathcal{N}^{\mathrm{test}}_i(\sigma)_{C_i(\mathcal{G})}=\tau_{\bm q}} H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)},
$$
for all $\bm q \in \mathcal{Q}_\mathcal{G}$.
As conditioning on additional side information will not increase the von Neumann entropy, we may condition on whether or not the round was a test round,
\begin{align*}
H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)} &\geq H(A_iB_i|X_iY_i T_i R')_{\mathcal{N}_i(\sigma)} \\
&= \gamma H(A_iB_i|X_iY_i,T_i=1,R')_{\mathcal{N}_i(\sigma)} + (1-\gamma) H(A_iB_i|X_iY_i, T_i = 0, R')_{\mathcal{N}_i(\sigma)} \\
& > (1-\gamma) H(A_iB_i|X_i=\tilde{x},Y_i=\tilde{y}, T_i = 0, R')_{\mathcal{N}_i(\sigma)}
\end{align*}
where in the final line we have used the fact that the inputs are fixed for generation rounds. As the min-entropy lower bounds the von Neumann entropy, we arrive at the bound
$$
H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)} > (1-\gamma) H_{\min}(A_iB_i|X_i=\tilde{x},Y_i=\tilde{y}, T_i = 0, R')_{\mathcal{N}_i(\sigma)}.
$$
Using programs~\eqref{prog:relaxed-primal} and~\eqref{prog:relaxed-dual}, we can lower bound the right-hand side in terms of the relaxed guessing probability. Specifically, for a single generation round
\begin{align*}
H_{\min}(AB|X=\tilde{x}, Y=\tilde{y}, T=0, R') &= -\log(p_{\mathrm{guess}}(\bm q)) \\
& \geq -\log(\l_{\v}^{(k)} \cdot \bm q),
\end{align*}
holds for all $k\in\mathbb N$, any $\v \in \Q^{(k)}_\mathcal{G}$ and any quantum
system realising the expected statistics $\bm{q} \in \mathcal{Q}_\mathcal{G}$. In the
final line we used the monotonicity of the logarithm together with the
fact that a solution to the relaxed dual program, for any
parameterization $\v \in \Q^{(k)}_\mathcal{G}$, provides a linear function $ \l_{\v}
\cdot (\,\cdot\,)$ that is greater than $p_{\mathrm{guess}}$ everywhere on
$\Q^{(k)}_{\mathcal{G}}$. Note that this bound is also device-independent and is
therefore automatically a bound on the infimum. Dropping the
superscript $(k)$ for notational ease, we may recover the desired affine property by taking a first order expansion about the point $\v$. This results in the function
$$
g_{\v}(\bm q):= (1-\gamma) (A_{\v} - B_{\v} \, \l_{\v} \cdot \bm q),
$$
which satisfies
$$
g_{\v}(\bm q) \leq \inf_{\sigma_{R_{i-1}R'}: \mathcal{N}^{\mathrm{test}}_i(\sigma)_{C_i}=\tau_{\bm q}} H(A_iB_i|X_iY_iR')_{\mathcal{N}_i(\sigma)},
$$
for all $\bm q \in \mathcal{Q}_\mathcal{G}$, with $A_{\v}$ and $B_{\v}$ as defined in Lemma~\ref{lem:fmin}. The statement then follows by applying Lemma~\ref{lem:extensionlemma} to $g_{\v}$, noting $\Max{g_{\v}} = (1-\gamma) (A_{\v} - B_{\v} \,\lambda_{\min})$ and $\Min{g_{\v}} = (1-\gamma) (A_{\v} - B_{\v} \,\lambda_{\max})$.
\end{proof}
\end{lemma}
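The construction is mechanical once a feasible dual vector is available. The sketch below takes $\l_{\v}$ as a hypothetical placeholder (in practice it would come from an SDP solver applied to Prog$.$~\eqref{prog:relaxed-dual}) and evaluates $A_{\v}$, $B_{\v}$, the values of $f_{\v}$ on trivial distributions and the three bounds of the lemma:

```python
import math

def min_tradeoff_from_dual(lam, v, gamma):
    """Build f_v of the min-tradeoff construction from a dual vector.

    lam : feasible point of the relaxed dual program (placeholder here).
    v   : the distribution parameterizing the program.
    Returns f_v on trivial distributions plus the Max/Min/Var bounds.
    """
    lv = sum(l * p for l, p in zip(lam, v))          # lam . v
    A = 1 / math.log(2) - math.log2(lv)              # A_v
    B = 1 / (lv * math.log(2))                       # B_v
    lmax, lmin = max(lam), min(lam)
    f_scores = [(1 - gamma) * (A - B * (l - (1 - gamma) * lmin) / gamma)
                for l in lam]
    f_perp = (1 - gamma) * (A - B * lmin)
    return {
        "f_scores": f_scores, "f_perp": f_perp,
        "max_f": (1 - gamma) * (A - B * lmin),
        "min_f_bound": (1 - gamma) * (A - B * lmax),
        "var_f_bound": (1 - gamma) ** 2 * B ** 2 * (lmax - lmin) ** 2 / gamma,
    }

# Hypothetical dual vector and parameterization for a 3-score game.
f = min_tradeoff_from_dual(lam=[0.9, 0.2, 0.6], v=[0.5, 0.4, 0.1], gamma=5e-3)
print(f["max_f"], f["min_f_bound"], f["var_f_bound"])
```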
\begin{example}\label{ex:prep1}
Taking the nonlocal game $\mathcal{G}_{\mathrm{CHSH}}$ introduced in Example~\ref{ex:chsh}, we can use the above lemma to construct a min-tradeoff function. Fixing the probability of testing, $\gamma = 5 \times 10^{-3}$, we consider a device that behaves (during a test round) according to the expected frequency distribution~$\bm{\omega} = (\omega_{\cross}, \omega_{\chsh}, 1-\omega_{\cross}-\omega_{\chsh})$. In Fig$.$~\ref{fig:prep-example1}, we plot the certifiable min-entropy of a single generation round for a range of $\bm{\omega}$. We see that as the scores approach $\bm{\omega} = \tfrac12\,\left(1,\tfrac{2 + \sqrt{2}}{4}, \tfrac{2 - \sqrt{2}}{4} \right)$, we are able to certify almost\footnote{Due to the infrequent testing we are actually only able to certify a maximum of $2\cdot (1-\gamma)$ bits per interaction.} two bits of randomness per entangled qubit pair using $\mathcal{G}_{\mathrm{CHSH}}$.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=1.0]{example1.pdf}
\end{center}
\caption[Lower bounds to min-entropy surface]{A plot of a lower bound on the certifiable min-entropy produced during a single round of the protocol. This lower bound was calculated using Prog$.$~\ref{prog:relaxed-primal} relaxed to the second level of the NPA hierarchy. In addition, we plot a min-tradeoff function $f_{\v}$ evaluated for distributions of the form $\bm{p} = (\gamma \bm{\omega}, 1-\gamma)$ for $\bm{\omega} \in \mathcal{Q}_{\mathcal{G}}$, i.e., expected frequency distributions over $\mathcal{G}\cup\{\perp\}$ that are compatible with the spot-checking structure of the rounds. Since $f_{\v}$ is the tangent plane to the surface at the point $\v$, it forms an affine lower bound on the min-entropy of any quantum distribution compatible with the protocol.}
\label{fig:prep-example1}
\end{figure}
\end{example}
\subsection{Accumulation and extraction}\label{sec:subproc-execution}
After fixing the parameters of the protocol and constructing a
min-tradeoff function $f_{\min}$, the user proceeds with the remaining
steps of Protocol~QRE: accumulation and extraction. The
accumulation step consists of the device interaction and evaluation
sub-procedures that were detailed in
\sec\ref{sec:accumulation-procedure}. If the protocol does not abort,
then with high probability the generated string $\AA\bm{\mathrm{B}}$ contains at
least some given quantity of smooth min-entropy. The following lemma
applies the EAT to deduce a lower bound on the amount of entropy
accumulated.
\begin{lemma}[Accumulated entropy]\label{lem:accumulated-entropy}
Let the randomness expansion procedure and all of its parameters be as
defined in Fig$.$~\ref{fig:full-protocol}. Furthermore, let $\Omega$ be
the event the protocol does not abort (cf.~\eqref{eq:success-event})
and let $\rho_{|\Omega}$ be the final state of the system conditioned
on this. Then, for any $\beta,\,\epsilon_{s},\,\epsilon_{\mathrm{EAT}}\in(0,1)$ and any
choice of min-tradeoff function $f_{\v}\in \mathcal{F}_{\min}$, either
Protocol~QRE\ aborts with probability greater than $1-\epsilon_{\mathrm{EAT}}$ or
\begin{equation}\label{eq:EAT-total}
H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})_{\rho_{|_{\Omega}}} > (1-\gamma) n \left(A_{\v} - B_{\v}\l_{\v} \cdot (\bm{\omega} -\d_{\mathrm{sgn}})\right) - n (\epsilon_V + \epsilon_K) - \epsilon_\Omega,
\end{equation}
where
\begin{equation}\label{eq:errV-explicit}
\epsilon_V := \frac{\beta \ln 2}{2} \left(\log\left(2 |\mathcal{A}\mathcal{B}|^2 +1\right) + \sqrt{ \frac{(1-\gamma)^2 B_{\v}^2 (\lambda_{\max} - \lambda_{\min})^2}{\gamma} + 2} \right)^2,
\end{equation}
\begin{equation}\label{eq:errK-explicit}
\epsilon_K := \frac{\beta^2}{6(1-\beta)^3\ln 2}\,
2^{\beta(\log |\mathcal{A}\mathcal{B}| + (1-\gamma)B_{\v} (\lambda_{\max}-\lambda_{\min}))}
\ln^3\left(2^{\log |\mathcal{A}\mathcal{B}| + (1-\gamma)B_{\v} (\lambda_{\max}-\lambda_{\min})} + \mathrm{e}^2\right),
\end{equation}
\begin{equation}\label{eq:errW-explicit}
\epsilon_\Omega := \frac{1}{\beta}\left(1 - 2 \log(\epsilon_{\mathrm{EAT}}\,\epsilon_{s})\right)
\end{equation}
and $\d_{\mathrm{sgn}} = (\delta(c)\,\mathrm{sgn}(-{\lambda_{\v}(c)}))_{c \in
\mathcal{G}}$.
\begin{proof}
Let $\{\mathcal{N}_i\}_{i\in [n]}$ be the set of channels implementing the
entropy accumulation sub-procedure of Protocol~QRE. Comparing
this procedure with the definition of the EAT channels
Def$.$~\ref{def:eat-channels}, we have
$\mathcal{N}_i:\mathcal{S}(R_{i-1}) \rightarrow
\mathcal{S}(A_iB_iX_iY_iT_iC_iR_i)$ with
$A_i, B_i, X_i, Y_i, T_i, C_i$ finite dimensional classical systems,
$R_i$ an arbitrary quantum system and the score $C_i$ is a
deterministic function of the values of the other classical
systems. Furthermore, the inputs to the protocol for the
$i^{\text{th}}$ round, $(X_i,Y_i,T_i)$, are chosen independently of
all other systems in the protocol and so the conditional
independence constraints
$I(A_1^{i-1}B_1^{i-1}\!\! :\! X_i Y_i | X_1^{i-1}Y_1^{i-1} E) = 0$
hold trivially. The conditions necessary for $\{\mathcal{N}_i\}_{i\in [n]}$
to be EAT-channels are satisfied and by Lemma~\ref{lem:fmin} $f_{\v}$
is a min-tradeoff function for these channels. We can now apply the
EAT to bound the total entropy accumulated.
Consider now the pass probability of the protocol,
$p_{\Omega}$. Either $p_\Omega < \epsilon_{\mathrm{EAT}}$, in which case the protocol
will abort with probability at least $1 - \epsilon_{\mathrm{EAT}}$, or $p_\Omega \geq
\epsilon_{\mathrm{EAT}}$. In the latter case we can replace the unknown $p_{\Omega}$
in~\eqref{eq:errW} with $\epsilon_{\mathrm{EAT}}$ as this results in an increase in the error term $\epsilon_\Omega$. The EAT then asserts that
$$
H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})_{\rho_{|\Omega}}
>
n \inf_{\bm{\mathrm{C}}\in\Omega} f_{\v}(\bm{F}_{\CC}) - n (\epsilon_V + \epsilon_K) - \epsilon_\Omega,
$$
for any choice of min-tradeoff function $f_{\v} \in \mathcal{F}_{\min}$.
As the min-tradeoff functions are affine, we can lower bound the infimum for the region of possible scores specified by the success event,
$$
\Omega = \left\{ \bm{\mathrm{C}} \mid \gamma (\bm{\omega}-\d) < \bm{F}_{\CC}(\mathcal{G}) < \gamma (\bm{\omega}+\d) \right\}.
$$
Taking $\bm{p} = (\gamma (\bm{\omega} - \d_{\mathrm{sgn}}), 1-\gamma)$, we
have $f_{\v}(\bm{p}) \leq\inf_{\bm{\mathrm{C}}\in\Omega} f_{\v}(\bm{F}_{\CC})$. Note that $\bm{p}$ may not correspond to a frequency distribution that could have resulted from a successful run of the protocol -- it may not even be a probability distribution. However, it is sufficient for our purposes as an explicit lower bound on the infimum. Further, noting that $f_{\v}(\bm{p}) = g_{\v}(\bm{\omega} - \d_{\mathrm{sgn}})$, we can straightforwardly compute this lower bound as
$$
f_{\v}(\bm{p}) = (1-\gamma) \left(A_{\v} - B_{\v}\l_{\v} \cdot (\bm{\omega} -\d_{\mathrm{sgn}})\right).
$$
Inserting the min-tradeoff function
properties~\eqref{eq:fv-max}--\eqref{eq:fv-var} into the EAT's
error terms [\eqref{eq:errV}--\eqref{eq:errW}] we get the explicit
form of the quantities $\epsilon_V$, $\epsilon_K$ and $\epsilon_\Omega$ stated in the lemma.
\end{proof}
\end{lemma}
If the protocol does not abort during the accumulation procedure, the
user may proceed by applying a quantum-proof strong extractor to the
concatenated output string $\AA\bm{\mathrm{B}}$ resulting in a close to uniform bit-string of length approximately $(1-\gamma) n \left(A_{\v} - B_{\v}\l_{\v} \cdot (\bm{\omega} -\d_{\mathrm{sgn}})\right) - n (\epsilon_V + \epsilon_K ) - \epsilon_\Omega$.
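Putting the pieces together, the leading term of \eqref{eq:EAT-total} can be evaluated directly; the sketch below uses hypothetical dual data and reports certified bits per round before the EAT error terms are subtracted. Note that at $\bm{\omega}=\v$ and $\d=\bm 0$ the expression reduces to $-(1-\gamma)\log(\l_{\v}\cdot\v)$.

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def accumulated_entropy_leading(n, gamma, lam, v, omega, delta):
    """Leading term n(1-gamma)(A_v - B_v lam . (omega - delta_sgn)),
    evaluated at the worst corner of the confidence region.
    lam and v are hypothetical placeholders for the dual data."""
    lv = sum(l * p for l, p in zip(lam, v))
    A = 1 / math.log(2) - math.log2(lv)
    B = 1 / (lv * math.log(2))
    # delta_sgn(c) = delta(c) * sgn(-lam(c)), so omega - delta_sgn shifts
    # each coordinate towards the corner maximizing lam . q.
    worst = sum(l * (w - d * sgn(-l)) for l, w, d in zip(lam, omega, delta))
    return n * (1 - gamma) * (A - B * worst)

h = accumulated_entropy_leading(
    n=10 ** 10, gamma=5e-3,
    lam=[0.9, 0.2, 0.6], v=[0.5, 0.4, 0.1],
    omega=[0.5, 0.4, 0.1], delta=[1e-3, 1e-3, 1e-3])
print(h / 10 ** 10)  # certified bits per round, before error terms
```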
\begin{figure}[t!]
\begin{center}
\includegraphics{example2.pdf}
\end{center}
\caption{A plot of the randomness certified as we vary
our choice of min-tradeoff function. At each point
$\nu$ we evaluate the certifiable randomness
\eqref{eq:EAT-total} for the corresponding choice of
min-tradeoff function $f_{\nu}$, numerically
optimizing the parameter $\beta$ each time. The
rough appearance of the surface is a result of finding local optima in the
$\beta$ optimization. For reference, we include a
plot of the asymptotic rate, i.e.,
\eqref{eq:EAT-total} as $n \rightarrow \infty$ and
$\d \rightarrow \bm 0$. The protocol parameters used
during the calculations are as follows: $n =
10^{10}$, $\gamma= 5 \times 10^{-3}$, $\bm{\omega} =(0.49,0.4225,0.0875)$,
$\delta_{\mathrm{CHSH}}=\delta_{\mathrm{align}} = 10^{-3}$ and
$\epsilon_{s}=\epsilon_{\mathrm{EAT}}=10^{-8}$.}
\label{fig:prep-example2}
\end{figure}
\begin{example}\label{ex:prep2}
Continuing from Ex$.$~\ref{ex:prep1}, we look at the bound on
the accumulated entropy specified by \eqref{eq:EAT-total} for
a range of choices of $f_{\v} \in \mathcal{F}_{\min}$. Again, we are
considering a quantum implementation with an expected frequency distribution $\bm{\omega} =(0.49,0.4225,0.0875)$. In Fig$.$~\ref{fig:prep-example2} we
see that our choice of min-tradeoff function can have a large
impact on the quantity of entropy we are able to certify. The plot gives some reassuring numerical evidence that, for the nonlocal game $\mathcal{G}_{\mathrm{CHSH}}$, the certifiable randomness is continuous and concave in the family parameter $\v$.
The min-tradeoff function indexed by our expected frequency distribution,
$f_{\bm{\omega}}$, is able to certify just under $0.939$ bits per
interaction. By applying a gradient-ascent algorithm we
were able to improve this to $0.946$ bits per interaction. In an attempt to avoid getting stuck within local optima we
applied the algorithm several times, starting subsequent iterations at randomly chosen points close to the current optimum.
The optimization led to an improved min-tradeoff function choice $f_{\v^*}$, where $\v^* = (0.491,0.421,0.088)$.
\end{example}
\subsection{Protocol~QRE}\label{sec:template-protocol}
Protocol~QRE\ is the concatenation of the accumulation and
extraction sub-procedures. It remains to provide the formal security
statements for a general instance of Protocol~QRE. We refer to
an untrusted device network $\mathfrak{D}_{AB}$ as \emph{honest} if during each
interaction, the underlying quantum state shared amongst the devices
and the measurements performed in response to inputs remain the same
(i.e., the devices behave as the user expects). Furthermore, each
interaction is performed independently of all others. The following
lemma provides a bound on the probability that an honest
implementation of Protocol QRE\ aborts.
\begin{lemma}[Completeness of Protocol~QRE]
\label{lem:complet}
Let Protocol~QRE\ and all of its parameters be as defined in
Fig$.$~\ref{fig:full-protocol}. Then, the probability that an honest
implementation of Protocol~QRE\ aborts is no greater than $\varepsilon_{\mathrm{comp}}$ where
\begin{equation}\label{eq:completeness}
\varepsilon_{\mathrm{comp}}=2\sum_{k=1}^{|\mathcal{G}|}\mathrm{e}^{-\frac{\gamma\delta_k^2}{3\omega_k}n}.
\end{equation}
\begin{proof}
During the parameter estimation step of Protocol~QRE, the protocol aborts if the observed frequency distribution $\bm{F}_{\CC}$ fails to satisfy
\begin{equation*}
\gamma (\bm{\omega}-\d) < \bm{F}_{\CC}(\mathcal{G}) < \gamma (\bm{\omega}+\d).
\end{equation*}
Writing $\bm{F}_{\CC}(\mathcal{G}) = (r_k)_{k=1}^{|\mathcal{G}|}$, $\bm{\omega}=(\omega_k)_{k=1}^{|\mathcal{G}|}$ and $\d=(\delta_k)_{k=1}^{|\mathcal{G}|}$, the probability that an honest implementation of the protocol aborts can be written as
\begin{align*}
\mathrm{P}_{\mathrm{abort}} = \pr{ \bigcup_{k=1}^{|\mathcal{G}|} \bigg\{\big|r_k - \gamma \omega_k \big| \geq \gamma \delta_k\bigg\} } \leq \sum_{k=1}^{|\mathcal{G}|} \pr{\big|r_k - \gamma \omega_k \big| \geq \gamma \delta_k }.
\end{align*}
Restricting to a single element $r_k$ of $\bm{F}_{\CC}(\mathcal{G})$, we
can model its final value as the binomially distributed random
variable $r_k \sim \tfrac{1}{n}\bin{n}{\gamma \omega_k}$. As a consequence of the
Chernoff bound (cf.\ Corollary~\ref{cor:Chernoff}), and that
$\delta_k<\omega_k$, we have
$$\pr{\big|r_k - \gamma \omega_k \big| \geq \gamma \delta_k }\leq2\mathrm{e}^{-\frac{\gamma \delta_k^2 n}{3 \omega_k}}.$$
Applying this bound to each element of the sum individually, we arrive at the desired result.
\end{proof}
\end{lemma}
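As a quick numerical sanity check, \eqref{eq:completeness} can be evaluated directly for the parameters of our running CHSH example. The short sketch below is independent of the accompanying package and the helper name is ours; it also illustrates that widening the confidence intervals $\d$ reduces the abort probability.

```python
import math

def completeness_error(n, gamma, omega, delta):
    """Evaluate eps_comp = 2 * sum_k exp(-gamma * delta_k^2 * n / (3 * omega_k))."""
    return 2 * sum(math.exp(-gamma * d**2 * n / (3 * w))
                   for w, d in zip(omega, delta))

# Parameters from the running CHSH example
n, gamma = 10**10, 5e-3
omega = (0.49, 0.4225, 0.0875)
delta = (1e-3, 1e-3, 1e-3)

eps = completeness_error(n, gamma, omega, delta)
```

As expected from the Chernoff form, doubling each $\delta_k$ multiplies every exponent by four and drives the abort probability down sharply.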
\begin{remark}
\label{rem:complet}
The completeness error in the above lemma only considers the
possibility of the protocol aborting during the parameter estimation
stage. However, if the initial random seed is a particularly limited
resource then it is possible that the protocol aborts due to seed
exhaustion. In Lemma~\ref{lem:input-randomness} we analyse a sampling
algorithm required to select the inputs during device interaction. If
required, the probability of failure for that algorithm could be
incorporated into the completeness error.
\end{remark}
With a secure bound on the quantity of accumulated entropy established by Lemma~\ref{lem:accumulated-entropy} we can apply a $(k,\epsilon_{\mathrm{ext}} + 2 \epsilon_{s})$-strong extractor to $\AA\bm{\mathrm{B}}$ to complete the security analysis. Combined with the input randomness discussed in Appendix~\ref{sec:input-rand} we arrive at the following theorem.
\begin{lemma}[Soundness of Protocol~QRE]
\label{lem:sound}
Let Protocol~QRE\ be implemented with some initial random seed
$D$ of length $d$. Furthermore let all other protocol parameters be
chosen within their permitted ranges, as detailed in
Fig$.$~\ref{fig:full-protocol}. Then the soundness error of
Protocol~QRE\ is
$$\varepsilon_{\mathrm{sound}}=\max(\epsilon_{\mathrm{ext}}+2\epsilon_{s},\epsilon_{\mathrm{EAT}})\,.$$
\end{lemma}
\begin{proof}
Recall from~\eqref{eq:sound_def} that the soundness error is an
upper bound on
$\frac{1}{2}\mathrm{Pr}[\Omega]\cdot\|\rho_{ZE}-\tau_m\otimes\rho_E\|_1$. In
the case $\pr{\Omega}\leq\epsilon_{\mathrm{EAT}}$, we have $\frac{1}{2}\mathrm{Pr}[\Omega]\cdot\|\rho_{ZE}-\tau_m\otimes\rho_E\|_1\leq\epsilon_{\mathrm{EAT}}$.
In the case $\pr{\Omega}>\epsilon_{\mathrm{EAT}}$,
Lemma~\ref{lem:accumulated-entropy} gives a bound on the accumulated
entropy. Combining with the definition of a quantum-proof strong
extractor Def$.$~\ref{def:extractor} and noting that the norm
is non-increasing under partial trace we obtain
$\frac{1}{2}\mathrm{Pr}[\Omega]\cdot\|\rho_{ZE}-\tau_m\otimes\rho_E\|_1\leq\epsilon_{\mathrm{ext}}+2\epsilon_{s}$,
from which the claim follows.
\end{proof}
\begin{remark}
By choosing parameters such that $\epsilon_{\mathrm{EAT}}\leq\epsilon_{\mathrm{ext}}+2\epsilon_{s}$ we
can take the soundness error to be $\epsilon_{\mathrm{ext}}+2\epsilon_{s}$.
\end{remark}
Combining all of the previous results we arrive at the full security statement concerning Protocol~QRE.
\begin{theorem}[Security of Protocol~QRE]\label{thm:security}
Protocol~QRE\ is an $(\varepsilon_{\mathrm{comp}},\varepsilon_{\mathrm{sound}})$-secure randomness expansion protocol producing
\begin{equation}\label{eq:security-of-qre}
((1-\gamma) \left(A_{\v} - B_{\v}\l_{\v} \cdot (\bm{\omega} -\d_{\mathrm{sgn}})\right) - \epsilon_V - \epsilon_K)\, n - \epsilon_\Omega- \ell_{\mathrm{ext}}
\end{equation}
random bits at least $\varepsilon_{\mathrm{sound}}$-close to uniformly distributed, where $\varepsilon_{\mathrm{comp}}$, $\varepsilon_{\mathrm{sound}}$ are given by Lemma~\ref{lem:complet} (cf. Remark~\ref{rem:complet}) and Lemma~\ref{lem:sound}.
\end{theorem}
\begin{remark}
The expected seed length required to execute
Protocol~QRE\ is $d \approx \left(\gamma H(\mu) + h(\gamma)\right) n$ (cf.\ Lemma~\ref{lem:input-randomness}).
\end{remark}
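This approximation is easy to evaluate concretely. Assuming the test-round inputs are drawn uniformly from the four CHSH question pairs, so that $H(\mu) = 2$ bits (an assumption of this sketch), the estimate reproduces the expected seed length quoted in the example below.

```python
import math

def bin_entropy(p):
    """Binary Shannon entropy h(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, gamma = 10**10, 5e-3
H_mu = 2.0  # uniform inputs over the four CHSH question pairs (assumption)

# d ~ (gamma * H(mu) + h(gamma)) * n, approximately 5.54e8 bits
d = (gamma * H_mu + bin_entropy(gamma)) * n
```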
\begin{example}
In Ex$.$~\ref{ex:prep1} and Ex$.$~\ref{ex:prep2} we used the following choice of protocol parameters: $n = 10^{10}$, $\gamma = 5\times 10^{-3}$, $\delta_1 = \dots = \delta_{|\mathcal{G}|} = 10^{-3}$ and $\epsilon_{s} = \epsilon_{\mathrm{EAT}} = 10^{-8}$. The resulting implementation of Protocol~QRE, using the nonlocal game $\mathcal{G}_{\mathrm{CHSH}}$ with an expected frequency distribution $\bm{\omega} =(0.49,0.4225,0.0875)$, exhibits the following statistics.
\begin{center}
\begin{tabular}{|l|l|}
\hline
\tbf{Quantity} & \tbf{Value} \\ \hline
Total accumulated entropy before extraction (no abort) & $9.46 \times 10^{9}$ \\
Expected length of required seed before extraction & $5.54 \times 10^{8}$ \\
Expected net-gain in entropy (no abort) & $8.91 \times 10^{9} - \ell_{\mathrm{ext}}$ \\
Completeness error ($\varepsilon_{\mathrm{comp}}$) & $8.77 \times 10^{-8}$
\\ \hline
\end{tabular}
\end{center}
\end{example}
\section{Examples}\label{sec:examples}
In this section we demonstrate the use of our framework through the
construction and analysis of several protocols based on different tests of nonlocality.
To this end, we begin by introducing two families of nonlocal games which we consider alongside $\mathcal{G}_{\mathrm{CHSH}}$.
~\\\noindent\textbf{Empirical behaviour game $(\mathcal{G}_{\mathrm{EB}})$.}
The \emph{empirical behaviour game} $(\mathcal{G}_{\mathrm{EB}})$ is a nonlocal game that estimates the underlying behaviour of $\mathfrak{D}_{AB}$, i.e., it attempts to characterise each
individual probability $p(a,b|x,y)$. We may construct this by associating with each input-output tuple $(a,b,x,y)\in\A\B\X\Y$ a corresponding score $c_{abxy} \in \mathcal{G}$ and defining the scoring rule
$$
V_{\mathrm{EB}}(a,b,x,y) := c_{abxy},
$$
for each $(a,b,x,y) \in \A\B\X\Y$. Then, for any input distribution $\mu_{\mathrm{EB}}$ with full support
on the alphabets $\mathcal{X}\mathcal{Y}$, the collection $\mathcal{G}_{\mathrm{EB}} = (\mu_{\mathrm{EB}},V_{\mathrm{EB}})$
forms a nonlocal game. Moreover, for agents playing according to some strategy $\bm{p} \in \mathcal{Q}$,
the expected frequency distribution over the scores is precisely the joint distribution,
\begin{align*}
\omega_{\mathrm{EB}}(a,b,x,y) &= \mu_{\mathrm{EB}}(x,y) p(a,b|x,y) \\
&= p(a,b,x,y).
\end{align*}
As $\mathcal{G}_{\mathrm{EB}}$ can be defined for any collection of input-output
alphabets, we can indicate the size of these alphabets as
superscripts, i.e., $\mathcal{G}_{\mathrm{EB}}^{|\mathcal{X}||\mathcal{Y}||\mathcal{A}||\mathcal{B}|}$. However, since we
only consider binary output alphabets in this work, we will not
include their sizes in the superscript, i.e., we will write
$\mathcal{G}^{23}_{\mathrm{EB}}$ instead of $\mathcal{G}^{2322}_{\mathrm{EB}}$.
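The construction of $\mathcal{G}_{\mathrm{EB}}$ amounts to little more than book-keeping, as the following sketch illustrates (the helper and the toy behaviour are ours): each input-output tuple receives its own score, so the score frequencies are exactly the joint distribution $p(a,b,x,y)$.

```python
import itertools

def eb_frequencies(mu, p):
    """omega_EB(a,b,x,y) = mu(x,y) * p(a,b|x,y): one score per input-output tuple."""
    return {(a, b, x, y): mu[(x, y)] * p[(a, b, x, y)]
            for (a, b, x, y) in p}

# Toy (2,2)-scenario: uniform inputs, uniformly random uncorrelated outputs
inputs = [(x, y) for x in (0, 1) for y in (0, 1)]
mu = {xy: 1 / 4 for xy in inputs}
p = {(a, b, x, y): 1 / 4
     for a, b, x, y in itertools.product((0, 1), repeat=4)}

omega = eb_frequencies(mu, p)
```

For this toy behaviour every one of the $16$ scores occurs with frequency $1/16$.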
\begin{remark}\label{rem:redundancies}
The scoring rule for $\mathcal{G}_{\mathrm{EB}}$, as defined above, has several redundant components, arising from normalisation and the no-signalling conditions. In fact, there are only $[(|\mathcal{A}|-1)|\mathcal{X}| + 1][(|\mathcal{B}|-1)|\mathcal{Y}| +1] -1$ free parameters~\cite{Cirelson93}. Knowing this we can reduce the number of scores in our nonlocal game and, in turn, the number of constraints we impose in our SDPs.\footnote{It is important to remove redundant constraints in practice as they can lead to numerical instabilities.}
\end{remark}
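The saving can be made concrete with a one-line count (the argument order $(|\mathcal{A}|,|\mathcal{X}|,|\mathcal{B}|,|\mathcal{Y}|)$ is our convention): in the CHSH scenario the full score table has $16$ entries but only $8$ free parameters, and in the $(2,3)$-scenario $24$ entries reduce to $11$.

```python
def ns_free_parameters(nA, nX, nB, nY):
    """[(|A|-1)|X| + 1] * [(|B|-1)|Y| + 1] - 1 free parameters of a behaviour."""
    return ((nA - 1) * nX + 1) * ((nB - 1) * nY + 1) - 1
```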
~\\
\noindent
\textbf{Joint correlators game $(\mathcal{G}_{\braket{AB}})$.}
The \emph{joint correlators game} is designed to estimate the correlations between the agents' outputs for each pair of inputs. Specifically, for each $(x,y) \in \mathcal{X}\mathcal{Y}$ we define a score $c_{xy}$ and a scoring rule
$$
V_{\braket{AB}}(a,b,x,y) := \begin{cases}
c_{xy} \qquad &\text{if } a=b \\
c_{\text{norm}} \qquad &\text{otherwise}.
\end{cases}
$$
That is, for a pair of inputs $(x,y)$ the score is recorded as $c_{xy}$ whenever
the agents' outcomes agree. Otherwise, they record some normalization
score $c_{\text{norm}}$. The input distribution can then be specified in some way: we use the uniform distribution over $\mathcal{X}\mathcal{Y}$. We refer to this game by the symbol $\mathcal{G}_{\braket{AB}}$ and, as before, we will indicate the sizes of the input alphabets with superscripts.
\begin{figure}[t]
\begin{center}
\begin{subfigure}{0.48\textwidth}
\includegraphics[trim= 5pt 5pt 5pt 5pt,clip=true,width=\textwidth]{alignplot.pdf}
\caption{$\mathcal{G}_{\braket{AB}}$ protocols}\vspace{-0.2cm}\label{subfig:corrplot}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[trim= 5pt 5pt 5pt 5pt,clip=true,width=\textwidth]{fbplot.pdf}
\caption{$\mathcal{G}_{\mathrm{EB}}$ protocols}\vspace{-0.2cm}\label{subfig:ebplot}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering\vspace{0.25cm}
\includegraphics[trim= 5pt 5pt 5pt 5pt,clip=true]{comparisonplot.pdf}
\caption{Comparison of protocols in the $(2,3)$-scenario.}\vspace{-0.25cm} \label{subfig:23comparison}
\end{subfigure}
\caption{A plot of the asymptotic and EAT-rates for protocols using the nonlocal game families $\mathcal{G}_{\braket{AB}}$, $\mathcal{G}_{\mathrm{EB}}$ and $\mathcal{G}_{\mathrm{CHSH}}$.}\label{fig:detection-plots}
\end{center}
\end{figure}
\subsection{Rates in the presence of inefficient detectors}\label{sec:noise-robustness-plots}
We now compare the accumulation rates of protocols built using the nonlocal games described above. We retain the protocol parameter choices from the previous examples: $n=10^{10}$, $\gamma = 5 \times 10^{-3}$ and $\epsilon_{s} = \epsilon_{\mathrm{EAT}} = 10^{-8}$, except we now set the confidence interval width parameter to
\begin{equation}
\delta_k = \sqrt{\frac{ 3\,\omega_k \ln(2/\varepsilon_{\mathrm{comp}})}{\gamma n}},
\end{equation}
in order to have a similar completeness error $\varepsilon_{\mathrm{comp}} \approx
10^{-12}$ across the different protocols.\footnote{In practice one
would fix the soundness error of the protocol. However, because the
soundness error is also dependent on the extraction phase we instead
assume independence of rounds and fix the completeness error.}
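The inversion behind this choice is elementary: each $\delta_k$ is picked so that the corresponding Chernoff tail contributes a fixed amount to the abort probability. A sketch follows (splitting an overall $10^{-12}$ budget evenly across the scores is our choice here, and the helper names are ours):

```python
import math

def delta_for_target(eps_k, n, gamma, omega_k):
    """Invert the Chernoff tail: half-width giving per-score abort probability eps_k."""
    return math.sqrt(3 * omega_k * math.log(2 / eps_k) / (gamma * n))

def per_score_abort(n, gamma, omega_k, delta_k):
    return 2 * math.exp(-gamma * delta_k**2 * n / (3 * omega_k))

n, gamma = 10**10, 5e-3
omega = (0.49, 0.4225, 0.0875)
eps_k = 1e-12 / len(omega)  # split an overall 1e-12 budget evenly (our choice)

deltas = [delta_for_target(eps_k, n, gamma, w) for w in omega]
```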
We suppose that the devices operate
by using a pure, entangled state of the form
\begin{equation}
\ket{\psi(\theta)}_{AB} = \cos(\theta) \ket{00} + \sin(\theta) \ket{11},
\end{equation}
for $\theta \in (0,\pi/4]$. We denote the corresponding density
operator by $\rho_{\theta} = \ketbra{\psi(\theta)}$. For simplicity we
restrict to projective measurements within the $x$-$z$ plane of the
Bloch-sphere, i.e., measurements $\lbrace \Pi(\varphi),\mathbb{1}-\Pi(\varphi)\rbrace$, with the projectors defined by
\begin{equation}
\Pi(\varphi) = \begin{pmatrix}
\cos^2(\varphi/2) & \cos(\varphi / 2) \sin(\varphi / 2) \\
\cos(\varphi / 2) \sin(\varphi/2) & \sin^2(\varphi/2)
\end{pmatrix}
\end{equation}
for $\varphi \in [0,2\pi)$. We denote the projectors associated with
the $j^{\text{th}}$ outcome of the $i^{\text{th}}$ measurement by
$A_{j|i}$ and $B_{j|i}$. The elements of the devices' behaviour can
then be written as
\begin{equation}\label{eq:Born-rule}
p(a,b|x,y) = \tr{\rho_{\theta}(A_{a|x}\otimes B_{b|y})}.
\end{equation}
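This qubit model is straightforward to implement numerically. The sketch below (function names ours) builds the behaviour from the Born rule; for $\theta = \pi/4$ and the standard CHSH measurement angles it recovers the Tsirelson value $2\sqrt{2}$.

```python
import numpy as np

def proj(phi):
    """Projector onto cos(phi/2)|0> + sin(phi/2)|1> (x-z plane of the Bloch sphere)."""
    v = np.array([np.cos(phi / 2), np.sin(phi / 2)])
    return np.outer(v, v)

def behaviour(theta, alice_angles, bob_angles):
    """p(a,b|x,y) via the Born rule for |psi(theta)> = cos(theta)|00> + sin(theta)|11>."""
    psi = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])
    rho = np.outer(psi, psi)
    I = np.eye(2)
    p = {}
    for x, phi_a in enumerate(alice_angles):
        for y, phi_b in enumerate(bob_angles):
            for a in (0, 1):
                for b in (0, 1):
                    A = proj(phi_a) if a == 0 else I - proj(phi_a)
                    B = proj(phi_b) if b == 0 else I - proj(phi_b)
                    p[(a, b, x, y)] = float(np.trace(rho @ np.kron(A, B)))
    return p

p = behaviour(np.pi / 4, [0.0, np.pi / 2], [np.pi / 4, -np.pi / 4])
```

Since each pair $\lbrace \Pi(\varphi),\mathbb{1}-\Pi(\varphi)\rbrace$ is a projective measurement, the outcome probabilities for every input pair automatically sum to one.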
Our analysis is focussed on how the accumulation rates differ when the devices operate
with inefficient detectors. Heralding can be used to
account for losses incurred during state transmission and has been
used to develop novel device-independent
protocols~\cite{MKSCBA18}. However, losses that occur within a user's
laboratory cannot be ignored without opening a
detection loophole~\cite{pearle}. Inefficient detectors are a major
contributor to the total experimental noise, so robustness to
inefficient detectors is a necessary property for any practical
randomness expansion protocol. We characterize detection efficiency by
a single parameter $\eta \in [0,1]$, representing the (independent)
probability with which a measurement device successfully measures a
received state and outputs the result.\footnote{For simplicity, we
make the additional assumption that the detection efficiencies are constant
amongst all measurement devices used within the protocol.} To deal
with failed measurements we assign outcome $0$ when this occurs. Combining this with~\eqref{eq:Born-rule}, we may write the behaviour as
\begin{equation}
\begin{aligned}
p(a,b|x,y) &= \eta^2\tr{\rho_{\theta}(A_{a|x}\otimes B_{b|y})} + (1-\eta)^2 \delta_{0a}\delta_{0b} \\
&+ \eta(1-\eta)\left(\delta_{0a}\tr{\rho_{\theta}(\mathbb{1}\otimes B_{b|y})} + \delta_{0b}\tr{\rho_{\theta}(A_{a|x}\otimes \mathbb{1})}\right).
\end{aligned}
\end{equation}
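A direct implementation of this lossy behaviour (helper names ours) makes it easy to verify that the construction remains normalised for every $\eta$ and reduces to \eqref{eq:Born-rule} when $\eta = 1$. For the sketch we use a toy ideal behaviour rather than the optimized qubit strategies.

```python
import itertools

def lossy_behaviour(p, eta):
    """Independent detectors of efficiency eta; a no-click is recorded as outcome 0."""
    xs = sorted({x for (_, _, x, _) in p})
    ys = sorted({y for (_, _, _, y) in p})
    # Single-party marginals (independent of the other input by no-signalling)
    pA = {(a, x): sum(p[(a, b, x, ys[0])] for b in (0, 1)) for a in (0, 1) for x in xs}
    pB = {(b, y): sum(p[(a, b, xs[0], y)] for a in (0, 1)) for b in (0, 1) for y in ys}
    return {(a, b, x, y): (eta**2 * p[(a, b, x, y)]
                           + (1 - eta)**2 * (a == 0) * (b == 0)
                           + eta * (1 - eta) * ((a == 0) * pB[(b, y)]
                                                + (b == 0) * pA[(a, x)]))
            for (a, b, x, y) in p}

# Toy ideal behaviour: uniformly random, uncorrelated bits in a (2,2)-scenario
p_ideal = {k: 1 / 4 for k in itertools.product((0, 1), repeat=4)}
p_lossy = lossy_behaviour(p_ideal, 0.8)
```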
For each protocol we consider lower bounds on two quantities: the
pre-EAT gain in min-entropy from a single interaction,
$H_{\min}(AB|XYE)$, and the \emph{EAT-rate},
$H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})/n$. The former quantity, which we
refer to as the \emph{asymptotic rate}, represents the maximum accumulation
rate achievable with our numerical technique. It is a lower bound on $H_{\min}^{\epsilon_{s}}(\AA\bm{\mathrm{B}}|\bm{\mathrm{X}}\bm{\mathrm{Y}}\mathrm{E})/n$, specified by \eqref{eq:EAT-total}, as $n \rightarrow \infty$ and $\gamma$, $\delta \rightarrow 0$.\footnote{We would really like to plot $H(AB|XYE)$ and the corresponding EAT-rate derived from it. However, in general we do not have suitable techniques to access these quantities in a device-independent manner.} Comparing these two quantities gives a clear picture of the amount of entropy that we lose due to the effect of finite statistics.
With inefficient detectors, partially entangled states can exhibit
larger Bell-inequality violations than maximally entangled
states~\cite{Eberhard}. To account for this we optimize both the state
and measurement angles at each data point using the iterative
optimization procedure detailed in~\cite{ATL16}. All programs were relaxed to the second level of the NPA hierarchy using \cite{ncpol2sdpa} and the resulting SDPs were computed using the SDPA solver \cite{sdpa}. The results of these
numerics are displayed in Fig$.$~\ref{fig:detection-plots}.
\begin{figure}[t]
\begin{center}
\includegraphics[]{RD-comparison.pdf}
\end{center}\vspace{-0.5cm}
\caption{Comparison illustrating the EAT-rates (cf. \eqref{eq:EAT-total}) converging to the asymptotic rates for protocols based on different nonlocal games. The rates were derived by assuming a qubit implementation of the protocols with a detection efficiency $\eta = 0.9$, optimizing the state and measurement angles in order to maximise the asymptotic rate. Then, for each value of $n$ we optimized the min-tradeoff function choice and $\beta$ parameter and noted the resulting bound on $H_{\min}^{\epsilon_{s}}$. To ensure that we approach the asymptotic rate as $n$ increased we set $\gamma = \delta_1 = \dots = \delta_{|\mathcal{G}|} = n^{-1/3}$, resulting in a constant completeness error across all values of $n$.}
\label{fig:RD-comparison}
\end{figure}
In Fig$.$~\ref{subfig:corrplot} and Fig$.$~\ref{subfig:ebplot} we see that in both families of protocols considered, an increase in the number of inputs leads to higher rates. This increase is significant when one moves from the $(2,2)$-scenario to the $(2,3)$-scenario. However, continuing this analysis for higher numbers of inputs we find that any further increases appear to have negligible impact on the overall robustness of the protocol.\footnote{This could also be an artefact of the assumed restriction to qubit systems.} Whilst all of the protocols achieve asymptotic rates of $2$ bits per round when $\eta = 1$, their respective EAT-rates at this point differ substantially.
In Fig$.$~\ref{subfig:23comparison} we see a direct comparison between
protocols from the different families. The plot shows that, as
expected, entropy loss is greater when using the nonlocality test
$\mathcal{G}_{\mathrm{EB}}^{23}$ as opposed to the other protocols. In particular, for
high values of $\eta$ we find that we would be able to certify a
larger quantity of entropy by considering fewer scores. However, it is still worth noting that this entropy loss could be reduced by choosing a more generous set of protocol parameters, e.g., increasing $n$ and decreasing $\delta$.
Increasing $n$ can be difficult in practice due to restrictions on the overall runtime of the protocol. Not only does it take longer to collect the statistics within the device-interaction phase, but it may also increase the runtime of the extraction phase \cite{MPS12}. In Fig$.$~\ref{fig:RD-comparison}
we observe how quickly the various protocols converge on their
respective asymptotic rates as we increase $n$. Again we find that, due to the finite-size effect, entropy loss when using $\mathcal{G}^{23}_{\mathrm{EB}}$ is greater than that observed in the other protocols. In particular, we see that for protocols with fewer than $10^{10}$ rounds, it is advantageous to use $\mathcal{G}_{\braket{AB}}^{23}$. From the perspective of practical implementation, Fig$.$~\ref{subfig:23comparison} and Fig$.$~\ref{fig:RD-comparison} highlight the benefits of a flexible protocol framework wherein a user can design protocols tailored to the scenario under consideration.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{comparisonplotAFRV.pdf}
\end{center}
\caption{Comparison between the certifiable accumulation rates
of QRNE protocols based on $\mathcal{G}_{\mathrm{CHSH}}$, $\mathcal{G}_{\mathrm{EB}}^{23}$ and
Protocol ARV from~\cite{ARV} on qubit systems with
inefficient detectors
(cf. Fig$.$~\ref{fig:detection-plots}). The rates of Protocol
ARV are also evaluated using the improved EAT
statement~\cite{DF}. For Protocol ARV, we use the one-sided
von Neumann entropy bound, so the maximum rate is one bit
per round, but because we can directly get the single-round
von Neumann entropy, the rate initially falls more slowly with
decreasing detection efficiency than for the other
protocols.} \label{fig:detection-AFRV-comparison}
\end{figure}
It is also important to compare the rates of instances of
Protocol~QRE~with other protocols from the literature, in
particular the protocol of~\cite{ARV} (ARV). In~\cite{ARV}, the
min-tradeoff functions are constructed from a tight bound on the
single-party von Neumann entropy, $H(A|XE)$, which is given in terms
of a CHSH inequality violation~\cite{ABGMPS}. In
Fig$.$~\ref{fig:detection-AFRV-comparison} we compare the rates of ARV
with $\mathcal{G}_{\braket{AB}}^{22}$ and $\mathcal{G}_{\mathrm{EB}}^{23}$ for entangled qubit systems
with inefficient detectors. To make our comparison fair, we have also
computed the rates for Protocol ARV using the improved EAT
bound\footnote{Note that we always use the direct bound on the von
Neumann entropy when considering Protocol ARV, rather than forming a
bound via the min-entropy.}. As the rates of Protocol ARV are derived
from the entropy accumulated by a single party their rates are capped
at one bit per round.
In contrast, the semidefinite programs grant us access to bounds on
the entropy produced by both parties and we are therefore able to
certify up to two bits per round. In
Fig$.$~\ref{fig:detection-AFRV-comparison}, this advantage is observed in
the high detection efficiency
regime. Fig$.$~\ref{fig:detection-AFRV-comparison} also highlights a
significant drawback of our technique, which stems from our use of the
inequality $H(AB|XYE) \geq H_{\min}(AB|XYE)$. In particular, we see
that for $\eta < 0.9$, the $H(A|XE)$ bound for the CHSH inequality is
already greater than the $H_{\min}(AB|XYE)$ established for the
empirical behaviour. Therefore, in the asymptotic limit
($n\rightarrow\infty$) the min-entropy bounds for these protocols will
produce strictly worse rates in this regime. For the finite $n$ we
have chosen, $n = 10^{10}$, it appears that for most smaller values of $\eta$ it is advantageous to use the ARV protocol over the protocols derived from the framework. Nevertheless, looking at the threshold detection efficiencies, i.e.\ the minimal detection efficiency required to achieve positive rates, we find that some protocols from our framework are again able to beat the rates established for Protocol ARV. Looking at the inset plot in Fig$.$~\ref{fig:detection-AFRV-comparison} we see that $\mathcal{G}_{\braket{AB}}^{22}$ has a smaller threshold efficiency than that of Protocol ARV for the chosen protocol parameters. Interestingly, this shows that $\mathcal{G}_{\braket{AB}}^{22}$ is capable of producing higher rates than Protocol ARV in both the low and the high detection efficiency regimes, with the improvement for low detection efficiencies being of particular relevance to experimental implementations. Importantly, this demonstrates that protocols from the framework are of practical use for finite $n$ in spite of the losses coming from the use of $H(AB|XYE) \geq H_{\min}(AB|XYE)$.
\begin{remark}
We have so far considered the only noise to be that caused by
inefficient detectors. However, it is natural to ask how other sources
of noise affect our results. By replacing the states used with Werner
states~\cite{Werner}, we find that the results remain robust---they
remain qualitatively the same, but for small Werner state noise, all
of the graphs shift to slightly lower rates. For this reason we
choose not to include the graphs here.
\end{remark}
\section{Conclusion}\label{sec:conclusion}
We have shown how to combine device-independent bounds on the guessing
probability with the EAT, to create a versatile method for analysing
quantum-secure randomness expansion protocols. The construction was
presented as a template protocol from which an exact protocol can be
specified by the user. The relevant security statements and quantity
of output randomness of the derived protocol can then be evaluated
numerically. A Python package~\cite{dirng-github} accompanies this
work to help facilitate implementation of the framework. In
\sec\ref{sec:examples} we illustrated the framework, applying it to
several example protocols, with parameters chosen to reflect the capabilities of current nonlocality tests. We then compared the robustness of these
protocols when implemented on qubit systems with inefficient
detectors. Our analyses show that, within a broadly similar
experimental setup, different protocols can have significantly
different rates, and hence that it is worth considering small
modifications to a protocol during their design. We also compared the rates of a selection of
our protocols to the protocol presented in~\cite{ARV} (ARV). Interestingly, we found that some of the protocols from
the framework are able to achieve higher rates than Protocol ARV in both the high and low detection efficiency regimes. In particular,
the higher rates at low detection efficiencies are of great importance for actual experimental implementations.
Although the framework produces secure and robust protocols, there
remains scope for further improvements. For example, our work relies
on the relation $H(AB|XYE) \geq H_{\min}(AB|XYE)$ which is far from
tight. The resulting loss can be seen when one compares the asymptotic
rate of $\mathcal{G}_{\mathrm{CHSH}}$ in Fig$.$~\ref{subfig:23comparison} with those
presented in~\cite{ARV} (see
Fig$.$~\ref{fig:detection-AFRV-comparison}). Several alternative
approaches could be taken in order to reduce this loss. Firstly, the
above relation is part of a more general ordering of the conditional
R\'enyi entropies.\footnote{The R\'enyi entropies are one of many
different entropic families that include the von Neumann entropy as
a limiting case. Any such family could be used if they satisfy an
equivalent relation.} If one were able to develop efficient
computational techniques for computing device-independent lower bounds
on one of these alternative quantities we would expect an immediate
improvement. Furthermore, dimension-dependent bounds may be applicable
in certain situations. For example, it is known that for the special
case of $n$-party, $2$-input, $2$-output scenarios it is sufficient to
restrict to qubit systems~\cite{Cirelson93,ABGMPS}.
Optimizing the choice of min-tradeoff function over $\mathcal{F}_{\min}$ is a
non-convex and not necessarily continuous
problem~\cite{curchod2017unbounded}. Our analysis in
\sec\ref{sec:examples} used a simple probabilistic gradient ascent
algorithm to approach this problem. We found that for certain
protocols, in particular $\mathcal{G}_{\mathrm{EB}}^{22}$, the optimization had to be
repeated many times before a good choice of min-tradeoff function was
found.
As Fig$.$~\ref{fig:detection-AFRV-comparison} shows, the framework is
capable of producing protocols that are of immediate relevance to
current randomness expansion experiments. It is therefore a
worthwhile endeavour to search for protocols within the framework that
provide high EAT-rates in different parameter regimes. Investigations
into the randomness certification properties of nonlocality tests with
larger output alphabets or additional parties could be of
interest. However, increasing either of these parameters is likely to
increase the influence of finite-size effects. Alternatively, one
could try to design more economical nonlocality tests by combining
scores that are of a lesser importance to the task of certifying
randomness. Intuitively, for a score $c \in \mathcal{C}$, the
magnitude of $\lambda(c)$ in the min-tradeoff function indicates
how important that score is for certifying entropy. If $|\lambda(c)|$
is large then this score is `important' in the sense that any small
deviations in the expected frequency of that score, $\omega(c)$, will
have a large impact on the amount of certifiable entropy. Another
approach to designing good nonlocality tests would be to take
inspiration from \cite{NPS14,BSS14} wherein the authors showed how to
derive the optimal Bell-expressions for certifying randomness. A
nonlocal game could then be designed to encode the constraints imposed
by this optimal Bell-expression. An example of such a game would be to
assign a score $+1$ to all $(ABXY)$ that have a positive coefficient
in the optimal Bell-expression and a score of $-1$ to all those with
negative coefficients. The input distribution of the nonlocal game
could then be chosen so as to encode the relative weights of the
coefficients.
Finally, our computational approach to the EAT considered only the
task of randomness expansion. Our work could be extended to produce security proofs for other device-independent tasks. Given that the EAT has already been successfully applied to a wide range of problems~\cite{RMW18,RMW16,AK17-DIRA,AB17,BMP17}, developing good methods for robust min-tradeoff function constructions represents an important step towards practical device-independent security.
\subsection*{Acknowledgments}
We are grateful for support from the EPSRC's Quantum Communications
Hub (grant number EP/M013472/1), an EPSRC First Grant (grant number
EP/P016588/1) and the WW Smith fund.
A lot of work has been done in the subject of thin spherical shells of
matter in general relativity \cite{israel,hajicek} and several applications
given \cite{polchinski,guth,balbinot}.
An interesting application of the mechanics of thin shells has been the
semiclassical treatment of the black hole radiation
\cite{krauswilczek,krauswilczek2,parikhwilczek}.
An important role in such a treatment is the choice of the gauge. In the
original treatment \cite{krauswilczek} a gauge was adopted in which the
radial component of the metric is equal to the radial coordinate except for an
arbitrarily small region around the shell. Such a gauge choice was put in
mathematically clear terms in \cite{FLW,LWF} where the reduced
canonical momentum is extracted through a well defined limiting process.
However such a limit gauge becomes singular when two or more shells are present
and intersect. In addition also in the simple instance in which only one shell
is present, it would be nice to have a procedure in which no limit process is
present as the gauge choice should be completely arbitrary provided it is free
of coordinate singularities. Moreover even in the one shell case the
procedure for extracting the canonical momentum is usually considered as a very
complicated one. Here we shall give a treatment which
greatly simplifies the derivation of the reduced action, does
not require any limiting procedure and can be applied for the
treatment of two or more shells which intersect.
The whole problem is to derive
the reduced action, i.e. an action in terms of the shell coordinates and some
appropriate conjugate momenta. We shall keep the formalism for the two shell
case as close as possible to the one shell treatment.
We shall perform all the treatment for a massive
dust shell and the simpler case of a null shell can be derived as a
particular case. As an application of the developed formalism we
rederive the well known Dray-'t Hooft and Redmount relations for the
intersection of light-like shells \cite{DtH,redmount}.
The main motivation which
originated the works \cite{krauswilczek,parikhwilczek,parikh} is the
computation of the
semiclassical tunneling amplitude, which is related to the black hole
radiation in which one takes into account the effect of energy loss by the
black hole. It is remarkable that such a simple model reproduces all the
correct features of the Hawking radiation, giving also some
corrections due to energy conservation. For approaches more kinematical in
nature see e.g.~\cite{zerbini}. The adoption of more general gauges allows
the dynamical treatment of two intersecting shells without
encountering singularities. The formalism developed here is applied to the
computation of the semiclassical emission
probability of two shells; it was in fact suggested \cite{krauswilczek} that
correlations could show up in the multiple shell emission. We find however
that such a model gives no correlation among the probabilities of the two
emitted shells.
The paper is structured as follows.
In Section 2 we lay down the formalism by exploiting some peculiar properties
of a function $F$ strictly related to the momenta canonically conjugate to
the metric functions. It is then just a simple matter of partial integrations
to extract the reduced canonically conjugate momentum without employing any
limit process. This is done in Section 3. A new term containing the time
derivative of the mass of the remnant black hole appears; if such a mass is
considered as a datum of the problem the result agrees with those of the
original massless case of Kraus and Wilczek and with the massive case result
obtained through a limiting process in \cite{FLW}. Then we discuss the
derivation of the equations of motion. In \cite{krauswilczek} and \cite{FLW}
the variational principle was applied by varying the total mass of the system,
which we shall denote by $H$. In \cite{parikhwilczek} the approach was instead
to keep the total mass of the system fixed and to vary the mass of
the remnant black hole. Here we show that both procedures can be applied to
obtain the equations of motion; depending on the choice of gauge, however,
one procedure is far more complicated than the other, and we give them both.
In Section 4 we discuss more general gauge choices and derive the equation of
motion in the inner gauge. We shall not consider in the present paper complex
gauges or complex gauge transformations \cite{chowd}.
In Section 5 we discuss the analytic properties of the conjugate momentum;
this is of interest because the imaginary part of the conjugate momentum is
responsible for the tunneling amplitude and in determining the Hawking
temperature both via a simple mechanical model or more precisely by working
out the semiclassical wave functions on which to expand the matter quantum
field \cite{keskivakkuri}. We remark that the result for the tunneling
probability is independent of the mass of the shell but depends only on the
initial and final energy of the black hole.
In Section 6 we extend the treatment to two shells in the outer gauge writing
down the explicit expression of the two reduced canonical momenta.
In Section 7 we derive the equations of motion for both shells from the
reduced two shell action. For simplicity this is done for the massless case;
the general treatment for massive shells is given in Appendix C.
In Section 8 the developed formalism is applied to give a very simple
treatment
of two shells which intersect. In the massless case we rederive the well
known
relations of Dray and 't Hooft \cite{DtH} and Redmount \cite{redmount}.
In Section 9 we consider the problem of computing the tunneling probability
for the emission of two shells which in the process can intersect. To this end
one has to compute the imaginary part of the action along the analytically
continued solution of the equations of motion. In this connection a helpful
integrability result is proven which allows one to compute the action along a
specially chosen trajectory in the reduced coordinate space. Such a result
allows us, on the one hand, to prove that the result is independent of the
deformation defining the gauge and, on the other, to compute this
imaginary part explicitly. The final outcome is that in all instances
(massive or massless shells) the result again depends only on the initial and
final values of the masses of the black hole and the expression coincides with
the one obtained in the one shell case. The interest in studying the two
shell system was pointed out in \cite{krauswilczek} in order to
investigate possible correlations among the emitted shells. Here we
find that the two shells, even if they interact with an exchange of masses,
are
emitted with a probability which is simply the product of the probabilities
for single shell emissions, and thus that the model does not predict any
correlation among the emitted shells.
In Section 10 we summarize the obtained results.
\section{The action}\label{theactionsec}
As usual we write the metric for a spherically symmetric configuration
in the ADM form \cite{krauswilczek,polchinski,FLW}
\begin{equation}
ds^2=-N^2 dt^2+L^2(dr+N^r dt)^2+R^2d\Omega^2.
\end{equation}
We shall work on a finite region of space-time $(t_i, t_f)\times (r_0,
r_m)$. On the two initial and final surfaces we give the intrinsic
metric by specifying $R(r,t_i)$ and $L(r,t_i)$ and similarly
$R(r,t_f)$ and $L(r,t_f)$.
The complete action in Hamiltonian form, boundary terms included, is
\cite{hawkinghunter,polchinski,FLW}
\begin{equation}\label{completeaction}
S=S_{shell}+\int_{t_i}^{t_f} dt
\int_{r_0}^{r_m} dr (\pi_L \dot L +
\pi_R \dot R - N{\cal H}_t-N^r {\cal H}_r) +
\int_{t_i}^{t_f} dt \left.(-N^r \pi_L L+ \frac{NRR'}{L})\right|^{r_m}_{r_0}
\end{equation}
where
\begin{equation}\label{shellaction}
S_{shell}=\int_{t_i}^{t_f} dt ~\hat p~\dot{\hat r}.
\end{equation}
${\cal H}_t$ and ${\cal H}_r$ are the constraints which are reported in
Appendix A, $\hat r$ is the shell position and $\hat p$ its canonical conjugate
momentum. Action (\ref{completeaction}) is immediately generalized to a finite
number of shells.
$S_{shell}$ as given by eq.(\ref{shellaction}) refers to a dust shell even
though generalizations to more complicated equations of state have been
considered \cite{hajicek,polchinski,goncalves}.
Varying the action w.r.t. $N$ and $N^r$ gives the vanishing of the constraints
while
the variations w.r.t. $R,L,\pi_R,\pi_L$ give the equations of motion of
the gravitational field \cite{polchinski, FLW} which for completeness
are reported in Appendix A. The functions $R, L, N, N^r$ are continuous in $r$
while $R', L', N', {N^r}',\pi_L, \pi_R$ can have finite discontinuities
at the shell position \cite{FLW}. In \cite{FLW} it was proven that the
equation of motion of a massive shell cannot be obtained from a true
variational procedure, in the sense that the obtained expression for
$\dot {\hat p}$ is discontinuous at $r=\hat r$. In the same paper it
was remarked that for consistency the average value of the r.h.s. of the
equation for $\dot{\hat p}$ has to be taken. For the reader's convenience we
give in Appendix A the
explicit proof that the equations of motion of matter, i.e. $\dot{\hat
r}$ and $\dot {\hat p}$, can be deduced from the already obtained equations of
motion for the gravitational field combined with the constraints.
The equation for $\dot{\hat r}$ does not pose any
problem while for the equation for $\dot{\hat p}$ one deduces algebraically
that the r.h.s. contains the
average of the derivatives of $L,N, N^r$ across the shell. In Appendix A we
also show directly that such discontinuity is absent in the massless case
\cite{LWF}.
Already in \cite{polchinski,krauswilczek} it was pointed out that in the region
free of matter, as a consequence of the two constraints the quantity
${\cal M}$
\begin{equation}
{\cal M}=\frac{\pi_L^2}{2R} +\frac{R}{2}-\frac{R R'^2}{2L^2}
\end{equation}
is constant in $r$, and this allows one to solve for the momenta
\cite{polchinski,krauswilczek}
\begin{equation}
\pi_L=R\sqrt{\left(\frac{R'}{L}\right)^2-1+\frac {2{\cal M}}{R}}\equiv R W
\end{equation}
\begin{equation}
\pi_R= \frac{L[(R/L)(R'/L)'+(R'/L)^2-1+{\cal M}/R]}{W}.
\end{equation}
In the one shell problem we shall denote the value of ${\cal M}$ by
$M$ for $r<\hat r$ and by $H$ for $r>\hat r$.
The function
\begin{equation}
F=R L \sqrt{\left(\frac{R'}{L}\right)^2 -1+ \frac{2{\cal M}}{R}} +
RR'\log\left(\frac{R'}{L}- \sqrt{\left(\frac{R'}{L}\right)^2 -1+
\frac{2{\cal M}}{R}}\right)-
R'~f'(R)
\end{equation}
has the property of generating the conjugate momenta as follows
\begin{equation}
\pi_L = \frac{\partial F}{\partial L}
\end{equation}
\begin{equation}\label{piRfromF}
\pi_R = \frac{\delta F}{\delta R}=\frac{\partial F}{\partial
R}-\frac{\partial }{\partial r}\frac{\partial F}{\partial R'}.
\end{equation}
The total derivative $\frac{\partial f(R)}{\partial r}= R' f'(R)$
of the arbitrary function $f(R)$ does not contribute to the momenta.
The function $F$ will play a major role in the subsequent developments.
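As a side check, not part of the derivation, the generating property $\pi_L=\partial F/\partial L$ can be verified numerically by a finite difference; the values of $R$, $R'$, $L$, ${\cal M}$ below are arbitrary test inputs chosen with $R>2{\cal M}$ so that the logarithm stays real, and the $L$-independent term $R'f'(R)$ is dropped.

```python
import math

# Finite-difference check that dF/dL = pi_L = R*W, with
# F = R*L*W + R*R'*log(R'/L - W),  W = sqrt((R'/L)^2 - 1 + 2M/R).
# Test values are arbitrary, with R > 2M so the log argument is
# positive; the f'(R) term does not depend on L and is omitted.

def F(R, Rp, L, M):
    W = math.sqrt((Rp / L) ** 2 - 1.0 + 2.0 * M / R)
    return R * L * W + R * Rp * math.log(Rp / L - W)

def pi_L(R, Rp, L, M):
    return R * math.sqrt((Rp / L) ** 2 - 1.0 + 2.0 * M / R)

R, Rp, L, M = 5.0, 1.2, 1.0, 1.0
h = 1e-6
fd = (F(R, Rp, L + h, M) - F(R, Rp, L - h, M)) / (2.0 * h)
print(fd, pi_L(R, Rp, L, M))
```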
A large freedom is left in the choice of the gauge. With regard to $L$
we shall adopt the usual gauge $L=1$.
It will be useful in the following developments to choose the
arbitrary function $f(R)$ such that $F=0$ for $R'=1$. Such a
requirement fixes $f'(R)$ uniquely, giving
\begin{equation}
F(R,R',{\cal M})
=R W(R,R',{\cal M}) +
RR'({\cal L}(R,R',{\cal M}) - {\cal B}(R,{\cal M}))
\end{equation}
where
\begin{equation}\label{Wdefinition}
W(R,R',{\cal M})= \sqrt{R'^2-1+\frac{2{\cal M}}{R}};~~
{\cal L}(R,R',{\cal M}) = \log(R'-W(R,R',{\cal M}))
\end{equation}
and
\begin{equation}
{\cal B}(R,{\cal M}) = \sqrt{\frac{2{\cal M}}{R}}+\log(1-\sqrt{\frac{2{\cal
M}}{R}}).
\end{equation}
The function $F(R,R',{\cal M})$ has the following useful properties
\begin{equation}\label{Fproperties}
\frac{\partial F(R,R',{\cal M})}{\partial R'}= R({\cal L}-{\cal B});~~~~
\frac{\partial {\cal L}}{\partial R'}=-\frac{1}{W}.
\end{equation}
Other properties of $F$ will be written when needed.
In the following section we shall choose $R=r$ for $r> \hat r$ and
also $R=r$ for
$r<\hat r -l$ so that $F$ vanishes identically outside the interval $\hat
r-l<r<\hat r$. We shall call this class of gauges ``outer
gauges''. In Sect. \ref{moregeneralgauge} we shall consider more general
gauges e.g. the gauge $R=r$ for $r< \hat r$ and also $R=r$ for $r>\hat r +l$
which we shall call ``inner
gauges''. However, contrary to what is done in \cite{krauswilczek,FLW}, we will
not take any limit $l\rightarrow 0$, and we shall prove that the results are
independent of the deformation of $R$ in the region $\hat r-l<r<\hat
r$ (or $\hat r< r<\hat r + l$ for the inner gauges) provided $R'$
satisfies the constraint at $r=\hat r$. We shall call these regions
$(\hat r -l,\hat r)$ for the outer gauge and $(\hat r ,\hat r+l)$ for
the inner gauge, the deformation regions.
The variation of $S$ which produces the equations of motion has to be
taken, as it is well known, by keeping the metric and in particular
$R$ and $L$ fixed at the boundaries. The variation of the boundary
terms gives
\begin{equation}
-N^r(r_m)\delta \pi_L(r_m)+ N^r(r_0)\delta \pi_L(r_0).
\end{equation}
The $N,~N^r$ can be obtained from the two equations of motion for the
gravitational field
\begin{equation}\label{NrNequations}
0= N[\frac{\pi_L}{R^2}-\frac{\pi_R}{R}] +(N^r)';~~~~
\dot R =- N \frac{\pi_L}{R} +N^r R'.
\end{equation}
Using these it is easily proved that for $r$ outside the deformation
region $(\hat r-l,\hat r)$ we have $N^r=N\sqrt{\frac{2H}{r}}$ for
$r>\hat r$, $N={\rm const}$ and $N^r=N\sqrt{\frac{2M}{r}}$ for
$r<\hat r-l$, $N={\rm const}$ where the two constants as a rule differ.
Thus the variation of the boundary term is
\begin{equation}
-N(r_m) \delta H+N(r_0) \delta M.
\end{equation}
In the next section we shall connect $N(r_m)$ with $N(r_0)$, since
$N(r)$ is not constant in the deformation region.
\section{The one shell effective action in the outer gauge}\label{oneshell}
As outlined in the previous section we shall choose
\begin{equation}\label{Rfunction}
R(r,t) = r+\frac{V(t)}{\hat r(t)} \int_0^r \rho(r'-\hat r(t)) dr'=
r+\frac{V(t)}{\hat r} g(r-\hat r(t))
\end{equation}
where $\rho$ has support $(-l,0)$, $\rho(0)=1$, $\rho$ is smooth at $-l$, and
\begin{equation}
\int_{-l}^{0} \rho(r) dr =0.
\end{equation}
As a consequence the deformation $g(r)$ has support in $(-l,0)$
and $g'(0-\varepsilon)=1$. Such $R$ satisfies the
discontinuity requirements at $r=\hat r$ which are imposed by the
constraints.
In fact the constraints (see Appendix A) impose the following discontinuity
relations at $\hat r$ (we recall that we chose $L\equiv 1$)
\begin{equation}\label{DeltaR1}
\Delta R'=-\frac{V}{R};~~~~V=\sqrt{{\hat p}^2+m^2}
\end{equation}
and
\begin{equation}\label{DeltapiL}
\Delta \pi_L=-\hat p.
\end{equation}
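For illustration only, a concrete profile with the properties required of $\rho$ (support $(-l,0)$, $\rho(0)=1$, smoothness at $-l$, vanishing integral) is $\rho(x)=u^2(4u-3)$ with $u=1+x/l$; this particular choice is ours, not fixed by the text, and the sketch below checks the resulting deformation $g$ numerically.

```python
import math

# Example deformation profile (our own choice, not fixed by the paper):
# with u = 1 + x/l on (-l, 0), rho(x) = u^2 * (4u - 3) satisfies
# rho(0) = 1, rho(-l) = rho'(-l) = 0 and int_{-l}^0 rho dx = 0,
# so g(x) = int_{-l}^x rho dx' has support (-l, 0) with g(0) = 0
# and g'(0-) = rho(0) = 1.

l = 1.0

def rho(x):
    if x <= -l or x > 0.0:
        return 0.0
    u = 1.0 + x / l
    return u * u * (4.0 * u - 3.0)

def g(x, n=20000):
    # trapezoidal integral of rho from -l to x
    if x <= -l:
        return 0.0
    a = -l
    step = (x - a) / n
    s = 0.5 * (rho(a) + rho(x))
    for k in range(1, n):
        s += rho(a + k * step)
    return s * step

print(rho(0.0), g(0.0), g(-0.5))
```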
In the outer gauge the bulk gravitational action becomes
\begin{equation}\label{bulkaction}
S_g = \int_{t_i}^{t_f} I_g ~dt
\end{equation}
where, keeping in mind that $L\equiv 1$
\begin{eqnarray}
&& I_g=\int_{r_0}^\infty (\pi_L \dot L +\pi_R \dot R ) dr=\int_{r_0}^\infty
\pi_R \dot R dr=
\int_{r_0}^\infty
\left((\frac{\partial F}{\partial
R}-\frac{\partial }{\partial r}\frac{\partial F}{\partial R'})\dot R
\right) dr \nonumber \\
&& =\int_{r_0}^{\hat r(t)} \frac{dF}{dt}dr-\dot M(t)\int_{r_0}^{\hat
r(t)}\frac{\partial F}{\partial M} dr-
\left.\frac{\partial F}{\partial R'}
\dot R\right |_{r_0}^{\hat r(t)} \nonumber \\
&& =\frac{d}{dt }\int_{r_0}^{\hat r(t)} F dr- \dot M(t)\int_{r_0}^{\hat
r(t)}\frac{\partial F}{\partial M} dr -\left[\dot{\hat r}(t) F
+\frac{\partial F}{\partial R'}
\dot R\right]_{\hat r(t)-\varepsilon}
\end{eqnarray}
where we used the fact that $F$ vanishes at $r=r_0$.
Adding $I_{shell} = \hat p \dot{\hat r}$ and neglecting the total time
derivative which does not contribute to the equations of motion, we obtain
for the reduced action in the outer gauge
\begin{equation}\label{outerreducedaction}
\int_{t_i}^{t_f} \left(p_{c}~ \dot{\hat r} -\dot
M(t)\int_{r_0}^{\hat r(t)}\frac{\partial F}{\partial M} dr+
\left.(-N^r \pi_L + NRR')\right|^{r_m}_{r_0}\right)dt
\end{equation}
where using (\ref{DeltaR1},\ref{DeltapiL})
\begin{eqnarray}\label{pc}
&& p_c= - F(\hat r(t)-\varepsilon) - \frac{1}{\dot {\hat
r}(t)}\left.\frac{\partial F}{\partial R'}\dot R \right|_{\hat
r(t)-\varepsilon} +\hat p = \nonumber \\
&& = \sqrt{2M\,\hat r}-\sqrt{2H \,\hat r}-\hat
r\log\left(\frac{\hat r+\sqrt{{\hat p}^2+m^2}-\hat p-
\sqrt{2H \,\hat r}}{\hat r-\sqrt{2M \,\hat r}}\right).
\end{eqnarray}
A few comments are in order: 1) No limit $l\rightarrow 0$ is necessary
for obtaining $p_c$ of eq.(\ref{pc}) which holds for any deformation
$g$. 2) The $\dot M(t)$ term is important, as we shall see, if we
consider the variational problem in which $M$ is varied. On the other
hand if we consider $M$ as a datum of the problem and vary $H$ the
contribution $\dot M(t)$ is absent. 3) $\hat p$ in eq.(\ref{pc}) is a
function of $\hat r$, $H$ and $M$ as given by the discontinuity
relation (\ref{DeltapiL}) equivalent to the implicit equation
\begin{equation}\label{fundamentalH}
H-M= V +\frac{m^2}{2\hat r}-\hat p\sqrt{\frac{2H}{\hat r}}.
\end{equation}
We now discuss these issues in more detail. Let us first consider $M$
as a datum of the problem and vary $H$. As shown in Appendix A, in order
to be consistent with the gravitational equations $M$ has to be
constant in time. This is the situation examined in \cite{FLW} where
the expression (\ref{pc}) for $p_c$ was derived by a limit process in
which $l\rightarrow 0$. From eq.(\ref{outerreducedaction})
we see that the equation of
motion for $\dot{\hat r}$ is given by
\begin{equation}\label{outereqmotion}
\dot{\hat r}\frac{\partial p_c}{\partial H}-N(r_m)=0
\end{equation}
where $N(r_m)$ can be replaced by $N(\hat r)$, since in the outer
gauge $N(r)$ is constant for $r>\hat r$. The computation of
eq.(\ref{outereqmotion})
keeping in mind the implicit definition of $\hat p$
eq.(\ref{fundamentalH})
gives
the correct equation of motion for the massive shell
\begin{equation}\label{1sheqmotion}
\dot{\hat r} = \frac{\hat p}{V}N(\hat r)- N^r(\hat r) =
\left(\frac{\hat p}{V}-\sqrt{\frac{2H}{\hat r}}\right)N(\hat r).
\end{equation}
The calculation is outlined in Appendix B.
Alternatively one can consider $H={\rm const}$ as a datum of the
problem and vary $M(t)$. We remark that as shown in Appendix A the
datum $M$ or $H$ is consistent with the gravitational equations only if
$H$ and $M$ are constant in time. Nevertheless in the variational
problem $H$ and $M$ have to be considered as functions of time, because
the constraints tell us only that $M$ and $H$ are constant in
$r$. Only after deriving the equations of motion can one insert the
consequences of the gravitational equations of motion.
The variation of $M(t)$ in the outer gauge is a far more complicated
procedure, due to
the presence of $\dot M(t)$, but
gives rise to the same result obtained by varying $H$ and keeping $M$
fixed. As this will be useful for understanding the two shell reduced
dynamics to be developed in Sect.\ref{twoshell},
we go into it in some detail.
In this case the $\dot M$ term plays
a major role; in fact the equation of motion now takes the form
\begin{equation}\label{Meq}
\dot{\hat r} \frac{\partial p_c}{\partial
M}+\frac{d}{dt}\int_{r_0}^{\hat r(t)}\frac{\partial F}{\partial M} dr
+ N^r(r_0)\frac{\partial \pi_L}{\partial M}(r_0) =0
\end{equation}
where due to the vanishing of $g(x)$ for $x<-l$ we have
$\pi_L(r_0)=\sqrt{2Mr_0}$. $N^r(r)$ on the other hand is obtained by
solving the two coupled equations (\ref{NrNequations}) with the
condition that for $r>\hat r$, $N^r(r)$ equals $\sqrt{\frac{2H}{r}}$, having
normalized $N=1$ for $r>\hat r$.
One easily finds that for $r<\hat r$
\begin{equation}\label{Nrsolution}
N^r(r)= W \left[\int_{\hat r}^r \dot R \frac{\partial
\pi_R}{\partial M}dr + \frac{\sqrt{2H\hat r}}{\sqrt{2H\hat r}+\hat p}\right].
\end{equation}
Taking into account that
\begin{equation}
-\frac{d}{d r}(\frac{\partial^2 F}{\partial R'\partial M})
+\frac{\partial^2 F}{\partial M\partial R}=
\frac{\partial\pi_R}{\partial M}
\end{equation}
we have
\begin{equation}
\frac{d}{dt}\int_{r_0}^{\hat r(t)}\frac{\partial F}{\partial M} dr =
\int_{r_0}^{\hat r}\dot R\frac{\partial \pi_R}{\partial M}dr
+\left[\dot{\hat
r}\frac{\partial F}{\partial
M}+\frac{\partial^2F}{\partial M\partial R'}\dot R\right]_{\hat r-\varepsilon}
\end{equation}
and eq.(\ref{Meq}) becomes
\begin{equation}\label{generalMeqmotion}
\dot{\hat r}\frac{\partial p_c}{\partial M}+\left(\dot{\hat r} \frac{\partial
F}{\partial M}
+ \dot R\frac{\partial^2 F}{\partial R'\partial M}\right)_{\hat
r-\varepsilon}+\frac{\sqrt{2H\hat r}}{\sqrt{2H\hat r}+\hat p}=0.
\end{equation}
From the expression for $p_c$ given by the first line of
eq.(\ref{pc})
we obtain
\begin{equation}
\dot{\hat r}\left(\frac{\partial \hat p}{\partial
M}-\left.\frac{V}{W}\frac{\partial R'}{\partial M}\right|_{\hat r
-\varepsilon}\right)+\frac{\sqrt{2H\hat r}}{\sqrt{2H\hat r}+\hat p}=0
\end{equation}
where $W(\hat r -\varepsilon)=\sqrt{R'^2(\hat r-\varepsilon)-1+2M/R(\hat r)}
=\sqrt{\frac{2H}{\hat r}}+\frac{\hat p}{\hat r}$
and $R'(\hat r-\varepsilon)=1+V/\hat r$.
Using
\begin{equation}
\frac{\partial \hat p}{\partial M}=-\frac{1}{\frac{\hat
p}{V}-\sqrt{\frac{2H}{\hat r}}}
\end{equation}
we obtain eq.(\ref{1sheqmotion}) again. We remark once more that no
limit process $l\rightarrow 0$ is necessary for all these developments.
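As an independent numerical check (with arbitrary values of $H$, $M$, $m$, $\hat r$ satisfying $H-M>m$ and $\hat r>2H$), the formula for $\partial\hat p/\partial M$ used above can be compared with a finite difference of the outgoing root of eq.(\ref{fundamentalH}):

```python
import math

# Finite-difference check of d p_hat/dM = -1/(p_hat/V - sqrt(2H/r)),
# where p_hat(M) is the outgoing (plus-sign) root of
# H - M = V + m^2/(2r) - p_hat*sqrt(2H/r),  V = sqrt(p_hat^2 + m^2).
# Test values are arbitrary, with H - M > m and r > 2H.

H, m, r = 1.2, 0.1, 10.0

def phat(M):
    A = (H - M) / r - m ** 2 / (2.0 * r ** 2)
    disc = math.sqrt(A ** 2 - (1.0 - 2.0 * H / r) * m ** 2 / r ** 2)
    return r * (A * math.sqrt(2.0 * H / r) + disc) / (1.0 - 2.0 * H / r)

M, d = 1.0, 1e-6
fd = (phat(M + d) - phat(M - d)) / (2.0 * d)
p = phat(M)
V = math.sqrt(p ** 2 + m ** 2)
exact = -1.0 / (p / V - math.sqrt(2.0 * H / r))
print(fd, exact)
```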
To summarize, in the present section we have derived the reduced action for
the one shell problem in the outer gauge with an arbitrary
deformation. One can vary $H$ (the exterior ADM mass) considering the
interior mass as given, or one can vary the interior mass $M$
considering the exterior mass $H$ as given, or even one can vary both
$M$ and $H$ always obtaining the correct equations of motion. Whenever
$M$ is varied the $\dot M$ term in eq.(\ref{Meq}) plays a
crucial role. None of the results depend on the deformation $g$.
\bigskip
\section{\bf More general gauges}\label{moregeneralgauge}
It is of interest to examine more general gauges given by eq.(\ref{Rfunction})
where
$g$ does not necessarily vanish for positive argument, i.e. we can consider
$g(x)$ with $g(x)=0$ for $|x|>l$, $g(0)=0$ and $g'(+0)-g'(-0)=-1$,
thus again satisfying
the constraint (\ref{DeltaR1}).
In this case $F(R,R',H)$ does not vanish identically
for $r>\hat r $ and the bulk action (\ref{bulkaction}) is given by the time
integral of
\begin{equation}\label{generalgaugebulk}
I_g=\frac{d}{dt }\int_{r_0}^{r_m} F dr- \dot M(t)\int_{r_0}
^{\hat r(t)}\frac{\partial F}{\partial M} dr- \dot H(t)\int_{\hat r(t)}^{r_m}
\frac{\partial F}{\partial H} dr +\left[\dot{\hat r}(t) F
+\frac{\partial F}{\partial R'}
\dot R\right]_{\hat r(t)-\varepsilon}^{\hat r(t)+\varepsilon}
\end{equation}
From eq.(\ref{generalgaugebulk}) one immediately obtains
the general form of the canonical momentum $p_c$
\begin{equation}
p_c = \hat r (\Delta {\cal L}-\Delta {\cal B})
\end{equation}
where $\Delta {\cal L}={\cal L}(\hat r+\varepsilon)-{\cal L}(\hat
r-\varepsilon)$ and similarly for $\Delta {\cal B}$ and the reduced action
becomes
\begin{equation}\label{genreducedaction}
S = \int_{t_i}^{t_f} \left(p_{c}~ \dot{\hat r} -\dot
M(t)\int_{r_0}^{\hat r(t)}\frac{\partial F}{\partial M} dr -\dot
H(t)\int_{\hat r(t)}^{r_m}\frac{\partial F}{\partial H} dr+
\left.(-N^r \pi_L + NRR')\right|^{r_m}_{r_0}\right ) dt.
\end{equation}
We shall call inner gauge the one characterized by $g(x)=0$ for $x<0$.
Due to the similarity with the treatment of Sect.\ref{oneshell}
we shall proceed rather quickly. Now $F$ vanishes identically for $r<\hat r$
and the action takes the form
\begin{equation}
S=\int_{t_i}^{t_f}dt\left (p_c^i \dot{\hat r}-\dot H\int_{\hat
r(t)}^{r_m}
\frac{\partial F}{\partial H}dr+(-N^r\pi_L+N R R')|^{r_m}_{r_0}\right)
\end{equation}
where $p_c^i$ is given by
\begin{equation}
p_c^i = \left[F+ \frac{\dot R}{\dot{\hat r}}\frac{\partial F}{\partial
R'}\right]_{\hat r +\varepsilon}+\hat p
\end{equation}
whose explicit value is
\begin{equation}
p_c^i= \sqrt{2M\,\hat r}-\sqrt{2H \,\hat r}-\hat
r\log\left(\frac{\hat r-\sqrt{2H \,\hat r}}
{\hat r- V + \hat p-\sqrt{2M\hat r}}\right)
\end{equation}
and $\hat p$, again determined by the discontinuity equation
(\ref{DeltapiL}), is given by the implicit equation
\begin{equation}\label{fundamentalM}
H-M=V-\frac{m^2}{2\hat r}-\hat p \sqrt{\frac{2M}{\hat r}}
\end{equation}
which is different from eq.(\ref{fundamentalH}).
Now the simpler procedure is the one in which one varies
$M$, keeping $H$ as a fixed datum of the problem.
This time the solution of the system eq.(\ref{NrNequations})
gives simply $N={\rm const}$ and $N^r = N \sqrt{\frac{2M}{r}}$ for
$r_0<r<\hat r$.
\bigskip
\section{\bf The analytic properties of $p_c$}\label{analyticmomenta}
We saw that in the outer gauge
\begin{equation}\label{pcanalytic}
p_c= \sqrt{2M\,\hat r}-\sqrt{2H \,\hat r}-\hat
r\log\left(\frac{\hat r+V-\hat p-
\sqrt{2H \,\hat r}}{\hat r-\sqrt{2M \,\hat r}}\right).
\end{equation}
The solution of eq.(\ref{fundamentalH}) for $\hat p$ is
\begin{equation}
\frac{\hat p}{\hat r} =\frac{A \sqrt{\frac{2H}{\hat r}}~
\pm\sqrt{A^2 -(1-\frac{2H}{\hat r})\frac{m^2}{\hat r^2}}}
{1-\frac{2H}{\hat r}}
\end{equation}
where
\begin{equation}
A =\frac{H-M}{\hat r}-\frac{m^2}{2 \hat r^2}.
\end{equation}
If we want $\hat p$ to describe an outgoing shell we must choose the
plus sign in front of the square root. Moreover the shell
reaches $r=+\infty$ only if $H-M>m$ as expected.
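This closed-form root can be checked against the implicit relation (\ref{fundamentalH}); the sketch below does so for arbitrary test values with $H-M>m$ and $\hat r>2H$.

```python
import math

# Check that the plus-sign root p_hat satisfies
# H - M = V + m^2/(2r) - p_hat*sqrt(2H/r),  V = sqrt(p_hat^2 + m^2).
# H, M, m, r are arbitrary test values with H - M > m and r > 2H.

H, M, m, r = 1.2, 1.0, 0.1, 10.0
A = (H - M) / r - m ** 2 / (2.0 * r ** 2)
phat = r * (A * math.sqrt(2.0 * H / r)
            + math.sqrt(A ** 2 - (1.0 - 2.0 * H / r) * m ** 2 / r ** 2)) \
         / (1.0 - 2.0 * H / r)
V = math.sqrt(phat ** 2 + m ** 2)
lhs = H - M
rhs = V + m ** 2 / (2.0 * r) - phat * math.sqrt(2.0 * H / r)
print(lhs, rhs)
```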
The logarithm has branch points at zero and infinity, and thus we must
investigate for which values of $\hat r$ such values are reached.
At $\hat r=2H$, $\hat p$ has a simple pole with positive residue; then the
numerator goes to zero and below $2H$ it becomes
\begin{equation}\label{contnumerator}
1 - \frac{V}{\hat r}-\frac{\hat p}{\hat r}-\sqrt{\frac{2H}{\hat r}}
\end{equation}
where here $V$ is the absolute value of the square root. Expression
(\ref{contnumerator}) is negative irrespective of the sign of $\hat p$ and
stays so
because $\hat p$ is no longer singular.
In order to compute the tunneling amplitude, below $\hat r = 2H$ we
have to use the prescription \cite{parikhwilczek} $\hat r - 2H
\rightarrow \hat r - 2H - i\varepsilon$ and as a consequence the $p_c$
below $2H$ acquires the imaginary part $i\pi \hat r$. Below $\hat r =
2M$ the denominator of the argument of the logarithm in
eq.(\ref{pcanalytic})
becomes negative so that the argument of the logarithm reverts to positive
values.
Thus the ``classically forbidden'' region is $2M<\hat r <2H$
independent of $m$ and of the deformation $g$ and the integral of the
imaginary part of $p_c$ for any deformation $g$ is
\begin{equation}\label{integratedimpart}
\int {\rm Im}~ p_c dr =
\pi \int_{2M}^{2H} r dr = 2\pi(H^2-M^2)= 4\pi (M+\frac{\omega}{2})\omega
\end{equation}
with $\omega = H-M$ which is the Parikh-Wilczek result \cite{parikhwilczek}.
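As a quick consistency check on the closed-form integral (the masses below are arbitrary test values), one can compare a numerical quadrature with $2\pi(H^2-M^2)$:

```python
import math

# Trapezoidal evaluation of int_{2M}^{2H} pi*r dr against the
# closed form 2*pi*(H^2 - M^2) = 4*pi*(M + w/2)*w, w = H - M.
# M and H are arbitrary test masses with H > M.

M, H = 1.0, 1.3
w = H - M
a, b = 2.0 * M, 2.0 * H
n = 10000
step = (b - a) / n
num = 0.5 * (math.pi * a + math.pi * b)
for k in range(1, n):
    num += math.pi * (a + k * step)
num *= step
closed = 2.0 * math.pi * (H ** 2 - M ** 2)
print(num, closed, 4.0 * math.pi * (M + 0.5 * w) * w)
```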
A more profound way to relate the result for $p_c$ to the formula for the
Hawking radiation is to use (\ref{pc}) to compute semiclassically the modes on
which to expand the quantum field, and then proceed as usual by means of the
Bogoliubov transformation. This was done in \cite{krauswilczek,keskivakkuri}.
A more direct particle like interpretation of (\ref{integratedimpart}) was
given e.g. in \cite{hamilton}.
Similarly one can discuss the analytic properties of the
conjugate momentum $p_c^i$ in the inner gauge. We have
\begin{equation}\label{pcright}
p_c^i =\sqrt{2M\hat r} -\sqrt{2H\hat r} -
\hat r \log\left(\frac{1-\sqrt{\frac{2H}{ \hat r}}}
{1 -\frac{V}{\hat r} -\sqrt{\frac{2M}{\hat r}}+\frac{\hat p}{\hat
r}} \right).
\end{equation}
This time the solution of eq.(\ref{fundamentalM}) gives for $\hat p$
\begin{equation}\label{hatpi}
\frac{\hat p}{\hat r} =
\frac{\sqrt{\frac{2M}{\hat r}}~A\pm\sqrt{A^2 -(1-\frac{2M}{\hat
r})\frac{m^2}{\hat r^2}}}{1 -\frac{2 M}{\hat r}}
\end{equation}
where now
\begin{equation}
A=\frac{H-M}{\hat r} +\frac{m^2}{2\hat r^2}.
\end{equation}
Notice that for $m=0$ we have $p_c=p^i_c$. To describe an outgoing shell the
square root in (\ref{hatpi}) has to be taken with the positive sign.
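The statement that $p_c=p^i_c$ for $m=0$ can be made explicit: with $V=\hat p$ for an outgoing massless shell, $\hat p$ drops out of both logarithms, and a numerical comparison (with arbitrary $M<H$ and $\hat r>2H$) confirms the equality.

```python
import math

# For m = 0 (outgoing shell, V = p_hat) the momenta in the outer and
# inner gauges reduce to the same p_hat-independent expression.
# M, H, r are arbitrary test values with M < H and r > 2H.

M, H, r = 1.0, 1.2, 10.0
pc_outer = (math.sqrt(2.0 * M * r) - math.sqrt(2.0 * H * r)
            - r * math.log((r - math.sqrt(2.0 * H * r))
                           / (r - math.sqrt(2.0 * M * r))))
pc_inner = (math.sqrt(2.0 * M * r) - math.sqrt(2.0 * H * r)
            - r * math.log((1.0 - math.sqrt(2.0 * H / r))
                           / (1.0 - math.sqrt(2.0 * M / r))))
print(pc_outer, pc_inner)
```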
The whole point is the discussion of the sign of the term
\begin{equation}
1 -\frac{V}{\hat r} -\sqrt{\frac{2M}{\hat r}}+\frac{\hat p}{\hat r}=
R'(\hat r + \varepsilon)-\sqrt{R'^2(\hat r + \varepsilon)-1+\frac{2H}{\hat r}}
\end{equation}
where
\begin{equation}\label{W}
\sqrt{\frac{2M}{\hat r}}-\frac{\hat p}{\hat r}=
\sqrt{R'^2(\hat r + \varepsilon)-1+\frac{2H}{\hat r}}.
\end{equation}
For $m=0$ at $\hat r=2H$ the l.h.s. of eq.(\ref{W}) is negative, so that at $\hat r = 2H$
\begin{equation}\label{denominator}
1 -\frac{V}{\hat r} -\sqrt{\frac{2M}{\hat r}}+\frac{\hat p}{\hat r}=
R'(\hat r + \varepsilon)+\sqrt{R'^2(\hat r + \varepsilon)-1+\frac{2H}{\hat r}}
\end{equation}
is positive, where in eq.(\ref{denominator}) the square root on the
r.h.s. is understood as the positive determination of the square root.
The same happens for $m\neq 0$ provided $m<H-M$, which is the
condition for the shell to be able to reach $\hat r =+\infty$. For $\hat r<2H$,
since $-1+\frac{2H}{\hat r}>0$, such a term stays positive irrespective of the
sign of $R'$, until $R'$ diverges. This happens at $2M$ where $\hat p$ given
by
eq.(\ref{hatpi}) has a
simple pole with positive residue. Thus
below $2M$ eq.(\ref{denominator}) reverts to
\begin{equation}
1 -\frac{V}{\hat r} -\sqrt{\frac{2M}{\hat r}}+\frac{\hat p}{\hat r}=
R'(\hat r + \varepsilon)-\sqrt{R'^2(\hat r + \varepsilon)-1
+\frac{2H}{\hat r}}
\end{equation}
which is negative irrespective of the sign of $R'(\hat r + \varepsilon)$.
The conclusion is that
$p^i_c$ acquires the imaginary part $i\pi \hat r$ in the interval $(2M, 2H)$, as
happens for $p_c$.
\bigskip
\section{\bf The two shell reduced action}\label{twoshell}
From now on we shall work in the outer gauge.
We denote by ${\hat r}_1$ and ${\hat r}_2$ the coordinates of
the first and second shell, $\hat r_1 <\hat r_2$. The value of ${\cal
M}$ for $r <\hat r_1$ will be denoted by $M$, for $\hat r_1 <r<\hat
r_2$ by $M_0$ and for $r>\hat r_2$ will be denoted by $H$ as before. In
extending the treatment to two interacting shells we shall keep the
formalism as close as possible to the one developed in
Sect. \ref{oneshell}.
The most relevant difference is that in any gauge the intermediate mass $M_0$
and the total mass $H$ always intervene dynamically. We shall consider the
mass $M$ as a datum of the problem.
For the metric component $R$ we shall use
\begin{equation}\label{twoshellR}
R(r) = r+v_2 g(r-\hat r_2)+v_1 h(r-\hat r_1);~~~~
v_2= \frac{V_2}{R(\hat r_2)}; ~~~~v_1 =\frac{V_1}{R(\hat r_1)}
\end{equation}
where $h(x)$ has the same properties as $g(x)$ described in
Sect. \ref{oneshell} (actually we could use the same function).
Both $g$ and
$h$ vanish for positive argument and thus we are working in an outer
gauge according to the definition of Sect. \ref{oneshell}.
The action is given by the time integral of
\begin{equation}
I =\hat p_2 \dot{\hat r}_2+\hat p_1 \dot{\hat r}_1 +\int dr \pi_R \dot R
+ b.t.
\end{equation}
Breaking the integration range into the intervals from $r_0$ to $\hat r_1$ and
from $\hat r_1$ to $\hat r_2$, and using eq.(\ref{piRfromF}) for $\pi_R$ and
the same technique as in the one shell case, we reach the expression
\begin{eqnarray}\label{twoshellF}
&& \hat p_2 \dot{\hat r}_2+\hat p_1 \dot{\hat r}_1+
\frac{d}{dt}\int_{r_0}^{\hat r_2} F dr
-\dot M_0\int^{\hat r_2}_{\hat r_1}\frac{\partial F}{\partial M_0}dr
-\dot{\hat r}_2 F(\hat r_2-\varepsilon) - \left(\dot R\frac{\partial
F}{\partial R'}\right)(\hat r_2-\varepsilon)+\nonumber\\
&& \dot{\hat r}_1\Delta F(\hat r_1) + \Delta\left(\dot R \frac{\partial F}
{\partial R'}\right)(\hat r_1)\\
&& =\hat p_1 \dot{\hat r}_1 +\frac{d}{dt}\int_{r_0}^{\hat r_2} F
dr-\dot M_0\int^{\hat r_2}_{\hat r_1}\frac{\partial F}{\partial M_0}dr
+p_{c2}^0~ \dot{\hat r}_2+ \dot{\hat r}_1~\Delta F(\hat r_1) +
\Delta\left(\dot R \frac{\partial F}{\partial R'}\right)(\hat r_1) \nonumber
\end{eqnarray}
where $p_{c2}^0$ is given by eq.(\ref{pc}) with $M$ replaced by $M_0$, and
$\Delta$ stands for the jump
across the discontinuity at $\hat r_1$.
We recall that $F$ depends only on $R$,$R'$ and ${\cal M}$ and the
partial derivative w.r.t. these variables will have the usual
meaning. For the other quantities appearing in the calculations we
recall that the independent variables are $M$, $M_0$, $H$, $\hat r_1$,
$\hat r_2$ and when taking partial derivatives we shall consider the
remaining variables as fixed.
In eq.(\ref{twoshellF}) we have, using eq.(\ref{Fproperties}), denoting with
$\Delta$ the discontinuity across $\hat r_1$ and with the bar the average at
$\hat r_1$
\begin{eqnarray}
&& \dot{\hat r}_1\hat p_1+\dot{\hat r}_1\Delta F
+\Delta\left(\frac{\partial F}{\partial
R'}\dot R\right)= \\
&& =\dot{\hat r}_1\hat p_1+\dot{\hat r}_1\Delta F+
\overline{\frac{\partial F}{\partial
R'}}~\Delta\dot R + \overline{\dot R}\Delta\frac{\partial
F}{\partial R'}= \nonumber \\
&&\dot{\hat r}_1[\overline{R'}R( \Delta {\cal L} -
\Delta{\cal B})+\Delta R' R( \bar{\cal L} -\bar{\cal B})]+\Delta \dot R
~R(\bar{\cal L}-\bar{\cal B})
+\overline{\dot R}R( \Delta{\cal L} -\Delta{\cal B})=\nonumber
\end{eqnarray}
\begin{equation}
= \dot{\hat r}_1p_{c1} + \dot{\hat
r}_2\tilde p_{c2}+ \dot H (R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial
H}{\cal D} + \dot M_0 (R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial M_0}
{\cal D}\nonumber
\end{equation}
where
\begin{equation}\label{pc1}
T=\log v_2;~~~~
{\cal D} = R (\Delta{\cal L}- \Delta{\cal B});~~~~
p_{c1}= R'(\hat r_1+\varepsilon){\cal D};
\end{equation}
\begin{equation}\label{tildepc2}
\tilde p_{c2}= -(R'(\hat r_1+\varepsilon)-1){\cal D}+(R(\hat r_1)-\hat
r_1)\frac{\partial T}{\partial \hat r_2}{\cal D}=
\frac{d}{d\hat r_2}(R(\hat r_1)-\hat r_1){\cal D}
\end{equation}
having used
\begin{equation}
\Delta \dot R = - \dot{\hat r}_1 \Delta{R'}
\end{equation}
and
\begin{equation}
\bar{\dot R} = -\dot{\hat r}_1\frac{v_1}{2}-
\dot{\hat r}_2(\bar{R'}-1)+
(R(\hat r_1)-\hat r_1)\frac{dT}{dt}.
\end{equation}
Summing up the reduced action for the two shell system is given,
boundary terms included, by the time integral of
\begin{eqnarray}\label{twoshellreducedaction}
&& \dot{\hat r}_1p_{c1} + \dot{\hat
r}_2p_{c2}+ \dot H (R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial
H}{\cal D} + \dot M_0 (R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial M_0}
{\cal D}+ \nonumber\\
&& +\frac{d}{dt}\int_{r_0}^{\hat r_2} F dr-\dot M_0\int^{\hat r_2}_{\hat
r_1}\frac{\partial F}{\partial M_0}dr
+(-N^r\pi_L+N R R')|^{r_m}_{r_0}
\end{eqnarray}
where $p_{c2}= p_{c2}^0+\tilde p_{c2}$ with $p^0_{c2}$ given by
eq.(\ref{pc}) with $M$ replaced by $M_0$ and $\tilde p_{c2}$ by
eq. (\ref{tildepc2}).
With regard to eq.(\ref{twoshellreducedaction}) we notice that irrespective
of the gauge used both
terms in $\dot M_0$ and $\dot H$ appear in the action. Moreover $p_{c1}$ and
$p_{c2}$ depend both on $\hat r_1$ and $\hat r_2$.
\bigskip
\section{The two shell equations of motion}
From the reduced action (\ref{twoshellreducedaction}) we can derive
the equations of motion for the two shells. This is of some importance
in order to show the consistency of the scheme. We recall that action
(\ref{twoshellreducedaction}) has been derived in the outer gauge
i.e. $R(r)=r$ for $r>\hat r_2$ ($\hat r_2>\hat r_1$). While in the
one shell problem formulated in the outer gauge $\dot H$ does not
appear, in the two shell problem it is always present. Again we consider
$M={\rm const.}$ as a datum of the problem.
In the variational procedure we can vary $\hat r_1$, $\hat r_2$, $H$
and $M_0$ independently. We shall start varying $H$ but keeping all
other parameters fixed. For the sake of simplicity we shall deal
here with the massless case $m_1=m_2=0$. In Appendix C we give the
general derivation for $m_1$ and $m_2$ different from zero. The most important
fact that occurs when we
vary $H$ keeping $M_0$ fixed is that the terms proportional to
$\dot{\hat r}_1$ cancel in the variation. The simplifying feature of
the massless case is that $\Delta{\cal L}=0$, so that in eq.(\ref{pc1})
${\cal D}= -R\Delta {\cal B}$, since ${\cal B}$ is a function only of $R$ and
${\cal M}$ and not of $R'$, and thus
only of $R$ and $M_0$, $M$ being a datum of the problem. The
coefficient of $\dot{\hat r}_1$, taking into account the following relation,
easily
derived from (\ref{Fproperties}),
\begin{equation}\label{dFsdM}
\frac{\partial F}{\partial {\cal M}}
+R' R \frac{\partial {\cal B}}{\partial {\cal M}}=\frac{1}{W- R'}
\end{equation}
is found proportional to
\begin{equation}\label{dotr1coefficient}
\frac{\partial}{\partial H}(R'(\hat r_1+\varepsilon) {\cal D})-
(R'(\hat r_1+\varepsilon)-1) \frac{\partial T}{\partial H} {\cal D}-
(R(\hat r_1)-\hat r_1) \frac{\partial T}{\partial H}
\frac{\partial {\cal D}}{\partial R} R'(\hat r_1+\varepsilon)
\end{equation}
where we used
\begin{equation}
\frac{d R(\hat r_1)}{dt} = \dot{\hat r}_1 R'(\hat r_1+\varepsilon)
-\dot{\hat r}_2 (R'(\hat r_1+\varepsilon)-1)
+\frac{dT}{dt}(R(\hat r_1)-\hat r_1)
\end{equation}
and we took into account that $T$ does not depend on $\hat r_1$ and that on
the equations of motion $\dot H=\dot M_0=0$.
Now employing the relations
\begin{equation}\label{derivativerelations}
\frac{\partial R(\hat r_1)}{\partial H}=\frac{\partial T}{\partial H}
(R(\hat r_1)-\hat r_1);
~~~~\frac{\partial R'(\hat r_1+\varepsilon)}{\partial H}=
\frac{\partial T}{\partial H} (R'(\hat r_1+\varepsilon)-1)
\end{equation}
we see that expression (\ref{dotr1coefficient}) vanishes. Thus we are
left only with the $\dot{\hat r}_2$ terms. We know already that the
boundary term and the $p^0_{c2}$ term give the correct equation of
motion and thus we have simply to prove that the $\dot{\hat r}_2$
terms originating from
\begin{equation}
-\frac{d}{dt}[(R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial H}{\cal D}]
\end{equation}
cancel $\displaystyle{\frac{\partial \tilde{p}_{c2}}{\partial H}}$ i.e.
\begin{equation}\label{dotr2add}
\frac{\partial}{\partial H}[-(R'(\hat r_1+\varepsilon)-1){\cal D}+(R(\hat
r_1)-\hat r_1)\frac{\partial T}{\partial \hat r_2}{\cal D}]-
\frac{\partial}{\partial \hat r_2}[(R(\hat
r_1)-\hat r_1) \frac{\partial T}{\partial H}{\cal D} ]=0.
\end{equation}
Using relations (\ref{derivativerelations}) we find that eq.(\ref{dotr2add})
is satisfied. Such a result is expected, as the exterior shell
parameterized by $\hat r_2$ moves irrespective of the dynamics which
develops at lower values of $r$ until $\hat r_1$ crosses $\hat r_2$.
Now we vary $M_0$ keeping $H$ fixed. We have no boundary
term contribution because $M$ is also constant, and we find the
equation
\begin{eqnarray}
&& \dot{\hat r}_1\frac{\partial p_{c1}}{\partial M_0}+
\dot{\hat r}_2\frac{\partial p_{c2}}{\partial M_0}
-\frac{d}{dt}[(R(\hat r_1)-\hat r_1)\frac{\partial T}{\partial M_0}{\cal D}]
+\dot{\hat r}_2\frac{\partial F(\hat r_2-\varepsilon)}{\partial M_0}-
\dot{\hat r}_1\frac{\partial F(\hat r_1+\varepsilon)}{\partial M_0}+
\nonumber\\
&& +\int^{\hat r_2}_{\hat r_1}\dot R\frac{\partial\pi_R}{\partial M_0}dr
+\left.\dot R\frac{\partial^2F}{\partial M_0\partial R'}\right|^{\hat
r_2-\varepsilon}_{\hat r_1+\varepsilon}=0.
\end{eqnarray}
Using expression (\ref{Nrsolution}) for $N^r(r)$,
relation (\ref{NrNequations}) for $N(r)$ and relation (\ref{dFsdM})
we obtain with $\alpha = \sqrt{2H\hat r}/(\sqrt{2H\hat r}+\hat p)$
\begin{eqnarray}\label{firstdothatr1}
&& [R'(\hat r_1+\varepsilon)- W(\hat r_1+\varepsilon)]^{-1}
(\dot{\hat r}_1 + N^r(\hat r_1)-N(\hat r_1)) \nonumber\\
&& +\dot{\hat r}_2 \frac{\partial p^0_{c2}}{\partial M_0}+
\left[\dot{\hat r}_2 \frac{\partial F}{\partial M_0}+
\frac{\partial^2F}{\partial M_0\partial R'}\dot R\right]_{\hat
r_2-\varepsilon} +\alpha
\nonumber\\
&&-[R'(\hat
r_1+\varepsilon)- W(\hat r_1+\varepsilon)]^{-1}
\frac{\dot R}{W}(\hat
r_1+\varepsilon)
\nonumber\\
&& +\dot{\hat r}_2\frac{\partial \tilde p_{c2}}{\partial M_0}
-\left.\frac{\partial^2F}{\partial M_0\partial R'}\dot R\right|_{\hat
r_1+\varepsilon} -\dot{\hat r}_2\frac{\partial }{\partial \hat r_2} [(R(\hat
r_1)-\hat r_1)\frac{\partial T}{\partial M_0}{\cal D}] =0.
\end{eqnarray}
The first line is the equation of motion for $\hat r_1$; the second line
vanishes, being the equation of motion for $\hat r_2$ (see
eq.(\ref{generalMeqmotion}) of the
one shell problem), while exploiting the relation
\begin{equation}
\frac{\partial^2 F}{\partial {\cal M}\partial R'}+R\frac{\partial {\cal B}}
{\partial {\cal M}}=\frac{1}{W(W-R')}
\end{equation}
also derived from (\ref{Fproperties}),
and substituting for $\dot{\hat r}_2$ the value given by the equations of
motion derived previously, we find that the third line cancels
against the fourth. Thus we are left with the relation
\begin{equation}
\dot{\hat r}_1 + N^r(\hat r_1)-N(\hat r_1)=0
\end{equation}
which is the correct equation of motion for the interior shell.
\section{Exchange relations}
In the present section we consider in the above developed formalism the
intersection of two shells during their motion. In the massless case
we shall rederive the well known relations of Dray and 't Hooft \cite{DtH}
and Redmount \cite{redmount} between the
masses characteristic of
the three regions before and after the collision. We shall denote by
$t_0$ the instant of collision and by $\hat r_0$ the coordinate of the
collision. It is well known that some hypothesis has to be made on the
dynamics of the collision, which should tell us the masses of the
two shells after the collision. Once these are given the problem is reduced to
that of a two particle collision in special relativity. The formulas which we
develop
below give the intermediate mass $M'_0$ after the collision. The simplest
assumption is that of ``transparent crossing'', i.e. after the collision
shell $1$ carries along its unchanged
mass $m_1$, and the same holds for shell $2$. A further simplification occurs in
the massless case, i.e. when two massless shells go over into two massless
shells. We develop first the general formalism, which applies also when we
have a change in the masses during the collision and then specialize to
particular situations.
Just before the collision i.e. at $t=t_0-\varepsilon$ assuming that
$\hat r_1(t_0-\varepsilon)<\hat r_2(t_0-\varepsilon)$ we have for the
momentum $\pi_L$
\begin{eqnarray}
&& \pi_L(\hat r_1-0,t_0-\varepsilon)=\pi_L(\hat
r_1+0,t_0-\varepsilon)+\hat p_1= \\
&& =\pi_L(\hat
r_2-0,t_0-\varepsilon)+\hat p_1 = \pi_L(\hat
r_2+0,t_0-\varepsilon)+\hat p_1+\hat p_2 \nonumber
\end{eqnarray}
and after the collision i.e. at $t=t_0+\varepsilon$ we have with $\hat
r_2(t_0+\varepsilon) <\hat r_1(t_0+\varepsilon)$,
\begin{eqnarray}
&& \pi_L(\hat r_2-0,t_0+\varepsilon)=\pi_L(\hat
r_2+0,t_0+\varepsilon)+\hat p'_2= \\
&& \pi_L(\hat
r_1-0,t_0+\varepsilon)+\hat p'_2 = \pi_L(\hat
r_1+0,t_0+\varepsilon)+\hat p'_1+\hat p'_2 \nonumber
\end{eqnarray}
and at the collision $\hat r_2=\hat r_1=\hat r_0$. The main point in
treating the crossing is to realize that the sum $V_1+V_2$ has to be
continuous in time as it represents the discontinuity of $R'$ at the
time of crossing. In fact just before the crossing we have for $r<\hat
r_0$, choosing for simplicity in (\ref{twoshellR}) $g(x)=h(x)$,
\begin{equation}
R(r) = r+ \frac{(V_1+V_2)}{\hat r_0}g(r-\hat r_0)
\end{equation}
while immediately after we have
\begin{equation}
R(r) = r+ \frac{(V'_1+V'_2)}{\hat r_0}g(r-\hat r_0).
\end{equation}
If $V_1+V_2\neq V'_1+V'_2$ we would have a discontinuous evolution
of the metric for $r<\hat r_0$ which is incompatible with the equations
of motion of the gravitational field. As $M$ is unchanged during the
time evolution, $V_1+V_2= V'_1+V'_2$ implies that $\pi_L(\hat
r_1-0,t_0-\varepsilon)=\pi_L(\hat
r_2-0,t_0+\varepsilon)$ which combined with the fact that
$\pi_L(\hat r_2+0,t_0-\varepsilon)=\pi_L(\hat
r_1+0,t_0+\varepsilon)=\sqrt{2H \hat r_0}$ gives the further
relation $\hat p_1+\hat p_2=\hat p'_1+\hat p'_2$. Thus we have the same
kinematics as in the two particle collision in special relativity. In the case
of conservation of the masses $m_1$ and $m_2$ we have the same kinematics
as that of an elastic collision in $1+1$ dimensions. A discussion of special
cases with massive shells using a different formalism has been given in
\cite{nunez}.
The relation between $\pi_L(\hat r_1-0, t_0-\varepsilon)$
and $\pi_L(\hat r_2+0,t_0-\varepsilon)$
\begin{equation}
\hat r_0\sqrt{\left(1+\frac{V_1+V_2}{\hat
r_0}\right)^2-1+\frac{2M}{\hat r_0}}=\sqrt{2H\hat r_0}+\hat p_1+\hat p_2
\end{equation}
gives the equation
\begin{equation}\label{generalexchange}
m_1^2+m_2^2+2(V_1V_2-\hat p_1\hat p_2)+2(V_1+V_2)\hat r_0+2M\hat r_0=
2H \hat r_0+2(\hat p_1+\hat p_2)\sqrt{2H\hat r_0}
\end{equation}
while $M_0$ is given by
\begin{equation}\label{fundamentalM0}
H-M_0 = V_2+\frac{m_2^2}{2\hat r_0}-\hat p_2\sqrt{\frac{2H}{\hat r_0}}.
\end{equation}
From the knowledge of ${\hat p}'_1$ one derives $M_0'$ from
\begin{equation}\label{fundamentalM10}
H-M'_0 = V'_1+\frac{{m'_1}^2}
{2\hat r_0}-{ \hat p'}_1\sqrt{\frac{2H}{\hat r_0}}.
\end{equation}
For ``transparent'' crossing i.e. $m'_j=m_j$ and ${\hat p}'_j= {\hat p}_j$,
$M'_0$ is given by (\ref{fundamentalM10})
with ${\hat p}'_1={\hat p}_1$.
The massless case is most easily treated. In this case in order for
the shells to intersect they must move in opposite directions, the
exterior one with negative and the interior one with positive velocity.
The conservation of $V_1+V_2$ and $\hat p_1+\hat p_2$ in the massless
case has two solutions, which are physically equivalent, i.e. $\hat p'_1
= \hat p_1$, $\hat p'_2 = \hat p_2$. The two shells intersect only if
$\hat p_2 <0$ and $\hat p_1>0$ so that $V_2=-\hat p_2$ and $V_1=\hat
p_1$.
From eq.(\ref{fundamentalM0}) we have
\begin{equation}\label{hatp20}
\hat p_2 =-\frac{H-M_0}{1+\sqrt{\frac{2H}{\hat r_0}}}
\end{equation}
and from $\hat p'_1 = \hat p_1$
\begin{equation}\label{hatp10}
\hat p_1 =\frac{H-M'_0}{1-\sqrt{\frac{2H}{\hat r_0}}}.
\end{equation}
Eq.(\ref{generalexchange}) now becomes
\begin{equation}
H \hat r_0+\sqrt{2H\hat r_0}(\hat p_1+\hat p_2)= \hat
r_0(\hat p_1-\hat p_2)+ M \hat r_0-2\hat p_1\hat p_2
\end{equation}
and substituting here eqs.(\ref{hatp20},\ref{hatp10})
we obtain
\begin{equation}
H \hat r_0 + M \hat r_0-2 H M = M_0 \hat r_0- 2 M_0 M'_0 + M'_0 \hat r_0
\end{equation}
which is the well known Dray--'t Hooft and Redmount relation
\cite{DtH,redmount}.
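As a sanity check of the algebra above, the relation can be verified numerically. The following sketch (ours, not part of the derivation; the sample values of $H$, $M$, $M_0$, $\hat r_0$ are arbitrary) solves the Dray--'t Hooft relation for $M'_0$ and checks that the momenta of eqs.(\ref{hatp20}) and (\ref{hatp10}) satisfy the massless exchange equation.

```python
import math

# Independent numerical check (ours, arbitrary sample values): solve the
# Dray-'t Hooft relation for M'_0 and verify that the momenta of
# eqs. (hatp20) and (hatp10) satisfy the massless exchange equation.
def exchange_residual(H, M, M0, r0):
    # Dray-'t Hooft: H r0 + M r0 - 2 H M = M0 r0 - 2 M0 M'_0 + M'_0 r0
    Mp0 = (H * r0 + M * r0 - 2 * H * M - M0 * r0) / (r0 - 2 * M0)
    s = math.sqrt(2 * H / r0)
    p2 = -(H - M0) / (1 + s)    # eq. (hatp20)
    p1 = (H - Mp0) / (1 - s)    # eq. (hatp10)
    # massless exchange equation with V1 = p1, V2 = -p2
    lhs = H * r0 + math.sqrt(2 * H * r0) * (p1 + p2)
    rhs = r0 * (p1 - p2) + M * r0 - 2 * p1 * p2
    return lhs - rhs

print(abs(exchange_residual(H=1.0, M=0.3, M0=0.6, r0=5.0)) < 1e-9)
```

The residual vanishes identically in exact arithmetic, so only round-off remains for any admissible choice of the parameters.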
\section{Integrability of the form $p_{c1}d\hat r_1+p_{c2}d\hat r_2$ and the
two shell tunneling probability}
We are interested in computing the action for the two
shell system i.e. the integral
\begin{equation}\label{intpc1pc2}
\int_{t_i}^{t_f} dt (p_{c1}~\dot{\hat r}_1+ p_{c2}~\dot{\hat r}_{2})=
\int_{r_{1i}, r_{2i}}^{r_{1f}, r_{2f}} (p_{c1}~d\hat r_1+p_{c2}~d\hat r_2)
\end{equation}
on the solution of the equations of motion.
This is of interest in the computation of the semiclassical wave
function and the tunneling probability in the two shell case.
We shall prove explicitly that the form $p_{c1}d\hat r_1+p_{c2}d\hat
r_2$ is closed, i.e. integrable. Such a result
will be very useful in the actual computation and in showing the
independence of the results from the deformation $g$.
In fact we can reduce the computation to the integral on a simple path
on which the two momenta $p_{c1}$ and $p_{c2}$ take a simpler form.
The integrability result is similar to a theorem of analytical
mechanics \cite{whittaker,arnold} stating that, for a system with two degrees of
freedom in
the presence of a constant of motion, the form $p_1 dq_1+p_2 dq_2$, where
the $p_j$ are expressed in terms of the energy and of the constant of
motion, is a closed form. Here the setting is somewhat different, as we
are dealing with an effective action with two degrees of freedom,
arising from the action of a constrained system. We recall that for
$\hat r_1<\hat r_2$, $M_0$, like $H$, is a constant of motion by virtue of
the equations of motion of the gravitational field. The $p_{cj}$ are
functions of $H,M_0,\hat r_1,\hat r_2$ in addition to the fixed datum
of the problem $M$. The effective action takes the form
\begin{equation}
\int dt( p_{c1}\dot{\hat r}_1+p_{c2}\dot{\hat r}_2 + p_H \dot H+
p_{M_0}\dot M_0) + b.t.
\end{equation}
where $b.t.$ is the boundary term (see eq.(\ref{completeaction}))
which depends only on
$H, N^r$ and not
on $\hat r_j$. The value of $N^r$ is supplied by the solution of the
gravitational
equations of motion. Thus varying w.r.t. $\hat r_1$ we have
\begin{equation}
0 = -\frac{dp_{c1}}{dt}+\dot{\hat r}_1 \frac{\partial p_{c1}}{\partial
\hat r_1} +\dot{\hat r}_2\frac{\partial p_{c2}}{\partial\hat r_1}
+\frac{\partial p_H}{\partial
\hat r_1}\dot H +\frac{\partial p_{M_0}}{\partial
\hat r_1} \dot M_0.
\end{equation}
We show in Appendix A that the constraints combined with the
equations
for the gravitational field impose $0=\dot H=\dot M_0$
and thus we have
\begin{equation}\label{integrabilitycond}
\frac{\partial p_{c1}}{\partial \hat r_2}-\frac{\partial
p_{c2}}{\partial \hat r_1} =0.
\end{equation}
The meaning of the procedure is that the consistency of the
variational principle imposes eq.(\ref{integrabilitycond}). On the other hand
eq.(\ref{integrabilitycond}) can
also be verified from the explicit expressions of $p_{c1}$ and $p_{c2}$ given
in Sect. \ref{twoshell}.
If the crossing of the shells occurs outside the region $(2M, 2H)$ we
can choose the path so as to keep the two deformations non-overlapping in such a
region; then $p_{c1}$ takes the simple form (\ref{pc}) with $H$ substituted by
$M_0$ and $p_{c2}$ again the form (\ref{pc}) with $M$ substituted by $M_0$,
and thus one obtains for the imaginary part of
integral (\ref{intpc1pc2})
\begin{equation}\label{oneptimpart}
\frac{1}{2}\left((2 M_0)^2-(2 M)^2+(2 H)^2-(2 M_0)^2\right)=
\frac{1}{2}\left((2 H)^2-(2 M)^2\right)
\end{equation}
i.e. the two shells are emitted independently.
If the crossing occurs at $\hat r_0$ with $2M<\hat r_0<2H$, with e.g. $\hat
r_1<\hat r_2$ before the crossing, we choose the path $\hat r_1 = \hat
r_2-\varepsilon$ before the crossing and $\hat r_2 = \hat
r_1-\varepsilon$ after the crossing.
For clarity's sake we examine at first the problem of the crossing of a null
shell with a massive shell, even when the mass of the massive shell changes.
In order to reduce the integration path to the described one, one must first,
given the two initial points with $R(\hat r_{1i})< 2M $ and $R(\hat r_{2i})<2M
$, bring them together ($\hat r_1$ will denote the position of the massless
shell).
For $\hat r_{1i}< \hat r_{2i}$ this is done by integrating
along the line $\hat r_2 = {\rm const}$ with $\hat r_1$ varying from $\hat
r_{1i}$ to $\hat r_{2i}-\varepsilon$. The contribution is
\begin{equation}\label{coalesce}
\int_{\hat r_{1i}}^{\hat r_{2i}-\varepsilon} p_{c1} d\hat r_1=
\int_{\hat r_{1i}}^{\hat r_{2i}-\varepsilon} R'(\hat r_1+\varepsilon) ~ {\cal
D}~ d\hat r_1.
\end{equation}
But $R'(\hat r_1)$ is real and
\begin{equation}\label{calDm0}
{\cal D} = - R(\hat r_1) \left[\sqrt{\frac{2M_0}{R(\hat r_1)}}+
\log\left(\frac{1-\sqrt{\frac{2M_0}{R(\hat r_1)}}}
{1-\sqrt{\frac{2M}{R(\hat r_1)}}}\right)
-\sqrt{\frac{2M}{R(\hat r_1)}}\right].
\end{equation}
For $R(\hat r_1)<2M<2M_0$ eq.(\ref{calDm0}) is real and thus the contribution
(\ref{coalesce}) is real.
If on the other hand $\hat r_{2i}<\hat r_{1i}$ we integrate $p_{c1}$ from
$\hat r_{1i}$ to $\hat r_{2i}+\varepsilon$ keeping $\hat r_2$ fixed. This time
we recall that
\begin{equation}
p_{c1}= p^0_{c1}+\tilde p_{c1}
\end{equation}
where $p^0_{c1}$ is real because we are outside the interval $(2M_0, 2H)$, and
\begin{equation}\label{pc1m0}
\tilde p_{c1}= -(R'(\hat r_2+\varepsilon)-1) {\cal D}(\hat r_2)+
(R(\hat r_2)-\hat r_2)\frac{\partial T}{\partial \hat r_1} {\cal
D}(\hat r_2)
\end{equation}
with $T$ now given by $\log v_1$. Again all the terms in eq.(\ref{pc1m0}) are
real.
Thus we can start e.g. with $\hat r_{1i} = \hat r_{2i}-\varepsilon$.
The integration along the path $\hat r_1 = \hat r_2 -\varepsilon$ up to $\hat
r_0$ is very simple because
\begin{equation}
p_{c2}+p_{c1}=\hat r\left[\sqrt{\frac{2M}{\hat r}}-\sqrt{\frac{2H}{\hat r}}-
\log\frac{1+\frac{V_2-\hat p_2}{\hat r}-\sqrt{\frac{2H}{\hat
r}}}{1-\sqrt{\frac{2M}{\hat r}}}\right]
\end{equation}
whose complete discussion has already been done in
Sect. \ref{analyticmomenta}.
The ``gap'' is the interval
$(2M,2H)$. More difficult is the analysis for $\hat r_0<\hat r <2H$. Now
we have
\begin{equation}\label{p2primeplsup1prime}
p'_{c2}+p'_{c1}=\hat r\left[\sqrt{\frac{2M}{\hat r}}-\sqrt{\frac{2H}{\hat r}}-
\log\frac{1+\frac{V'_2-\hat p'_2}{\hat r}-\sqrt{\frac{2H}{\hat
r}}}{1-\sqrt{\frac{2M}{\hat r}}}\right]
\end{equation}
with
\begin{equation}
\hat {p}'_1=\frac{H-M'_0}{1-\sqrt{\frac{2H}{\hat r_1}}}.
\end{equation}
Moreover
\begin{equation}\label{hatp2prime}
\frac{\hat{p}'_2}{\hat r_2}
=\frac{AW(\hat r_2+\varepsilon)+R'(\hat r_2+\varepsilon)
\sqrt{A^2-(1-\frac{2M'_0}{\hat
r_2})\frac{m_2^2}{\hat{r}_2^2}}}{1-\frac{2M'_0 }{\hat r_2}}
\end{equation}
where
\begin{equation}
A = \frac{M'_0-M}{\hat r_2}-\frac{m_2^2}{2{\hat r}^2_2}
\end{equation}
and
\begin{equation}
W(\hat r_2+\varepsilon) = \sqrt{(1+\frac{\hat p'_1}{\hat
r_2})^2-1+\frac{2M_0}{\hat r_2}}.
\end{equation}
${\hat p}'_1$ diverges with positive residue at $\hat r_1 = 2 H$ and thus
also $W(\hat r_2+\varepsilon)$ and ${\hat p}'_2$ of eq.(\ref{hatp2prime})
diverge like $\hat p'_1$. As a consequence the analytic continuation of
$V'_2-{\hat p}'_2$ below $\hat r = 2 H$ is negative, which makes the numerator
of the argument of the logarithm in eq.(\ref{p2primeplsup1prime}) negative.
We now show that this numerator stays negative all the way for $\hat r<2H$.
The argument of the square root in (\ref{hatp2prime}) never vanishes
so that at $\hat r_2 = 2M'_0$ the numerator in (\ref{hatp2prime}) reduces to
\begin{equation}
A(W(\hat r_2+\varepsilon)+R'(\hat r_2+\varepsilon)).
\end{equation}
We can now explicitly compute $W(\hat r_2+\varepsilon)$ and
$R'(\hat r_2+\varepsilon)$ at $\hat r_2 =
2M'_0$. At such a point we have
\begin{equation}
R'(\hat r_2+\varepsilon) = 1+\frac{(H-M'_0)/\hat r_2}
{1-\sqrt{\frac{2H}{\hat r_2}}}=
\frac{1}{2}\left(1-\sqrt\frac{H}{M'_0}\right)
\end{equation}
while
\begin{equation}
W(\hat r_2+\varepsilon) = \sqrt\frac{2H}{2M'_0}+ \frac{1}{\hat r_2}
\frac{H-M'_0}{1-\sqrt{\frac{2H}{\hat r_2}}}=
\sqrt\frac{H}{M'_0}+ \frac{1}{2}
\left(\frac{\frac{H}{M'_0}
-1}{1-\sqrt{\frac{H}{M'_0}}}\right)=-\frac{1}{2}
\left(1-\sqrt\frac{H}{M'_0}\right)
\end{equation}
which means that there is no pole in $\hat{p}'_2$ at $\hat r
=2M'_0$. Thus $\hat{p}'_2$ is regular below $2H$ and $V_2'-\hat{p}'_2$
cannot change sign.
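The regularity argument can be confirmed numerically. The sketch below (ours; the sample values with $H>M'_0$ are arbitrary) evaluates $R'(\hat r_2+\varepsilon)$ and $W(\hat r_2+\varepsilon)$ at $\hat r_2=2M'_0$ from the expressions above and checks that $W=-R'$, so that the numerator of eq.(\ref{hatp2prime}) vanishes there.

```python
import math

# Numerical check (ours): at hat r_2 = 2 M'_0 the expressions above give
# R' = (1 - sqrt(H/M'_0))/2 and W = -(1 - sqrt(H/M'_0))/2, hence W + R' = 0
# and there is no pole of hat p'_2 at 2 M'_0.
def no_pole_check(H, Mp0):
    r2 = 2.0 * Mp0
    s = math.sqrt(2.0 * H / r2)            # equals sqrt(H/M'_0) here
    common = (H - Mp0) / r2 / (1.0 - s)
    Rp = 1.0 + common                      # R'(hat r_2 + eps) at r2 = 2 M'_0
    W = s + common                         # W(hat r_2 + eps) at r2 = 2 M'_0
    x = math.sqrt(H / Mp0)
    return (math.isclose(Rp, 0.5 * (1.0 - x), rel_tol=1e-12)
            and math.isclose(W, -0.5 * (1.0 - x), rel_tol=1e-12))

print(no_pole_check(H=1.0, Mp0=0.4))
```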
In this way we proved that for the crossing of a null shell and a
massive shell, into a null shell and another massive shell even with
a change of mass, the imaginary part of the integral (\ref{intpc1pc2}) is still
given by eq.(\ref{oneptimpart}).
The reasoning in the general case, with both $m_1$ and $m_2$ different from zero,
is dealt with similarly. It is sufficient to examine the case $\hat r_1<\hat
r_2$, the other case being now equivalent. We shall first discuss the
integration along the path $\hat r_1 +\varepsilon = \hat r_2$
up to $\hat r_0$.
For $\hat r_1+\varepsilon=\hat r_2=\hat r$ we have
\begin{equation}\label{pc2pluspc1}
p_{c2}+p_{c1}=\hat r\left[\sqrt{\frac{2M}{\hat r}}-\sqrt{\frac{2H}{\hat r}}-
\log\frac{1+\frac{V_2-\hat p_2 + V_1-\hat p_1}{\hat r}-\sqrt{\frac{2H}{\hat
r}}}{1-\sqrt{\frac{2M}{\hat r}}}\right]
\end{equation}
with
\begin{equation}
H-M_0 = V_2+\frac{m_2^2}{2\hat r_2}-\hat p_2 \sqrt{\frac{2H}{\hat r_2}}
\end{equation}
which as before makes $\hat p_2$ diverge at
$\hat r_2 = 2 H$. For $\hat p_1$ we have
\begin{equation}\label{genhatp1}
\frac{\hat p_1}{\hat r_1} = \frac{A_1 W(\hat r_1+0) + R'(\hat
r_1+0) \sqrt{A_1^2 - (1-\frac{2M_0}{\hat r_1})\frac{m_1^2}
{\hat r_1^2}}}{1-\frac{2M_0}{\hat r_1}}
\end{equation}
with
\begin{equation}
A_1= \frac{M_0-M}{\hat r_1}-\frac{m_1^2}{2{\hat r}^2_1}.
\end{equation}
$\hat p_2$ diverges with positive residue at $\hat r=2H$ and $\hat p_1$ also
diverges like $\hat p_2$ as
\begin{equation}\label{refeq}
R'(\hat r_1+0)=1+\frac{V_2}{\hat r_1}
\end{equation}
and $W(\hat r_1+0)$ as given by eq.(\ref{Wdefinition}) also diverges like $\hat
p_2$. Then
for $\hat r$ just below $2H$ we have both $V_2-\hat p_2<0$ and $V_1-\hat
p_1<0$. As before, the whole point is to prove that $\hat p_1$ is regular
below $2H$, i.e. does not diverge at $\hat r=2M_0$. To this end we must examine
the numerator of eq.(\ref{genhatp1}) at $\hat r = 2M_0$.
We have for $\hat r=2M_0$
\begin{equation}
W(2M_0+0) = \frac{1}{2}(\sqrt{\frac{H}{M_0}}-1)
+\frac{m_2^2}{8 M^2_0(\sqrt{\frac{H}{M_0}}+1)} >0
\end{equation}
as $H>M_0$.
With regard to $R'(\hat r_1+0)$, which is negative for $\hat r_1=
2H-\varepsilon$, it cannot
change sign for $2 M_0<\hat r<2 H$ because at the point where $R'(\hat r_1+0)$
vanishes
\begin{equation}
W(\hat r_1+0) = \sqrt{{R'}^2(\hat r_1+0)-1+\frac{2M_0}{\hat r_1}}
\end{equation}
would become imaginary while from $W(\hat r_1+0)=\sqrt{2H/\hat
r_1} +\hat p_2/\hat r_1$ we have that $W$ is real. Then at $\hat r = 2M_0$ the
numerator in eq.(\ref{genhatp1}) vanishes and $\hat p_1$
is regular all the way below $2H$. Thus we are in the same situation as
in the previously discussed case. Below $\hat r = 2 M$ the argument of the
logarithm in eq.(\ref{pc2pluspc1}) becomes positive again due to the change in
sign of the denominator $1-\sqrt{2 M/\hat r}$. The integration for $\hat r >
\hat r_0$ is treated simply by exchanging $m_1$ with $m_2$.
Finally we deal with the integral
\begin{equation}
\int_{\hat r_{2i}}^{\hat r_{1i}+\varepsilon} p_{c2}~ d\hat r_2
\end{equation}
which makes the two initial points coalesce. As in the previously discussed
case of one massless shell, the whole point is to prove that $\cal D$ is
real. In addition to the contribution of ${\cal B}$, which we already proved to
be real, we now have the contribution of ${\cal L}$. We have already proven
that at $r=\hat r_2-\varepsilon$
\begin{equation}\label{pc2ext}
R'(r) -\sqrt{R'(r)^2-1+\frac{2M_0}{R(r)}} =R'(r)-W(r)
\end{equation}
equals
\begin{equation}
1+\frac{V_2}{\hat r_2}-\frac{\hat p_2}{\hat
r_2}-\sqrt{\frac{2H}{\hat r_2}}
\end{equation}
which is negative for $\hat r_2 < 2H$ and thus in particular for
$\hat r_2 < 2M$.
In the interval $\hat r_1 <r <\hat r_2<2M $ the term (\ref{pc2ext}) cannot
change sign because $-1+ 2M_0/R$ is positive. Moreover we have
\begin{equation}\label{Lminus}
R'(\hat r_1-\varepsilon)-W(\hat r_1-\varepsilon)
= R'(\hat r_1+\varepsilon)-W(\hat r_1+\varepsilon)+ \frac{V_1}{R(\hat r_1)}
-\frac{\hat p_1}{R(\hat r_1)}~.
\end{equation}
But we proved after eq.(\ref{refeq}) that below $2 H$, the analytic
continuation of $V_1-\hat p_1$ is negative, implying that (\ref{Lminus}) is
negative, like $R'(\hat r_1+\varepsilon)-W(\hat r_1+\varepsilon)$. The outcome
is that the discontinuity of ${\cal L}$ at $\hat r_1$, being given by the
logarithm of a positive number, is real. This concludes
the proof in the case of the emission of two massive shells.
\section{Conclusions}
The main issue of the present paper is the treatment of two intersecting
shells of matter or radiation in general relativity,
in a formalism
apt to compute the tunneling amplitude for the emission of two shells.
In order to do so it is necessary to adopt a gauge which is more general than
the one used \cite{krauswilczek,FLW} in the treatment of a single shell. In
the usual treatments of
the tunneling amplitude for a single shell, a limit gauge is
adopted. Already at the level of a single shell we show that the complete
action contains a term in which the mass of the remnant black hole plays a
dynamical role. Such a term is unimportant if the variation of the action is
taken with respect to the total mass of the system, keeping the mass of the
remnant as a datum of the problem, but becomes essential if one varies the mass
of the remnant keeping the total mass of the system fixed as done e.g. in
\cite{parikhwilczek}. The reduced canonical momentum even in the single shell
instance is gauge dependent but the tunneling probability turns out to be
independent of such a choice. All the treatment is performed in the general
massive case, the massless one being a special case. The tunneling results
are
independent of the mass of the shell. The adoption of a general non-limit
gauge allows the extension of the formalism to two or more shells. In
this instance both the intermediate mass and the total mass become dynamical
variables in the sense that the reduced action contains terms proportional to
the time derivative of them. We show how to derive the equations of motion of
both the interior and exterior shell by varying the reduced action and this is
done both in the massless and in the more complicated massive case. With
regard to
the computation of the tunneling amplitude, it is possible to prove an
integrability theorem which allows one to deform the trajectory in coordinate
space to a contour which drastically simplifies the computation. Firstly one
proves in this way that the result is independent of the deformation
defining the gauge
introduced in Sect. \ref{theactionsec}, and secondly one
finds in the
general massive or massless case that the tunneling probability is given
simply by the product of the tunneling probabilities for the independent
emission of the two shells. Such a circumstance is interpreted in \cite{parikh} as
meaning that in this model no information is encoded in the radiation
emitted by the black hole.
\bigskip\bigskip
\section*{Acknowledgments}
One of us (P.M.) is grateful to M. Porrati and V. S. Rychkov for stimulating
discussions.
\bigskip
\section*{Appendix A}
For completeness we report here some formulas which are
useful in the text.
The constraints are given by \cite{polchinski,krauswilczek,FLW}
\begin{equation}
{\cal H}_r = \pi_R R'- \pi_L'L -\hat p~\delta(r-\hat r),
\end{equation}
\begin{equation}
{\cal H}_t = \frac{R R''}{L}+\frac{{R'}^2}{2
L}+\frac{L \pi_L^2}{2R^2}-\frac{R R' L'}{L^2}-
\frac{\pi_L\pi_R}{R}-\frac{L}{2}+ \sqrt{{\hat p}^2
L^{-2}+m^2}~\delta(r-\hat r).
\end{equation}
From the constraints it follows \cite{polchinski} that the quantity
\begin{equation}\label{Minvariant}
{\cal M} = \frac{\pi_L^2}{2 R}+\frac{R}{2} -\frac{R\,(R')^2}{2L^2}
\end{equation}
is constant in the regions of $r$ where there are no sources, since there
\begin{equation}\label{Mprime}
{\cal M}' = -\frac{R'}{L}{\cal H}_t -\frac{\pi_L}{RL}{\cal H}_r.
\end{equation}
The equations of motion for the gravitational field are \cite{FLW}
\begin{equation}\label{Ldot}
\dot L = N\left(\frac{L\pi_L}{R^2}-\frac{\pi_R}{R}\right)+
(N^r L)',
\end{equation}
\begin{equation}\label{Rdot}
\dot R = -\frac{N\pi_L}{R}+ N^r R',
\end{equation}
\begin{equation}\label{piLdot}
\dot\pi_L
=\frac{N}{2}\left[-\frac{\pi_L^2}{R^2}-\left(\frac{R'}{L}\right)^2+1
+\frac{2~\hat p^2~\delta(r-\hat r)}{L^3\sqrt{\hat p^2 L^{-2}+m^2}}\right]
-\frac{N'R' R}{L^2}+N^r\pi_L',
\end{equation}
\begin{equation}\label{piRdot}
\dot\pi_R
=N\left[\frac{L\pi_L^2}{R^3}-\frac{\pi_L\pi_R}{R^2}-\left(\frac{R'}{L}\right)'
\right]-\left(\frac{N'R}{L}\right)'+(N^r\pi_R)'.
\end{equation}
The equations of motion for $\dot{\hat r}$ and $\dot{\hat p}$ follow
from the above equations for $R$, $L$, $\pi_R$ and $\pi_L$ and the
constraints.
In fact using the relation
\begin{equation}\label{contR}
\frac{dR(\hat r)}{dt}=\dot R(\hat r+\varepsilon)
+R'(\hat r+\varepsilon) \dot{\hat r}=\dot R(\hat r-\varepsilon)
+R'(\hat r-\varepsilon) \dot{\hat r}
\end{equation}
we have
\begin{equation}
\dot{\hat r}\Delta R'+\Delta \dot R=0.
\end{equation}
Using now the equation of motion (\ref{Rdot}) and the constraints
\begin{equation}
\Delta R'(\hat r) = -\frac{V}{R};~~~~ \Delta \pi_L(\hat r)
=-\frac{\hat p}{L}
\end{equation}
where $V=\sqrt{\hat p^2 + m^2 L^2}$,
one obtains
\begin{equation}\label{hatrdot}
\dot{\hat r} =\frac{N(\hat r)\hat p}{L(\hat r)\sqrt{\hat p^2
+m^2 L^2(\hat r)}}- N^r(\hat r).
\end{equation}
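Equation (\ref{hatrdot}) can be rechecked by imposing the jump conditions in eq.(\ref{contR}) numerically. The following sketch (ours, with arbitrary sample values for the fields at the shell) solves the continuity relation for $\dot{\hat r}$ and compares with the closed form of eq.(\ref{hatrdot}).

```python
import math

# Numerical check (ours): combine the jump of eq. (Rdot) with the
# continuity relation (contR) and the constraint jumps
# Delta R' = -V/R, Delta pi_L = -hat p/L to recover eq. (hatrdot).
def rdot_from_jumps(N, Nr, L, R, p, m):
    V = math.sqrt(p**2 + m**2 * L**2)
    dRp = -V / R                      # Delta R'
    dpiL = -p / L                     # Delta pi_L
    # jump of eq. (Rdot): Delta Rdot = -N Delta pi_L / R + N^r Delta R'
    dRdot = -N * dpiL / R + Nr * dRp
    # continuity (contR): rdot * Delta R' + Delta Rdot = 0
    return -dRdot / dRp

vals = dict(N=1.3, Nr=0.7, L=1.1, R=2.0, p=0.5, m=0.25)
V = math.sqrt(vals['p']**2 + vals['m']**2 * vals['L']**2)
expected = vals['N'] * vals['p'] / (vals['L'] * V) - vals['Nr']   # eq. (hatrdot)
print(math.isclose(rdot_from_jumps(**vals), expected, rel_tol=1e-12))
```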
Using the relation
\begin{equation}\label{contL}
\frac{dL(\hat r)}{dt} = L'(\hat r+\varepsilon) \dot{\hat r}+\dot L(\hat
r+\varepsilon) =L'(\hat r-\varepsilon) \dot{\hat r}+\dot L(\hat
r-\varepsilon)= \overline{L'}\dot{\hat r} +\overline{\dot L}
\end{equation}
and
\begin{equation}\label{discpiRdot}
-\dot{\hat r} \Delta\pi_R = -N(\hat r)\frac{\Delta R'}{L(\hat r)}
-\frac{\Delta N'~R(\hat r)}{L(\hat r)}+ N^r(\hat r) \Delta\pi_R
\end{equation}
derived from (\ref{piRdot}) and
\begin{equation}
\dot{\hat p} = -\frac{dL(\hat r)}{dt}\Delta\pi_L-L(\hat
r)\frac{d\Delta\pi_L}{dt}
\end{equation}
one obtains
\begin{equation}\label{hatpdot}
\dot{\hat p} =\frac{N(\hat r) {\hat p}^2 \overline{{L}'}}
{L^2(\hat r)\sqrt{\hat p^2 + m^2 L^2(\hat r)}}-
\frac{\overline{{N}'}}{L(\hat r)} \sqrt{\hat p^2
+m^2 L^2(\hat r)}+ \hat p ~\overline{({N}^r)'}.
\end{equation}
The occurrence of the average values $\overline{L'}=
[L'(\hat r+\varepsilon)+ L'(\hat r -\varepsilon)]/2$ etc. in the
previous equation was pointed out
and discussed at the level of the variational principle in
\cite{FLW}. In the massless case $m=0$, however, using again
relations (\ref{contR},\ref{contL},\ref{discpiRdot}) one proves that
the r.h.s. of eq.(\ref{hatpdot}) has no discontinuity, i.e. there is
no need to take the average values in the r.h.s. of eq.(\ref{hatpdot}).
In fact let us consider the discontinuity
of the zero mass version of the r.h.s. of eq.(\ref{hatpdot})
\begin{equation}
\hat p(\frac{\eta N L'}{L^2}- \frac{\eta N'}{L}+{N^r}')
\end{equation}
$\eta$ being the sign of $\hat p$, i.e.
\begin{equation}\label{discontinuity}
\hat p(\frac{\eta N \Delta L'}{L^2}- \frac{\eta \Delta N'}{L}+
\Delta {N^r}').
\end{equation}
From eq.(\ref{Ldot}) and the equation of motion (\ref{hatrdot}) we have
\begin{equation}
\eta\frac{N}{L}\Delta L' =
\frac{N}{R}\Delta\pi_R-\frac{NL}{R^2}\Delta\pi_L-L\Delta {N^r}'
\end{equation}
and from eq.(\ref{piRdot})
\begin{equation}
\eta N\Delta\pi_R=R\Delta N'+N\Delta R'.
\end{equation}
Substituting the two into eq.(\ref{discontinuity}) and taking into account that
$\Delta\pi_L=-\hat p/L$ and $\Delta R'=-\eta \hat p/R$ we have that expression
(\ref{discontinuity}) vanishes.
This fact was discussed at the level of the variational principle in
\cite{LWF}.
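The cancellation can also be verified numerically. In the sketch below (ours) the values of $N$, $L$, $R$, $\hat p$ and of the jumps $\Delta N'$, $\Delta (N^r)'$ are arbitrary, while $\Delta\pi_R$ and $\Delta L'$ are fixed by the two relations above; the discontinuity then vanishes to round-off.

```python
# Numerical check (ours): impose the two relations derived from
# eqs. (Ldot) and (piRdot) and verify that expression (discontinuity)
# vanishes for arbitrary field values and arbitrary jumps dN', d(N^r)'.
N, L, R, p, eta = 1.4, 0.9, 2.3, 0.6, 1.0     # eta = sign of hat p
dNp, dNrp = 0.31, 0.77                        # Delta N', Delta (N^r)'
dpiL = -p / L                                 # Delta pi_L
dRp = -eta * p / R                            # Delta R'
dpiR = (R * dNp + N * dRp) / (eta * N)        # from eta N Delta pi_R = R dN' + N dR'
dLp = (L / (eta * N)) * (N * dpiR / R - N * L * dpiL / R**2 - L * dNrp)
disc = p * (eta * N * dLp / L**2 - eta * dNp / L + dNrp)
print(abs(disc) < 1e-12)
```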
From the equations of motion (\ref{Ldot},\ref{Rdot},\ref{piLdot},
\ref{piRdot}) it follows that in the region where there are no sources
\begin{equation}
\frac{d{\cal M}}{dt} = -N\frac{R'}{L^3}{\cal H}_r- N^r
\frac{R'}{L}{\cal H}_t - N^r \frac{\pi_L}{RL} {\cal H}_r
\end{equation}
i.e. combining with (\ref{Mprime}), we have that in the region free
of sources ${\cal M}$ is constant both in $r$ and in $t$.
\bigskip
\section*{Appendix B}
Equation of motion for $\hat r$.
1) Outer gauge
In this case
\begin{equation}
p_c = R(\Delta {\cal L}-\Delta {\cal B}) = \hat r\left(-{\cal L}(\hat
r-\varepsilon) -\sqrt{\frac{2H}{\hat r}}+\sqrt{\frac{2M}{\hat r}}+
\log\left(1-\sqrt{\frac{2M}{\hat r}}\right)\right).
\end{equation}
Using
\begin{equation}
\frac{\partial \hat p}{\partial H}=\left(1+\frac{\hat p}
{\sqrt{2H \hat r}}\right)\left(\frac{\hat p}{V}- \sqrt{\frac{2H}
{\hat r}}\right)^{-1}
\end{equation}
and
\begin{equation}
\frac{\partial {\cal L}}{\partial R'}(\hat r-\varepsilon) = -\frac{1}{W(\hat
r-\varepsilon)} = -\left(\sqrt{\frac{2H}{\hat r}}+\frac{\hat p}
{\hat r}\right)^{-1}
\end{equation}
we have
\begin{equation}
\frac{\partial p_c}{\partial H} = -\sqrt{\frac{\hat r}{2H}}+\sqrt{\frac{\hat
r}{2H}} \frac{\hat p}{V}\frac{1}{\frac{\hat p}{V} -\sqrt{\frac{2H}{\hat r}}}=
\left(\frac{\hat p}{V}-\sqrt{\frac{2H}{\hat r}}\right)^{-1}
\end{equation}
from which
\begin{equation}
\dot{\hat r} = N(r_m) \left(\frac{\hat p}{V}-\sqrt{\frac{2H}{\hat r}}\right).
\end{equation}
2) Inner gauge
This time we have
\begin{equation}
H-M = V -\frac{m^2}{2\hat r} - \hat p \sqrt{\frac{2M}{\hat r}}
\end{equation}
and
\begin{equation}
p^i_c = \hat r\left({\cal L}(\hat
r+\varepsilon) -\sqrt{\frac{2H}{\hat r}}+\sqrt{\frac{2M}{\hat r}}
-\log\left(1-\sqrt{\frac{2H}{\hat r}}\right)\right).
\end{equation}
Using
\begin{equation}
\frac{\partial \hat p}{\partial M}=- \left(1-\frac{\hat p}
{\sqrt{2M \hat r}}\right)\left(\frac{\hat p}{V}-
\sqrt{\frac{2M}{\hat r}}\right)^{-1}
\end{equation}
and
\begin{equation}
\frac{\partial {\cal L}}{\partial R'}(\hat r+\varepsilon) = -\frac{1}{W(\hat
r+\varepsilon)} = -\left(\sqrt{\frac{2M}{\hat r}}-\frac{\hat p}
{\hat r}\right)^{-1}
\end{equation}
we have
\begin{equation}
\frac{\partial p^i_c}{\partial M} =\sqrt{\frac{\hat r}{2 M}}
\left[1- \frac{\hat p}{V}
\left(\frac{\hat p}{V}-\sqrt{\frac{2M}{\hat r}}\right)^{-1}\right] =
-\left(\frac{\hat p}{V}-\sqrt{\frac{2M}{\hat r}}\right)^{-1}
\end{equation}
i.e.
\begin{equation}
\dot {\hat r} = \left(\frac{\hat p}{V} -
\sqrt{\frac{2M}{\hat r}}\right)N(r_0).
\end{equation}
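Both simplifications above are purely algebraic: abbreviating $u = \hat p/V$ and
$s = \sqrt{2H/\hat r}$ (outer gauge) or $s = \sqrt{2M/\hat r}$ (inner gauge),
they reduce to $-1/s + (u/s)/(u-s) = 1/(u-s)$ and
$(1/s)\left(1 - u/(u-s)\right) = -1/(u-s)$, which can be checked on exact
rational test data:

```python
from fractions import Fraction as F
from itertools import product

# u stands for p_hat/V; s for sqrt(2H/r) (outer) resp. sqrt(2M/r) (inner)
for u, s in product([F(1, 2), F(3), F(-5, 7)], repeat=2):
    if s == 0 or u == s:
        continue
    # outer gauge: -sqrt(r/2H) + sqrt(r/2H) (p/V) / (p/V - sqrt(2H/r))
    assert -1 / s + (1 / s) * u / (u - s) == 1 / (u - s)
    # inner gauge: sqrt(r/2M) [1 - (p/V) / (p/V - sqrt(2M/r))]
    assert (1 / s) * (1 - u / (u - s)) == -1 / (u - s)
```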
\section*{Appendix C}
In this Appendix we derive the equations of motion for the exterior and
interior shell from the reduced action (\ref{twoshellreducedaction}) for
masses $m_1$ and $m_2$ different from zero.
First we consider the variation $\delta H\neq 0$ and $\delta M_0=0$. Under
such a variation the coefficient of the $\dot{\hat r}_1$ term is given, using
\begin{equation}
\frac{\partial R}{\partial H}=(R-\hat r_1)\frac{dT}{dH};~~~~
\frac{\partial R'}{\partial H}=(R'-1)\frac{dT}{dH}
\end{equation}
by $\displaystyle{(W(\hat r_1+\varepsilon) W(\hat
r_1-\varepsilon))^{-1}\frac{dT}{dH}}$
multiplied by
\begin{eqnarray}\label{massivedothatr1}
&& R'\left[- W(\hat r_1-\varepsilon) (R'-1)+ W(\hat r_1+\varepsilon) (R'-1
+ ( R - \hat r_1)\frac{\partial v_1}{\partial R}+(
R'- 1)\frac{\partial v_1}{\partial R'})\right]- \nonumber\\
&& (R- \hat r_1)\left[-W(\hat r_1-\varepsilon )R''+W(\hat r_1+\varepsilon)
(R''+ \frac{\partial v_1}{\partial R}R'+
\frac{\partial v_1}{\partial R'}R'')\right]
\end{eqnarray}
where $R,R',R''$ stand for $R(\hat r_1),R'(\hat r_1+\varepsilon),R''(\hat
r_1+\varepsilon)$, respectively.
From
\begin{equation}
W(\hat r_1-\varepsilon)=W(\hat r_1+\varepsilon) +\frac{\hat p_1}{R(\hat
r_1)}
\end{equation}
we find
\begin{equation}\label{dv1sdR1}
W(\hat r_1+\varepsilon) \frac{\partial v_1}{\partial R'(\hat r_1+\varepsilon)}
=\frac{\hat p_1}{R(\hat r_1)}
\end{equation}
which, substituted into eq.(\ref{massivedothatr1}), makes it vanish. This is an
expected result, as the motion of the exterior shell must not depend on the
dynamics which develops at smaller radii, but only on the two masses $H$ and $M_0$.
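The vanishing of expression (\ref{massivedothatr1}) can also be verified
mechanically: treating $W(\hat r_1+\varepsilon)$, $R$, $R'$, $R''$, $\hat r_1$,
$\hat p_1$ and $\partial v_1/\partial R$ as independent quantities and imposing
only the two relations above, the expression is identically zero. A sketch with
arbitrary rational test values:

```python
from fractions import Fraction as F

def coefficient(Wp, R, Rp, Rpp, r1, p1, dv_dR):
    """Expression (massivedothatr1); Wp = W(r1+eps), Rp = R', Rpp = R''."""
    Wm = Wp + p1 / R            # W(r1-eps) = W(r1+eps) + p1/R(r1)
    dv_dRp = p1 / (R * Wp)      # W(r1+eps) dv1/dR'(r1+eps) = p1/R(r1)
    return (Rp * (-Wm * (Rp - 1)
                  + Wp * (Rp - 1 + (R - r1) * dv_dR + (Rp - 1) * dv_dRp))
            - (R - r1) * (-Wm * Rpp
                          + Wp * (Rpp + dv_dR * Rp + dv_dRp * Rpp)))

# arbitrary rational test values; the coefficient vanishes identically
assert coefficient(F(2), F(5), F(3), F(7, 2), F(1), F(4), F(9, 5)) == 0
```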
With regard to the terms proportional to $\dot{\hat r}_2$, in addition to
\begin{equation}
\dot{\hat r}_2 \frac{\partial p^0_{c2}}{\partial H}
\end{equation}
we have the term given by
\begin{equation}
\frac{1}{W(\hat r_1+\varepsilon) W(\hat r_1-\varepsilon)} \frac{dT}{dH}
\end{equation}
multiplied by
\begin{eqnarray}
&&(-(R'-1)+(R-\hat r_1)\frac{\partial T}{\partial \hat r_2})\bigg[
-W(\hat r_1-\varepsilon)(R'-1)+ \\
&& \left.W(\hat r_1+\varepsilon)(R'-1+\frac{\partial v_1}{\partial R}
(R-\hat r_1) + \frac{\partial v_1}{\partial R'}(R'-1))\right]- \nonumber\\
&& (R-\hat r_1)\bigg[-W(\hat r_1-\varepsilon)(\frac{dT}{d\hat r_2}(R'-1)-R'')+
W(\hat r_1+\varepsilon)(\frac{dT}{d\hat r_2}(R'-1)-R''+ \nonumber\\
&& \frac{\partial v_1}{\partial R}(-R'+1+\frac{dT}{d\hat r_2}( R-\hat r_1))+
\frac{\partial v_1}{\partial R'}(-R''+\frac{dT}{d\hat r_2}(R'-1)))\bigg]
\nonumber
\end{eqnarray}
where again $R,R',R''$ stand for $R(\hat r_1),R'(\hat
r_1+\varepsilon),R''(\hat r_1+\varepsilon)$.
As a consequence of eq.(\ref{dv1sdR1}) the above expression vanishes.
Adding the contribution of the boundary term we have
\begin{equation}
\dot{\hat r}_2 \frac{\partial p^0_{c2}}{\partial H}-N(r_m) =0
\end{equation}
which is the single shell equation of motion (\ref{outereqmotion}).
The equation of motion of the interior shell is obtained from the variation
$\delta H=0$, $\delta M_0\neq 0$. In this case there is no contribution from the
boundary terms. For the coefficient of $\dot{\hat r}_1$ we
obtain
\begin{eqnarray}\label{dotr1massive}
&& -\frac{1}{W(\hat r_1+\varepsilon)}+ R' R
\left[-\frac{1}{W(\hat r_1+\varepsilon)} \frac{dT}{d M_0}(R'-1)+\right.\\
&&\left.\frac{1}{W(\hat r_1-\varepsilon)}\left(\frac{dT}{d M_0}(R'-1+
\frac{\partial v_1}{\partial R}(R-\hat r_1)+ \frac{\partial v_1}{\partial
R'}(R'-1))+
\frac{\partial v_1}{\partial M_0}\right)\right]- \nonumber\\
&& (R-\hat r_1)\frac{dT}{d M_0} R \left[-\frac{R''}{W(\hat
r_1+\varepsilon)}+\frac{1}{W(\hat r_1-\varepsilon)}(R''
+\frac{\partial v_1}{\partial R} R'+
\frac{\partial v_1}{\partial R'}R'')\right].\nonumber
\end{eqnarray}
Here we have used the relation (see eq.(\ref{Fproperties}))
\begin{equation}
R'\frac{\partial^2 F}{\partial {\cal M}\partial R'}-
\frac{\partial F}{\partial {\cal M}}= -\frac{1}{W}~.
\end{equation}
Substituting in eq.(\ref{dotr1massive}) the relation
\begin{equation}
\frac{\partial v_1}{\partial M_0}=\frac{\hat p_1 (1+\frac{\hat p_1}
{R ~W(\hat r_1+
\varepsilon)})}{R(R'(\hat r_1+\varepsilon) \hat p_1 - V_1 W(\hat
r_1+\varepsilon))}
\end{equation}
the coefficient of $\dot{\hat r}_1$ becomes simply
\begin{equation}
\frac{1}{\frac{\hat p_1}{V_1}R'(\hat r_1+\varepsilon)-W(\hat r_1+\varepsilon)}
\end{equation}
which is the nonzero-mass generalization of the
coefficient of $\dot{\hat r}_1$ appearing in eq.(\ref{firstdothatr1}).
Then using eqs.(\ref{NrNequations},\ref{Nrsolution}) for $N(\hat r_1)$
and $N^r(\hat r_1)$
we have
\begin{eqnarray}
& & 0=\left[\frac{\hat p_1}{V_1}R'(\hat r+\varepsilon)-W(\hat
r+\varepsilon)\right]^{-1}
\left(\dot{\hat r}_1 -\frac{\hat p_1}{V_1}N(\hat r_1)+N^r(\hat r_1)
\right) \nonumber\\
& & +\dot{\hat r}_2\left[\frac{\partial p^0_{c2}}{\partial
M_0}-\frac{\partial^2
F}{\partial M_0\partial R'}\frac{V_2}{\hat r_2}+\frac{\partial F}
{\partial M_0}\right]
+\alpha \\
& & - \left[\frac{\hat p_1}{V_1}R'(\hat r+\varepsilon)-W(\hat r
+\varepsilon)\right]^{-1}\frac{\hat p_1}{V_1}\frac{\dot R(\hat
r_1+\varepsilon)} {W(\hat r+\varepsilon)} \nonumber\\
& & +\dot{\hat r}_2\frac{\partial \tilde p_{c2}}{\partial M_0}-\dot R(\hat
r_1+\varepsilon)\frac{\partial^2 F}{\partial M_0\partial R'}(\hat
r_1+\varepsilon) -
\dot{\hat r}_2 \frac{d}{d\hat r_2}\left[(R(\hat r_1)-\hat r_1)\frac{dT}{dM_0}
{\cal D}\right]
\nonumber
\end{eqnarray}
where $\alpha = \sqrt{2H\hat r}/(\sqrt{2H\hat r}+\hat p)$.
The first line is simply the equation of motion for $\hat r_1$; the second line
vanishes, being the equation of motion for $\hat r_2$ under the variation of
$M_0$ (see eq.(\ref{generalMeqmotion})), while the sum of the third and fourth
lines vanishes by virtue of relation (\ref{dv1sdR1}).
\eject
\section{Introduction}
The general Burnside problem is among the most influential problems in
combinatorial group theory. It asks whether a finitely generated group is
finite if every element has finite order. The general Burnside problem
was answered negatively by Golod~\cite{Gol64}. The first explicit
counter-examples were constructed in~\cite{Al72,Gri80,GS83}. Among
these counter-examples is the Grigorchuk group ${\mathfrak G}$ which
is a finitely generated self-similar group. The group ${\mathfrak G}$
is not finitely presented~\cite{Gri99}, but it admits a recursive
presentation which can be described in finite terms by the action of a
finitely generated monoid of substitutions on
finitely many relations~\cite{Lys85}. These recursive presentations
are nowadays known as \emph{finite $L$-presentations}~\cite{Gri99} (or
\emph{endomorphic presentations}~\cite{Bar03}) in honor of Lys{\"e}nok's
work in~\cite{Lys85} for the Grigorchuk group; see~\cite{Bar03} or
Section~\ref{sec:SelfSim} for a definition.\smallskip
Finite $L$-presentations allow computer algorithms to be employed
in the investigation of the groups they define. A first algorithm
for finitely $L$-presented groups is the nilpotent quotient
algorithm~\cite{Har08,BEH08}. Recently, further algorithms for finitely
$L$-presented groups were developed~\cite{Har10,Har11,Har11b}. For
instance, in~\cite{Har11}, a coset enumeration process for finitely
$L$-presented groups was described. This is an algorithm which, given a
finite generating set of a subgroup of a finitely $L$-presented group,
computes the index of the subgroup in the finitely $L$-presented group
provided that this index is finite. Usually index computations in
self-similar groups have involved lots of tedious calculations (e.g.,
finding an appropriate quotient of the self-similar group; computing
the index of the subgroup in this quotient; followed by a proof that
the obtained index is correct; see, for instance,~\cite[Section~4]{BG02}
or~\cite[Chapter~\Rom{8}]{Har00}). The coset enumerator in~\cite{Har11}
makes this process completely automatic and thus shows the
significance of finite $L$-presentations in the investigation of
self-similar groups. Moreover, coset enumeration allows one to
compute the number of low-index subgroups of finitely $L$-presented
groups~\cite{Har11}.\smallskip
We demonstrate the application of the algorithms for finitely
$L$-presented groups in the investigation of a class of self-similar
groups $\Gamma_p$ for $3 \leq p \leq 11$. The group $\Gamma_3$ was introduced
in~\cite{FG85}. It is a self-similar group with an intermediate word
growth~\cite{FG85,FG91,BP09}. The groups $\Gamma_p$, with $p>3$, were
introduced in~\cite{Gri00}. They are known as \emph{Fabrykowski-Gupta
groups}. Their abelianization $\Gamma_p / \Gamma_p' \cong {\mathbb Z}_p \times {\mathbb Z}_p$
was computed in~\cite{Gri00}. Moreover, for $p\geq 5$, the groups $\Gamma_p$
are just-infinite, regular branch groups~\cite{Gri00}. The congruence
subgroups of $\Gamma_p$, for primes $p>3$, were studied in~\cite{Su07}; see
also~\cite{FAZR11}. The lower central series sections $\gamma_c\Gamma_3 /
\gamma_{c+1}\Gamma_3$ have been computed entirely in~\cite{Bar05} while,
for $p>3$, parts of the lower central series sections $\gamma_c\Gamma_p /
\gamma_{c+1} \Gamma_p$ have been computed in~\cite{BEH08}. So far, little
more is known on the groups $\Gamma_p$.\smallskip
For $p \geq 3$, the Fabrykowski-Gupta group $\Gamma_p$ admits a finite
$L$-presentation~\cite{BEH08}. We demonstrate how the implementations
of the algorithms for finitely $L$-pre\-sen\-ted groups allow us to
investigate the groups $\Gamma_p$ for $3\leq p\leq 11$ in detail. For
instance, we demonstrate the application of our algorithm
\begin{itemize}\addtolength{\itemsep}{-1ex}
\item to compute the isomorphism type of the lower central series sections
$\gamma_c\Gamma_p / \gamma_{c+1} \Gamma_p$ using improved (parallel) methods
from~\cite{BEH08,Har08}.
\item to compute the isomorphism type of the Dwyer quotients $M_c(\Gamma_p)$
of their Schur multiplier using the methods from~\cite{Har10}.
\item to determine the number of low-index subgroups of the groups $\Gamma_p$
using the methods from~\cite{Har11}.
\item to compute the isomorphism type of the sections $\Gamma_p^{(c)} /
\Gamma_p^{(c+1)}$ of the derived series combining the methods
from~\cite{Har11b} and~\cite{BEH08,Har08}.
\end{itemize}
We briefly sketch the algorithms available for finitely $L$-presented groups.
Moreover, we compare our experimental results
for the Fabrykowski-Gupta groups $\Gamma_p$ with those results for
the Grigorchuk group ${\mathfrak G}$. The group ${\mathfrak G}$ has been investigated
for decades now. Even though a lot is known about its structure, various questions still remain open~\cite{Gr05}. For further details
on the Grigorchuk group ${\mathfrak G}$, we refer to~\cite[Chapter~\Rom{8}]{Har00}.
\section{Self-Similar Groups}\Label{sec:SelfSim}
A self-similar group can be defined by its recursive action on a regular
rooted tree: identify the $d$-regular rooted infinite tree ${\mathcal T}_d$ with the free
monoid ${\mathcal X}^*$ over the alphabet ${\mathcal X} = \{0,\ldots,d-1\}$. Then a self-similar
group can be defined as follows:
\begin{definition}
A group $G$ acting faithfully on the free monoid ${\mathcal X}^*$ is
\emph{self-similar} if for each $g \in G$ and $x \in {\mathcal X}$ there exist
$h\in G$ and $y \in {\mathcal X}$ so that
\begin{equation}
(xw)^g = y\,w^h \quad\textrm{for each}\quad w \in {\mathcal X}^*.\Label{eqn:SelfSimAct}
\end{equation}
\end{definition}
It suffices to specify the self-similar action in
Eq.~(\ref{eqn:SelfSimAct}) on a generating set of a group. For
instance, the Grigorchuk group ${\mathfrak G} = \langle a,b,c,d\rangle$ can be defined
as a subgroup of the automorphism group of the rooted binary tree ${\mathcal
T}_2 = \{0,1\}^*$ by its self-similar action:
\[
\begin{array}{rcl@{\hspace{3cm}}rcl}
(0\,w)^a&=&1\,w &(1\,w)^a&=&0\,w\\
(0\,w)^b&=&0\,w^a &(1\,w)^b&=&1\,w^c\\
(0\,w)^c&=&0\,w^a &(1\,w)^c&=&1\,w^d\\
(0\,w)^d&=&0\,w &(1\,w)^d&=&1\,w^b\,.
\end{array}
\]
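This recursion is straightforward to implement, which makes it possible to
check relations of ${\mathfrak G}$ on all tree words of a fixed length. The following
sketch (tree words as strings over $\{0,1\}$) verifies, among others, the
relator $(ad)^4$ from Lys{\"e}nok's presentation below:

```python
from itertools import product

def act(g, w):
    """Apply a generator g in 'abcd' of G to the tree word w over '01'."""
    if not w:
        return w
    x, rest = w[0], w[1:]
    if g == 'a':
        return ('1' if x == '0' else '0') + rest
    if g == 'b':
        return x + act('a' if x == '0' else 'c', rest)
    if g == 'c':
        return x + act('a' if x == '0' else 'd', rest)
    return x + (rest if x == '0' else act('b', rest))   # g == 'd'

def act_word(u, w):
    # right action: w^(gh) = (w^g)^h, so apply the letters of u in order
    for g in u:
        w = act(g, w)
    return w

words = [''.join(t) for n in range(7) for t in product('01', repeat=n)]
for rel in ('aa', 'bb', 'cc', 'dd', 'bcd', 'adadadad'):
    assert all(act_word(rel, w) == w for w in words)
assert act_word('ad', '100') != '100'   # ad is nontrivial, but (ad)^4 is not
```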
The Fabrykowski-Gupta group $\Gamma_3$ is another example of a self-similar
group. It was introduced in~\cite{FG85} as a group with
an intermediate word growth~\cite{FG91,BP09}. The group $\Gamma_3$ was
generalized in~\cite{Gri00} to a class of self-similar groups $\Gamma_d$
acting on the $d$-regular rooted tree:
\begin{definition}
For $d \geq 3$, the \emph{Fabrykowski-Gupta group} $\Gamma_d = \langle a,r \rangle$
is a self-similar group acting faithfully on the $d$-regular rooted tree
${\mathcal T}_d = \{0,\ldots,d-1\}^*$ by
\[
\begin{array}{rcll}
(x\,w)^a&=&x+1\pmod d\,w,&\textrm{for }0\leq x\leq d-1\\[0.75ex]
(0\,w)^r&=&0\,w^a, & \\
(x\,w)^r&=&x\,w, &\textrm{for }1\leq x< d-1\\
(d-1\,w)^r&=&d-1\,w^r. &
\end{array}
\]
\end{definition}
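The recursive action of $\Gamma_d$ can be implemented directly as well; the
following sketch (tree words as tuples of integers) checks that $a$ has order
dividing $d$ on every finite level and illustrates the recursion for $r$:

```python
from itertools import product

def act(g, w, d):
    """Apply a generator g in {'a', 'r'} of Gamma_d to the tree word w,
    a tuple with entries in {0, ..., d-1}."""
    if not w:
        return w
    x, rest = w[0], w[1:]
    if g == 'a':
        return ((x + 1) % d,) + rest               # rooted d-cycle
    if x == 0:                                     # g == 'r'
        return (0,) + act('a', rest, d)
    if x == d - 1:
        return (d - 1,) + act('r', rest, d)
    return w                                       # r fixes middle branches

d = 5
for w in (t for n in range(4) for t in product(range(d), repeat=n)):
    v = w
    for _ in range(d):                             # alpha^d is a relator
        v = act('a', v, d)
    assert v == w
assert act('r', (4, 0, 0), d) == (4, 0, 1)
```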
The groups ${\mathfrak G}$ and $\Gamma_d$ admit a finite $L$-presentation.
Here, a \emph{finite $L$-presentation} is a group
presentation of the form
\begin{equation}
\Big\langle {\mathcal X}~\Big|~{\mathcal Q} \cup \bigcup_{\sigma \in \Phi^*} {\mathcal R}^\sigma \Big\rangle,\Label{eqn:LPresApp}
\end{equation}
where ${\mathcal X}$ is a finite alphabet, ${\mathcal Q}$ and ${\mathcal R}$ are finite subsets of
the free group $F$ over ${\mathcal X}$, and $\Phi^*$ denotes the monoid of
endomorphisms which is generated by the finite set $\Phi\subseteq {\mathrm{End}}(F)$. The
group defined by the finite $L$-presentation in Eq.~(\ref{eqn:LPresApp})
is denoted by \mbox{$\langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$}. If ${\mathcal Q} = \emptyset$ holds,
the $L$-presentation in Eq.~(\ref{eqn:LPresApp}) is \emph{ascending}. In
this case, every endomorphism $\sigma \in \Phi^*$ induces an endomorphism
of the group $G$.\smallskip
The Grigorchuk group ${\mathfrak G}$ is an example of a self-similar group
which is finitely $L$-presented~\cite{Lys85}: the group ${\mathfrak G}$ satisfies
\[
{\mathfrak G} \cong \Big\langle \{a,b,c,d\}\:\Big|\:\{a^2,b^2,c^2,d^2,bcd\} \cup \bigcup_{i\geq 0}\,
\{ (ad)^4, (adacac)^4 \}^{\sigma^i}\Big\rangle,
\]
where $\sigma$ is the endomorphism of the free group $F$ over
$\{a,b,c,d\}$ which is induced by the map $a\mapsto aca$, $b\mapsto
d$, $c\mapsto b$, and $d\mapsto c$. A general method for computing a
finite $L$-presentation for a class of self-similar groups was developed
in~\cite{Bar03} in order to prove
\def\cite{Roz93}{Bartholdi~\cite{Bar03}}
\begin{theorem}[\cite{Roz93}]\Label{thm:FinLPres}
Each finitely generated, contracting, semi-fractal regular branch group
is finitely $L$-presented; however, it is not finitely presented.
\end{theorem}
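Concretely, the iterated relators ${\mathcal R}^{\sigma^i}$ in Lys{\"e}nok's
presentation above can be enumerated by plain string substitution: $\sigma$ maps
every generator to a positive word and all four generators are involutions, so
no inverses are needed. A toy sketch:

```python
SIGMA = {'a': 'aca', 'b': 'd', 'c': 'b', 'd': 'c'}

def sub(w):
    """Apply the substitution sigma letterwise to a positive word."""
    return ''.join(SIGMA[g] for g in w)

def iterated_relators(levels):
    """The relators {(ad)^4, (adacac)^4}^{sigma^i} for 0 <= i < levels."""
    rels, current = [], ['ad' * 4, 'adacac' * 4]
    for _ in range(levels):
        rels.extend(current)
        current = [sub(w) for w in current]
    return rels

assert sub('ad') == 'acac'
assert sub(sub('ad')) == 'acabacab'
assert iterated_relators(2)[2] == 'acac' * 4     # ((ad)^4)^sigma
```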
The constructive proof of Theorem~\ref{thm:FinLPres} in~\cite{Bar03}
was used in~\cite{BEH08} to compute the following finite $L$-presentation
for the Fabrykowski-Gupta group $\Gamma_d$:
\def\cite{Roz93}{Bartholdi et al.~\cite{BEH08}}
\begin{theorem}[\cite{Roz93}]\Label{thm:FGLp}
For $d\geq 3$, the group $\Gamma_d$ is finitely
$L$-presented by $\langle\{\alpha,\rho\} \mid \emptyset\mid \{\varphi\}\mid
{\mathcal R} \rangle$ where the iterated relations in ${\mathcal R}$ are defined as follows:
Writing $\sigma_i = \rho^{\alpha^i}$, for $1 \leq i \leq d-1$, and reading
indices modulo $d$, we have
\[
{\mathcal R}=\left\{
\alpha^d,
\Big[\sigma_i^{\sigma_{i-1}^k},\sigma_j^{\sigma_{j-1}^\ell}\Big],
\sigma_i^{-\sigma_{i-1}^{k+1}}
\sigma_i^{\sigma_{i-1}^k\sigma_{i-1}^{\sigma_{i-2}^\ell}}
\right\}_{
{\scriptstyle 1\leq i,j\leq d},\:
{\scriptstyle 2\leq |i-j|\leq d-2},\:
{\scriptstyle 0\leq k,\ell \leq d-1}
}
\]
The substitution $\varphi$ is induced by the map $\alpha\mapsto
\rho^{\alpha^{-1}}$ and $\rho\mapsto \rho$.
\end{theorem}
It follows immediately from the $L$-presentation in Theorem~\ref{thm:FGLp}
that the substitution $\varphi$ induces an endomorphism of the group
$\Gamma_d$. Finite $L$-presentations $\langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$
whose substitutions $\sigma\in\Phi$ induce endomorphisms of the
group are \emph{invariant $L$-presentations}. Each ascending
$L$-presentation is invariant. It is also easy to see that
the $L$-presentation for the Grigorchuk group ${\mathfrak G}$ above is
invariant~\cite[Corollary~4]{Gri98}.\smallskip
A finite $L$-presentation allows us to define a possibly infinitely
presented group in computer algebra systems such as {\scshape Gap}~\cite{GAP}
or {\scshape Magma}~\cite{Magma}. Besides defining a self-similar group by its
finite $L$-presentation, it can also be defined by its recursive action
on a regular tree. A finite approximation of the recursive action of a
self-similar group is often sufficient to study finite index subgroups
since various self-similar groups have the congruence property: every
finite index subgroup contains a level stabilizer (i.e., the stabilizer of
some level of the regular tree). This often yields an alternative approach
to investigate the structure of a self-similar group with the help of
computer algebra systems~\cite{FR}. However, there are self-similar
groups that do not have the congruence property~\cite{BS10}. For these
groups, their finite $L$-presentation may help to gain insight into
the structure of the group. The groups ${\mathfrak G}$ and $\Gamma_3$ have the
congruence property~\cite{BG02}.\smallskip
In the following, we demonstrate how the finite $L$-presentation
in Theorem~\ref{thm:FGLp} allows us to obtain detailed information
on the structure of the groups $\Gamma_p$, for $3 \leq p \leq 11$.
For further details on self-similar groups, we refer to the monograph
by Nekrashevych~\cite{Nek05}.
\section{A Nilpotent Quotient Algorithm}\Label{sec:NQL}
For a group $G$, the lower central series is defined recursively by
$\gamma_1 G = G$ and $\gamma_{c+1} G = [\gamma_cG,G]$ for $c\in{\mathbb N}$. If
$G$ is finitely generated, $G / \gamma_{c+1}G$ is polycyclic and can
therefore be described by a polycyclic presentation; that is, a finite
presentation whose generators refine a subnormal series with cyclic
sections. A polycyclic presentation allows effective computations within
the group it defines~\cite[Chapter~9]{Sims94}.\smallskip
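For example, the symmetric group $S_3$ admits the consistent polycyclic
presentation
\[
S_3 \cong \langle a, b \mid a^2 = 1,\; b^3 = 1,\; b^a = b^2 \rangle,
\]
whose generators refine the subnormal series $S_3 \rhd A_3 \rhd 1$ with cyclic
sections of orders $2$ and $3$.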
A nilpotent quotient algorithm computes a polycyclic presentation
for the factor group $G/ \gamma_{c+1}G$ together with a homomorphism
$G \to G/\gamma_{c+1} G$. Such an algorithm for finitely presented
groups was developed in~\cite{Nic96}. This was the first
algorithm to be generalized to finite
$L$-presentations~\cite{Har08,BEH08}. The experimental results in
this section were obtained with an improved, parallel version of the
algorithm in~\cite{Har08,BEH08}. They extend the computational results
in~\cite{BEH08} significantly.\smallskip
We briefly sketch the nilpotent quotient algorithm for
finitely $L$-presented groups in the following. Let $G =
\langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$ be a finitely $L$-presented group. Denote
by $F$ the free group over the alphabet ${\mathcal X}$ and let $K$ be the
normal closure $K = \left\langle \bigcup_{\sigma\in\Phi^*} {\mathcal R}^\sigma
\right\rangle^F$. First, we assume that ${\mathcal Q} = \emptyset$ holds. Then
$K^\sigma \subseteq K$, for each $\sigma\in\Phi$, and $G = F/K$
hold. Therefore, each $\sigma\in\Phi$ induces an endomorphism of the
group $G$. Furthermore, we have $G / \gamma_cG \cong F / K\gamma_cF$.
The nilpotent quotient algorithm uses an induction on $c$ to compute a
polycyclic presentation for $G / \gamma_cG$. For $c = 2$, we have
\[
G / [G,G] \cong F/KF'
\cong (F/F')/(KF'/F').
\]
Since $G$ is finitely generated, $F/F'$ is free abelian with finite
rank. The normal generators $\bigcup_{\sigma\in\Phi^*} {\mathcal R}^\sigma$
of $K$ give a (possibly infinite) generating set of $KF'/F'$. From
this generating set it is possible to compute a finite generating set
${\mathcal U}$ with a spinning algorithm. The finite generating set ${\mathcal U}$
allows us to apply the methods from~\cite{Nic96} that eventually compute
a polycyclic presentation for $F/KF'$ together with a homomorphism $F
\to F/KF'$ which induces $G \to G/G'$.\smallskip
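The spinning step can be pictured as follows: in a finite quotient of $F/F'$,
the images of the relators become vectors and each $\sigma\in\Phi$ induces a
linear map; one repeatedly applies these maps to a spanning set until the span
stops growing. The toy sketch below works over ${\mathrm{GF}}(2)$ with vectors
encoded as bitmasks; the actual algorithm computes with integer lattices
instead:

```python
def apply_map(m, v):
    """Image of bitmask v under the GF(2)-linear map with column images m."""
    out = 0
    for i, col in enumerate(m):
        if (v >> i) & 1:
            out ^= col
    return out

def add_to_basis(v, basis):
    """Gaussian elimination over GF(2): basis maps pivot bit -> vector."""
    while v:
        p = v.bit_length() - 1
        if p not in basis:
            basis[p] = v
            return True
        v ^= basis[p]
    return False

def close_under(vectors, maps):
    """Smallest GF(2)-subspace containing `vectors` and invariant under
    the linear `maps` -- the 'spinning' closure."""
    basis, queue = {}, list(vectors)
    while queue:
        v = queue.pop()
        if add_to_basis(v, basis):
            queue.extend(apply_map(m, v) for m in maps)
    return basis

# toy example in GF(2)^3: spin e0 under the coordinate shift e0->e1->e2->e0
shift = [0b010, 0b100, 0b001]
assert len(close_under([0b001], [shift])) == 3
```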
For $c>2$, assume that the algorithm has already computed a polycyclic
presentation for $G/ \gamma_cG \cong F / K\gamma_cF$ together with a
homomorphism $F \to F / K\gamma_cF$. Consider the factor group $H_{c+1}
= F / [K\gamma_cF, F]$. Then $[K\gamma_cF,F] = [K,F]\gamma_{c+1}F$
and $H_{c+1}$ satisfies the short exact sequence
\[
1 \to
K\gamma_cF / [K\gamma_cF, F] \to H_{c+1} \to
F / K\gamma_c F \to 1;
\]
that is, $H_{c+1}$ is a central extension of a finitely generated abelian
group by $G/\gamma_cG$. Thus $H_{c+1}$ is nilpotent and polycyclic.
A polycyclic presentation for $H_{c+1}$ together with a homomorphism
$F \to F / [K\gamma_cF,F]$ can be computed with the covering algorithm
in~\cite{Nic96}; for a proof that this algorithm generalizes to finite
$L$-presentations we refer to~\cite{Har08}. Then $K\gamma_{c+1}F /
[K\gamma_cF,F]$ is a subgroup of $K\gamma_cF / [K\gamma_cF, F]$ and a
(possibly infinite) generating set for $K\gamma_{c+1}F / [K\gamma_cF,F]$
can be obtained from the normal generators of $K$. Again, a finite
generating set ${\mathcal U}$ for $K\gamma_{c+1}F/[K\gamma_cF,F]$ can
be computed with a spinning algorithm from the normal generators of
$K$. The finite generating set ${\mathcal U}$ allows us to apply the methods
in~\cite{Nic96} for computing a polycyclic presentation for $G /
\gamma_{c+1}G \cong F / K\gamma_{c+1}F$ together with a homomorphism $F
\to F/K\gamma_{c+1}F$. This finishes our description of the nilpotent
quotient algorithm in the case where ${\mathcal Q} = \emptyset$ holds.\smallskip
If, on the other hand, $G$ is given by a finite $L$-presentation
$\langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$ with ${\mathcal Q} \neq \emptyset$, the algorithm described above applies
to the finitely $L$-presented group $H = \langle{\mathcal X}\mid\emptyset\mid\Phi\mid{\mathcal R}\rangle$. Write $H =
F/K$ and $G = F/L$ for normal subgroups $K \leq L$.
The nilpotent quotient algorithm applied to $H$ yields a polycyclic
presentation for $H / \gamma_{c+1}H$ together with a homomorphism $F
\to F/K\gamma_{c+1}F$. This yields
\[
G / \gamma_{c+1}G \cong F / L\gamma_{c+1}F \cong
(F/K\gamma_{c+1}F) / (L\gamma_{c+1}F/K\gamma_{c+1}F).
\]
The subgroup $L\gamma_{c+1}F/K\gamma_{c+1}F$ is finitely generated by
the images of the relations in ${\mathcal Q}$. Standard methods for polycyclic
groups~\cite{Sims94} then give a polycyclic presentation for the
factor group $G / \gamma_{c+1}G$ of the polycyclically presented group
$H/\gamma_{c+1}H$ and a homomorphism $F \to G / \gamma_{c+1}G$.
\subsection{Applications of the Nilpotent Quotient Algorithm}\Label{sec:AppsNQL}
The nilpotent quotient algorithm allows us to compute within the lower
central series quotients $G / \gamma_{c+1}G$ of a finitely $L$-presented
group $G$. For instance, it allows us to determine the isomorphism
type of the lower central series sections $\gamma_cG / \gamma_{c+1}G$.
For various self-similar groups, the lower central series sections
often exhibit periodicities. For instance, the Grigorchuk group ${\mathfrak G}$
satisfies
\def\cite{Roz93}{Rozhkov~\cite{Roz96}}
\begin{theorem}[\cite{Roz93}]
The lower central series sections $\gamma_c{\mathfrak G}/\gamma_{c+1}{\mathfrak G}$
are $2$-elementary abelian with the following $2$-ranks:
\[
\mathrm{rk}_2(\gamma_c{\mathfrak G}/\gamma_{c+1}{\mathfrak G}) = \left\{ \begin{array}{cl}
3\textrm{ or } 2,&\textrm{ if }c=1\textrm{ or }c=2,\textrm{ respectively}\\[0.5ex]
2,&\textrm{ if }c\in\{2\cdot 2^m+1,\ldots,3\cdot 2^m\}\\[0.5ex]
1,&\textrm{ if }c\in\{3\cdot 2^m+1,\ldots,4\cdot 2^m\}
\end{array}\right\}\textrm{ with }m\in{\mathbb N}_0.
\]
The group ${\mathfrak G}$ has finite width $2$.
\end{theorem}
Our implementation of the nilpotent quotient algorithm in~\cite{NQL}
allows a computer algebra system to be applied in the investigation of the
quotients $G / \gamma_cG$ for a finitely $L$-presented group $G$. For instance,
our implementation suggests that the group $\Gamma_d$ has a maximal
nilpotent quotient whenever $d$ is not a prime-power. Based on this
experimental observation, the following proposition was proved:
\def\cite{Roz93}{Bartholdi et al.~\cite{BEH08}}
\begin{proposition}[\cite{Roz93}]\Label{prop:MaxNilQuot}
If $d$ is not a prime-power, the group $\Gamma_d$ has a maximal nilpotent
quotient. Its nilpotent quotients are isomorphic to the nilpotent quotients
of the wreath product ${\mathbb Z}_d \wr {\mathbb Z}_d$.
\end{proposition}
For a prime $p\geq 3$, the lower central series sections $\gamma_c\Gamma_p
/ \gamma_{c+1}\Gamma_p$ are $p$-elementary abelian. For $p = 3$, the lower
central series sections $\gamma_c \Gamma_3 / \gamma_{c+1}\Gamma_3$ were
computed in~\cite{Bar05}:
\def\cite{Roz93}{Bartholdi~\cite{Bar05}}
\begin{proposition}[\cite{Roz93}]
The sections $\gamma_c\Gamma_3/\gamma_{c+1}\Gamma_3$ are $3$-elementary
abelian with the following $3$-ranks:
\[
\mathrm{rk}_3(\gamma_{c}\Gamma_3 / \gamma_{c+1}\Gamma_3)
= \left\{ \begin{array}{cl}
2 \textrm{ or } 1,&\textrm{ if } c = 1 \textrm{ or } c=2\textrm{, respectively}, \\[0.5ex]
2,& \textrm{ if } c \in \{3^k+2,\ldots, 2\cdot 3^k+1\}, \\[0.5ex]
1,& \textrm{ if } c \in \{2\cdot 3^k+2,\ldots, 3^{k+1}+1 \}
\end{array} \right\}
\]
with $k \in {\mathbb N}_0$. The group $\Gamma_3$ has finite width $2$.
\end{proposition}
For primes $p > 3$, little is known so far about the lower central series
sections $\gamma_{c}\Gamma_p / \gamma_{c+1}\Gamma_p$~\cite{BEH08}. We use the
following abbreviation to list the ranks of these sections: If the
same entry $a \in {\mathbb N}$ appears in $m$ consecutive places in a list,
it is listed once in the form $a^{[m]}$. The sections $\gamma_c\Gamma_p /
\gamma_{c+1}\Gamma_p$ are $p$-elementary abelian. Their $p$-ranks are given
by the following table:
\[
\begin{array}{cl@{\,}l@{\,}l@{\,}l@{\,}lr}
\toprule
p & \multicolumn{5}{c}{\mathrm{rk}_p \big(\gamma_c\Gamma_p/
\gamma_{c+1}\Gamma_p\big)}&\multicolumn{1}{c}{\textrm{class}}\\
\midrule
3 \rule{0ex}{2.5ex} & 2, 1^{[1]}, &2^{[1]}, 1^{[1]}, &2^{[3]}, 1^{[3]},&2^{[9]},&1^{[9]},
2^{[27]}, 1^{[27]}, 2^{[65]} &147\\
5 & 2, 1^{[3]}, &2^{[1]}, 1^{[13]},&2^{[5]}, 1^{[65]},&2^{[25]},&1^{[26]} &139\\
7 & 2, 1^{[5]}, &2^{[1]}, 1^{[33]},&2^{[7]}, 1^{[68]} & & &115\\
11& 2, 1^{[9]}, &2^{[1]}, 1^{[97]}& 2^{[4]} & & &112\\
\bottomrule
\end{array}
\]
These computational results were obtained with a parallel version of the
nilpotent quotient algorithm in~\cite{BEH08,Har08}. They were intended
to be published in~\cite{EH10}. These computational results extend those
in~\cite{BEH08} significantly, so that we obtain a detailed conjecture
on the structure of the lower central series sections $\gamma_c\Gamma_p
/ \gamma_{c+1}\Gamma_p$: these sections are $p$-elementary abelian
and, writing $f_p(\ell) = p + (p^2-2p-1)(p^{\ell+1} - 1)/(p-1)$
and $g_p(\ell) = f_p(\ell) + p^{\ell+1}$, we conjecture that
\[
\mathrm{rk}_p(\gamma_c\Gamma_p/\gamma_{c+1}\Gamma_p) = \left\{\begin{array}{cl}
2, &\textrm{ if } c \in \{1,p\} \textrm{ or }
f_p(\ell) \leq c < g_p(\ell)\textrm{ for some } \ell\in{\mathbb N}_0, \\
1, &\textrm{ otherwise}
\end{array}\right.
\]
holds.
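The conjectured rank function is easy to evaluate; the following sketch
reproduces the table above for $p \in \{3,5,7,11\}$ (for $p=3$ it also recovers
Bartholdi's result):

```python
def f(p, l):
    # f_p(l) = p + (p^2 - 2p - 1)(p^{l+1} - 1)/(p - 1); the division is exact
    return p + (p * p - 2 * p - 1) * (p ** (l + 1) - 1) // (p - 1)

def rank(p, c):
    """Conjectured p-rank of gamma_c Gamma_p / gamma_{c+1} Gamma_p."""
    if c in (1, p):
        return 2
    l = 0
    while f(p, l) <= c:
        if c < f(p, l) + p ** (l + 1):       # g_p(l) = f_p(l) + p^{l+1}
            return 2
        l += 1
    return 1

def decode(runs):                            # a^[m] -> m copies of a
    return [a for a, m in runs for _ in range(m)]

tables = {
    3: decode([(2, 1), (1, 1), (2, 1), (1, 1), (2, 3), (1, 3),
               (2, 9), (1, 9), (2, 27), (1, 27), (2, 65)]),
    5: decode([(2, 1), (1, 3), (2, 1), (1, 13), (2, 5), (1, 65),
               (2, 25), (1, 26)]),
    7: decode([(2, 1), (1, 5), (2, 1), (1, 33), (2, 7), (1, 68)]),
    11: decode([(2, 1), (1, 9), (2, 1), (1, 97), (2, 4)]),
}
for p, ranks in tables.items():
    assert ranks == [rank(p, c) for c in range(1, len(ranks) + 1)]
```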
If this conjecture is true, the group $\Gamma_p$ would have finite width
$2$. For prime powers $3 \leq d \leq 11$, our implementation yields the
following results:
\begin{itemize}\addtolength{\itemsep}{-1ex}
\item For $d = 4$, the Fabrykowski-Gupta group $\Gamma_4$ satisfies
\[
\Gamma_4 / \Gamma_4' \cong {\mathbb Z}_4 \times {\mathbb Z}_4\quad\textrm{and}\quad
\gamma_2\Gamma_4 / \gamma_3\Gamma_4 \cong {\mathbb Z}_4.
\]
For $3 \leq c \leq 141$, the sections $\gamma_c\Gamma_4/\gamma_{c+1} \Gamma_4$ are $2$-elementary
abelian with $2$-ranks: $2^{[4]}, 3^{[3]}, 2^{[13]}, 3^{[12]}, 2^{[52]}, 3^{[48]}, 2^{[7]}$.
\item For $d = 8$, the Fabrykowski-Gupta group $\Gamma_8$ satisfies
\[
\Gamma_8 / \Gamma_8' \cong {\mathbb Z}_8 \times {\mathbb Z}_8,\qquad
\gamma_2\Gamma_8 / \gamma_3\Gamma_8 \cong {\mathbb Z}_8,
\]
and
\[
\gamma_3\Gamma_8 / \gamma_4\Gamma_8 \cong
\gamma_4\Gamma_8 / \gamma_5\Gamma_8 \cong
\gamma_5\Gamma_8 / \gamma_6\Gamma_8 \cong
\gamma_6\Gamma_8 / \gamma_7\Gamma_8 \cong {\mathbb Z}_4.
\]
For $7 \leq c\leq 111$, the sections $\gamma_c\Gamma_8 / \gamma_{c+1}\Gamma_8$ are $2$-elementary
abelian with $2$-ranks:
$2,1,2^{[2]},3,2,3^{[2]},4,3^{[8]},2^{[23]},3^{[5]},2^{[3]},1^{[8]},2^{[16]},3^{[8]},2^{[8]},3^{[16]},4$.
\item For $d = 9$, the Fabrykowski-Gupta group $\Gamma_9$ satisfies
\[
\Gamma_9 / \Gamma_9' \cong {\mathbb Z}_9 \times {\mathbb Z}_9,\quad
\gamma_2\Gamma_9 / \gamma_3\Gamma_9 \cong {\mathbb Z}_9,\quad\textrm{and}\quad
\gamma_3\Gamma_9 / \gamma_4\Gamma_9 \cong {\mathbb Z}_9.
\]
For $4\leq c\leq 117$, the sections $\gamma_c\Gamma_9 /
\gamma_{c+1}\Gamma_9$ are $3$-elementary abelian with $3$-ranks:
$1^{[5]}, 2^{[6]}, 3,2^{[17]},1^{[38]},1^{[47]}$.
\end{itemize}
\section{Computing Dwyer Quotients of the Schur Multiplier}\Label{sec:Dwyer}
The Schur multiplier $M(G)$ of a group $G$ can be defined as the second
homology group $H_2(G,{\mathbb Z})$ with integer coefficients. It is an invariant
of the group which is of particular interest for infinitely presented
groups because proving the Schur multiplier being infinitely generated
proves that the group does not admit a finite presentation. This is due
to the fact that the Schur multiplier of a finitely presented group is
finitely generated abelian which can be seen as a consequence of Hopf's
formula: If $F$ is a free group and $R \unlhd F$ a normal subgroup so
that $G \cong F/R$ holds, the Schur multiplier $M(G)$ satisfies
\begin{equation}
M(G) \cong (R \cap F') / [R,F].\Label{eqn:HopfForm}
\end{equation}
However, a group with a finitely generated Schur multiplier is not
necessarily finitely presented~\cite{Bau71}. For further details on
the Schur multiplier, we refer to~\cite[Chapter~11]{Rob96}.\smallskip
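As a simple illustration of Hopf's formula, take $G = {\mathbb Z}^2 \cong F/R$
with $F$ free on $\{x,y\}$ and $R = F'$. Then
\[
M({\mathbb Z}^2) \cong (F' \cap F')/[F',F] = F'/[F',F] \cong {\mathbb Z},
\]
generated by the image of the commutator $[x,y]$; so the Schur multiplier of
${\mathbb Z}^2$ is infinite cyclic.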
It is known that the Schur multiplier of a finitely $L$-presented
group (and even the Schur multiplier of a finitely presented group) is not
computable in general~\cite{Gor95}. Nevertheless, the Schur multiplier of some
self-similar groups has been computed in~\cite{Gri99,BS10}:
For instance, the Grigorchuk group ${\mathfrak G}$ satisfies
\def\cite{Roz93}{Grigorchuk~\cite{Gri99}}
\begin{proposition}[\cite{Roz93}]
The Schur multiplier \mbox{$M({\mathfrak G})$} is infinitely generated
$2$-elementary abelian. Therefore, the group ${\mathfrak G}$ is not finitely
presented.
\end{proposition}
There are various examples of self-similar groups for which nothing is
known about their Schur multiplier. Even though the Schur multiplier $M(G)$
is not computable in general, it is possible to compute successive quotients of $M(G)$
provided that the group $G$ is given by an invariant
finite $L$-presentation~\cite{Har10}. These quotients often exhibit periodicities as
well: For instance, our experiments with the implementation of the algorithm
in~\cite{Har10} suggest that the Schur multiplier of
the Fabrykowski-Gupta groups $\Gamma_d$, for a prime-power $d = p^\ell$, is
infinitely generated. The algorithm for computing successive quotients
of $M(G)$ provides a first method to investigate the structure of the
Schur multiplier of an invariantly finitely $L$-presented group (and
even the Schur multiplier of a finitely presented group).\smallskip
We briefly sketch the idea of this algorithm: Let $G$ be an invariantly
finitely $L$-presented group. Write $G \cong F / K$ for a free group
$F$ and a normal subgroup $K$. Then $G/\gamma_cG \cong F/K\gamma_cF$.
We identify $M(G)$ with $(K \cap F')/[K,F]$ and $M(G/\gamma_cG)$ with
$(K\gamma_cF\cap F')/[K\gamma_cF,F]$ and define
\[
\varphi_c\colon M(G)\to M(G/\gamma_cG),\: g[K,F]\mapsto g[K\gamma_cF,F].
\]
Then $\varphi_c$ is a homomorphism of abelian groups. In the induction step
of the nilpotent quotient algorithm, one
computes a homomorphism $F \to F/[K\gamma_cF,F]$. This homomorphism allows
us to compute the image of the Schur multiplier $M(G)$ in $M(G/\gamma_cG)$. In particular,
it allows us to compute the isomorphism type of the \emph{Dwyer
quotient} $M_c(G) = M(G)/\ker\varphi_c$, for $c \in {\mathbb N}$, where
\[
M(G) \geq \ker\varphi_1 \geq \ker\varphi_2 \geq \ldots.
\]
The algorithm for computing $M_c(G)$ has been implemented in
{\scshape Gap}. Its implementation allows us to compute the Dwyer
quotients of various self-similar groups: Since the Schur multiplier
of the Grigorchuk group ${\mathfrak G}$ is $2$-elementary abelian, the Dwyer
quotients of ${\mathfrak G}$ are $2$-elementary abelian. We have computed the
Dwyer quotients $M_c({\mathfrak G})$ for $1\leq c\leq 301$. These quotients are
$2$-elementary abelian with the following $2$-ranks:
\[
1,2,3^{[3]},5^{[6]},7^{[12]}, 9^{[24]}, 11^{[48]}, 13^{[96]}, 15^{[110]}.
\]
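The run-length notation above can be unpacked and checked mechanically. The following short script (plain Python, ours, independent of the {\scshape Gap} computations) verifies that the computed $2$-ranks follow the doubling pattern that is made explicit in the conjecture below:

```python
def rank_2(c):
    # conjectured 2-rank of M_c(G): 1, 2, then 2m+3 on [3*2^m, 3*2^(m+1)-1]
    if c in (1, 2):
        return c
    m = 0
    while 3 * 2 ** (m + 1) <= c:
        m += 1
    return 2 * m + 3

# run-length encoding of the computed ranks for 1 <= c <= 301
runs = [(1, 1), (2, 1), (3, 3), (5, 6), (7, 12), (9, 24),
        (11, 48), (13, 96), (15, 110)]
ranks = [r for r, n in runs for _ in range(n)]

assert len(ranks) == 301
assert all(rank_2(c) == ranks[c - 1] for c in range(1, 302))
```

The last plateau is cut off at $c = 301$ by the range of the computation; the conjecture predicts it extends to $c = 383$.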
These experiments suggest that the Grigorchuk group satisfies
\[
M_c({\mathfrak G}) \cong \left\{\begin{array}{cl}
{\mathbb Z}_2\textrm{ or }({\mathbb Z}_2)^2,&\textrm{if }c=1\textrm{ or }c=2\textrm{, respectively},\\
({\mathbb Z}_2)^{2m+3},&\textrm{if }c\in\{3\cdot 2^m,\ldots,3\cdot 2^{m+1}-1\},
\end{array}\right.
\]
with $m\in{\mathbb N}_0$. For the Fabrykowski-Gupta groups $\Gamma_d$,
the algorithm in~\cite{Har10} yields first
insight into the structure of $M(\Gamma_d)$: We restrict ourselves to the
groups $\Gamma_d$ for prime powers $d = p^\ell$ because,
otherwise, the groups have a maximal nilpotent quotient by Proposition~\ref{prop:MaxNilQuot}. For a
prime $p \in \{3,5,7,11\}$, the Dwyer quotients $M_c(\Gamma_p)$ are
$p$-elementary abelian groups with the following $p$-ranks:
\[
\begin{array}{cl@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\,}l@{\,}}
\toprule
p &\multicolumn{10}{c}{\mathrm{rk}_p(M_c(\Gamma_p))} \\
\midrule
3\rule{0ex}{2.5ex} &0^{[2]},&1^{[3]},& 2^{[0]},& 3^{[9]},& 4^{[1]}, &5^{[26]},& 6^{[4]}, &7^{[77]},& 8^{[13]}, &9^{[12]} \\
5&0^{[1]}, &1^{[4]}, &2^{[2]}, &3^{[20]}, &4^{[10]}, &5^{[100]},&6^{[1]}&&& \\
7&0^{[1]},& 1^{[2]}, &2^{[6]},& 3^{[2]},& 4^{[14]}, &5^{[42]}, &6^{[14]},&7^{[34]} &&\\
11&0^{[1]}, &1^{[2]},&2^{[2]},& 3^{[2]},& 4^{[10]}, &5^{[2]}, &6^{[22]}, &7^{[22]}, &8^{[22]}, &9^{[27]} \\
\bottomrule
\end{array}
\]
As noted by Bartholdi, these experimental results suggest that
\[
\mathrm{rk}_3(M_{c+1}(\Gamma_3)) = \left\{
\begin{array}{cl}
2\left\lfloor\log_3\left(\frac{2c-1}{10}\right) \right\rfloor + 3,
& \textrm{ if }\log_3(2c-1) \in {\mathbb Z},\\[0.75ex]
\left\lfloor \log_3(2c-1) \right\rfloor
+ \left\lfloor \log_3\left(\frac{2c-1}{10}\right) \right\rfloor
+ 1, &\textrm{ otherwise,}
\end{array}\right.
\]
for $c\geq 6$. Our results for the Dwyer quotients $M_c(\Gamma_d)$,
for $d\in\{4,8,9\}$, are shown in Table~\ref{tab:Dwyer}
where we list the abelian invariants of $M_c(G)$. Here, a
list $(\alpha_1,\ldots,\alpha_n)$ stands for the abelian group
${\mathbb Z}_{\alpha_1}\times\cdots\times{\mathbb Z}_{\alpha_n}$. Again, we list the abelian
invariants $(\alpha_1,\ldots,\alpha_n)^{[m]}$ just once if they appear
in $m$ consecutive places.
\begin{table}[ht]
\begin{center}
\caption{Dwyer quotients of the Fabrykowski-Gupta groups $\Gamma_d$}\label{tab:Dwyer}\bigskip
\begin{tabular}{lr}
\toprule
\multicolumn{1}{c}{$d$} & \multicolumn{1}{c}{$M_c(\Gamma_d)$} \\
\midrule
& \raisebox{0ex}[2.5ex]{}
$(1)^{[1]}$
$(2)^{[1]}$
$(2,2)^{[1]}$
$(2,4)^{[4]}$
$(2,2,2,4)^{[1]}$ \\ 4 &
$(2,2,2,2,4)^{[4]}$
$(2,2,2,4,4)^{[16]}$
$(2,2,2,2,4,4)^{[1]}$
$(2,2,2,2,2,4,4)^{[3]}$ \\ &
$(2,2,2,2,2,2,4,4)^{[16]}$
$(2,2,2,2,2,4,4,4)^{[64]}$
$(2,2,2,2,2,2,4,4,4)^{[5]}$ \\ &
$(2,2,2,2,2,2,2,4,4,4)^{[11]}$
$(2,2,2,2,2,2,2,2,4,4,4)^{[26]}$ \\
\midrule
& \raisebox{0ex}[2.5ex]{}
$(1) ^ {[1]}$
$(8) ^ {[2]}$
$(4,8) ^{[3]}$
$(2,4,8) ^{[4]}$
$(2,8,8) ^{[1]}$
$(2,2,8,8) ^{[2]}$\\ &
$(2,2,2,8,8) ^{[2]}$
$(2,2,4,8,8) ^{[2]}$
$(2,4,4,8,8) ^{[2]}$
$(2,4,8,8,8) ^{[2]}$\\ \raisebox{1.5ex}[-1.5ex]{8} &
$(2,8,8,8,8) ^{[8]}$
$(2,2,8,8,8,8) ^{[4]}$
$(2,4,8,8,8,8) ^{[20]}$
$(2,2,4,8,8,8,8) ^{[32]}$\\ &
$(2,2,8,8,8,8,8) ^{[7]}$
$(2,2,2,8,8,8,8,8) ^{[16]}$
$(2,2,2,2,8,8,8,8,8) ^{[16]}$\\ &
$(2,2,2,4,8,8,8,8,8) ^{[16]}$
$(2,2,4,4,8,8,8,8,8) ^{[3]}$\\
\midrule
& \raisebox{0ex}[2.5ex]{}
$(1) ^ {[1]}$
$(9) ^ {[2]}$
$(3,9) ^ {[2]}$
$(3,3,9) ^ {[4]}$
$(3,9,9) ^ {[2]}$ \\ &
$(9,9,9) ^ {[2]}$
$(3,9,9,9) ^ {[2]}$
$(3,3,9,9,9) ^ {[4]}$
$(3,9,9,9,9) ^ {[2]}$ \\ \raisebox{1.5ex}[-1.5ex]{9} &
$(9,9,9,9,9) ^ {[12]}$
$(3,9,9,9,9,9) ^ {[18]}$
$(3,3,9,9,9,9,9) ^ {[36]}$ \\ &
$(3,9,9,9,9,9,9) ^ {[18]}$
$(9,9,9,9,9,9,9) ^ {[17]}$
$(3,9,9,9,9,9,9,9) ^ {[12]}$\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Coset Enumeration for Finite Index Subgroups}\Label{sec:TC}
A standard algorithm for finitely presented groups is the \emph{coset
enumerator} introduced by Todd and Coxeter~\cite{TC36}. Coset enumeration
is an algorithm that, given a finite generating set of a subgroup $H\leq
G$, computes the index $[G:H]$ provided that this index is finite. Its
overall strategy is to compute a permutation representation for the
group's action on the right-cosets $H \backslash G$. For finitely presented
groups, coset enumeration techniques have been investigated for some
time~\cite{Lee63,CDHW73,Neu82,Sims94}. They allow computer algorithms
to be applied in the investigation of finitely presented groups by
their finite index subgroups~\cite{HR94}. It was shown in~\cite{Har11}
that even finitely $L$-presented groups allow one to develop a coset
enumeration process. This latter algorithm reduces the computation to
finite presentations first and then it proves correctness of the obtained
result. A coset enumerator for finitely $L$-presented groups has various
interesting applications: For instance, it allows one to compute low-index
subgroups, as suggested in~\cite{DSch74}, and it solves the generalized
word problem for finite index subgroups~\cite{Har11}.\smallskip
We briefly sketch the idea of the coset enumeration process in~\cite{Har11}
in the following.
Let $G = \langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$ be a finitely $L$-presented
group. Suppose that a subgroup $H \leq G$ is given by its finitely
many generators $\{g_1,\ldots,g_n\}$. We consider the generators
$g_1,\ldots,g_n$ as elements of the free group $F$ over ${\mathcal X}$. Then $U
= \langle g_1,\ldots,g_n \rangle \leq F$ satisfies $H \cong U\!K/K$ where $K
= \langle {\mathcal Q} \cup \bigcup_{\sigma\in\Phi^*} {\mathcal R}^\sigma\rangle^F$ is the kernel of
the free presentation. We are to compute the index
$[G:H] = [F:U\!K]$. For this purpose, we define $\Phi_\ell =
\{ \sigma \in \Phi^* \mid \|\sigma\| \leq \ell\}$ where $\| \cdot \|$
denotes the usual word-length in the free monoid $\Phi^*$. Consider
the finitely presented groups $G_\ell = F/K_\ell$ given by the finite presentation
\begin{equation}
G_\ell = \Big\langle {\mathcal X}\:\Big|\: {\mathcal Q} \cup \bigcup_{\sigma \in \Phi_\ell} {\mathcal R}^\sigma\Big\rangle.\Label{eqn:FPCov}
\end{equation}
Then $G_\ell$ naturally maps onto $G$ and we obtain a series of subgroups
\[
U\!K_0 \leq U\!K_1 \leq \ldots \leq U\!K \leq F.
\]
Since $U\!K \leq F$ is a finite index subgroup of a finitely generated
group, it is finitely generated by $u_1,\ldots, u_n$, say. Furthermore,
we have $U\!K = \bigcup_{\ell\geq 0} U\!K_\ell$. For each $u_i\in U\!K$,
there exists $n_i\in{\mathbb N}_0$ so that $u_i \in U\!K_{n_i}$. For $m = \max\{ n_i
\mid 1\leq i\leq n\}$ we have $\{u_1,\ldots,u_n\} \subseteq U\!K_m$. Thus
$U\!K = U\!K_m$. In fact, there exists an integer $m \in{\mathbb N}_0$
so that $H$ has finite index in the finitely presented group $G_m = \langle
{\mathcal X} \mid {\mathcal Q} \cup \bigcup_{\sigma \in \Phi_m} {\mathcal R}^\sigma\rangle$.\smallskip
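As a concrete sketch (ours) of how the truncated finitely presented covers $G_\ell$ of Eq.~(\ref{eqn:FPCov}) can be produced, the snippet below uses a standard $L$-presentation of the Grigorchuk group recalled from the literature (an assumption of this sketch): ${\mathcal Q} = \{a^2, b^2, c^2, d^2, bcd\}$, $\Phi = \{\sigma\}$, ${\mathcal R} = \{[d,d^a], [d,d^{acaca}]\}$ with $\sigma\colon a \mapsto aca$, $b \mapsto d$, $c \mapsto b$, $d \mapsto c$. Since all four generators are involutions, words are plain strings and every letter is its own inverse.

```python
SIGMA = {"a": "aca", "b": "d", "c": "b", "d": "c"}

def apply_sigma(word):
    return "".join(SIGMA[x] for x in word)

def conj(w, g):    # g^-1 w g  (involutive letters: g^-1 is g reversed)
    return g[::-1] + w + g

def comm(u, v):    # [u, v] = u^-1 v^-1 u v
    return u[::-1] + v[::-1] + u + v

Q = ["aa", "bb", "cc", "dd", "bcd"]
R = [comm("d", conj("d", "a")), comm("d", conj("d", "acaca"))]

def relators(ell):
    """Relator set Q union sigma^k(R) for 0 <= k <= ell."""
    rels, images = list(Q), list(R)
    for _ in range(ell + 1):
        rels.extend(images)
        images = [apply_sigma(r) for r in images]
    return rels
```

The list \texttt{relators(ell)} can then be fed to any coset enumerator for finitely presented groups; running enumerators for increasing $\ell$ in parallel realizes the strategy described next.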
Coset enumeration for finitely presented groups allows us to compute
a permutation representation $\pi\colon F \to {\mathrm{Sym}}(U\!K_m \backslash F)$. The
integer $m$ cannot be given \emph{a priori}. However, various coset
enumerators can be applied in parallel to the finitely presented groups
$G_\ell$. In theory, termination is guaranteed for a sufficiently large
integer $\ell$ if $[G:H]$ is finite. Suppose that one coset enumerator has
terminated for $[G_\ell:H]$ and suppose that it has computed a permutation
representation $\pi_\ell\colon F \to {\mathrm{Sym}}(U\!K_\ell\backslash F)$. Then $[G:H]
= [F:U\!K]$ divides the index $[G_\ell:H] = [F:U\!K_\ell]$. It suffices
to check whether or not $\pi_\ell$ induces a group homomorphism $G
\to {\mathrm{Sym}}(U\!K_\ell\backslash F)$. In this case, we obtain $[G_\ell:H] = [G:H]$
and $\pi_\ell$ is a permutation representation for $G$'s action on the
right-cosets $H \backslash G$. Otherwise, we have to enlarge the index $\ell$
and we will eventually compute the index $[G:H]$ in this way. The following
theorem was proved in~\cite{Har11}:
\begin{theorem}
For a finitely $L$-presented group $G = \langle{\mathcal X}\mid{\mathcal Q}\mid\Phi\mid{\mathcal R}\rangle$
and a homomorphism $\pi\colon F \to H$ into a finite group $H$,
there exists an algorithm that decides whether or not $\pi$
induces a group homomorphism $G \to H$.
\end{theorem}
\begin{proof}
For an explicit algorithm, we refer to~\cite{Har11}.
\end{proof}
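The finiteness argument behind this theorem can be made concrete. The sketch below is our own reconstruction, not necessarily the algorithm of~\cite{Har11}, and it again assumes the $L$-presentation of the Grigorchuk group with involutive generators recalled above from the literature. Since the target group is finite, the tuple of generator images under $\pi\circ\sigma^k$ takes only finitely many values, so iterating the substitution until the tuple repeats checks every substituted relator:

```python
SIGMA = {"a": "aca", "b": "d", "c": "b", "d": "c"}
Q = ["aa", "bb", "cc", "dd", "bcd"]
R = ["dadadada", "dacacadacaca" * 2]   # [d, d^a] and [d, d^(acaca)]

def pmul(p, q):          # permutation product: apply p, then q
    return tuple(q[i] for i in p)

def ev(word, img):       # evaluate a word of involutive generators
    e = tuple(range(len(next(iter(img.values())))))
    for x in word:
        e = pmul(e, img[x])
    return e

def induces_hom(img):
    ident = ev("", img)
    if any(ev(q, img) != ident for q in Q):
        return False
    seen, key = set(), tuple(sorted(img.items()))
    while key not in seen:
        seen.add(key)
        if any(ev(r, img) != ident for r in R):
            return False
        img = {x: ev(SIGMA[x], img) for x in img}  # images under next sigma-power
        key = tuple(sorted(img.items()))
    return True

swap, idp = (1, 0), (0, 1)
assert induces_hom({"a": swap, "b": idp, "c": idp, "d": idp})       # level-1 quotient
assert not induces_hom({"a": swap, "b": swap, "c": idp, "d": idp})  # bcd not killed
```

The first map is the projection onto the level-one quotient $\langle a \rangle \cong {\mathbb Z}_2$; the second fails already on the fixed relator $bcd$.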
Coset enumeration for finitely $L$-presented groups allows various
computations with finite index subgroups; e.g. computing the intersection
of two finite index subgroups, computing the core of a finite index
subgroup, solving the generalized word problem for finite index
subgroups, etc. In the following, we demonstrate the application of our
coset enumerator to the Fabrykowski-Gupta groups $\Gamma_p$. In
particular, we show how to compute the number of finite index subgroups
with a moderate index.
\subsection{An Application of Coset Enumeration: Low-Index Subgroups}\Label{sec:LowX}
As an application of the coset enumeration process, we consider subgroups
with small index in a finitely $L$-presented group. Since the finitely
presented group $G_\ell$ from Eq.~(\ref{eqn:FPCov}) naturally maps onto
the finitely $L$-presented group $G$, it suffices to compute low-index
subgroups of the finitely presented group $G_\ell$. These subgroups
map to subgroups of $G$ with possibly smaller index. On the other hand,
each finite index subgroup of $G$ has a full preimage with the same index
in $G_\ell$. Therefore it remains to remove duplicates from the list
of subgroups obtained from the finitely presented group $G_\ell$. For
finitely presented groups, an algorithm for computing all subgroups up to
a given index was described in~\cite{DSch74}. An implementation of this
algorithm can be found in~\cite{LOWX}. This implementation includes an
algorithm for computing only the normal subgroups of a finitely presented
group~\cite{CD05}. The latter algorithm allows one to deal with possibly
larger indices than the usual low-index subgroup algorithms.\smallskip
We first consider the Grigorchuk group ${\mathfrak G}$: its lattice of
normal subgroups is well-understood~\cite{Bar05,CST01} while its
lattice of finite index subgroups is widely unknown~\cite{Gr05}. It
is known that the Grigorchuk group has seven subgroups of index
two~\cite{Gr05}. In~\cite{Per00}, it was shown that these index-two
subgroups are the only maximal subgroups of ${\mathfrak G}$. The implementation
of our coset enumeration process allows us to compute the number of
subgroups with index at most $64$ in the group ${\mathfrak G}$~\cite{Har11}. Our
computations correct the counts in~\cite[Section~7.4]{BGZ03}
and~\cite[Section~4.1]{BG02}. The following list summarizes the number
of subgroups ($\leq$) and the number of normal subgroups ($\unlhd$)
of ${\mathfrak G}$:
\[
\begin{array}{cccccccc}
\toprule
{\rm index} & 1 & 2 & 4 & 8 & 16 & 32 & 64\\
\midrule
\leq & 1 & 7 & 31 & 183 & 1827 & 22931 & 378403 \\
\unlhd & 1 & 7 & 7 & 7 & 5 & 3 & 3 \\
\bottomrule
\end{array}
\]
For the Fabrykowski-Gupta groups $\Gamma_p$, where $3 \leq p \leq 11$ is prime,
we only found subgroups with prime-power index in $\Gamma_p$. Their counts
are as follows:
\[
\begin{array}{ccccccccc}
\toprule
& \multicolumn{2}{c}{p = 3} & \multicolumn{2}{c}{p = 5 } & \multicolumn{2}{c}{p=7} & \multicolumn{2}{c}{p=11} \\
\raisebox{1.5ex}[-1.5ex]{index} & \leq & \unlhd & \leq & \unlhd & \leq & \unlhd & \leq & \unlhd \\
\midrule
p^0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
p^1 & 4 & 4 & 6 & 6 & 8 & 8 & 12&12 \\
p^2 & 31 & 1 &806& 1 & ? & 1 & ? & 1 \\
p^3 &1966& 1 & ? & 1 & ? & ? & ? & ? \\
p^4 & ? & 4 & ? & ? & ? & ? & ? & ? \\
p^5 & ? & 1 & ? & ? & ? & ? & ? & ? \\
p^6 & ? & 1 & ? & ? & ? & ? & ? & ? \\
p^7 & ? & 4 & ? & ? & ? & ? & ? & ? \\
\bottomrule
\end{array}
\]
Here '$?$' denotes an index where our computations did not terminate
within a reasonable amount of time. The only normal subgroups with index
$p^2$ are the derived subgroups since $\Gamma_p / \Gamma_p' \cong {\mathbb Z}_p
\times {\mathbb Z}_p$ holds~\cite{Gri00}. For prime powers $d = p^\ell$,
the groups $\Gamma_d$ again only admit subgroups with prime-power index $p^j$:
\[
\begin{array}{ccccccc}
\toprule
& \multicolumn{2}{c}{p^\ell = 2^2} & \multicolumn{2}{c}{p^\ell = 2^3 } & \multicolumn{2}{c}{p^\ell = 3^2} \\
\raisebox{1.5ex}[-1.5ex]{index} & \leq & \unlhd & \leq & \unlhd & \leq & \unlhd \\
\midrule
p^0 & 1 & 1 & 1 & 1 & 1 & 1 \\
p^1 & 3 & 3 & 3 & 3 & 4 & 4 \\
p^2 & 19 & 7 &19 & 7 &76 &13 \\
p^3 &211 & 7 &163&19 & ? & ? \\
p^4 &2419&11 &2227&23 & ? & ? \\
\bottomrule
\end{array}
\]
For the groups $\Gamma_6$ and $\Gamma_{10}$, we obtain the following subgroup counts:
\[
\begin{array}{ccccc}
\toprule
& \multicolumn{2}{c}{\Gamma_6} & \multicolumn{2}{c}{\Gamma_{10}} \\
\raisebox{1.5ex}[-1.5ex]{\rm index} & \leq & \unlhd & \leq & \unlhd \\
\midrule
1 & 1 & 1 & 1 & 1 \\
2 & 3 & 3 & 3 & 3 \\
3 & 7 & 4 & 0 & 0 \\
4 & 9 & 1 & 5 & 1 \\
5 & 0 & 0 & 11 & 6 \\
6 & 39 & 13 & 0 & 0 \\
7 & 0 & 0 & 0 & 0 \\
8 & 45 & 1 & 1 & 1 \\
9 & 79 & 1 & 0 & 0 \\
10& 0 & 0 & 113 & 19\\
\bottomrule
\end{array} \qquad\qquad\quad
\begin{array}{ccccc}
\toprule
& \multicolumn{2}{c}{\Gamma_6} & \multicolumn{2}{c}{\Gamma_{10}} \\
\raisebox{1.5ex}[-1.5ex]{\rm index} & \leq & \unlhd & \leq & \unlhd \\
\midrule
11 & 0 & 0 & 0 &0\\
12 & 219 & 6 & 0 &0\\
13 & 0 & 0 & 0 &0\\
14 & 0 & 0 & 0 &0\\
15 & 0 & 0 & 0 &0\\
16 & 188 & 0 & 16 &0\\
17 & 0 & 0 & 0 &0\\
18 &1299 & 7 & 0 &0\\
19 & 0 & 0 & 0 &0\\
20 & 0 & 0 & ? &?\\
\bottomrule
\end{array}
\]
\section{Computing Solvable Quotients}\Label{sec:DerSer}
The coset enumeration process in~\cite{Har11} was used to prove the
following version of the Reide\-meister-Schreier theorem for finitely
presented groups in~\cite{Har11b}:
\begin{theorem}\Label{thm:ReidSchrApps}
Each finite-index subgroup of a finitely $L$-presented
group is finitely $L$-presented.
\end{theorem}
\begin{proof}
For a constructive proof, we refer to~\cite{Har11b}.
\end{proof}
The constructive proof of Theorem~\ref{thm:ReidSchrApps} allows
us to apply the method for finitely $L$-presented groups to finite
index subgroups of a finitely $L$-presented group. As an application
of this method, we consider the successive quotients $G /G^{(i)}$
of the derived series. This series is defined recursively by $G^{(1)} =
G' = [G,G]$ and $G^{(i+1)} = [G^{(i)},G^{(i)}]$ for $i\in{\mathbb N}$. The
isomorphism type of the abelian quotient $G / G'$ can be computed with
the methods from~\cite{BEH08,Har08} provided that $G$ is given by a
finite $L$-presentation. Moreover, it is decidable whether or not $G'$
has finite index in $G$; see~\cite{Har08,BEH08}.\smallskip
Suppose that $G / G'$ is finite. Then the constructive proof
of Theorem~\ref{thm:ReidSchrApps} allows us to compute a finite
$L$-presentation for the finite index subgroup $G'\leq G$. Then we can
compute its abelianization and we can continue this process. In general,
if $G / G^{(i+1)}$ is finite, we can therefore compute the quotients
$G^{(i+1)} / G^{(i+2)}$ recursively. An alternative approach to compute
the sections $G^{(i)} / G^{(i+1)}$ could generalize the methods for
finitely presented groups~\cite{Lo97}.\smallskip
For the Grigorchuk group ${\mathfrak G}$, the sections $G^{(i)} / G^{(i+1)}$
of the derived series have been computed by Rozhkov~\cite{Roz93};
see also~\cite{Vie98}:
\begin{theorem}[Rozhkov~\cite{Roz93}]
The Grigorchuk group ${\mathfrak G}$ satisfies $[{\mathfrak G}:{\mathfrak G}'] = 2^3$,
\mbox{$[{\mathfrak G}:{\mathfrak G}''] = 2^7$}, and $[{\mathfrak G}:{\mathfrak G}^{(k)}] = 2^{2+2^{2k-2}}$
for $k\geq 3$.
\end{theorem}
Our implementation of the Reidemeister-Schreier
Theorem~\ref{thm:ReidSchrApps} yields that
\[
{\mathfrak G}/{\mathfrak G}' \cong ({\mathbb Z}_2)^3,\quad
{\mathfrak G}'/{\mathfrak G}'' \cong {\mathbb Z}_2 \times{\mathbb Z}_2 \times {\mathbb Z}_4,\quad\textrm{and}\quad
{\mathfrak G}''/{\mathfrak G}^{(3)} \cong ({\mathbb Z}_2)^2 \times ({\mathbb Z}_4)^3 \times {\mathbb Z}_8.
\]
Since the abelianization $\Gamma_p / \Gamma_p' \cong {\mathbb Z}_p \times {\mathbb Z}_p$
of the Fabrykowski-Gupta group $\Gamma_p$ is finite~\cite{Gri00},
the derived subgroup $\Gamma_p'$ satisfies $[\Gamma_p:\Gamma_p'] = p^2$. A
finite $L$-presentation for $\Gamma_p'$ can be computed with the methods
in~\cite{Har11b}. We obtain that
\[
\Gamma_3' / \Gamma_3'' \cong ({\mathbb Z}_3)^2, \qquad
\Gamma_3'' / \Gamma_3^{(3)} \cong ({\mathbb Z}_3)^4,\quad\textrm{and}\quad
\Gamma_3^{(3)} / \Gamma_3^{(4)} \cong ({\mathbb Z}_3)^{10}
\]
as well as $\Gamma_4' / \Gamma_4'' \cong ({\mathbb Z}_4)^2$,
\[
\Gamma_4'' / \Gamma_4^{(3)} \cong {\mathbb Z}_2 \times ({\mathbb Z}_4)^2 \times {\mathbb Z}_8,\quad\textrm{ and }\quad
\Gamma_4^{(3)} / \Gamma_4^{(4)} \cong ({\mathbb Z}_2)^3\times ({\mathbb Z}_4)^9\times ({\mathbb Z}_8)^3.
\]
For $5 \leq d \leq 41$, our computations suggested the following result, which we now prove in general.
\begin{proposition}
For $d \geq 5$, $\Gamma_d$ satisfies $\Gamma_d
/ \Gamma_d' \cong ({\mathbb Z}_d)^2$ and $\Gamma_d' / \Gamma_d'' \cong
({\mathbb Z}_d)^{d-1}$.
\end{proposition}
\begin{proof}
It was already shown in~\cite{Gri00} that $\Gamma_d / \Gamma_d' \cong
{\mathbb Z}_d \times {\mathbb Z}_d$ holds. For the second statement, we combine the methods
from~\cite{FAZR11} and~\cite{Gri00}: For primes $p$, the structure
of the congruence subgroups $\Gamma_p / {\mathrm{Stab}}_{\Gamma_p}(n)$, $n\in{\mathbb N}$,
were studied in~\cite{FAZR11}. Moreover, it was shown in~\cite{Gri00} that,
for $d \geq 5$, the index $[\Gamma_d':\Gamma_d'']$ is finite.\smallskip
Let $d \geq 5$ be given. Denote by ${\mathrm{Stab}}_{\Gamma_d}(1)$ the first level
stabilizer in $\Gamma_d$. Then $\Gamma_d = {\mathrm{Stab}}_{\Gamma_d}(1) \rtimes \langle a
\rangle$ and ${\mathrm{Stab}}_{\Gamma_d}(1) = \langle r,r^a,\ldots,r^{a^{d-1}} \rangle$ hold. Since
$\Gamma_d' = \langle [a,r]\rangle^{\Gamma_d} = \langle r^{-a}\,r\rangle^{\Gamma_d}$,
we have that $\Gamma_d' \leq {\mathrm{Stab}}_{\Gamma_d}(1)$ and, as $\Gamma_d / \Gamma_d'
\cong {\mathbb Z}_d \times {\mathbb Z}_d$ holds, we have that $[{\mathrm{Stab}}_{\Gamma_d}(1):\Gamma_d']
= d$. More precisely, we have ${\mathrm{Stab}}_{\Gamma_d}(1) = \Gamma_d' \rtimes \langle
r \rangle$.\smallskip
For each $0\leq i < d$, we write $g_i = r^{a^i}$. In the following, indices
are read modulo $d$. For $0\leq\ell < d$, $g_i^\ell$ decomposes as
$(1,\ldots,1,r^\ell,a^\ell,1,\ldots,1)$ where $a^\ell$ is at position
$i$. If \mbox{$|i-j|>1$}, the commutator $[g_i^\ell,g_j^k]$ is
trivial; otherwise, the commutator $[g_i^\ell,g_{i+1}^k]$ decomposes
as $(1,\ldots,1,[a^\ell,r^k],1,\ldots,1)$ with $[a^\ell,r^k]$ at
position $i$. Since $[a^\ell,r^k] \in {\mathrm{Stab}}_{\Gamma_d}(1)$, we have that
$[g_i^\ell,g_j^k] \in {\mathrm{Stab}}_{\Gamma_d}(2)$. Thus, ${\mathrm{Stab}}_{\Gamma_d}(1)
/ {\mathrm{Stab}}_{\Gamma_d}(2)$ is abelian and it is generated by the images
of the elements $g_0,\ldots,g_{d-1}$. Because $[a^\ell,r^k] =
a^{-\ell}\,r^{-k}\,a^\ell\,r^k = g_\ell^{-k} g_0^k$, we have that
$[g_i^\ell,g_j^k] \in {\mathrm{Stab}}_{\Gamma_d}(3)$ if and only if $\ell\,k \equiv
0 \pmod d$. Therefore ${\mathrm{Stab}}_{\Gamma_d}(1) / {\mathrm{Stab}}_{\Gamma_d}(2) \cong {\mathbb Z}_d
\times \cdots \times {\mathbb Z}_d$ and $\Gamma_d / {\mathrm{Stab}}_{\Gamma_d}(2) \cong {\mathbb Z}_d \wr
{\mathbb Z}_d$. Since ${\mathrm{Stab}}_{\Gamma_d}(1) / {\mathrm{Stab}}_{\Gamma_d}(2)$ is abelian,
we have that ${\mathrm{Stab}}_{\Gamma_d}(1)' \leq {\mathrm{Stab}}_{\Gamma_d}(2)$. Because each
generator of ${\mathrm{Stab}}_{\Gamma_d}(1)$ has order $d$, the largest abelian
quotient ${\mathrm{Stab}}_{\Gamma_d}(1) / {\mathrm{Stab}}_{\Gamma_d}(1)'$ has order at most
$d^d$. It follows that ${\mathrm{Stab}}_{\Gamma_d}(2) = {\mathrm{Stab}}_{\Gamma_d}(1)'$. Moreover,
we have ${\mathrm{Stab}}_{\Gamma_d}(2) = {\mathrm{Stab}}_{\Gamma_d}(1)' \leq \Gamma_d'$ and, since
$\Gamma_d' \leq {\mathrm{Stab}}_{\Gamma_d}(1)$ holds, it follows that $\Gamma_d'' \leq
{\mathrm{Stab}}_{\Gamma_d}(1)' = {\mathrm{Stab}}_{\Gamma_d}(2)$. The proofs in~\cite{Gri00,BEH08}
yield that ${\mathrm{Stab}}_{\Gamma_d}(2) \leq \Gamma_d''$ if $d \geq 5$. Therefore
$d^{d-1} = |\Gamma_d' / {\mathrm{Stab}}_{\Gamma_d}(2)| = |\Gamma_d' / \Gamma_d''|$ and $\Gamma_d'
/ \Gamma_d'' \cong {\mathbb Z}_d \times \cdots \times {\mathbb Z}_d$.
\end{proof}
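The key step of the proof takes place inside the finite quotient $\Gamma_d/{\mathrm{Stab}}_{\Gamma_d}(2) \cong {\mathbb Z}_d \wr {\mathbb Z}_d$ and can be checked by brute force. The following sketch (ours; the element convention for the wreath product is an implementation choice) computes the derived subgroup of ${\mathbb Z}_d \wr {\mathbb Z}_d$ as the normal closure of a single commutator and confirms that it is the sum-zero part of the base group, of order $d^{d-1}$:

```python
def wreath_derived(d):
    """Derived subgroup of Z_d wr Z_d; elements are (lamps, shift)."""
    def mul(x, y):
        (v, k), (w, l) = x, y
        return (tuple((v[i] + w[(i - k) % d]) % d for i in range(d)),
                (k + l) % d)
    def inv(x):
        v, k = x
        return (tuple((-v[(i + k) % d]) % d for i in range(d)), (-k) % d)
    def comm(g, h):
        return mul(mul(inv(g), inv(h)), mul(g, h))
    a = ((0,) * d, 1)               # the d-cycle permuting the lamps
    r = ((1,) + (0,) * (d - 1), 0)  # one lamp switched on at position 0
    start = comm(a, r)
    elems, frontier = {start}, [start]
    while frontier:                 # close under products and conjugation
        x = frontier.pop()
        new = [mul(mul(inv(g), x), g) for g in (a, r)]
        for z in list(elems):
            new.extend((mul(x, z), mul(z, x)))
        for y in new:
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return elems

for d in (3, 4, 5):
    D = wreath_derived(d)
    assert len(D) == d ** (d - 1)                         # order d^(d-1)
    assert all(k == 0 and sum(v) % d == 0 for v, k in D)  # sum-zero base
```

Note that the identification of this derived subgroup with $\Gamma_d'/\Gamma_d''$ is valid only for $d \geq 5$, where ${\mathrm{Stab}}_{\Gamma_d}(2) = \Gamma_d''$ holds as in the proof above.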
The constructive proof of Theorem~\ref{thm:ReidSchrApps} in~\cite{Har11b}
yields a finite $L$-presentation over the Schreier generators of
the subgroup. By the Nielsen-Schreier theorem (as, for instance,
in~\cite[6.1.1]{Rob96}), a subgroup $H$ with index $m = [G:H]$ in an
$n$-generated finitely $L$-presented group $G$ has $nm+1-m$ Schreier
generators. The Fabrykowski-Gupta groups are $2$-generated and therefore,
the subgroup $\Gamma_3^{(3)}$ satisfies \mbox{$[\Gamma_3:\Gamma_3^{(3)}] =
3^{16}$}. Thus $\Gamma_3^{(3)}$ has $3^{16}+1$ Schreier generators as a
subgroup of the $2$-generated group $\Gamma_3$. Therefore, computing the
sections $\Gamma_3^{(i)} / \Gamma_3^{(i+1)}$, $i\geq 4$, with the above method
is hard in practice.
\subsection*{Acknowledgments}
I am grateful to Laurent Bartholdi for valuable comments and
suggestions.
\section{Scaling solutions at $H=0$}
\subsection{Numerical evaluation of scaling solutions}
In the main text, we make use of the scaling solutions for the disconnected correlation function at $H=0$,
\begin{equation}
\langle \sigma_r \sigma_0 \rangle = r^{-1/4} F_{\pm} (s)
\end{equation}
which is valid as $T \rightarrow T_c$ and $r \rightarrow \infty$ with $s$ fixed. The symbol $\pm$ denotes the solutions for $T > T_c$ and $T< T_c$, respectively. Analytical studies use $s^* = |z^2 +2 z -1|/\sqrt{z (1-z^2)}\, r$ with $z = \tanh(1/T)$ as the argument of this function; however, in the main text we used the scaling form $s = (4/T_c) (r/t^{-\nu})$. The solutions are of the form~\cite{wu1976spinb}:
\begin{widetext}
\begin{equation}
F_{\pm}(s) = 2^{-1/2} (2 \sinh(2/T))^{1/8} (s/2)^{1/4} \left( 1\mp \eta(s/2) \right) \eta(s/2)^{-1/2} \exp \left( \int_{s/2}^\infty dx \frac{x}{4} \eta(x)^{-2} ((1-\eta(x)^2)^2 - \eta^\prime (x)^2) \right)
\label{eq:FPlusMinus}
\end{equation}
\end{widetext}
$\eta(\theta)$ is the solution to the Painlev{\'e} differential equation of the third kind,
\begin{equation}
\frac{d^2 \eta}{d \theta^2} = \frac{1}{\eta} \left( \frac{d \eta}{d \theta} \right)^2 - \eta^{-1} + \eta^3 - \theta^{-1} \frac{d \eta}{d \theta}
\end{equation}
with boundary conditions
\begin{equation}
\eta(\theta) = -\theta \left[ \ln\left(\frac{\theta}{4}\right) + \gamma_E \right] + O(\theta^5 \ln^3 \theta)
\label{eq:smalleta}
\end{equation}
as $\theta \rightarrow 0$, and
\begin{equation}
\eta(\theta) = 1- \frac{2}{\pi} K_0(2 \theta) + O(e^{-4 \theta})
\label{eq:largeeta}
\end{equation}
as $\theta \rightarrow \infty$, where $K_0(x)$ is a modified Bessel function of the second kind.
The Painlev{\'e} transcendent $\eta(\theta)$ is not expressible in terms of elementary functions; to evaluate it numerically, we use tools available in the Chebfun Matlab package~\cite{chebfunv4}. We use a Chebyshev polynomial approximation for $\eta(\theta)$ between arguments of 0.003 and 3, and the asymptotics given in Equations~\ref{eq:smalleta} and~\ref{eq:largeeta} outside of this range. To evaluate Equation~\ref{eq:FPlusMinus}, we use integration subroutines available in Matlab and Python: the adaptive Simpson quadrature function \textit{quad} in Matlab, and the scipy.integrate.quad function, which draws from the Fortran library QUADPACK (mainly adaptive quadrature techniques). Our Matlab implementation and the Python module containing the Chebyshev polynomial for $\eta(\theta)$ and the $F_{\pm} (s)$ scaling function are available online~\cite{supplementalWeb}.
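To make this setup reproducible without Chebfun, here is a minimal pure-Python sketch (ours, independent of the implementation described above) that integrates the Painlev{\'e} III equation with a classical Runge-Kutta step, starting from the small-$\theta$ expansion of Equation~\ref{eq:smalleta}, and checks that the result approaches the large-$\theta$ form of Equation~\ref{eq:largeeta}:

```python
import math

def eta_rk4(theta0=0.003, theta1=3.0, h=1e-4):
    """Integrate the Painleve III equation for eta(theta) with RK4,
    starting from the small-theta asymptotic expansion."""
    g = 0.5772156649015329                       # Euler-Mascheroni constant
    y = -theta0 * (math.log(theta0 / 4.0) + g)   # eta(theta0)
    yp = -(math.log(theta0 / 4.0) + g) - 1.0     # eta'(theta0)

    def f(t, y, yp):                             # eta'' from the ODE
        return yp * yp / y - 1.0 / y + y ** 3 - yp / t

    t, n = theta0, int(round((theta1 - theta0) / h))
    for _ in range(n):
        k1, l1 = yp, f(t, y, yp)
        k2, l2 = yp + h * l1 / 2, f(t + h / 2, y + h * k1 / 2, yp + h * l1 / 2)
        k3, l3 = yp + h * l2 / 2, f(t + h / 2, y + h * k2 / 2, yp + h * l2 / 2)
        k4, l4 = yp + h * l3, f(t + h, y + h * k3, yp + h * l3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        yp += h * (l1 + 2 * l2 + 2 * l3 + l4) / 6
        t += h
    return y

# at theta = 3 the solution should be close to 1 - (2/pi) K_0(6)
eta3 = eta_rk4()
assert 0.995 < eta3 < 1.0005
```

The same trajectory, tabulated on a fine grid, is what the Chebyshev approximation described above represents.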
\subsection{Effective Functional Form}
For convenience and less opaque representation of the scaling solutions, we also provide an effective functional form, which is good to $3.4\%$ relative accuracy for $F_+$ and $1\%$ relative accuracy for $F_-$. These functions are an interpolation between the small and large distance asymptotics for the exact scaling solutions at $H=0$.
Both scaling functions have $F_\pm (0) = C_0 = 0.7033801577...$. The asymptotic large-$r$ behavior differs above and below criticality. The $T>T_{c}$ case is particularly simple, in part because $\langle M^{2} \rangle = 0$. We simply choose the effective large-$r$ functional form to be the Ornstein-Zernike exponential decay, which behaves as $s^{-1/4} \exp(-s)$ for $T>T_c$. The amplitude of this piece, called $p_1$, is determined by an asymptotic expansion of the large-distance Bessel functions, giving $p_1 = 1/(2^{1/8} \sqrt{\pi})$.
We find a simple and effective nonlinear interpolation that we employ in both the high- and low-temperature cases. Empirically, both functions are well described by $F^{\text{fit}}_{\pm} = (B(s)^{|k|} (\text{Small-}r)^{k} + (1-B(s))^{|k|} (\text{Large-}r)^{k})^{1/k}$, where the fit parameter $k$ controls the nonlinear interpolation, and a weighting function $B(s)$ with the limits $B(0)=1$ and $B(\infty) = 0$ controls the weight of each piece. For $T>T_c$ we write:
\begin{equation}
F^{\text{fit}}_{+}(s) = \left(0.70338^{k} B(s)^{|k|} + (1-B(s))^{|k|} (p_1 \cdot s^{-1/4} \exp(- s))^{k} \right)^{1/k}.
\label{eq:Fplus}
\end{equation}
with
\begin{equation}
B(s) = \exp(-(c s)^b)
\end{equation}
If $k$ is negative, the weights must instead be taken as $(1/B(s))^{k}$ and $(1/(1-B(s)))^{k}$ for $F^{\text{fit}}_+(s)$ to have the right limits at $s=0$ and $s=\infty$; hence the absolute value $|k|$ in the exponents of those terms.
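For concreteness, the interpolation of Equation~\ref{eq:Fplus} takes only a few lines of Python (ours), using the quoted fit parameters; the two assertions check the $s\to 0$ plateau and the large-$s$ Ornstein-Zernike tail:

```python
import math

P1 = 1.0 / (2 ** 0.125 * math.sqrt(math.pi))   # large-s amplitude p_1
C0 = 0.7033801577                              # F_+(0)

def F_plus_fit(s, c=1.73, b=0.92, k=3.8):
    """Effective interpolation for F_+(s) with the fitted parameters."""
    B = math.exp(-((c * s) ** b))
    large = P1 * s ** -0.25 * math.exp(-s)
    return (B ** abs(k) * C0 ** k + (1 - B) ** abs(k) * large ** k) ** (1 / k)

# small s stays near C0; large s approaches the Ornstein-Zernike tail
assert 0.6 < F_plus_fit(0.01) < 0.73
assert abs(F_plus_fit(5.0) - P1 * 5.0 ** -0.25 * math.exp(-5.0)) < 1e-4
```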
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{FPlusFit}
\caption{{\bf Fit to Painlev{\'e} results.} Fit of Equation~\ref{eq:Fplus} with fit parameters $c = 1.73$, $b =0.92$, $k=3.8$. Red dots are the Painlev{\'e} results, and the black line is the result of the fit.}
\label{fplusFit}
\end{center}
\end{figure}
Our form matches the exact solution to within $3.4\%$ maximum relative error, with an average error of $1.5\%$, over the range of our fit $0.01 \le s \le 10$ (see Figure~\ref{fplusFit}).
We now turn to the $T<T_c$ case. The philosophy for constructing the effective functional form is identical to the high-temperature case, although for the disconnected correlation function the long-distance asymptote is dominated by the magnetization $\langle M \rangle^2$. For the connected correlation function $\langle \sigma_0 \sigma_r \rangle - \langle \sigma_0 \rangle \langle \sigma_r \rangle$, with the magnetization squared subtracted off, the long-distance decay of the scaling function is $p_2 s^{-7/4} \exp(-2 s)$, with $p_2 = 1/(2^{21/8} \pi)$. In our effective functional form, for simplicity, we choose to fit only the connected correlation function, interpolating between the short-distance behavior and the long-distance decay, while adding the scaling magnetization squared to the result. (If one wishes, analytic corrections may be incorporated into the scaling magnetization as well.) We use:
\begin{equation}
\begin{split}
F^{\text{fit}}_{-} (s) =& \left((B(s) \cdot 0.700883)^k + ((1-B(s))(p_2 s^{-7/4} \right. \\
&\left. \vphantom{(B(s))^k}\exp(-s)))^k \right)^{1/k} + 2^{3/8} s^{1/4}
\label{eq:nomagFminus}
\end{split}
\end{equation}
Here,
\begin{equation}
B(s) = \exp(-(s/c)^b)
\end{equation}
From fits, we find $c=0.007 \pm 0.07$, $b= 0.4 \pm 2$, and $k=-0.2\pm0.1$ .
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{FMinusFit}
\caption{{\bf Fit to Painlev{\'e} results.} Fit of Equation~\ref{eq:nomagFminus} with fit parameters $c=3.62$, $b=0.87$, and $k=-0.4$. The magnetization is separated out, so that we may subtract it off for the interpolation, which simplifies matters. Red dots are the Painlev{\'e} results, and the black line is the result of the fit.}
\label{fminusFitTwo}
\end{center}
\end{figure}
The fit is good to a maximum of $1\%$ error when compared against our Chebyshev form over the range of our fit $0.01 \le s \le 10$ (Figure~\ref{fminusFitTwo}).
\section{High-precision scaling form for the susceptibility}
We use a high-precision form of the susceptibility as an integration constraint for our functional form. The susceptibility was derived from the high-precision approximate forms for the equation of state in Reference~\cite{caselle2001criticalb}. Using the parametric representation
\begin{equation}
\begin{split}
t &= \frac{T-T_c}{T} = R(1-\theta^{2}) \\
h &= H/T = h_{0} R^{\beta\delta}h(\theta) \\
M &= m_{0} R^{\beta} \theta
\end{split}
\end{equation}
the high-precision form for the equation of state for $h(\theta)$ is
\begin{equation}
\begin{split}
h(\theta) =& \left( \theta - \frac{\theta^3}{1.16951} \right) (1 - 0.222389 \theta^2 - 0.043547\theta^4 \\
&- 0.014809\theta^6 - 0.007168\theta^8),
\end{split}
\end{equation}
and the definition $\chi = d M / d h$, we have:
\begin{widetext}
\begin{equation}
\chi(R, \theta)= R^{-7/4}m_0 \frac{\left(1+(2\beta-1) \theta ^2\right)}{h_0 \left(1-0.482344 \theta ^2-0.0750424 \theta ^4-0.0262771 \theta ^6-0.0234342 \theta ^8+0.0385732 \theta ^{10}-0.0444357 \theta ^{12}\right)}
\end{equation}
\end{widetext}
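As an internal consistency check (ours, not part of the reference), the closed form can be compared with a numerical derivative $dM/dh$ taken along a path of constant $t = R(1-\theta^2)$, with the normalization $m_0 = h_0 = 1$ assumed and the Ising exponents $\beta = 1/8$, $\delta = 15$:

```python
beta, delta = 0.125, 15.0

def h_of_theta(th):
    """The polynomial h(theta) of the parametric equation of state."""
    return (th - th ** 3 / 1.16951) * (1 - 0.222389 * th ** 2
            - 0.043547 * th ** 4 - 0.014809 * th ** 6 - 0.007168 * th ** 8)

def chi(R, th):
    """Closed form for the susceptibility (with m0 = h0 = 1)."""
    den = (1 - 0.482344 * th ** 2 - 0.0750424 * th ** 4 - 0.0262771 * th ** 6
           - 0.0234342 * th ** 8 + 0.0385732 * th ** 10 - 0.0444357 * th ** 12)
    return R ** -1.75 * (1 + (2 * beta - 1) * th ** 2) / den

def chi_fd(t, th, d=1e-6):
    """dM/dh by central differences at fixed t, via R(theta) = t/(1-theta^2)."""
    def M(x):
        return (t / (1 - x ** 2)) ** beta * x
    def h(x):
        return (t / (1 - x ** 2)) ** (beta * delta) * h_of_theta(x)
    return (M(th + d) - M(th - d)) / (h(th + d) - h(th - d))

# on the t-axis (theta = 0) the closed form reduces to R^(-7/4)
assert abs(chi(2.0, 0.0) - 2.0 ** -1.75) < 1e-12
# away from the axis, the closed form matches the numerical derivative
assert abs(chi_fd(0.01, 0.5) / chi(0.01 / (1 - 0.5 ** 2), 0.5) - 1) < 1e-3
```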
\section{Analytic Corrections to Scaling}
The analytic corrections to scaling for the RG fields $u_t$ and $u_h$ are given in the coordinates $t$ and $h$ in the literature. Since we give our function in parametric coordinates, here we provide forms for the corrections expressed in $R$ and $\theta$. In the main text we state that:
\begin{align}
u_t &= t (1+ c_t t + O(t^2)) \\
u_h &= h(1+ c_h t + O(t^2)).
\end{align}
Since $t = R (1-\theta^2)$, $R$ scales with $t$, so the first order corrections should also be linear in $R$. However, $\theta$ is not small as it can take any value from $0$ to $\theta_c \approx 1.08144...$, so we will assume that
\begin{align}
u_R &= R (1 + g_1(\theta) R + O(R^2)) \\
u_\theta &= \theta (1+ g_2(\theta) R + O(R^2)).
\end{align}
We can then solve for $g_1(\theta)$ and $g_2(\theta)$ using the Schofield definition:
\begin{widetext}
\begin{align}
g_1(\theta) &= \frac{ c_h \theta^3 \left(2-6.1549 \theta^2+6.60301 \theta^4-2.69648 \theta^6+0.214502 \theta^8+0.0351324 \theta^{10}-0.0135271 \theta^{12}+0.0122581 \theta^{14}\right) }{\left(\theta -1.48234 \theta ^3+0.407301 \theta ^5+0.0487653 \theta ^7+0.00284291 \theta ^9+0.0620074 \theta ^{11}-0.0830089 \theta ^{13}+0.0444357 \theta^{15}\right)} \cr
&+ \frac{c_t \theta \left(1-6.23234 \theta ^2+13.4301 \theta ^4-12.7392 \theta ^6+5.00997 \theta ^8-0.343026 \theta ^{10}-0.210889 \theta ^{12}+0.152808 \theta ^{14}-0.0674197 \theta ^{16}\right)}{\left(\theta -1.48234 \theta ^3+0.407301 \theta ^5+0.0487653 \theta ^7+0.00284291 \theta ^9+0.0620074 \theta ^{11}-0.0830089 \theta ^{13}+0.0444357 \theta^{15}\right)} \\
g_2(\theta) & = \frac{\left((c_h - \beta \delta c_t) \left(1-\theta^2\right)^2 \left(\theta -0.855059 \theta ^3\right) \left(1.-0.222389 \theta ^2-0.043547 \theta ^4-0.014809 \theta ^6-0.007168 \theta ^8\right)\right)}{\left(\theta -0.482344 \theta ^3-0.0750424 \theta ^5-0.0262771 \theta ^7-0.0234342 \theta ^9+0.0385732 \theta^{11}-0.0444357 \theta^{13}\right)}
\end{align}
\end{widetext}
\section{Accuracies and Errors}
Here we report the quality of our interpolation form in terms of average cost per data point and average relative error per data point for each of the simulation datasets at $R = 0.0336737$, with and without analytic corrections. We define the un-weighted residual to be:
\begin{equation}
r_j = D(j, \theta, R) - C(j, \theta, R)
\end{equation}
where $D(j, \theta, R)$ is the data, $C(j, \theta, R)$ the interpolating form. The average cost was calculated with the covariance matrix multiplying the residual:
\begin{equation}
\text{cost} = r_i \sigma^{\text{cov}}_{ij} r_j / N.
\label{eq:avcost}
\end{equation} The relative error was measured as $\langle e_{\text{rel}}^2 \rangle$,
where
\begin{equation}
e_{\text{rel}} = \frac{D(j, \theta, R) - C(j, \theta, R)}{C(j, \theta, R)} .
\label{eq:relerror}
\end{equation}
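The two diagnostics can be sketched in a few lines; this is our illustration, not the analysis code, and `weight` stands in for the matrix $\sigma^{\text{cov}}_{ij}$ of Equation~\ref{eq:avcost}, taken here simply as a given $N\times N$ matrix.

```python
def fit_quality(data, model, weight):
    """Average cost and rms relative error per data point.

    `data` and `model` hold D(j, theta, R) and C(j, theta, R) at the
    retained points; `weight` plays the role of sigma^cov_{ij}.
    """
    n = len(data)
    r = [d - m for d, m in zip(data, model)]          # unweighted residuals
    cost = sum(r[i] * weight[i][j] * r[j]
               for i in range(n) for j in range(n)) / n
    e_rel = [ri / m for ri, m in zip(r, model)]       # relative errors
    rms = (sum(e * e for e in e_rel) / n) ** 0.5
    return cost, rms
```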
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c|c|c|c|c}
$\theta$ & h& T& cost & error (\%) \cr
\hline
0 &0&2.348260&0.5&2.85 \cr
0.10&1.612125e-04&2.347442&5.1&1.74 \cr
0.20&3.119619e-04&2.344991&2.9&3.55 \cr
0.30&4.420782e-04&2.340918&3.4&1.45 \cr
0.40&5.419973e-04&2.335240&3.8&2.72 \cr
0.50&6.031200e-04&2.327979&4.5&2.14 \cr
0.60&6.182554e-04&2.319166&5.1&0.91 \cr
0.70&5.822216e-04&2.308836&6.1&2.17 \cr
0.80&4.927350e-04&2.297031&8.7&1.96 \cr
0.90&3.518271e-04&2.283797&8.6&1.28 \cr
1.00&1.681982e-04&2.269185&0.7&0.20 \cr
1.01&1.481081e-04&2.267650&2.1&0.53 \cr
1.02&1.277970e-04&2.266102&2.4&2.58 \cr
1.03&1.072934e-04&2.264541&6.3&0.86 \cr
1.04&8.662751e-05&2.262967&6.8&1.00 \cr
1.05&6.583165e-05&2.261380&6.9&1.02 \cr
1.06&4.494000e-05&2.259780&5.5&0.92 \cr
1.07&2.398890e-05&2.258167&3.1&0.95 \cr
1.08&3.016912e-06&2.256541&1.0&0.19 \cr
$\theta_c$&0&2.256306&3.6&0.85 \cr
\end{tabular}
\end{center}
\caption{{\bf Cost and Errors for Interpolation} The quality of our interpolation function is tabulated here in terms of average relative error (Equation~\ref{eq:relerror}) and average cost (Equation~\ref{eq:avcost}). For the calculation of this table, we skip the first 3 points (where lattice effects and higher-order corrections to scaling dominate) and data for $C<10^{-2}$ (where the error is dominated by insufficient numerical statistics). (Note that the only data sets with values smaller than $10^{-2}$ are $\theta=0$ and $\theta=0.1$.) For $\theta=0$, the statistical error becomes comparable to the data value once $C(r)<0.01$, approaching and then exceeding $50\%$ of the data value; for $\theta=0.1$ it approaches $5$--$10\%$ once $C(r)<0.01$. We expect our scaling form to be excellent in these large-distance regimes, where the corrections to scaling are negligible and the effects of the external field are small.}
\label{table:accuracies}
\end{table}%
\begin{table}[htdp]
\begin{center}
\begin{tabular}{c|c|c|c}
$\theta_{\text{eff}}$ & $R_{\text{eff}}$ & cost & error (\%) \cr
\hline
0.0000000&0.0339815&0.8&1.81 \cr
0.0992207&0.0340277&1.4&1.43 \cr
0.1985608&0.0340413&1.1&2.91 \cr
0.2981203&0.0340234&1.2&2.11 \cr
0.3979613&0.0339790&1.4&1.95 \cr
0.4980952&0.0339168&1.3&1.41 \cr
0.5984751&0.0338478&1.3&1.28 \cr
0.6989978&0.0337834&1.6&1.52 \cr
0.7995184&0.0337330&1.5&1.49 \cr
0.8998839&0.0336999&1.4&1.34 \cr
1.0000000&0.0335563&0.1&0.14 \cr
1.0099992&0.0336703&0.5&0.53 \cr
1.0199971&0.0336664&2.0&2.49 \cr
1.0299941&0.0336619&0.9&0.95 \cr
1.0399909&0.0336568&1.0&1.06 \cr
1.0499883&0.0336507&1.1&1.10 \cr
1.0599874&0.0336434&0.9&0.91 \cr
1.0699899&0.0336345&0.7&0.74 \cr
1.0799981&0.0336234&0.6&0.56 \cr
1.0814389&0.0336216&0.4&0.43 \cr
\end{tabular}
\end{center}
\caption{{\bf Cost and Errors with Corrections} Here are the accuracies of the interpolation with all first order corrections (for $a(T)$, $\xi(T)$, $u_t$, and $u_h$) reported in terms of average relative error (Equation~\ref{eq:relerror}) and average cost (Equation~\ref{eq:avcost}). As in Table~\ref{table:accuracies}, we skip the first 3 points and data below $10^{-2}$. Note that with corrections the errors are smaller.}
\label{table:accuracies_corrections}
\end{table}%
We may note that the relative error and cost do not necessarily reflect the same measure of theory quality. Relative error gives us a measure of the level of accuracy for the theory numbers, irrespective of how large the error bars are on the data. Cost, on the other hand, is weighted by the error of the data, and when the average cost is near or smaller than 1.0, the error is mainly caused by statistical fluctuations in the data. The higher the cost, the less well the theory is capturing the data to within error bars.
The accuracies in Tables~\ref{table:accuracies} and~\ref{table:accuracies_corrections} were calculated for distances where the value of the disconnected correlation function satisfies $C(j,\theta,R) >0.01$; this means skipping the points above $r=44$ for $\theta=0$ and above $r=51$ for $\theta=0.1$, since for $\theta=0$ and $r>44$ the errors are around $50\%$ of the data value. We have also skipped the first $3$ points of each data set, where short-distance discrepancies are dominated by lattice effects and higher-order analytic corrections to scaling (see Section~\ref{sec:smalldistance}).
Notice in both tables that the special points $\theta=0$ and $\theta=\theta_c$, whose exact results we interpolate between, have a relatively small cost, and also that the corrections to scaling improve the overall accuracy. As noted in the main text, the analytic corrections to scaling are small and do not uniformly improve fits away from these special values. This is not surprising: since the analytic corrections to scaling this close to the critical point are smaller than our interpolation errors in the scaling function, we might expect them to have cancelling effects roughly half the time. The analytic corrections should be of significant value farther from the critical point at all fields and temperatures.
\section{Small Distance Discrepancies}
\label{sec:smalldistance}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{smallRTPlus}
\caption{\textbf{Small Distance Discrepancies for $T>T_c$, $H=0$} This figure shows the small distance discrepancies for data along $T>T_c$, $H=0$. The dashed line is the scaling theory, while the solid line includes all first order corrections in $a(T)$, $\xi(T)$, and $u_t$. One can see that the discrepancy between simulation data and theory gets smaller as the distance increases.}
\label{fig:smallRTPlus}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{smallRTMinus}
\caption{\textbf{Small Distance Discrepancies for $T<T_c$, $H=0$} This figure shows the small distance discrepancies for data along $T<T_c$, $H=0$. The dashed line is the scaling theory, while the solid line includes all first order corrections in $a(T)$, $\xi(T)$, and $u_t$. One can see that the discrepancy between simulation data and theory gets smaller as the distance increases.}
\label{fig:smallRTMinus}
\end{center}
\end{figure}
The scaling solutions differ from the numerical data at small distances, as shown in Figures~\ref{fig:smallRTPlus},~\ref{fig:smallRTMinus} and~\ref{fig:smallRAllTheta}. We have investigated where this discrepancy stems from by looking along the $H=0$ axes, where exact scaling solutions are known. Consistent with the literature, we see no evidence of singular corrections (which would be indicated by a power law), nor do we see a dependence between the discrepancy and the distance from the critical point. Most likely the small distance discrepancy is due to the fact that the scaling solution goes as $C_{\mathrm{theory}}(r) \sim a_0 r^{-1/4}$ for small distances, diverging as $r \rightarrow 0$, whereas for any data $C_{\mathrm{data}}(0) = 1$. Therefore, the ratio between theory and data, $C_{\mathrm{theory}}/C_{\mathrm{data}}$, diverges as $r \rightarrow 0$. We attempted to multiply our function by $1/\exp(A/r)$, or equivalently $\exp(-A/r)$ with $A>0$, to incorporate the lattice corrections, but a fit with this correction does not noticeably improve the quality of our fit.
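To make the attempted correction concrete, here is a minimal sketch (ours; the values of $a_0$ and $A$ are illustrative, not fitted constants) showing that the factor $\exp(-A/r)$ removes the $r^{-1/4}$ divergence at the origin while leaving large distances essentially untouched.

```python
import math

def C_theory(r, a0=1.0):
    """Small-distance scaling form: C(r) ~ a0 * r^(-1/4), divergent at r = 0."""
    return a0 * r ** (-0.25)

def C_corrected(r, a0=1.0, A=0.5):
    """Scaling form times exp(-A/r): vanishes as r -> 0, unchanged at large r."""
    return C_theory(r, a0) * math.exp(-A / r)
```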
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{smallRAllTheta}
\caption{\textbf{Small Distance Corrections for Varying $H$ and $T$} This is a plot that shows the small distance discrepancies for all the data we included in matching our interpolation results. It is a larger version of the inset of Figure~2 of the main paper, so that we may see the details of where the scaling theory fails.}
\label{fig:smallRAllTheta}
\end{center}
\end{figure}
\section{Numerical Methods}
\subsection{Wolff Algorithm in a field}
The Wolff algorithm~\cite{Wolff89} efficiently simulates the 2D Ising model in zero field, and requires small modifications to be used in non-zero magnetic field. In the usual Wolff algorithm, which generates members of the ensemble of the Ising model in zero magnetic field, a random spin is chosen which `seeds' a cluster. All of the nearest neighbors of this new cluster that have the same spin are then stochastically added to the cluster with the Wolff probability, $P_{\text{Wolff}}=1-e^{-\beta J}$. The nearest neighbors of these new additions to the cluster are again added with the Wolff probability, and this process is iterated until a step adds no new spins to the cluster. At this juncture, the entire cluster is flipped. To implement a positive magnetic field, $h$, we distinguish between clusters which flip spins from up to down and those that flip spins from down to up. Clusters that flip spins from down to up proceed as usual, but whenever an up spin is added to a down cluster, the entire cluster is rejected stochastically with probability $1-\exp(-h)$. In implementing this algorithm, we were careful to use a predetermined number of proposed cluster flips, rather than a set number of flipped spins or successful cluster flips.
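A minimal sketch of this modified cluster update (ours, not the production code): the lattice is a dict over an $L\times L$ periodic grid, the bond probability is written in the standard convention $1-e^{-2J/T}$ (the text's $1-e^{-\beta J}$ reflects its convention for the coupling), and the per-spin field rejection with probability $1-e^{-h}$ is applied collectively as an acceptance factor $e^{-h\,|\mathrm{cluster}|}$.

```python
import math
import random

def wolff_step(spins, L, T, h, J=1.0, rng=random):
    """One proposed Wolff cluster flip for the 2D Ising model in field h >= 0.

    `spins` maps (i, j) -> +-1 on an L x L periodic lattice.
    Returns True if the cluster was flipped, False if it was rejected.
    """
    p_add = 1.0 - math.exp(-2.0 * J / T)
    seed = (rng.randrange(L), rng.randrange(L))
    sign = spins[seed]                 # the cluster collects spins of this sign
    cluster = {seed}
    frontier = [seed]
    while frontier:
        i, j = frontier.pop()
        for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                   (i, (j + 1) % L), (i, (j - 1) % L)):
            if nb not in cluster and spins[nb] == sign and rng.random() < p_add:
                cluster.add(nb)
                frontier.append(nb)
    # A cluster of up spins flips them down, fighting the positive field.
    if sign > 0 and h > 0 and rng.random() > math.exp(-h * len(cluster)):
        return False                   # reject the whole cluster
    for site in cluster:
        spins[site] = -spins[site]
    return True
```

Counting calls to `wolff_step`, rather than flipped spins or accepted flips, matches the bookkeeping of proposed cluster flips described above.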
We implemented all of our simulations on $1024 \times 1024$ square lattices. Equilibration times were conservatively estimated by waiting many multiples of the time it takes for the magnetization to reach, and then oscillate around, its long-time value. After equilibrating, we determined the approximate correlation time: the number of proposed clusters that must on average be flipped to generate a new configuration whose magnetization is almost uncorrelated with the previous one. We generated 100 such independent configurations for each $h$ and $t$ value, and used these to estimate the correlation functions.
\section{Introduction}
The logarithmic minimal models ${\cal LM}(p,p')$ were introduced in \cite{PRZ0607},
and the present paper concerns these models in the Virasoro picture, without
consideration of possible extensions with respect to some $\Wc$-algebras.
As logarithmic conformal field theories \cite{RS9203,Gur9303,Flohr0111,Gab0111},
the models arise in the continuum scaling limit of an infinite family of Yang-Baxter integrable
lattice models labelled by the pair of coprime integers $p,p'$.
For each pair of positive integers $r,s\in\mathbb{N}$, there is a so-called
Kac representation associated with an integrable boundary condition
in the lattice model \cite{RP0706,RP0707}.
Despite their importance, these Kac representations are in general rather poorly
understood as modules over the Virasoro algebra. Their characters are known empirically
from the lattice approach, but this is in general not sufficient to determine the
underlying representations.
Fusion can be implemented on the lattice without detailed knowledge of the structure of the
Kac representations. This gives significant insight into the fusion algebra generated from
repeated fusion of the Kac representations and has led to a concrete conjecture
for the so-called fundamental fusion algebra and its representation content
\cite{RP0706,RP0707}.
This fundamental fusion algebra is generated from repeated fusion of the
fundamental Kac representations $(2,1)$ and $(1,2)$, but does not involve all
Kac representations.
The lattice implementation of fusion also provides further information
on the structure of the representations themselves, but certain crucial
questions are left unanswered.
However, the fusion rules should be compatible with the outcome of
the Nahm-Gaberdiel-Kausch (NGK) algorithm \cite{Nahm9402,GK9604},
thus providing an additional tool to determine the fusion rules in ${\cal LM}(p,p')$.
The NGK algorithm has
played a prominent role in the study of fusion in the so-called augmented $c_{p,p'}$ models
\cite{GK9604,EF0604} as well as in \cite{MR0708,MR0711} on critical percolation
and related models. Alternative approaches to the computation of fusion rules
in these models are discussed in \cite{Flohr9605,RS0701}.
The logarithmic minimal models are non-rational conformal field theories
as they contain infinitely many Virasoro representations.
Some of these can be organized in finitely many extended representations
associated with new integrable boundary conditions \cite{PRR0803,RP0804,Ras0805}.
This is referred to as the ${\cal W}$-extended picture of the logarithmic
minimal models, and the extension is believed to be with respect to the triplet
$\Wc_{p,p'}$ algebra
\cite{Kausch9510,GK9606,FGST0606}.
Due to their `rational nature', these ${\cal W}$-extended models have been studied
extensively, see \cite{PR1010} and references therein.
Here we consider the infinite family of logarithmic minimal models ${\cal LM}(1,p)$.
We propose a classification of the Kac representations $(r,s)$
for all $r,s\in\mathbb{N}$ as finitely-generated submodules of Feigin-Fuchs modules \cite{FF89},
and present a conjecture for their fusion algebra. We thus find that the
only higher-rank representations generated by repeated fusion of the Kac
representations are the rank-2 modules $\R_r^b$ already present in
the fundamental fusion algebra.
The proposals are tested using a combination of
the lattice approach and applications of the NGK algorithm.
Under some natural assumptions about the continuum scaling limit
of the lattice model, some results are in fact {\em exact} rather than conjectural.
We also discuss how the fusion algebra may be extended by inclusion
of the modules contragredient to the Kac representations, and determine polynomial
fusion rings isomorphic to the conjectured Kac fusion algebra and its contragredient extension.
\section{Logarithmic minimal model ${\cal LM}(1,p)$}
\label{SecLM1p}
The logarithmic minimal model ${\cal LM}(1,p)$ is a logarithmic conformal field theory
with central charge
\be
c=1-6\frac{(p-1)^2}{p}
\label{c}
\ee
Here we are mainly interested in the Virasoro representations
associated with the boundary conditions appearing in the lattice approach to
${\cal LM}(1,p)$ as described in \cite{PRZ0607,RP0707}, but consider also other representations.
\subsection{Highest-weight modules}
Before describing the representations associated with boundary conditions, let us
recall some basic facts about highest-weight modules over the Virasoro algebra
with central charge given by (\ref{c}).
For each pair of positive Kac labels $r,s\in\mathbb{N}$, the highest-weight Verma module of
conformal weight $\D_{r,s}$ is denoted by $V_{r,s}$ where $\D_{r,s}$ is given by the Kac formula
\be
\D_{r,s}=\frac{(rp-s)^2-(p-1)^2}{4p},\qquad r,s\in\mathbb{Z}
\label{Drs}
\ee
As indicated, it is convenient to consider also negative or
vanishing Kac labels, in particular when applying the Kac-table symmetries
\be
\D_{r,s}=\D_{-r,-s},\qquad \D_{r,s}=\D_{r+k,s+kp},\qquad k\in\mathbb{Z}
\ee
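Both symmetries are immediate from (\ref{Drs}); as a quick exact sanity check (our illustration, using rational arithmetic):

```python
from fractions import Fraction

def delta(r, s, p):
    """Kac weight Delta_{r,s} = ((r*p - s)^2 - (p - 1)^2) / (4*p), exactly."""
    return Fraction((r * p - s) ** 2 - (p - 1) ** 2, 4 * p)
```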
Each distinct conformal weight appearing in (\ref{Drs}) occurs exactly once in the set
\be
\{\D_{r,s};\,r\in\mathbb{N},\,s\in\mathbb{Z}_{1,p}\}
\ee
where we have introduced the notation
\be
\mathbb{Z}_{n,m}=\mathbb{Z}\cap[n,m]
\ee
The Verma module $V_{r,s}$ has a proper submodule at Virasoro level $rs$ given by $V_{r,-s}$
(where $V_{r,-s}=V_{r',s'}$ for some $r',s'\in\mathbb{N}$),
allowing us to define the quotient module
\be
Q_{r,s}=V_{r,s}/V_{r,-s}
\label{Qrs}
\ee
Its character is given by
\be
\chit[Q_{r,s}](q)=\frac{q^{\frac{1-c}{24}+\D_{r,s}}}{\eta(q)}\big(1-q^{rs}\big)
=\frac{q^{(rp-s)^2/4p}}{\eta(q)}\big(1-q^{rs}\big)
\label{Qchar}
\ee
where $q$ is the modular nome while the Dedekind eta function is given by
\be
\eta(q)=q^{1/24}\prod_{m\in\mathbb{N}}(1-q^m)
\ee
This module is in general not irreducible. The
irreducible highest-weight module $M_{r,s}$ of conformal weight $\D_{r,s}$
is obtained by quotienting out the {\em maximal} proper submodule of $V_{r,s}$, and we denote its
character by
\be
\ch_{r,s}(q)=\chit[M_{r,s}](q)
\ee
For
\be
s=s_0+kp,\qquad s_0\in\mathbb{Z}_{1,p-1};\quad k\in\mathbb{N}_0
\label{s}
\ee
the character of the quotient module $Q_{r,s}$ can be written as
\be
\chit[Q_{r,s}](q)=\sum_{j=0}^{\min(2r-1,2k)}\ch_{r+k-j,(-1)^j s_0+(1-(-1)^j)p/2}(q)
\ee
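Since $\ch_{r',s'}(q)=\chit[Q_{r',s'}](q)$ for $s'\in\mathbb{Z}_{1,p}$, stripping the common factor $1/\eta(q)$ reduces both sides of this decomposition to finite sums of terms $\pm\,q^{(r'p\mp s')^2/4p}$, so the identity can be checked exactly. A sketch of such a check (our illustration, with exact rational exponents):

```python
from fractions import Fraction
from collections import Counter

def char_numerator(r, s, p):
    """eta(q) * chi[Q_{r,s}](q) = q^{(rp-s)^2/4p} - q^{(rp+s)^2/4p},
    stored as a map exponent -> coefficient with exact exponents."""
    c = Counter()
    c[Fraction((r * p - s) ** 2, 4 * p)] += 1
    c[Fraction((r * p + s) ** 2, 4 * p)] -= 1
    return c

def nonzero(c):
    return {e: n for e, n in c.items() if n}

def rhs_decomposition(r, s0, k, p):
    """Right-hand side: sum of irreducible character numerators."""
    c = Counter()
    for j in range(min(2 * r - 1, 2 * k) + 1):
        sj = s0 if j % 2 == 0 else p - s0
        c.update(char_numerator(r + k - j, sj, p))
    return c
```

The sum telescopes, leaving exactly the two terms of $\chit[Q_{r,s_0+kp}](q)$.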
\subsection{Kac representations}
There is a so-called Kac representation $(r,s)$ for each pair of positive Kac labels
$r,s\in\mathbb{N}$. It is associated with a Yang-Baxter integrable boundary condition in the lattice
approach to ${\cal LM}(1,p)$ \cite{PRZ0607,RP0707} and arises in the continuum scaling limit.
As we will discuss, these Kac representations can be irreducible, fully reducible
or reducible yet indecomposable as modules over the Virasoro algebra.
They are all of rank 1 as the dilatation generator
(the Virasoro mode $L_0$) is found to be diagonalizable.
Empirically, the Virasoro character of the Kac representation $(r,s)$ is identical to the character
(\ref{Qchar}) of the quotient module $Q_{r,s}$
\be
\chit_{r,s}(q)=\chit[Q_{r,s}](q)
\label{char}
\ee
Aside from its character and its rank-1 nature, it is not, however, a priori clear from the lattice
what type of Virasoro module the Kac representation $(r,s)$ actually is.
A typical dilemma is the distinction between a reducible yet indecomposable
module and the direct sum of its irreducible subfactors (subquotients).
By construction, the indecomposable module has
the same character as the direct sum but they are nevertheless inequivalent due
to the indecomposable nature of the former. The situation can be
rather intricate, as we will argue below, since some Kac representations are found to be
non-highest-weight representations. Our assertion is that they can all be viewed as
finitely-generated submodules
of Feigin-Fuchs modules, see (\ref{emb}) and (\ref{23013}), for example.
In particular, despite the character identity (\ref{char}),
we thus assert that $(r,s)$ and $Q_{r,s}$ in general differ as representations.
It follows from the character expressions that
$(r,s)$ is irreducible for $s\in\mathbb{Z}_{1,p}$ and that $(1,kp)$ is irreducible for
$k\in\mathbb{N}$, thus giving rise to the identifications $(1,rp)\equiv(r,p)$ \cite{RP0707}.
These are the only irreducible Kac representations.
Following Section 4.2 in \cite{RP0707},
one deduces that the Kac representation $(r,kp)$ is fully reducible
\be
(r,kp)=\bigoplus_{j=|r-k|+1,\,{\rm by}\,2}^{r+k-1}(j,p)
\label{pkpk}
\ee
The remaining Kac representations were not fully characterized in \cite{RP0707}.
Below, we offer a conjecture for the classification of the {\em full} set of Kac representations.
\subsection{Rank-2 representations}
The infinite family
\be
\{\R_r^b\,;\, r\in\mathbb{N},\,b\in\mathbb{Z}_{1,p-1}\}
\label{r2}
\ee
of reducible yet indecomposable modules of rank 2 arises
from repeated fusion of {\em irreducible} Kac representations \cite{RP0707}.
This follows by isolating $\R_r^b$ in
\be
(1,b+1)\otimes(1,rp)=\bigoplus_\beta^b\R_r^\beta,\qquad b\in\mathbb{Z}_{1,p-1}
\ee
for example, as $b$ increases from 1 to $p-1$.
Here we have introduced the summation convention
\be
\bigoplus_n^N R_n=\bigoplus_{n=\eps(N),\,\mathrm{by}\,2}^N R_n,\qquad
\eps(N)=\frac{1-(-1)^N}{2}=N\ (\mathrm{mod}\ 2)
\label{parity}
\ee
and extended the notation $\R_r^b$ by writing
\be
\R_r^0\equiv(1,rp)\equiv(r,p)
\label{R0}
\ee
for the irreducible rank-1 module $(r,p)$.
The rank-2 module $\R_r^b$ is characterized by the structure diagram
\psset{unit=.25cm}
\setlength{\unitlength}{.25cm}
\be
\mbox{}
\hspace{-1cm}
\mbox{
\begin{picture}(13,6)(0,3.5)
\unitlength=1cm
\thinlines
\put(-1.8,1){$\R_1^b:$}
\put(1,2){$M_{2,b}$}
\put(-.4,1){$M_{1,p-b}$}
\put(2,1){$M_{1,p-b}$}
\put(1.05,1){$\longleftarrow$}
\put(1.65,1.5){$\nwarrow$}
\put(0.65,1.5){$\swarrow$}
\end{picture}
},
\hspace{3cm}
\mbox{
\begin{picture}(13,6)(0,3.5)
\unitlength=1cm
\thinlines
\put(-1.8,1){$\R_r^b:$}
\put(0.8,2){$M_{r+1,b}$}
\put(-0.4,1){$M_{r,p-b}$}
\put(2,1){$M_{r,p-b}$}
\put(0.8,0){$M_{r-1,b}$}
\put(1.05,1){$\longleftarrow$}
\put(1.65,1.5){$\nwarrow$}
\put(0.65,1.5){$\swarrow$}
\put(1.65,0.5){$\swarrow$}
\put(0.65,0.5){$\nwarrow$}
\put(3.5,1){$,\qquad r\in\mathbb{Z}_{\geq2}$}
\end{picture}
}
\label{Remb}
\\[0.8cm]
\ee
where an arrow from the irreducible subfactor $M$ to the irreducible subfactor $M'$
indicates that vectors in $M$ are mapped not only to vectors in $M$ itself but also to vectors
in $M'$ by the action of the Virasoro algebra.
An arrow from one copy of $M$ to another copy of $M$ indicates that
$L_0$ is non-diagonalizable and that the module is of rank 2. Representations of rank $\rho>2$
are not present here, but would otherwise require $\rho$ copies of a given
irreducible subfactor suitably arranged in a chain and connected by aligned arrows.
The character of the rank-2 module $\R_r^b$ follows from the structure diagram (\ref{Remb})
and is given by
\be
\chit[\R_r^b](q)
=(1-\delta_{r,1})\ch_{r-1,b}(q)+2\ch_{r,p-b}(q)+\ch_{r+1,b}(q)
\ee
According to the fusion algebra conjectured in Section~\ref{SecFull},
no additional rank-2 modules nor higher-rank modules are
generated from repeated fusion of the {\em full} set of Kac representations $(r,s)$.
\subsection{Reducible yet indecomposable Kac representations}
\label{SecRed}
There is a pair of Feigin-Fuchs modules corresponding to each Verma module $V_{r,s}$.
We denote them by $F^{\to}_{r,s}$ and $F^{\gets}_{r,s}$, and
they can be constructed by reversing every second arrow in the structure diagram for $V_{r,s}$.
The arrow on $F^\to_{r,s}$ indicates that vectors in $M_{r,s}$ are mapped not only to vectors
in $M_{r,s}$ itself but also {\em to} vectors in the next
subfactor by the action of the Virasoro algebra. Similarly, the arrow on $F^\gets_{r,s}$
indicates that vectors in $M_{r,s}$ can be reached {\em from} vectors in the next subfactor.
Likewise, we can associate a pair of finitely-generated
Feigin-Fuchs modules to every quotient module $Q_{r,s}$.
For $2r-1<2k$, where $s=s_0+kp$,
the Feigin-Fuchs modules corresponding to $Q_{r,s}$ are characterized by the structure diagrams
\be
\begin{array}{rcl}
&Q^{\to}_{r,s}:&\quad
M_{k-r+1,p-s_0}\to M_{k-r+2,s_0}\gets M_{k-r+3,p-s_0}\to\ldots\gets M_{k+r-1,p-s_0}\to
M_{k+r,s_0}
\\[.5cm]
&Q^{\gets}_{r,s}:&\quad
M_{k-r+1,p-s_0}\gets M_{k-r+2,s_0}\to M_{k-r+3,p-s_0}\gets\ldots\to M_{k+r-1,p-s_0}\gets
M_{k+r,s_0}
\end{array}
\ee
For $2r-1>2k$, the Feigin-Fuchs modules corresponding to $Q_{r,s}$ are characterized by the
structure diagrams
\be
\begin{array}{rcl}
&Q^{\to}_{r,s}:&\quad
M_{r-k,s_0}\to M_{r-k+1,p-s_0}\gets M_{r-k+2,s_0}\to\ldots\to M_{r+k-1,p-s_0}\gets M_{r+k,s_0}
\\[.5cm]
&Q^{\gets}_{r,s}:&\quad
M_{r-k,s_0}\gets M_{r-k+1,p-s_0}\to M_{r-k+2,s_0}\gets\ldots\gets M_{r+k-1,p-s_0}\to M_{r+k,s_0}
\end{array}
\ee
By construction, we have
\be
\chit[Q^{\to}_{r,s}](q)= \chit[Q^{\gets}_{r,s}](q)= \chit[Q_{r,s}](q)
\ee
In all cases, the Feigin-Fuchs modules $Q^{\to}_{r,s}$ and $Q^{\gets}_{r,s}$
are {\em contragredient} to each other
where the contragredient module $A^\ast$ to a module $A$ is obtained by reversing all
structure arrows between
its irreducible subfactors. It follows that $\chit[A^\ast](q)=\chit[A](q)$ and that $A^{\ast\ast}=A$.
Since we know the structure of the Kac representation $(r,s)$ for $s\leq p$ (irreducible) or
$s=kp$ (fully reducible), we now consider the cases where $s$ is of the form (\ref{s}) for $k\geq1$.
\\[.2cm]
{\bf Highest-weight assumption.}\quad
The Kac representation $(1,s_0+kp)$ is the indecomposable highest-weight module
$Q^\to_{1,s_0+kp}$, that is,
\be
(1,s_0+kp)=Q^\to_{1,s_0+kp}=Q_{1,s_0+kp}=V_{1,s_0+kp}/V_{1,s_0+(k+2)p}:\qquad
M_{1,s_0+kp}\to M_{1,(k+2)p-s_0}
\label{1s}
\ee
It is emphasized that, a priori, this Kac representation could be the direct sum
of its two irreducible subfactors or contragredient to the highest-weight module (\ref{1s}).
Consistency of the fusion algebra excludes the first possibility, though.
To appreciate this, let us initially compare certain fusion properties of the Kac
representation $(1,p+1)$ with the similar properties of the direct sum $(1,p-1)\oplus(2,1)$ of
its constituent subfactors. According to the fundamental fusion algebra \cite{RP0707}, we have
\be
\big[(1,p-1)\oplus(2,1)\big]\otimes\big[(1,p-1)\oplus(2,1)\big]
=2(1,1)\oplus(3,1)\oplus2(2,p-1)\oplus\bigoplus_\beta^{p-3}\R_1^\beta
\label{1p21}
\ee
On the other hand, it is observed from the lattice that the decomposition of the
fusion product $(1,p+1)\otimes(1,p+1)$ for small $p$ contains rank-2 Jordan cells
linking the two copies of the irreducible subfactor $M_{1,1}=(1,1)$. This is incompatible
with (\ref{1p21}) as the former indicates the presence of the rank-2 module $\R_1^{p-1}$
(\ref{Remb}) in the decomposition, in accordance with the conjectured fusion rule (\ref{fullver2})
\be
(1,p+1)\otimes(1,p+1)=(1,2p+1)\oplus\bigoplus_\beta^{p-1}\R_1^\beta
\ee
More generally, the lattice approach
provides similar evidence for the indecomposability of $(1,s_0+kp)$. We thus find that
\be
(1,s_0+kp)\neq(k,p-s_0)\oplus(k+1,s_0)
\ee
since the lattice approach indicates the presence of $\R_1^{p-1}$ in the decomposition
of the fusion product $(1,s_0+kp)\otimes(1,s_0+kp)$, while the decomposition of the
fusion product of $(k,p-s_0)\oplus(k+1,s_0)$ with itself follows from the fundamental
fusion algebra
\be
\big[(k,p-s_0)\oplus(k+1,s_0)\big]\otimes\big[(k,p-s_0)\oplus(k+1,s_0)\big]=2(1,1)\oplus\ldots
\ee
and contains two {\em unlinked} copies of $M_{1,1}$.
We are still faced with the problem of identifying the Kac representations
$(1,s_0+kp)$ as highest-weight modules or as the corresponding contragredient modules.
As indicated, here we {\em assume} that they are highest-weight modules and then study the
implications of this assumption. We will nevertheless return to this question in
Section~\ref{SecContra} and Section~\ref{SecFusion}.
\\[.2cm]
\noindent
{\bf Structure conjecture.}\quad For $s=s_0+kp$, $k\in\mathbb{N}$,
the Kac representation $(r,s)$ is the Feigin-Fuchs module
\be
(r,s)=\begin{cases} Q^{\to}_{r,s},\ &2r-1<2k
\\[.2cm]
Q^{\gets}_{r,s},\ &2r-1>2k
\end{cases}
\label{emb}
\ee
Below, we present arguments in support of this conjecture by combining
results from the lattice approach with results from applications of the NGK algorithm based
on (\ref{1s}).
The range for $s_0$ in (\ref{emb}) can be extended from $\mathbb{Z}_{1,p-1}$ to
$\mathbb{Z}_{0,p-1}$ such that $s$ can be any positive integer
$s\in\mathbb{N}$ (where we exclude $s_0=k=0$ for which $s=0$).
For $s_0=0$, the structure diagrams associated with $Q^\to_{r,s}$ and $Q^\gets_{r,s}$
in (\ref{emb}) are separable (degenerate) and the modules are fully reducible
\be
Q_{r,kp}^\to=Q_{r,kp}^\gets=Q_{r,kp}=\bigoplus_{j=|r-k|+1,\,\mathrm{by}\,2}^{r+k-1}M_{j,p}
\label{s00}
\ee
in accordance with (\ref{pkpk}).
It is noted that this decomposition is symmetric in $r$ and $k$.
In retrospect, we could have {\em defined} a Kac representation $(r,s)$ mathematically,
for general $r,s\in\mathbb{N}$, as the finitely-generated Feigin-Fuchs submodule (\ref{emb}) where
\be
s=s_0+kp,\qquad s_0\in\mathbb{Z}_{0,p-1},\quad k\in\mathbb{N}_0
\ee
{}From the lattice, we would then conjecture that the Virasoro modules
associated with the aforementioned boundary conditions
are Kac representations in the mathematical sense just given.
A major goal of the present work is indeed to collect evidence for this conjecture.
It is recalled that we are working under the assumption that the modules $(1,s_0+kp)$
are highest-weight modules.
\subsubsection{Evidence for the structure conjecture}
Fusion can be implemented on the lattice
without detailed knowledge of the structure of the Kac representations.
The Kac representation $(r,s)$ itself is actually constructed by fusing the `horizontal'
Kac representation $(r,1)$ with the `vertical' Kac representation $(1,s)$
\be
(r,s)=(r,1)\otimes (1,s)
\label{r11s}
\ee
Under the assumption (\ref{1s}), we have applied the NGK algorithm to many fusion
products of this kind and they all corroborate the structure conjecture (\ref{emb}).
Some of our findings and observations are summarized in the following with additional
details deferred to Appendix~\ref{AppNGK}.
In the decomposition of a fusion product examined using the NGK algorithm,
the vectors appearing at Nahm level 0 are the ones which are not the image of
negative Virasoro modes. These
vectors\footnote{These vectors actually span a subspace, but it is convenient
to think in terms of a set of basis vectors.}
constitute the minimal set of vectors from which the entire
(decomposable or indecomposable) module, arising as the result of the fusion product,
can be generated by the action of negative Virasoro modes only.
It thus suffices to analyze a fusion product at Nahm level 0 in order to
identify this minimal set of vectors. This knowledge is then sufficient to
distinguish between a highest-weight module like (\ref{1s}) and its contragredient
module. Indeed, the minimal set of vectors associated with the highest-weight
module in (\ref{1s}) consists of only one vector, namely $\ket{\D_{1,s_0+kp}}$,
whereas the minimal set associated with the contragredient module
consists of the two vectors $\ket{\D_{1,s_0+kp}}$ and $\ket{\D_{1,(k+2)p-s_0}}$.
Once we know this minimal set, we can use our knowledge of the character $\chit_{r,s}(q)$
to deduce the number and conformal weights of the vectors appearing at higher Nahm levels.
This is very helpful when determining the otherwise evasive spurious subspaces
appearing in the NGK algorithm, see Appendix~\ref{AppNGK}.
\\[.2cm]
\noindent
{\bf Singular vector conjecture.}\quad
With the normalization convention for singular vectors used in Appendix~\ref{AppSing},
we conjecture that at Nahm level 0 in the fusion product $(2,1)\otimes(1,s)$
\be
\ket{\D_{2,1}}\times\ket{\la_{1,s}}=-\big(\prod_{j=1}^{s-1}\frac{(p+j)(p-j)}{p}\big)
\big\{L_{-1}\times I+\tfrac{s-1}{2}I\times I\big\}\ket{\D_{2,1}}\times\ket{\D_{1,s}}
\label{D21}
\ee
We have verified this remarkably simple expression explicitly for $s\leq6$.
The action of the co-multiplication of $L_0$
on the corresponding two-dimensional initial vector space is given by
\bea
\D(L_0)\ket{\D_{2,1}}\times\ket{\D_{1,s}}
&=&(\D_{2,1}+\D_{1,s})\ket{\D_{2,1}}\times\ket{\D_{1,s}}
+L_{-1}\ket{\D_{2,1}}\times\ket{\D_{1,s}} \nn
\!\!\!\!\!\!\D(L_0)L_{-1}\ket{\D_{2,1}}\times\ket{\D_{1,s}}
&=&p\D_{1,s}\ket{\D_{2,1}}\times\ket{\D_{1,s}}
+(\D_{2,1}+\D_{1,s}+1-p)L_{-1}\ket{\D_{2,1}}\times\ket{\D_{1,s}}
\label{DL0}
\eea
It follows readily from (\ref{D21}) that a spurious subspace at Nahm level 0
is generated by setting the singular vector $\ket{\la_{1,s}}=0$ if and only if $s\leq p$,
in which case this subspace is one-dimensional.
For $s\leq p$, the matrix realization of
$\D(L_0)$ is therefore one-dimensional and is given by $\D_{2,s}$, reflecting
that the Kac representation $(2,s)$ is irreducible for all $s\leq p$.
For $s>p$, it follows from (\ref{DL0}) that the two-dimensional matrix realization of
$\D(L_0)$ is diagonalizable and has
eigenvalues $\D_{1,s-p}$ and $\D_{1,s+p}$.
For $s=kp$, this is in accordance with the decomposition
(\ref{pkpk}), while for $s=s_0+kp$, it is in accordance with the structure conjecture (\ref{emb}).
At Nahm level 0,
we have confirmed the structure conjecture (\ref{emb}) in many cases. In some of these,
we have continued the analysis to higher Nahm level and always with affirmative results.
Details of the analysis for $(2,3)$ in critical dense polymers ${\cal LM}(1,2)$ appear
in Appendix~\ref{App23} and are summarized by the structure diagram
\be
(2,3)=Q_{2,3}^\gets:\qquad \Vc(0)\gets\Vc(1)\to\Vc(3)
\label{23013}
\ee
where $\Vc(\D)$ denotes the irreducible highest-weight module of conformal weight $\D$.
\subsection{Contragredient Kac representations}
\label{SecContra}
We recall our working assumption (\ref{1s}) that the reducible yet indecomposable
Kac representation $(1,s_0+kp)$ is the highest-weight module
$Q^{\to}_{1,s_0+kp}$ and not its contragredient module $Q^{\gets}_{1,s_0+kp}$.
It then follows from the structure diagram (\ref{Remb}) that the rank-2 module
$\R_r^b$ admits the short exact sequence
\be
0\to (1,rp-b)\to\R_r^b\to(1,rp+b)\to0
\ee
It also admits the short exact sequence
\be
0\to(r,p-b)\to\R_r^b\to(r,p+b)\to0
\ee
in terms of the (for $r\neq1$) reducible yet indecomposable non-highest-weight
module $(r,p+b)$.
As Virasoro modules, the Feigin-Fuchs modules contragredient to the ones appearing
in (\ref{emb}), namely
\be
(r,s)^\ast=\begin{cases} Q^{\gets}_{r,s},\ &2r-1<2k
\\[.2cm]
Q^{\to}_{r,s},\ &2r-1>2k
\end{cases}
\label{Cemb}
\ee
are perfectly well defined. An immediate application is to provide alternative characterizations
of the rank-2 module $\R_r^b$ in terms of short exact sequences as we have
\be
\begin{array}{c}
0\to(1,rp+b)^\ast\to\R_r^b\to(1,rp-b)^\ast\to0
\\[.3cm]
0\to(r,p+b)^\ast\to\R_r^b\to(r,p-b)^\ast\to0
\end{array}
\ee
noting that the rank-2 modules are invariant under reversal of structure arrows
\be
(\R_r^b)^\ast=\R_r^b
\ee
For $r>1$, the rank-2 module $\R_r^b$ thus admits four independent
non-trivial short exact sequences.
This is in accordance with the structure diagram (\ref{Remb}) for $\R_r^b$ as it follows
from the diagram that $\R_r^b$ has four inequivalent proper submodules.
We also note that a fully reducible (in particular irreducible)
module is identical to its contragredient module
\be
(r,s)^\ast=(r,s),\qquad r\in\mathbb{N};\quad s\in\mathbb{Z}_{1,p-1}\cup p\mathbb{N}
\label{rsast}
\ee
It seems natural to expect that the category or family of representations appearing in
the full-fledged logarithmic conformal field theory ${\cal LM}(1,p)$
is closed under reversal of structure arrows in the sense that the contragredient
module to a module in the category is also in the category.
Above, we have only considered the Virasoro modules
associated with boundary conditions \cite{PRZ0607,RP0707},
namely the Kac representations $(r,s)$ and the rank-2 modules $\R_r^b$.
As already mentioned, these rank-2 modules are invariant under reversal of structure arrows,
whereas the only invariant Kac representations are the fully reducible ones
(including the irreducible ones).
As a consequence of the indicated expectation, the
{\em contragredient Kac representations} (\ref{Cemb}) should also be members of the invariant category.
This situation resembles the logarithmic minimal models ${\cal LM}(p,p')$ in the
so-called $\Wc$-extended picture \cite{PRR0803,Ras0805}
in which the modules associated
with boundary conditions only constitute a subcategory of the
full category if $p>1$. This idea was originally put forward and examined
in \cite{Ras0812} and has since been studied in more detail
\cite{GRW0905,Ras0906,Wood0907,GRW1008}.
In Section~\ref{SecContraFusion}, we discuss how the fusion algebra
generated by the Kac representations may be extended by the inclusion
of the contragredient Kac representations.
\section{Fusion algebras}
\label{SecFusion}
\subsection{Fundamental fusion algebra}
There are infinitely many fusion (sub)algebras associated with ${\cal LM}(1,p)$.
The {\em fundamental fusion algebra} \cite{RP0707}
\be
\big\langle (1,1),(2,1),(1,2)\big\rangle
\label{fund}
\ee
in particular, is generated from the two fundamental Kac representations $(2,1)$ and $(1,2)$
in addition to the identity $(1,1)$.
This fusion algebra involves all the irreducible Kac representations
and all the rank-2 representations (\ref{r2}).
On the other hand, no reducible yet indecomposable Kac representations
arise as the result of repeated fusion of the fundamental Kac representations.
The fundamental fusion algebra has two canonical subalgebras
\be
\big\langle(1,1),(2,1)\big\rangle,\qquad\big\langle(1,1),(1,2)\big\rangle
\label{horver}
\ee
\subsection{Kac fusion algebra}
\label{SecFull}
The {\em Kac fusion algebra} is generated by the {\em full} set of Kac representations
\be
\big\langle(r,s);\ r,s\in\mathbb{N}\big\rangle
\label{full}
\ee
and its description is a main objective of this work.
To appreciate this fusion algebra, it is instructive to examine
its vertical component
\be
\big\langle(1,s);\ s\in\mathbb{N}\big\rangle
\label{fullver}
\ee
which is characterized by the fusion rules of the vertical component
$\langle(1,1),(1,2)\rangle$ of the fundamental fusion algebra
supplemented by the fusion rules involving the reducible yet indecomposable
Kac representations $(1,s_0+kp)$. To describe (\ref{fullver}), we introduce
the sign function
\be
\mathrm{sg}(n)=\begin{cases} 1,\ &n>0\\ -1,\ &n<0 \end{cases}
\ee
Since this function only appears in conjunction with certain constraints,
the value $\mathrm{sg}(0)$ turns out to be immaterial.
\\[.2cm]
\noindent
{\bf Fusion conjecture.}\quad The vertical component of the Kac fusion algebra
satisfies
\be
\big\langle(1,s);\ s\in\mathbb{N}\big\rangle
=\big\langle(1,b+kp),\R_r^b;\ b\in\mathbb{Z}_{0,p-1},\,
k\in\mathbb{N}_0,\, r\in\mathbb{N}\big\rangle
\label{fullver0}
\ee
where we recall $\R_r^0\equiv(1,rp)$ and set $(1,0)\equiv\R_0^\beta\equiv0$,
and is characterized by the fusion rules\footnote{This revises the conjecture in \cite{PR0610} for
the decomposition of the fusion product $(1,2j_1-1)\otimes(1,2j_2-1)$ in ${\cal LM}(1,2)$,
see also Appendix~\ref{AppCrit}.}
\bea
(1,b+kp)\otimes(1,b'+k'p)&=&\bigoplus_{j=|k-k'|+1,\,\mathrm{by}\,2}^{k+k'-1}
\!\!\bigoplus_{\beta}^{p-|b-b'|-1}\R_{j}^\beta
\oplus\bigoplus_{j=|k-k'+\mathrm{sg}(b-b')|+1,\,\mathrm{by}\,2}^{k+k'}
\!\!\bigoplus_{\beta}^{|b-b'|-1}\R_{j}^\beta\nn
&\oplus&\bigoplus_{\beta}^{b+b'-p-1}\R_{k+k'+1}^\beta
\oplus\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}(1,\beta+(k+k')p)\nn
\R_{r}^{b}\otimes(1,b'+k'p)&=&\bigg(
\bigoplus_{j=|r-k'|+1,\,\mathrm{by}\,2}^{r+k'-1}
\!\!\bigoplus_{\beta}^{p-|b-b'|-1}\R_{j}^\beta
\oplus\bigoplus_{j=|r-k'-1+\mathrm{sg}(p-b-b')|+1,\,\mathrm{by}\,2}^{r+k'-\mathrm{sg}(p-b-b')}
\!\!\bigoplus_{\beta}^{|p-b-b'|-1}\R_{j}^\beta \nn
&\oplus&\bigoplus_{j=|r-k'-1|+1,\,\mathrm{by}\,2}^{r+k'-2}
\!\!\bigoplus_{\beta}^{p-|p-b-b'|-1}\R_{j}^\beta
\oplus\bigoplus_{j=|r-k'+\mathrm{sg}(b-b')|+1,\,\mathrm{by}\,2}^{r+k'}
\!\!\bigoplus_{\beta}^{|b-b'|-1}\R_{j}^\beta\nn
&\oplus&\bigoplus_{\beta}^{b'-b-1}\R_{r+k'}^\beta
\oplus\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}\R_{r+k'}^\beta
\bigg)/(1+\delta_{b,0}) \nn
\R_r^b\otimes\R_{r'}^{b'}&=&\bigg(
\bigoplus_{j=|r-r'|,\,\mathrm{by}\,2}^{r+r'}
\big(2-\delta_{j,|r-r'|}\big)
\Big\{\bigoplus_{\beta}^{|b-b'|-1}\oplus\,\big(1-\delta_{j,r+r'}\big)\!\!\bigoplus_{\beta}^{p-|p-b-b'|-1}
\Big\}\R_j^\beta\nn
&\oplus&\Big\{
\bigoplus_{j=|r-r'-1+\mathrm{sg}(p-b-b')|+1,\,\mathrm{by}\,2}^{r+r'-\mathrm{sg}(p-b-b')}
\oplus
\bigoplus_{j=|r-r'+1-\mathrm{sg}(p-b-b')|+1,\,\mathrm{by}\,2}^{r+r'-1}\Big\}
\bigoplus_{\beta}^{|p-b-b'|-1}\R_{j}^\beta\nn
&\oplus&
\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}\R_{r+r'}^\beta
\oplus\bigoplus_{\beta=|p-b-b'|+1,\,\mathrm{by}\,2}^{p-|b-b'|-1}\R_{r+r'-1}^\beta
\oplus\bigoplus_{\beta}^{p-b-b'-1}\R_{r+r'-1}^\beta
\nn
&\oplus&
\bigoplus_{j=|r-r'|+1,\,\mathrm{by}\,2}^{r+r'-1}\big(2-\delta_{j,r+r'-1}\big)
\bigoplus_{\beta}^{p-|b-b'|-1}\R_j^\beta
\bigg)/\{(1+\delta_{b,0})(1+\delta_{b',0})\}
\label{fullver2}
\eea
The divisions by $(1+\delta_{b,0})$ and $(1+\delta_{b',0})$ ensure that the fusion
rules for $\R_r^0$ match those for $(1,rp)$.
Evidence for this fusion conjecture is presented in Section~\ref{SecEviFusionLattice}
and Section~\ref{SecEviFusionNGK}.
Mnemonically, the fusion rules (\ref{fullver2})
are reconstructed straightforwardly using the underlying $sl(2)$ structure \cite{RP0707}.
This structure is evident from the lattice, where defects can be
annihilated in pairs, thus implying that the fusion product of two Kac representations
$(1,s)$ and $(1,s')$ can be decomposed, up to indecomposable structures,
as a sum of Kac representations
\be
(1,s)\otimes(1,s')=(1,|s-s'|+1)\stackrel{\mbox{?}}{\oplus}(1,|s-s'|+3)\stackrel{\mbox{?}}{\oplus}
\ldots\stackrel{\mbox{?}}{\oplus}(1,s+s'-1)
\label{ss}
\ee
The question marks indicate that the sums can be {\em direct} or {\em indecomposable}.
The $sl(2)$ structure of the fusion product $(1,s)\otimes (1,s')$ is thus encoded in the
character decomposition
\bea
\chit\!\left[(1,s)\otimes(1,s')\right]\!(q)
&=&\chit\!\left[(1,|s-s'|+1)\oplus(1,|s-s'|+3)\oplus\ldots\oplus(1,s+s'-1)\right]\!(q)\nn
&=&\sum_{s''=|s-s'|+1,\ \!{\rm by}\ \!2}^{s+s'-1}\chit_{1,s''}(q)\nn
&=&\sum_{t=0}^{\min\{s,s'\}-1}\chit_{1,s+s'-2t-1}(q)
\label{chardec}
\eea
Following the discussion of short exact sequences in Section~\ref{SecContra},
we may view the rank-2 module $\R_r^b$ as an indecomposable combination
of the two Kac representations $(1,rp-b)$ and $(1,rp+b)$, that is,
\be
\R_r^b=(1,rp-b)\oplus_i(1,rp+b)
\label{Rrb}
\ee
Utilizing this, we introduce the `forgetful functor' $\Fc$ by
\be
\Fc[(1,s)]=(1,s),\qquad \Fc[\R_r^b]=(1,rp-b)\oplus(1,rp+b),\qquad
\Fc[\Ac\otimes\Bc]=\Fc[\Fc[\Ac]\otimes\Fc[\Bc]]
\label{F}
\ee
and apply it to the various fusion products such as
\be
\Fc[(1,s)\otimes(1,s')]=\bigoplus_{s''=|s-s'|+1,\,\mathrm{by}\,2}^{s+s'-1}(1,s'')
\label{Ffus}
\ee
We note that applying $\Fc$ does not correspond to moving to the Grothendieck ring
associated with characters since we are working here with the reducible yet indecomposable
Kac representations $(1,rp\pm b)$.
Clearly, $\Fc$ does not have an inverse, but on fusion products, we can
devise a prescription that `reintroduces' the rank-2 modules in a unique and well-defined way.
To describe this prescription, let us consider the fusion product $(1,s)\otimes(1,s')$
in (\ref{Ffus}) and initially focus on the Kac representation $(1,s_1'')$ with minimal Kac
label $s_1''=|s-s'|+1$. Depending on $p$, this will appear as the submodule $(1,rp-b)$ of
the rank-2 module $\R_r^b$ if and only if the matching module $(1,rp+b)$ also appears in the
decomposition in (\ref{Ffus}). If not, the Kac representation $(1,s_1'')$ will appear `by itself' in
the decomposition of the fusion product.
Having completed the examination of $(1,s_1'')$, we remove it
together with its potential partner $(1,rp+b)$ from the direct sum
in (\ref{Ffus}) and repeat the analysis for $(1,s_2'')$ corresponding to the new minimal Kac
label $s_2''$. This algorithm is continued until all the Kac representations in (\ref{Ffus})
have been accounted for.
This prescription also works for more complicated fusion products than $(1,s)\otimes(1,s')$
and always yields a unique and well-defined result, namely the fusion rules
given in (\ref{fullver2}).
Loosely speaking, the prescription
corresponds to writing the decomposition of a fusion product in terms of
Kac representations and then forming rank-2 modules whenever possible, starting
with the lowest Kac label and moving up.
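As an algorithmic sketch (ours; the function name and output encoding are hypothetical), this prescription amounts to listing the Kac labels from the $sl(2)$ rule (\ref{Ffus}) and then pairing $(1,rp-b)$ with $(1,rp+b)$ into $\R_r^b$, from the lowest label up:

```python
def vertical_fusion(s, s_pr, p):
    """Decompose (1,s) x (1,s') in LM(1,p): sl(2) decomposition into Kac
    labels, then re-pair (1, r*p - b) with (1, r*p + b) into R_r^b,
    starting from the lowest Kac label and moving up."""
    labels = list(range(abs(s - s_pr) + 1, s + s_pr, 2))
    result = []
    while labels:
        s2 = labels.pop(0)                 # minimal remaining Kac label
        b = (-s2) % p                      # write s2 = r*p - b with b in 0..p-1
        if b:                              # b = 0 means s2 = r*p, i.e. R_r^0
            r = (s2 + b) // p
            if r*p + b in labels:          # matching partner present?
                labels.remove(r*p + b)
                result.append(('R', r, b))
                continue
        result.append(('Kac', 1, s2))      # appears by itself
    return result

# critical dense polymers, p = 2:
print(vertical_fusion(2, 2, 2))   # [('R', 1, 1)]
print(vertical_fusion(3, 3, 2))   # [('R', 1, 1), ('Kac', 1, 5)]
print(vertical_fusion(3, 5, 2))   # [('R', 2, 1), ('Kac', 1, 7)]
```

For $p=2$, this reproduces $(1,2)\otimes(1,2)=\R_1^1$ as in (\ref{1212}), as well as $(1,3)\otimes(1,3)=\R_1^1\oplus(1,5)$ and $(1,3)\otimes(1,5)=\R_2^1\oplus(1,7)$.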
\subsubsection{Full Kac fusion algebra}
To describe the {\em full} Kac fusion algebra, not just its vertical component (\ref{fullver0}),
we note that the horizontal component
$\langle(r,1);\, r\in\mathbb{N}\rangle$ is characterized by the ordinary $sl(2)$ fusion rules
\be
(r,1)\otimes(r',1)=\bigoplus_{r''=|r-r'|+1,\,\mathrm{by}\,2}^{r+r'-1}(r'',1),\qquad r,r'\in\mathbb{N}
\ee
and that the lattice description implies not only (\ref{r11s}) but also \cite{RP0707}
\be
\R_r^b=(r,1)\otimes\R_1^b,\qquad r\in\mathbb{N}
\label{r1R}
\ee
The fusion rules of the full Kac fusion algebra now follow straightforwardly using
the requirement of commutativity and associativity as we then have
\bea
(r,b+kp)\otimes(r',b'+k'p)&=&\big((r,1)\otimes(r',1)\big)\otimes\big((1,b+kp)\otimes(1,b'+k'p)\big)\nn
\R_r^b\otimes(r',b'+k'p)&=&\big((r,1)\otimes(r',1)\big)\otimes\big(\R_1^b\otimes(1,b'+k'p)\big)\nn
\R_r^b\otimes\R_{r'}^{b'}&=&\big((r,1)\otimes(r',1)\big)\otimes\big(\R_1^b\otimes\R_1^{b'}\big)
\eea
The last of these relations is not needed to determine the full Kac fusion algebra
but must be satisfied for self-consistency of the fusion algebra.
The fusion rules needed to complete the Kac fusion algebra are
\bea
(r,b+kp)\otimes(r',b'+k'p)&=&
\bigoplus_{i=|r-r'|+1,\,\mathrm{by}\,2}^{r+r'-1}
\bigg\{
\bigoplus_{j=|k-k'|+1,\,\mathrm{by}\,2}^{k+k'-1}\ \,
\bigoplus_{\ell=|i-j|+1,\,\mathrm{by}\,2}^{i+j-1}
\bigoplus_{\beta}^{p-|b-b'|-1}
\R_\ell^\beta\nn
&\oplus&
\bigoplus_{j=|k-k'+\mathrm{sg}(b-b')|+1,\,\mathrm{by}\,2}^{k+k'}\ \,
\bigoplus_{\ell=|i-j|+1,\,\mathrm{by}\,2}^{i+j-1}
\bigoplus_{\beta}^{|b-b'|-1}
\R_\ell^\beta\nn
&\oplus&
\bigoplus_{\ell=|i-k-k'-1|+1,\,\mathrm{by}\,2}^{i+k+k'}
\bigoplus_{\beta}^{b+b'-p-1}
\R_\ell^\beta
\oplus
\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}(i,\beta+(k+k')p)
\bigg\}\nn
\R_{r}^{b}\otimes(r',b'+k'p)&=&\bigg(
\bigg\{
\bigoplus_{j=|r-k'-1|+1,\,\mathrm{by}\,2}^{r+k'-2}
\!\!\bigoplus_{\beta}^{p-|p-b-b'|-1}
\oplus
\bigoplus_{j=|r-k'-1+\mathrm{sg}(p-b-b')|+1,\,\mathrm{by}\,2}^{r+k'-\mathrm{sg}(p-b-b')}
\!\!\bigoplus_{\beta}^{|p-b-b'|-1}
\nn
&\oplus& \bigoplus_{j=|r-k'|+1,\,\mathrm{by}\,2}^{r+k'-1}
\!\!\bigoplus_{\beta}^{p-|b-b'|-1}
\oplus
\bigoplus_{j=|r-k'+\mathrm{sg}(b-b')|+1,\,\mathrm{by}\,2}^{r+k'}
\!\!\bigoplus_{\beta}^{|b-b'|-1}
\bigg\}\bigoplus_{\ell=|r'-j|+1,\,\mathrm{by}\,2}^{r'+j-1}\R_{\ell}^\beta\nn
&\oplus&
\bigoplus_{\ell=|r-r'+k'|+1,\,\mathrm{by}\,2}^{r+r'+k'-1}
\bigg\{
\bigoplus_{\beta}^{b'-b-1}
\oplus\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}
\bigg\}\R_{\ell}^\beta
\bigg)/(1+\delta_{b,0})
\eea
It follows, in particular, that the fundamental fusion algebra (\ref{fund}) is a subalgebra
of the Kac fusion algebra (\ref{full}).
The fusion rules for critical dense polymers ${\cal LM}(1,2)$ are summarized in
Appendix~\ref{AppCrit}.
Recalling that $\R_r^0\equiv(1,rp)\equiv(r,p)$, it is noted that the modules
\be
\big\{\R_r^b;\, r\in\mathbb{N},\, b\in\mathbb{Z}_{0,p-1}\big\}
\label{proj}
\ee
form an {\em ideal} of the Kac fusion algebra. This is in accordance with the
expectation that these modules are {\em projective}.
\subsubsection{Evidence for the fusion conjecture: lattice approach}
\label{SecEviFusionLattice}
The lattice approach to the logarithmic minimal model ${\cal LM}(1,p)$
\cite{PRZ0607,RP0707} is based on a loop model with loop fugacity
\be
\beta=-2\cos\tfrac{\pi}{p}
\ee
Here we are interested in the model defined on strips of width $N$.
To describe the vertical Kac representations and their fusions,
it suffices to consider the hamiltonian defined by
\be
H=-\sum_{j=1}^{N-1}e_j
\ee
where $\{e_j;\ j\in\mathbb{Z}_{1,N-1}\}$ is the set of Temperley-Lieb generators
acting on $N$ strands.
Empirically \cite{PRZ0607}, the character of the Kac representation $(1,s)$ arises
in the scaling limit of the spectrum of the hamiltonian acting
on link states with exactly $s-1$ defects.
Viewing these defects as linked to the right (or left) boundary,
the Kac representation is associated with the corresponding boundary condition.
In particular, there are $N-1$ link states with exactly $N-2$ defects and our
choice of canonical ordering of these link states is
\be
\bigcap\,\big|\,\big|\,\ldots\,\big|\,\big|\,,\qquad
\big|\,\bigcap\,\big|\,\ldots\,\big|\,\big|\,,\qquad
\ldots\ldots,\qquad
\big|\,\big|\,\ldots\,\big|\,\big|\,\bigcap
\label{can}
\ee
We refer to \cite{PR0610} for more details.
Fusion is implemented diagrammatically by considering non-trivial
boundary conditions on {\em both} sides of the bulk.
In the diagrammatic description of the fusion product $(1,s)\otimes(1,s')$,
there are thus $s-1$ and $s'-1$ links emanating from the left and right boundaries,
respectively. As links from the left boundary can be joined with links from the right
boundary to form half-arcs above the bulk, the number of defects propagating through
the bulk is given by $s+s'-2-2t$ where $0\leq t\leq \min\{s,s'\}-1$.
In the last expression in (\ref{chardec}), the integer $t$ labels the number of such half-arcs
linking the two boundaries. For given $t$, we thus have $s-t-1$ and $s'-t-1$
half-arcs linking the bulk to the left and right boundary, respectively.
As usual, we group the link states according to their number of half-arcs linking
the bulk to the boundaries and order these groups with increasing such numbers.
The resulting matrix representation of the hamiltonian is then upper block-triangular
with vanishing blocks beyond the first super-diagonal of blocks.
It is recalled that we do not anticipate Jordan cells of ranks greater than 2
in the hamiltonian.
To examine Jordan cells of rank 2 formed
between {\em neighbouring} blocks on the diagonal,
it thus suffices to analyze the upper block-triangular
matrix defined by the four adjacent blocks whose diagonal is spanned by the two blocks in question.
This gives insight into the appearance of rank-2 modules of the type $\R_r^1$.
Beyond neighbouring blocks, care has to be taken, though, since some
non-trivial Jordan cells are formed using `ligatures', see (\ref{M}) below.
This is indeed the case for $\R_r^b$ for $b>1$ since such a rank-2 module
can be viewed as an indecomposable sum (\ref{Rrb})
of two Kac representations corresponding to boundary conditions differing in numbers
of defects by $2b>2$. The responsible Jordan cells are thus formed
between blocks which are {\em not} neighbours.
As illustration of this `ligature phenomenon', we consider the matrix
\be
M=\begin{pmatrix}a&1&0\\ 0&b&1\\ 0&0&a\end{pmatrix}
\label{M}
\ee
For $a\neq b$, its Jordan canonical form reads
\be
J=S^{-1}MS=\begin{pmatrix}b&0&0\\0&a&1\\0&0&a\end{pmatrix}
\ee
where
\be
S^{-1}=\begin{pmatrix}0&-\sigma&1\\ \sigma&1&0\\ 0&0&1\end{pmatrix},\hspace{1cm}
S=\begin{pmatrix}\sigma^{-2}&\sigma^{-1}&-\sigma^{-2}\\ -\sigma^{-1}&0&\sigma^{-1}\\
0&0&1\end{pmatrix},\hspace{1cm}\sigma=a-b
\ee
That is, a rank-2 Jordan cell is formed between the two copies of the degenerate
eigenvalue $a$.
If, on the other hand, we eliminate the second row and column from $M$ {\em before}
examining the possibility of a rank-2 Jordan cell, we end up with the {\em diagonal} matrix
${\rm diag}(a,a)$. A search for non-trivial Jordan cells therefore cannot be conducted this
naively, and our focus here is on neighbouring blocks. That is, we are only concerned with the
appearance of rank-2 modules of the type $\R_r^1$.
It is also noted that permutations alone cannot resolve the indicated problems associated with treating
blocks which are not neighbours. This is again illustrated by the matrix $M$ in (\ref{M})
which is similar to
\be
P^{-1}MP=\begin{pmatrix}a&0&1\\ 0&a&0\\ 0&1&b\end{pmatrix},\qquad
P=\begin{pmatrix}1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}
\ee
However, the matrix $P^{-1}MP$ is not upper block-triangular with vanishing blocks beyond the
first super-diagonal of blocks.
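The similarity transformations in this illustration are readily confirmed symbolically; the following snippet (ours, purely illustrative) verifies both the Jordan form of $M$ and the permuted matrix $P^{-1}MP$.

```python
import sympy as sp

a, b = sp.symbols('a b')
sigma = a - b

M = sp.Matrix([[a, 1, 0], [0, b, 1], [0, 0, a]])
Sinv = sp.Matrix([[0, -sigma, 1], [sigma, 1, 0], [0, 0, 1]])
S = sp.Matrix([[sigma**-2, sigma**-1, -sigma**-2],
               [-sigma**-1, 0, sigma**-1],
               [0, 0, 1]])

# S^{-1} as given is indeed the inverse of S
assert sp.simplify(Sinv * S) == sp.eye(3)

# Jordan canonical form for a != b: a rank-2 cell in the eigenvalue a
assert sp.simplify(Sinv * M * S) == sp.Matrix([[b, 0, 0],
                                               [0, a, 1],
                                               [0, 0, a]])

# the permutation P merely moves the 'ligature' around
P = sp.Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
assert P.inv() * M * P == sp.Matrix([[a, 0, 1], [0, a, 0], [0, 1, b]])
```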
Now, let us implement the fusion product $(1,s)\otimes(1,s')$ for $s,s'>1$
on a lattice of limited system size
\be
N=s+s'-2-2t,\qquad t=0,1,\ldots,\min\{s,s'\}-1
\ee
for some $t$ in the range given.
This means that the bulk can accommodate up to $N$ defects while there must
be at least $t$ half-arcs linking the two boundaries.
In the decomposition (\ref{ss}), the $t$ rightmost Kac representations are therefore not
present while the remaining ones are
\be
(1,s)\otimes(1,s')\big|_{N}
=(1,|s-s'|+1)\stackrel{\mbox{?}}{\oplus}
\ldots\stackrel{\mbox{?}}{\oplus}(1,s+s'-3-2t)\stackrel{\mbox{?}}{\oplus}(1,s+s'-1-2t)
\label{ssN}
\ee
To gain insight into whether the final sum in this decomposition is
{\em direct} or {\em indecomposable},
we will now characterize when a non-trivial Jordan cell is formed in the
hamiltonian $H_{s,s'}^{(N)}$ between the two neighbouring blocks corresponding to
$N-2$ and $N$ defects, respectively.
For $N=6$, using the ordered basis (\ref{can}),
the corresponding matrix realization of the hamiltonian is given by
\be
-H_{s,s'}^{(6)}=\left(\!\!\begin{array}{rccccl}
\beta&1&0&0&0&\delta_{s-t,2}\\[.2cm]
1&\beta&1&0&0&\delta_{s-t,3}\\[.2cm]
0&1&\beta&1&0&\delta_{s-t,4}\\[.2cm]
0&0&1&\beta&1&\delta_{s-t,5}\\[.2cm]
0&0&0&1&\beta&\delta_{s-t,6}\\[.2cm]
0&0&0&0&0&0
\end{array}\!\!\!\right)
\ee
The extension to general $N$ is straightforward and discussed in
Appendix~\ref{AppJordan}. We thus find that
$H_{s,s'}^{(N)}$ is diagonalizable unless
there exists $j_0\in\mathbb{Z}_{1,N-1}$ for which
$\beta+2\cos\frac{j_0\pi}{N}=0$ and $\sin\frac{j_0(s-t-1)\pi}{N}\neq0$ in which case the
Jordan canonical form of $H_{s,s'}^{(N)}$ contains a single non-trivial Jordan cell.
This cell is of rank 2 and has diagonal elements 0.
It follows that this non-trivial Jordan cell appears if and only if
\be
p\mid (s+s'-2-2t),\qquad p\nmid (s-t-1)
\label{pss}
\ee
Since the pair of conditions $q\,|\,(n+m)$ and $q\nmid n$ implies $q\nmid m$, we may restore
the symmetry between $s$ and $s'$ in (\ref{pss}) by redundantly including $p\nmid (s'-t-1)$.
This symmetry is a manifestation of the commutativity of the fusion product $(1,s)\otimes(1,s')$,
of the equivalence of the left- and right-sided decompositions of the diagrammatic
implementation of this fusion product, and of the choice of canonical ordering of the link states
with $N-2$ defects (\ref{can}).
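The diagonalizability criterion behind (\ref{pss}) can be probed directly for small system sizes. The sketch below is ours: it assumes the obvious general-$N$ extension of the displayed $N=6$ matrix, with $d=s-t$ selecting the boundary column, and restricts to $p=2,3$ where $\beta$ is an exact integer.

```python
import sympy as sp

def minus_H(N, p, d):
    """-H^{(N)} on the two neighbouring blocks with N-2 and N defects;
    d = s - t selects the boundary column, as in the displayed N = 6 case."""
    beta = -2*sp.cos(sp.pi/p)
    M = sp.zeros(N, N)
    for i in range(N - 1):
        M[i, i] = beta
        if i < N - 2:
            M[i, i + 1] = M[i + 1, i] = sp.Integer(1)
        M[i, N - 1] = sp.Integer(1) if d == i + 2 else sp.Integer(0)
    return M

def jordan_cell(N, p, d):
    """Conditions (pss), with N = s + s' - 2 - 2t and d - 1 = s - t - 1."""
    return N % p == 0 and (d - 1) % p != 0

for p in (2, 3):              # beta = 0 and beta = -1, both exact
    for N in (4, 6):
        for d in range(2, N + 1):
            assert minus_H(N, p, d).is_diagonalizable() == (not jordan_cell(N, p, d))
```

Diagonalizability of $-H^{(N)}$ and $H^{(N)}$ coincide, so the overall sign is immaterial here.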
This {\em exact} result (\ref{pss}) for finite system sizes is in accordance with the fusion rule
(\ref{fullver2}) for $(1,s)\otimes(1,s')$. Indeed, assuming that the observed
Jordan-cell structures survive in the
continuum scaling limit, the result provides valuable insight used to determine
whether the particular sum
\be
(1,s)\otimes(1,s')=\ldots (1,s+s'-3-2t)\stackrel{\mbox{?}}{\oplus}(1,s+s'-1-2t)\ldots
\ee
in the decomposition (\ref{ss}) of the fusion product is {\em direct} or {\em indecomposable}.
{}From the lattice analysis above, we thus conclude that it is indecomposable due
to the presence of non-trivial Jordan cells if and only if the conditions in (\ref{pss}) are satisfied.
For this to be compatible with the conjectured fusion rules, the latter must predict
that the rank-2 module $\R_r^1$ appears (with multiplicity 1) in the decomposition
of $(1,s)\otimes(1,s')$ if and only if
\be
\exists\, \tau\in\mathbb{Z}_{1,\min\{s,s'\}-1}:\qquad rp=|s-s'|+2\tau,\qquad \tau\not\in p\mathbb{N}
\ee
Writing $\tau=a+\ell p$, this is easily verified.
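The verification can also be automated by brute force; in the hypothetical sketch below (ours), the $r$-values produced by the lattice conditions (\ref{pss}) are compared with those of the $\tau$-criterion.

```python
def r_from_lattice(s, s_pr, p):
    """r-values with a rank-2 Jordan cell by the exact lattice result (pss),
    where the cell forms at N = s + s' - 2 - 2t = r*p."""
    out = set()
    for t in range(min(s, s_pr)):
        N = s + s_pr - 2 - 2*t
        if N % p == 0 and (s - t - 1) % p != 0:
            out.add(N // p)
    return out

def r_from_tau(s, s_pr, p):
    """r-values for which R_r^1 appears, by the tau-criterion."""
    out = set()
    for tau in range(1, min(s, s_pr)):
        if tau % p != 0 and (abs(s - s_pr) + 2*tau) % p == 0:
            out.add((abs(s - s_pr) + 2*tau) // p)
    return out

for p in (2, 3, 5):
    for s in range(1, 13):
        for s_pr in range(1, 13):
            assert r_from_lattice(s, s_pr, p) == r_from_tau(s, s_pr, p)
```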
{}From the lattice approach, we now know where certain Jordan cells appear
in the decomposition of $(1,s)\otimes(1,s')$,
but in general, this is not sufficient to determine the various representations.
In critical dense polymers ${\cal LM}(1,2)$, for example, we have thus found that
\be
(1,3)\otimes(1,3)=(1,1)\oplus_i(1,3)\stackrel{\mbox{?}}{\oplus}(1,5)
\label{1311}
\ee
where the indecomposable sum is due to the formation of non-trivial Jordan cells.
The lattice approach offers an additional clue.
Continuing the examination of (\ref{1311}), we note that the link states
associated with the subfactor $(1,1)$ (corresponding to $t=2$) and the link states
associated with the subfactor $(1,3)$ (corresponding to $t=1$) all contain a half-arc
linking the two boundaries. Ignoring this common spectator half-arc, the diagrammatic
description becomes equivalent to the lattice implementation of the fusion product
\be
(1,2)\otimes(1,2)=\R_1^1
\label{1212}
\ee
Alternatively, we may focus on the link states associated with the subfactors
$(1,3)$ and $(1,5)$ corresponding to $t=1$ or $t=0$, respectively.
Unlike before, this does not correspond to a single fusion product.
The only candidate with the same number of defects propagating through the bulk
is $(1,2)\otimes(1,4)$, but this is associated with link states with 1 and 3 links emanating
from the left and right boundaries, respectively.
We thus conclude that the fusion product $(1,3)\otimes(1,3)$ contains
the rank-2 module $\R_1^1$ as a subfactor, that is,
\be
(1,3)\otimes(1,3)=\R_1^1\stackrel{\mbox{?}}{\oplus}(1,5)
\label{131315}
\ee
Below, we supplement this lattice analysis of the fusion product $(1,3)\otimes(1,3)$
by applications of the NGK algorithm.
\subsubsection{Evidence for the fusion conjecture: NGK algorithm}
\label{SecEviFusionNGK}
A priori, the right side of (\ref{131315}) could correspond to a single indecomposable
representation (since it remains to be established that (\ref{proj}) is the set of projective
representations).
According to the conjectured fusion rules (\ref{fullver2}), however, the full decomposition
reads
\be
(1,3)\otimes(1,3)=\R_1^1\oplus(1,5)
\label{1313R15}
\ee
To test this, we have applied the NGK algorithm to the fusion product $(1,3)\otimes(1,3)$,
assuming that $(1,3)$ is a highest-weight module.
Details of this analysis to Nahm level 2 appear in Appendix~\ref{App1313}, and they confirm
the fusion rule (\ref{1313R15}). They also confirm that the Kac representation $(1,5)$ is
a highest-weight module and not its contragredient module.
Likewise in ${\cal LM}(1,2)$, we have confirmed the fusion rule
\be
(1,3)\otimes(1,5)=\R_2^1\oplus(1,7)
\ee
and the highest-weight property of $(1,7)$ to Nahm level 3.
As observed in Section~\ref{SecRing} below,
the vertical Kac representations $(1,s)$ are all generated from repeated fusion
of $(1,2)$ and $(1,p+1)$. In accordance with the results of the NGK algorithm,
it is therefore natural to expect that the Kac representations $(1,s)$ thereby generated
are all highest-weight modules provided that $(1,p+1)$ is.
It thus {\em suffices} to assume that $(1,p+1)$ is a highest-weight module.
\\[.2cm]
\noindent
{\bf Refined highest-weight assumption.}\quad (i) The Kac representation $(1,p+1)$
is a highest-weight Virasoro module. (ii) Repeated fusion subsequently ensures that
all vertical Kac representations $(1,s)$ are highest-weight Virasoro modules.
\\[.2cm]
Our analysis does not, however, provide direct arguments for the assumption that
the Kac representation $(1,p+1)$ is a highest-weight module.
As we will see below, the fusion rules actually turn out to be independent of
whether $(1,p+1)$ is indeed a highest-weight module or the corresponding
contragredient module.
\subsubsection{Even and odd sectors}
{}From the lattice approach, it is of interest to understand the continuum scaling limit of the
situation where the only constraint on the number
of defects is that it is of the same parity as the bulk system size $N$. Depending on the parity of
$N$, we refer to the two possible scenarios as the {\em even} and {\em odd sectors}.
They can be viewed as systems with {\em free boundary conditions}, but they can also
be interpreted as finitized versions of the fusion products
\be
(1,\tfrac{N+2}{2})\otimes(1,\tfrac{N+2}{2})\quad\mathrm{and}\quad
(1,\tfrac{N+1}{2})\otimes(1,\tfrac{N+3}{2})
\label{1N1N}
\ee
respectively. To examine the continuum scaling limit of a system with free boundary conditions,
we can thus resort to the fusion rules for the fusions in (\ref{1N1N}) as given in (\ref{fullver2}).
For $b\in\mathbb{Z}_{0,p-1}$ and $k\in\mathbb{N}_0$, the first fusion rule in
(\ref{fullver2}) yields
\be
\begin{array}{rcl}
(1,b+kp)\otimes(1,b+kp)&=&
\displaystyle{
\bigoplus_{j=1}^k\,\bigoplus_\beta^{p-1}\R_{2j-1}^\beta
\oplus\bigoplus_\beta^{2b-p-1}\R_{2k+1}^\beta
\oplus\bigoplus_\beta^{p-|p-2b|-1}(1,\beta+2kp)
}\\[.7cm]
(1,b+kp)\otimes(1,b+1+kp)&=&
\displaystyle{
\bigoplus_{j=1}^k\,\bigoplus_\beta^{p-2}\R_{2j-1}^\beta
\oplus\bigoplus_\beta^{2b-p}\R_{2k+1}^\beta
\oplus\bigoplus_{j=1}^k\R_{2j}^0
\oplus\bigoplus_{\beta=2,\,\mathrm{by}\,2}^{p-|p-2b-1|-1}(1,\beta+2kp)
}
\end{array}
\ee
It is verified that the second of these rules applies for $b=p-1$, even though $b'=p$ in that case.
It follows that the continuum scaling limit of a system with free boundary conditions
is described by
\be
\lim_{n\to\infty}(1,n)\otimes(1,n)=\bigoplus_{j\in\mathbb{N}}\,
\bigoplus_\beta^{p-1}\R_{2j-1}^\beta,\qquad
\lim_{n\to\infty}(1,n)\otimes(1,n+1)=\bigoplus_{j\in\mathbb{N}}\Big(
\R_{2j}^0\oplus\bigoplus_\beta^{p-2}\R_{2j-1}^\beta\Big)
\ee
in accordance with the recent analysis of Jordan structures in~\cite{MS1101}.
In particular, for critical dense polymers as described by ${\cal LM}(1,2)$~\cite{PR0610},
we thus have
\be
\lim_{n\to\infty}(1,n)\otimes(1,n)=\bigoplus_{j\in\mathbb{N}}\R_{2j-1}^1,\qquad
\lim_{n\to\infty}(1,n)\otimes(1,n+1)=\bigoplus_{j\in\mathbb{N}}\,(1,2j)
\ee
showing that reducible yet indecomposable representations only arise in the even sector.
\subsection{Contragredient extension}
\label{SecContraFusion}
It is stressed that the set
\be
\Jc^{\mathrm{Kac}}=
\big\{(r,s),\R_r^b;\,r,s\in\mathbb{N},\,b\in\mathbb{Z}_{1,p-1}\big\}
\ee
of representations appearing in the Kac fusion algebra exhausts the set of representations
associated with boundary conditions in \cite{PRZ0607,RP0707}.
Extending this set by the contragredient Kac representations
\be
\Jc^{\mathrm{Kac}}\,\to\,
\Jc^{\mathrm{Cont}}=\Jc^{\mathrm{Kac}}\cup
\big\{(r,s)^\ast;\,r,s\in\mathbb{N}\big\}
\label{sets}
\ee
gives rise to the larger fusion algebra
\be
\big\langle\Jc^{\mathrm{Cont}}\big\rangle
=\big\langle(r,s),(r,s)^\ast,\R_r^b;\, r,s\in\mathbb{N},\,b\in\mathbb{Z}_{1,p-1}\big\rangle
\label{FusCon}
\ee
where we recall (\ref{rsast}).
A priori, additional representations could be generated by repeated fusion
of the representations listed.
However, preliminary evaluations of a variety of fusion products
seem to suggest that the extended fusion algebra (\ref{FusCon})
closes on the set of representations listed.
To describe this fusion algebra, we introduce
\be
\Cc_n[(r,s)]=\begin{cases}
(r,s),\quad &n>0
\\[.2cm]
(r,s)^\ast,\quad &n<0
\end{cases}
\label{C}
\ee
In our applications, $\Cc_0[(r,s)]$ only appears if $(r,s)$ is irreducible
in which case
\be
\Cc_0[(r,s)]=(r,s)=(r,s)^\ast,\qquad s\in\mathbb{Z}_{1,p-1}\cup p\mathbb{N}
\ee
where we have extended the definition of $\Cc_0$ to all fully reducible representations
(\ref{rsast}).
\\[.2cm]
{\bf Contragredient fusion conjecture.}\quad
The fusion rules involving contragredient Kac representations in
the extended fusion algebra (\ref{FusCon}) are given by or follow readily from
\be
(r,s)^\ast\otimes(r',s')^\ast=\big((r,s)\otimes(r',s')\big)^\ast,\qquad
\R_r^b\otimes(r',s')^\ast=\R_r^b\otimes(r',s')
\label{rsrs}
\ee
and
\bea
(1,b+kp)\otimes(1,b'+k'p)^\ast&=&\!\!\bigoplus_{j=|k-k'|+2,\,\mathrm{by}\,2}^{k+k'}\!\!\!\!
\bigoplus_{\beta}^{p-|p-b-b'|-1}\!\!\R_j^\beta\oplus
\bigoplus_{j=|k-k'|+1,\,\mathrm{by}\,2}^{k+k'-\mathrm{sg}(p-b-b')}\
\bigoplus_{\beta}^{|p-b-b'|-1}\!\!\R_j^\beta
\nn
&\oplus&\!\!\bigoplus_\beta^{(b-b')\mathrm{sg}(k'-k)-1}\!\R_{|k-k'|}^\beta
\oplus\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}
\!\!\Cc_{k-k'}[(1,\beta+|k-k'|p)]
\label{bkbk}
\eea
where $b,b'\in\mathbb{Z}_{0,p-1}$ and $k,k'\in\mathbb{N}_{0}$.
\\[.2cm]
Since $(r,1)$ is irreducible, we thus have
\be
(r,s)^\ast=(r,1)^\ast\otimes(1,s)^\ast=(r,1)\otimes(1,s)^\ast
\ee
from which it follows that the general fusion product $(r,s)\otimes(r',s')^\ast$ can be
computed as
\be
(r,s)\otimes(r',s')^\ast=\big((r,1)\otimes(r',1)\big)\otimes\big((1,s)\otimes(1,s')^\ast\big)
\ee
This yields the general fusion rule
\bea
(r,b+kp)\otimes(r',b'+k'p)^\ast&=&
\bigoplus_{i=|r-r'|+1,\,\mathrm{by}\,2}^{r+r'-1}
\bigg\{
\bigoplus_{j=|k-k'|+2,\,\mathrm{by}\,2}^{k+k'}\ \,
\bigoplus_{\ell=|i-j|+1,\,\mathrm{by}\,2}^{i+j-1}
\bigoplus_{\beta}^{p-|p-b-b'|-1}
\R_\ell^\beta\nn
&\oplus&
\bigoplus_{j=|k-k'|+1,\,\mathrm{by}\,2}^{k+k'-\mathrm{sg}(p-b-b')}\ \,
\bigoplus_{\ell=|i-j|+1,\,\mathrm{by}\,2}^{i+j-1}
\bigoplus_{\beta}^{|p-b-b'|-1}
\R_\ell^\beta\nn
&\oplus&
\bigoplus_{\ell=|i-|k-k'||+1,\,\mathrm{by}\,2}^{i+|k-k'|-1}
\bigoplus_{\beta}^{(b-b')\mathrm{sg}(k'-k)-1}
\R_\ell^\beta\nn
&\oplus&
\bigoplus_{\beta=|b-b'|+1,\,\mathrm{by}\,2}^{p-|p-b-b'|-1}\!\!\Cc_{k-k'}[(i,\beta+|k-k'|p)]
\bigg\}
\label{bkbk2}
\eea
In general, the fusion rules are not invariant under replacement by contragredient modules
as illustrated by
\be
(1,1)^\ast\otimes(r,s)=(r,s)\neq(r,s)^\ast=(1,1)\otimes(r,s)^\ast,\qquad p<s\neq kp
\ee
These trivial fusion rules are encoded in (\ref{bkbk2}) and correspond to
$r'=1,b'=1,k'=0$ or $r=1,b=1,k=0$, respectively.
As a consequence of (\ref{rsrs}),
we note that the extended fusion algebra contains the two isomorphic fusion subalgebras
\be
\big\langle(r,s),\R_r^b;\,r,s\in\mathbb{N},\,b\in\mathbb{Z}_{1,p-1}\big\rangle
\simeq
\big\langle(r,s)^\ast,\R_r^b;\,r,s\in\mathbb{N},\,b\in\mathbb{Z}_{1,p-1}\big\rangle
\label{KacKac}
\ee
of which the first one is the Kac fusion algebra.
It is also noted that the representations in (\ref{proj}) form an {\em ideal} of the
extended fusion algebra, still in accordance with the representations being projective.
The fusion rules (\ref{bkbk}) can be obtained by extending the applications of the
forgetful functor (\ref{F}) with
\be
\Fc[(1,s)^\ast]=(1,s)
\ee
and subsequently modifying the prescription or algorithm discussed following (\ref{Ffus}).
In that discussion, we formed rank-2 modules starting with the {\em lowest} Kac label
-- now we start with the {\em greatest} Kac label.
That is, after applying the forgetful functor to the fusion product $(1,s)\otimes(1,s')^\ast$
\be
\Fc[(1,s)\otimes(1,s')^\ast]=\bigoplus_{s''=|s-s'|+1,\,\mathrm{by}\,2}^{s+s'-1}(1,s'')
\label{Ffusast}
\ee
we initially focus on the Kac representation $(1,s_1'')$ with maximal Kac
label $s_1''=s+s'-1$. Depending on $p$, this will appear as the submodule $(1,rp+b)$ of
the rank-2 module $\R_r^b$ if and only if the matching module $(1,rp-b)$ also appears in the
decomposition in (\ref{Ffusast}). If not, the (contragredient) Kac representation
$\Cc_{s-s'}[(1,s_1'')]$
will appear `by itself' in the decomposition of the fusion product.
If a rank-2 module is not formed for $s=s'$,
the two options $(1,s_1'')$ and $(1,s_1'')^\ast$ turn out to be identical.
Having completed the examination of $(1,s_1'')$, we remove it
together with its potential partner $(1,rp-b)$ from the direct sum
in (\ref{Ffusast}) and repeat the analysis for $(1,s_2'')$ corresponding to the new maximal
Kac label $s_2''$. As before, this algorithm is continued until all the Kac representations
in (\ref{Ffusast}) have been accounted for.
It is straightforward to verify that this prescription yields the fusion rules (\ref{bkbk}).
\subsection{Polynomial fusion rings}
\label{SecRing}
Together with the fact that the fundamental fusion algebra is a subalgebra of the
Kac fusion algebra, the fusion rules
\bea
(1,2)\otimes(1,kp+b)&=&(1,kp+b-1)\oplus(1,kp+b+1)\nn
(1,p+1)\otimes(1,kp+b)&=&\bigoplus_{\beta}^{p-b}\R_k^\beta
\oplus\bigoplus_\beta^{b-2}\R_{k+1}^\beta
\oplus(1,(k+1)p+b)
\label{12p}
\eea
demonstrate that the Kac fusion algebra
is generated from repeated fusion of the Kac representations
\be
\big\{(1,1),(2,1),(1,2),(1,p+1)\big\}
\ee
that is,
\be
\big\langle\Jc^{\mathrm{Kac}}\big\rangle
=\big\langle(r,s);\ r,s\in\mathbb{N}\big\rangle
=\big\langle(1,1),(2,1),(1,2),(1,p+1)\big\rangle
\label{full2}
\ee
It is therefore natural to expect that this fusion algebra is
isomorphic to a polynomial ring in the three entities
$X\leftrightarrow(2,1)$, $Y\leftrightarrow(1,2)$ and $Z\leftrightarrow(1,p+1)$.
This is indeed what we find.
\\[.2cm]
{\bf Proposition 1.}\quad The Kac fusion algebra is isomorphic to the polynomial ring generated
by $X$, $Y$ and $Z$ modulo the ideal $(P_p(X,Y),Q_p(Y,Z))$, that is,
\be
\big\langle\Jc^{\mathrm{Kac}}\big\rangle
\simeq\mathbb{C}[X,Y,Z]/\big(P_p(X,Y),Q_p(Y,Z)\big)
\ee
where
\be
P_p(X,Y)=\big[X-2T_p(\tfrac{Y}{2})\big]U_{p-1}(\tfrac{Y}{2}),\qquad
Q_p(Y,Z)=\big[Z-U_p(\tfrac{Y}{2})\big]U_{p-1}(\tfrac{Y}{2})
\label{PQ}
\ee
For $r\in\mathbb{N}$, $k\in\mathbb{N}_0$ and $b\in\mathbb{Z}_{0,p-1}$,
the isomorphism reads
\bea
(r,kp+b)&\leftrightarrow&
U_{r-1}(\tfrac{X}{2})
\Big(U_{kp+b-1}(\tfrac{Y}{2})+\big[Z^k-U_{p}^k(\tfrac{Y}{2})\big]U_{b-1}(\tfrac{Y}{2})\Big)\nn
\R_r^b&\leftrightarrow&
(2-\delta_{b,0})U_{r-1}(\tfrac{X}{2})T_b(\tfrac{Y}{2})U_{p-1}(\tfrac{Y}{2})
\eea
where $T_n(x)$ and $U_n(x)$ are Chebyshev polynomials of the first and second kind,
respectively.
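For convenience, we recall the standard trigonometric characterisation of these polynomials,

```latex
\be
T_n(\cos\theta)=\cos n\theta,\qquad
U_n(\cos\theta)=\frac{\sin(n+1)\theta}{\sin\theta}
\ee
```

from which the three-term recursion $f_{n+1}(x)=2x\,f_n(x)-f_{n-1}(x)$, valid for both families, follows directly.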
\\[.2cm]
{\bf Proof.}\quad The relation $P_p(X,Y)=0$ corresponds to the identification
$(2,p)\equiv(1,2p)$ and encodes $(r,p)\equiv(1,rp)$ more generally, cf.\! (\ref{UU}),
while the relation $Q_p(Y,Z)=0$ follows from the fusion rule
\be
(1,p)\otimes(1,p+1)=\bigoplus_{\beta}^{p-2}\R_1^\beta\oplus(1,2p)
\ee
The remaining fusion rules are then verified straightforwardly in the polynomial ring.
Here we only demonstrate the two fusion rules in (\ref{12p}).
The first of these follows immediately from the recursion relation for the Chebyshev
polynomials. To show the second of the fusion rules, we note
the basic decomposition rules
\be
U_m(x)U_n(x)=\sum_{j=|m-n|,\,\mathrm{by}\,2}^{m+n}U_j(x),\qquad
2T_m(x)U_{n-1}(x)=U_{n+m-1}(x)+\mathrm{sg}(n-m)U_{|n-m|-1}(x)
\ee
where $U_{-1}(x)=0$. As a consequence, we have
\be
U_{p-1}(x)\sum_{j=0}^{k-1}U_p^{k-j-1}(x)U_{jp+b-2}(x)=U_{b-1}(x)U_p^k(x)-U_{kp+b-1}(x)
\ee
which is established by induction in $k$ and shows that the expression on the right side is
divisible by $U_{p-1}(x)$. This is of importance when multiplied by $Z$ due to
the form of $Q_p(Y,Z)$. With the additional observation that
\be
U_{r-1}(\tfrac{X}{2})U_{p-1}(\tfrac{Y}{2})\equiv U_{rp-1}(\tfrac{Y}{2})\quad
(\mathrm{mod}\ P_p(X,Y))
\label{UU}
\ee
which follows by induction in $r$, the second fusion rule readily follows.
$\quad\Box$
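The two Chebyshev decomposition rules used in the proof are classical. As an independent sanity check, they can be verified symbolically, for instance with Python's sympy (a sketch; the tested index ranges are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
T = sp.chebyshevt
# Convention U_{-1} = 0, as in the text
U = lambda n, y: 0 if n == -1 else sp.chebyshevu(n, y)

# U_m U_n = sum_{j=|m-n|, by 2}^{m+n} U_j
for m in range(5):
    for n in range(5):
        rhs = sum(U(j, x) for j in range(abs(m - n), m + n + 1, 2))
        assert sp.expand(U(m, x) * U(n, x) - rhs) == 0

# 2 T_m U_{n-1} = U_{n+m-1} + sg(n-m) U_{|n-m|-1}
for m in range(5):
    for n in range(1, 5):
        sg = (n > m) - (n < m)
        rhs = U(n + m - 1, x) + sg * U(abs(n - m) - 1, x)
        assert sp.expand(2 * T(m, x) * U(n - 1, x) - rhs) == 0
```

Both identities are immediate from the trigonometric representations $T_n(\cos\theta)=\cos n\theta$ and $U_{n-1}(\cos\theta)=\sin n\theta/\sin\theta$ together with product-to-sum formulas.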
\\[.2cm]
Extending the arguments just presented for the Kac fusion algebra, one
finds that the extended Kac fusion algebra
(\ref{FusCon}) is also generated from repeated fusion of a small number of Kac representations
\be
\big\langle(r,s),(r,s)^\ast;\ r,s\in\mathbb{N}\big\rangle
=\big\langle(1,1),(2,1),(1,2),(1,p+1),(1,p+1)^\ast\big\rangle
\label{full3}
\ee
and that it is isomorphic to a polynomial ring.
\\[.2cm]
\noindent
{\bf Proposition 2.}\quad The extended Kac fusion algebra (\ref{FusCon})
is isomorphic to the
polynomial ring generated
by $X$, $Y$, $Z$ and $Z^\ast$ modulo the ideal
$(P_p(X,Y),Q_p(Y,Z),Q_p(Y,Z^\ast),R_p(Y,Z,Z^\ast))$, that is,
\be
\big\langle\Jc^{\mathrm{Cont}}\big\rangle
\simeq\mathbb{C}[X,Y,Z,Z^\ast]/\big(P_p(X,Y),Q_p(Y,Z),Q_p(Y,Z^\ast),R_p(Y,Z,Z^\ast)\big)
\ee
where the polynomials $P_p$ and $Q_p$ are defined in (\ref{PQ}) while
\be
R_p(Y,Z,Z^\ast)=ZZ^\ast-U_p^2(\tfrac{Y}{2})
\label{R}
\ee
For $r\in\mathbb{N}$, $k\in\mathbb{N}_0$ and $b\in\mathbb{Z}_{0,p-1}$,
the isomorphism reads
\bea
(r,kp+b)&\leftrightarrow&
U_{r-1}(\tfrac{X}{2})
\Big(U_{kp+b-1}(\tfrac{Y}{2})+\big[Z^k-U_{p}^k(\tfrac{Y}{2})\big]U_{b-1}(\tfrac{Y}{2})\Big)\nn
(r,kp+b)^\ast&\leftrightarrow&
U_{r-1}(\tfrac{X}{2})
\Big(U_{kp+b-1}(\tfrac{Y}{2})+\big[(Z^\ast)^k-U_{p}^k(\tfrac{Y}{2})\big]U_{b-1}(\tfrac{Y}{2})\Big)\nn
\R_r^b&\leftrightarrow&
(2-\delta_{b,0})U_{r-1}(\tfrac{X}{2})T_b(\tfrac{Y}{2})U_{p-1}(\tfrac{Y}{2})
\eea
{\bf Proof.}\quad Compared to the proof of Proposition 1, the essential new feature
is the appearance of $Z^\ast$. The relation $Q_p(Y,Z^\ast)=0$ plays the same
role for the contragredient Kac representations and $Z^\ast$ as
$Q_p(Y,Z)=0$ does for the Kac representations and $Z$. This yields the
part of the polynomial ring corresponding to (\ref{KacKac}).
The relation $R_p(Y,Z,Z^\ast)=0$ corresponds to the fusion rule
\be
(1,p+1)\otimes(1,p+1)^\ast=(1,1)\oplus\bigoplus_\beta^{p-3}\R_1^\beta\oplus\R_2^1
\ee
To establish the general fusion rule (\ref{bkbk}) in the ring picture, we first use induction in $n$
to establish
\be
U_p^{2n}(\tfrac{Y}{2})Z^m\equiv Z^m
+\sum_{j=0}^{n-1}U_p^{m+2j}(\tfrac{Y}{2})U_{p-1}(\tfrac{Y}{2})
U_{p+1}(\tfrac{Y}{2})
\quad (\mathrm{mod}\ Q_p(Y,Z)),\qquad n\in\mathbb{N}
\ee
and similarly for $Z$ replaced by $Z^\ast$. This is needed when reducing
\be
Z^k(Z^\ast)^{k'}\equiv U_p^{2\min(k,k')}(\tfrac{Y}{2})\begin{cases} Z^{k-k'},\quad&k\geq k' \\
(Z^\ast)^{k'-k},\quad&k<k' \end{cases}
\qquad (\mathrm{mod}\ R_p(Y,Z,Z^\ast))
\ee
For simplicity, we let $k\geq k'$ in which case we find
\be
(1,b+kp)\otimes(1,b'+k'p)^\ast\leftrightarrow
\big[Z^{k-k'}-U_p^{k-k'}(\tfrac{Y}{2})\big]U_{b-1}(\tfrac{Y}{2})U_{b'-1}(\tfrac{Y}{2})
+U_{kp+b-1}(\tfrac{Y}{2})U_{k'p+b'-1}(\tfrac{Y}{2})
\ee
This polynomial expression is recognized as corresponding to the right side of (\ref{bkbk}).
$\quad\Box$
\section{Conclusion}
We have discussed the representation content and fusion algebras of the
logarithmic minimal model ${\cal LM}(1,p)$.
We have thus proposed a classification of the entire family of Kac representations
as submodules of Feigin-Fuchs modules and presented a conjecture
for their fusion algebra. To test these proposals, we have used a combination
of the lattice approach to ${\cal LM}(1,p)$ and applications of the NGK algorithm.
We have also discussed a natural extension of the representation content by
inclusion of the modules contragredient to the Kac representations,
and we have presented a conjecture for the corresponding fusion algebra.
Both this extended fusion algebra and the conjectured Kac fusion algebra itself
were then shown to be isomorphic to
polynomial fusion rings which were described explicitly.
Continuing the work in \cite{BFGT0901} on a Kazhdan-Lusztig-dual
quantum group for the logarithmic minimal model ${\cal LM}(1,p)$,
fusion of Kac representations is considered in \cite{BGTnotes}.
The corresponding fusion algebra appears to be equivalent to the one
discussed here. This is very reassuring for both methodologies
and offers independent evidence for our conjectured Kac fusion algebra.
The work presented here pertains to the logarithmic minimal models
${\cal LM}(1,p)$, but the methods used in obtaining the various results
are expected to extend straightforwardly to the general family of
logarithmic minimal models ${\cal LM}(p,p')$.
We hope to discuss the corresponding classification of Kac
representations and their fusion algebras elsewhere.
The case ${\cal LM}(2,3)$ is particularly interesting as it describes
critical percolation.
We find the remarkably simple expression (\ref{D21})
in the singular vector conjecture very intriguing.
Preliminary results indicate that it can be extended from $\D_{2,1}$
to general $\D_{r,1}$ and even to general logarithmic minimal models ${\cal LM}(p,p')$.
We also hope to discuss this elsewhere.
In the ${\cal W}$-extended picture ${\cal WLM}(1,p)$, Yang-Baxter integrable boundary conditions
associated with irreducible or projective representations of the triplet ${\cal W}$-algebra
${\cal W}(p)$ were introduced in \cite{PRR0803}.
With the results of the present work, it is natural to expect that there also exist Yang-Baxter
integrable boundary conditions associated with the reducible yet indecomposable
${\cal W}(p)$-representations of rank 1 appearing in \cite{FGST0512}.
This is indeed what we find as we will discuss elsewhere \cite{Ras1106}.
\section*{Acknowledgments}
\vskip.1cm
\noindent
This work is supported by the Australian Research Council
under the Future Fellowship scheme, project number FT100100774.
The author thanks Paul A. Pearce, David Ridout,
Philippe Ruelle and Ilya Yu.\! Tipunin
for helpful discussions and comments,
and the authors of \cite{BGTnotes} for sharing their results prior to publication.
\section{Introduction}\label{sec:intro}
SMC methods are amongst the most widely used computational techniques in statistics, engineering, physics, finance and many other disciplines;
see \cite{doucet} for a recent overview.
They are designed to approximate a sequence $\{ \eta_n \}_{n \geq 0}$ of probability distributions of increasing dimension or complexity. The method uses $N \geq 1$
weighted samples, or particles, generated in parallel and propagated via Markov kernels and resampling methods.
The accuracy of the method increases as the number of particles grows, and the method is typically asymptotically exact.
Standard SMC methodology is by now very well understood with regards to its convergence properties and several consistency results have been proved \cite{chopin1,delmoral}.
SMC methods have also recently been proved to be stable in certain high-dimensional contexts \cite{beskos}.
In this article, we are concerned with \emph{adaptive} SMC methods; in an effort to improve algorithmic efficiency, the weights and/or Markov proposal kernels can depend upon the history of the simulated process. Such procedures appear in a wealth of articles including \cite{chopin,delmoralabc,jasra,schafer} and have important applications in, for example, econometrics, population genetics and data assimilation. The underlying idea of these algorithms is that, using the particle approximation $\eta^N_n$ of the distribution $\eta_n$, one can exploit the induced information to build effective proposals or even to \emph{determine} the next probability distribution in the sequence; this is often achieved by using the expectation $\eta^N_n(\xi_{n+1})$ of a summary statistic $\xi_{n+1}$ with respect to the current SMC approximation $\eta^N_n$. In other cases, one can use the particles to determine the next distribution in an artificial sequence of densities; we expand upon this point below.
Such approaches are expected to lead to algorithms that are more efficient than their `non-adaptive' counterparts. Critically, such ideas also deliver more automated algorithms by reducing the number of user-specified tuning parameters.
Whilst the literature on adaptive MCMC methods is by now well-developed e.g.~\cite{andrieu} and sufficient conditions for an adaptive MCMC algorithm to be ergodic are well-understood, the analysis of adaptive SMC algorithms is still in its infancy.
To the best of our knowledge, a theoretical study of the consistency and fluctuation properties
of adaptive SMC algorithms is lacking
in the current literature. This article aims at filling this critical gap in the theory of SMC methods. Some preliminary results can be found, under exceptionally strong conditions, in \cite{crisan,jasra}. Proof sketches are given in \cite{delmoralabc} with some more realistic but limited analysis in \cite{giraud}.
We are not aware of any other asymptotic analysis of this particular class of algorithms in the literature.
Contrary to adaptive MCMC algorithms, we show in this article that it is reasonable to expect most adaptive SMC methods to be asymptotically correct.
\begin{comment}
The underlying idea of our approach for proving asymptotic results is to consider a `perfect' algorithm, as in \cite{giraud}, for which the proposals and/or weights use perfect information of the exact probability distribution at the previous time-point. This ideal algorithm could also be thought as the one where the adaptive decisions are made with an infinite number of particles.
We prove
a WLLN and a multivariate CLT for the SMC method, also referred to as `practical algorithm' in the sequel, which approximates the perfect algorithm.
\end{comment}
\subsection{Results and Structure}
This article explores two distinct directions.
In the first part, an asymptotic analysis of a class of SMC methods with adaptive Markov kernels and weights is carried out. The second part of the article looks at the case where an additional layer of randomness is taken into account through an adaptive tempering procedure.
A weak law of large numbers (WLLN) and a central limit theorem (CLT) relevant to each situation are proved. In both cases we consider a sequence of target distributions $\{ \eta_{n} \}_{n\ge 0}$ defined on a corresponding sequence of measurable spaces $(E_n,\mathscr{E}_n)_{n\geq 0}$.
We write $\eta_{n}^{N}=(1/N)\sum_{i=1}^{N}\delta_{x_n^{i}}$ for the $N$-particle SMC approximation of $\eta_n$, with $\delta_{x_n}$ the Dirac measure at $x_n \in E_n$ and $\{x_n^{i}\}_{i=1}^{N} \in E_n^N$ the collection of particles at time $n\ge 0$.
In the first part of the paper, for each $n \geq 1$ we consider parametric families, indexed by a parameter $\xi\in \mathbb R^d$, of Markov kernels $M_{n, \xi}: E_{n-1} \times \mathscr{E}_n \to \mathbb R_+$ and potential functions $G_{n-1, \xi}: E_{n-1} \to \mathbb R_{+}$. To construct the particle approximation $\eta^N_{n}$, the \emph{practical} SMC algorithm exploits summary statistics $\xi_n: E_{n-1} \to \mathbb R^d$ by reweighting and propagating the particle approximation $\eta^N_{n-1}$ through the potential $G_{n-1,\eta^N_{n-1}(\xi_n)}$ and the Markov kernel $M_{n,\eta_{n-1}^N(\xi_n)}$. This is a substitute for the \emph{perfect} algorithm (as also used in \cite{giraud}, and which cannot be implemented) which employs the Markov kernel $M_{n,\eta_{n-1}(\xi_n)}$ and weight function
$G_{n-1,\eta_{n-1}(\xi_n)}$. We prove a WLLN and a CLT for both the approximation of the probability distribution $\eta_n$ and its normalising constant.
This set-up is relevant, for example, in the context of sequential Bayesian parameter inference \cite{chopin,kantas} when $\{ \eta_{n} \}_{n \geq 0}$ is a sequence of posterior distributions that corresponds to increasing amount of data. The Markov kernel $M_{n,\eta_{n-1}^{N}{(\xi_n)}}$ is user-specified
and its role is to efficiently move the particles within the state space. In many situations the Markov kernel $M_{n,\eta_{n-1}^{N}{(\xi_n)}}$ is constructed so that it leaves the distribution $\eta_n$ invariant; a random walk Metropolis kernel that uses the estimated covariance structure of $\eta_{n-1}^{N}$ for scaling its jump proposals is a popular choice.
The case when there is also a tuned parameter in the weight function $G_{n,\eta_{n-1}^{N}({\xi_n})}$
is relevant to particle filters \cite{doucet}, as described in Section \ref{sec:ex_filt}.
The second part of this article investigates an adaptive tempering procedure.
Standard MCMC methods can be inefficient for directly exploring complex probability distributions involving high-dimensional state spaces, multi-modality, greatly varying scales, or combination thereof. It is a standard approach to introduce a bridging sequence of distributions $\{\eta_n\}_{n=0}^{n=n_*}$ between a distribution $\eta_0$ that is easy to sample from and the distribution of interest $\eta_{n_*} \equiv \pi$. In accordance with the simulated annealing literature, the probability distribution of interest is written as $\pi(dx) = Z^{-1} \, e^{-\beta_* \, V(x)} \, m(dx)$ for a potential $V$, temperature parameter $\beta_*\in \mathbb R$, dominating measure $m(dx)$ and normalisation constant $Z$; the bridging sequence of distributions is constructed by introducing a ladder of temperature parameters $\beta_0 \leq \beta_1 \leq \cdots \leq \beta_{n_*} =: \beta_*$ and setting
$\eta_{n}(dx) = Z(\beta_n)^{-1} \, e^{-\beta_n \, V(x)} \, m(dx)$
for a normalisation constant $Z(\beta_n)$. The choice of the bridging sequence of distributions is an important and complex problem, see e.g.~\cite{gelman}. To avoid the task of having to pre-specify a potentially large number of temperature parameters, an adaptive SMC method can compute them `on the fly' \cite{jasra,schafer}, thus obtaining a random increasing sequence of temperature parameters
$\big\{ \beta_n^N \big\}_{n \geq 0}$. In this article, we adopt the following strategy: assuming a particle approximation $\eta_{n-1}^N = (1/N) \sum_{i=1}^N \delta_{x^i_{n-1}}$ with temperature parameter $\beta_{n-1}^{N}$, the particles are assigned weights proportional to $e^{-(\beta^N_n - \beta^N_{n-1}) \, V(x^i_{n-1})}$ to represent the next distribution in the sequence; the choice of $\beta^N_{n}$ is determined from the particle collection $\{x_{n-1}^{i}\}_{i=1}^{N}$ by ensuring a minimum effective sample size (ESS) (it is described later on, why this might be a sensible choice). This can efficiently be implemented using a bisection method; see e.g.~\cite{jasra}. We prove a WLLN and a CLT for both the approximation of the probability distribution $\eta_n$ and the estimates of the normalising constants $Z(\beta_n)$.
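The bisection search for the next temperature described above is easy to implement. The following is a minimal sketch (function and variable names are ours; it is implicitly assumed that the ESS at $\beta^N_{n-1}$ already exceeds the target):

```python
import numpy as np

def next_temperature(V_vals, beta_prev, beta_max, target_ess, iters=100):
    """Choose the next temperature adaptively: the largest beta in
    (beta_prev, beta_max] such that the effective sample size of the
    weights exp(-(beta - beta_prev) V(x_i)) is at least target_ess,
    located by bisection."""
    def ess(beta):
        logw = -(beta - beta_prev) * np.asarray(V_vals)
        w = np.exp(logw - logw.max())          # stabilised in log space
        return w.sum() ** 2 / (w ** 2).sum()
    if ess(beta_max) >= target_ess:            # final temperature reachable
        return beta_max
    lo, hi = beta_prev, beta_max               # invariant: ess(lo) >= target > ess(hi)
    for _ in range(iters):                      # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= target_ess else (lo, mid)
    return 0.5 * (lo + hi)
```

The particles are then weighted by $e^{-(\beta^N_n-\beta^N_{n-1})V(x^i_{n-1})}$ and resampled; the number of bisection iterations is a tolerance choice, not something prescribed by the scheme.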
One of the contributions of the article is the proof that the asymptotic variance in the CLT, for some algorithms in the first part of the paper, is \emph{identical} to the one of the `perfect' SMC algorithm using the ideal kernels.
One consequence of this effect is that if the asymptotic variance associated to the (relative) normalizing constant estimate increases linearly with respect to time (see e.g.~\cite{cerou}), then so does the asymptotic variance for the adaptive algorithm. We present numerical results on a complex high-dimensional posterior distribution associated with the Navier-Stokes model (as in e.g.~\cite{kantas}), where adapting the proposal kernels over hundreds of different directions is critical for the efficiency of the algorithm. Whilst our theoretical result (with regards to the asymptotic variance) only holds for the case where one adapts the proposal kernel, the numerical application will involve much more advanced adaptation procedures. These experiments provide some evidence that our theory could be relevant
in more general scenarios.
This article is structured as follows. In Section \ref{sec:algo} the adaptive SMC algorithm is introduced and the associated notations are detailed.
In Section \ref{sec:exam1} we provide some motivating examples for the use of adaptive SMC.
In Section \ref{sec:main_res}
we study the asymptotic properties
of a class of SMC algorithms with adaptive Markov kernels and weights.
In Section \ref{sec:annealed},
we extend our analysis to the case where an adaptive tempering scheme is taken into account. In each situation, we prove a WLLN and a CLT.
In Section \ref{sec:exam}, we verify that our assumptions
hold when using the adaptive SMC algorithm in a
real scenario. In addition, we provide numerical results for the Navier-Stokes model and some theoretical insights into
the effect of the dimension of the adapted statistic.
The article is concluded in Section \ref{sec:summ} with a discussion of future work.
The appendix features a proof of one of the results in the main text.
\section{Algorithm and Notations}\label{sec:algo}
In this section we provide the necessary notations and describe the SMC algorithm with adaptive Markov kernels and weights. The description of the adaptive tempering procedure is postponed to Section \ref{sec:annealed}.
\subsection{Notations and definitions}
Let $(E_n,\mathscr{E}_n)_{n\geq 0}$ be a sequence of measurable spaces endowed with a countably generated $\sigma$-field $\mathscr{E}_n$. The set $\mathcal{B}_b(E_n)$ denotes the class of bounded $\mathscr{E}_n/\mathbb{B}(\mathbb R)$-measurable functions on $E_n$, where $\mathbb{B}(\mathbb{R})$ is the Borel $\sigma$-algebra on $\mathbb{R}$. The supremum norm is written as $\|f\|_{\infty} = \sup_{x\in E_n}|f(x)|$ and $\mathcal{P}(E_n)$ is the set of probability measures on $(E_n,\mathscr{E}_n)$. We will consider non-negative operators $K : E_{n-1} \times \mathscr{E}_n \rightarrow \mathbb R_+$ such that for each $x \in E_{n-1}$ the mapping $A \mapsto K(x, A)$ is a finite non-negative measure on $\mathscr{E}_n$ and for each $A \in \mathscr{E}_n$ the function $x \mapsto K(x, A)$ is $\mathscr{E}_{n-1} / \mathbb{B}(\mathbb{R})$-measurable; the kernel $K$ is Markovian if $K(x, dy)$ is a probability measure for every $x \in E_{n-1}$.
For a finite measure $\mu$ on $(E_{n-1},\mathscr{E}_{n-1})$ and Borel test function $f \in \mathcal{B}_b(E_n)$ we define
\begin{equation*}
\mu K : A \mapsto \int K(x, A) \, \mu(dx)\ ;\quad
K f : x \mapsto \int f(y) \, K(x, dy)\ .
\end{equation*}
We will use the following notion of continuity in several places in this article.
\begin{definition} \label{def.unif.cont}
Let $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ be three metric spaces. A function $f: \mathcal{X} \times \mathcal{Y} \to \mathcal{Z}$ is continuous at $y_0\in \mathcal{Y}$ uniformly on $\mathcal{X}$ if
\begin{equation} \label{eq.uniform.cont}
\limsup_{\delta \to 0^+} \; \Big\{ d_{\mathcal{Z}}
\big(f(x,y), f(x,y_0) \big) \; : \; x \in \mathcal{X}, \; d_{\mathcal{Y}}(y,y_0) < \delta \Big\} = 0\ .
\end{equation}
\end{definition}
We write $\rightarrow_{\mathbb{P}}$ and $\Rightarrow$ to denote convergence in probability and in distribution. The Kronecker product $u \otimes v$ of two vectors $u,v \in \mathbb R^d$ designates the matrix $u \cdot v^{\top} \in
\mathbb{R}^{d\times d}$; the covariance of a function $\phi\in\mathcal{B}_b(E)^r$ with respect to a probability measure $\mu\in\mathcal{P}(E)$ is denoted by
$\Sigma_{\mu}(\phi)
=
\int_E [\phi(x)-\mu(\phi)] \otimes [\phi(x)-\mu(\phi)] \, \mu(dx)$.
\subsection{SMC Algorithm}\label{sec:algo1}
For each index $n \geq 1$, we consider Markov operators $M_{n,\xi}: E_{n-1} \times \mathscr{E}_n \rightarrow \mathbb R_+$ and weight functions $G_{n-1, \xi}: E_{n-1} \to \mathbb R_+$ parametrized by $\xi \in \mathbb R^d$. The adaptive SMC algorithm to be described exploits summary statistics $\xi_n: E_{n-1} \to \mathbb R^d$ and aims at approximating the sequence of probability distributions $\{\eta_n\}_{n \geq 0}$, on the measurable spaces $(E_n,\mathscr{E}_n)_{n\geq 0}$,
defined via their operation on a test function $\phi_n \in \mathcal{B}_b(E_n)$ as
\begin{equation}
\label{eq:eta_def}
\eta_n(\phi_n) := \gamma_n(\phi_n) / \gamma_n(1)
\end{equation}
where $\gamma_n$ is the unnormalised measure on $(E_n,\mathscr{E}_n)$ given by
\begin{equation}
\label{eq:gamma_def}
\gamma_n(\phi_n) := \mathbb{E}\,\Big[\prod_{p=0}^{n-1} G_p(X_p)\cdot\phi_n(X_n)\Big]\ .\end{equation}
The above expectation is under the law of a non-homogeneous Markov chain $\big\{ X_n \big\}_{n \geq 0}$ with initial distribution $X_0 \sim \eta_0 \equiv \gamma_0$ and transition $\mathbb{P}\,[\,X_{n} \in A \mid X_{n-1}=x\,] = M_n(x,A)$ where we have used the notations
\begin{equation*}
M_n \equiv M_{n, \eta_{n-1}(\xi_n)}\ ; \quad
G_{n} \equiv G_{n, \eta_{n}(\xi_{n+1})}\ .
\end{equation*}
In practice, the expectations $\eta_{n-1}(\xi_n)$ of the summary statistics are not analytically tractable and it is thus impossible to simulate from the Markov chain $\{X_n\}_{n \geq 0}$ or compute the weights $G_n$. Nevertheless, for the purpose of analysis, we introduce the following idealized algorithm, referred to as the \emph{perfect} SMC algorithm in the sequel, that propagates a set of $N \geq 1$ particles by sampling from the distribution
\begin{equation}
\label{eq:strange}
\mathbb{P}\big(\,d(x_0^{1:N},x_1^{1:N},\ldots,x_n^{1:N})\,\big)
=
\prod_{i=1}^N \eta_0(dx_0^i) \, \prod_{p=1}^n \prod_{i=1}^N\Phi_{p}(\eta_{p-1}^N)(dx_p^i)
\end{equation}
where the $N$-particle approximation of the distribution \eqref{eq:eta_def} is defined as
\begin{equation} \label{eq.approx.normalized.measure}
\eta_n^{N}= \frac{1}{N} \, \sum_{i=1}^N \delta_{x_n^i}\ .
\end{equation}
In \eqref{eq:strange}, the operator $\Phi_n: \mathcal{P}(E_{n-1}) \to \mathcal{P}(E_{n})$ is
\begin{equation*}
\Phi_n(\mu)(dy) = \frac{\mu(G_{n-1}M_{n})(dy)}{\mu(G_{n-1})} \ .
\end{equation*}
Expression \eqref{eq:strange} is a mathematically concise way to describe a standard particle method that begins by sampling $N$ i.i.d.~particles from
$\eta_0$ and, given particles $\{ x_{n-1}^{i}\}_{i=1}^{N}$, performs multinomial resampling according to the unnormalised weights
$G_{n-1}(x_{n-1}^i)$ before propagating the particles via the Markov kernel $M_{n}(x,dy)$.
The SMC algorithm that is actually simulated in practice, referred to as the \emph{practical} SMC algorithm in the sequel, has joint law
\begin{equation}
\label{eq:d1}
\mathbb{P}\,\big(\,d(x_0^{1:N},x_1^{1:N},\dots,x_n^{1:N})\,\big) = \prod_{i=1}^N \eta_0(dx_0^i) \,\prod_{p=1}^n \prod_{i=1}^N\Phi_{p,N}(\eta_{p-1}^N)(dx_p^i)\ .
\end{equation}
The operator $\Phi_{n,N}$ approximates the ideal one, $\Phi_{n}$, and is defined as
\begin{equation*}
\Phi_{n,N}(\mu)(dy) = \frac{\mu(G_{n-1,N} \, M_{n,N})(dy)}{\mu(G_{n-1,N})}\ .
\end{equation*}
We have used the short-hand notations
\begin{equation*}
M_{n,N} \equiv M_{n, \eta^N_{n-1}(\xi_n)}
\ ; \qquad
G_{n,N} \equiv G_{n, \eta^N_n(\xi_{n+1})}\ .
\end{equation*}
Throughout this article we assume that the potentials are strictly positive,
$G_{n,\xi}(x) > 0$ for all $x\in E_{n}$ and $\xi \in \mathbb R^d$
so that there is no possibility that the algorithm collapses.
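In code, one iteration of the practical algorithm — averaging the summary statistic over the current cloud, resampling with the adapted weights, then propagating through the adapted kernel — can be sketched as follows (the function names and the returned average weight are our conventions):

```python
import numpy as np

def adaptive_smc_step(particles, xi, G, M, rng):
    """One step of the practical algorithm: the summary statistic is
    averaged over the current cloud (this is eta^N_{n-1}(xi_n)), the
    particles are resampled multinomially with the induced weights
    G(., xi_bar), then moved with the induced kernel M(., xi_bar, rng)."""
    particles = np.asarray(particles, dtype=float)
    N = len(particles)
    xi_bar = np.mean([xi(p) for p in particles], axis=0)  # eta^N_{n-1}(xi_n)
    g = np.array([G(p, xi_bar) for p in particles])       # unnormalised weights
    idx = rng.choice(N, size=N, p=g / g.sum())            # multinomial resampling
    new = np.array([M(particles[i], xi_bar, rng) for i in idx])
    return new, g.mean()   # g.mean() is eta^N_{n-1}(G_{n-1,N}), used for gamma^N
```

The average weight returned at each step feeds into the normalising-constant estimate defined below.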
The particle approximation of the unnormalised distribution \eqref{eq:gamma_def} is defined as
\begin{equation}
\label{eq.approx.unnormalized.measure}
\gamma_n^N(\phi_n) = \Big\{ \prod_{p=0}^{n-1} \eta_p^N(G_{p,N}) \Big\} \, \eta^N_n(\phi_n)\ .
\end{equation}
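In practice the estimate \eqref{eq.approx.unnormalized.measure} is accumulated as a sum of log average weights, which avoids under- and overflow of the product for large $n$ (a minimal sketch; the helper name is ours):

```python
import math

def log_normalising_constant(mean_weights):
    """log gamma_n^N(1) = sum_p log eta_p^N(G_{p,N}), where mean_weights
    holds the per-step average unnormalised weights eta_p^N(G_{p,N})."""
    return sum(math.log(w) for w in mean_weights)
```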
It will be useful to introduce the non-negative operator
\begin{equation} \label{def.operator.Q}
Q_{n,N}(x,dy) = G_{n-1,N}(x) M_{n,N}(x,dy)
\end{equation}
and the idealised version
\begin{equation*}
Q_{n}(x,dy) = G_{n-1}(x) M_{n}(x,dy)
\equiv G_{n-1,\eta_{n-1}(\xi_n)}(x) M_{n,\eta_{n-1}(\xi_n)}(x,dy)\ .
\end{equation*}
We will often be interested in the properties of the involved operators as functions of $\xi$; we therefore also write $$Q_{n,\xi}(x,dy) := G_{n-1,\xi}(x) \, M_{n,\xi}(x,dy)$$ to emphasise the dependency on the parameter $\xi \in \mathbb R^d$. Unless otherwise stated, the differentiation operation $\partial_{\xi}$ at step $n$ is evaluated at the limiting parameter value $\xi=\eta_{n-1}(\xi_{n})$.
With these definitions, one can verify that the following identities hold
\begin{equation}
\label{eq:defPhi}
\eta_n(\phi_n) = \Phi_{n}(\eta_{n-1})(\phi_n) = \frac{\eta_{n-1}(Q_{n} \phi_n)}{\eta_{n-1}(G_{n-1})}\ ;\quad
\gamma_n(\phi_n) = \gamma_{n-1}(Q_n \phi_n)\ .
\end{equation}
Similar formulae are available for the $N$-particle approximations; if $\mathscr{F}_{n}^N$ designates the filtration generated by the particle system up-to (and including) time $n$ we have
\begin{equation}
\mathbb{E}\,\big[\eta^N_n(\phi_n) \mid \mathscr{F}_{n-1}^N \big] = \Phi_{n,N}(\eta^N_{n-1})(\phi_n)\ ; \quad
\mathbb{E}\,\big[\gamma^N_n(\phi_n) \mid \mathscr{F}_{n-1}^N \big] = \gamma^N_{n-1}(Q_{n,N} \phi_n)\ .
\end{equation}
In the sequel, we will use the expressions $\mathbb{E}_{n-1}[\, \cdot\, ]$ and $\mathrm{Var}_{n-1}[ \,\cdot\, ]$ to denote the conditional expectation $\mathbb{E}\,[\, \cdot \mid \mathscr{F}^N_{n-1}\,]$ and conditional variance $\mathrm{Var}\,[\,\cdot \mid \mathscr{F}^N_{n-1}\,]$ respectively.
\begin{rem}
Our results concern multinomial resampling at each time.
Extension of our analysis to adaptive resampling \cite{delm:12} is possible but would require many additional calculations and technicalities; this is left as a topic for future work.
\end{rem}
\section{Motivating Examples}\label{sec:exam1}
\subsection{Sequential Bayesian Parameter Inference}
\label{sec:seq_bi}
Consider Bayesian inference for the parameter $x \in E$, observations $y_i \in \mathcal{Y}$ and prior measure $\eta_0(dx)$. The posterior distribution $\eta_n$ after having observed $y_{1:n} \in \mathcal{Y}^{n}$ reads
\begin{equation*}
\eta_n(dx) = \big(\, \mathbb{P}\,[\,y_{1:n} \mid x\,] \,/\, \mathbb{P}\,[\,y_{1:n}\,]\, \big) \, \eta_0(dx)\, .
\end{equation*}
The approach in \cite{chopin} fits in the framework described in Section \ref{sec:algo1} with state spaces $E_n=E$ and potential functions $G_n(x)= \mathbb{P}\,[\,y_{n+1} \mid y_{1:n},x\,]$.
For an MCMC kernel $M_n \equiv M_{n,\eta_{n-1}(\xi_n)}$ with invariant measure $\eta_n$ the posterior distribution $\eta_n$ is given by $\eta_n(\phi_n) = \gamma_n(\phi_n) / \gamma_n(1)$
where the unnormalised measure $\gamma_n$ is defined as in \eqref{eq:gamma_def}.
A popular choice consists in choosing for $M_{n,\eta_{n-1}(\xi_n)}$ a random walk Metropolis kernel reversible with respect to $\eta_{n}$ and jump covariance structure matching the one of the distribution $\eta_{n-1}$. Under our assumptions, the analysis of Section \ref{sec:main_res} applies in this context.
Whilst such an example is quite simple it is indicative of more complex applications in the literature.
Article \cite{kantas} considers a state-space of dimension about $10^4$ and an adapted statistic of dimension about $500$.
In such a setting, pre-specifying the covariance structure of the random walk Metropolis proposals is impractical; the adaptive SMC strategy of Section~\ref{sec:algo} provides a principled framework for automatically setting this covariance structure, see also Section \ref{sec:num_ex}.
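To illustrate one such adaptation step, the proposal covariance of a random walk Metropolis kernel can be matched to the empirical covariance of the current particle population. This is only an indicative sketch (the function names and the $2.38^2/d$ scaling heuristic are our assumptions, not prescriptions from the paper):

```python
import numpy as np

def adapted_rwm_step(particles, log_target, rng):
    """One random walk Metropolis move per particle; the jump covariance
    is the empirical covariance of the population (this empirical
    covariance plays the role of the adapted summary statistic xi_n)."""
    n, d = particles.shape
    cov = np.cov(particles, rowvar=False) + 1e-6 * np.eye(d)  # regularised
    scale = 2.38 ** 2 / d               # common optimal-scaling heuristic
    props = particles + rng.multivariate_normal(np.zeros(d), scale * cov, size=n)
    log_alpha = log_target(props) - log_target(particles)
    accept = np.log(rng.uniform(size=n)) < log_alpha
    return np.where(accept[:, None], props, particles)

rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 2))
log_target = lambda x: -0.5 * np.sum(x ** 2, axis=1)  # standard Gaussian target
new_pts = adapted_rwm_step(pts, log_target, rng)
```

In a full SMC sampler this move would be applied after the resampling step, with the covariance recomputed from the current population at each time $n$.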
\subsection{Filtering}\label{sec:ex_filt}
This section illustrates the case of having an adaptive weight function.
Consider a state-space model with observations $Y_{1:n} \in \mathcal{Y}^n$, unobserved Markov chain $U_{0:n}\in \mathcal{U}^{n+1}$ and joint density with respect to a dominating measure $\lambda_{\mathcal{Y}}^{\otimes n} \otimes \lambda_{\mathcal{U}}^{\otimes n+1}$ given by
\begin{equation*}
\eta_0(u_0) \prod_{p=1}^n g_p(u_p,y_p) \, f_p(u_{p-1},u_p)\ .
\end{equation*}
The probability $\eta_0(u_0) \, \lambda_{\mathcal{U}}(du_0)$ is the prior distribution for the initial state of the unobserved Markov chain, $g_p(u_p, y_p) \, \lambda_{\mathcal{Y}}(dy_p)$ is the conditional observation probability at time $p$ and $f_p(u_{p-1}, u_p) \, \lambda_{\mathcal{U}}(du_p)$ describes the dynamics of the unobserved Markov process.
A standard particle filter with proposal at time $p$ corresponding to the Markov kernel $\mathbb{P}[U_{p} \in du_p \mid U_{p-1}=u_{p-1}] = m_p(u_{p-1}, u_p) \, \lambda_{\mathcal{U}}(du_p)$ has importance weights of the form
\begin{equation*}
G_p(x_p) = \frac{g_p(u_p,y_p) f_p(u_{p-1},u_p)}{m_p(u_{p-1},u_p)}
\end{equation*}
where here $x_p \equiv (x_p^{(1)}, x_p^{(2)}) \equiv (u_{p-1}, u_p)$. The process $\{X_p\}_{p=1}^n$ is Markovian with transition $M_p(x_{p-1}, dx_p) = \delta_{x_{p-1}^{(2)}}(dx_p^{(1)}) \, m_p(x_{p-1}^{(2)}, x^{(2)}_p) \, \lambda_{\mathcal{U}}(dx_p^{(2)})$. The marginals of the sequence of probability distributions $\eta_n$ described in Equation \eqref{eq:eta_def} are the standard predictors.
In practice, the choice of the proposal kernel $m_n$ is critical to the efficiency of the SMC algorithm. In such settings, one may want to exploit the information contained in the distribution $\eta_{n-1}$ in order to build efficient proposal kernels. Approximating the filter mean is a standard strategy. In these cases, both the Markov kernel $M_n$ and the weight function $G_{n-1}$ depend upon the distribution $\eta_{n-1}$; this is covered by the framework adopted in Section~\ref{sec:algo}. See \cite{doucet} and the references therein for ideas associated to such approaches.
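As an indicative sketch of this setting, consider a scalar linear-Gaussian model in which the proposal mean is shrunk towards the filter mean estimated from the previous particle population, so that both $M_n$ and $G_{n-1}$ depend on $\eta_{n-1}$. All model parameters and names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gauss_logpdf(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def adaptive_pf_step(u_prev, w_prev, y, rng, sig_f=1.0, sig_g=0.5):
    """One propagate/weight step: dynamics f_p = N(u_{p-1}, sig_f^2) and
    observation density g_p = N(u_p, sig_g^2); the proposal m_p mixes the
    dynamics mean with the current filter mean (the adapted statistic)."""
    filt_mean = np.average(u_prev, weights=w_prev)  # eta_{n-1}-dependent statistic
    prop_mean = 0.5 * (u_prev + filt_mean)          # adaptive proposal mean
    u_new = prop_mean + sig_f * rng.normal(size=u_prev.shape)
    # importance weight G_p = g_p * f_p / m_p evaluated at (u_{p-1}, u_p)
    logw = (gauss_logpdf(y, u_new, sig_g ** 2)
            + gauss_logpdf(u_new, u_prev, sig_f ** 2)
            - gauss_logpdf(u_new, prop_mean, sig_f ** 2))
    w = np.exp(logw - logw.max())
    return u_new, w / w.sum()

rng = np.random.default_rng(2)
u0 = rng.normal(size=1000)
u1, w1 = adaptive_pf_step(u0, np.full(1000, 1.0), y=0.3, rng=rng)
```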
\section{Asymptotic Results for Adaptive SMC via Summary Statistics}\label{sec:main_res}
In this section we develop an asymptotic analysis of the class of adaptive SMC algorithms described in Section \ref{sec:algo}.
After first stating our assumptions in Section \ref{sec.algo1.assump}, we give a WLLN in Section \ref{sec.algo1.WLLN} and a CLT in Section \ref{sec.algo1.CLT}.
\subsection{Assumptions} \label{sec.algo1.assump}
Our results will make use of conditions (A\ref{hyp:1}-\ref{hyp:2}) below.
By $\mbox{Dom}(\xi_n) \subset \mathbb{R}^d$ we denote a convex set that contains the range of the statistic $\xi_n:E_{n-1} \to \mathbb{R}^{d}$.
\begin{hypA}
\label{hyp:1}
For each $n \geq 0$, the function $(x,\xi) \mapsto G_{n, \xi}(x)$ is bounded and continuous at $\xi=\eta_{n}(\xi_{n+1})$ uniformly over $x\in E_n$. The statistics $\xi_{n+1}:E_n \to \mathbb R^d$ are bounded. For any test function $\phi_{n+1} \in \mathcal{B}_b(E_{n+1})$, the function $(x,\xi) \mapsto Q_{n+1, \xi} \phi_{n+1} (x)$ is bounded and continuous at
$\xi=\eta_{n}(\xi_{n+1})$ uniformly over $x\in E_n$.
\end{hypA}
\begin{hypA}
\label{hyp:2}
For each $n \geq 0$ and test function $\phi_{n+1} \in \mathcal{B}_b(E_{n+1})$, the function
$(x,\xi) \mapsto \partial_{\xi} Q_{n+1,\xi} \phi_{n+1}(x)$
is well defined on $E_{n} \times \mbox{Dom}(\xi_{n+1})$, bounded and continuous at $\xi = \eta_{n}(\xi_{n+1})$ uniformly over $x\in E_n$.
\end{hypA}
Assumptions (A\ref{hyp:1}-\ref{hyp:2}) are reasonably weak in comparison to some assumptions used in the SMC literature, such as in \cite{delmoral}, but are certainly not the weakest adopted for WLLN and CLTs (see e.g.~\cite{chopin1}).
The continuity assumptions in (A\ref{hyp:2}) are associated to the use of a first-order Taylor expansion. We have defined $\mbox{Dom}(\xi_p)$ as a convex set because we need to compute integrals along segments between points of $\mbox{Dom}(\xi_p)$. In general, we expect that the assumptions can be relaxed for unbounded functions at the cost of increased length and complexity of the proofs.
\subsection{Weak Law of Large Numbers}
\label{sec.algo1.WLLN}
In this section we establish a weak law of large numbers (WLLN). To do so, we state first a slightly stronger result that will be repeatedly used in the fluctuation analysis presented in Section \ref{sec.algo1.CLT}.
\begin{theorem}
\label{theo:wlln}
Assume (A\ref{hyp:1}). Let $\mathsf{V}$ be a Polish space and $\{V_N\}_{N \geq 0}$ a sequence of $\mathsf{V}$-valued random variables that converges in probability to $\mathsf{v} \in \mathsf{V}$. Let $n \ge 0$, $r \ge 1$ and $\phi_n: E_n \times \mathsf{V} \to \mathbb R^{r}$ be a bounded function continuous at $\mathsf{v} \in \mathsf{V}$ uniformly on $E_n$. The following limit holds in probability
\begin{equation*}
\lim_{N \to \infty} \eta_n^N\,[\,\phi_n(\cdot, V_N)\,]
=
\eta_n\,[\,\phi_n(\cdot, \mathsf{v})\,]\ .
\end{equation*}
\end{theorem}
\begin{cor}[WLLN]
\label{cor:wlln}
Assume (A\ref{hyp:1}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E_n \to \mathbb R^{r}$ be a bounded measurable function. The following limit holds in probability,
$\lim_{N \to \infty} \eta_n^N(\phi_n)
=
\eta_n(\phi_n)$.
\end{cor}
\begin{proof}[Proof of Theorem \ref{theo:wlln}]
It suffices to concentrate on the scalar case $r=1$.
The proof is by induction on $n$. The initial case $n=0$ is a direct consequence of WLLN for i.i.d.~random variables and Definition \ref{def.unif.cont}. For notational convenience, in the rest of the proof we write $\bar{\phi}_n(\cdot)$ instead of $\phi_n(\cdot, \mathsf{v})$.
We assume the result at rank $n-1$ and proceed to the induction step.
Since $V_N$ converges in probability to $\mathsf{v} \in \mathsf{V}$, Definition \ref{def.unif.cont} shows that it suffices to prove that $[\eta_n^N-\eta_n]\big(\bar{\phi}_n\big)$ converges in probability to zero. We use the decomposition
\begin{align*}
[\eta_n^N-\eta_n](\bar{\phi}_n)
&=
\big(\eta_n^N(\bar{\phi}_n)-\mathbb{E}_{n-1}[\eta_n^N(\bar{\phi}_n)]\big)
+
\big(\mathbb{E}_{n-1}[\eta_n^N(\bar{\phi}_n)]-\eta_n(\bar{\phi}_n)\big)\\
&=
[\eta_n^N-\Phi_{n,N}(\eta_{n-1}^N)](\bar{\phi}_n)
+
[\Phi_{n,N}(\eta_{n-1}^N)-\eta_n](\bar{\phi}_n)
=: A(N) + B(N)\ .
\end{align*}
To conclude the proof, we now prove that each of these terms converges to zero in probability.
\begin{itemize}
\item
Since the expected value of $A(N)$ is zero, it suffices to prove that its moment of order two also converges to zero as $N$ goes to infinity. To this end, it suffices to notice that
\begin{equation*}
\mathbb{E}_{n-1}\big[A(N)^2 \big] = \tfrac{1}{N}\,\mathbb{E}_{n-1}\Big[\big( \bar{\phi}_n(x^i_n) - \mathbb{E}_{n-1}[\bar{\phi}_n(x^i_n)] \big)^2 \Big] \leq \frac{\|\bar{\phi}_n\|^2_{\infty}}{N}\ .
\end{equation*}
\item
To treat the quantity $B(N)$, we use the definition of $\Phi_{n,N}(\eta_{n-1}^N)$ in \eqref{eq:defPhi} and decompose it as the sum of three terms $B(N) = B_1(N) + B_2(N) + B_3(N)$ with
\begin{align*}
B_1(N) &
=\eta_{n-1}^N \big\{ [Q_{n,N} - Q_n](\bar{\phi}_n) \big\} \, / \, \eta_{n-1}^N(G_{n-1,N})\ ;\\
B_2(N) &= [\eta_{n-1}^N-\eta_{n-1}]\big( Q_n(\bar{\phi}_n) \big) \, / \, \eta^N_{n-1}(G_{n-1,N})\ ;\\
B_3(N) &= \eta_{n-1}^N[ Q_n(\bar{\phi}_n) ] \times
\big\{
1 / \eta^N_{n-1}(G_{n-1,N}) - 1 / \eta_{n-1}(G_{n-1})
\big\}\ .
\end{align*}
We prove that $B_i(N)$ converges in probability to zero for $i=1,2,3$. The induction hypothesis shows that $\eta_{n-1}^N(\xi_n)$ converges to $\eta_{n-1}(\xi_n)$ in probability. By Assumption~\ref{hyp:1}, the bounded function $(x,\xi) \mapsto G_{n-1, \xi}(x)$ is continuous at $\xi=\eta_{n-1}(\xi_n)$ uniformly on $E_{n-1}$; the induction hypothesis applies and $\eta^N_{n-1}(G_{n-1,N})$ converges in probability to $\eta_{n-1}(G_{n-1})$. Similarly, since $Q_n(\bar{\phi}_n) \in \mathcal{B}_b(E_{n-1})$ is bounded by boundedness of $\bar{\phi}_n$, it follows that $\eta^N_{n-1}[ Q_n(\bar{\phi}_n) ]$ converges in probability to $\eta_{n-1}[ Q_n(\bar{\phi}_n) ]$. Slutsky's Lemma thus yields that $B_2(N)$ and $B_3(N)$ converge to zero in probability. Finally, note that by Assumption \ref{hyp:1} the bounded function $(x,\xi) \mapsto Q_{n, \xi}(x,\bar{\phi}_n)$ is continuous at $\xi=\eta_{n-1}(\xi_n)$ uniformly on $E_{n-1}$; the induction hypothesis yields
\begin{align*}
\lim_{N \to \infty}
\eta_{n-1}^N \big\{ [Q_{n,N} - Q_n](\bar{\phi}_n) \big\}
=
\lim_{N \to \infty}
&\big\{\,\eta_{n-1}^N[Q_{n,N}(\bar{\phi}_n)] -
\eta_{n-1}[Q_n(\bar{\phi}_n)]\,\big\}\\
&-
\lim_{N \to \infty}
\big\{\,\eta_{n-1}^N[Q_{n}(\bar{\phi}_n)] -
\eta_{n-1}[Q_{n}(\bar{\phi}_n)]\,\big\} = 0\ ,
\end{align*}
which is enough for concluding that $B_1(N)$ converges to zero in probability.
\end{itemize}
\end{proof}
As a corollary, one can establish a similar consistency result for the sequence of particle approximations $\gamma^N_n(\phi_n)$, defined in Equation \eqref{eq.approx.unnormalized.measure}, of the unnormalised quantity $\gamma_n(\phi_n)$.
\begin{cor}
\label{cor.wlln.normalisation}
Assume (A\ref{hyp:1}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E_n \to \mathbb R^{r}$ be a bounded measurable function. The following limit holds in probability,
$\lim_{N \to \infty} \gamma_n^N(\phi_n)
=
\gamma_n(\phi_n)$.
\end{cor}
\begin{proof}
Since $\gamma^N_n(\phi_n) = \gamma^N_n(1) \, \eta^N_n(\phi_n)$ and $\gamma_n(\phi_n) = \gamma_n(1) \, \eta_n(\phi_n)$, by Corollary \ref{cor:wlln} it suffices to prove that $\gamma^N_n(1) = \eta^N_0(G_0) \times \ldots \times \eta^N_{n-1}(G_{n-1})$ converges in probability to the value $\gamma_n(1)=\eta_0(G_0) \times \ldots \times \eta_{n-1}(G_{n-1})$. By Assumption \ref{hyp:1}, the potentials $\{ G_p \}_{p \geq 0}$ are bounded so that Corollary \ref{cor:wlln} applies and the quantity $\eta^N_{p}(G_{p})$ converges in probability to $\eta_{p}(G_{p})$ for any index $p \geq 0$. The conclusion directly follows.
\end{proof}
\subsection{Central Limit Theorems}
\label{sec.algo1.CLT}
In this section, for a test function $\phi_n: E_n \to \mathbb{R}^r$, we carry out a fluctuation analysis of the particle approximations $\gamma^N_n(\phi_n)$ and $\eta^N_n(\phi_n)$ around their limiting values. As expected, we prove that convergence occurs at the standard Monte-Carlo rate $N^{-1/2}$; a comparison with the perfect, non-adaptive algorithm is discussed in Section \ref{sec.stability}.
\begin{theorem} \label{thm.clt.unnormalised}
Assume (A\ref{hyp:1}-\ref{hyp:2}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E_n \to \mathbb R^{r}$ be a bounded measurable function. The sequence $\sqrt{N} \, [\gamma^N_n - \gamma_n](\phi_n)$ converges weakly to a centered Gaussian distribution with covariance
\begin{equation} \label{eq.asymp.var.unnormalised}
\sum_{p=0}^n \gamma_p(1)^2 \, \Sigma_{\eta_p}(\mathscr{L}_{p,n} \phi_n)
\end{equation}
where the linear operator $\mathscr{L}_p:\mathcal{B}_b(E_p)^r \to \mathcal{B}_b(E_{p-1})^r$ is defined by
\begin{equation} \label{eq.semigroup}
\mathscr{L}_p \phi_p =
\eta_{p-1}[\partial_{\xi} Q_p \phi_p ] \, \big( \, \xi_p - \eta_{p-1}(\xi_p) \, \big)
+ Q_p(\phi_p)
\end{equation}
with $\mathscr{L}_{p,n} := \mathscr{L}_{p+1} \circ \ldots \circ \mathscr{L}_{n}$ and $\mathscr{L}_{n,n} = \mathrm{Id}$.
\end{theorem}
\begin{proof}
For notational convenience, we concentrate on the scalar case $r=1$. The proof of the multi-dimensional case is identical, with covariance matrices replacing scalar variances.
We proceed by induction on the parameter $n \geq 0$.
The case $n=0$ follows from the usual CLT for i.i.d.\@ random variables.
To prove the induction step it suffices to show that for any $t \in \mathbb R$ the following identity holds
\begin{equation}
\label{eq.induction.norm.clt}
\lim_{N \to \infty} \, \mathbb{E}\,[\,e^{i t \sqrt{N} \,
[ \gamma^N_n-\gamma_n ](\phi_n)}\,]
=
e^{-\frac12 t^2 \, \gamma_n(1)^2 \, \Sigma_{\eta_n}(\phi_n)} \,
\lim_{N \to \infty} \,
\mathbb{E}\,[\,e^{i t \sqrt{N} \, [ \gamma^N_{n-1}-\gamma_{n-1} ](\mathscr{L}_n \phi_n)}\,]\ .
\end{equation}
\noindent
Indeed, assuming that the induction hypothesis holds at time $n-1$, we have that
\begin{equation*}
\lim_{N \to \infty} \; \mathbb{E}\,[\,e^{i t \sqrt{N} \, [ \gamma^N_{n-1}-\gamma_{n-1} ](\mathscr{L}_n \phi_n)}\,]
=
\exp\big\{ -\tfrac{1}{2} t^2 \, \sum_{p=0}^{n-1} \gamma_p(1)^2 \, \Sigma_{\eta_p}(\mathscr{L}_{p,n} \phi_n) \big\}\
\end{equation*}
and the proof of the induction step then follows from L\'evy's continuity theorem and \eqref{eq.induction.norm.clt}.
To prove \eqref{eq.induction.norm.clt} we use the following decomposition
\begin{align*}
\begin{aligned} \,
[\gamma^N_n-\gamma_n ](\phi_n)
&=
\big\{ \gamma^N_n(\phi_n) - \mathbb{E}_{n-1}[\gamma^N_n(\phi_n)] \big\}
+
\big\{ \mathbb{E}_{n-1}[\gamma^N_n(\phi_n)]
-
\gamma_n(\phi_n) \big\}\\
&=: \widetilde{A}(N) + \widetilde{B}(N)\ .
\end{aligned}
\end{align*}
Since $\widetilde{B}(N) \in \mathscr{F}^N_{n-1}$ the expectation $\mathbb{E}[e^{i t \sqrt{N} \, [ \gamma^N_n-\gamma_n ](\phi_n)}]$ can be decomposed as
\begin{align*}
\begin{aligned}
\mathbb{E} \Big[\Big( \mathbb{E}_{n-1}\big[ e^{it \sqrt{N} \widetilde{A}(N)}\big]
&-
e^{-\frac12 t^2 \, \gamma_n(1)^2 \, \Sigma_{\eta_n}(\phi_n)}
\Big) \times e^{it \sqrt{N} \, \widetilde{B}(N)} \Big]\\
&\qquad+
e^{-\frac12 t^2 \, \gamma_n(1)^2 \, \Sigma_{\eta_n}(\phi_n)} \times \mathbb{E} \big[ e^{it \sqrt{N} \, \widetilde{B}(N)} \big]\ .
\end{aligned}
\end{align*}
As a consequence, \eqref{eq.induction.norm.clt} follows once it is established that the limit
\begin{equation}
\label{eq.cv.prob.fourier}
\lim_{N \to \infty} \mathbb{E}_{n-1}\Big[ e^{it \sqrt{N} \widetilde{A}(N)} \Big]
=
\exp\big\{-\tfrac12 t^2 \, \gamma_n(1)^2 \, \Sigma_{\eta_n}(\phi_n) \big\}
\end{equation}
holds in probability and that $\sqrt{N} \, \widetilde{B}(N) = \sqrt{N} \, [\gamma^N_{n-1} - \gamma_{n-1}](\mathscr{L}_n(\phi_n)) + o_{\mathbb{P}}(1)$.
We finish the proof of Theorem \ref{thm.clt.unnormalised} by establishing these two results.
\begin{itemize}
\item
Quantity $\widetilde{A}(N)$ also reads as $\gamma^N_n(1) \, A(N)$ with $A(N) := \big[ \eta^N_n-\Phi_{n,N}(\eta^N_{n-1}) \big](\phi_n)$.
By Corollary \ref{cor.wlln.normalisation}, $\gamma_n^N(1)$ converges in probability to $\gamma_n(1)$; to prove that $\mathbb{E}_{n-1}\big[ e^{it \sqrt{N} \widetilde{A}(N)}\big]$ converges in probability to $\exp\big\{-\frac12 t^2 \, \gamma_n(1)^2 \, \Sigma_{\eta_n}(\phi_n) \big\}$ it thus suffices to show that $\mathbb{E}_{n-1}\big[ e^{it \sqrt{N} A(N)}\big]$ converges in probability to $\exp\big\{-\tfrac12 t^2 \, \Sigma_{\eta_n}(\phi_n) \big\}$. We will exploit the following identity
\begin{equation*}
\mathbb{E}_{n-1}\,\Big[\, e^{ i \, t \, \sqrt{N} \, A(N) }\,\Big]
=
\mathbb{E}_{n-1}\,\Big[\,e^{ i \, t \, \{ \phi_n(X_N)- \mathbb{E}_{n-1}[\phi_n(X_N)] \} / \sqrt{N}}\,\Big]^N
\end{equation*}
with $X_N$ distributed according to $\sum_{i=1}^N \frac{G_{n-1,N}(x_{n-1}^i)}{\sum_{j=1}^N G_{n-1,N}(x_{n-1}^j)} \, M_{n,N}(x_{n-1}^i, dx)$. Since the test function $\phi_n$ is bounded, a Taylor expansion yields that
\begin{equation*}
\mathbb{E}_{n-1}\,\Big[\, e^{ i \, t \, \{ \phi_n(X_N)- \mathbb{E}_{n-1}[\phi_n(X_N)] \} / \sqrt{N}}\,\Big]
=
1 - \tfrac{t^2}{N}\, \mathrm{Var}_{n-1}[\phi_n(X_N)] + N^{-3/2} \times \mathcal{O}_{\mathbb{P}}(1)\ .
\end{equation*}
Consequently, $\mathbb{E}_{n-1}[e^{i t \sqrt{N} \, A(N)}] = \exp\big\{ - t^2 \, \mathrm{Var}_{n-1}[\phi_n(X_N)] / 2\big\} + o_{\mathbb{P}}(1)$ and the proof is complete once it is shown that
\begin{align*}
\mathrm{Var}_{n-1}[\phi_n(X_N)]
&=
\sum_{i=1}^N G_{n-1,N}(x_{n-1}^i) M_{n,N}(\phi_n^2)(x_{n-1}^i) \; / \; \sum_{i=1}^N G_{n-1,N}(x_{n-1}^i)
\\
&\qquad -
\Big\{ \sum_{i=1}^N G_{n-1,N}(x_{n-1}^i) M_{n,N}(\phi_n)(x_{n-1}^i) \; / \; \sum_{i=1}^N G_{n-1,N}(x_{n-1}^i) \Big\}^2\\
&=
\eta^N_{n-1}\big[Q_{n, \eta^N_{n-1}(\xi_{n})}\phi_n^2 \big] \; / \; \eta^N_{n-1}\big[ G_{n-1, \eta^N_{n-1}(\xi_{n})} \big]
\\
&\qquad -
\Big\{ \eta^N_{n-1}\big[Q_{n, \eta^N_{n-1}(\xi_{n})}\phi_n \big] \; / \; \eta^N_{n-1}\big[ G_{n-1, \eta^N_{n-1}(\xi_{n})} \big] \Big\}^2
\end{align*}
converges in probability to $\Sigma_{\eta_n}(\phi_n)$.
By Assumption \ref{hyp:1}, functions
$(x,\xi) \mapsto G_{n-1, \xi}(x)$,
$(x,\xi) \mapsto Q_{n, \xi} \phi_n(x)$,
$(x,\xi) \mapsto Q_{n, \xi} \phi_n^2(x)$ are bounded and continuous at $\xi=\eta_{n-1}(\xi_{n})$ uniformly on $E_{n-1}$. By Corollary \ref{cor:wlln}, $\eta^N_{n-1}(\xi_{n})$ converges in probability to $\eta_{n-1}(\xi_{n})$;
by Theorem \ref{theo:wlln} and Slutsky's Lemma we get that $\mathrm{Var}_{n-1}[\phi_n(X_N)]$ converges in probability to
\begin{align*}
\eta_{n-1} [ Q_n(\phi_n^2) ] / \eta_{n-1}(G_{n-1})
-
\big( \eta_{n-1} [ Q_n(\phi_n) ] / \eta_{n-1}(G_{n-1}) \big)^2\ ,
\end{align*}
which is another formula for $\eta_n(\phi_n^2) - \eta_n(\phi_n)^2 = \Sigma_{\eta_n}(\phi_n)$,
as required.
\item
To prove that $\sqrt{N} \, \widetilde{B}(N) = \sqrt{N} \, [\gamma^N_{n-1} - \gamma_{n-1}](\mathscr{L}_n(\phi_n)) + o_{\mathbb{P}}(1)$ we write $\widetilde{B}(N)$ as
\begin{equation} \label{eq.decomposition.B}
\gamma^{N}_{n-1}(1) \times \eta^N_{n-1} [Q_{n,N}-Q_n](\phi_n)
\;+\;
[\gamma^N_{n-1}-\gamma_{n-1}](Q_n \phi_n)\ .
\end{equation}
Furthermore, we have
\begin{equation} \label{eq.Q.diff.eta}
\eta^N_{n-1} [Q_{n,N}-Q_n](\phi_n) = \eta^N_{n-1}\,[\,\omega(\cdot, \eta^N_{n-1}(\xi_n))\,] \times
[\eta^N_{n-1}-\eta_{n-1}](\xi_n)
\end{equation}
with
$
\omega(x, z) := \int_0^1 \partial_{\xi} Q_{n,\xi} \phi_n (x)|_{\xi= \eta_{n-1}(\xi_n) + \lambda(z - \eta_{n-1}(\xi_n))} \, d \lambda
$.
Under Assumption \ref{hyp:2}, the function $\omega$
is bounded and continuous at $z = \eta_{n-1}(\xi_n)$ uniformly over $x \in E_{n-1}$. Theorem \ref{theo:wlln} applies so that $\eta^N_{n-1}\,[\,\omega(\cdot, \eta^N_{n-1}(\xi_n))\,]\rightarrow\eta_{n-1}\,[\, \omega(\cdot, \eta_{n-1}(\xi_n))\,] = \eta_{n-1}[\partial_{\xi}Q_n \phi_n]$ in probability. The induction hypothesis, Slutsky's Lemma and standard manipulations yield that $\sqrt{N} \times \gamma^N_{n-1} [Q_{n,N}-Q_n](\phi_n)$ equals
\begin{equation*}
\sqrt{N} \times \eta_{n-1}\big[\partial_{\xi}Q_n \phi_n \big] \times [\gamma^N_{n-1}-\gamma_{n-1}](\xi_n-\eta_{n-1}(\xi_n)) + o_{\mathbb{P}}(1)\ .
\end{equation*}
It then follows from \eqref{eq.decomposition.B} that
$\sqrt{N} \, \widetilde{B}(N) = \sqrt{N} \, [\gamma^N_{n-1} - \gamma_{n-1}](\mathscr{L}_n \phi_n) \;+\; o_{\mathbb{P}}(1)$.
\end{itemize}
This concludes the proof of the induction steps and finishes the proof of Theorem \ref{thm.clt.unnormalised}.
\end{proof}
In the case where the summary statistics are constant, i.e.~$\xi_p \equiv C \in \mathbb{R}^d$ for $p \geq 0$, expression \eqref{eq.asymp.var.unnormalised} reduces to the usual non-adaptive asymptotic variance as presented, for example, in \cite{delmoral}. In the special case $\phi_n \equiv 1$, one obtains the following expression for the asymptotic variance of the relative normalisation constant $\gamma^N_n(1) / \gamma_n(1)$.
\begin{cor} Assume (A\ref{hyp:1}-\ref{hyp:2}) and let $n \ge 0$ be a non-negative integer. Then the quantity $\sqrt{N} \, \big\{ \gamma^N_n(1) / \gamma_n(1) - 1\big\}$ converges, as $N \to \infty$, to a centered Gaussian distribution with variance
\begin{equation*}
\sum_{p=0}^n
\frac{\Sigma_{\eta_p}(\mathscr{L}_{p,n} \, 1)}
{\prod_{k=p}^{n-1} \eta_k(G_k)^2}\ .
\end{equation*}
\end{cor}
Similarly, one can obtain a CLT for the empirical normalised measures $\eta^N_n(\phi_n)$:
\begin{theorem}
\label{theo:clt}
Assume (A\ref{hyp:1}-\ref{hyp:2}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E_n \to \mathbb R^{r}$ be a bounded measurable function. The sequence $\sqrt{N} \, [\eta^N_n - \eta_n](\phi_n)$ converges weakly to a centered Gaussian distribution with covariance
\begin{equation} \label{eq.asymp.var.normalised}
\Sigma_n(\phi_n)
:=
\sum_{p=0}^n \frac{\gamma_p(1)^2}{\gamma_n(1)^2} \, \Sigma_{\eta_p}\big[\mathscr{L}_{p,n} \big(\phi_n-\eta_n(\phi_n)\big) \big]
\end{equation}
with the linear operators $\mathscr{L}_p$ for $p \geq 0$ as defined in \eqref{eq.semigroup}. The asymptotic variances satisfy
\begin{equation} \label{eq.asymp.var.normalised.rec}
\Sigma_n(\phi_n) = \Sigma_{\eta_n}(\phi_n) + \frac{\Sigma_{n-1}\big[\mathscr{L}_{n} \big(\phi_n - \eta_n(\phi_n)\big) \big]}{\eta_{n-1}(G_{n-1})^2}\ .
\end{equation}
\end{theorem}
\begin{proof}
One can verify that the normalised measure $\eta^N_n$ is related to the unnormalised measure $\gamma^N_n$ through the identity (see \cite[p.~301]{delmoral})
\begin{equation*}
[\eta^N_n - \eta_n](\phi_n) = \frac{\gamma_n(1)}{\gamma_n^N(1)} \, \gamma_n^N\big[ \tfrac{1}{\gamma_n(1)} (\phi_n - \eta_n(\phi_n))\big]\ .
\end{equation*}
By Corollary \ref{cor.wlln.normalisation}, $\gamma_n(1)/ \gamma_n^N(1)$ converges in probability to 1. Slutsky's Lemma and Theorem \ref{thm.clt.unnormalised} yield that $\sqrt{N} \, [\eta^N_n - \eta_n](\phi_n)$ converges weakly to a centered Gaussian variable with variance
$
\sum_{p=0}^n \gamma_p(1)^2 \, \Sigma_{\eta_p}\big[\mathscr{L}_{p,n} \big( \gamma_n(1)^{-1} (\phi_n - \eta_n(\phi_n))\big)\big]
$,
which is just another way of writing \eqref{eq.asymp.var.normalised}. Equation \eqref{eq.asymp.var.normalised.rec} follows from the identities
$\gamma_p(1) = \prod_{k=0}^{p-1} \eta_k(G_k)$ and
$\eta_{n-1}(\mathscr{L}_n \phi_n) = \eta_{n-1}(G_{n-1}) \, \eta_n(\phi_n)$, the latter implying that $\mathscr{L}_n\big(\phi_n - \eta_n(\phi_n)\big)$ is centered with respect to $\eta_{n-1}$.
\end{proof}
\subsection{Stability}
\label{sec.stability}
We now show that in the majority of applications of interest, the asymptotic variance of the adaptive SMC algorithm is identical to the asymptotic variance of the \emph{perfect} algorithm.
\begin{theorem} [Stability] \label{thm.stability}
Assume (A\ref{hyp:1}-\ref{hyp:2}). Suppose further that for any index $n \geq 1$ the identity
\begin{equation} \label{eq.stability}
\eta_{n-1}(G_{n-1, \xi} M_{n,\xi}) / \eta_{n-1}(G_{n-1, \xi}) = \eta_n
\end{equation}
holds for any parameter $\xi \in \mbox{Dom}(\xi_n)$.
For any test function $\phi_n \in \mathcal{B}_b(E_n)$, the asymptotic variance of the adaptive SMC algorithm identified in Theorem \ref{thm.clt.unnormalised} equals the asymptotic variance of the perfect SMC algorithm.
\end{theorem}
\begin{proof}
Formula \eqref{eq.semigroup} shows that it suffices to prove that the term $\eta_{n-1}(\partial_{\xi} Q_n \phi_n )$ vanishes. By differentiation under the integral sign, it is enough to prove that the mapping $\xi \mapsto \eta_{n-1}(Q_{n, \xi} \phi_n)$ is constant on $\mbox{Dom}(\xi_n)$. Indeed, it follows from \eqref{eq.stability} that $\eta_{n-1}(Q_{n, \xi} \phi_n) = \eta_n(\phi_n)$ for any $\xi \in \mbox{Dom}(\xi_n)$, concluding the proof of Theorem \ref{thm.stability}.
\end{proof}
Theorem \ref{thm.stability} applies for instance to the sequential Bayesian parameter inference context discussed in Section \ref{sec:seq_bi} and to the filtering setting of Section \ref{sec:ex_filt}.
A consequence of Theorem \ref{thm.stability} is that standard behaviours for the asymptotic variance of the \emph{perfect} SMC algorithm, such as linear growth of the asymptotic variance of $\sqrt{N} \, \big( \gamma^N_n(1) / \gamma_n(1) - 1\big)$, are inherited by the adaptive SMC algorithm.
%
%
\section{Adaptive Tempering}\label{sec:annealed}
We now look at the scenario when one uses the information in the evolving particle population
to adapt a sequence of distributions by means of a tempering parameter $\beta \in (0,1)$.
\subsection{Algorithmic Set-Up}
In many situations in Bayesian inference one seeks to sample from a distribution $\pi$ on a set $E$ of the form
\begin{equation*}
\pi(dx) = \tfrac{1}{Z} \, e^{- \beta_{*} \, V(x)} \, m(dx)
\end{equation*}
where $Z$ is a normalisation constant, $m(dx)$ a dominating measure on the set $E$ and $V:E \to \mathbb R$ a potential. Coefficient $\beta_{*}\in \mathbb R$ can be thought of as an inverse temperature parameter.
A frequently invoked algorithm involves forming a sequence of `tempered' probability distributions
\begin{equation*}
\eta_n(dx) = \frac{1}{Z(\beta_n)} \, e^{-\beta_{n} V(x)} \, m(dx)
\end{equation*}
for inverse temperatures
$\beta_0 \leq \ldots \leq \beta_{n-1} \leq \beta_n\leq \cdots \leq \beta_{n_*}=\beta_*$; in many applications $\beta_*=1$.
The associated unnormalised measures are
\begin{equation*}
\gamma_n(dx) = e^{-\beta_{n} V(x)} \, m(dx)\ .
\end{equation*}
Particles are propagated from $\eta_{n-1}$ to $\eta_{n}$ through a Markov kernel $M_n
$ that preserves $\eta_{n}$. In other words, the algorithm corresponds to the SMC approach discussed in Section \ref{sec:algo} with potentials
\begin{equation*}
G_n(x)
= e^{-\Delta_{n} \, V(x)}\ ,
\quad \Delta_n := \beta_{n+1}-\beta_n\ ,
\end{equation*}
and Markov kernels $M_n$ satisfying $\eta_n M_{n} = \eta_n$. For a test function $\phi_n \in \mathcal{B}_b(E)$, the $N$-particle approximations of the normalised and unnormalised distributions are given in \eqref{eq.approx.normalized.measure} and \eqref{eq.approx.unnormalized.measure}.
To be consistent with the notations introduced in Section \ref{sec.algo1.CLT},
note that the normalisation constants also read as $Z(\beta_n) = \gamma_n(1)$ and $Z=Z(\beta_*)=\gamma_{n_*}(1)$.
In most scenarios of practical interest, it can be difficult or even undesirable to decide \emph{a priori} upon the annealing sequence $\{\beta_n\}_{n=0}^{n_*}$.
Indeed, if the chosen sequence features big gaps, one may reach the terminal temperature rapidly, but the variance of the weights can be very large due to large discrepancies between consecutive elements of the bridging sequence of probability distributions. Alternatively, if the gaps between the annealing parameters are too small, the variance of the final weights can be very small; this comes at the price of needlessly wasting a lot of computation time. Knowing what constitutes `big' or `small' with regards to the temperature gaps can be very problem-specific. Thus, an automated procedure for determining the annealing sequence is of great practical importance. In this section we investigate the asymptotic properties of an algorithm where the temperatures, as well as statistics of the MCMC kernels, are determined empirically by the evolving population of particles.
A partial analysis of the algorithm to be described can be found in \cite{giraud}. However, the way in which the annealing sequence is determined in that work does not correspond to the one typically used in the literature. In addition, the authors assume that perfect MCMC kernels are used at each time step, whereas we do
not assume so. It should also be noted, however, that the analysis in \cite{giraud} is non-asymptotic.
The adaptive version of the above described algorithm constructs the (random) temperature sequence $\{ \beta_p^{N} \}_{p \geq 0}$ `on the fly' as follows.
Once a proportion $\alpha\in(0,1)$ has been specified, the random tempering sequence is determined through the recursive equation
\begin{equation} \label{eq.beta.recursion}
\beta^N_{n+1} =
\inf \Big\{ \beta^N_n < \beta \leq \beta_*
\; : \;
\mathrm{ESS}(\eta^N_{n}, e^{-(\beta-\beta^N_n) \, V}) = \alpha
\Big\}
\end{equation}
initialized at a prescribed value $\beta_0$ typically chosen so that the distribution $\eta_0$ is easy to sample from.
For completeness, we use the convention that $\inf \varnothing =\beta_{*}$.
In the above displayed equation, we have used the $\mathrm{ESS}$ functional defined for a measure $\eta$ on the set $E$ and a weight function $\omega: E \to (0, \infty)$ by
\begin{equation*}
\mathrm{ESS}(\eta, \omega) := \eta(\omega)^2 / \eta(\omega^2)\ .
\end{equation*}
The following lemma guarantees that under mild assumptions the effective sample size functional $\beta \mapsto \mathrm{ESS}(\eta_n, e^{-(\beta-\beta_n) \, V})$ is continuous and decreasing, so that \eqref{eq.beta.recursion} is well-defined and the inverse temperature $\beta_{n+1}$ can be efficiently computed by a standard bisection method.
\begin{lem} \label{lem.ess.decreasing}
Let $\eta$ be a finite measure on the set $E$ and $V:E \to \mathbb R$ be a bounded potential. Then, the function $\lambda \mapsto \mathrm{ESS}(\eta, e^{-\lambda \, V})$
is continuous and decreasing on $[0,\infty)$. Furthermore, if $\mathbb{P}\,[\,V(X) \neq V(Y)\,] > 0$ for $X,Y$ independent and distributed according to $\eta$, the function is strictly decreasing.
\end{lem}
\begin{proof}
We treat the case where $\mathbb{P}\,[\,V(X) \neq V(Y)\,] > 0$, the case $\mathbb{P}\,[\,V(X) \neq V(Y)\,] = 0$ being trivial.
Let $X$ and $Y$ be two independent random variables distributed according to $\eta$.
The dominated convergence theorem shows that the function $\lambda \mapsto \mathrm{ESS}(\eta, e^{-\lambda \, V})$ is continuous, with a continuous derivative. Standard manipulations show that the derivative is strictly negative if
$\eta( V \, e^{-\lambda V}) \, \eta( e^{-2\lambda V}) >
\eta( e^{-\lambda V} ) \, \eta( V \, e^{-2 \lambda V})$, which is equivalent to the condition
\begin{equation*}
\mathbb{E}\,\Big[\, e^{-\lambda \{ V(X)+V(Y) \} } \times \Big\{ V(X)-V(Y) \Big\} \times \Big\{ e^{-\lambda V(X)} - e^{-\lambda V(Y)}\Big\}\,\Big] < 0\ .
\end{equation*}
This last condition is satisfied since for any $x,y \in E$ and any $\lambda > 0$ we have the inequality $\{ V(x)-V(y)\} \{e^{-\lambda \, V(x)} - e^{- \lambda \, V(y)}\} \leq 0$, with strict inequality whenever $V(x) \neq V(y)$.
\end{proof}
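Since the lemma shows that $\beta \mapsto \mathrm{ESS}(\eta^N_n, e^{-(\beta-\beta^N_n) \, V})$ is continuous and decreasing, the recursion \eqref{eq.beta.recursion} can indeed be solved by a standard bisection. A minimal numerical sketch (function names and tolerances are our own choices, not from the paper), applying the ESS functional to the empirical measure of the particles:

```python
import numpy as np

def ess(log_weights):
    """ESS(eta, omega) = eta(omega)^2 / eta(omega^2) for the empirical
    measure of the particles; the value lies in (0, 1]."""
    w = np.exp(log_weights - log_weights.max())
    return w.sum() ** 2 / ((w ** 2).sum() * len(w))

def next_temperature(V, beta_n, beta_star, alpha, tol=1e-10):
    """Bisection for the smallest beta in (beta_n, beta_*] such that
    ESS(eta_n^N, e^{-(beta - beta_n) V}) = alpha; valid since the ESS
    is continuous and decreasing in beta.  Convention: inf(empty) = beta_*."""
    f = lambda b: ess(-(b - beta_n) * V) - alpha
    if f(beta_star) >= 0:          # ESS never drops to alpha: return beta_*
        return beta_star
    lo, hi = beta_n, beta_star
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
V = rng.normal(size=2000)          # potential values V(x_n^i) at the particles
beta_next = next_temperature(V, beta_n=0.1, beta_star=1.0, alpha=0.8)
```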
We will assume that the sequences of temperatures $\{ \beta_n \}_{n \geq 0}$ and $\{ \beta^N_n\}_{n \geq 0}$ are defined for \emph{any} index $n \geq 0$,
with the convention that once the parameter $\beta^N_n$ first reaches the level $\beta_*$, at a time that is random for the practical algorithm, the algorithm continues with the inverse temperature held fixed at $\beta_*$.
Under this convention, we can carry out an asymptotic analysis using an induction argument. Ideally one would like to prove asymptotic consistency (and a CLT) for the empirical measure
at the random termination time of the practical algorithm; we do not do this, due to the additional technical challenge that it poses.
We believe that the results to be proven still provide a very satisfying theoretical justification for the practical adaptive algorithm.
We assume from now on that for the perfect algorithm the sequence of inverse temperatures is given by the limiting analogue of
\eqref{eq.beta.recursion},
\begin{equation}
\label{eq:hitting}
\beta_{n+1} =
\inf \Big\{ \beta_n < \beta \leq \beta_*
\; : \;
\mathrm{ESS}(\eta_{n}, e^{-(\beta-\beta_n) \, V}) = \alpha
\Big\}\ .
\end{equation}
We will show in the next section that under mild assumptions the adaptive version $\beta^N_n$ converges in probability towards $\beta_n$. For statistics $\xi_{n+1}:E \to \mathbb R^d$ we set
\begin{equation*}
\theta^N_{n} =\big( \beta^N_n, \beta^N_{n+1},\eta^N_{n}(\xi_{n+1})^{\top} \big)^{\top}
\end{equation*}
and denote by $\theta_n$ its limiting value. At time $n$, for a particle system $\{x_n^i\}_{i=1}^N$ and associated empirical distribution $\eta^N_n$ targeting the distribution $\eta_n$, the next inverse temperature $\beta^N_{n+1}$ is computed according to \eqref{eq.beta.recursion}; the particle system is re-sampled according to a multinomial scheme with weights
\begin{equation*}
G_{n,N}(x) := e^{-\Delta^N_{n} \, V(x)}\ ;
\quad
\Delta^N_n = \beta^N_{n+1} - \beta^N_{n}\ ,
\end{equation*}
and then evolves via a Markov kernel $M_{n+1,N} \equiv M_{n+1, \eta^N_n(\xi_{n+1}), \beta^N_{n+1}}$
that preserves $Z(\beta^N_{n+1})^{-1} \, e^{-\beta^N_{n+1} \, V} \, m(dx)$.
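The reweighting and multinomial resampling step just described can be sketched as follows (illustrative Python; the mutation kernel is left as a user-supplied placeholder):

```python
import numpy as np

def reweight_resample(x, V, delta, mcmc_move, rng):
    """Reweight with G(x) = e^{-delta * V(x)}, resample multinomially with the
    normalised weights, then mutate with a kernel preserving the new target."""
    w = np.exp(-delta * V(x))                       # incremental weights G_{n,N}
    idx = rng.choice(len(x), size=len(x), p=w / w.sum())
    return mcmc_move(x[idx])

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                           # particles targeting N(0, 1)
x_new = reweight_resample(x, lambda u: u ** 2, 0.5, lambda z: z, rng)
# tempering N(0,1) by e^{-u^2/2} gives N(0, 1/2): the particle cloud tightens
assert x_new.shape == x.shape
assert x_new.var() < x.var()
```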
Similarly to Section \ref{sec:algo1}, we will make use of the operator
\begin{equation*}
Q_{n,N}(x,dy) \equiv
G_{n-1,N}(x) \, M_{n,N}(x,dy)
\end{equation*}
and its limiting analogue $Q_n$. With these notations, note that Equation \eqref{eq:d1} holds. To emphasise the dependencies upon the parameter $\theta=(\beta_1, \beta_2, \eta)$, we will sometimes use the expression $Q_{n,\theta} = G_{n,\theta}(x) \, M_{n,\eta,\beta_2}(x,dy)$ with $G_{n,\theta} = e^{-(\beta_2-\beta_1) \, V} = e^{- \Delta \, V}$ and $\Delta = \beta_2 - \beta_1$. For notational convenience, we sometimes write $\partial_{\Delta}$ when the meaning is clear. For example, by differentiation under the integral sign, the quantity $\partial_{\Delta} \eta_n(G_n)$ also equals $- \eta_n( V \, G_n)$. Unless otherwise stated, the derivative $\partial_\theta$ is evaluated at the limiting parameter $\theta_n = (\beta_n, \beta_{n+1}, \eta_n(\xi_{n+1}))$.
\subsection{Assumptions}
We define $\mbox{Dom}(\beta)= \{ (\beta_1, \beta_2) \in [\beta_0, \beta_*]^2 \;
; \; \beta_1 \leq \beta_2 \}$. By $\mbox{Dom}(\xi_p) \subset \mathbb{R}^d$ we denote a convex set that contains the range of the statistic $\xi_p:E_{p-1} \to \mathbb{R}^{d}$.
The results to be presented in the next section make use of the following hypotheses.
\begin{hypA}\label{hyp:anneal1}
The potential $V$ is bounded on the set $E$.
For each $n \geq 0$ the function $(x,\theta) \mapsto G_{n,\theta}(x)$ is bounded and continuous at $\theta_n= \big(\beta_n, \beta_{n+1}, \eta_{n}(\xi_{n+1}) \big)$ uniformly on $E$. The statistic $\xi_{n}:E \to \mathbb R^d$ is bounded. For any bounded Borel test function $\phi_{n}:E \to \mathbb R^r$, the function $(x,\theta) \mapsto Q_{n, \theta} \phi_{n} (x)$ is bounded and continuous at $\theta =\theta_{n-1}$ uniformly on $E$.
\end{hypA}
\begin{hypA}\label{hyp:anneal2}
For each $n \geq 1$, $r\ge 1$ and bounded Borel test function $\phi_{n}:E \to \mathbb R^r$ the function
$(x,\theta) \mapsto \partial_{\theta} Q_{n,\theta} \, \phi_{n}(x)$ is well defined, bounded and continuous at $\theta=\theta_{n-1}$ uniformly on $E$.
\end{hypA}
These conditions could be relaxed at the cost of considerable technical complications in the proofs.
\subsection{Weak Law of Large Numbers}
In this section we prove that the consistency results of Section \ref{sec.algo1.WLLN} also hold in the adaptive annealing setting. To do so, we prove that for any index $n \geq 0$ the empirical inverse temperature parameter $\beta^N_n$ converges in probability towards $\beta_n$.
\begin{theorem}[WLLN]\label{theo:wlln_aneal}
Assume (A\ref{hyp:anneal1}). For any $n \geq 0$, the empirical inverse temperature $\beta^N_n$ converges in probability to $\beta_n$ as $N \to \infty$. Also,
let $\mathsf{V}$ be a Polish space and $\{V_N\}_{N \geq 0}$ a sequence of $\mathsf{V}$-valued random variables that converges in probability to $\mathsf{v} \in \mathsf{V}$. Let $r \ge 1$ and let $\phi_n: E \times \mathsf{V} \to \mathbb R^{r}$ be a bounded function, continuous at $\mathsf{v} \in \mathsf{V}$ uniformly on $E$. Then the following limit holds in probability
\begin{equation*}
\lim_{N \to \infty} \eta_n^N[\phi_n(\cdot, V_N)]
=
\eta_n[\phi_n(\cdot, \mathsf{v})]\ .
\end{equation*}
\end{theorem}
\begin{cor}[WLLN]
\label{cor:wlln_aneal}
Assume (A\ref{hyp:anneal1}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E \to \mathbb R^{r}$ be a bounded measurable function. The following limit holds in probability,
$\lim_{N \to \infty} \eta_n^N(\phi_n)
=
\eta_n(\phi_n)$.
\end{cor}
\begin{cor}
\label{cor.wlln_aneal.normalisation}
Assume (A\ref{hyp:anneal1}). Let $n \ge 0$, $r \ge 1$ and $\phi_n: E \to \mathbb R^{r}$ be a bounded measurable function. The following limit holds in probability,
$\lim_{N \to \infty} \gamma_n^N(\phi_n)
=
\gamma_n(\phi_n)$.
\end{cor}
\begin{proof}[Proof of Theorem \ref{theo:wlln_aneal}]
Clearly, it suffices to concentrate on the case $r=1$.
We prove by induction on the rank $n \geq 0$ that $\beta^N_{n}$ converges in probability to $\beta_{n}$ and for any test function $\phi: E \times \mathsf{V} \to \mathbb R$ bounded and continuous at $\mathsf{v} \in \mathsf{V}$ uniformly on $E$
that
$[\eta_n^N-\eta_n](\phi) \rightarrow_{\mathbb{P}} 0$.
The initial case $n=0$ is a direct consequence of WLLN for i.i.d.~random variables and Definition \ref{def.unif.cont}.
We assume the result at rank $n-1$ and proceed to the induction step.
\begin{itemize}
\item We first focus on proving that $\beta_{n}^{N}$ converges in probability to $\beta_{n}$.
Note that $\beta^N_{n}$ can also be expressed as
\begin{equation*}
\beta^N_{n} := \inf \Big\{ \beta \in [\beta_0, \beta_*] \; : \;
\frac{\zeta_{1,n-1}^N(\beta)}{\zeta_{2,n-1}^N(\beta)} \leq \alpha \Big\}
\end{equation*}
with
$\zeta_{1,n-1}^N(\beta) = \eta_{n-1}^N[e^{-\max(0, \beta-\beta^N_{n-1}) \, V}]^2$
and
$\zeta_{2,n-1}^N(\beta) = \eta_{n-1}^N[e^{-2\max(0, \beta-\beta^N_{n-1}) \, V}]$.
Indeed, the limiting temperature $\beta_{n}$ can also be expressed as
\begin{equation*}
\beta_{n} := \inf \Big\{ \beta \in [\beta_0, \beta_*] \; : \;
\frac{\zeta_{1,n-1}(\beta)}{\zeta_{2,n-1}(\beta)} \leq \alpha \Big\}
\end{equation*}
where $\zeta_{1,n-1}(\beta)$ and $\zeta_{2,n-1}(\beta)$ are the limiting values of
$\zeta^N_{1,n-1}(\beta)$ and $\zeta^N_{2,n-1}(\beta)$. The dominated convergence theorem shows that the paths
$\beta \mapsto \zeta_{1,n-1}^N(\beta) / \zeta_{2,n-1}^N(\beta)$ and
$\beta \mapsto \zeta_{1,n-1}(\beta) / \zeta_{2,n-1}(\beta)$ are continuous; it thus suffices to prove that
the limit
\begin{equation} \label{eq.uniform.cv.temperature}
\lim_{N \to \infty} \;
\big\| \zeta_{1,n-1}^N(\beta) / \zeta_{2,n-1}^N(\beta)-\zeta_{1,n-1}(\beta) / \zeta_{2,n-1}(\beta)\big\|_{\infty, [\beta_0, \beta_*]}
= 0
\end{equation}
holds in probability. Lemma \ref{lem.ess.decreasing} shows that the ratio $\beta \mapsto \zeta_{1,n-1}^N(\beta) / \zeta_{2,n-1}^N(\beta)$ is decreasing on $[\beta_0, \beta_*]$ for any $n,N \geq 1$; since the limiting ratio is continuous, by standard arguments it suffices, for proving \eqref{eq.uniform.cv.temperature}, to show that for any fixed inverse temperature $\beta \in [\beta_0, \beta_*]$ the difference $\zeta_{1,n-1}^N(\beta) / \zeta_{2,n-1}^N(\beta)-\zeta_{1,n-1}(\beta) / \zeta_{2,n-1}(\beta)$ converges to zero in probability. Hence, one can focus on proving that $\zeta^N_{i,n-1}(\beta)$ converges in probability to $\zeta_{i,n-1}(\beta)$ for $i \in \{1,2\}$. We present the proof for $i=2$, the case $i=1$ being entirely similar.
\begin{itemize}
\item For the case $\beta < \beta_{n-1}$, the induction hypothesis shows that $\beta^N_{n-1}$ converges in probability to $\beta_{n-1}$. Since $\zeta^N_{2,n-1}(\beta)=1=\zeta_{2,n-1}(\beta)$ for $\beta \leq \min(\beta^N_{n-1}, \beta_{n-1})$, the conclusion follows.
\item The case $\beta \geq \beta_{n-1}$ follows from the convergence in probability of $\beta^N_{n-1}$ to $\beta_{n-1}$ and $\eta^N_{n-1}(e^{ -(\beta-\beta_{n-1}) \, V})$ to $\eta_{n-1}(e^{ -(\beta-\beta_{n-1}) \, V})$.
\end{itemize}
\item
To prove that $\eta_n^N[\phi_n(\cdot, V_N)]$ converges in probability towards $\eta_n[\phi_n(\cdot, \mathsf{v})]$, given the convergence in probability of $\beta^N_{n}$ to $\beta_{n}$, of $\eta^N_{n-1}(\xi_n)$ to $\eta_{n-1}(\xi_n)$ and of $V_N$ to $\mathsf{v}$, one can use exactly the same approach as in the proof of Theorem \ref{theo:wlln}.
\end{itemize}
\end{proof}
\subsection{Central Limit Theorem}
In this section we extend the fluctuation analysis of Section \ref{sec.algo1.CLT} to the adaptive annealing setting. We prove that for a test function $\phi_n$ the empirical quantity $\gamma^N_n(\phi_n)$ converges at $N^{-1/2}$-rate towards its limiting value $\gamma_n(\phi_n)$; we give explicit recursive expressions for the asymptotic variances.
It is noted that results for $\eta_n^N(\phi_n)$ may also be proved as in Section \ref{sec.algo1.CLT}, but are omitted for brevity.
Before stating the main result of this section, several notations need to be introduced. For any $n \geq 0$ and test function $\phi_n: E \to \mathbb R^r$ we consider the extension operator $\mathrm{Ext}_n$ that maps the test function $\phi_n$ to the function $\mathrm{Ext}_n(\phi_n): E \to \mathbb R^{r+2}$ defined by
\begin{equation*}
\mathrm{Ext}_n(\phi_n) := \Big(G_n - \eta_{n}(G_n), \, G_n^2- \eta_{n}(G^2_n), \, \phi_n \Big)^\top.
\end{equation*}
The linear operator $\mathcal{A}_n$ maps the bounded Borel function $\phi_n: E \to \mathbb R^r$ to the rectangular $(r+1)\times (r+3)$ matrix $\mathcal{A}_n(\phi_n)$
defined by
$[\mathcal{A}_n(\phi_n)]_{1,1}=1$, $[\mathcal{A}_n(\phi_n)]_{1,[4:r+3]} = 0_{1 \times r}$, $[\mathcal{A}_n(\phi_n)]_{[2:r+1],[4:r+3]} = I_{r \times r}$ and
\begin{align*}
&[\mathcal{A}_n(\phi_n)]_{1,2}= -2\gamma_{n-1}^{-1}(1) \, \frac{ \eta_{n-1}(G_{n-1})}{\eta_{n-1}(G^2_{n-1})}\cdot \Big\{ \partial_\Delta \Big[
\frac{\eta_{n-1}(G_{n-1})^2}{\eta_{n-1}(G_{n-1}^2)} \Big] \Big\}^{-1} \ ; \\
&[\mathcal{A}_n(\phi_n)]_{1,3} =
\gamma_{n-1}^{-1}(1) \, \frac{\eta_{n-1}(G_{n-1})^2}{\eta_{n-1}(G_{n-1}^2)^2} \cdot \Big\{ \partial_\Delta
\Big[ \frac{\eta_{n-1}(G_{n-1})^2}{\eta_{n-1}(G_{n-1}^2)} \Big] \Big\}^{-1}\ ;\\
&[\mathcal{A}_n(\phi_n)]_{2:r+1,1} = \big(\partial_{\beta_{n-1}}+\partial_{\beta_{n}}\big) \, \eta_{n-1}(Q_n \phi_n)\ ; \\
&[\mathcal{A}_n(\phi_n)]_{2:r+1,2}= \gamma_{n-1}(1) \, \eta_{n-1}[\partial_{\beta_{n}} Q_n \phi_n] \times [\mathcal{A}_n(\phi_n)]_{1,2}\ ; \\
&[\mathcal{A}_n(\phi_n)]_{2:r+1,3}= \gamma_{n-1}(1) \, \eta_{n-1}[\partial_{\beta_{n}} Q_n \phi_n] \times [\mathcal{A}_n(\phi_n)]_{1,3}\ .
\end{align*}
\begin{theorem}[CLT]\label{theo:clt2}
Assume (A\ref{hyp:anneal1})-(A\ref{hyp:anneal2}).
Let $n \ge 0$, $r \ge 1$ and $\phi_n: E \to \mathbb R^{r}$ be a bounded measurable function. The sequence $\sqrt{N} \, \big( \beta^N_n - \beta_n , [\gamma^N_n - \gamma_n](\phi_n) \big)^\top$ converges weakly to a centred Gaussian distribution with covariance
\begin{equation} \label{eq.recursion.cov}
\Sigma_n(\phi_n)
\;=\;
\mathcal{A}_n(\phi_n) \cdot \Sigma_{n-1}\big( \mathrm{Ext}_{n-1}( Q_n \phi_n ) \big) \cdot \mathcal{A}_n(\phi_n)^\top + \gamma^2_n(1) \, \widetilde{\Sigma}_{\eta_n}(\phi_n)
\end{equation}
where $\widetilde{\Sigma}_{\eta_n}(\phi_n)$ is the covariance matrix of the function $\big(0,\phi_n \big)^\top$ under $\eta_n$.
\end{theorem}
\begin{proof}
The proof closely follows that of Theorem \ref{thm.clt.unnormalised}. For the reader's convenience, we only highlight the differences. The proof proceeds by induction, the case $n=0$ following directly from the CLT for i.i.d.\ random variables. For proving the induction step, assuming that the result holds at rank $n-1$, it suffices to prove that
\begin{equation} \label{eq.conditional.rec}
\mathbb{E}_{n-1}\left(\begin{array}{c}
\beta^N_n - \beta_n \\
\big[ \gamma^N_n - \gamma_n\big](\phi_n)
\end{array} \right)
\; = \;
\mathcal{A}_{n,N}(\phi_n)\,
\left( \begin{array}{c}
\beta^N_{n-1} - \beta_{n-1} \\
\big[ \gamma^N_{n-1} - \gamma_{n-1}\big] \big( \mathrm{Ext}_{n-1}[Q_n \phi_n] \big)
\end{array}\right),
\end{equation}
with $\mathcal{A}_{n,N}(\phi_n) \in \mathsf{M}_{r+1,r+3}(\mathbb R)$ converging in probability to $\mathcal{A}_n(\phi_n)$, and that for any vector $t \in \mathbb R^r$ the following limit holds in probability
\begin{equation*}
\lim_{N \to \infty}
\mathbb{E}\Big[ \exp \Big\{ i \, \bra{t, \sqrt{N} \, C(N)} \Big\} \Big]
\;=\;
\exp\big\{ -\gamma^2_n(1) \, \bra{t, \Sigma_{\eta_n}(\phi_n) \, t} / 2\big\}\
\end{equation*}
with
$C(N) = (\gamma^N_n-\gamma_n)(\phi_n) - \mathbb{E}_{n-1}\big[ (\gamma^N_n-\gamma_n)(\phi_n) \big]$.
The proof of the above displayed equation is identical to the proof of \eqref{eq.cv.prob.fourier} and is thus omitted. We now prove \eqref{eq.conditional.rec}.
\begin{itemize}
\item
We first treat the term $\mathbb{E}_{n-1}[\beta^N_n - \beta_n] = \beta^N_n - \beta_n$.
The relation
$\mathrm{ESS}(\eta^N_{n-1}, e^{-\Delta^N_{n-1} \, V}) = \alpha = \mathrm{ESS}(\eta_{n-1}, e^{-\Delta_{n-1} \, V})$
can be rearranged as
\begin{align} \label{eq.ess.decomp}
\begin{aligned}
\eta_{n-1}(G_{n-1})^2 \, &\Big\{ \eta^N_{n-1}(e^{-2 \Delta^N_{n-1}V}) - \eta_{n-1}(e^{-2 \Delta_{n-1}V})\Big\} = \\
&
\eta_{n-1}(G_{n-1}^2) \, \Big\{ \eta^N_{n-1}(e^{- \Delta^N_{n-1}V})^2 - \eta_{n-1}(e^{- \Delta_{n-1}V})^2\Big\}\ .
\end{aligned}
\end{align}
Decomposing $\eta^N_{n-1}(e^{-2 \Delta^N_{n-1}V}) - \eta_{n-1}(e^{-2 \Delta_{n-1}V})$
as the sum of
$\eta^N_{n-1}\big(e^{-2 \Delta^N_{n-1}V} - e^{-2 \Delta_{n-1}V}\big)$
and
$[\eta^N_{n-1} - \eta_{n-1}](G_{n-1}^2)$,
and using a similar decomposition for the difference
$\eta^N_{n-1}(e^{- \Delta^N_{n-1}V})^2 - \eta_{n-1}(e^{- \Delta_{n-1}V})^2$,
one can exploit the boundedness of the potential $V$, Theorem \ref{theo:wlln_aneal} and
the same approach as the one used for proving \eqref{eq.Q.diff.eta} to obtain that
$\eta^N_{n-1}(e^{-2 \Delta^N_{n-1}V}) - \eta_{n-1}(e^{-2 \Delta_{n-1}V})$ equals
\begin{align} \label{eq.ess.d.1}
\Big\{ \partial_{\Delta}\eta_{n-1}(G^2_{n-1}) + o_{\mathbb{P}}(1) \Big\}
\times (\Delta_{n-1}^N - \Delta_{n-1})+
[\eta^N_{n-1} - \eta_{n-1} ] (G_{n-1}^2)
\end{align}
and
$\eta^N_{n-1}(e^{- \Delta^N_{n-1}V})^2-\eta_{n-1}(e^{- \Delta_{n-1}V})^2$ equals
\begin{align} \label{eq.ess.d.2}
\begin{aligned}
\Big\{ 2 \eta_{n-1}&(G_{n-1}) \partial_{\Delta}\eta_{n-1}(G_{n-1}) + o_{\mathbb{P}}(1) \Big\} \times (\Delta_{n-1}^N - \Delta_{n-1})\\
&+
\Big\{ 2 \eta_{n-1}(G_{n-1}) + o_{\mathbb{P}}(1) \Big\} \times
[\eta^N_{n-1} - \eta_{n-1} ] (G_{n-1})\ .
\end{aligned}
\end{align}
Since $(\Delta^N_{n-1}-\Delta_{n-1})$ equals $(\beta^N_{n}-\beta_{n})-(\beta^N_{n-1}-\beta_{n-1})$, Slutsky's Lemma, Equations \eqref{eq.ess.decomp}, \eqref{eq.ess.d.1}, \eqref{eq.ess.d.2} and standard algebraic manipulations yield
\begin{align} \label{eq.beta.rec}
\begin{aligned}
(\beta_n^N-\beta_n)
&=
[\mathcal{A}_{n,N}(\phi)]_{1,1} \, (\beta_{n-1}^N-\beta_{n-1}) \\
&\qquad +
[\mathcal{A}_{n,N}(\phi)]_{1,2} \, [\gamma_{n-1}^N-\gamma_{n-1}]
\big(G_{n-1}-\eta_{n-1}(G_{n-1}) \big)\\
&\qquad +
[\mathcal{A}_{n,N}(\phi)]_{1,3} \, [\gamma_{n-1}^N-\gamma_{n-1}]\big(G^2_{n-1}-\eta_{n-1}(G^2_{n-1}) \big)
\end{aligned}
\end{align}
where $[\mathcal{A}_{n,N}(\phi)]_{1,i}$ converges in probability to $[\mathcal{A}_{n}(\phi)]_{1,i}$ for $1 \leq i \leq 3$.
\item
To deal with the term $\mathbb{E}_{n-1}\big[ (\gamma^N_n - \gamma_n)(\phi_n) \big]$ we make use of the decomposition
\begin{equation} \label{eq.gamma.rec}
\mathbb{E}_{n-1}\big[ (\gamma^N_n - \gamma_n)(\phi_n) \big]
=
\gamma_{n-1}^N(1) \times \eta^N_{n-1}[Q_{n,N}-Q_n](\phi_n) + [\gamma^N_{n-1}-\gamma_{n-1}](Q_n \phi_n)\ .
\end{equation}
Assumptions (A\ref{hyp:anneal1})-(A\ref{hyp:anneal2}), Theorem \ref{theo:wlln_aneal} and
the same approach as the one used for proving \eqref{eq.Q.diff.eta} show that the term $\eta^N_{n-1}[Q_{n,N}-Q_n](\phi_n)$ equals
\begin{equation*}
\Big\{ \eta_{n-1}[\partial_{\beta_{n-1}}Q_n \phi_n ] + o_{\mathbb{P}}(1)\Big\} \, (\beta^N_{n-1}-\beta_{n-1})
+
\Big\{ \eta_{n-1}[\partial_{\beta_{n}}Q_n \phi_n] + o_{\mathbb{P}}(1)\Big\} \, (\beta^N_{n}-\beta_{n})\ .
\end{equation*}
Note that there is no term involving the derivative with respect to the value of the summary statistics; indeed, this is because for any value of $\xi \in \mathbb R^r$ the Markov kernel $M_{n,\xi}$ preserves $\eta_{n}$ so that one can readily check that $\eta_{n-1}[ \partial_{\xi} Q_{n,\xi} \phi_n] = 0$. One can then use \eqref{eq.beta.rec} to express $(\beta^N_{n}-\beta_{n})$ in terms of the three quantities $(\beta^N_{n-1}-\beta_{n-1})$, $[\gamma_{n-1}^N-\gamma_{n-1}]
\big(G_{n-1}-\eta_{n-1}(G_{n-1}) \big)$ and $[\gamma_{n-1}^N-\gamma_{n-1}]
\big(G^2_{n-1}-\eta_{n-1}(G^2_{n-1}) \big)$ and obtain, via Slutsky's Lemma and \eqref{eq.gamma.rec}, that for any coordinate $1 \leq i \leq r$,
\begin{align} \label{eq.gamma.rec.2}
\begin{aligned}
\mathbb{E}_{n-1}\big[ (\gamma^N_n - \gamma_n)(\phi_n) \big]_i
&=
[\mathcal{A}_{n,N}(\phi)]_{i+1,1} \, (\beta_{n-1}^N-\beta_{n-1}) \\
&\qquad +
[\mathcal{A}_{n,N}(\phi)]_{i+1,2} \, [\gamma_{n-1}^N-\gamma_{n-1}]
\big(G_{n-1}-\eta_{n-1}(G_{n-1}) \big)\\
&\qquad +
[\mathcal{A}_{n,N}(\phi)]_{i+1,3} \, [\gamma_{n-1}^N-\gamma_{n-1}]\big(G^2_{n-1}-\eta_{n-1}(G^2_{n-1}) \big)\\
&\qquad +
[\gamma_{n-1}^N-\gamma_{n-1}](Q_n \phi_n)_i
\end{aligned}
\end{align}
where $[\mathcal{A}_{n,N}(\phi)]_{i+1,j}$ converges in probability to $[\mathcal{A}_{n}(\phi)]_{i+1,j}$ for $1 \leq j \leq 3$.
\end{itemize}
Equation \eqref{eq.conditional.rec} is a simple rewriting of
\eqref{eq.beta.rec} and \eqref{eq.gamma.rec.2}. This concludes the proof.
\end{proof}
\section{Applications}\label{sec:exam}
\subsection{Verifying the Assumptions}
We consider the sequential Bayesian parameter inference framework of Section \ref{sec:seq_bi}: a parameter $x \in E=\mathbb R^m$, observations $y_i \in \mathcal{Y}$ and a prior measure with density $\eta_0(x)$ with respect to the Lebesgue measure on $\mathbb R^m$.
We assume the following.
\begin{hypB}\label{hyp:b1}
For each $n \geq 1$ the function $G_n(x) := \mathbb{P}[y_{n+1} \mid y_{1:n},x]$ is bounded and strictly positive. The statistic $\xi_n: E \to \mathbb R^d$ is bounded.
\end{hypB}
\begin{hypB}\label{hyp:b2}
For each $n\geq 1$, the parametric family of Markov kernels $M_{n,\xi}$ is given by Random-Walk Metropolis kernels. The proposal density $q(\cdot; \xi)$ is symmetric; for a current position $x \in E$ the proposal $y$ is such that $\mathbb{P}(y-x \in du) = q(u; \xi) \, du$. We suppose that the first and second derivatives
\begin{equation*}
\xi \mapsto \nabla_{\xi} q(u;\xi) \ ; \quad
\xi \mapsto \nabla^2_{\xi} q(u;\xi) \ ,
\end{equation*}
are bounded on the range $\mbox{Dom}(\xi_n)$ of the adaptive statistics $\xi_n: E \to \mbox{Dom}(\xi_n) \subset \mathbb R^d$.
\end{hypB}
Assumption (B\ref{hyp:b1}) is reasonable and satisfied by many real statistical models. Similarly, it is straightforward to construct proposals verifying Assumption (B\ref{hyp:b2}); one can for example show that for a function $\sigma: \mbox{Dom}(\xi_n) \to \mathbb R_+$, bounded away from zero with bounded first and second derivatives, the Gaussian proposal density $q(u; \xi) := \exp\big\{ -u^2 / [2 \sigma^2(\xi)] \big\} / \sqrt{2 \pi \sigma^2(\xi)}$ satisfies Assumption (B\ref{hyp:b2}); multi-dimensional extensions of this setting are readily constructed.
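For concreteness, a minimal sketch of such a Random-Walk Metropolis step follows; the scale function $\sigma(\xi)$ below is an illustrative choice that is bounded away from zero with bounded derivatives, and the one-dimensional target is a stand-in for the posterior:

```python
import numpy as np

def rwm_step(x, log_target, xi, rng):
    """One Random-Walk Metropolis step with scale sigma(xi) = 1 + xi^2/(1 + xi^2):
    bounded away from zero, with bounded first and second derivatives in xi."""
    sigma = 1.0 + xi ** 2 / (1.0 + xi ** 2)
    y = x + sigma * rng.normal()                   # symmetric Gaussian proposal
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        return y
    return x

rng = np.random.default_rng(4)
log_target = lambda u: -0.5 * u ** 2               # stand-in N(0,1) "posterior"
chain = [5.0]
for _ in range(5000):
    chain.append(rwm_step(chain[-1], log_target, xi=0.3, rng=rng))
samples = np.array(chain[1000:])
assert abs(samples.mean()) < 0.2                   # the chain reaches the mode
assert 0.5 < samples.var() < 1.5
```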
\begin{prop} \label{prop.verif.assump}
Assume (B\ref{hyp:b1}-\ref{hyp:b2}). The kernels $(M_{n,\cdot})_{n\geq 1}$ and potentials $(G_n)_{n\geq 0}$ satisfy Assumptions (A\ref{hyp:1}-\ref{hyp:2}).
\end{prop}
\begin{proof}
By assumption, the potentials $\{ G_n \}_{n \geq 0}$ are bounded and strictly positive and the statistics $\xi_n: E \to \mathbb R^d$ are bounded.
To verify that Assumptions (A\ref{hyp:1}-\ref{hyp:2}) are satisfied, it suffices to prove that for any test
function $\phi \in \mathcal{B}_b(E)$, the first and second derivatives of
$(x,\xi) \mapsto M_{n,\xi} \phi (x)$ exist and are uniformly bounded.
The Metropolis-Hastings accept-reject
ratio of the proposal $x \mapsto x+u$ is $r(x,u) := \min\big\{ 1, \big( \mathbb{P}[y_{1:n} \mid x+u] \, \eta_0(x+u) \big) \, / \, \big( \mathbb{P}[y_{1:n} \mid x] \, \eta_0(x) \big) \big\}$
and we have
$M_{n,\xi}(\phi)(x)
=
\phi(x) +
\int_{\mathbb R^m} \big[\phi(x+u) - \phi(x) \big] \, r(x,u) \, q(u; \xi) \, du$.
Differentiation under the integral sign yields
\begin{align*}
&\nabla_{\xi} M_{n,\xi}(\phi)(x)
= \int \big[\phi(x+u) - \phi(x) \big] \, r(x,u) \,
\nabla_{\xi} q(u;\xi) \, du\ , \\
&\nabla^2_{\xi} M_{n,\xi}(\phi)(x)
= \int \big[\phi(x+u) - \phi(x) \big] \, r(x,u)
\nabla^2_{\xi} q(u;\xi) \, du\ ,
\end{align*}
and the conclusion follows by boundedness of the first and second derivative of $q(u;\xi)$ with respect to the parameter $\xi \in \mbox{Dom}(\xi_n)$.
\end{proof}
\subsection{Numerical Example}\label{sec:num_ex}
We now provide a numerical study of a high-dimensional sequential Bayesian parameter inference, as described in Section \ref{sec:seq_bi}, applied to the Navier-Stokes model. In this section, we briefly describe the Navier-Stokes model, the associated SMC algorithm and focus on the analysis of the behavior of the method when estimating the normalising constant. The SMC method to be presented is described in detail in \cite{kantas}. In the subsequent discussion, we highlight the algorithmic
challenges and the usefulness of the adaptive SMC methodology when applied to such high-dimensional scenarios. This motivates theoretical results presented in Section \ref{sec:simos} where the stability properties of the SMC estimates are investigated in the regime where the dimension $d$ of the adaptive statistics is large.
\subsubsection{Model Description}
We work with the Navier-Stokes dynamics describing the incompressible flow of a fluid on the two-dimensional torus $\mathbb{T}=[0,2\pi)\times[0,2\pi)$. The time-space varying velocity field is denoted by $v(t,x):[0,\infty) \times \mathbb{T} \rightarrow \mathbb R^{2}$. Newton's laws of motion yield the Navier-Stokes system of partial differential equations \cite{doer}
\begin{align}
\begin{aligned}\label{eq:NSPDE}
\partial_{t}v-\nu\Delta v+(v\cdot\nabla)\, v=f-\nabla\mathfrak{p}\ ,
\qquad
\nabla\cdot v=0\ ,
\qquad
\int_{\mathbb{T}}v(x,\cdot) \, dx=0\ ,
\end{aligned}
\end{align}
with initial condition $v(x,0)=u(x)$. The quantity $\nu>0$ is a viscosity parameter, $\mathfrak{p}:\mathbb{T}\times[0,\infty) \rightarrow \mathbb R$ is the pressure field and $f:\mathbb{T}\rightarrow\mathbb R^{2}$ is an exogenous time-homogeneous forcing. For simplicity, we assume periodic boundary conditions. We adopt a Bayesian approach for inferring the unknown initial condition $u = u(x)$ from noisy measurements of the evolving velocity
field $v(\cdot,t)$ on a fixed grid of points $\big(x_{1},\ldots,x_{M} \big) \in \mathbb{T}$. Performing inference with this type of data is referred to as Eulerian data assimilation.
Measurements are available at times $t_j := j \times \delta$ for a time increment $\delta > 0$ and index $1 \leq j \leq T$ at each fixed location $x_m \in \mathbb{T}$. We assume i.i.d.\ Gaussian measurement errors with standard deviation $\varepsilon > 0$, so that the noisy observations $y:=\big\{ y_{j,m}\big\}_{j,m}$ for $1 \leq j \leq T$ and $1 \leq m \leq M$ can be modelled as
\begin{equation*}
y_{j,m}=v\left(x_{m},t_j\right)+\varepsilon\,\zeta_{j,m}
\end{equation*}
for an i.i.d.\ sequence $\zeta_{j,m}\stackrel{iid}{\sim}\mathcal{N}(0,I_2)$.
We follow the notations of \cite{kantas} and set
\begin{equation*}
\mathbb{U}=\Big\{ \left.2\pi\textrm{-}\mbox{periodic trigonometric polynomials }u:\:\mathbb{T}\rightarrow\mathbb{R}^{2}\right|\:\nabla\cdot u=0\ ,\:\int_{\mathbb{T}}u(x)dx=0\,\Big\}\ .
\end{equation*}
We use a Gaussian random field prior for the unknown initial condition; as will become apparent from the discussion to follow, it is appropriate in this setting to assume that the initial condition $u=u(x)$ belongs to the closure $U$ of $\mathbb{U}$ with respect to the $\big(L^{2}(\mathbb{T}) \big)^{2}$ norm.
The semigroup operator for the Navier-Stokes PDE is denoted by $\Psi:U \times [0, \infty) \rightarrow U$ so that the likelihood for the noisy observation $y$ reads
\begin{equation}
\label{eq:likeli}
\ell(y;u)
=
\exp \Big\{ -\frac{1}{2 \varepsilon^2} \sum_{j=1}^T \sum_{m=1}^M \big\|y_{j,m}-
[\Psi(u, t_j)](x_m)\,\big\|^{2} \Big\} / (2 \pi \varepsilon^2)^{MT}\ .
\end{equation}
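The log of the likelihood \eqref{eq:likeli} is straightforward to evaluate once the forward map has been applied; in the sketch below the array \texttt{pred} is a hypothetical stand-in for the precomputed values $[\Psi(u, t_j)](x_m)$:

```python
import numpy as np

def log_likelihood(y, pred, eps):
    """log ell(y; u) for i.i.d. Gaussian errors; y and pred have shape (T, M, 2),
    pred standing in for the forward-map values [Psi(u, t_j)](x_m)."""
    T, M = y.shape[0], y.shape[1]
    return (-0.5 * ((y - pred) ** 2).sum() / eps ** 2
            - M * T * np.log(2.0 * np.pi * eps ** 2))

rng = np.random.default_rng(8)
pred = rng.normal(size=(20, 4, 2))                 # hypothetical predicted observations
y = pred + 0.1 * rng.normal(size=pred.shape)       # noisy data
# noise-free data maximise the likelihood
assert log_likelihood(pred, pred, 0.1) > log_likelihood(y, pred, 0.1)
```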
Under periodic boundary conditions, an appropriate orthonormal basis for $U$ is comprised of the functions $\psi_{k}(x) := \big( k^{\perp} / (2\pi \, |k|) \big) \, e^{ik\cdot x}$ for $k \in \mathbb{Z}^2_* := \mathbb{Z}^{2}\setminus\{(0,0)\}$ and $k^{\perp}:=(-k_{2},k_{1})^\top$, $\left|k\right| :=\sqrt{ k_1^2+k_2^2}$. The index $k$ corresponds to a bivariate frequency and the Fourier series decomposition of an element $u\in U$ reads
\begin{equation}
\label{eq:Fourier}
u(x)=\sum_{k\in\mathbb{Z}^{2}_*} \, u_{k} \, \psi_{k}(x)
\end{equation}
with Fourier coefficients $u_{k} = \langle u,\psi_{k} \rangle = \int_{\mathbb{T}} u(x) \cdot\overline{\psi}_{k}(x) \, dx$. Since the initial condition $u \in U$ is real-valued we have $\overline{u_{k}}=-u_{-k}$ and one can focus on reconstructing the frequencies in the subset
\begin{align*}
\mathbb{Z}_{\uparrow}^{2}=
\Big\{ k=(k_{1},k_{2})\in\mathbb{Z}^{2}_* \, : \, [k_{1}+k_{2}>0] \; \textrm{or} \; [k_{1} = -k_{2} >0]\Big\}.
\end{align*}
We adopt a Bayesian framework and assume a centred Gaussian random field prior $\eta_0$ on the unknown initial condition
\begin{equation}\label{eq:prior}
\eta_{0}=\mathcal{N}(0,\beta^{2}A^{-\alpha})
\end{equation}
with hyper-parameters $\alpha,\beta$ affecting the roughness and magnitude of the initial vector field. In \eqref{eq:prior}, $A=-P\Delta$ denotes the Stokes operator where $\Delta = \big(\partial^2_{x_1}+\partial^2_{x_2},\,\partial^2_{x_1}+\partial^2_{x_2} \big)$ is the usual Laplacian and $P: \big(L^{2}(\mathbb{T}) \big)^{2}\rightarrow U$ is the Leray-Helmholtz orthogonal projector that maps a field to its divergence-free and zero-mean part.
A simple understanding of the prior distribution $\eta_0$ can be obtained through the Karhunen-Lo\'{e}ve representation; a draw from the prior distribution $\eta_0$ can be realised as the infinite sum
\begin{equation}
\mathfrak{Z} = \beta \, \sum_{k\in\mathbb{Z}^{2}_*} \left| k \right|^{-\alpha} \, \xi_{k} \,\psi_{k} \; \sim \; \eta_0
\end{equation}
where the variables $\{ \xi_k \}_{k \in \mathbb{Z}^2_*}$ are standard complex centred Gaussian random variables with $\big( \mbox{Re}(\xi_{k}), \mbox{Im}(\xi_{k}) \big)$ $\stackrel{iid}{\sim} \mathcal{N}\big(0, \frac12 \, I_2 \big)$ for $k\in\mathbb{Z}_{\uparrow}^{2}$ and $\xi_{k}=-\overline{\xi_{-k}}$ for $k \in \mathbb{Z}^{2}_* \setminus \mathbb{Z}_{\uparrow}^{2}$. In other words, \emph{a-priori}, the Fourier coefficients $u_{k}$ with $k\in\mathbb{Z}_{\uparrow}^{2}$ are assumed independent, normally distributed, with a particular rate of decay for their variances as $\left|k\right|$ increases. Statistical inference is carried out by sampling from the posterior probability measure $\eta$ on $U$ defined as the Gaussian change of measure
\begin{equation} \label{eq:target}
\frac{d\eta}{d\eta_{0}}(u) = \frac{1}{Z(y)} \, \ell(y; u)
\end{equation}
for a normalisation constant $Z(y)>0$.
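The Karhunen--Lo\'{e}ve representation above translates directly into a sampler for the prior $\eta_0$. The following sketch (with illustrative truncation level, grid size and hyper-parameters) draws the two-component field on a grid and checks that the conjugate symmetry produces a real, mean-zero field:

```python
import numpy as np

def sample_prior_field(kmax, alpha, beta, grid_n, rng):
    """Draw u ~ N(0, beta^2 A^{-alpha}) via a truncated Karhunen-Loeve sum and
    evaluate the two-component velocity field on a grid_n x grid_n torus grid."""
    xs = np.linspace(0.0, 2.0 * np.pi, grid_n, endpoint=False)
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    u = np.zeros((2, grid_n, grid_n), dtype=complex)
    for k1 in range(-kmax, kmax + 1):
        for k2 in range(-kmax, kmax + 1):
            if (k1, k2) == (0, 0):
                continue
            # keep one representative per pair {k, -k} (the set Z^2_up)
            if not (k1 + k2 > 0 or (k1 == -k2 and k1 > 0)):
                continue
            xi = (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
            for (q1, q2, coef) in [(k1, k2, xi), (-k1, -k2, -np.conj(xi))]:
                norm = np.sqrt(q1 ** 2 + q2 ** 2)
                uk = beta * norm ** (-alpha) * coef
                phase = np.exp(1j * (q1 * X1 + q2 * X2))
                # psi_k = (k_perp / (2 pi |k|)) e^{i k.x}, k_perp = (-k2, k1)
                u[0] += uk * (-q2) / (2.0 * np.pi * norm) * phase
                u[1] += uk * q1 / (2.0 * np.pi * norm) * phase
    return u

rng = np.random.default_rng(5)
u = sample_prior_field(kmax=8, alpha=2.0, beta=np.sqrt(5.0), grid_n=16, rng=rng)
assert np.abs(u.imag).max() < 1e-8           # conjugate symmetry gives a real field
assert np.abs(u.real.mean(axis=(1, 2))).max() < 1e-8   # zero spatial mean
```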
\subsubsection{Algorithmic Challenges and Adaptive SMC}
\label{sec:what}
With a slight abuse of notation we will henceforth use a single subscript to count the observations and set $y_{(j-1)M+m} \equiv y_{j,m}$.
We will apply an SMC sampler on the sequence of distributions $\{ \eta_n \}_{n=0}^{M \times T}$ defined by
\begin{equation}
\label{eq:seq}
\frac{d \eta_n}{d\eta_0}(u) = \frac{1}{Z(y_{1:n})} \, \ell(y_{1:n}; u)
\end{equation}
for a normalisation constant $Z(y_{1:n})$ and likelihood $\ell(y_{1:n}; u)$.
Note that the state space $U$ is infinite-dimensional even though in practice, as described in \cite{kantas}, our solver truncates
the Fourier expansion \eqref{eq:Fourier} on a pre-specified window of frequencies $-k_{\max}+1 \le k_1, k_2 \le k_{\max}$ for $k_{\max}=32$.
We now describe the MCMC mutation steps used for propagating the $N$-particle system. For a tuning parameter $\rho\in (0,1)$, a simple Markov kernel suggested in several articles (see e.g.\@ \cite{cotter} and the references therein) for target distributions that are Gaussian changes of measure of the form \eqref{eq:seq} is the following. Given the current position $u \in U$, the proposal $\widetilde{u}$ is defined as
\begin{equation}\label{eq:RWW}
\widetilde{u} = \rho \, u + (1-\rho^2)^{1/2} \, \mathfrak{Z}
\end{equation}
with $\mathfrak{Z} \sim \eta_0$; the proposal is accepted with probability $\min\big(1, \ell(y_{1:n};\widetilde{u}) / \ell(y_{1:n};u) \big)$. Proposal \eqref{eq:RWW} preserves the prior Gaussian distribution \eqref{eq:prior} for any $\rho\in (0,1)$ and the above Markov transition is well-defined on the infinite-dimensional space $U$. It follows that the method is robust upon mesh-refinement in the sense that $\rho$ does not need to be adjusted as $k_{\max}$ increases \cite{pillai2014}. In contrast, for standard Random-Walk Metropolis
proposals, one would have to pick a smaller step-size upon mesh-refinement; for the optimal step-size, the mixing time will typically deteriorate as $\mathcal{O}(k_{\max}^2)$, see e.g.~\cite{besk}. Still,
proposal \eqref{eq:RWW} can be inefficient when targeting the posterior distribution $\eta$ if the latter differs significantly from the prior distribution $\eta_0$. Indeed, \emph{a-priori} the Fourier coefficients $u_k$ have known scales that are appropriately taken into account in \eqref{eq:RWW}; \emph{a-posteriori}, information from the data spreads non-uniformly
over the Fourier coefficients, with more information being available for low frequencies than for high ones. Looking ahead to results from the execution of the adaptive SMC algorithm defined below, in Figure~\ref{ex2:circle} we plot the
fractions, as estimated by the SMC method, between posterior and prior standard deviations for the Fourier coefficient $\mathrm{Re}(u_k)$ (left panel) and
$\mathrm{Im}(u_k)$ (right panel) over all pairs of frequencies $k=(k_1,k_2)$ with $-20\le k_1,k_2 \le 20$. It is apparent that most of the information in the data concentrates on a window of frequencies around the origin; still, there is a large number of variables (around $2\cdot 10^2$ in this example) with diverse posterior standard deviations, which can be very different from their prior standard deviations.
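The prior-preserving proposal \eqref{eq:RWW} can be sketched as follows (illustrative Python; the flat likelihood in the sanity check makes the chain preserve the prior exactly):

```python
import numpy as np

def pcn_step(u, log_likelihood, prior_sample, rho, rng):
    """Preconditioned Crank-Nicolson step: propose rho*u + sqrt(1-rho^2)*Z with
    Z ~ prior, accepted with the likelihood ratio; prior-preserving for any rho."""
    z = prior_sample(rng)
    v = rho * u + np.sqrt(1.0 - rho ** 2) * z
    if np.log(rng.uniform()) < log_likelihood(v) - log_likelihood(u):
        return v
    return u

# sanity check: with a flat likelihood the chain preserves the prior N(0, I)
rng = np.random.default_rng(6)
u = rng.normal(size=50)
draws = []
for _ in range(4000):
    u = pcn_step(u, lambda w: 0.0, lambda r: r.normal(size=50), 0.7, rng)
    draws.append(u[0])
draws = np.array(draws)
assert abs(draws.mean()) < 0.15
assert abs(draws.var() - 1.0) < 0.2
```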
\begin{figure}[!h]
\centering \includegraphics[width=16cm]{circle_long}
\caption{Ratio of (estimated) posterior vs prior standard deviations for $\mathrm{Re}(u_k)$ (left panel) and
$\mathrm{Im}(u_k)$ (right panel) over all pairs $k=(k_1,k_2)$ with $-20\le k_1,k_2 \le 20$.
The model here corresponds to: $\delta = 0.2$, $m=4$, $T=20$, $\alpha=2$, $\beta^2=5$, $\varepsilon^2=0.2$,
$f(x)=\nabla^{\perp}\cos((5,5)'\cdot x)$. The $m=4$ observation locations
were at $(0,\pi)$, $(\pi,0)$, $(0,0)$, $(\pi,\pi)$.
Samples from the posterior were generated by applying a version of the adaptive SMC algorithm described in Section \ref{sec:what} for $K=7$, see \cite{kantas} for full details. The `true' initial condition was sampled from the prior; data were then simulated accordingly.}
\label{ex2:circle}
\end{figure}
The approach followed in \cite{kantas} for constructing better-mixing Markov kernels involves selecting a `window' of frequencies $\mathbf{K}=\left\{ k\in\mathbb{Z}^{2}_* \,:\, \max( k_{1}, k_{2} )\leq K \right\}$, for a user pre-specified threshold $K \geq 1$, and using the following Markov mutation steps within an SMC algorithm.
\begin{itemize}
\item Use the currently available particles approximation $\{u^{i}\}_{i=1}^{N}$ of $\eta_n$ to estimate the current marginal mean and covariance $\mathfrak{m}^{N}_k$ and $\Sigma^N_k$ of the two-dimensional variable $u_k=\big(\mathrm{Re}(u_k),\mathrm{Im}(u_k) \big)$ over the window $k=(k_1, k_2) \in \mathbf{K} \cap \mathbb{Z}_{\uparrow}^{2}$,
\begin{equation*}
\mathfrak{m}^{N}_k = \tfrac{1}{N}\,\sum_{i=1}^{N}u_k^{i}
\ ; \quad
\Sigma^N_k = \tfrac{1}{N-1}
\sum_{i=1}^{N}
(u_k^{i}-\mathfrak{m}^{N}_k) \otimes (u_k^{i}-\mathfrak{m}^{N}_k)\ .
\end{equation*}
For high frequencies $k=(k_1, k_2) \in \mathbf{K}^c \cap \mathbb{Z}_{\uparrow}^{2}$, only the information contained in the prior distribution is used, and we thus set $\mathfrak{m}^{N}_k = 0$ and $\Sigma^N_k = \frac{1}{2}\, |k|^{-2\alpha} \, I_2$.
\item
For a current position $u = \sum u_k \, \psi_k$, the proposal $\widetilde{u}=\sum \widetilde{u}_k \, \psi_k$ is defined as
\begin{equation*}
\widetilde{u}_k = \mathfrak{m}^N_{k} + \rho \, (u_{k} - \mathfrak{m}^N_{k}) +
(1-\rho^{2})^{1/2}\, \mathfrak{Z}_k
\end{equation*}
for $k \in \mathbb{Z}_{\uparrow}^{2}$ and $\mathfrak{Z}_k \sim \mathcal{N}(0,\Sigma^N_k)$, and $\widetilde{u}_{-k} = -\overline{\widetilde{u}_{k} }$ for $k \in \mathbb{Z}^2_* \setminus \mathbb{Z}_{\uparrow}^{2}$; this proposal is accepted with the relevant Metropolis-Hastings ratio.
\item In addition to the above adaptation of the Markov kernel, the algorithm also involved an annealing step as
described in Section \ref{sec:annealed}, whereby additional intermediate distributions were introduced, if needed, between
any pair $\eta_{n-1}$, $\eta_n$. We found this to be important for avoiding weight degeneracy and obtaining a stable algorithm.
As explained in Section~\ref{sec:annealed}, the choice of temperatures was determined
on the fly, according to a minimum requirement on the effective sample size (we chose $\alpha=\tfrac{1}{3}$).
\end{itemize}
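The adapted mutation step above can be sketched in Python as follows. This is a minimal illustration with our own naming, not the authors' implementation: the Fourier bookkeeping over $\mathbb{Z}^2_{\uparrow}$ is flattened to an array of $(\mathrm{Re},\mathrm{Im})$ pairs, and the Metropolis-Hastings ratio is abstracted as a user-supplied function rather than spelled out.

```python
import numpy as np

def adapted_pcn_step(particles, rho, window_mask, prior_cov,
                     log_accept_ratio, rng):
    """One adapted pCN-style mutation sweep over a cloud of particles.

    particles        : (N, d, 2) array of (Re, Im) pairs, one row per frequency
    window_mask      : boolean (d,), True for frequencies inside the window K
    prior_cov        : (d, 2, 2) prior covariances, used outside the window
    log_accept_ratio : callable (current, proposal) -> log MH ratio (assumed
                       supplied by the user; its exact form is model-specific)
    """
    N, d, _ = particles.shape
    # Empirical mean and 2x2 covariance per frequency, from the current cloud
    m = particles.mean(axis=0)                                    # (d, 2)
    centred = particles - m
    cov = np.einsum('nki,nkj->kij', centred, centred) / (N - 1)   # (d, 2, 2)
    # Outside the window only prior information is used
    m[~window_mask] = 0.0
    cov[~window_mask] = prior_cov[~window_mask]
    out = particles.copy()
    for i in range(N):
        noise = np.stack([rng.multivariate_normal(np.zeros(2), cov[k])
                          for k in range(d)])
        prop = m + rho * (particles[i] - m) + np.sqrt(1.0 - rho**2) * noise
        if np.log(rng.uniform()) < log_accept_ratio(particles[i], prop):
            out[i] = prop
    return out
```

The per-frequency empirical moments play the role of $\mathfrak{m}^N_k$ and $\Sigma^N_k$; the autoregressive update mirrors the displayed proposal.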
It is important to note that in this Navier-Stokes setting, the regularity assumptions adopted in the theoretical parts of this article for the derivation of the asymptotic results no longer apply. As illustrated by the numerical results that follow, the asymptotic behaviour predicted in Theorem \ref{thm.stability} is nonetheless likely to hold in far more general contexts.
Figure \ref{fig:1} shows a plot of an estimate of the variance of $Z^{N}(y_{1:n}) / Z(y_{1:n})$, where $Z^{N}(y_{1:n})$ is the $N$-particle approximation of the normalisation constant $Z(y_{1:n})$, as a function of the amount of data $n$ for an adaptive SMC algorithm using $N=500$ particles. In this complex setting, the numerical results seem to confirm the theoretical asymptotic results of Theorem \ref{thm.stability}: the estimated asymptotic variance seems to grow linearly with $n$, as would be expected for the perfect SMC algorithm that does not use adaptation.
This is an indication that Theorem \ref{thm.stability} is likely to hold under weaker assumptions than adopted in this article.
\begin{figure}[h!]
\vspace{-4cm}
\centering
{\includegraphics[width=\textwidth,height=18cm]{fig_paper3v2.pdf}}
\vspace{-5cm}
\caption{Estimated variance for the estimate of the normalizing constant of adaptive SMC. The `true' normalizing constant was estimated from 1000 independent runs with $N=500$; the relative variance was then estimated over 500 independent runs, also with $N=500$. The crosses are the estimated values of the relative variance.
\label{fig:1}}
\end{figure}
\subsubsection{Algorithmic Stability in Large Scale Adaptation}\label{sec:simos}
When the dimension $d$ of the adapted statistics is large, as in the Navier-Stokes case
(in our simulation study $d=\textbf{Card}(\mathbf{K} \cap \mathbb{Z}_{\uparrow}^{2}) \times 5\approx [(2K)^2/2] \times 5 \approx 500$) and potentially in other scenarios,
it is certainly of interest to quantify the effect of the dimensionality $d$ of the adaptive statistics on the overall accuracy of the SMC estimators.
We will make a first modest attempt to shed some light on this issue via the consideration of a very simple modelling structure motivated by the Navier-Stokes example and allowing for some simple calculations.
For each $n \geq 1$ we assume a product form Gaussian target on $E_n=\mathbb R^{\infty}$,
\begin{equation*}
\eta_{n} = \bigotimes_{j=1}^{\infty} \mathcal{N}(0,\sigma_j^{2})\ ,
\end{equation*}
for a given sequence of variances $\{\sigma_j^2\}_{j=1}^{\infty}$ that does not depend on the index $n \geq 1$. This represents an optimistic case in which the incremental weights $G_{n}(x)$ play no role, so that the influence of the dimension $d$ can be studied in isolation; we set $G_n(x)\equiv 1$.
It is assumed that the SMC method has worked well up-to time $(n-1)$ and has produced a collection of i.i.d.\@ samples $\{x_{n-1}^{i}\}_{i=1}^{N}$ from $\eta_{n-1}$. For the mutation step, we consider an adaptive Metropolis-Hastings Markov kernel $M_{n,\xi}$ preserving $\eta_{n}$ that proposes, when the current position is $x \in \mathbb R^{\infty}$, a new position $\widetilde{x} \in \mathbb R^{\infty}$ distributed as
\begin{equation} \label{eq:M}
\begin{aligned}
\widetilde{x}_j &= \rho \, x_j + (1-\rho^2)^{1/2} \, \mathcal{N}(0, \widehat{\sigma}^2_{j})\ ,
\quad \textrm{for} \quad 1 \leq j \leq d \ ,\\
\widetilde{x}_j &= \rho \, x_j + (1-\rho^2)^{1/2} \, \mathcal{N}(0, \sigma^2_{j})\ ,
\quad \textrm{for} \quad j \geq d+1 \ ,
\end{aligned}
\end{equation}
where we have set $\widehat{\sigma}^2_j := (1/N) \, \sum_{i=1}^{N}\{x_{n-1,j}^i\}^2$. This corresponds to the adaptive SMC approach described in Section \ref{sec:algo} with a $d$-dimensional adaptive statistics $\xi_n(x)=(x_1^2, \ldots, x_d^2)$. Thus, the first $d$ coordinates of the proposal are adapted to the estimated marginal variance while the ideal variance is used for the remaining coordinates.
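As an illustration, here is a sketch of the proposal in \eqref{eq:M}, with hypothetical names of our choosing and $\mathbb{R}^{\infty}$ truncated to finitely many coordinates:

```python
import numpy as np

def partially_adapted_proposal(x, prev_particles, d, sigma, rho, rng):
    """Propose x~ as in the displayed kernel: the first d coordinates use the
    estimated variances hat_sigma_j^2 = (1/N) sum_i (x_{n-1,j}^i)^2, while
    the remaining coordinates use the true sigma_j^2.

    x              : (D,) current position (finite truncation of R^infty)
    prev_particles : (N, D) i.i.d. samples from eta_{n-1}
    sigma          : (D,) true marginal standard deviations
    """
    D = x.shape[0]
    hat_sigma = np.sqrt((prev_particles ** 2).mean(axis=0))  # adapted stats
    scale = np.where(np.arange(D) < d, hat_sigma, sigma)
    return rho * x + np.sqrt(1.0 - rho ** 2) * scale * rng.standard_normal(D)
```

For $\rho \to 1$ the proposal collapses onto the current position, recovering the usual autoregressive limit.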
We want to investigate the effect of the amount of adaptation on the
accuracy of the estimator $\eta_{n}^{N}(\phi)$ for a bounded function $\phi$ that only depends on the $(d+1)$-th coordinate,
\begin{equation*}
\phi(x)=\phi(x_{d+1})\ .
\end{equation*}
Notice that in this simple scenario the Metropolis-Hastings proposal corresponding to the ideal kernel $M_{n,\eta_{n-1}(\xi_n)}$ preserves $\eta_n$ and is thus always accepted; under the ideal kernel, the particles at time $n$ would still be a set of $N$ i.i.d.\@ samples from $\eta_n$. Consequently, any deviation from the $\mathcal{O}(N^{-1/2})$ rate of convergence for the estimator $\eta_{n}^{N}(\phi)$ will be solely due to the effect of the adaptation.
We now investigate in this context the behavior of the difference $\eta_{n}^{N}(\phi)-\eta_{n}(\phi)$. Following the proof of Theorem \ref{theo:wlln} we use the decomposition
\begin{equation*}
[\eta_{n}^{N}-\eta_{n}](\phi) = A(N) + B_1(N) + B_2(N)
\end{equation*}
where, using the notations of Section \ref{sec:algo}, we have set
$A(N) = [\eta_n^N -\Phi_{n,N}(\eta_{n-1}^N)](\phi)$,
$B_1(N) =\eta_{n-1}^N[Q_{n,N} - Q_n](\phi)$
and
$B_2(N) = [\eta_{n-1}^N-\eta_{n-1}](Q_n \phi)$.
Denoting by $\|\!\cdot\! \|_2$ the $L_2$-norm of random variables and
conditioning upon $\mathcal{F}_{n-1}^{N}$, we have that
\begin{equation}
\label{eq:A}
\| A(N) \|_2^2 = \tfrac{1}{N}\,\mathbb{E}\,\big[\,\mathrm{Var}\,[\,\phi(x_n^{1})\,|\,\mathcal{F}_{n-1}^{N}\,]\,\big]
= \mathcal{O}(\tfrac{1}{N})\ .
\end{equation}
For $B_2(N)$ one can notice that $Q_n(\phi)$ is a bounded mapping from $\mathbb R^{\infty}$ to $\mathbb R$, thus
\begin{equation}
\label{eq:B}
\| B_2(N) \|_2^2 = \tfrac{1}{N}\,\mathrm{Var}_{\eta_{n-1}}\,[\,Q_n(\phi)\,] = \mathcal{O}(\tfrac{1}{N})\ .
\end{equation}
The critical term with regards to the effect of the dimension $d$ on the magnitude of the difference $[\eta_{n}^{N}-\eta_{n}](\phi)$ is $B_1(N)$. An approach similar to Equation \eqref{eq.Q.diff.eta} in the proof of Theorem \ref{thm.clt.unnormalised} yields
\begin{align*}
B_1(N)
&= \eta^N_{n-1} [Q_{n,N}-Q_n](\phi)
= \eta^N_{n-1}\big( \, \big[ M_{n,N} -
M_{n} \big]( \phi ) \, \big) \\
&=
\eta^N_{n-1} \big[ \partial_{\xi} M_{n} \phi \big]
\cdot
[\eta^N_{n-1}-\eta_{n-1}](\xi_n) + R
=:
\widetilde{B}_1(N) + R\ , \label{eq:R}
\end{align*}
for a residual random variable $R$. Controlling the residual term in the above expansion poses
enormous technical challenges and we restrict our analysis to the main order term $\widetilde{B}_1(N)$.
\begin{prop}
\label{pr:B}
The term $\widetilde{B}_1(N)$ satisfies
\begin{equation*}
\|\widetilde{B}_1(N) \|_2 = \mathcal{O}\big(\tfrac{\sqrt{d}}{N} \big) + \mathcal{O}\big(\tfrac{d}{N^{3/2}}\big)\ .
\end{equation*}
\end{prop}
\begin{proof}
See the Appendix.
\end{proof}
Proposition \ref{pr:B} combined with \eqref{eq:A}-\eqref{eq:B} suggests that, in a high dimensional setting with $d \gg 1$, it is reasonable to choose $N$ of order $\mathcal{O}(d)$, yielding a mean squared error of order $\mathcal{O}(1/d)$.
Even if this choice of $N$ should be thought of as a minimum requirement for the complete sequential method,
it may explain the fairly accurate SMC estimates of the marginal expectation obtained in the Navier-Stokes example when $N=500$ and $d\approx 500$; we refer the reader to \cite{kantas} for further simulation studies.
\section{Summary}\label{sec:summ}
This article studies the asymptotic properties of a class of adaptive SMC algorithms; a weak law of large numbers and a central limit theorem are established in several settings.
There are several extensions to the work in this article.
First, one could relax the boundedness assumptions used in the paper; our proof technique, also used in \cite{chopin1}, is particularly amenable to this.
Second, an approach to deal with the random stopping of some adaptive SMC algorithms (see Section \ref{sec:annealed})
also needs to be developed. Lastly, one can extend the analysis to the context of adaptive resampling.
\subsubsection*{Acknowledgements}
AB and AT were supported by a Singapore MOE grant.
AJ was supported by Singapore MOE grant R-155-000-119-133 and is also affiliated with the risk management institute at the National University of Singapore.
\usetikzlibrary{shapes,arrows}
\renewcommand{\vec}[1]{\boldsymbol{#1}}
\newcommand{\<}{\langle}
\renewcommand{\>}{\rangle}
\newcommand{U_{p}(\vec{\alpha})}{U_{p}(\vec{\alpha})}
\newcommand{U^\dagger_{p}(\vec{\alpha})}{U^\dagger_{p}(\vec{\alpha})}
\newcommand{U_{p}(\vec{\alpha}_{\text{opt}})}{U_{p}(\vec{\alpha}_{\text{opt}})}
\newcommand{U^\dagger_{p}(\vec{\alpha}_{\text{opt}})}{U^\dagger_{p}(\vec{\alpha}_{\text{opt}})}
\newcommand{\tilde{\rho}_p(\vec{\alpha}_\text{opt})}{\tilde{\rho}_p(\vec{\alpha}_\text{opt})}
\newcommand{\tilde{\rho}_p(\vec{\alpha})}{\tilde{\rho}_p(\vec{\alpha})}
\newcommand{\ingr}[2]{\begin{matrix}\includegraphics[height=#1 cm]{#2}\end{matrix}}
\long\def\ca#1\cb{}
\def\cites#1{XXX Need citation #1 XX}
\newcommand{\pat}[1]{\textcolor{red}{#1}}
\newcommand{\eric}[1]{\textcolor{red}{[E: #1]}}
\newcommand{\abs}[2][]{#1| #2 #1|}
\newcommand{\braket}[2]{\langle #1 \hspace{1pt} | \hspace{1pt} #2 \rangle}
\newcommand{\braketq}[1]{\braket{#1}{#1}}
\newcommand{\ketbra}[2]{| \hspace{1pt} #1 \rangle \langle #2 \hspace{1pt} |}
\newcommand{\ketbras}[3]{| \hspace{1pt} #1 \rangle_{#3} \langle #2 \hspace{1pt} |}
\newcommand{\ketbraq}[1]{\ketbra{#1}{#1}}
\newcommand{\bramatket}[3]{\langle #1 \hspace{1pt} | #2 | \hspace{1pt} #3 \rangle}
\newcommand{\bramatketq}[2]{\bramatket{#1}{#2}{#1}}
\newcommand{\norm}[2][]{#1| \! #1| #2 #1| \! #1|}
\newcommand{\nbox}[2][9]{\hspace{#1pt} \mbox{#2} \hspace{#1pt}}
\newcommand{\avg}[1]{\langle #1\rangle }
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\colo}{\,\hbox{:}\,}
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\dya}[1]{\ket{#1}\!\bra{#1}}
\newcommand{\dyad}[2]{\ket{#1}\!\bra{#2}}
\newcommand{\ipa}[2]{\langle #1,#2\rangle}
\newcommand{\ip}[2]{\langle #1|#2\rangle}
\newcommand{\matl}[3]{\langle #1|#2|#3\rangle}
\newcommand{\boldsymbol{\theta}}{\boldsymbol{\theta}}
\newcommand{\text{tot}}{\text{tot}}
\newcommand{\round}[1]{\ensuremath{\left\lfloor#1\right\rceil}}
\newcommand*\est{\mathrel{\widehat{=}}}
\newcommand{\ensuremath{\mathsf{POOQ}}\xspace}{\ensuremath{\mathsf{POOQ}}\xspace}
\newcommand{\ensuremath{\mathsf{POTQ}}\xspace}{\ensuremath{\mathsf{POTQ}}\xspace}
\newcommand{\ensuremath{\mathsf{HS\mbox{-}Test}}\xspace}{\ensuremath{\mathsf{HS\mbox{-}Test}}\xspace}
\newcommand{\text{fid}}{\text{fid}}
\newcommand{\text{sq}}{\text{sq}}
\newcommand{\text{cor}}{\text{cor}}
\newcommand{\text{sec}}{\text{sec}}
\newcommand{\text{rob}}{\text{rob}}
\newcommand{\text{rank}}{\text{rank}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{A}}{\mathbb{A}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{X}}{\mathbb{X}}
\newcommand{\mathbb{Y}}{\mathbb{Y}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\overline{A}}{\overline{A}}
\newcommand{\overline{X}}{\overline{X}}
\newcommand{\overline{Z}}{\overline{Z}}
\newcommand{\widehat{K}}{\widehat{K}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathbb{X}}{\mathbb{X}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\widehat{\Theta}}{\widehat{\Theta}}
\newcommand{\text{HS}}{\text{HS}}
\newcommand{\text{tol}}{\text{tol}}
\newcommand{{\rm Tr}}{{\rm Tr}}
\newcommand{{\rm Det}}{{\rm Det}}
\newcommand{\text{supp}}{\text{supp}}
\newcommand{\text{pass}}{\text{pass}}
\newcommand{\ave}[1]{\langle #1\rangle}
\renewcommand{\geq}{\geqslant}
\renewcommand{\leq}{\leqslant}
\newcommand{\mte}[2]{\langle#1|#2|#1\rangle }
\newcommand{\mted}[3]{\langle#1|#2|#3\rangle }
\newcommand{\eqprop}[2]{\stackrel{\tiny{#1}}{#2}}
\newcommand{\eqpropa}[2]{\stackrel{\scriptstyle{#1}}{#2}}
\newcommand{\text{leak}^{\text{EC}}_{\text{obs}}}{\text{leak}^{\text{EC}}_{\text{obs}}}
\newcommand{\text{vec}}{\text{vec}}
\renewcommand{\Re}{\text{Re}}
\renewcommand{\Im}{\text{Im}}
\newcommand{\text{noise}}{\text{noise}}
\newcommand{\text{Var}}{\text{Var}}
\newcommand{\widetilde{\mathcal{D}}}{\widetilde{\mathcal{D}}}
\newcommand{\widetilde{\mathcal{V}}}{\widetilde{\mathcal{V}}}
\newcommand{\widetilde{\mathbb{V}}}{\widetilde{\mathbb{V}}}
\newcommand{\widetilde{X}}{\widetilde{X}}
\newcommand{\widetilde{Z}}{\widetilde{Z}}
\newcommand{\widetilde{A}}{\widetilde{A}}
\newcommand{\widetilde{B}}{\widetilde{B}}
\newcommand{\widetilde{C}}{\widetilde{C}}
\newcommand{\widehat{A}}{\widehat{A}}
\newcommand{\widehat{B}}{\widehat{B}}
\newcommand{\widehat{C}}{\widehat{C}}
\newcommand{\overline{A}}{\overline{A}}
\newcommand{\overline{B}}{\overline{B}}
\newcommand{\overline{C}}{\overline{C}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\text{int}}{\text{int}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\renewcommand{\vec}[1]{\boldsymbol{#1}}
\newcommand{\otimes}{\otimes}
\newcommand{^\dagger}{^\dagger}
\newcommand{\fip}[2]{\langle #1\, , \, #2\rangle}
\newcommand{^\textup{T}}{^\textup{T}}
\newcommand*{\openone}{\openone}
\newcommand*{\cong}{\cong}
\newcommand*{H_{\min}}{H_{\min}}
\newcommand*{H_{\max}}{H_{\max}}
\newcommand*{D_{\min}}{D_{\min}}
\newcommand*{D_{\max}}{D_{\max}}
\newcommand*{\text{guess}}{\text{guess}}
\newcommand*{\text{guess}}{\text{guess}}
\newcommand{\hat{\rho}}{\hat{\rho}}
\newcommand{\widehat{\rho}}{\widehat{\rho}}
\newcommand{\overline{\rho}}{\overline{\rho}}
\newcommand{\tilde{\rho}}{\tilde{\rho}}
\newcommand{\textsf{BS}}{\textsf{BS}}
\newcommand{\textsf{QBS}}{\textsf{QBS}}
\newcommand{\textsf{PBS}}{\textsf{PBS}}
\newcommand{\tilde{\phi}}{\tilde{\phi}}
\newcommand{\tilde{\psi}}{\tilde{\psi}}
\newcommand{\text{fringe} }{\text{fringe} }
\newcommand{\alpha }{\alpha }
\newcommand{\beta }{\beta }
\newcommand{\gamma }{\gamma }
\newcommand{\Gamma }{\Gamma }
\newcommand{\delta }{\delta }
\newcommand{\Delta}{\Delta}
\newcommand{\epsilon}{\epsilon}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\zeta }{\zeta }
\renewcommand{\th}{\theta }
\newcommand{\vartheta }{\vartheta }
\newcommand{\Theta }{\Theta }
\newcommand{\iota }{\iota }
\newcommand{\kappa }{\kappa }
\newcommand{\lambda }{\lambda }
\newcommand{\vec{\lambda} }{\vec{\lambda} }
\newcommand{\vec{\gamma} }{\vec{\gamma} }
\newcommand{\vec{\Gamma} }{\vec{\Gamma} }
\newcommand{\Lambda }{\Lambda }
\newcommand{\varpi}{\varpi}
\newcommand{\varrho}{\varrho}
\newcommand{\sigma }{\sigma }
\newcommand{\varsigma}{\varsigma}
\newcommand{\Sigma }{\Sigma }
\newcommand{\upsilon }{\upsilon }
\newcommand{\Upsilon }{\Upsilon }
\newcommand{\varphi }{\varphi }
\newcommand{\omega }{\omega }
\newcommand{\Omega }{\Omega }
\newcommand{d }{d }
\newcommand{F}{F}
\newcommand{M}{M}
\newcommand{\tilde{\mathbf{S} }}{\tilde{\mathbf{S} }}
\newcommand{\hat{\mathbf{S} }}{\hat{\mathbf{S} }}
\newcommand{\onenorm}[2][]{#1\| #2 #1\|_1}
\newcommand{\widehat{E}}{\widehat{E}}
\makeatletter
\newcommand{\justified}{%
\rightskip=10pt $\,$%
\leftskip=10pt }
\makeatother
\makeatletter
\renewcommand\@make@capt@title[2]{%
\@ifx@empty\float@link{\@firstofone}{\expandafter\href\expandafter{\float@link}}%
{\textbf{#1}}\@caption@fignum@sep#2\quad}%
\makeatother
\makeatletter
\renewcommand{\fnum@figure}{\textbf{Figure~\thefigure}}
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{proposition}{Proposition}
\newtheorem{example}{Example}
\newtheorem{definition}{Definition}
\newenvironment{specialproof}{\textit{Proof:}}{\hfill$\square$}
\begin{document}
\title{Operator Sampling for Shot-frugal Optimization in Variational Algorithms}
\author{Andrew Arrasmith}
\email{[email protected]}
\affiliation{Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545, USA.}
\author{Lukasz Cincio}
\affiliation{Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545, USA.}
\author{Rolando D. Somma}
\affiliation{Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545, USA.}
\author{Patrick J. Coles}
\affiliation{Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, NM 87545, USA.}
\begin{abstract}
Quantum chemistry is a near-term application for quantum computers. This application may be facilitated by variational quantum-classical algorithms (VQCAs), although a concern for VQCAs is the large number of measurements needed for convergence, especially for chemical accuracy. Here we introduce a strategy for reducing the number of measurements (i.e., shots) by randomly sampling operators $h_i$ from the overall Hamiltonian $H = \sum_i c_i h_i$. In particular, we employ weighted sampling, which is important when the $c_i$'s are highly non-uniform, as is typical in chemistry. We integrate this strategy with an adaptive optimizer developed recently by our group to construct an improved optimizer called Rosalin (Random Operator Sampling for Adaptive Learning with Individual Number of shots). Rosalin implements stochastic gradient descent while adapting the shot noise for each partial derivative and randomly assigning the shots amongst the $h_i$ according to a weighted distribution. We implement this and other optimizers to find the ground states of molecules H$_2$, LiH, and BeH$_2$, without and with quantum hardware noise, and Rosalin outperforms other optimizers in most cases.
\end{abstract}
\maketitle
\section{Introduction}
The variational quantum eigensolver (VQE) is a potential tool for elucidating the electronic structure of molecules and materials~\cite{peruzzo2014VQE}. VQE and other similar variational quantum-classical algorithms (VQCAs)~\cite{farhi2014QAOA, johnson2017qvector, romero2017quantum, larose2018, arrasmith2019variational, cerezo2019variational, jones2019variational, yuan2018theory, li2017efficient, kokail2019self, Khatri2019quantumassisted, jones2018quantum, heya2018variational, endo2018variational,sharma2019noise, carolan2019variational,yoshioka2019variational,bravo-prieto2019,xu2019variational,mcardle2019variational,cirstoiu2019variational,otten2019noise,LubaschVariational20,verdon2019quantum,bravo2019quantum,cerezo2020variational} have been proposed as methods to make use of near-term quantum computers. VQCAs efficiently evaluate a cost function on a quantum computer while optimizing the cost value using a classical computer. Important results have been obtained for the ``quantum portion'' of VQCAs, such as efficient gradient evaluation~\cite{mitarai2018quantum,Schuld2019}, scaling analysis for gradients~\cite{mcclean2018barren,cerezo2020cost}, resilience to certain types of noise~\cite{sharma2019noise,mcclean2016theory}, and reducing measurements by finding commuting operator subsets~\cite{Jena2019,Izmaylov2019,Yen2019,Gokhale2019,Crawford2019,Gokhale2019-2, huggins2019efficient}.
However, to realize the full potential of VQCAs, it is not enough to focus only on the quantum part of these algorithms. One needs a powerful classical optimizer.
Certain issues arise in VQCAs that are not common in classical algorithms, implying that standard off-the-shelf classical optimizers may not be best suited to VQCAs. For example, multiple runs of quantum circuits are required to reduce the effects of shot noise on cost evaluation. Furthermore, applications like VQE require the measurement of large sets of non-commuting operators, significantly increasing the number of shots needed to reach a given level of shot noise~\cite{troyer2015}. An additional complication is that quantum hardware noise flattens the training landscape~\cite{sharma2019noise}. Ideally, for VQCAs, one should design an optimizer that can handle both shot noise and hardware noise.
Some recent works have focused on classical optimizers~\cite{verdon2019learning, wilson2019optimizing, nakanishi2019, parrish2019, stokes2019quantum, kubler2019adaptive, sweke2019, zhang2019collective,koczor2019quantum,lavrijsen2020classical}. One trend that has emerged is gradient-based optimizers, which are motivated by a result that gradient information improves convergence~\cite{harrow2019}. This approach brings with it the challenge (i.e., the large number of shots required) of potentially needing to estimate many partial derivatives of a function that is a sum over expectation values of many possibly non-commuting observables.
As a result, our group~\cite{kubler2019adaptive} as well as Sweke et al.~\cite{sweke2019} have recently investigated shot-frugal gradient descent for VQCAs. Specifically, we introduced an optimizer, called iCANS (individual Coupled Adaptive Number of Shots), which outperformed off-the-shelf classical optimizers such as Adam~\cite{Kingma2015}
for variational quantum compiling and VQE tasks~\cite{kubler2019adaptive}. The key feature of iCANS is that it maximizes the expected gain per shot by frugally adapting the shot noise for each individual partial derivative.
In this article, we take shot-frugal optimization to the next level. In VQE and other VQCAs, it is common to express the cost function $C = \avg{H}$ as the expectation value of a Hamiltonian $H$ that is expanded as a weighted sum of directly measurable operators $\{ h_i \}_i$:
\begin{equation}
\label{eqn1}
H=\sum_{i=1}^N c_i h_i.
\end{equation}
Then $C$ is computed from estimations of each expectation $\avg{h_i}$, which is obtained from many shots. In this work, we propose to randomly assign shots to the $h_i$ operators according to a weighted probability distribution (proportional to $|c_i|$). We prove that this leads to an unbiased estimator of the cost $C$, even when the number of shots is extremely small (e.g., just a single shot). This allows one to unlock a level of shot-frugality for unbiased estimation that simply cannot be accessed without operator sampling. In addition, the randomness associated with operator sampling can provide a means to escape from local minima of $C$. We note that randomly sampling the $h_i$ terms was also examined in Ref.~\cite{sweke2019} although their approach is different from ours (as discussed in Sec.~\ref{sec:single-rand}), and it was also employed by Campbell in the context of dynamical simulation and phase estimation~\cite{campbell2019random}.
A combination of the new sampling strategy with iCANS leads to our main result, which is an improved optimizer for VQCAs that we call Rosalin (Random Operator Sampling for Adaptive Learning with Individual Number of shots). Rosalin retains the crucial feature of maximizing the expected gain (i.e., cost reduction) per shot. We analyze the potential of Rosalin by applying it to VQE for three molecules, namely H$_2$, LiH, and BeH$_2$, and compare
its performance with that of other optimizers. In cases with more than a few terms in the Hamiltonian, Rosalin outperforms all other optimizer and sampling strategy combinations considered. Hence, we believe Rosalin should be regarded as the state-of-the-art method for any application that is concerned about shot frugality.
\section{Results}
\subsection{Variances of Estimation Strategies}\label{sec:analytical}
In what follows, we compare various strategies (in terms of their variances) for estimating expectation values with a finite total number of shots, $s_\text{tot}$. For this purpose, we denote $\widehat{E}$ as the estimator for $\avg{H}$ and $\widehat{\mathcal{E}_i}$ as the estimator for $\avg{h_i}$, where
\begin{equation}
\label{eq:gen_expectation}
\widehat{E} = \sum_{i=1}^N c_i \widehat{\mathcal{E}_i} \,,\quad\text{with}\quad \widehat{\mathcal{E}_i}= \frac{1}{\text{E}[s_i]}\sum_{j=1}^{s_i} r_{ij}\,.
\end{equation}
Here, $s_i$ is the number of shots allocated to the measurement of $h_i$. Note that $s_i$ may be a random variable. As we will work in terms of the total shot budget for the estimation, $s_\text{tot}$, we impose $\sum_{i=1}^Ns_i=s_\text{tot}$. Also, each $r_{ij}$ is an independent random variable associated with the $j$-th measurement of $h_i$. $\text{E}[\cdot]$ denotes the expectation value and we will assume $\text{E}[s_i]>0$ for all $i$. We now state two useful propositions about this estimator.
\begin{proposition}
\label{prop1}
$\widehat{E}$ is an unbiased estimator for $\langle H \rangle$.
\end{proposition}
See Sec.~\ref{App:bias} for a proof of Prop.~\ref{prop1}.
We remark that if $\text{E} [s_{i}]=0$ for any operator, $\widehat{E}$ becomes undefined. However, one could choose to resolve this by modifying \eqref{eq:gen_expectation} to exclude such operators (indexed by the set $\mathcal{I}$) from the operator sum, as this is essentially a statement that one will choose not to measure those operators. Doing this gives
\begin{align}
\text{E}[\widehat{E}] =\avg{H}-\sum_{i'\in \mathcal{I}} c_{i'} \avg{h_{i'}}.
\end{align}
We therefore have that $\text{E}[\widehat{E}]\ne\left\langle H \right\rangle$ unless $\sum_{i'\in \mathcal{I}} c_{i'} \avg{h_{i'}}=0$. Hence, in the absence of special symmetries or vanishing coefficients, the estimator becomes biased. This justifies our assumption that $\text{E}[s_i]>0$ for all $i$ to achieve an unbiased estimator.
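To make the definition concrete, the following sketch (with our own naming, not from the paper) assembles $\widehat{E}$ from per-operator shot outcomes exactly as in \eqref{eq:gen_expectation}:

```python
def assemble_estimator(coeffs, expected_shots, outcomes):
    """Return widehat{E} = sum_i c_i * (1 / E[s_i]) * sum_j r_{ij}.

    coeffs[i]         : the coefficient c_i of operator h_i
    expected_shots[i] : E[s_i], assumed > 0 so the estimator is well defined
    outcomes[i]       : list of single-shot results r_{ij} for h_i (may be
                        empty if this realization allotted no shots to h_i)
    """
    return sum(c * sum(r) / e
               for c, e, r in zip(coeffs, expected_shots, outcomes))
```

Note the division by the expected, not realized, shot count: this is what keeps $\widehat{E}$ unbiased when the $s_i$ are random.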
\begin{proposition}
\label{prop2}
The variance of $\widehat{E}$ is
\begin{align}\label{eq:gen_variance}
{\rm Var}(\widehat{E})= &\sum_{i=1}^N \frac {c_i^2}{{\rm E}[s_{i}]} \sigma_i^2\nonumber\\
&+\sum_{i, i'}\frac{c_ic_{i'}\langle h_i \rangle \langle h_{i'} \rangle }{{\rm E}[s_i]{\rm E}[s_{i'}]}{\rm Cov}[s_i,s_{i'}]\;,
\end{align}
where $\sigma_i^2=\langle h_i^2 \rangle-\langle h_i \rangle^2$ is the quantum mechanical variance of $h_i$ in the given state and ${\rm Cov}[s_i,s_{i'}]={\rm E}[s_is_{i'}]-{\rm E}[s_i] {\rm E}[s_{i'}]$ are the entries of the covariance matrix associated with the $s_i$'s.
\end{proposition}
See Sec.~\ref{App:Var} for a proof of Prop.~\ref{prop2}.
We note that in this formalism each $h_i$ operator can either be a unitary operator (such as tensor products of Pauli operators) or, more generally, a weighted sum over a commuting set of unitary operators. For simplicity, we will work with normalized versions of these operators ($h_i'=h_i/\|h_i\|$) and absorb the norm into the coefficients ($c_i'=c_i\|h_i\|$) so that $c_i h_i=c_i' h_i'$. For the remainder of this article, we will assume that all $h_i$'s are normalized in this way and drop the primes. Additionally, we define $M=\sum_{i=1}^N|c_i|$ for convenience.
\subsubsection{Uniform Deterministic Sampling}\label{sec:uniform}
The simplest approach to estimating $\langle H\rangle$ with a finite total number of shots $s_{\text{tot}}$ is to simply divide the number of shots equally among the $N$ terms. That then leads us to $s_i=s_\text{tot}/N$. Note that since we need to work with positive integer numbers of shots, this strategy is only valid for $s_\text{tot}=nN$ for some positive integer $n$. When working with optimization environments where other values of $s_\text{tot}$ may be requested by the method, we resolve the disparity by instead using $s_\text{tot}'=n'N$ shots, where $n'=\left\lfloor s_\text{tot}/N\right\rfloor$. We will refer to this strategy below as Uniform Deterministic Sampling (UDS).
With this deterministic strategy, ${\rm E}[s_i]=s_\text{tot}/N$ for all $i$, and ${\rm Cov}[s_i,s_{i'}]=0$ for all $i,i'$. From~\eqref{eq:gen_variance} we then have
\begin{equation}\label{eq:uni-var}
\text{Var}\left(\widehat{E}\right)=\frac{N}{s_{\text{tot}}}\sum_{i=1}^N\abs{ c_i}^2 \sigma_i^2.
\end{equation}
We note that this strategy represents the optimal allocation of shots in the special case where $\sigma_i\propto 1/\abs{c_i}$~\cite{rubin2018application}.
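As a sketch (function name is ours), the UDS allocation with the floor adjustment described above reads:

```python
def uds_allocation(s_tot, N):
    """Uniform deterministic sampling: use s_tot' = floor(s_tot / N) * N
    shots in total, split equally among the N Hamiltonian terms."""
    n = s_tot // N  # n' = floor(s_tot / N)
    return [n] * N
```

When an optimizer requests a budget that is not a multiple of $N$, the surplus shots are simply dropped.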
\subsubsection{Weighted Deterministic Sampling}\label{sec:weighted}
To improve the shot frugality of reaching a given precision, it has been proposed~\cite{troyer2015,rubin2018application} that the number of shots $s_i$ allocated to the measurement of each operator $h_i$ should be proportional to the magnitude of the coefficients $c_i$. That is, the shots would be deterministically proportioned so that:
\begin{equation}
s_i= s_\text{tot}\frac{\abs{c_i}}{M}.
\end{equation}
Note that for physical Hamiltonians $s_\text{tot} \abs{c_i}/{M}$ will often not be an integer. When this occurs we again take the floor:
\begin{equation}
\label{eq:weighted_alloc}
s_i=\left\lfloor s_\text{tot}\frac{\abs{c_i}}{M}\right\rfloor,
\end{equation}
which also redefines the total number of shots used as $s_\text{tot}'=\sum_{i=1}^N\lfloor s_\text{tot}{\abs{c_i}}/M\rfloor$. We will refer to this strategy below as Weighted Deterministic Sampling (WDS).
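A sketch of the WDS allocation in \eqref{eq:weighted_alloc} (naming is ours):

```python
import math

def wds_allocation(s_tot, coeffs):
    """Weighted deterministic sampling: s_i = floor(s_tot * |c_i| / M),
    where M = sum_i |c_i| (coefficients of the normalized h_i operators)."""
    M = sum(abs(c) for c in coeffs)
    return [math.floor(s_tot * abs(c) / M) for c in coeffs]
```

The flooring means the realized total $s_\text{tot}'=\sum_i s_i$ can fall slightly below the requested budget.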
From~\eqref{eq:gen_variance}, the variance of the estimator for this deterministic strategy (neglecting any corrections due to $s_{\rm tot}|c_i|/M$ not being integer) is
\begin{equation}\label{eq:weighted-var}
\text{Var}\left(\widehat{E}\right)=\frac{{M}}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \sigma_i^2\,.
\end{equation}
For equal magnitude coefficients, this method reduces to the UDS approach above, while when the $\sigma_i$'s are equal in magnitude this strategy becomes optimal~\cite{rubin2018application}. In the case of performing VQE for chemical Hamiltonians, we empirically find that for both random states and low-energy states there is greater variation in the $\abs{c_i}$'s than in the $\sigma_i$'s, and so WDS tends to perform better than UDS.
\subsubsection{Weighted Random Sampling}\label{sec:weighted-rand}
The above deterministic frameworks have a hard floor on the number of shots needed for any unbiased estimate, and this floor increases with $N$ as those methods must measure all operators at least once. This is a crucial point: deterministic methods cannot be unbiased when the number of shots is below some threshold, and hence this limits the shot frugality of these methods. In general, this shot floor is derived from demanding that $\min_i\,s_i>0$. For the specific case of WDS this floor is
\begin{equation}
\label{eq:ShotFloor}
s_{\text{floor}} =\left \lceil \frac{M}{\min_i \abs{c_i}} \right \rceil \le s_{\text{tot}} \,.
\end{equation}
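This floor is straightforward to evaluate; the short sketch below (function name ours) also shows how a single small coefficient inflates it:

```python
import math

def wds_shot_floor(c):
    # Smallest s_tot for which WDS assigns every term at least one shot:
    # s_floor = ceil(M / min_i |c_i|), as in the equation above.
    M = sum(abs(ci) for ci in c)
    return math.ceil(M / min(abs(ci) for ci in c))

print(wds_shot_floor([0.5, 0.3, 0.2]))    # -> 5
print(wds_shot_floor([0.5, 0.3, 0.001]))  # -> 801: one tiny term raises the floor
```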
To lower this floor, one can introduce randomness. With randomness, an unbiased estimator for $\avg{H}$ can be computed with as little as a single shot. Randomness, therefore, unlocks a new level of shot frugality. While this shot frugality naturally leads to noisy gradient estimates, it has been demonstrated for VQCAs that stochastic gradient descent can be effective even with highly noisy gradient estimates~\cite{kubler2019adaptive,sweke2019}.
We therefore propose a weighted random sampling (WRS) method. This sampling can be accomplished by choosing which $h_i$ terms to measure by drawing from a multinomial probability distribution where the probability $p_i$ of measuring $h_i$ is proportional to the magnitude of the coefficient $c_i$:
\begin{equation}
p_i=\frac{\abs{c_i}}{M}.
\end{equation}
Note that here $\text{E} [s_i]=p_is_{\text{tot}}$. The variance of this estimator then follows from plugging the variance and covariance of the multinomial distribution into~\eqref{eq:gen_variance}:
\begin{align}\label{eq:rand-var}
\text{Var}\left(\widehat{E}\right)=&\frac{{M}}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \avg{h_i^2}-\frac{\avg{H}^2}{s_{\text{tot}}}\nonumber \\
=&\frac{{M}}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \sigma_i^2\nonumber \\
&+\frac{{M}}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \avg{h_i}^2-\frac{\avg{H}^2}{s_{\text{tot}}}\,.
\end{align}
When allowed to take many shots, this procedure closely resembles the weighted deterministic method above, while also extending to the few-shot, high-variance regime. The price for this extension is the two additional terms in the variance, whose sum is always non-negative as it represents an expectation value for a (positive-semidefinite) covariance matrix. We note that these additional terms do not alter the ($1/s_\text{tot}$) scaling of the deterministic case.
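The WRS allocation amounts to a single multinomial draw; a minimal NumPy sketch (the seed and function name are ours) is:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def wrs_allocation(c, s_tot):
    # Weighted Random Sampling: draw the per-term shot counts s_i from a
    # multinomial with p_i = |c_i| / M, so that E[s_i] = p_i * s_tot.
    p = np.abs(c) / np.sum(np.abs(c))
    return rng.multinomial(s_tot, p)

c = np.array([0.5, -0.3, 0.2])
s = wrs_allocation(c, 1)   # even a single shot yields a valid allocation
```

Unlike the deterministic schemes, any positive shot budget, down to a single shot, produces an unbiased estimate.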
\subsubsection{Weighted Hybrid Sampling}\label{sec:weighted-hyb}
There is a middle ground between the deterministic and stochastic weighted sampling procedures listed above: we can allocate some of the shots according to each method. To this end, we divvy up the shots as follows. If $s_{\text{tot}} \ge s_{\text{floor}}$, we first assign shots according to the WDS strategy (i.e.,~\eqref{eq:weighted_alloc}) and then assign any leftover shots randomly:
\begin{equation}
s_{\text{rand}}=s_{\text{tot}}-\sum_{i=1}^N\lfloor s_\text{tot}{\abs{c_i}}/M\rfloor\, ,
\end{equation}
where $s_{\text{rand}}$ is the number of shots allocated randomly according to the WRS strategy. If instead $s_{\text{tot}} < s_{\text{floor}}$ we set $s_{\text{rand}}=s_{\text{tot}}$ and allocate all of the shots randomly. This gives the following variance:
\begin{align}\label{eq:hyb-var}
\text{Var}\left(\widehat{E}\right)=&\frac{{M}}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \sigma_i^2\nonumber \\
&+\frac{s_{\text{rand}}M}{s_{\text{tot}}^2}\sum_{i=1}^N |c_i| \avg{h_i}^2-\frac{s_{\text{rand}}\avg{H}^2}{s_{\text{tot}}^2}\,.
\end{align}
This formula follows from~\eqref{eq:weighted-var} and~\eqref{eq:rand-var} by standard properties of the variance of the sum of two independent random variables. We note that this variance is no smaller than~\eqref{eq:weighted-var}. We will refer to this strategy as Weighted Hybrid Sampling (WHS).
Since $s_{\text{rand}}$ is bounded above, the two terms added to the WDS formula scale as $1/s_{\text{tot}}^2$, and thus will not contribute significantly to the variance at high numbers of shots. This method is therefore well suited to both the low- and high-shot-count regimes, making it very useful in the context of optimization methods.
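Combining the two pieces above gives a sketch of the WHS allocation (function names ours; the branch condition mirrors~\eqref{eq:ShotFloor}):

```python
import math
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def whs_allocation(c, s_tot):
    # Weighted Hybrid Sampling: deterministic WDS floors when affordable,
    # with the leftover shots distributed by a multinomial draw.
    M = sum(abs(ci) for ci in c)
    p = np.abs(c) / M
    s_floor = math.ceil(M / min(abs(ci) for ci in c))
    if s_tot >= s_floor:
        s = np.floor(s_tot * p).astype(int)   # WDS part
    else:
        s = np.zeros(len(c), dtype=int)       # below the floor: all random
    s_rand = s_tot - s.sum()
    return s + rng.multinomial(s_rand, p)
```

Unlike pure WDS, the total number of shots used here is always exactly $s_\text{tot}$.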
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{Variances.pdf}
\caption{Variances of the estimator $\widehat{E}$ with the different sampling strategies for the low energy states of different Hamiltonians. Panel \textbf{a} shows the results for a low energy state for H$_2$ generated by optimizing the angles in the ansatz in Fig.~\ref{fig:ansatz} with $D=1$. Panel \textbf{b} shows the same but for LiH and with $D=2$. Panel \textbf{c} again shows the same but for BeH$_2$ and with $D=2$.}
\label{fig:Var}
\end{figure*}
\subsubsection{Randomly Selecting Only One Term}\label{sec:single-rand}
A different method of randomly sampling was recently proposed where instead of distributing the shots at random, one randomly selects a single $h_i$ and uses all of the shots to measure that $h_i$~\cite{sweke2019}. As shown therein, this method provides an unbiased estimator for $\avg{H}$. In our notation, the estimator for this method is given by
\begin{equation}
\label{eq:single_expectation}
\widehat{E} = \frac{c_i}{s_\text{tot}}\sum_{j=1}^{s_\text{tot}} r_{ij}\,,
\end{equation}
for some $i\in \{1,2,\dots,N\}$ chosen with probability $p_i$. Though the authors of~\cite{sweke2019} focused on the uniform case where $p_i=1/N$, they also comment that one could use a weighted approach like the one used in the WRS and set $p_i=\abs{c_i}/M$.
If one uses this weighted approach, which we will refer to as Weighted Single Sampling (WSS), the variance of their estimator is
\begin{align}\label{eq:single-var}
\text{Var}\left(\widehat{E}\right)=&\frac{M}{s_{\text{tot}}}\sum_{i=1}^N |c_i| \sigma_i^2\nonumber \\
&+M\sum_{i=1}^N |c_i| \avg{h_i}^2-\avg{H}^2\,.
\end{align}
As with the variance of WRS, this follows from plugging in the variance of the multinomial distribution into~\eqref{eq:gen_variance} but differs as we only take a single draw.
From comparing~\eqref{eq:rand-var} and~\eqref{eq:single-var}, one can see that measuring only one operator adds a floor to the variance as the additional terms no longer scale as ($1/s_\text{tot}$). As mentioned in~\cite{sweke2019}, this method can be extended to distributing the shots among a subset of the $h_i$'s which improves the situation but cannot recover the $1/s_\text{tot}$ scaling unless we include all $h_i$'s.
\begin{figure}[]
\centering
\includegraphics[width=\columnwidth]{ansatz.pdf}
\caption{Structure of the quantum circuit ansatz used in our numerics. The block of gates inside the curly braces is repeated $D$ times to provide different depth ansatzes. Each $U$ gate represents a general single qubit unitary and is the composition of a z-rotation, a y-rotation, and a z-rotation. Each angle in these rotations is varied independently.
\label{fig:ansatz}
\end{figure}
\subsubsection{Numerical Comparison of Variances}\label{sec:numerical}
As the variance from any sampling strategy will depend both on the state and the Hamiltonian, we now consider the variance for states of interest for quantum chemistry. Specifically, we now compare them numerically for the low energy states found at the end of a VQE procedure in Fig.~\ref{fig:Var}. We employ the ansatz structure described in Fig.~\ref{fig:ansatz} with the Hamiltonians for H$_2$, LiH, and BeH$_2$ used in \cite{kandala2017}.
As shown in Fig.~\ref{fig:Var}, for all cases, WDS is the best at high values of $s_\text{tot}$ but does not come into play until we are allocating many shots, especially for LiH and BeH$_2$. WRS and WHS are identical for small $s_\text{tot}$, and perform the best there. Once WDS becomes relevant, the variance of WHS jumps to meet it and then stays close, showing an advantage over WRS. For all cases, WRS, WHS, and WDS (when relevant) give smaller variances than UDS, though we note that UDS becomes possible at fewer shots than WDS. Due to the variance floor of WSS, it is typically not competitive with the other strategies.
\subsection{The Rosalin Method}\label{sec:Rosalin}
In order to formulate an optimizer geared towards chemistry applications, we combine the shot-frugal optimizer iCANS~\cite{kubler2019adaptive} (in particular the more aggressive variant referred to as iCANS1 in that paper) with the WRS and WHS strategies described above. We refer to the resulting method as Rosalin (Random Operator Sampling for Adaptive Learning with Individual Number of shots). We present the random and hybrid operator sampling methods in Algorithm \ref{alg:expectation}, and review the iCANS method in Algorithm \ref{alg:iCANS} in Sec.~\ref{sec:iCANS}. Together, these methods compose the Rosalin approach to VQE and other VQCA problems. We refer to the WRS version of Rosalin as Rosalin1 and the WHS version as Rosalin2.
\begin{figure}[!t]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\let\oldReturn\Return
\renewcommand{\Return}{\State\oldReturn}
\Procedure{$Estimate\_H$}{$\boldsymbol{\theta}, s_\text{tot},\{c_i\},\{h_i\}$}
\State initialize: $\vec{\widehat{E}} \gets (0,...,0)^T$, $\ell \gets 0$, $\vec{s} \gets (0 ,... ,0)^T$
\For{$i \in [1,...,N]$}
\State $p_i \gets\frac{|c_i|}{\sum_{i'}|c_{i'}|}$
\EndFor
\If{$Hybrid$ and $\lfloor \min_i( p_is_\text{tot})\rfloor >0$}
\For{$i \in [1,...,N]$}
\State $s_i \gets \lfloor p_i s_\text{tot}\rfloor$
\EndFor
\State $s_{\text{det}} \gets \sum_i \vec{s}_i$
\Else
\State $s_{\text{det}} \gets 0$
\EndIf
\State $s_{\text{rand}} \gets s_\text{tot}-s_{\text{det}}$
\State $\vec{m} \sim \text{Multinomial}(s_{\text{rand}},\vec{p})$
\For{$j \in [1,...,s_{\text{rand}}]$}
\State $s_{m_j} \gets s_{m_j} + 1$
\EndFor
\For{$i \in [1,...,N]$}
\For{$j \in [1,...,s_i]$}
\State $r \gets Measure(\boldsymbol{\theta},h_i)$
\State $\ell \gets \ell +1$
\State $\widehat{E}_\ell \gets c_i r/p_i$
\EndFor
\EndFor
\Return $\vec{\widehat{E}}$
\EndProcedure
\end{algorithmic}
\caption{\justified{The function $Estimate\_H$ which, given a parameter vector $\vec{\theta}$, a shot budget $s_\text{tot}$, and the sets of coefficients and operators, $\{c_i\}$ and $\{h_i\}$, returns a vector of single-shot estimates ($\vec{\widehat{E}}$) of $\langle H \rangle$ for Rosalin using either the WHS or the WRS strategy, depending on the Boolean $Hybrid$ flag. The function $Measure(\boldsymbol{\theta},h_i)$ represents a measurement of the operator $h_i$ on a state prepared by the circuit parametrized by $\boldsymbol{\theta}$.}}
\label{alg:expectation}
\end{algorithm}
\end{figure}
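For concreteness, Algorithm~\ref{alg:expectation} can be transcribed to Python roughly as below; the argument named \texttt{measure} here is a stand-in for the single-shot device call $Measure(\boldsymbol{\theta},h_i)$ and must be supplied by the caller:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_H(theta, s_tot, c, h, measure, hybrid=True):
    # Rough transcription of Estimate_H: allocate shots (WHS if hybrid and
    # affordable, otherwise WRS), then return single-shot estimates c_i*r/p_i.
    c = np.asarray(c, dtype=float)
    p = np.abs(c) / np.sum(np.abs(c))
    s = np.zeros(len(c), dtype=int)
    if hybrid and np.floor(p.min() * s_tot) > 0:
        s = np.floor(p * s_tot).astype(int)       # deterministic part
    s_rand = s_tot - s.sum()
    s = s + rng.multinomial(s_rand, p)            # leftover shots at random
    estimates = []
    for i in range(len(c)):
        for _ in range(s[i]):
            r = measure(theta, h[i])
            estimates.append(c[i] * r / p[i])
    return np.array(estimates)
```

Averaging the returned vector gives $\widehat{E}$; keeping the per-shot entries separate is what lets the optimizer also estimate the variance.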
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{H2.pdf}
\caption{Average energy above the ground state ($\Delta E$) for the H$_2$ molecular Hamiltonian as a function of the total number of shots ($N_s$) expended during the optimization procedure. The energy is calculated exactly using the parameters found with the stochastic optimization. Each of the 4 Pauli product operators in the Hamiltonian description were measured separately. Panels \textbf{a} and \textbf{b} show the results for optimizing without and with machine noise (respectively). Both cases were optimized with the ansatz in Fig.~\ref{fig:ansatz} with $D=1$.}
\label{fig:H2}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=2\columnwidth]{LiH.pdf}
\caption{Average energy above the ground state ($\Delta E$) for the LiH molecular Hamiltonian as a function of the total number of shots ($N_s$) expended during the optimization procedure. The energy is calculated exactly using the parameters found with the stochastic optimization. Each of the 99 Pauli product operators in the Hamiltonian description were measured separately. Panels \textbf{a} and \textbf{b} show the results for optimizing without and with machine noise (respectively). Both cases were optimized with the ansatz in Fig.~\ref{fig:ansatz} with $D=2$.}
\label{fig:LiH}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=2\columnwidth]{BeH2.pdf}
\caption{Average energy above the ground state ($\Delta E$) for the BeH$_2$ molecular Hamiltonian as a function of the total number of shots ($N_s$) expended during the optimization procedure. The energy is calculated exactly using the parameters found with the stochastic optimization. Panel \textbf{a} shows the results while measuring each of the 164 Pauli product operator separately while panel \textbf{b} shows the results when simultaneously measuring operators within the 44 commuting subsets of Pauli product operators. Both cases were optimized with the ansatz in Fig.~\ref{fig:ansatz} with $D=2$.}
\label{fig:BeH2}
\end{figure*}
\subsection{Numerical Optimization Comparison} \label{sec:Implementations}
We now compare the performance of different sampling strategies with two optimizers on the optimization of the molecular Hamiltonians used in \cite{kandala2017}: H$_2$, LiH, and BeH$_2$. For each Hamiltonian, we consider eight different approaches arising from using either iCANS or Adam with four different sampling strategies: WRS, WHS, WDS, and UDS.
In Fig.~\ref{fig:H2} and Fig.~\ref{fig:LiH} we show the average performance of these optimization strategies for the H$_2$ and LiH molecular VQE problems versus the total number of shots expended. We perform this comparison both with and without simulated hardware error (both versions are limited by the finite statistics). The noise model used here was based on the noise profile of IBM's Melbourne processor~\cite{IBMQ14} as retrieved by Qiskit function calls~\cite{gadi_aleksandrowicz_2019_2562111}. In Fig.~\ref{fig:BeH2} we instead compare the performance of these optimization strategies for minimizing the energy of BeH$_2$ both with considering each operator separately (as in the H$_2$ and LiH cases above) and using the commuting subsets chosen in~\cite{kandala2017}. All instances using the Adam optimizer were given the larger of one hundred shots or the method's shot floor for the given Hamiltonian for each expectation value estimated.
\section{Discussion}
\label{sec:Discussion}
In order to achieve practical applications of VQCAs, we will need optimization strategies that can scale well as we consider larger problem sizes. In particular, applying VQE to chemical applications will have to contend with Hamiltonians that are the sum over many directly measurable operators. While simultaneously measuring commuting subsets of these operators can help reduce this difficulty, it is only one part of the answer.
With this challenge in mind, we have introduced the Rosalin method which combines random operator sampling with the iCANS optimizer to achieve greater shot frugality. iCANS is a recently introduced optimizer that attempts to be shot-frugal by dynamically adapting the number of shots (and thus precision) used to estimate each gradient component as part of a stochastic gradient descent~\cite{kubler2019adaptive}. By combining iCANS and the random or hybrid sampling strategies (WRS or WHS, respectively), Rosalin has the ability to make gradient update steps that are very inexpensive (i.e., use few shots) early in the training process when less precision is needed, even for large and complicated Hamiltonians. Additionally, since our random and hybrid sampling schemes retain the standard $1/s_\text{tot}$ scaling, Rosalin is able to directly increase the precision it uses as needed during the optimization procedure.
The analytical results of Sec.~\ref{sec:analytical} show how the usual $1/s_{\text{tot}}$ scaling of the variance can be achieved by random sampling strategies, while also allowing for unbiased estimators with far fewer resources. Additionally, they show that such a random sampling strategy should allocate samples across all of the operators $h_i$ (rather than singling out an individual operator) in order to avoid introducing a precision floor. These results about our random and hybrid sampling strategies are central to the shot frugality advantage that we find with Rosalin.
The simulated optimization procedures of Sec.~\ref{sec:Implementations} show that as the size and complexity of the Hamiltonians being considered increases, random sampling procedures such as Rosalin offer greater improvements in the efficiency of performing an optimization. Specifically, for the case of H$_2$ where we had a Hamiltonian with only $4$ terms Fig.~\ref{fig:H2} does not show much of a difference between the sampling strategies. However, once we move on to LiH with $99$ terms we see a marked difference for both the noisy and noiseless optimization in Fig.~\ref{fig:LiH}. This difference is even more pronounced for the larger molecule BeH$_2$ in Fig.~\ref{fig:BeH2}. Additionally, for this molecule, we find that Rosalin achieves an advantage over the other methods considered even when we work with the $44$ commuting subsets (Fig.~\ref{fig:BeH2}\textbf{b}) rather than measuring the full $164$ terms individually (Fig.~\ref{fig:BeH2}\textbf{a}).
We note that while our results have concerned unbiased estimators, there may sometimes be a case for using a biased estimator. We focus on unbiased estimators as stochastic optimization methods using a biased estimator would generically be expected to converge to a distribution of parameter values that is not centered about the true minimum. However, if our goal is to compute the energy of a ground state to a fixed accuracy rather than trying to find the true optimal state parameters, we might choose to exclude a set of terms in the Hamiltonian chosen to keep the total energy bias small enough that it is negligible compared to the desired accuracy. The random and hybrid methods we propose here would apply as naturally to such a biasing truncation of a Hamiltonian as to the full Hamiltonian.
Finally, we remark that while Rosalin is an optimization strategy intended for VQCAs, the random and hybrid sampling strategies we propose would also provide a way to potentially achieve less computationally expensive estimates of expectation values for non-variational methods as well. Hence, our work has relevance to traditional quantum algorithms, even in the fault-tolerant quantum computing era.
\section{Methods}
\subsection{Proof of Prop.~\ref{prop1}}\label{App:bias}
In~\eqref{eq:gen_expectation} we introduced the following expression to estimate $\avg{H}$ with $s_\text{tot}$ shots:
\begin{equation}
\label{eq:gen_exp_form}
\widehat{E} = \sum_{i=1}^N c_i \frac{1}{\text{E}[s_i]}\sum_{j=1}^{s_i} r_{ij},
\end{equation}
where $r_{ij}$ is the measurement outcome of the $j$-th measurement of $h_i$ and $s_i$ is the (possibly random) number of shots allocated to the measurement of $h_i$, and we assume $\text{E} [s_i]>0$ for all $i$. We now show that $\text{E}[\widehat{E}]=\left\langle H \right\rangle$ (Prop.~\ref{prop1}). We have:
\begin{align}
\text{E}[\widehat{E}] =&\text{E}\Bigg[ \sum_{i=1}^N c_i \frac{1}{\text{E}[s_i]}\sum_{j=1}^{s_i} r_{ij}\Bigg]\nonumber\\
=& \sum_{i=1}^N c_i \frac{1}{\text{E}[s_i]}\text{E}\Bigg[\sum_{j=1}^{s_i} r_{ij}\Bigg]\nonumber\\
=& \sum_{i=1}^N c_i \frac{\text{E}[s_i]}{\text{E}[s_i]}\avg{h_i}\nonumber\\
=&\avg{H}.
\end{align}
Thus we have shown that if $\text{E} [s_i]>0$ for all $i$, then $\text{E}[\widehat{E}]=\left\langle H \right\rangle$.
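This unbiasedness can also be verified numerically in a minimal setting: for WRS with a single shot, exact enumeration over which term is drawn and which outcome is measured recovers $\avg{H}$. The sketch below is ours and assumes each $h_i$ has outcomes $\pm 1$ with mean $\avg{h_i}$:

```python
def exact_mean_wrs_one_shot(c, hexp):
    # E[E_hat] for WRS with s_tot = 1: the shot lands on term i with
    # probability p_i = |c_i|/M and yields r = +/-1 with probabilities
    # (1 +/- <h_i>)/2; the corresponding estimate is c_i * r / p_i.
    M = sum(abs(ci) for ci in c)
    total = 0.0
    for ci, hi in zip(c, hexp):
        p_i = abs(ci) / M
        for r in (+1, -1):
            p_r = (1 + r * hi) / 2
            total += p_i * p_r * (ci * r / p_i)
    return total

c, hexp = [0.5, -0.3, 0.2], [0.8, -0.1, 0.4]
# equals sum_i c_i <h_i> = 0.51, up to floating-point rounding
print(exact_mean_wrs_one_shot(c, hexp))
```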
\subsection{Proof of Prop.~\ref{prop2}}\label{App:Var}
We now derive the general variance formula~\eqref{eq:gen_variance} to prove Prop.~\ref{prop2}. We have
\begin{align}
{\rm Var}(\widehat{E}) =& \sum_{i, i'=1}^N\frac{c_ic_{i'}}{{\rm E}[s_i]{\rm E}[s_{i'}]}{\rm E} [\sum_{j}^{s_i}\sum_{j'}^{s_{i'}} r_{ij}r_{i'j'}] - \langle H \rangle^2 \nonumber\\
=&\sum_{i, i'=1,i \ne i'}^N\frac{c_ic_{i'}\langle h_i \rangle \langle h_{i'} \rangle }{{\rm E}[s_i]{\rm E}[s_{i'}]}{\rm E}[s_i s_{i'}]\nonumber\\
&+ \sum_{i=1}^N\frac{(c_i)^2 \langle h_i \rangle^2 }{{\rm E}^2[s_i]}{\rm E}[s_i (s_{i}-1)] \nonumber\\
& + \sum_{i=1}^N\frac{(c_i)^2 \langle h_i^2 \rangle }{{\rm E}^2[s_i]}{\rm E}[s_i] - \langle H \rangle^2 \nonumber\\
=& \sum_{i=1}^N \frac {c_i^2}{{\rm E}[s_{i}]} \sigma_i^2 +\sum_{i, i'=1}^N\frac{c_ic_{i'}\langle h_i \rangle \langle h_{i'} \rangle }{{\rm E}[s_i]{\rm E}[s_{i'}]}{\rm E}[s_i s_{i'}]\nonumber\\
&- \langle H \rangle^2 \nonumber\\
=&\sum_{i=1}^N \frac {c_i^2}{{\rm E}[s_{i}]} \sigma_i^2 +\sum_{i, i'=1}^N\frac{c_ic_{i'}\langle h_i \rangle \langle h_{i'} \rangle }{{\rm E}[s_i]{\rm E}[s_{i'}]}{\rm Cov}[s_i,s_{i'}] \;,
\end{align}
which is~\eqref{eq:gen_variance}.
\subsection{Remark on Using Prior Variance Information}\label{sec:prior_var}
If one approximately knows the $\sigma_i$'s ahead of time for a quantum state of interest, the proportioning of shots in both the deterministic and stochastic sampling methods can be optimized further to decrease the variance of the estimate~\cite{rubin2018application}. If one has such information available, optimal deterministic distribution of shots is
\begin{equation}
\label{eq:weighted_alloc_prior}
s_i=\frac{s_{\text{tot}}\abs{c_i}\sigma_i}{\sum^N_{i'=1}|c_{i'}|\sigma_{i'}}.
\end{equation}
(As in the other deterministic cases discussed, this may not result in an integer $s_i$, in which case one could take the floor of this expression.)
Note that for this estimator to remain unbiased we must demand that at least one shot is allocated to each operator, meaning that this prescription requires some form of regularization when $\sigma_i\to 0$. However, including such prior information makes the variance of the estimator
\begin{equation}\label{eq:weighted-var-with-info}
\text{Var}\left(\widehat{E}\right)=\frac{(\sum^N_{i'=1}|c_{i'}|\sigma_{i'})^2}{s_{\text{tot}}},
\end{equation}
which is optimal~\cite{rubin2018application}.
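A sketch of this prior-information allocation and its variance (function names ours) also illustrates why the regularization mentioned above is needed: a term with $\sigma_i = 0$ receives no shots at all.

```python
def prior_allocation(c, sigma, s_tot):
    # Variance-optimal deterministic split with known sigma_i:
    # s_i proportional to |c_i| * sigma_i (before any flooring).
    W = sum(abs(ci) * si for ci, si in zip(c, sigma))
    return [s_tot * abs(ci) * si / W for ci, si in zip(c, sigma)]

def prior_variance(c, sigma, s_tot):
    # Resulting estimator variance, (sum_i |c_i| sigma_i)^2 / s_tot.
    W = sum(abs(ci) * si for ci, si in zip(c, sigma))
    return W * W / s_tot

# With equal sigma_i this reduces to the WDS proportions:
print(prior_allocation([0.5, -0.3, 0.2], [1.0, 1.0, 1.0], 100))
# A sigma_i of zero starves that term of shots entirely:
print(prior_allocation([0.5, 0.5], [1.0, 0.0], 10))  # -> [10.0, 0.0]
```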
In the random approach, if we know the $\sigma_i$'s we can instead set the probabilities to be
\begin{equation}
p_i=\frac{\abs{c_i}\sigma_i}{\sum_{i'}\abs{c_{i'}}\sigma_{i'}}\,,
\end{equation}
which gives a variance of
\begin{equation}\label{eq:rand-var-with-info}
\text{Var}\left(\widehat{E}\right)=\frac{\sum^N_{i'=1}|c_{i'}|\sigma_{i'}}{s_{\text{tot}}}\sum_{i=1}^N |c_i|\frac{ \avg{h_i^2}}{\sigma_i}-\frac{\avg{H}^2}{s_{\text{tot}}}\,.
\end{equation}
Note that this variance diverges when at least one $\sigma_i$ becomes small, meaning that this method is unstable without regularizing the expression with an effective lower bound on the $\sigma_i$'s.
Additionally, we empirically find that, during a minimization procedure, using the $\sigma_i$'s from the previous iteration to guide the shot allocation for the next iteration performs poorly. Therefore, though such information may be helpful when accurately determining an expectation value (perhaps after an optimization procedure), we do not incorporate it into Rosalin.
\subsection{The Rosalin Optimizer}\label{sec:iCANS}
Algorithm \ref{alg:iCANS} is described below and shows the version of iCANS~\cite{kubler2019adaptive} adapted to Rosalin. We now make a few remarks about the hyperparameters of Rosalin. These include the Lipschitz constant $L$, the maximum learning rate $\alpha$, the running average constant $\mu$, the minimum number of shots per energy estimation $s_{\text{min}}$, and the bias $b$.
$L$ is a bound on the largest derivative of the energy landscape and is thus set by the problem Hamiltonian. We recommend setting $L = M$, as suggested in \cite{kubler2019adaptive}. $L$ also bounds the learning rates that can be used, as the iCANS formalism requires that $0<\alpha<2/L$.
The running average constant $\mu$ is bounded between zero and one. Unlike in other methods with running averages, $\mu$ is only used to control how quickly the number of shots recommended for each gradient component changes. With this in mind, $\mu$ can be set close to one in order to get a smooth increase in the number of shots without directly influencing the parameter update step. We also note that $s_{\text{min}}$ cannot be lower than $2$ for the variance $\vec{S}_\ell$ to be well defined (see line 22 of Algorithm \ref{alg:iCANS}). Finally, the bias $b$ is introduced to act as a regularizer and thus should be positive but small compared to the expected size of the variances.
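As an illustration, the shot-recommendation rule of Algorithm~\ref{alg:iCANS}, together with a hypothetical hyperparameter choice respecting the constraints above, can be written as follows (all numerical values here are examples, not tuned recommendations):

```python
import math

def recommended_shots(xi, chi, L, alpha, b, mu, k):
    # Shot recommendation from Algorithm 2:
    # s = ceil( 2*L*alpha/(2 - L*alpha) * xi / (chi^2 + b*mu^k) ),
    # where xi and chi are the running averages of a gradient
    # component's variance and value, respectively.
    return math.ceil((2 * L * alpha / (2 - L * alpha)) * xi / (chi**2 + b * mu**k))

# Hypothetical settings respecting the stated constraints:
L = 1.0            # set to M, the 1-norm of the Hamiltonian coefficients
alpha = 1.0 / L    # must satisfy 0 < alpha < 2/L
mu = 0.99          # running-average constant, close to 1 for smooth growth
s_min = 2          # minimum shots, needed for the variance estimate
b = 1e-6           # small positive regularizer

print(recommended_shots(1.0, 1.0, L, alpha, b, mu, 0))   # -> 2
print(recommended_shots(1.0, 0.1, L, alpha, b, mu, 0))   # -> 200: small gradient, more shots
```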
\begin{figure}[ht]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\let\oldReturn\Return
\renewcommand{\Return}{\State\oldReturn}
\Statex \textbf{Input:} Learning rate $\alpha$, starting point $\boldsymbol{\theta}_0$, min number of shots per estimation $s_{\min}$, number of shots that can be used in total $M$, Lipschitz constant $L$, running average constant $\mu$, bias for gradient norm $b$, and Hamiltonian expressed as a list of coefficients $c_i$ and operators $h_i$.
\State initialize: $\boldsymbol{\theta} \gets \boldsymbol{\theta}_0 $, $s_{\text{used}} \gets 0$,
$\vec{s} \gets (s_{\min} ,... ,s_{\min})^T$, $\vec{\chi} \gets (0,...,0)^T$,
$\vec{\xi} \gets (0,...,0)^T$, $k\gets 0$
\While{$s_{\text{used}} < M$}
\State $s_{\text{used}} \gets s_{\text{used}} + 2 \sum_\ell s_\ell$
\For{$ \ell \in [1,...,d]$}
\State $g_\ell, S_\ell \gets iEvaluate(\boldsymbol{\theta}, s_\ell,\ell,\{c_i\},\{h_i\})$
\State $\xi_\ell' \gets \mu \xi_\ell' + (1-\mu) S_\ell$
\State $\chi_\ell' \gets \mu \chi_\ell' + (1-\mu) g_\ell$
\State $\xi_\ell \gets \xi_\ell'/(1-\mu^{k+1})$
\State $\chi_\ell \gets \chi_\ell'/(1-\mu^{k+1})$
\State $\theta_\ell \gets \theta_\ell - \alpha g_\ell$
\State $s_\ell \gets \left\lceil\frac{2L\alpha}{2-L\alpha} \frac{\xi_\ell}{\chi_\ell^2 + b \mu^k}\right\rceil$
\State $\gamma_\ell \gets \frac{1}{s_\ell} \left[\left(\alpha - \frac{L\alpha^2}{2}\right) \chi_\ell^2 - \frac{L \alpha^2}{2 s_\ell} \xi_\ell\right]$
\EndFor
\State $s_{\max} \gets s_{\argmax_\ell \gamma_\ell}$
\State $\vec{s} \gets clip(\vec{s}, s_\text{min} , s_\text{max} )$
\State $k\gets k + 1$
\EndWhile
\Procedure{$iEvaluate$}{$\boldsymbol{\theta}, s_\text{tot},\ell,\{c_i\},\{h_i\}$}
\State $\vec{\widehat{E}}^+\gets Estimate\_H(\boldsymbol{\theta}+ \frac{\pi}{2}\hat{e}_\ell,s_\text{tot},\{c_i\},\{h_i\})$
\State $\vec{\widehat{E}}^-\gets Estimate\_H(\boldsymbol{\theta}- \frac{\pi}{2}\hat{e}_\ell,s_\text{tot},\{c_i\},\{h_i\})$
\State $g_\ell \gets \sum_{j=1}^{s_\text{tot}} (\widehat{E}^+_j-\widehat{E}^-_j)/(2s_\text{tot})$
\State $S_\ell \gets \sum_{j=1}^{s_\text{tot}} [((\widehat{E}^+_j-\widehat{E}^-_j)/(2))^2-\vec{g_\ell}^2]/(s_\text{tot}-1)$
\Return $g_\ell,S_\ell$
\EndProcedure
\end{algorithmic}
\caption{\justified{The optimization loop for Rosalin. The function $Estimate\_H$ which returns a vector containing single-shot estimates of $\langle H \rangle$ for Rosalin is described in Algorithm~\ref{alg:expectation}. }}
\label{alg:iCANS}
\end{algorithm}
\end{figure}
\begin{acknowledgements}
AA, LC, RDS, and PJC acknowledge support from the LDRD program at LANL. PJC also acknowledges support from the LANL ASC Beyond Moore's Law project. LC, RDS, and PJC were also supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program. Los Alamos National Laboratory is managed by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. 89233218CNA000001.
\end{acknowledgements}
\label{sec:intro}
Formation of multiple systems is an active research topic related to
such areas as stellar mass function, disks, and initial conditions for
planet formation. Although large-scale numerical simulations
give a reasonable match to the observed statistics \citep{Bate2012}, we
are still far from modeling the distributions of multiple-star
parameters in a predictive way. The relevant physics is identified,
but the correct mix of processes that define multiple-star population
is yet to be found.
Stochastic dynamics of small $N$-body systems is one such process.
Recently \citet{RM12} suggested that wide binaries are mostly formed
by ejections from smaller $N$-body aggregates. It is well known that
in chaotic dynamics the smallest-mass stars are ejected most readily.
We therefore would expect wide companions to be preferentially of low
mass, single, and on eccentric orbits. Another physical process --
rotationally-driven fragmentation -- makes the opposite prediction
\citep{Delgado2004}. In this case, wide binaries contain a large
fraction of the collapsing cloud's angular momentum. Their orbits
should have moderate eccentricity, the masses of both wide components
should be comparable, and the incidence of subsystems in the primary
and secondary components should be similar.
In this paper we focus on multiplicity of secondary components of wide
binaries, so far poorly characterized. The primary components of those
binaries are nearby solar-type dwarfs; their multiplicity is
constrained by combination of methods, from radial velocities (RVs)
for close pairs to imaging and wide common-proper-motion (CPM)
binaries, spanning the full range of periods \citep{FG67a}. In
contrast, the fraction of subsystems in the secondary components could
be estimated with a considerable uncertainty. Yet, this is an
important diagnostic of the formation processes. If secondary
components were ejected on their wide orbits by dynamical interplay in
compact and unstable nascent multiple systems, they should be
preferentially single.
In recent years, the binarity of secondary components has been probed
by several techniques. Adaptive optics (AO) imaging was used at
Gemini-S \citep{Tok2010}, Palomar 1.5-m Robo-AO \citep{Law2010,RAO},
and SOAR \citep{Tok2014}; RVs were monitored by \citet{CHIRON}. This
project explores the binarity of secondary components by means of
speckle interferometry at the 8-m Gemini-N telescope. Compared to the
Robo-AO, a 7-fold increase in the angular resolution gives access to
much shorter periods, exploring a larger part of the parameter space
and overlapping with the RV surveys.
In Section~2 we describe the instrument, observations, and data
reduction. Newly resolved binaries and detection limits are presented
in Section 3. In Section 4 we analyze one of the newly discovered
subsystems and determine its orbit using archival measurements of the
wide outer binary. Section 5 concludes the paper.
\begin{figure}
\epsscale{1.1}
\plotone{fig1.eps}
\caption{Negative images of newly resolved subsystems at 880\,nm. The
scale and intensity stretch are adjusted individually to highlight
the companions. The numbers in the lower left corner of each image
are angular separations. The first two resolutions (HIP 6268B
and HIP 12764B) are tentative.
\label{fig:mosaic} }
\end{figure}
\begin{figure*}
\epsscale{1.1}
\plotone{fig2.eps}
\caption{Left panel (a): cuts in the final fitted
power spectrum perpendicular to the fringe direction for the
marginal detection of HIP 6268 Ba,Bb at 880\,nm. Three cuts are
shown, each of which has a closest approach to the origin of the
uv plane of 5, 10, and 15 cycles per arcsecond as shown (i.e.
this value is measured parallel to the fringe direction). The
fitted binary model is plotted with the colored curves. A similar
analysis for a point source, where no fit is presented, is shown in
the right panel (b). This gives a sense of how flat a typical point
source observation is in the frequency domain and how significant
the binary signal is. \label{fig:fringes} }
\end{figure*}
\section{Observations and data reduction}
\label{sec:obs}
Observing time for this work has been granted through NOAO (program
15A-0087) for a total of 7 hours, with a low Band 3 priority. This
project was a ``filler'' for more challenging observations, as it
could be executed in less than optimum conditions. The observations
at Gemini-N were conducted by E.H., Mark Everett (NOAO), Steve Howell
(NASA-Ames), Johanna Teske (DTM and Carnegie Observatories), and Lea
Hirsch (UC Berkeley), between 2015 July 11 and 19. During this run,
the sky was clear for about 2/3 of the time, allowing us to use 4.7 hours
for this program, part of it through the clouds.
The target list was based on the 67-pc sample of solar-type stars
\citep{FG67a}. Known binaries with separations from 0\farcs5 to
2\farcs8 that had no prior high-resolution data from AO or speckle
instruments were selected. Such binaries fit in the camera field of
view, allowing us to check both components for the existence of close
subsystems. In addition, secondary components of wider binaries with
separations from 3\arcsec ~to 10\arcsec ~were targeted separately.
The program contained 63 targets; useful data could be obtained for 39
of them. In some cases, the brighter component A was pointed instead
of the intended secondary B, either by error or because the extinction
was too great to successfully observe the secondary while the primary
was still visible.
The Differential Speckle Survey Instrument, DSSI, is described by
\citet{Horch2009}. Speckle images of the observed star are recorded by
two electron multiplication CCDs simultaneously in two spectral bands,
the light being divided by a dichroic. At Gemini-N, the DSSI worked
with the filters that transmit central wavelengths and bandwidths of
692/40 and 880/50 nm; for brevity they are called here bands $R$ and
$I$. Previous publications resulting from the DSSI at Gemini-N, e.g.
\citep{Horch2012,Horch2015a}, contain additional details. In a
typical observing sequence, 1000 frames with a 60-ms exposure and a
256$\times$256 size are recorded in both DSSI channels
simultaneously. Observations of single unresolved reference stars are
taken at low airmass and used for modeling instrument signatures and
atmospheric dispersion during data reduction. The data cubes are
processed by the standard speckle technique (calculation of power
spectrum and autocorrelation) and by the speckle image reconstruction
delivering true images \citep[see further details
in][]{Horch2009,Horch2015b}. Figure~\ref{fig:fringes} illustrates
the resolution of a very close binary HIP~6268B.
The pixel scale of 11.41\,mas and the orientation of both detectors
were calibrated as described by \citet{Horch2012}. We observed two
binaries, HIP~83838 = HU~1176 and HIP~104858 = STT~535, for which
extremely accurate interferometric orbits are available
\citep{Muterspaugh2008,Muterspaugh2010}. After correcting the optical
distortion in the reflective channel of the DSSI \citep{Horch2011}, we
found astrometry in the two channels to be in excellent mutual
agreement at a level of $\sim$2\,mas for most pairs, in line with the
previous analysis of Gemini-N speckle data by \citet{Horch2012};
larger discrepancies are present only for pairs near the limit of the
technique.
Although the detectors used with DSSI have a 512$\times$512
pixel format, the speckle frames were sub-arrays of 256$\times$256
pixels centered on the target, so the field of view of the recorded
images was 2\farcs8$\times$2\farcs8. This allows detection of companions
with separations up to 1\farcs45. Some wider binaries (up to about
2\arcsec ~separation) were measured by placing both components in the
field, i.e. centering the sub-array in between the two sources.
\begin{figure}
\epsscale{1.1}
\plotone{fig3.ps}
\caption{Detection limits in the $I$ channel of DSSI. The full line is
the average detection curve, the dotted lines are the best and worst
detection limits. Squares denote the actually measured binaries.
\label{fig:det} }
\end{figure}
\section{Results}
\label{sec:res}
\subsection{Measurements}
Binary companions, either known or newly discovered, were identified
visually in the reconstructed images (Figure~\ref{fig:mosaic}).
Relative astrometry and photometry of binary stars are derived by
approximating the speckle power spectrum, as described in the previous
DSSI publications \citep{Horch2009,Horch2012}. The detection
limits are estimated by computing rms fluctuations $\sigma$ in annular
zones of the reconstructed image and assuming that a companion above
$5 \sigma$ would be detectable, see \citep{Horch2011}.
Figure~\ref{fig:det} illustrates the detection limits in the $I$
channel.
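The annular-rms procedure described above can be sketched as follows. This is our schematic illustration of the $5\sigma$ criterion, not the actual DSSI reduction code; the annulus width \texttt{dr} and the minimum pixel count per annulus are arbitrary choices, and the input image is assumed to be normalized so that the stellar peak equals 1.

```python
import math
import statistics

def detection_limits(image, cy, cx, dr=2.0, nsigma=5.0):
    """Delta-m detection limit vs. separation from rms fluctuations in
    annular zones, assuming a companion above nsigma*rms is detectable.
    `image` is a 2-D list normalized so the stellar peak equals 1."""
    ny, nx = len(image), len(image[0])
    annuli = {}
    for y in range(ny):
        for x in range(nx):
            r = math.hypot(x - cx, y - cy)
            annuli.setdefault(int(r // dr), []).append(image[y][x])
    limits = []
    for k in sorted(annuli):
        vals = annuli[k]
        if len(vals) < 10:        # skip poorly sampled annuli
            continue
        sigma = statistics.pstdev(vals)
        if sigma > 0:
            # contrast limit in magnitudes relative to the unit peak
            limits.append(((k + 0.5) * dr, -2.5 * math.log10(nsigma * sigma)))
    return limits  # list of (separation in pixels, delta_m)
```

A real contrast curve would be computed per filter and converted from pixels to arcseconds with the 11.41\,mas pixel scale.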
High contrast ($\Delta m$ of 4 to 6 mag) and high-SNR DSSI
observations at Gemini, such as those described here, generally detect
companions down to a separation of about 0\farcs1. For smaller
separations, the dynamic range decreases to something below $\Delta m
\sim 1$ mag at the diffraction limit. We have looked for evidence of
vibration effects in our data by studying the centroid positions of
bright stars as a function of time, but have not found any signature
of vibration to the limit of our time resolution, which is about
20\,Hz. Elongations in speckles do occur, but these are correlated
with telescope elevation and directed along a line leading to the
zenith as DSSI does not correct the atmospheric dispersion. This we
have calibrated out by deconvolving our data with point source
observations that have a dispersion model built in (the elongation is
measured on a bright star observed at small zenith distance $z$ and
scaled as $\tan z$ when deconvolving the object). The resulting
reconstructed images remain diffraction limited.
Table~\ref{tab:res} lists all observations. Its first column
identifies the target by the {\it Hipparcos} number of the primary
component, while the next column shows which component was observed,
the primary A or the secondary B; AB stands for both components. The
following columns contain the Besselian date of the observation,
filter ($R$ or $I$, see above), and the detection limits $\Delta m$ at
separations of 0\farcs15 and 1\arcsec. For resolved binaries, the last
three columns give the position angle $\theta$, separation $\rho$, and
magnitude difference $\Delta m$, while the detection limits for
resolved binaries are $\Delta m$ relative to the primary component.
Four binaries have known orbits. Residuals from our measurements
(average in $R$ and $I$ filters) to those orbits are given in Table~2,
as a consistency check of the DSSI calibration. However, they are
much larger than the DSSI errors of $\sim$1\,mas \citep{Horch2012} and
reflect the quality of the orbits. The 4th column gives the orbit
grade from \citep{VB6}.
\begin{deluxetable}{ r r r c l }
\tabletypesize{\scriptsize}
\tablenum{2}
\tablecaption{Residuals to orbits
\label{tab:VB6} }
\tablewidth{0pt}
\tablehead{
\colhead{HIP} &
\colhead{(O$-$C)$_\theta$} &
\colhead{(O$-$C)$_\rho$} &
\colhead{Gr.} &
\colhead{Reference} \\
& \colhead{(\degr)} & \colhead{($''$)} & &
}
\startdata
9621 & $-$3.7 & $-$0.105 & 3 & \citet{Hei1996a} \\
82510 & $-$0.5 & 0.048 & 3 & \citet{Sca2003c} \\
95589 & 0.0 & 0.023 & 4 & \citet{Sca2015a} \\
97477 & $-$0.1 & 0.000 & 2 & \citet{WSI2006b}
\enddata
\end{deluxetable}
\subsection{Comments on individual systems}
Comments on some observed objects are given here. Orbital periods
of newly resolved pairs are estimated by assuming that the projected
separation equals the semimajor axis. Overall, we resolved six new
subsystems in the secondary components. One of them was independently
found by another team. The two closest subsystems are tentative and
require confirming observations.
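These period estimates follow from Kepler's third law once the separation is converted to AU with the parallax. A minimal sketch (our illustration, not part of the original analysis; the example numbers use the HIP 77052 values quoted later in this section, and the statistical correction factor between projected separation and semimajor axis is ignored, as stated above):

```python
import math

def estimated_period_yr(rho_arcsec, parallax_arcsec, mass_sum_msun):
    """Kepler's third law with the projected separation taken as the
    semimajor axis: a_AU = rho/parallax, P = sqrt(a_AU**3 / M_tot),
    with P in years and M_tot in solar masses."""
    a_au = rho_arcsec / parallax_arcsec
    return math.sqrt(a_au ** 3 / mass_sum_msun)

# HIP 77052 Ba,Bb: 0.22" separation, parallax 68 mas, two ~0.25 Msun stars
print(round(estimated_period_yr(0.22, 0.068, 0.5), 1))  # ~8 yr
```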
{\it HIP 6268.} The faint secondary component of this binary was
pointed and tentatively resolved into a new 20-mas pair Ba,Bb. The
2\farcs6 separation between A and B was last measured in 1967;
presently AB must be wider than 2\farcs8, as it did not fit into the
DSSI field. The separation of Ba,Bb implies a period of 1.3\,yr.
Given the small separation, the relative photometry $\Delta R = 1.08$
mag and $\Delta I = 1.42$ mag is not reliable and it is premature to
conclude that Bb is bluer than Ba. The masses of Ba and Bb estimated
from their luminosity are 0.6 and 0.4 ${\cal M}_\odot$. Ironically,
the main component A has never been observed with high angular
resolution and has no RV coverage, so a similarly close subsystem
Aa,Ab would remain undetected, if it existed.
{\it HIP 9583.} Both A and B were observed separately, the separation
of AB is 2\farcs3. The data quality is below average because light
from the other component is getting into the speckle frames of the
observed component. This gives a ``bright'' edge to these frames, and
it is not handled well by the reduction routines, causing lines in the
reconstructed images. Nonetheless, the power spectra show no hint
of any fringes, so there is no evidence for binarity for either
component.
{\it HIP 12764.} The secondary component B of the recently discovered
5\farcs1 binary was observed and resolved into a 0\farcs048 subsystem
Ba,Bb. However, the pair is not seen in the $R$-band, and this
detection remains tentative. Both A and B were also observed with the
speckle camera at SOAR and found unresolved \citep{SAM15}. The period of
Ba,Bb is estimated at 2.6\,yr, the masses are 0.7 and 0.5 ${\cal
M}_\odot$. The main component A has an RV trend \citep{Bonavita2007}
also indicative of a subsystem. Therefore, the DSSI data reveal this
as a potential new 2+2 quadruple system.
{\it HIP 12925.} This is a quadruple system where the main star A is a
spectroscopic binary with yet unknown orbit and two visual companions.
The components A and D, at 1\farcs9 separation, were pointed
separately and found unresolved. The pair AD was also observed at
Palomar in 2013 \citep{Palomar}, with no new subsystems detected
either.
{\it HIP 74211.} The B-component of STF~1916 was unresolved. It
contains a spectroscopic subsystem (D.~Latham, 2012, private communication).
{\it HIP 74734.} The secondary component of the 5\farcs4 pair HO~547
is resolved into a new subsystem Ba,Bb with a relatively wide
separation of 0\farcs28 but a high contrast. It is barely seen in
the $R$ channel, and its tentative measure in $R$ disagrees with the
reliable measure in $I$. We estimate the period of Ba,Bb as
50\,yr, the masses are 0.65 and 0.12 ${\cal M}_\odot$. This resolution
highlights the high dynamic range of DSSI.
{\it HIP 77052.} Both components of the 4\farcs4 nearby (parallax
68\,mas) pair A~2230 ($\psi$~Ser, GJ~596.1) were observed, and Ba,Bb
was resolved at 0\farcs22. This subsystem has been independently
discovered by \citet{Rodriguez}. We estimate the masses of the nearly
equal stars Ba and Bb as 0.25 ${\cal M}_\odot$ each. The position of Ba,Bb
in 2009.5 was (31\degr, 0\farcs22), similar to (27\degr, 0\farcs22) in
2015.5, so the pair may have already completed a full revolution. It
was measured again at SOAR on 2016.141 at (21\fdg8, 0\farcs202),
demonstrating fast retrograde motion. The three measures correspond to a
6.1-year orbit with a semimajor axis of 0\farcs19 and a mass sum of
0.5 ${\cal M}_\odot$, but it is premature to present this tentative
orbit here. Masses of Ba and Bb will eventually be measured
accurately from the orbit to test evolutionary models of late
M-dwarfs, while the main component A, a G5V solar analogue with a
constant RV, will serve as a reference. The 529-year orbit of the outer
binary AB recently determined by \citet{Gat2013b} corresponds to the
mass sum of 1.44 ${\cal M}_\odot$, in agreement with the estimated
masses of all three stars.
{\it HIP 81662.} Both A and B at 11\farcs6 from each other were
pointed separately and found unresolved. A trend in the RV of A has
been noted by \citet{Bonavita2007}.
{\it HIP 85741.} The secondary component of the 5\farcs3 pair HU~673
is resolved at 0\farcs2, with a large $\Delta m$. The period of Ba,Bb
is $\sim$50\,yr, the masses are 0.55 and 0.14 ${\cal M}_\odot$.
{\it HIP 100970} hosts a planet with 18-day orbit \citep{Fischer1999}
and has a faint visual companion B at 3\farcs5. We report a secure
non-resolution of the main star, in agreement with other publications,
while B has not been observed.
{\it HIP 101345} is the pair BU~668 with a large contrast, last
measured at 3\farcs2 in 1965 and now found at 1\farcs2. It has not
been resolved by {\it Hipparcos}.
{\it HIP 110785} is a triple system consisting of the 913-day
spectroscopic and astrometric binary Aa,Ab \citep{Griffin2010} in a
3\farcs6 binary BU~290 with a poorly constrained visual orbit. The
estimated masses of Aa and Ab are 1.5 and 0.3 ${\cal M}_\odot$, the
semi-major axis of Aa,Ab is 59\,mas. The subsystem Aa,Ab is
unresolved here; its highly inclined orbit means that the separation
can often be smaller than the semimajor axis.
{\it HIP 111903} is a 10\farcs9 pair HU~3128 where a spectroscopic
subsystem in the primary component A is suspected from RV variation
\citep{CHIRON}. The secondary component B is unresolved here.
{\it HIP 115417} has a newly resolved secondary subsystem, discussed
in the next Section. There was some confusion about which component, A
or B, was pointed at Gemini and resolved. Analysis of the astrometric
data indicates that the subsystem belongs to B. This was confirmed in
2015 September using the speckle camera at the SOAR telescope
\citep{SAM15}. The SOAR measure of Ba,Bb (2015.7379: 181\fdg2,
0\farcs746, $\Delta m = 1.97$ mag at 788\,nm) agrees very well with
the Gemini measure (2015.5255: 180\fdg6, 0\farcs735, $\Delta I$ = 1.76
mag).
\section{The triple system HIP~115417}
\label{sec:orb}
HIP~115417 (HD~220334, WDS 23228+2034) is a nearby solar-type binary
with the following properties: spectral type G2V, HIP2 parallax
26.75$\pm$0.62 mas, proper motion $(+314.6, -11.7$)\,mas~yr$^{-1}$
\citep{HIP2}, $V=6.62$ mag. The WDS database contains 262
measurements of AB. The pair was frequently observed photographically
from 1942 to 1976 (mostly at USNO), probably in search of astrometric
subsystems.
The visual binary AB was discovered by W.~Struve in 1829 at 5\farcs69
and 79\fdg5 (STF~3007). It has moved to 5\farcs85 and 92\fdg1 in 2007.
The estimated period of this pair is on the order of 2\,kyr. The Tycho
photometry gives $V$ magnitudes of 6.74 and 9.78 mag, $B$ magnitudes
of 7.42 and 10.98 mag for A and B, respectively. The distant visual
companion C, also discovered by Struve, is optical.
The motion of AB observed for nearly two centuries shows a ``wavy''
character, most obvious in the position angles and less evident in
separations, which are measured less accurately than angles. The
astrometric subsystem Ba,Bb responsible for this wave was first resolved by
us and confirmed at SOAR two months later. Archival observations of AB
allow us to estimate orbital parameters of the subsystem.
\begin{figure}
\epsscale{1.1}
\plotone{fig4.eps}
\caption{Motion of the component B relative to A, located at
coordinate origin, in the binary system HIP~115417 (STF~3007).
Visual observations are plotted as crosses, photographic
observations as squares. The line shows the trajectory
corresponding to the two orbits proposed here. The scale is in
arcseconds. The gray insert shows the orbit of Ba,Bb and the
observed position of this pair on different scale.
\label{fig:orb} }
\end{figure}
If the orbit of the subsystem Ba,Bb is eccentric, the center of mass
does not coincide with the center of the orbital ellipse. When we
subtract the smooth motion of AB, this offset is also subtracted and
the average residuals are zero; an astrometric orbit derived from
such residuals would have a small eccentricity, even if $e$ is in fact
substantial. This problem is solved by fitting the motion of AB and
the orbit of the subsystem Ba,Bb simultaneously. We represent the motion of
AB by a circular orbit and adjust its parameters to obtain the
expected mass sum. Obviously, the orbit of AB is premature; it is
derived here only to model its observed segment, as needed for the
analysis of the subsystem. The astrometric orbit of B (photo-center of
the subsystem Ba,Bb) is described by the standard seven Campbell
elements ($P$-- period, $T_0$ -- time of periastron, $e$ --
eccentricity, $a$ -- astrometric semimajor axis, $\Omega$ -- position
angle of node, $\omega$ -- argument of periastron, $i$ --
inclination). An additional parameter $F=A_2/a_2$, the ratio of the
true and astrometric inner axes, is introduced, and the resolved
measures of Ba,Bb are added to the data set. Thus, the data are
modeled by 15 parameters.
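The seven Campbell elements determine the apparent motion through standard two-body formulae. As a hedged illustration (our sketch of the textbook conversion, not the authors' actual fitting code), the following converts one set of elements to a position angle and separation at epoch $t$:

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(50):
        d = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= d
        if abs(d) < tol:
            break
    return E

def ephemeris(t, P, T0, e, a, Omega_deg, omega_deg, i_deg):
    """Position angle (deg) and separation for one set of Campbell elements."""
    M = 2.0 * math.pi * ((t - T0) / P % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
    r = a * (1.0 - e * math.cos(E))            # radius vector
    Om, om, inc = (math.radians(v) for v in (Omega_deg, omega_deg, i_deg))
    u = nu + om                                # argument of latitude
    theta = Om + math.atan2(math.sin(u) * math.cos(inc), math.cos(u))
    rho = r * math.sqrt(math.cos(u) ** 2 + (math.sin(u) * math.cos(inc)) ** 2)
    return math.degrees(theta) % 360.0, rho

# e.g. a face-on circular orbit at quarter phase
print(ephemeris(0.25, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
```

In the actual fit, residuals of the measured AB positions to the sum of the outer-orbit motion and this photocentric wobble are minimized simultaneously.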
The early visual measurements of AB are inaccurate and are compatible
with a wide range of orbital parameters, while the photographic
measurements do not cover the full inner period. To derive the most
likely set of orbital elements, we estimate the components' masses
from their luminosity using standard relations for main sequence stars
and choose the elements that match those masses, namely 1.2, 0.8, and
0.5 ${\cal M}_\odot$ for A, Ba, and Bb, respectively. This is achieved
by fixing some elements in the least-squares fitting. By increasing
the inclination of the outer circular orbit, we increase the total
mass sum. The inner mass sum is adjusted by fixing the inner period.
The number of adjustable parameters is therefore reduced from 15 to 11.
In the least-squares adjustment of 11 free parameters, the weights are
inversely proportional to the measurement errors, which are assigned
subjectively based on the observing method and then corrected
iteratively to down-weight the outliers. The errors of most micrometer
measures are taken as 0\farcs2, the errors of photographic measures
are 0\farcs02, in both radial and tangential directions. The weighted
rms deviation from the orbits is indeed 20\,mas. Figure~\ref{fig:orb}
shows the measures of AB and the trajectory corresponding to the
elements listed in Table~3.
\begin{deluxetable}{ l c c }
\tabletypesize{\scriptsize}
\tablenum{3}
\tablecaption{Orbits of HIP 115417
\label{tab:orb} }
\tablewidth{0pt}
\tablehead{
\colhead{Element} &
\colhead{AB} &
\colhead{Ba,Bb}
}
\startdata
$P$ (yr) & 2161 & 117 (fixed) \\
$T_0$ (yr) & 1904.5 & 1935.1$\pm$5.5 \\
$e$ & 0 (fixed) & 0.204$\pm$0.04 \\
$a$ (\arcsec) & 6.053 & 0.215$\pm$0.011 \\
$\Omega$ (\degr) & 84.4 & 188.5$\pm$2.8 \\
$\omega$ (\degr) & 0 (fixed) & 294$\pm$15 \\
$i$ (\degr) & 58 (fixed) & 65.1$\pm$2.5 \\
$F =A_2/a_2$ & \ldots & 3.21$\pm$0.17
\enddata
\end{deluxetable}
If the light of the component Bb is negligible, the measures of
AB and A,Ba are identical and the mass ratio in the inner subsystem
is related to the ratio of the axes, $q_2 = 1/(F-1) = 0.45$ for
$F = 3.2$. The estimated masses correspond to $q_2 \approx 0.6$
or $F \approx 2.7$. The light of Bb is thus slightly offsetting
the photo-center of B from the position of Ba, increasing $F$.
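The arithmetic behind these numbers is simple enough to check directly; a minimal sketch (our illustration, not part of the original analysis):

```python
def mass_ratio_from_axes(F):
    """q2 = M_Bb/M_Ba from the ratio F = A2/a2 of the true to the
    photocentric semimajor axis, valid only when the light of the
    secondary (Bb) is negligible."""
    return 1.0 / (F - 1.0)

def axis_ratio_from_mass_ratio(q2):
    """Inverse relation, F = 1 + 1/q2."""
    return 1.0 + 1.0 / q2

print(round(mass_ratio_from_axes(3.2), 2))        # measured F gives q2 = 0.45
print(round(axis_ratio_from_mass_ratio(0.6), 2))  # estimated masses give F = 2.67
```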
This semi-qualitative analysis is all we can do at present, as the
data do not constrain both orbits sufficiently well to warrant a more
detailed investigation. The long inner period means that the situation
will not improve soon.
\vspace{0.5cm}
\section{Summary}
\label{sec:concl}
We observed 25 secondary components of nearby binaries with the DSSI
instrument and discovered six new subsystems (one of those was
independently found by others). Two new pairs are very tight and have
short estimated periods, illustrating the detection power of speckle
interferometry at an 8-m telescope. The large fraction of resolved
secondaries, 0.24, supports previous results
\citep{Tok2014,RAO,CHIRON} and shows that secondary subsystems are
no less frequent than subsystems in the primaries. Considering the
small number of targets covered, it makes little sense to go beyond
this qualitative statement in the statistical analysis. The data
collected here, including non-resolutions with deep detection limits,
will be used in the future to obtain a refined statistical analysis of
the complete 67-pc sample. The large incidence of secondary
subsystems suggests that dynamical interactions in small $N$-body
systems do not play a major role in the formation of multiple stars.
One of the newly resolved secondary subsystems, HIP 115417 Ba,Bb,
causes noticeable deviation from the slow motion of the outer pair
AB. We determined a preliminary orbit of the inner subsystem with a
period of 117 years using archival measurements from the WDS.
\acknowledgments
We thank all observers who participated in the DSSI run. We are
grateful to Gemini staff who aided in making the DSSI observations possible.
This work used the SIMBAD service operated by Centre des Donn\'ees
Stellaires (Strasbourg, France), bibliographic references from the
Astrophysics Data System maintained by SAO/NASA, and the Washington
Double Star Catalog maintained at USNO.
{\it Facilities:} \facility{Gemini-N}.
\section{Introduction}\label{sec:intro}
Mathematical formulae are an integral part of Wikipedia and other projects of the Wikimedia foundation\footnote{List of Wikimedia projects: \url{https://meta.wikimedia.org/wiki/Wikimedia_projects}}.
The MediaWiki software is the technical backbone of many Wikimedia projects, including Wikipedia.
Since 2003, wikitext -- the markup language of MediaWiki -- supports mathematical content~\cite{Schubotz2014}.
For example, MediaWiki converts the wikitext code \verb|<math>E=mc^2</math>| to the formula $E=mc^2$.
While the markup for mathematical formulae has remained stable since 2003, MediaWiki’s pipeline for processing wikitext to render formulae has changed significantly.
Initially, MediaWiki used LaTeX to convert math tags in wikitext to PNG images.
The rendering process was slow, the images did not integrate well into the text, were inaccessible to screen readers for visually impaired users, and scaled poorly for both small and high-resolution screens.
To alleviate these problems, we started developing a new JavaScript-based rendering backend called Mathoid in 2013 \cite{Schubotz2014}.
Mathoid invokes MathJax on the server-side to convert the LaTeX code to MathML and SVG output. The new rendering pipeline became available in production in 2016~\cite{Schubotz2016}.
Improving the rendering of mathematical formulae was only a first step towards our ultimate goal of enhancing the discoverability of mathematical knowledge.
Working towards that goal, we developed a first math search engine prototype for Wikipedia~\cite{Schubotz2012} in 2012.
However, we found that classic, lexical search functionality for mathematical content has little practical benefit.
The NTCIR MathIR competitions~\cite{Aizawa2014,Schubotz2015}, which will continue at the CLEF conference 2020, have confirmed our experience. The competitions use Wikipedia as a dataset to evaluate mathematical information retrieval systems. The NTCIR results indicate that systems employing established information retrieval technology fail to add significant value for the average Wikipedia user~\cite{Schubotz2016b}.
Deploying math search to Wikipedia requires a semantic understanding of formulae, which in turn necessitates semantic augmentation of formulae.
To increase the availability of semantic information on mathematical formulae, we implemented the rendering of formulae in Wikidata -- the central structured knowledge base for Wikimedia projects.
The new functionality makes it much easier for volunteers to donate semantically annotated mathematical formulae.
The availability of semantic formula data in Wikidata has thus far enabled several research projects, e.g., on math question answering~\cite{Schubotz2018a}, semantic augmentation of mathematical content~\cite{Scharpf2018}, and mathematical information retrieval~\cite{Scharpf2019b}. However, the connection between formulae in Wikidata and Wikipedia had no immediate benefit for the average Wikipedia user until January 2020.
\section{Enhanced formulae in Wikipedia}\label{sec:method}
In January 2020, we deployed a feature that enables enhancing mathematical formulae in Wikipedia with semantics from Wikidata.
For instance, the wikitext code \verb|<math qid=Q35875>E=mc^2</math>| now connects the formula $E=mc^2$ to the corresponding \href{https://www.wikidata.org/wiki/Q35875}{Wikidata item} by creating a hyperlink from the formula to the special page shown in Figure \ref{figure}.
The special page displays the formulae together with its name, description, and type, which the page fetches from Wikidata.
This information is available for most formulae in all languages.
Moreover, the page displays elements of the formula modeled as \texttt{has part} annotations of the Wikidata item.
The \texttt{has part} annotation is not limited to individual identifiers but also applicable to complex terms, such as $\frac12m_0v^2$, i.e., the \href{https://en.wikipedia.org/w/index.php?oldid=939835125#Mass–velocity_relationship}{kinetic energy approximation for slow velocities}.
For example, we demonstrated using the annotation for the
\href{https://en.wikipedia.org/w/index.php?title=Special:MathWikibase&qid=Q1899432}{Grothendieck–Riemann–Roch theorem}
\(\mbox{ch}(f_{\mbox{!}}{\mathcal F}^\bullet)\mbox{td}(Y) = f_* (\mbox{ch}({\mathcal F}^\bullet) \mbox{td}(X))\).
The smooth quasi-projective schemes $X$ and $Y$ in the theorem lack Wikipedia articles.
However, dedicated articles on \textit{quasi-projective variety} and \textit{smooth scheme} exist.
We proposed modeling this situation by creating the new Wikidata item \href{https://www.wikidata.org/wiki/Q85397895}{\emph{smooth quasi-projective scheme}}, which links to the existing articles as subclasses.
To create a clickable link from the Wikidata item to Wikipedia, we could create a new Wikipedia article on \textit{smooth quasi-projective scheme}.
Alternatively, we could add a new section on \textit{smooth quasi-projective scheme} to the article on \emph{quasi-projective variety} and create a redirect from the Wikidata item to the new section.
Aside from implementing the new feature, defining a decision-making process for the integration of math rendering features into Wikipedia was equally important.
For this purpose, we founded the \href{https://meta.wikimedia.org/wiki/Wikimedia_Community_User_Group_Math}{Wikimedia Community Group Math} as an international steering committee with authority to decide on future features of the math rendering component of Wikipedia.
\section{Conclusion \& Future Work}\label{sec.concl}
After working on Wikipedia's math support for several years, we have deployed the first feature that goes beyond improving the display of formulae.
Realizing the feature became possible through the inauguration of the Wikimedia Community Group Math.
The new feature helps Wikipedia users to better understand the meaning of mathematical formulae by providing details on the elements of formulae.
Because the new feature is available in all language editions of Wikipedia, all users benefit from the improvement.
Rolling out the feature for all languages was important to us because using Wikipedia for more in-depth investigations is significantly more prevalent in languages other than English~\cite{LemmerichS0Z19}.
Nevertheless, also in the English Wikipedia, fewer than one percent of the articles have a quality rating of good or higher~\cite{PiccardiCZ018}.
Providing better tool support to editors can help in raising the quality of articles.
In that regard, our semantic enhancements of mathematical formulae will flank other semi-automated methods, such as recommending sections~\cite{PiccardiCZ018} and related articles~\cite{Schwarzer2016}.
To stimulate the widespread adoption of semantic annotations for mathematical formulae, we are currently working on tools that support editors in creating the annotations.
With \texttt{AnnoMathTex} \cite{Scharpf2019b}, we are developing a tool that facilitates annotating mathematical formulae by providing a graphical user interface that includes machine learning assisted suggestions~\cite{moi} for annotations.
Moreover, we will integrate a field into the visual wikitext editor that suggests that authors link the Wikidata item of a formula if the formula is already in the Wikidata database.
Improved tool support will particularly enable smaller language editions of Wikipedia to benefit from the new feature because the annotations performed in any language will be available in all languages automatically.
\begin{figure}[tp]
\centering
\includegraphics[width=.9\columnwidth]{image.png}
\caption{Semantic enhancement of the formula $E=mc^2.$ \small{\url{https://en.wikipedia.org/wiki/Special:MathWikibase?qid=Q35875}}}
\providecommand{\Description}[2][]{}
\Description[]{Special page on the mass-energy equivalence formula.}
\label{figure}
\vspace{-1.5em}
\end{figure}
Intuitively, the explanation of color confinement should be encoded
in the infrared (IR) behavior of QCD Green's functions. The
Landau-gauge Gribov-Zwanziger (GZ) confinement scenario and the scaling
solution obtained by solving Schwinger-Dyson equations (SDEs)
demand --- for any space-time dimensions D $\geq 2$ --- a null
gluon propagator at zero momentum and an IR-enhanced ghost
propagator \cite{Cucchieri:2008yp}.
At present, there is wide agreement \cite{Cucchieri:2010xr} that numerical
simulations in minimal Landau gauge show (in the infinite-volume limit):
{\bf 1)} an IR-finite gluon propagator $D(p)$ in D $=3,4$ and a null $D(0)$
in 2D, {\bf 2)} violation of reflection positivity for
the gluon propagator in D $=2,3,4$ and {\bf 3)} an essentially free
ghost propagator $G(p)$ in D $=3,4$ but IR-enhanced in 2D. Thus,
the 3D and 4D results support the massive solution of the SDEs
\cite{Boucaud:2008ji,massive}, while the 2D case has a scaling behavior.
Then, a natural question is why the 2D case is different from the 3D and 4D
ones. At the moment, a possible answer to this question has only been
presented in \cite{Dudal:2008xd}.
Recently, three works \cite{Sternbeck:2008mv,Maas:2008ri,Maas:2009se}
have allegedly shown evidence of the scaling IR behavior also in 3D and 4D.
Here, we will comment on these three works.
\subsection{The $\beta = 0$ Case}
We have already criticized Ref.\ \cite{Sternbeck:2008mv} in our
work \cite{Cucchieri:2009zt}. Since that criticism has not been answered,
we will repeat it here. The authors of Ref.\ \cite{Sternbeck:2008mv}
study the Landau-gauge gluon and ghost propagators in the strong-coupling limit of pure
SU(2) lattice gauge theory. These propagators are evaluated using
different discretizations of the gluon field and, in particular,
the standard (compact) definition and the (non-compact)
stereographic projection \cite{vonSmekal:2007ns}. Their main conclusions are:
``We furthermore demonstrate that the massive branch observed for $a^2q^2 <1$
does depend on the lattice definition of the gluon fields, and that it is thus
not unambiguously defined....One might still hope that this ambiguity
will go away at non-zero $\beta$ in the scaling limit. While this is true
at large momenta, we demonstrate...that the ambiguity is still present
in the low-momentum region, at least for commonly used values of the lattice
coupling such as $\beta = 2.3$ or $\beta = 2.5$ in $SU(2)$....The scaling
properties such as exponent and coupling, on the other hand, appear to be robust under
variations of the discretization of the gauge fields...This emphasizes the importance
of understanding any discretization ambiguity of the associated gluon mass, before concluding
that this mass is now firmly established.'' However, nowhere in Ref.\ \cite{Sternbeck:2008mv}
are data at $\beta = 2.5$ shown. On the
other hand, data for a lattice volume $32^4$ at $\beta = 2.5$ in the SU(2) case
are presented in Ref.\ \cite{vonSmekal:2007ns} for the two propagators, using the standard
discretization and the stereographic projection. The conclusion of \cite{vonSmekal:2007ns}
is that ``...there are hardly any differences between the propagators obtained in
each case''. Thus, referring to the last sentence reported above from Ref.\ \cite{Sternbeck:2008mv},
there are no discretization ambiguities in the evaluation of these propagators
and the existence of a gluon mass is now firmly established.
\subsection{The Absolute Landau Gauge}
Ref.\ \cite{Maas:2008ri} considers the absolute Landau gauge,
i.e.\ configurations belonging to the fundamental modular region
$\Lambda$. This approach, however, cannot yield an IR-enhanced ghost propagator
in 3D or in 4D. Actually, restricting the configuration space to the region $\Lambda$
makes the ghost propagator even less singular \cite{Cucchieri:1997dx}. This can be seen,
indeed, also in Figures 5 and 12 of Ref.\ \cite{Maas:2008ri}. The author tries
to explain these results, which clearly go against the scaling solution, by saying that
``The reason for this behavior of the ghost propagator...may be connected to
the volume evolution of the first Gribov region and the fundamental modular region....The
combined effect of the precise shape of the
low-eigenvalue spectrum and a diverging normalization of the eigenstates could be
sufficient to provide a more infrared divergent ghost propagator in the infinite-volume
and continuum limits in absolute Landau gauge than in minimal Landau gauge.''
Thus, simulations in the absolute Landau gauge should agree with the scaling solution in the
infinite-volume limit (for D $=3,4$) due to a {\em hypothetical} diverging contribution of
the eigenstates of the Faddeev-Popov operator. (Note that a possible way of quantifying
this statement would be to prove that the lower bound of the ghost propagator, introduced
in Ref.\ \cite{Cucchieri:2008fc},
blows up sufficiently fast in the infinite-volume limit.) Moreover, that this effect should be important
in the absolute Landau gauge and not in the minimal Landau gauge remains a mystery to us,
considering that any configuration belonging to the absolute Landau gauge is
also a configuration of the minimal Landau gauge. The author also adds ``A final proof is, of course,
only that in the absolute Landau gauge at sufficiently large volume the ghost propagator would be more
singular than in the minimal Landau gauge. The volume dependence of the propagator in both gauges found
here is as expected if this is the case....Hence, at the current point it seems more appropriate
to compare the lattice results in absolute Landau gauge, rather than in minimal Landau gauge, with those
from functional results which exhibit a scaling behavior in the far infrared.'' These
statements --- which are somewhat {\em sibylline}, since the data do not show an
IR-enhanced ghost propagator ---
may be the reason why Ref.\ \cite{Maas:2008ri} is (wrongly) cited as a numerical
verification of the scaling behavior for D $ > 2$.
\begin{figure}
\includegraphics[height=.3\textheight]{D-2d-D0-bounds-V-tri2.ps}
\caption{\label{fig:D0-2d}
Plot of $D(0)/V$, together with its upper and lower
bounds \cite{Cucchieri:2007rg}, as a function of the inverse
lattice side $1/N$. In the three cases we get a behavior
$1 / N^e$ with $e = 2.67(2)$.
\vspace{-0.45mm}}
\end{figure}
\subsubsection{The 2D Case}
Let us note that if the massive behavior observed in 3D and in 4D could be
related to discretization effects, as suggested by Ref.\ \cite{Sternbeck:2008mv},
or to Gribov-copy effects, as reported in \cite{Maas:2008ri}, then these effects
should also be present for D $ = 2$ and one should not find in this case
a scaling behavior. In this respect, still in Ref.\ \cite{Maas:2008ri},
the author makes the following prediction: ``A consequence of this scenario is
that it should be expected that also in two dimensions, for sufficiently large
volumes and number of Gribov copies, an infrared finite gluon propagator is
obtained in the minimal Landau gauge.'' We checked this prediction by
evaluating the gluon propagator for lattice volumes up to $2560^2$ at $\beta = 10$
(i.e.\ with a lattice size $L \approx 460\,{\rm fm}$).
In Fig.\ \ref{fig:D0-2d} we plot the volume dependence of $D(0)/V$, together with the
upper and lower bounds introduced in Ref.\ \cite{Cucchieri:2007rg}. The three
sets of data clearly extrapolate to zero faster than $1/V$, implying $D(0)=0$ in 2D.
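The extrapolation just described amounts to a simple power-law fit of $D(0)/V$ as a function of the inverse lattice side. The snippet below is a minimal sketch of such a fit in Python; the data points are synthetic, generated from an exact power law with the quoted exponent, and are not the actual lattice data.

```python
import numpy as np

# Hypothetical (synthetic) data mimicking D(0)/V ~ 1/N^e with e = 2.67;
# the real analysis uses the lattice data of the figure, not these numbers.
N = np.array([320.0, 640.0, 1280.0, 2560.0])
y = 5.0 / N**2.67                      # stand-in for D(0)/V

# A log-log linear fit: the exponent e is minus the slope.
slope, intercept = np.polyfit(np.log(N), np.log(y), 1)
e_fit = -slope
print(f"fitted exponent e = {e_fit:.2f}")   # -> 2.67 for this synthetic data
```

For real data one would, of course, propagate the statistical errors of $D(0)/V$ into the fit, e.g.\ through a weighted least-squares or bootstrap analysis.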
\subsection{The B Gauges}
After considering the absolute Landau gauge in Ref.\ \cite{Maas:2008ri}, a
new set of gauges --- called B gauges --- was introduced by the same author
\cite{Maas:2009se}.
In this case one looks along each orbit for a transverse configuration
that yields a given value $B$ for the ghost dressing function $D_G(p) = p^2 G(p)$ at
the smallest non-zero momentum $p_{min}$. This definition does not solve the Gribov
ambiguity \cite{Maas:2009se}. Moreover, in order to find an IR-enhanced ghost
propagator one needs to favor configurations closer to the
first Gribov horizon $\partial \Omega$. This is the opposite
of what is done in the absolute Landau gauge, where one favors configurations well
inside the first Gribov region $\Omega$ \cite{Cucchieri:1997ns}. Thus, if the B gauges
should produce the scaling solution, ``it could well be that...the absolute
Landau gauge is not connected to a scaling behavior'' \cite{Maas:2009se},
in disagreement with Ref.\ \cite{Maas:2008ri}.
The main result of this approach is that the ghost propagator is strongly affected
by the choice of configuration on each orbit, in such a way that its values are enclosed in
a ``corridor''. In particular, in 3D and 4D the upper bound of this corridor
``is strongly increasing with volume''. At the same time, the gluon propagator seems
to be $B$-independent and we should have $D(0) > 0$ in the infinite-volume limit.
Thus, the only scaling solution that can be obtained with the B gauges seems to be the one
corresponding to a critical exponent $\kappa = 1/2$, which was never the
preferred value in scaling-solution works. Moreover, if the
infrared exponent is $1/2$ then in 4D one should have $ D_G(p) \sim 1/p$. Since
$1/p_{min} \approx L$, the upper bound of the corridor should grow at least as fast
as the lattice size $L$, in order to support the scaling solution. One can verify that
this is not the case with the 4D data presented in Fig.\ 3 of \cite{Maas:2009se}
(the curve should be hyperbolic as a function of $1/L$).
Let us note that one of the motivations for the introduction of the B gauges is the possible
relation with the one-parameter family of solutions obtained by functional methods
\cite{Boucaud:2008ji,Fischer:2008uz}. In this respect one should stress, however, that the B gauges are
related to different Gribov copies on each orbit. On the other hand, the configuration space
is not encoded in the SDEs and this information has to be put in by hand. This can be done
in simple cases \cite{Reinhardt:2008ij}, if all Gribov copies are known, but nobody
knows how to do it in a realistic case. Thus, this relation seems at the moment
quite accidental. Even more fanciful seems to us the hypothetical
connection between the Kugo-Ojima (KO) approach \cite{Cucchieri:2008yp} and a (possible) scaling
solution obtained using B gauges. Indeed, this connection requires ``subtle cancellation''
\cite{Maas:2009se}, since one has to relate an average over all
Gribov copies to results obtained by selecting specific copies inside the first Gribov region
$\Omega$. In our opinion, the lack of BRST invariance when the functional space is restricted
to $\Omega$ \cite{Dudal:2009xh} obscures the relation of the GZ approach
with the KO criterion and the analogies between these two approaches seem to be, at
the moment, a questionable coincidence.
Finally, several questions should be answered before discussing
in detail the results obtained using B gauges. For example,
it is well known that some Gribov copies on the lattice are just lattice artifacts. Thus, by
using the B gauges, aren't we just probing these artifacts? This may explain the
over-scaling observed in \cite{Maas:2009ph}. Also, it seems very difficult to control the
infinite-volume limit of the corridor and, as pointed out in \cite{Maas:2009se}, ``it cannot be
excluded that the corridor closes again at much larger volumes''.
This seems indeed possible since, for very large lattice volumes,
all the orbits should come very close to the boundary $\partial \Omega$ and one can
expect smaller Gribov-copy effects \cite{Sternbeck:2005tk}.
\section{Conclusions: the Ghost Factory}
We believe that, in order to understand the results obtained in minimal
Landau gauge using numerical simulations, the first question to be answered
is: why is the 2D case different? One could also ask: can we test
numerically the explanation presented in \cite{Dudal:2008xd}? A clear answer
to these questions probably requires new ideas and better data (especially
in the ghost sector). Unproven hypotheses and happy coincidences
should on the contrary be treated with great caution.
We recently installed at IFSC--USP a new machine with 18 Intel quad-core
Xeon 2.40 GHz CPUs (with an InfiniBand network and a total of 216 GB of memory)
and 8 NVIDIA Tesla S1070
boards (500 Series), each with 960 cores and 16 GB of memory. The peak performance
of the 8 Tesla boards is estimated at about 2.8 Tflops in double precision and 33
Tflops in single precision. This machine will be used mainly for studies of Green's
functions in different gauges (Landau, Feynman and Coulomb) for various SU($N_c$) gauge groups.
In particular, the GPUs will be used for the inversion of the Faddeev-Popov matrix
using conjugate gradient.
This computer will allow us to perform an extensive study of the ghost sector.
We believe that this new ammo will help us clarify the issues addressed above.
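The Faddeev-Popov inversion mentioned above is the dominant cost of a ghost-propagator calculation. As a minimal illustration of the method only (not of the actual 4D Faddeev-Popov operator), the sketch below applies a plain conjugate-gradient solver to a toy symmetric positive-definite matrix with a point source, which is the generic structure of such an inversion.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-in for the operator to be inverted: a 1D lattice Laplacian
# plus a small diagonal shift (NOT the actual Faddeev-Popov matrix).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
b = np.zeros(n)
b[0] = 1.0                    # point source, as in ghost-propagator runs
x = conjugate_gradient(A, b)
print("residual =", np.linalg.norm(A @ x - b))   # below the 1e-10 tolerance
```

In production runs the matrix-vector product is implemented directly on the gauge configuration (and on the GPU), rather than through an explicitly stored matrix.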
\begin{theacknowledgments}
We thank FAPESP (grant \# 2009/52213-2) and CNPq for support.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
Collective spontaneous emission is one of the central topics of modern optics. An intriguing example of a collective effect is superradiance, discovered by Dicke in 1954, where the radiance intensity from an ensemble of emitters is enhanced due to the constructive interference between the radiances from individual emitters\,\cite{Dicke1954}. Superradiance was first observed experimentally more than four decades ago\,\cite{Skribanowitz1973HM,Friedberg1973HT}. To date, superradiance has become a useful resource in lasing engineering\,\cite{Chan2003BV,Norcia2016WC,Bohnet2012CW}, precision measurements\,\cite{Kim2006BO, Rohlsberger2010SS, Norcia2018CM}, quantum memories\,\cite{Walther2019SK}, and quantum information\,\cite{Kuzmich2003BB,Casabone2015FB}. Its counterpart, subradiance, describes the cooperative suppression of spontaneous emission from an ensemble of emitters\,\cite{Dicke1954}. Compared with superradiance, it is significantly harder to observe the subradiance effect experimentally, as the subradiant states are weakly coupled to the radiative vacuum and are rather sensitive to non-radiative decoherence. Direct observations of subradiance have been achieved in a pair of ions\,\cite{DeVoe1996Brewer} and in molecular systems\,\cite{Hettich2002SZ,Takasu2012ST,McGuyer2015MI}. Recently, subradiance was also observed in cold atomic clouds\,\cite{Guerin2016AK,Cipris2021ME,Das2020LY, Gold2022HY,Ferioli2021GH}, superconducting circuit systems\,\cite{Wang2020LF}, Rydberg atoms\,\cite{Stiesdal2020BK}, and a 2D layer of atoms\,\cite{Rui2020WRA}. Because of its special radiation characteristics, subradiance may have important applications in quantum metrology\,\cite{Ostermann2013RG} and quantum information processing\,\cite{Asenjo-Garcia2017MCA,Facchinetti2016JR,Needham2019,Guimond2019GV}. For example, subradiance can be used to prolong the storage lifetimes of information through its slow collective emission.
Usually superradiance and subradiance are studied in the pulsed regime, where the emitters, initially prepared in the excited state, rapidly relax to the ground state via the single-atom decay channel and the radiance then terminates\,\cite{Gross1982Haroche}. This collective radiance is transient. It has recently been proposed that superradiance and subradiance can also be obtained in steady state, where both continuous dissipation and pumping are present in the system\,\cite{Bohnet2012CW,Meiser2010Holland1, Meiser2010Holland2, Gegg2018CK, Shankar2021RJ, Auffeves2011GP, Dorofeenko2013ZV, Jager2021LS, Zheng2016Cooper, Zhang2018ZM, Kirton2017Keeling, Patra2019AY, Xu2016JS,Bohnet2014CW}. The emitters collectively emit photons and can be repumped to provide a steady supply of excitation for the system\,\cite{Bohnet2012CW,Meiser2010Holland1, Meiser2010Holland2,Shankar2021RJ}. Such a system can continuously generate collective light emission through the single-atom decay channel. However, the present studies on steady-state superradiance and subradiance are confined to the case where the collective decay of the atomic ensemble proceeds through single-atom decay. Recently, it has been proposed that collective light emission through the single-atom decay channel is suppressed and a two-atom decay channel (i.e., the simultaneous decay of two atoms of an ensemble) emerges in waveguide quantum electrodynamics (QED) when the emitter frequencies are below the edge of the propagation band\,\cite{Wang2020JK}. Reference\,\cite{Qin2021MJ} proposed that degenerate parametric amplification in a cavity QED system can also induce two-atom decay. The two-atom decay can lead to supercorrelated radiance with perfectly correlated spontaneous emission\,\cite{Wang2020JK} and to the generation of a long-lived macroscopic quantum superposition state\,\cite{Qin2021MJ}.
A question that then naturally arises is whether the two-atom decay could influence the steady-state collective radiance characteristics of an atomic ensemble.
Here, we study the steady-state collective radiance of an incoherently pumped atomic ensemble. The atomic ensemble can undergo two-atom collective decay via the cavity. We find that, compared with the case of single-atom decay, two-atom decay can significantly suppress the steady-state collective radiance of the atomic ensemble, expanding the region of subradiance. In the subradiance regime governed by the collective decay, the system is in an entangled state, and the mean populations of the excited and ground states of the atoms are almost equal. We also show the state-space population distributions in the different radiance regimes, which clearly demonstrate the processes of the two-atom decay and the collective radiance characteristics of the atoms it controls. Moreover, we investigate the correlation characteristics of the photons emitted from the atomic ensemble. Nearly coherent emitted photons can be obtained in the superradiance regime when only single-atom decay is considered, but this does not occur when only two-atom decay is included. In the latter case, the steady-state emitted photons show only bunching in the subradiance, superradiance, and uncorrelated-radiance regimes, where the correlation function must be redefined because the cavity field obeys $\hat{a}\propto\hat{J}_-^2$.
Compared to earlier works on the application of two-atom decay\,\cite{Wang2020JK,Qin2021MJ}, here we investigate a different quantum effect, i.e., the steady-state subradiance of an atomic ensemble controlled by two-atom decay. The subradiance in steady state originates from the competition between the collective decay and the repumping of the atoms: the system balances the collective decay against the weak repumping through a suppressed emission of the atomic ensemble. This subradiance may be useful for quantum storage due to its slow collective emission. In contrast, Ref.\,\cite{Wang2020JK} studied supercorrelated radiance, where the two-atom decay produces perfectly correlated spontaneous emission and can lead to a collective acceleration beyond the $N^2$ scaling of superradiance, and Ref.\,\cite{Qin2021MJ} studied the generation of a long-lived macroscopic quantum superposition state by two-atom decay. These effects may have potential applications in lasing engineering and noise-immune quantum technologies. Steady-state collective radiance means that the system stably emits photons under continuous dissipation and pumping, which is also essentially different from supercorrelated radiance, where the emitters, initially prepared in the excited state, rapidly relax to the ground state and the radiance then terminates. Our work not only expands the realm of two-atom decay by bringing it to the next stage of application in steady-state collective radiance, but is also of fundamental interest for collective radiance theory.
\begin{figure}
\includegraphics[width=8.6cm]{fig1.pdf}\\
\caption{(a) Schematic of the model. The cavity field is coupled to the atomic ensemble consisting of $N$ identical two-level atoms, where $\lambda$ is the collective coupling strength between the cavity and the atomic ensemble, and $\kappa_a$ is the decay rate of the cavity field. The atoms are incoherently repumped with pump rate $\gamma_p$ and can undergo two-atom collective decay via the cavity field. (b) Dicke space for $N=6$ showing the Dicke states $|J,M\rangle$. The red, green, and blue arrows correspond to the processes of repumping $\gamma_p$, single-atom decay $\Gamma_1$, and two-atom decay $\Gamma_2$, respectively. }\label{fig1}
\end{figure}
\section{Model}
We consider an atomic ensemble that consists of $N$ identical two-level atoms, each with an excited state $|e\rangle$ and ground state $|g\rangle$.
As shown in Fig.\,\ref{fig1}(a), all atoms are collectively coupled to a cavity field, and the coupled atom-cavity system is described by the Hamiltonian $H=\lambda(\hat{a}^\dag \hat{J}_-^2+\hat{a}\hat{J}_+^2 )$, where $\hbar=1$, $\hat{J}_-=\sum_{n=1}^N\hat{\sigma}_n=(\hat{J}_+)^\dag$ is the collective lowering operator, $\hat{\sigma}_n^\dag=|e\rangle\langle g|$ is the Pauli creation operator for the $n$th two-level atom, and $\lambda$ is the effective atom-cavity collective coupling strength. Here, we have assumed $\omega_a\approx2\omega_{\sigma}$, where $\omega_a$ and $\omega_{\sigma}$ are the frequencies of the cavity and of the two-level atoms, respectively. The model can be realized in a nonlinear cavity QED system with degenerate parametric amplification\,\cite{Qin2021MJ}. The dissipative dynamics of the coupled system is described by a Lindblad-type master equation
\begin{align}\label{eq001}
\frac{d \hat{\rho}}{dt}=-i[\hat{H},\hat{\rho}]+\gamma_p \sum_{n=1}^N \mathcal{L}[\hat{\sigma}_n^\dag]+ \kappa_a \mathcal{L}[\hat{a}],
\end{align}
%
where the Liouvillian superoperator $\mathcal{L}$ is defined as $\mathcal{L}[\mathcal{\hat{O}}]=(2\mathcal{\hat{O}}\hat{\rho}\mathcal{\hat{O}}^\dag-\hat{\rho}\mathcal{\hat{O}}^\dag\mathcal{\hat{O}}-\mathcal{\hat{O}}^\dag\hat{\mathcal{O}}\hat{\rho})/2$, and $\kappa_a$ and $\gamma_p$ represent the decay rate of the cavity and the rate of incoherent pumping of the individual atoms, respectively. The dissipation of the system is balanced by pumping the atoms from their ground states to the excited states. This pumping of the atoms can be regarded as spontaneous absorption from $|g\rangle$ to $|e\rangle$. It can be achieved experimentally by optically driving a Raman transition from the ground state $|g\rangle$ to an auxiliary excited state that rapidly decays to the excited state $|e\rangle$\,\cite{Bohnet2012CW,Meiser2010Holland1}. The spontaneous emission of the individual atoms has been neglected here, since the decay rate $\gamma$ of the independent atoms is much smaller than that of the cavity, i.e., $\kappa_a\gg\gamma$. In this bad-cavity limit, the cavity mode $\hat{a}$ can be adiabatically eliminated\,\cite{Meiser2010Holland2}. The emission of the cavity photons is thus characterized by the collective emission of the atomic ensemble, with $\hat{a}\propto\hat{J}_-^2$. The above Eq.\,(\ref{eq001}) can then be reduced to an effective master equation of collective radiance
\begin{align}\label{eq002}
\frac{d \hat{\rho}}{dt}=\gamma_p \sum_{n=1}^N \mathcal{L}[\hat{\sigma}_n^\dag]+ \Gamma_2 \mathcal{L}[\hat{J}_-^2],
\end{align}
%
where the two-atom decay emerges due to $\hat{a}\propto\hat{J}_-^2$, and the collective decay rate of the atoms is $\Gamma_2=4\lambda^2/\kappa_a$. As a comparison, we also consider a system containing only single-atom decay; in this case the last term of Eq.\,(\ref{eq002}) is replaced by $ \Gamma_1 \mathcal{L}[\hat{J}_-]$, where $\Gamma_1$ is the rate of the single-atom decay. This case can be easily realized in a Tavis-Cummings model in the bad-cavity limit\,\cite{Bohnet2012CW}.
To understand the behavior of the collective emission of the atomic ensemble more conveniently, we discuss the dynamics of the system in the collective basis $|J,M\rangle$, with the quantum numbers $J=0,1,2,\dots,N/2$ (for an even number $N$) and $M=-J,-J+1,\dots,J-1,J$. The state $|J,M\rangle$ is the joint eigenstate of the operators $\hat{\bold{J}}^2$ and $\hat{J}_z$, with $\hat{\bold{J}}^2|J,M\rangle=J(J+1)|J,M\rangle$ and $\hat{J}_z|J,M\rangle=M|J,M\rangle$, where $\hat{J}_j=\sum_{n=1}^N \hat{\sigma}_n^j/2$ ($j=x,y,z$), $\hat{\sigma}_n^x=\hat{\sigma}_n^\dag + \hat{\sigma}_n$, $\hat{\sigma}_n^y=i(\hat{\sigma}_n - \hat{\sigma}_n^\dag)$, and $\hat{\sigma}_n^z=\hat{\sigma}_n^\dag\hat{\sigma}_n - \hat{\sigma}_n\hat{\sigma}_n^\dag $. Small values of $J$ correspond to subradiant subspaces. The discrete Dicke state space for the collective atomic states is shown in Fig.\,\ref{fig1}(b). In this state space, the single-atom and two-atom collective decays give rise to transitions with quantum-number differences $\delta M = M-M'=-1$ (green arrows) and $\delta M=-2$ (blue arrows) within a ladder of constant $J$, respectively. The pumping of the individual atoms generates transitions with $\delta M=1$ within the ladder of $J$ and its adjacent ladders of $J\pm1$, corresponding to the red arrows in Fig.\,\ref{fig1}(b).
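As an illustration of Eq.\,(\ref{eq002}), the following minimal sketch builds the two-atom-decay master equation for a very small ensemble ($N=3$) in the full $2^N$-dimensional Hilbert space and extracts the steady state from the null space of the Liouvillian. It uses a brute-force NumPy construction rather than the permutational-invariant solver used later in the text, and the rates are purely illustrative.

```python
import numpy as np

N = 3                                     # small ensemble for illustration only
gamma_p, Gamma2 = 0.5, 1.0                # illustrative pump and decay rates

sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma = |g><e|, with |e> = (1,0)^T
I2 = np.eye(2)

def embed(op, n):
    """Single-atom operator acting on site n of the N-atom register."""
    out = np.array([[1.0]])
    for m in range(N):
        out = np.kron(out, op if m == n else I2)
    return out

sms = [embed(sm, n) for n in range(N)]
Jm = sum(sms)                             # collective lowering operator J_-
d = 2 ** N
Id = np.eye(d)

def dissipator(O, rate):
    """Vectorized (column-stacking) Lindblad dissipator rate * D[O]."""
    OdO = O.conj().T @ O
    return rate * (np.kron(O.conj(), O)
                   - 0.5 * np.kron(Id, OdO)
                   - 0.5 * np.kron(OdO.T, Id))

L = dissipator(Jm @ Jm, Gamma2)               # two-atom collective decay
for s in sms:
    L = L + dissipator(s.conj().T, gamma_p)   # incoherent repumping, atom by atom

# Steady state = null vector of the Liouvillian.
w, v = np.linalg.eig(L)
rho = v[:, np.argmin(np.abs(w))].reshape((d, d), order='F')
rho = rho / np.trace(rho)
rho = (rho + rho.conj().T) / 2

Jp = Jm.conj().T
collective = np.real(np.trace(Jp @ Jm @ rho))
individual = sum(np.real(np.trace(s.conj().T @ s @ rho)) for s in sms)
R_f = (collective - individual) / N
print("collective =", collective, " individual =", individual, " R_f =", R_f)
```

This brute-force route scales exponentially with $N$, which is why permutationally invariant methods are needed for ensembles of $N\sim100$ atoms.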
\section{Results}
To investigate in detail the influence of the two-atom decay on the steady-state collective radiance of the atomic ensemble, we calculate the average occupation of the emitters and atom-atom correlation $R_f$ in the two cases of single-atom decay and two-atom decay. The atom-atom correlation is given by
\begin{align}\label{eq003}
R_f=\frac{1}{N}\langle \hat{J}_+ \hat{J}_-\rangle- \frac{1}{N}\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle,
\end{align}
%
where $\langle \hat{J}_+ \hat{J}_-\rangle$ and $\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle$ describe average collective occupation of the atoms and the total population from $N$ individual atoms, respectively. The effect of atom-atom correlations has been included in the collective occupation term $\langle \hat{J}_+ \hat{J}_-\rangle$. Under this definition, $R_f=0$ indicates an uncorrelated feature between the atoms, where the collective occupation of the atomic ensemble is the sum of the populations of $N$ individual atoms, i.e., $\langle \hat{J}_+ \hat{J}_-\rangle=\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle$. $R_f>0$, i.e., $\langle \hat{J}_+ \hat{J}_-\rangle>\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle$, means that the atom-atom correlation increases the collective population of the atoms. $R_f<0$, i.e., $\langle \hat{J}_+ \hat{J}_-\rangle<\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle$, corresponds to the suppression of the collective population of the atoms by atom-atom correlation.
\begin{figure}
\includegraphics[width=8.6cm]{fig2.pdf}\\
\caption{Averaged population of the atoms in steady state vs $\gamma_p/\Gamma_1$ and $\gamma_p/\Gamma_2$ in the cases of including only the single-atom decay (a and b) and two-atom decay (c and d), respectively. The blue and gray shaded areas indicate the increase and suppression of the population of the atoms by the atom-atom correlation, respectively, and the white areas indicate an uncorrelated feature between the atoms. Insets: atom-atom correlation $R_f$ vs $\gamma_p/\epsilon$ for different $N$. The solid lines and dots correspond to the analytical and numerical results, respectively. System parameters are (a, c) $N=10$ and (b, d) $N=100$.}\label{fig2}
\end{figure}
We first consider the case of single-atom decay, where the equations of motion from the master equation Eq.\,(\ref{eq002}) are given by
%
\begin{subequations}
\begin{align}\label{eq004}
&\frac{d}{dt}\!\langle \hat{\sigma}_1^z\rangle \!=\!-(\!\gamma_p\!+\!\Gamma_1\!)\!\langle \hat{\sigma}_1^z\rangle\!\!-\!2(\!N\!\!-\!1\!)\Gamma_1\!\langle\hat{\sigma}_1^\dag\hat{\sigma}_2\rangle\!+\!(\!\gamma_p\!-\!\Gamma_1\!),\\%\tag{3a}
&\frac{d}{dt}\langle \hat{\sigma}_1^\dag \hat{\sigma}_2\rangle=-(\gamma_p+\Gamma_1)\langle\hat{\sigma}_1^\dag\hat{\sigma}_2\rangle+\frac{\Gamma_1}{2}\langle \hat{\sigma}_1^z\hat{\sigma}_2^z\rangle+\frac{\Gamma_1}{2}\langle \hat{\sigma}_1^z\rangle\nonumber\\
&~~~~~~~~~~~~~~~+\Gamma_1(N-2)\langle \hat{\sigma}_1^z\hat{\sigma}_2\hat{\sigma}_3^\dag\rangle,\\%\tag{3b}
&\frac{d}{dt}\langle \hat{\sigma}_1^z\hat{\sigma}_2^z\rangle=-2(\gamma_p+\Gamma_1)\langle \hat{\sigma}_1^z\hat{\sigma}_2^z\rangle+2(\gamma_p-\Gamma_1)\langle \hat{\sigma}_1^z\rangle\nonumber\\
&~~~~~~~~~~~~~~~+4\Gamma_1\langle \hat{\sigma}_1^\dag \hat{\sigma}_2 \rangle-4\Gamma_1 (N-2)\langle\hat{\sigma}_1^z\hat{\sigma}_2\hat{\sigma}_3^\dag\rangle.
\end{align}
\end{subequations}
Here we have considered $\langle \hat{\sigma}_n^\dag\hat{\sigma}_{n'}\rangle=\langle \hat{\sigma}_1^\dag\hat{\sigma}_{2}\rangle$ for all $n\neq n'$ due to the symmetry of the expectation values under particle exchange\,\cite{Meiser2010Holland1, Meiser2010Holland2,Shankar2021RJ,Xu2016JS}. The above Eqs.\,(4a)-(4c) can be reduced to a closed set of equations by factorizing the third-order expectation values as $\langle\hat{\sigma}_1^z\hat{\sigma}_2\hat{\sigma}_3^\dag\rangle \approx \langle\hat{\sigma}_1^z\rangle \langle \hat{\sigma}_1^\dag\hat{\sigma}_2\rangle$\,\cite{Meiser2010Holland1, Meiser2010Holland2,Shankar2021RJ,Xu2016JS}. This factorization might cause a partial decorrelation between atoms, but a complete decorrelation cannot occur, since the term $\langle \hat{\sigma}_1^\dag\hat{\sigma}_2\rangle$ includes the effect of atom-atom correlation. In the steady-state limit, we obtain
\begin{subequations}
\begin{align}\label{eq005}
&\langle \hat{\sigma}_1^\dag\hat{\sigma}_2\rangle=-\frac{c_2}{2c_1}+\frac{\sqrt{c_2^2-4c_1c_3}}{2c_1},\\%\tag{5a}\\
&\langle \hat{\sigma}_1^z\rangle=\frac{\gamma_p-\Gamma_1}{\gamma_p+\Gamma_1}+\frac{\Gamma_1(N-1) (c_2-\sqrt{c_2^2-4c_1c_3})}{c_1(\gamma_p+\Gamma_1)}
\end{align}
\end{subequations}
where
\begin{subequations}
\begin{align}\label{eq006}
&c_1=\frac{4(N-1)(N-2)\Gamma_1^2}{(\gamma_p+\Gamma_1)^2},\\%\tag{6a}\\
&c_2=2+\frac{2N\Gamma_1}{\gamma_p+\Gamma_1}-\frac{2\Gamma_1(2N-3)(\gamma_p-\Gamma_1)}{(\gamma_p+\Gamma_1)^2},\\%\tag{6b}\\
&c_3=\frac{2\Gamma_1(\Gamma_1-\gamma_p)}{(\gamma_p+\Gamma_1)^2}.
\end{align}
\end{subequations}
The steady-state light emission is then obtained by inserting Eqs.\,(5a) and (5b) into $\langle \hat{J}_+\hat{J}_-\rangle=N(\langle \hat{\sigma}_1^z\rangle-1)/2+N(N-1)\langle \hat{\sigma}_1^\dag\hat{\sigma}_2\rangle$ and $\sum_{n=1}^N\langle \hat{\sigma}_n^\dag \hat{\sigma}_n\rangle=N(\langle \hat{\sigma}_1^z\rangle+1)/2$. Figures \ref{fig2}(a)-\ref{fig2}(b) show the population of the atoms and the atom-atom correlation in the ensemble, obtained from the analytical solutions and by numerically solving the master equation Eq.\,(\ref{eq002}). Here, Eq.\,(\ref{eq002}) can be solved directly with the permutational-invariant quantum solver\,\cite{Shammah2018AL} in QuTiP\,\cite{Johansson2012NN}. The very good agreement between the analytical solutions and the fully numerical simulations demonstrates the validity of the above approximation.
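The closed-form solution of Eqs.\,(5)-(6) is straightforward to evaluate numerically. The sketch below does so for illustrative rates in the weak-pumping regime; one can check that $\langle\hat{\sigma}_1^z\rangle$ comes out close to zero there, consistent with the nearly equal excited- and ground-state populations in the subradiance regime.

```python
import numpy as np

def steady_state_single_atom(N, gamma_p, Gamma1):
    """Evaluate the closed-form steady state, Eqs. (5)-(6), single-atom decay."""
    g, G = gamma_p, Gamma1
    c1 = 4 * (N - 1) * (N - 2) * G**2 / (g + G)**2
    c2 = 2 + 2 * N * G / (g + G) - 2 * G * (2 * N - 3) * (g - G) / (g + G)**2
    c3 = 2 * G * (G - g) / (g + G)**2
    root = np.sqrt(c2**2 - 4 * c1 * c3)
    ss = (-c2 + root) / (2 * c1)                       # <sigma_1^+ sigma_2^->
    sz = (g - G) / (g + G) + G * (N - 1) * (c2 - root) / (c1 * (g + G))
    collective = N * (sz - 1) / 2 + N * (N - 1) * ss   # <J_+ J_->
    individual = N * (sz + 1) / 2                      # sum_n <sigma_n^+ sigma_n^->
    return collective, individual, sz, ss

# Illustrative weak-pumping point (rates are not fitted to any experiment).
coll, ind, sz, ss = steady_state_single_atom(N=100, gamma_p=0.2, Gamma1=1.0)
print("R_f =", (coll - ind) / 100, "  <sigma_z> =", sz)
```

Here `ss` is by construction the root of $c_1 x^2 + c_2 x + c_3 = 0$ chosen in Eq.\,(5a), and a negative value of `ss` (hence $R_f<0$) signals subradiance.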
\begin{figure}
\includegraphics[width=8cm]{fig3.pdf}\\
\caption{ Mean atomic inversion $\langle \hat{\sigma}_1^z\rangle$ and squeezing parameter $\xi^2$ vs (a and b) $\gamma_p/\Gamma_1$ and (c and d) $\gamma_p/\Gamma_2$ when $N=100$. The blue and yellow dots correspond to the cases of single-atom decay and two-atom decay, respectively.}\label{fig3}
\end{figure}
%
Now, let us consider the case of two-atom decay. In Figs.\,\ref{fig2}(c)-\ref{fig2}(d), we present the population of the atoms and atom-atom correlation in the ensemble, obtained by numerically solving the master equation Eq.\,(\ref{eq002}).
Comparing Figs.\,\ref{fig2}(a)-\ref{fig2}(b) with \ref{fig2}(c)-\ref{fig2}(d) shows that the two-atom decay can significantly suppress the steady-state collective population of the atoms over a wider parameter regime, leading to an expansion of the subradiance region. This is because the system with two-atom decay relaxes to lower-energy states faster than the one with single-atom decay. More energy is needed to repump the atoms to their excited states in the system with two-atom decay, compared with the case of single-atom decay, as shown in Figs.\,\ref{fig3}(a) and \ref{fig3}(c). In the subradiance regime governed by the collective decay, i.e., single-atom or two-atom decay, the system is in an entangled state, and the mean populations of the excited and ground states of the atoms are almost equal, as shown in Fig.\,\ref{fig3}. Here the entanglement can be detected via the witness
\begin{align}\label{eq007}
&\xi^2=\frac{2[(\Delta\hat{J}_x)^2+(\Delta\hat{J}_y)^2+(\Delta\hat{J}_z)^2]}{N},
\end{align}
with $(\Delta\hat{J}_j)^2=\langle \hat{J}_j^2\rangle-\langle\hat{J}_j\rangle^2$ ($j=x,y,z$); the spin-squeezing parameter $\xi^2<1$ indicates that entanglement is established.
\begin{figure}
\includegraphics[width=8.6cm]{fig4.pdf}\\
\caption{Population distribution of the system states on the Dicke ladder for different $\gamma_p/\Gamma_2$ when $N=100$: steady states of (a-c) subradiance, (d-g) superradiance, and (h,i) uncorrelated radiance. Insets: Enlarged region of the population distribution.}\label{fig4}
\end{figure}
In Figs.\,\ref{fig4}(a)-\ref{fig4}(i), we show the population distribution of the system states on the Dicke ladder in the case of two-atom decay. In the subradiance regime [see Figs.\,\ref{fig4}(a)-\ref{fig4}(c)], the system evolves into states with smaller and smaller $J$, since the pumping mainly drives the adjacent-ladder transitions $|J\rangle\to|J-1\rangle$ when $M<0$. The populations of the excited and ground states of the individual atoms in this steady state are almost equal [also see Fig.\,\ref{fig3}(c)], and the collective population of the atoms is suppressed by the atom-atom correlation. This steady state generates a suppressed emission. In the superradiance regime [see Figs.\,\ref{fig4}(d)-\ref{fig4}(g)], the collective two-atom decay dominates for the states with large $J$, while the repumping dominates for the states with small $J$. In this steady state, the population of the excited state of the atoms is greater than that of the ground state [also see Fig.\,\ref{fig3}(c)], and the collective population of the atoms is increased by the atom-atom correlation. The population distribution of the system states on the Dicke ladder for large $J$ also demonstrates that the two-atom decay generates transitions with quantum-number differences $\delta M=-2$ within a ladder of constant $J$, as shown in the insets of Figs.\,\ref{fig4}(d)-\ref{fig4}(f). In the uncorrelated-radiance regime, corresponding to Figs.\,\ref{fig4}(h) and \ref{fig4}(i), almost all atoms are repumped to their excited states due to the strong pumping rate. This is in agreement with the result shown in Fig.\,\ref{fig3}(c).
\begin{figure}
\includegraphics[width=8.6cm]{fig5.pdf}\\
\caption{(a and b) Equal-time second-order correlation function $g^{(2)}_i(0)$ ($i=1,2$) and (c and d) atom-atom correlation $R_f$ vs $\gamma_p/\epsilon$ for different $\Gamma_i/\epsilon$. Panels (a,c) and (b,d) correspond to the cases of single-atom decay and two-atom decay, respectively. Other system parameters are $N=100$ and $\epsilon/2\pi=10\,{\rm Hz}$. }\label{fig5}
\end{figure}
The collective radiance in steady state originates from the competition between the collective decay and the repumping of the atoms. In the weak-pumping regime, the system balances the collective decay against the repumping through a suppressed emission of the atomic ensemble, leading to steady-state subradiance. In the intermediate-pumping regime, the pumping rate is increased, and the balance between the collective decay and the repumping is realized through an enhanced collective emission of the atoms, which leads to steady-state superradiance. In the strong-pumping limit, the strong repumping keeps a large number of atoms continuously in their excited states, since the pumping rate is much larger than that of the two-atom decay. This leads to uncorrelated radiance of the atomic ensemble.
Next, we investigate the correlation characteristics of the photons emitted from the atomic ensemble. In Sec.\,II, we discussed that the cavity mode can be adiabatically eliminated in the bad-cavity limit (i.e., $\kappa_a\gg\gamma$), so that the emission of the cavity photons is characterized by the collective emission of the atomic ensemble. The correlation functions of the emitted photons can then be obtained by calculating the atomic correlation functions. Thus, in the case including only the single-atom decay, the equal-time second-order correlation function\,\cite{Glauber1963} in the steady-state limit is
$g^{(2)}_1(0)=\langle \hat{J}_+\hat{J}_+\hat{J}_-\hat{J}_-\rangle/\langle \hat{J}_+\hat{J}_-\rangle^2$\,\cite{Meiser2010Holland2}. However, this function is not suitable for the system with two-atom decay, where the cavity field obeys $\hat{a}\propto\hat{J}_-^2$. In the case including only two-atom decay, the correlation function can be defined as
\begin{align}\label{eq008}
&g^{(2)}_2(0)=\frac{\langle \hat{J}_+^2 \hat{J}_+^2\hat{J}_-^2\hat{J}_-^2\rangle}{\langle \hat{J}_+^2\hat{J}_-^2\rangle^2}.
\end{align}
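To make Eq.~(\ref{eq008}) concrete, here is a minimal numerical sketch (not the full model of the paper: it restricts the dynamics to the symmetric maximal-$J$ Dicke subspace and replaces the individual repumping by a collective term $\mathcal{D}[\hat J_+]$, with purely illustrative rates):

```python
import numpy as np

def dicke_ladder_ops(N):
    """Ladder operators J_+, J_- for spin J = N/2 in the symmetric
    Dicke subspace. Basis index k = 0..N corresponds to M = J - k."""
    j = N / 2.0
    k = np.arange(N)
    amp = np.sqrt(j * (j + 1) - (j - k) * (j - k - 1))
    Jm = np.diag(amp, -1)          # lowers M by one
    return Jm.T.copy(), Jm         # (J_+, J_-)

def dissipator(c):
    """Superoperator of D[c]rho = c rho c+ - {c+c, rho}/2 in the
    column-stacking convention vec(A rho B) = (B^T kron A) vec(rho)."""
    d = c.shape[0]
    I = np.eye(d)
    cdc = c.conj().T @ c
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(I, cdc)
            - 0.5 * np.kron(cdc.T, I))

def steady_state(L, d):
    """Steady state = null vector of the Liouvillian, trace-normalized."""
    _, _, vh = np.linalg.svd(L)
    rho = vh[-1].conj().reshape((d, d), order='F')
    rho = rho / np.trace(rho)
    return 0.5 * (rho + rho.conj().T)

N, Gamma2, gamma_p = 8, 1.0, 3.0          # illustrative values only
Jp, Jm = dicke_ladder_ops(N)
# toy Liouvillian: two-atom decay D[J_-^2] + collective repumping D[J_+]
L = Gamma2 * dissipator(Jm @ Jm) + gamma_p * dissipator(Jp)
rho = steady_state(L, N + 1)

Jp2, Jm2 = Jp @ Jp, Jm @ Jm
g2 = (np.trace(rho @ Jp2 @ Jp2 @ Jm2 @ Jm2).real
      / np.trace(rho @ Jp2 @ Jm2).real ** 2)
```

A solver for the full permutation-symmetric dynamics would treat the individual repumping exactly; the sketch only illustrates how $g^{(2)}_2(0)$ is evaluated from a steady-state density matrix.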
In Figs.\,\ref{fig5}(a)-\ref{fig5}(d), we plot the influence of the repumping on the correlation function. It shows that nearly coherent emitted photons can be obtained in the superradiance regime when only single-atom decay is considered, but not in the case of two-atom decay. In the latter case, the emitted photons of the steady state show bunching in all three radiance regimes. The dips of the correlation function correspond to the parameter regime of maximum superradiance in both cases. In addition, in the case of two-atom decay there exists a peak of superbunching in the superradiance regime, which is absent in the case of single-atom decay. The dips and peaks correspond to the range of the superradiance regime, so their positions shift with increasing pumping rate.
\section{Discussions and Conclusions}
Regarding the experimental implementation, a superconducting circuit is an ideal candidate for realizing the system with two-atom decay. We consider a circuit QED system consisting of two parametrically coupled superconducting resonators $\hat{a}$ and $\hat{b}$\,\cite{Chang2020SF, Vrajitoarea2020HG}, with frequencies $\omega_a$ and $\omega_b$, respectively. The parametric coupling between the two resonators, with coupling strength $\lambda_{ab}$, can induce the conversion between a single microwave photon in resonator $\hat{a}$ and a microwave photon pair in resonator $\hat{b}$. An ensemble of $N$ identical two-level systems (e.g., qubits or ultracold atoms) with frequency $\omega_{\sigma}$ is coupled to the resonator $\hat{b}$ with the collective coupling strength $\lambda_{b\Gamma}=\sqrt{N}\lambda_{b\sigma}$\,\cite{Kakuyanagi2016MD,Hattermann2017BL,Kubo2011OB, Amsuss2011KN,Kubo2012DD,Putz2014KA,Astner2017NP}, where $\lambda_{b\sigma}$ is the coupling strength between an individual spin and resonator $\hat{b}$, and the frequency of resonator $\hat{b}$ is far greater than that of the two-level systems. Under the conditions $\omega_a\approx 2\omega_{\sigma}$ and $\omega_{b}\gg \omega_{\sigma}$, an effective resonant transition between two excited two-level systems and a single photon of resonator $\hat{a}$ can be realized, with transition rate $\lambda=\lambda_{ab} \lambda_{b\Gamma}^2/[N(\omega_b-\omega_{\sigma})^2]$\,\cite{Qin2021MJ}. Considering the dissipation of the system through coupling to a reservoir, a new two-atom decay channel emerges and the single-atom decay is suppressed. Here, the subsequent loss of microwave photons from the system leads to the collective decay of the ensemble of two-level systems.
The pumping on the two-level systems can be achieved by optically driving a Raman transition from the ground state $|g\rangle$ to an auxiliary excited state that rapidly decays to the excited state $|e\rangle$\,\cite{Bohnet2012CW}. The correlation function of the emitted microwave photons can be measured by applying quadrature amplitude detectors\,\cite{Bozyigit2011LS, Lang2011BE,Brown1956Twiss}. In this design, we can obtain a correlation of emitted photons $g_2^{(2)}\approx6.22$ with feasible experimental parameters ($N=100$, $\lambda_{ab}/2\pi=\lambda_{b\Gamma}/2\pi=20\,{\rm MHz}$, $(\omega_b-\omega_{\sigma})/2\pi=200\,{\rm MHz}$, $\kappa_a/2\pi=1.6\,{\rm MHz}$, $\kappa_b/2\pi=10\,{\rm kHz}$, $\lambda/2\pi=2\,{\rm kHz}$, $\Gamma_2=4\lambda^2/\kappa_a=2\pi\times10\,{\rm Hz}$, and $\gamma_p/2\pi=1\,{\rm kHz}$)\,\cite{Vrajitoarea2020HG,Kakuyanagi2016MD,Hattermann2017BL, Kubo2011OB}, where $\kappa_a$ and $\kappa_b$ are the decay rates of resonators $\hat{a}$ and $\hat{b}$, respectively. Note that our model is not limited to a particular architecture and could be implemented or adapted in a variety of platforms, such as a waveguide QED system\,\cite{Wang2020JK}.
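As a quick arithmetic check of the quoted rates (a back-of-the-envelope script; all frequencies are entered as $\omega/2\pi$ in Hz, so the $2\pi$ factors cancel in the ratios):

```python
# All frequencies below are quoted as omega/2pi, in Hz.
lam_ab = 20e6      # lambda_ab / 2pi
lam_bG = 20e6      # lambda_bGamma / 2pi
N = 100
delta = 200e6      # (omega_b - omega_sigma) / 2pi
kappa_a = 1.6e6    # kappa_a / 2pi

# lambda = lambda_ab * lambda_bGamma^2 / [N (omega_b - omega_sigma)^2]
lam = lam_ab * lam_bG**2 / (N * delta**2)
Gamma2 = 4.0 * lam**2 / kappa_a
print(lam, Gamma2)   # ~2000 Hz and ~10 Hz: lambda/2pi = 2 kHz, Gamma_2/2pi = 10 Hz
```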
In summary, we have investigated the collective radiance characteristics of an atomic ensemble with two-atom decay. A system with two-atom decay relaxes to lower energy states faster than one with single-atom decay. As a result, the two-atom decay significantly suppresses the steady-state collective radiance of the atomic ensemble in a wide parameter regime, and expands the region of steady-state subradiance of the atoms. In particular, compared with the case of single-atom decay, where nearly coherent emitted photons can be obtained in the superradiance regime, the emitted photons of the steady state show only bunching in all three radiance regimes when only the two-atom decay is considered.
We thank Dr. C.-S. Hu and Q.-Y. Qiu for helpful discussions. This work is supported by the National Key Research and Development Program of China grant 2021YFA1400700, the National Natural Science Foundation of China (Grants No.\,11974125, No.\,12205109) and the China Postdoctoral Science Foundation No. 2021M701323.
When describing quantum systems via the density matrix one assumes that
the system is located in pure states $\Psi_{\alpha}^{i}$ with different
probabilities $\lambda_{i}$. The representation of the eigenvectors of the
density matrix on Argand plots is the ultimate goal of partial wave analysis.
The input data are the moments $t_{\gamma}$. We propose to obtain the region of
solutions using a Monte-Carlo technique. The scheme of the analysis is the
following:
\begin{enumerate}
\item
A random unitary matrix $\Psi_{\alpha}^{i}$ is generated. We generate the
columns of this matrix as a set of independent isotropically distributed
random complex vectors, with subsequent Hilbert-Schmidt orthogonalization.
An isotropic (Gaussian) distribution of an $n$-dimensional vector can be
obtained by generating its components as independent random values with a
Gaussian distribution:
$$dp=e^{-{x_{1}}^{2}}dx_{1}\ ...\ e^{-{x_{n}}^{2}}dx_{n}=e^{-r^{2}}dV.$$
The unitary matrices generated by this method are uniformly distributed over
the unitary group invariant volume\footnote{The unitary transformation
transfers
one orthonormal basis to another. The invariant measure on unitary group can be
defined as a product of spherical volume for one vector from the
basis and the measure on subgroup leaving this vector invariant (little
subgroup). Using this recurrent definition and isotropy of generated vectors,
one can easily prove the above statement by induction.}.
\item
The matrix $M_{\gamma i}$ is computed, and system (\ref{tMl}) is solved for
$\lambda_{i}$.
\item
The solution is accepted, if all $\lambda_{i}>0$, and is discarded otherwise.
\item
The eigenvalues $\lambda_{i}$ are displayed on intensity plots, and the
eigenvectors $\Psi_{\alpha}^{i}$ on Argand plots.
\end{enumerate}
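Step 1 of the scheme above can be sketched compactly (a NumPy sketch under our reading of the text; the function name is ours):

```python
import numpy as np

def random_unitary(n, seed=None):
    """Random unitary matrix via the scheme in the text: columns drawn
    as independent isotropic complex Gaussian vectors, then
    orthonormalized column by column (Gram-Schmidt)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    u = np.zeros((n, n), dtype=complex)
    for i in range(n):
        v = z[:, i].copy()
        for k in range(i):
            # subtract the projection onto the already-built columns
            v -= (u[:, k].conj() @ z[:, i]) * u[:, k]
        u[:, i] = v / np.linalg.norm(v)
    return u
```

By the inductive argument in the footnote, matrices produced this way are uniformly distributed with respect to the invariant measure on the unitary group.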
\underline{Remarks.}
1) A phase rotation of the eigenvectors $\Psi_{\alpha}^{i}\to e^{i\varphi_{i}}
\Psi_{\alpha}^{i}$ does not change the density matrix. One can therefore choose
the phases of the eigenvectors such that the component $\Psi_{0}^{i}$
of each eigenvector is a real positive number (the phase is measured from the
S-wave). Such unitary $N\times N$ matrices are described by $N^{2}-N$
independent real parameters, which together with the $N$ eigenvalues
$\lambda_{i}$ give the $N^{2}$ degrees of freedom contained in an arbitrary
Hermitian matrix $\rho_{\alpha\beta}$.
2) A change of numeration (permutation) of the eigenvalues and eigenvectors
conserves the density matrix. The region of solutions is symmetric with
respect to these permutations, so the projections of the region onto the
Argand plots of each eigenvector coincide. One can order the eigenvalues,
$\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{N}$, and permute
the eigenvectors accordingly. Such ordering may lead to breaks of continuity in
the solutions.
The breaks appear when a collision of eigenvalues occurs on a smooth solution
(fig.1). The ordering transfers the continuous path $ab$ to the non-continuous
path $ab'$. One should take this into account when selecting the
solutions: not only smooth paths inside the ambiguity region are allowed,
but also those paths which can be smoothly continued through the points with
coincident eigenvalues after a permutation of the corresponding eigenvectors.
\begin{quotation}{\small
In the vicinity of the point $A$ the eigenvectors $\Psi_{1}$ and $\Psi_{2}$
cannot be close because they are orthogonal. The points $A$ and $A'$ cannot
coincide on all Argand plots. Hence the break of curves on Argand plots
actually
occurs in transition through the point $A$ along the path $ab'$. One can
connect
the points $A$ and $A'$ by continuous path, along which the density matrix is
constant. At $\lambda_{1}=\lambda_{2}$ any linear combinations of $\Psi_{1}$
and
$\Psi_{2}$ are also eigenvectors. $\Psi_{1}$ and $\Psi_{2}$ can be permuted
by a unitary transformation that does not touch the other $\Psi_{i}$ (this
can be done continuously, because the group $U(2)$ is connected). The rotation
$AA'$ is performed at constant $\lambda_{1}=\lambda_{2}$. Hence, though the
transition $aAA'b'$ is continuous, it cannot depend analytically on
energy.
}\end{quotation}
3) Let the first $N$ moments of the distribution be measured in an experiment
($l\leq L_{max},\ N=(L_{max}+1)^{2}$), and let the unitary matrices $\Psi$
have size $N\times N$. We solve the first $N$ equations of (\ref{tMl})
for the $N$ unknowns $\lambda_{i}$. The density matrices obtained as solutions
give distributions whose first $N$ moments exactly coincide with the
measured ones. The other moments $t_{\gamma},\ \gamma >N$, can be obtained by
substituting the values $\Psi_{\alpha}^{i},\lambda_{i}$ into (\ref{tMl}).
Generally these moments do not vanish. One can show from the properties of the
coefficients $D_{\alpha\beta\gamma}$ that there is a finite number of non-zero
moments: $L_{max}<l\leq2L_{max}$.
Standard practice in partial wave analysis is to treat the higher
(non-measured) moments as zero within the experimental precision. One can
impose conditions on these moments
$$|t_{lm}|<\delta t,\qquad L_{max}<l\leq2L_{max},$$
where $\delta t$ is a statistical error of the moment.
$$|\delta t_{\gamma}|^{2}={{DY_{\gamma}}\over{N_{\mbox{\small evts}}}},$$
where $N_{\mbox{\small evts}}$ is the total number of events from which the
moment is measured,
$$t_{\gamma}={{1}\over{N_{\mbox{\small evts}}}}
\sum\limits_{i=1}^{N_{\mbox{\small evts}}}
Y_{\gamma}(\vec n_{i}),$$
and $DY_{\gamma}$ is the dispersion of the random value $Y_{\gamma}(\vec n)$,
$$DY_{\gamma}=\langle|Y_{\gamma}|^{2}\rangle-|\langle Y_{\gamma}\rangle|^{2}
=\int d^{2}n\ I|Y_{\gamma}|^{2}-|t_{\gamma}|^{2},$$
$$|\delta t_{\gamma}|^{2}\leq {{1}\over{N_{\mbox{\small evts}}}}
\int d^{2}n\ I|Y_{\gamma}|^{2}.$$
Averaging both sides over the quantum number $m$, we have
$${{1}\over{2l+1}}\sum_{m}|\delta t_{lm}|^{2}
\leq{{1}\over{4\pi N_{\mbox{\small evts}}}},\quad
\mbox{i.e.\ } \overline{|\delta t_{lm}|}\leq{{1}
\over{\sqrt{4\pi N_{\mbox{\small evts}}}}}
=\delta t.$$
When $|t_{\gamma}|>\delta t$, the deviation of $t_{\gamma}$ from zero is
statistically significant.
We note that our estimate for $\delta t$ is independent of $l$: all moments are
measured with equal precision. In practice one can reliably measure a great
number of moments (limitations are imposed only by the finite angular
resolution of the equipment: $l<2\pi/\Delta\theta_{\mbox{\small resol}}$).
The contributions of the moments with high $l$ to the distribution $I(\vec n)$
are fast-oscillating functions. To detect the $l$-th harmonic one should fill
2D histograms containing at least $l^{2}$ bins. Due to limited statistics,
only a few events will fall into each bin. Hence the presence of high harmonics
cannot be detected in the angular distribution itself. Nevertheless, the
moments of these harmonics can be precisely measured.
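The estimate $\delta t=1/\sqrt{4\pi N_{\mbox{\small evts}}}$ is easy to verify by simulation. A small sketch for $t_{20}$ under the isotropic distribution, where the bound is saturated (the event and trial counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_evts, n_trials = 4000, 400
delta_t = 1.0 / np.sqrt(4.0 * np.pi * n_evts)   # predicted moment error

estimates = []
for _ in range(n_trials):
    c = rng.uniform(-1.0, 1.0, n_evts)          # cos(theta) of isotropic events
    Y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * c**2 - 1.0)
    estimates.append(Y20.mean())                # t_20 = (1/N) sum_i Y_20(n_i)
spread = np.std(estimates)                      # should be close to delta_t
```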
\vspace{1cm}
\underline{Example.}\quad Let us consider the distribution
\begin{equation}
I(\vec n)=(1-a)|Y_{0}|^{2}+a|Y_{10}|^{2}={{1}\over{4\pi}}
(1-a+3a\cos^{2}\theta).\label{Ia}
\end{equation}
The moments are
\begin{equation}
t_{0}={{1}\over{\sqrt{4\pi}}},\quad
t_{20}={{a}\over{\sqrt{5\pi}}},\quad
\mbox{others\ }t_{\gamma}=0.\label{ta}
\end{equation}
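These moments can be checked numerically; a quick sketch using midpoint integration over $c=\cos\theta$ (the azimuthal integral contributes a factor $2\pi$, since $I$, $Y_{00}$ and $Y_{20}$ do not depend on $\varphi$):

```python
import numpy as np

a = 1.0 / 6.0
edges = np.linspace(-1.0, 1.0, 200001)          # grid in c = cos(theta)
c = 0.5 * (edges[:-1] + edges[1:])              # midpoints
dc = edges[1] - edges[0]

I = (1.0 - a + 3.0 * a * c**2) / (4.0 * np.pi)  # the distribution I(n)
Y00 = 1.0 / np.sqrt(4.0 * np.pi)
Y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * c**2 - 1.0)

t0 = 2.0 * np.pi * np.sum(I * Y00) * dc         # -> 1/sqrt(4 pi)
t20 = 2.0 * np.pi * np.sum(I * Y20) * dc        # -> a/sqrt(5 pi)
```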
Let the first 9 moments $t_{\gamma}$ be measured in an experiment
(S-, P- and D-wave moments)\footnote{The distribution can differ from
(\ref{Ia}) by contributions of higher moments $l\geq3$.}. The density matrix is
described by 81 real parameters, 72 of which describe the unitary matrix
$\Psi$ ($\Psi_{0}^{i}\in R_{+}$); the 9 eigenvalues $\lambda_{i}$ can be
obtained from system (\ref{tMl}).
Figures 2 and 3 show the result of analysis for the value $a=1/6$. The number
of random unitary matrices was 100000. In 1646 cases the solution of system
(\ref{tMl}) satisfied the condition $\lambda_{i}>0$. For 54 points the
additional condition $|t_{lm}|<10^{-2},\ l\geq3$ held true.
The solutions $\Psi$ fill a region in the group $U(9)$. In projection onto the
Argand plots this region maps into a set of related points. The density of
points in the figures estimates the ``thickness'' of the stratum projected
into the same area of the plot. A circle in the figures denotes the trivial
solution
\begin{eqnarray}
\lambda_{1}&=&5/6\quad\Psi_{0}^{1}=1,\quad\mbox{others\ }\Psi_{\alpha}^{1}=0
\quad\mbox{(S-wave)}\nonumber\\
\lambda_{2}&=&1/6\quad\Psi_{10}^{2}=1,\quad\mbox{others\ }\Psi_{\alpha}^{2}=0
\quad\mbox{(P-wave)}\nonumber\\
\lambda_{i}&=&0,\ i=3..9\nonumber
\end{eqnarray}
A few points fall in the vicinity of the trivial solution; most of the
solutions are located in the region of close eigenvalues
$\lambda_{i}=0...0.2$.
The points for which the additional condition $|t_{lm}|<10^{-2},\ l\geq3$,
is satisfied cover the same areas in the figures. The position of these
solutions is analogous to that of a thin layer in a sphere: an outer spherical
layer occupies a small fraction of the volume of the sphere, but covers the
same area in projection onto a plane.
\vspace{0.5cm}
\underline{Limiting cases.}
\underline{$a\to0$}\quad
The isotropic distribution ($a\to0$) is a singular case for this scheme of
analysis. When $t_{\gamma}=\delta_{\gamma0}/\sqrt{4\pi}$, the set of equal
eigenvalues $\lambda_{i}=1/N$ is a solution of (\ref{tMl}) for every $\Psi$.
In this case the density matrix is proportional to the unit matrix, $\rho_{\alpha
\beta}=\delta_{\alpha\beta}/N$; the isotropy of distribution (\ref{Irho})
follows from the property $\sum\limits_{m=-l}^{l}|Y_{lm}(\vec n)|^{2}=(2l+1)/4\pi$.
The solution $\lambda_{i}=1/N$ has no definite limit at $N\to\infty$ and is of
no physical value. There are also other solutions, e.g.\ the pure S-wave state,
or $$ \Psi=\left(\begin{array}{cccc}
1&0&0&\ldots\\
0&U(3)&0&\ldots\\
0&0&U(5)&\ldots\\
\ldots&\ldots&\ldots&\ldots\\
\end{array}\right)\quad ,\quad
\Lambda=\left(\begin{array}{cccc}
\lambda_{1}&0&0&\ldots\\
0&\lambda_{2}\cdot1_{3\times3}&0&\ldots\\
0&0&\lambda_{3}\cdot1_{5\times5}&\ldots\\
\ldots&\ldots&\ldots&\ldots
\end{array}\right)\quad ,
$$
the matrix $\Psi$ is block-diagonal with respect to $l$ (it mixes $m$ only),
and the corresponding $\lambda_{lm}$ are independent of $m$. System (\ref{tMl})
has these additional solutions only if the matrix $M_{\gamma i}$ is degenerate.
The values of the parameters $\Psi$ for which $M_{\gamma i}$ degenerates form
a set of zero measure in the unitary group. When generating matrices $\Psi$
uniformly distributed over the $U(N)$ volume, these additional solutions will
not be revealed.
The transition to the limit $a\to0$ is shown in fig.4. The distribution of
$\lambda_{i}$ contracts to the point $\lambda=1/9\ (N=9)$, and the points on
the Argand plots fill unit circles.
The presence of such singularities indicates that the whole set of solutions
of system (\ref{tMl}) possesses a complicated topological structure.
This structure can be inferred from the following examples (fig.5):
$$\mbox{a)}\quad
\left(\begin{array}{cc}1&1\\x&y\end{array}\right)
\left(\begin{array}{c} \lambda_{1}\\ \lambda_{2}\end{array}\right)=
\left(\begin{array}{c} 1\\ 0\end{array}\right)\quad\Rightarrow\quad
\left(\begin{array}{c} \lambda_{1}\\ \lambda_{2}\end{array}\right)=
{{1}\over{y-x}}\left(\begin{array}{c} y\\ -x\end{array}\right).
$$
At $x=y=0$ the solutions of the system (with positive $\lambda_{i}$) form
a segment
$(0<\lambda_{1}<1,\ \lambda_{2}=1-\lambda_{1})$. This set has greater dimension
($d=1$, a line) than the set of solutions at fixed $x\neq y$ ($d=0$, a point),
but smaller dimension than the set of solutions for all $x,y$ ($d=2$,
a surface). Singular solutions have zero measure in the set of all solutions
(on the hyperbolic paraboloid, fig.5a).
$$\mbox{b)}\quad
\left(\begin{array}{ccc}1&1&1\\x&-x&0\\y&0&-y\end{array}\right)
\left(\begin{array}{c} \lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\end{array}
\right)=
\left(\begin{array}{c} 1\\ 0\\ 0\end{array}\right).
$$
At fixed $x\neq0,y\neq0$ the solution is the point $\lambda_{i}=1/3$.
When $x=0$ or $y=0$, the dimension of the set of solutions increases by 1;
when $x=y=0$, by~2. The set of singular solutions has the same dimension as
the set of all non-singular ones ($d=2$).
In a uniform random generation of the parameters $(x,y)$ the singular solutions
will not be found. This matters only for singularities of type~b).
\vspace{0.5cm}
\underline{Limits of $a$.}\quad
Function (\ref{Ia}) is positive on the sphere at $-1/2<a<1$. For these
values of $a$ the problem considered has solutions.
Remarks.
1) At negative $a$ the point $\{ \lambda_{1}=1-a,\ \Psi_{1}=\mbox{S-wave};\
\lambda_{2}=a,\ \Psi_{2}=\mbox{P-wave}\}$ is no longer a positive
solution of the problem.
2) Non-negativity of the distribution is a necessary and sufficient condition
for the existence of a positive semi-definite density matrix, if the sizes of
the matrices are not bounded ($N\to\infty$). The necessity is obvious; the
sufficiency follows from the fact that any distribution $I(\vec n)\geq0$ can be
described by a pure state (see the first part of this paper, Section I), i.e.\
a density matrix with a single non-zero eigenvalue
\begin{equation}
\lambda_{1}=1,\ \lambda_{i}=0, i>1,\ \Psi_{1}(\vec n)=\sqrt{I(\vec n)}e^{i\varphi(\vec n)}.
\label{pur}
\end{equation}
3) At finite $N$ this condition is neither necessary nor sufficient. We study
a class of distributions which may differ by harmonics with numbers greater
than $N$. Even if $I(\vec n)<0$ at some point for the distribution considered,
positive functions can exist in the same class. On the other hand, pure states
of the form (\ref{pur}) have in general an infinite number of harmonics and are
not contained in the finite-dimensional class $\Psi_{\alpha}^{i}$ considered
here.
One can show that the set of $a$ values for which the problem has solutions in
the finite-dimensional class is connected, i.e.\ it is a segment
$a_{-}\leq a\leq a_{+}$, with $a_{+}\geq1$ for distribution (\ref{Ia}).
We will not determine exact values $a_{\pm}$.
When $a$ tends to the boundary points, the volume occupied by the solutions in
$U(N)$ tends to zero rapidly. The behaviour of this volume can be inferred from
the data given in Table~1.
\begin{table}
\caption{ Number of positive-definite solutions among 50000 random matrices
uniformly distributed in $U(9)$ }
\begin{center}\begin{tabular}{|c||c|c|c|c|c|c|c|c|}\hline
$a$&-1/4&-1/8&-1/16&0&1/16&1/8&1/6&1/4\\ \hline
$N_{\mbox{\small sol}}$&9&2192&14689&50000&14969&2770&823&75\\
\hline
\end{tabular}\end{center}
\end{table}
At large $|a|$, and also when a greater number of moments is involved in the
analysis, the Monte-Carlo based approach presented here becomes ineffective
(most candidate solutions are discarded by the positivity test
$\lambda_{i}>0$).
One might use more advanced methods to solve the system of inequalities
$\lambda_{i}(\Psi)>0$.
\subsection*{Discussion}
The pure states are particular cases of mixed states; hence the ambiguity
region for mixed states is wider than for pure ones. Even in the class of
solutions containing a finite number of harmonics, continuous transformations
of the density matrix exist which do not change the distribution.
One can fix the continuous ambiguity of the solutions only by imposing model
restrictions on the form of the density matrix. An illustration of this
statement can be found in \cite{Hansen}. This work presents a formalism used in
partial wave analysis of the reactions $\pi p\to(3\pi)p,\ Kp\to(K\pi\pi)p$. The
process is represented in the form of a decay sequence\footnote{It is assumed
that the process amplitude has a general form and depends on all quantum
numbers and kinematic variables. At this step the representation of the process
as a decay sequence is introduced for convenience and does not impose any
restrictions on the form of the amplitude.}
$$\mbox{ meson }+ p\to p+\mbox{ meson }(J^{P});$$
$$\mbox{ meson }(J^{P})\to\mbox{ meson }+\mbox{ di-meson };$$
$$\mbox{ di-meson }\to\mbox{ 2 mesons }.$$
Then assumptions about the amplitude (the eigenvectors of the density matrix)
are made, of which the most important are:
1. The amplitude dependence on the kinematic variables describing the decay of
the di-meson is introduced as a product of a Breit-Wigner function and barrier
factors.
2. The amplitude of the process is represented as a product of the $J^{P}$
state production amplitude and its decay amplitude. This assumption is exact
only if a single state $J^{P}$ is present.
As stated in that work, it is precisely these assumptions that fix the
continuous ambiguity of the partial wave analysis (assumption 1 partially,
assumption 2 completely).
\vspace{0.5cm}
The author is indebted to E.B.Berdnikov, I.A.Kachaev, D.I.Ryabchikov and
S.A.Sadovsky for helpful discussions.
\label{sec:intro}
Suppose we are given measurements $A_{i,j} \in \{0,1\}$ with $i,j \in \{1,\ldots,n\}$, and
with
\begin{equation}
\label{eqn:model1}
\begin{array}{lll}
A_{j,i} & =& A_{i,j}, \,\,\,\,\,\,\,\,\,\,\,\,\,\, \forall i,j \in\{1,\ldots,n\},\\
A_{i,i} &=& 0, \,\,\,\,\,\,\,\,\,\,\,\,\,\, \forall i \in\{1,\ldots,n\},\\
A_{i,j} &\sim &\text{Bernoulli}\left(\,\theta_{i,j}^*\right), \,\,\,\,\,\,\,\, \forall i < j, \,\,\, i,j \in \{1,\ldots,n\}, \\
\theta_{i,j}^* &=& f_0(\xi_i,\xi_j) \\
\xi_ i &\sim^{ \text{i.i.d} } & U[0,1], \,\,\,\,i \in \,\{1,\ldots,n\},
\end{array}
\end{equation}
where $f_0 \,:\, [0,1] \times [0,1] \,\rightarrow [0,1]$ is a function that might depend on $n$. Moreover, the indices $i $ and $j$ denote the nodes of a network with $n$ vertices, and $A_{i,j} \,\,=\,\,1$
indicates that there is an edge between $i$ and $j$. The goal is to estimate $\theta^*_{i,j}$, the probability that there is a link between $i$ and $j$, under structural assumptions of the function $f_0$.
The model described in (\ref{eqn:model1}) has attracted significant attention in the statistics and machine learning communities. This is due to the increasing amount of data that can be represented as a binary graph: for instance, emails between individuals, social network connections, financial networks where an edge indicates a transaction between individuals, and many more.
The goal of this paper is to study a class of methods that perform well in practice while enjoying attractive statistical properties under different classes of functions. We are particularly interested in settings where $f_0$ is piecewise H\"{o}lder, has bounded variation, or is an instance of the stochastic block model. Our idea is based on the connection between nonparametric regression and graphon estimation, see \cite{gao2015rate} for a discussion.
Roughly speaking, our approach proceeds as follows. We start by constructing a 2D grid graph $ G_{2D}$ with node set $V = \{1,\ldots,n\} \times \{ 1,\ldots,n \}$. To construct the set of edges we first find a permutation $\hat{\tau}$ of the set $\{1,\ldots,n\}$ (we discuss choices of $\hat{\tau}$ below). With this permutation in hand, the edge set of $G_{2D}$, namely $E $, is such that $(i,j) \in V$ is connected to $(i^{\prime},j^{\prime})\in V$ if and only if
\[
\left\vert \hat{\tau}^{-1}(i) - \hat{\tau}^{-1}(i^{\prime}) \right\vert \,+\, \left\vert \hat{\tau}^{-1}(j) - \hat{\tau}^{-1}(j^{\prime}) \right\vert \,=\, 1.\,\,\,\,\,\,\
\]
Using the graph $G_{2D}$, we define the total variation of $\theta \in \mathbb{R}^{n \times n }$, with respect to such graph, as
\[
\| D \theta \|_1\,=\, \underset{( (i,j),(i^{\prime},j^{\prime}) ) \in E }{\sum }\, \left\vert \theta_{i,j} - \theta_{i^{\prime},j^{\prime}} \right \vert.
\]
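In code, $\|D\theta\|_1$ is simply the sum of absolute differences between horizontally and vertically adjacent entries of $\theta$ after reordering its rows and columns by $\hat{\tau}$; a small sketch (here `tau[k]` holds the node placed at position $k$, playing the role of $\hat{\tau}$, and the function name is ours):

```python
import numpy as np

def grid_tv(theta, tau):
    """|| D theta ||_1 on the 2D grid defined by the permutation tau:
    entries (i, j) and (i', j') are neighbours iff their positions
    under tau differ by one in exactly one coordinate."""
    th = theta[np.ix_(tau, tau)]                 # put node tau[k] at position k
    horiz = np.abs(np.diff(th, axis=1)).sum()    # horizontal grid edges
    vert = np.abs(np.diff(th, axis=0)).sum()     # vertical grid edges
    return horiz + vert
```

For the identity permutation this reduces to the usual anisotropic total variation on the 2D grid.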
This is similar to the notion of total variation studied in \cite{sadhanala2016total}, but here the 2D grid graph is based on the permutation $\hat{\tau}$ that we learn from the data. Our hope is to construct a permutation $\hat{\tau}$ such that the true parameter has small total variation, i.e.\ $\| D \theta^* \|_1 \ll n^2 $. If this is achieved, then it makes sense to estimate $\theta^*$ as the solution to
\begin{equation}
\label{eqn:first_problem}
\underset{\theta \in \mathbb{R}^{n \times n }}{\min}\,\, \frac{1}{2}\|A\,-\,\theta \|_F^2 \,\,+\,\, \lambda\,\| D \theta \|_1,
\end{equation}
where $\| \cdot \|_F$ is the Frobenius norm, and $\lambda > 0 $ is a tuning parameter that encodes the belief that the total variation along $G_{2D} $ should be small.
Thus, we perform total variation denoising on the adjacency matrix $A$, treating its entries as nodes of carefully designed grid graphs. The first type of grid that we consider is constructed as follows: we think of the elements of
$\{1,\ldots,n\}$ as cities with distances induced by the metric from \cite{zhang2015estimating}. Thus the distance between $i$ and $j$ is
\[
\hat{d}_I(i,j) \,\,=\,\, \frac{1}{n}\, \underset{k \neq i,j }{\max} \left\vert \sum_{l =1}^n (A_{i,l} - A_{j,l} )A_{k,l} \right\vert.
\]
Using this metric, we run the nearest neighbor (NN) algorithm, which provides an approximate solution to the traveling salesman problem, see for instance \cite{rosenkrantz1977analysis}. This gives a permutation $\hat{\tau}(1), \ldots,\hat{\tau}(n) $ of the nodes that can be used to embed the matrix $A$ in a 2D grid. With this embedding we can then solve a problem of the form (\ref{eqn:first_problem}). We refer to the resulting procedure as NN-FL, since it combines the NN algorithm with fused lasso regularization. Here, fused lasso refers to the estimator for denoising problems introduced by \cite{tibshirani2005sparsity}, whose predecessor is the total variation denoising estimator from
\cite{rudin1992nonlinear}. Equation (\ref{eqn:first_problem}) is actually a graph fused lasso denoising problem as studied from a theoretical perspective in \cite{padilla2016dfs} and \cite{hutter2016optimal}, and from an algorithmic point of view in \cite{barbero2014modular} and \cite{tansey2015fast}. We exploit these algorithms in order to efficiently compute our estimator.
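For concreteness, a sketch of this first step (the metric can be computed from $M=AA^{\top}$, since $\sum_{l}(A_{i,l}-A_{j,l})A_{k,l}=M_{i,k}-M_{j,k}$; the function names are ours):

```python
import numpy as np

def neighborhood_distance(A):
    """d_I(i, j) = (1/n) max_{k != i, j} |sum_l (A_il - A_jl) A_kl|."""
    n = A.shape[0]
    M = A @ A.T                         # M[i, k] = sum_l A_il A_kl
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = np.abs(M[i] - M[j])
            diff[[i, j]] = -np.inf      # exclude k = i and k = j
            D[i, j] = D[j, i] = diff.max() / n
    return D

def nn_tour(D, start=0):
    """Greedy nearest-neighbour tour (approximate TSP): this is the
    permutation hat-tau used to build the 2D grid."""
    n = D.shape[0]
    tour = [start]
    remaining = set(range(n)) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: D[tour[-1], j])
        tour.append(nxt)
        remaining.remove(nxt)
    return tour
```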
The second approach that we study considers an alternative to the metric from \cite{zhang2015estimating}: we simply sort the nodes by their degrees, just as in \cite{chan2014consistent}. However, once the ordering is obtained, we do not use the penalty in (\ref{eqn:sorting}) as in \cite{chan2014consistent}, but rather apply fused lasso denoising. Hence we refer to the resulting procedure as SAS-FL, to emphasize that it is a minor modification of the sort and smooth (SAS) method from \cite{chan2014consistent}. This small difference between SAS-FL and the method from \cite{chan2014consistent} allows us to study the former
on classes of bounded variation. We refer the reader to \cite{sadhanala2016total} for a discussion of some advantages of using the fused lasso on grids.
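The ordering step of SAS-FL is simpler still; a sketch (the function name is ours, and ties in the degrees are broken stably):

```python
import numpy as np

def sas_order(A):
    """Permutation sorting the nodes by empirical degree
    d_i = sum_j A_ij, in ascending order."""
    return np.argsort(A.sum(axis=1), kind='stable')
```

The reordered matrix `A[np.ix_(tau, tau)]` is then denoised by solving a fused lasso problem of the form (\ref{eqn:first_problem}).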
Finally, we also discuss other possible metric choices, such as those based on external data (independent of $A$), or the $\ell_1$ distance between the columns of $A$.
\subsection{Summary of results}
As stated above, our approach to graphon estimation is based on performing fused lasso/total variation denoising of the adjacency matrix with appropriate graphs. Loosely speaking, for the resulting estimators we show the following:
\begin{enumerate}
\item If the function $f_0(u,\cdot)$ is piecewise H\"{o}lder of exponent $\alpha \in (0,1/2]$, then the NN-FL estimator attains the rate $n^{-\alpha/2}$ after disregarding logarithmic terms.
In fact, our result actually holds for a class of functions larger than that of piecewise H\"{o}lder functions of exponent $\alpha$.
\item Let $g$ be the degree function $g(u) \,=\, \int f_0(u,v)\,dv $. If there exists a constant $L \,>\, 0$ such that $L\,\vert u\,-\, v\vert\,\leq \, \left\vert g(u)\,-\, g(v) \right\vert $ for all $u,v \in [0,1]$, $g$ is piecewise monotone, and $f_0$ has bounded variation, then the mean squared error (MSE) of the SAS-FL estimator attains the rate $n^{-1/2}$ (ignoring logarithmic terms).
\item In both simulated and real data examples, we provide evidence that the proposed methods outperform existing approaches for network denoising and link prediction.
\end{enumerate}
\subsection{Notation}
For $n \in \mathbb{N}$ we denote the set $\{1,\ldots,n\}$ by $[n]$. Moreover, for $n \in \mathbb{N}$ we denote by $\mathcal{S}_n$ the set of permutations of $[n]$, thus bijections from $[n]$ to itself.
For a matrix $X \in \mathbb{R}^{s \times t}$, the $i$-th row of $X$ is denoted by $X_{i,\cdot}$ , and the $i$-th column by $X_{\cdot,i}$. The Frobenius norm is denoted by
$$ \|X\|_F \,\,=\,\, \sqrt{ \displaystyle \sum_{i=1}^{s}\,\sum_{j=1}^{t} X_{i,j}^{2}}. $$
For a set $I \subset [t] $ we denote by $X_{\cdot,I}$ the matrix obtained by removing from $X$ the columns whose indices correspond to elements in $[t] \backslash I$. We define $X_{I,\cdot}$ in a similar way for $I \subset [s]$.
Throughout, we use the standard notation $\| x \|_p = ( \sum_{i=1}^{n} \vert x_i \vert^p)^{1/p}$ for $x \in \mathbb{R}^n$, and $p \in \mathbb{N}$ along with the
$\|x\|_{\infty } = \max_{i \in [n]} \vert x_i\vert$ notation.
For a Borel measurable set $A \subset \mathbb{R}^d$ we denote its Lebesgue measure in $\mathbb{R}^d$ by $\text{Vol}(A)$, and we denote by $1_A(x)$ the function that takes the value $1$ if $x \in A$ and $0$ otherwise.
\subsection{Previous work}
Methods for network estimation have been extensively studied in the literature, and network denoising remains an active research area due to the challenge that it represents.
One avenue of research has focused on assuming that the generative process is the stochastic block model. In such framework, the main difficulty is perhaps to estimate the communities to which the nodes belong.
This perspective includes seminal works by
\cite{bickel2009nonparametric,rohe2011spectral} and \cite{adamic2005political}, more recently by \cite{guedon2016community, yan2018provable,gao2018minimax} among others.
In this paper we will not necessarily assume the stochastic block model, but will also allow for more general models. Our work focuses on directly estimating the link probabilities, which we refer to as graphon estimation. The statistical properties of graphon estimators have been of extensive interest. For instance,
\cite{airoldi2013stochastic} proposed to approximate Lipschitz graphons by block models, and consistency in H\"{o}lder classes was studied in
\cite{wolfe2013nonparametric}. Moreover, \cite{gao2015rate} and \cite{gao2016optimal} characterized the minimax rates for graphon estimation under the stochastic block model. The rate is $\log K /n$ where $K$, the number of communities, satisfies $K \,\leq \, \sqrt{n \log n}$. Recently, \cite{xu2017rates} and \cite{klopp2017optimal} independently showed that
the universal singular value thresholding (USVT) algorithm from \cite{chatterjee2015matrix} attains the rate $K/n$ when the true model is the stochastic block model. In fact,
\cite{xu2017rates} showed the rate $K/(n\,\rho)$ where $\rho $ is a sparsity parameter that satisfies $n\,\rho \,=\, \Omega(\log n)$. There, the authors also studied USVT when the function $f_0$ (in Model \ref{eqn:model1}) belongs to a Sobolev space.
Moreover, \cite{zhang2015estimating} studied graphon estimation under more general structures than the stochastic block model.
Their method proceeds in a two-step fashion. First, a network with nodes $\{1,\ldots,n\}$ is constructed using the adjacency matrix $A$. Then, smoothing is applied over the resulting neighborhoods. The resulting estimator is proven to be consistent when $f_0 $ is piecewise Lipschitz. The corresponding rate is $(\log n /n)^{1/2} $, versus the minimax rate $\log n /n$. A different method based on neighborhood smoothing was studied in \cite{song2016blind}.
An alternative approach to graphon estimation considering degree sorting was studied in \cite{chan2014consistent} under the assumption that $\int_{0}^{1} f_0(x,y)dy$ is increasing. The idea behind this method is to construct an ordering of the nodes based on the empirical degree $d_i \,=\, \sum_{j=1}^{n} A_{i,j}$. Once the ordering is obtained, the authors propose the sort and smooth (SAS) estimator
\begin{equation}
\label{eqn:sorting}
\hat{\theta}_{sas} \,\,=\,\, \underset{\theta}{\arg\min} \displaystyle \sum_{i=1}^{m} \sum_{j=1}^m \,\sqrt{ \left( \frac{\partial \theta}{\partial x} \right)_{i,j}^2 \,\,+\,\,\left( \frac{\partial \theta}{\partial y}\right)_{i,j}^2 } \,\,\,\,\,\, \text{subject to } \,\| \theta - H \|_2 \,\leq \,\epsilon,
\end{equation}
where $\epsilon > 0$ is a tuning parameter, $H$ is a smoothed version of $A$, and $ \frac{\partial \theta}{\partial x}$ $ \left(\frac{\partial \theta}{\partial y}\right)$ denotes a discrete derivative in the direction of the $x$ ($y$) axis.
Finally, we emphasize that while several graphon estimation methods exist, many properties of these estimators are unknown. For instance, the estimator from \cite{zhang2015estimating} is consistent when the true graphon is piecewise Lipschitz, but it is not known whether such an estimator can be nearly minimax when the true model is the stochastic block model. Also, USVT can perform well under the stochastic block model assumption or some smoothness condition on $f_0$, but performance guarantees are unknown when $f_0$ is piecewise H\"{o}lder or a general bounded variation function.
\section{General approach}
In this section we propose a general class of estimators filling in the details of our discussion in Section \ref{sec:intro}. We begin by reviewing some background on the traveling salesman problem. This is then related to the graphon estimation problem giving rise to our family of estimators.
\subsection{Nearest neighborhoods construction}
\label{sec:nn}
Our approach to construct a grid graph is motivated by the traveling salesman problem.
We describe this in generality next. Let $\mathcal{C} \,\,=\,\, \{c_1,\ldots,c_s\}$ be a set of cities with a distance metric $d_{\mathcal{C}} \,\,:\,\,\mathcal{C} \times \mathcal{C} \,\rightarrow \mathbb{R}$ specifying how far any pair of cities are from each other. A tour of cities is a sequence $c_{P(1)},\ldots,c_{P(s)}$ where $P$ is a permutation of the set $\{1,\ldots,s\}$. Thus, a tour is just an arrangement of cities such that each city appears exactly once.
The traveling salesman problem (see \cite{bellmore1968traveling} for a survey) is the well known problem of finding the tour with minimum cost, measuring the cost in terms of the metric
$d_{\mathcal{C}}$ as described below. The optimal tour or circuit is found by solving
\begin{equation}
\label{tsp}
P^*\,\,\in \,\, \underset{P \in \mathcal{S}_s }{\arg \min}\,\, C(P),
\end{equation}
where the cost of a tour $P$ is defined as
\[
C(P)\,\,\,=\,\,\,\sum_{i=1}^{s-1} \,d_{\mathcal{C}}(c_{P(i)},c_{P(i+1)})\,\,+\,\, d_{\mathcal{C}}(c_{P(s)},c_{P(1)}) .
\]
Unfortunately, it is known that (\ref{tsp}) is NP-hard, e.g. \cite{rosenkrantz1977analysis}. Despite this challenge, there exist different approximation algorithms for solving (\ref{tsp}). For instance, \cite{rosenkrantz1977analysis} studied approximation algorithms which run in polynomial time. From such methods, we will use the nearest neighbor algorithm (NN) which starts from a random city, and then proceeds iteratively by choosing the closest city to the current city. Specifically, at first the algorithm visits $\hat{\tau}(1)$ for some $\hat{\tau}(1) \in \mathcal{C}$, perhaps chosen randomly. Next, the NN algorithm visits $\hat{\tau}(2)$, where
\[
d_{\mathcal{C}}(\hat{\tau}(1),\hat{\tau}(2)) \,\,\leq \,\, d_{\mathcal{C}}(\hat{\tau}(1),c),\,\,\,\,\,\, \forall c \in \mathcal{C} \backslash \{ \hat{\tau}(1) \} .
\]
Then, the NN algorithm continues sequentially visiting $\hat{\tau}(j)$ at iteration $j$ where
\[
d_{\mathcal{C}}(\hat{\tau}(j-1),\hat{\tau}(j)) \,\,\leq \,\, d_{\mathcal{C}}(\hat{\tau}(j-1),c),\,\,\,\,\,\, \forall c \in \mathcal{C} \backslash \{\hat{\tau}(1),\ldots,\hat{\tau}(j-1) \}.
\]
Although the nearest neighbor method is straightforward, it enjoys some attractive performance guarantees. In particular,
denoting the permutation associated with NN by $\hat{P}$, Theorem 1 in \cite{rosenkrantz1977analysis} shows that
\begin{equation}
\label{eqn:nn_property}
C(\hat{P} )\,\,\leq \,\, \left(1 + \frac{\log_2 s}{2} \right)\,\, C(P^*),
\end{equation}
provided that $d_{\mathcal{C}}$ satisfies the triangle inequality.
Moreover, by its mere definition, one can see that the computational complexity of running the NN algorithm is $O(w\,s^2)$, where $w$ is the computational cost of evaluating $d_{\mathcal{C}}$.\\
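To make the procedure concrete, the greedy tour construction above can be sketched in a few lines (a minimal sketch assuming the pairwise distances are stored in a NumPy matrix; the function names are our own):

```python
import numpy as np

def nearest_neighbor_tour(D, start=0):
    """Greedy nearest-neighbor heuristic for the TSP.

    D is an (s, s) symmetric matrix with D[i, j] = d_C(c_i, c_j).
    Returns a list giving the order in which the s cities are
    visited, starting from `start`.
    """
    s = D.shape[0]
    unvisited = set(range(s))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        current = tour[-1]
        # visit the closest not-yet-visited city
        nxt = min(unvisited, key=lambda c: D[current, c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_cost(D, tour):
    """Cost C(P): sum of consecutive distances plus the closing leg."""
    legs = [D[tour[i], tour[i + 1]] for i in range(len(tour) - 1)]
    return sum(legs) + D[tour[-1], tour[0]]
```

Each iteration scans all remaining cities, so the total cost of the sketch matches the $O(w\,s^2)$ bound above when each distance evaluation costs $w$.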
\subsection{Class of estimators}
\label{sec:class_of_estimators}
As anticipated, our approach is based on running 2D grid fused lasso denoising on the data $A$ with a carefully constructed graph. We will now state more precisely how to arrive at our estimator $\hat{\theta} \in \mathbb{R}^{n \times n} $ for $\theta^*$.
In order to avoid correlation between the constructed ordering and the signal to denoise, for $m \in [n]$ we use the data $A_{\cdot,[m]}$ to construct a
metric $\hat{d} \,:\, ([n] \backslash [m] )\times ([n] \backslash [m] ) \rightarrow \mathbb{R}$. Once the metric is constructed, we can think of the elements of $[n] \backslash [m]$ as a set of cities, and for any two cities $i,j \in [n] \backslash [m]$, the distance $\hat{d}(i,j)$ tells us how far apart the cities are. A discussion on choices of the metric $\hat{d}$ will be given later.
\textbf{Choice of $m$:} Throughout we assume that $m $ is chosen to satisfy $m \asymp n$; thus there exist positive constants $c_1$ and $c_2$ such that $c_1\,m \,\leq \, n \,\leq \,c_2\,m$. For instance, we could take $ m = \floor{n/2}$.
A natural way to arrange the cities (nodes), in the graphon estimation problem, would be to place cities that are close to each other in the sense of the metric $\hat{d}$ as adjacent. This would make sense if the graphon has some underlying structure, for instance if the ground truth is the stochastic block model. We would also require that the distance $\hat{d}(i,j)$ is a reasonable approximation to a metric $d^*(\xi_i,\xi_j)$, such that $f_0(\xi_{i},\cdot)$ and $f_0(\xi_j,\cdot) $ are ``similar" if $d^*(\xi_i,\xi_j)$ is small. We will be precise in stating our assumptions but for now we proceed to construct our proposed approach.
Motivated by the discussion above, we use the NN algorithm (as discussed in Section \ref{sec:nn}) on the cities $[n] \backslash [m]$ with distance $\hat{d}$. We let $\hat{\tau}$ be the corresponding function $\hat{\tau} \,:\, [n-m] \,\rightarrow\, ([n] \backslash [m]) $, such that the NN algorithm first visits city $\hat{\tau}(1)$, next city $\hat{\tau}(2)$, and so on.
Using the ordering $\hat{\tau}$, we construct a signal $y \in \mathbb{R}^{ (n-m) \times (n-m) }$ satisfying $y_{i,j} \,=\, A_{ \hat{\tau}(i), \hat{\tau}(j) } $ for all $i,j \in [n-m]$. We also construct the 2D grid graph $G \,=\, (V,E) $ with node set
\[
V \,=\, \{(i,j) \,:\, i,j \in [n-m] \},
\]
and set of edges
$$E \,=\, \{ (e^+,e^-) \in [n-m]^2 \times [n-m]^2 \,:\, \| e^+ \,-\,e^- \|_1 = 1 \}. $$
We also use $\nabla_G$ to denote an oriented incidence operator of $G$. Thus, $\nabla_G \,:\, \mathbb{R}^{(n-m)\times (n-m)} \,\rightarrow\, \mathbb{R}^{\vert E \vert} $ where for $e \,=\, (e^+,e^{-}) \in E$ we have
\[
(\nabla_G \theta)_{e} \,\,=\,\, \theta_{e^+} \,\,-\,\, \theta_{e^-}.
\]
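As an illustration, the action of $\nabla_G$ on a matrix reduces to stacking the horizontal and vertical first differences; the following sketch (our own helper, using NumPy) evaluates the operator and the associated penalty $\|\nabla_G \cdot\|_1$:

```python
import numpy as np

def grid_gradient(theta):
    """Apply the incidence operator of the 2D grid graph.

    theta is a (p, p) matrix. Returns a 1D vector containing
    theta[e+] - theta[e-] for every edge of the grid: first all
    horizontal differences, then all vertical ones.
    """
    horiz = theta[:, 1:] - theta[:, :-1]   # p * (p - 1) edges
    vert = theta[1:, :] - theta[:-1, :]    # (p - 1) * p edges
    return np.concatenate([horiz.ravel(), vert.ravel()])

def fused_lasso_penalty(theta):
    """The total-variation penalty ||nabla_G theta||_1."""
    return np.abs(grid_gradient(theta)).sum()
```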
Using the graph $G$, we proceed to construct our estimator by solving a graph fused lasso problem. Doing so, we first find $\hat{\beta} \in \mathbb{R}^{ (n-m) \times (n-m) }$
as the solution to
\begin{equation}
\label{eqn:estimator}
\underset{ \beta \in \mathbb{R}^{ (n-m) \times (n-m) } }{ \text{minimize} } \,\, \frac{1}{2} \| y \,-\, \beta \|_F^2\,\,+\,\, \lambda\,\|\nabla_G \beta \|_1,
\end{equation}
for a tuning parameter $\lambda >0$. We then set $\hat{\theta}_{ \hat{\tau}(i),\hat{\tau}(j)} \,=\, \hat{\beta}_{i,j}$ for all $i,j \in [n-m]$.
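The reindexing step that forms $y$ from $A$ amounts to a single fancy-indexing operation; a minimal sketch follows (the helper name is ours, and any graph fused lasso solver can be plugged in for the optimization itself):

```python
import numpy as np

def permuted_signal(A, tour):
    """Build y with y[i, j] = A[tour[i], tour[j]]: the rows and
    columns of the adjacency matrix rearranged according to the
    NN visiting order `tour`."""
    idx = np.asarray(tour)
    return A[np.ix_(idx, idx)]
```

After solving the fused lasso on $y$, the same index array maps the solution $\hat{\beta}$ back to the entries $\hat{\theta}_{\hat{\tau}(i),\hat{\tau}(j)}$.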
The procedure above allows us to estimate $\theta^*_{i,j}$ for all $i,j \in [n]\backslash [m]$. In a similar way we can also construct estimates of $\theta^*_{i,j}$ for all $i,j \in [m]$. The idea is to have an ordering of the nodes in $[m]$ by using the NN algorithm with a metric that only involves data from $A_{ \cdot, ([n] \backslash [m]) }$, and then we solve a 2D fused lasso problem.
As for the estimates of $\theta^*_{i,j}$ for all $i \in [n]\backslash [m]$ and $j \in [m]$, we proceed in a similar way but using two different orderings. The first ordering $\hat{\tau}_1$ is obtained by running the NN algorithm on the set of cities $[m]$ with a metric depending on the data $A_{[m], [m] }$. For the second ordering we use the NN algorithm with set of cities $([n]\backslash [m])$ and a metric depending on the data $A_{([n]\backslash [m]),([n]\backslash [m])}$.
Finally, we emphasize that we partition the data in order to avoid correlation between the ordering and the signal to denoise. This is done to keep the analysis mathematically correct. However, in practice, one can obtain a single ordering and then run a fused lasso problem as in (\ref{eqn:first_problem}). We have noticed that such an approach works well in practice.
\paragraph{Computational cost.}
As stated in the previous subsection, the computational cost associated with the NN algorithm is $O(w\,n^2)$, where $w$ is the cost of computing the distance between any two nodes.
Note that this can be reduced if there are multiple processors available. In such case, the distance from a node to the remaining nodes could be computed by partitioning the nodes and performing computations in parallel.
As for the fused lasso computation, this can be done using the efficient algorithm from \cite{barbero2014modular}, or the ADMM solver from \cite{tansey2015fast} which is amenable to parallel computing.
\subsection{Choices of metric $\hat{d}$}
\label{sec:d}
Clearly, the class of methods described above can be used with any metric $\hat{d}$ on the set of nodes in the graphon estimation problem. The purpose of this section is to highlight different choices of $\hat{d}$, some of which have appeared in the literature in the context of other estimators.
For simplicity, we will focus on constructing $\hat{d}$ for the case of estimating $\theta^*_{i,j}$ for all $i,j \in [n]\backslash [m]$. The remaining cases described in
Section \ref{sec:class_of_estimators} can be constructed in a similar way.
\subsubsection{Inner product based distance}
Our first natural approach for constructing $\hat{d}$
is to consider the metric proposed in \cite{zhang2015estimating}. Specifically, we set
\begin{equation}
\label{eqn:distance_smoothing}
\hat{d}_I(i,i^{\prime}) \,\,=\,\, \, \underset{ k \in [m] }{\max }\, \sqrt{\frac{1}{n} \vert \langle A_{i, [m]}, A_{k, [m] } \rangle \,-\, \langle A_{i^{\prime}, [m]}, A_{k, [m] } \rangle \vert }, \,\,\,\,\,\forall i, i^{\prime} \in [n] \backslash [m].
\end{equation}
We call NN-FL the estimator from Section \ref{sec:class_of_estimators} when the distance $\hat{d}$ is taken as $\hat{d}_I$ in (\ref{eqn:distance_smoothing}).
Importantly, our modification of the distance defined in \cite{zhang2015estimating} does satisfy the triangle inequality, which is sufficient for the NN algorithm to satisfy
(\ref{eqn:nn_property}).
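For concreteness, the full distance matrix of $\hat{d}_I$ can be computed with one matrix product followed by a maximum over the reference nodes $k \in [m]$; a sketch under the convention that the first $m$ rows of $A$ index the held-out nodes (the helper name is ours):

```python
import numpy as np

def inner_product_distance(A, m):
    """Pairwise distances d_I(i, i') for i, i' in {m, ..., n-1},
    computed from the held-out columns A[:, :m].

    Returns an (n - m, n - m) matrix D with
    D[a, b] = max_k sqrt(|<A_a, A_k> - <A_b, A_k>| / n),
    where rows are restricted to the first m columns and k ranges
    over the held-out nodes 0, ..., m - 1.
    """
    n = A.shape[0]
    X = A[m:, :m]          # rows to be ordered, held-out columns
    K = A[:m, :m]          # reference rows indexed by k
    G = X @ K.T            # G[a, k] = <A_{m+a,[m]}, A_{k,[m]}>
    # maximize |G[a, k] - G[b, k]| over k, then take sqrt(. / n)
    diff = np.abs(G[:, None, :] - G[None, :, :]).max(axis=2)
    return np.sqrt(diff / n)
```

Since the square root is subadditive and monotone, the triangle inequality for the inner absolute differences carries over to $\hat{d}_I$, which is the property needed for (\ref{eqn:nn_property}).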
\subsubsection{Sorting}
Another choice for the distance $\hat{d}$ is
\begin{equation}
\label{eqn:distance_sorting}
\hat{d}_1(i,i^{\prime}) \,\,=\,\, \left\vert \frac{1}{m} \sum_{j \in [m]}A_{i,j} \,-\, \frac{1}{m} \sum_{j \in [m]}A_{i^{\prime},j} \right\vert, \,\,\,\,\forall i,i^{\prime } \in [n] \backslash [m].
\end{equation}
Thus, for any two nodes, the distance is the absolute value of the difference between the degrees (normalized by $m$) based on the data $A_{\cdot,[m]}$. Since such degrees are scalars and the metric is the Euclidean distance, the optimal tour (the traveling salesman problem solution) is the ordering obtained by sorting the degrees. This is the ordering constructed in the method from \cite{chan2014consistent}. The difference is that we use the fused lasso penalty for denoising without preliminary smoothing of the data, whereas \cite{chan2014consistent} use the penalty in (\ref{eqn:sorting}) with a smoothed version of $A$.
Throughout, whenever we use the distance (\ref{eqn:distance_sorting}) we will refer to the estimator from Section \ref{sec:class_of_estimators} as sort and smooth fused lasso graphon estimation (SAS-FL). In the experiments section we will see that, as expected, the empirical performance of SAS-FL is similar to SAS from \cite{chan2014consistent}, although the former seems to perform slightly better in the examples in Section \ref{sec:experiments}.
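Since $\hat{d}_1$ compares scalar degrees, the minimum-cost arrangement under this metric is obtained by a single sort; a minimal sketch (the helper name and indexing convention are ours):

```python
import numpy as np

def degree_sort_order(A, m):
    """Ordering of the nodes m, ..., n-1 by their normalized
    degrees computed on the held-out columns A[:, :m].

    Under the scalar metric d_1, the minimum-cost arrangement of
    the nodes is simply this sorted order.
    """
    degrees = A[m:, :m].mean(axis=1)   # (1/m) * sum_{j in [m]} A[i, j]
    return np.argsort(degrees)         # offsets into m, ..., n-1
```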
\subsubsection{$\ell_1$ distance and other choices}
Clearly, a different metric can be obtained by simply taking the $\ell_1$ norm of the difference between rows or columns of the adjacency matrix $A$. Thus, we can define
\[
d_{\ell_1}(i,i^{\prime}) \,=\, \| A_{i,[m]} \,-\, A_{i^{\prime},[m]} \|_1, \,\,\,\,\,\forall i, i^{\prime} \in [n] \backslash [m].
\]
We will refer to the respective procedure using this metric as $\ell_1$-FL. We will not study convergence properties of this method, although we will see that it is a reasonable method in practice.
Finally, we notice that the metric $\hat{d}$ could be constructed using side information about the nodes. For instance, using covariates, or repeated measurements of the network if available.
\section{Analysis of the NN-FL estimator on extensions of piecewise H\"{o}lder classes}
\label{sec:extenshions_holder}
The purpose of this section is to study convergence properties of the NN-FL estimator.
We analyze the performance of the NN-FL estimator for classes of graphons that extend the notion of piecewise H\"{o}lder functions. We note that a particular instance of this was studied in \cite{zhang2015estimating}.
To formalize, if $\alpha \in (0,1]$, we say that $f_0$ is piecewise H\"{o}lder of exponent $\alpha$ if the following holds. There exists a partition of intervals $\mathcal{A}_1,\ldots,\mathcal{A}_r$ of $[0,1]$, such that if $u,v \in \mathcal{A}_l$ for some $l \in [r]$, then
\begin{equation}
\label{eqn:piecewise_holder}
\underset{t \in [0,1]}{\sup}\, \left\vert f_0(u,t) \,-\, f_0(v,t) \right \vert\,\,\leq \,\, L_1\,\vert u\,-\, v\vert^{\alpha},
\end{equation}
and
\begin{equation}
\label{eqn:piecewise_holder2}
\underset{t \in [0,1]}{\sup}\, \left\vert f_0(t,u) \,-\, f_0(t,v) \right \vert\,\,\leq \,\, L_1\,\vert u\,-\, v\vert^{\alpha},
\end{equation}
for a positive constant $L_1$ independent of $u$ and $v$.
This class of functions appeared in the analysis of the fused lasso estimator in the context of 2D non-parametric regression, see \cite{hutter2016optimal}. There, the authors showed that the fused lasso attains the rate $n^{- \alpha/(1+\alpha) }$.
Next, we state two assumptions which hold if (\ref{eqn:piecewise_holder}) and (\ref{eqn:piecewise_holder2}) are satisfied along with Model \ref{eqn:model1}. Thus, we relax the piecewise H\"{o}lder condition.
\begin{assumption}
\label{as1}
There exists a positive constant $c_1 $ such that with probability approaching one
\[
\underset{ k \in [m] }{\min} \displaystyle \int_{0}^1 \vert f_0(\xi_k,t) \,-\, f_0(\xi_{i},t) \vert dt \,\leq \,\, c_1 \left( \frac{\log n}{n}\right)^{\alpha} ,\,\,\,\,\,\, \forall i \in [n] \backslash [m],
\]
and
\[
\underset{ k \in [m] }{\min} \displaystyle \int_{0}^1 \vert f_0(t,\xi_k) \,-\, f_0(t,\xi_{i}) \vert dt \,\leq \,\, c_1 \left( \frac{\log n}{n}\right)^{\alpha} ,\,\,\,\,\,\, \forall i \in [n] \backslash [m],
\]
for $\alpha \in (0,1/2]$.
\end{assumption}
Note that we constrain $\alpha \in (0,1/2]$, and the case $\alpha \in [1/2,1]$ will not be studied. Instead, we will analyze the stochastic block model in the next subsection.
To understand why Assumption \ref{as1} holds when condition (\ref{eqn:piecewise_holder}) is met with $\alpha \in (0,1/2]$, we refer to the work in \cite{von2010hitting}. There, the authors showed that, with probability approaching one, the following holds: For each $\xi_i$, its $K$ nearest neighbors (with Euclidean distance) among the points $\{\xi_j\}_{j \in [n ] \backslash \{i\} }$ all lie within distance $c\log n /n$ for an appropriate choice of $K$, and for some constant $c >0$.
We now state our second assumption. This involves the quantity that the penalty $\|\nabla_G \cdot\|_1$ in (\ref{eqn:estimator}) emulates.
\begin{assumption}
\label{as2}
There exists an unknown (random) permutation $\tau^* \in \mathcal{S}_{n-m}$ such that
\[
\displaystyle \sum_{i=1}^{n-m-1} \displaystyle \int_0^1 \vert f_0(\xi_{ \tau^*(i) },t) \,-\, f_0(\xi_{\tau^*(i+1) },t) \vert \,dt \,\,=\,\, O_{\mathbb{P}}( n^{1-\alpha} \,\log^{\alpha} n ),
\]
and
\[
\displaystyle \sum_{i=1}^{n-m-1} \displaystyle \int_0^1 \vert f_0(t,\xi_{ \tau^*(i) }) \,-\, f_0(t,\xi_{\tau^*(i+1) }) \vert \,dt \,\,=\,\, O_{\mathbb{P}}( n^{1-\alpha} \,\log^{\alpha} n ),
\]
where $\alpha \in (0,1/2]$.
\end{assumption}
If (\ref{eqn:piecewise_holder}) and (\ref{eqn:piecewise_holder2}) hold, then Assumption \ref{as2} is satisfied. To verify this, simply take $\tau^*$ as the ordering of the elements of $\{\xi_i \}_{i \in ([n ]\backslash [m])}$, i.e.,
\[
\xi_{\tau^*(1)} \,<\,\xi_{\tau^*(2)} \,<\, \ldots \,<\, \xi_{\tau^*(n-m)}.
\]
Once again, we exploit properties of nearest neighbor graphs from \cite{von2010hitting}.
With these conditions, we are now ready to present our next result.
\begin{theorem}
\label{thm:rate}
Let us suppose that Assumptions \ref{as1}-\ref{as2} hold, and let $\hat{\tau}$ be constructed as in Section \ref{sec:class_of_estimators} by setting $\hat{d} \,:=\, \hat{d}_I$ (see Equation \ref{eqn:distance_smoothing}). Then for an appropriate choice of $\lambda$, the corresponding estimator in (\ref{eqn:estimator}) satisfies
\[
\displaystyle \frac{1}{n^2} \sum_{ i,j \in [n-m], \,i < j } \left( \hat{\theta}_{ \hat{\tau}(i), \hat{\tau}(j) } \,-\, \theta_{\hat{\tau}(i), \hat{\tau}(j) }^* \right)^2 \,\,=\,\,
O_{\mathbb{P}} \left( \frac{ \log^{2\,+\, \frac{\alpha}{2} }n }{n^{ \frac{\alpha}{2} }} \right).
\]
\end{theorem}
Thus, in the class of graphons implied by Assumptions \ref{as1}--\ref{as2}, the NN-FL estimator attains the rate $n^{-\alpha/2}$ after ignoring logarithmic terms. To the best of our knowledge, other estimators have not been studied on this class of functions. The most related work comes from \cite{zhang2015estimating} who studied piecewise Lipschitz graphons for which their estimator attains the rate $n^{-1/2}$.
\section{Analysis of the SAS-FL estimator on BV functions}
We focus on the class of graphons that satisfy the assumption of bounded variation. This condition has appeared in the statistics literature due to its flexibility. Indeed, it encompasses a very large class of functions that includes reasonable subclasses such as piecewise Lipschitz functions. In one dimensional non-parametric regression, \cite{mammen1997locally} studied locally adaptive estimators that attain minimax rates when the true regression function has bounded variation. More recently, \cite{tibshirani2011solution,tibshirani2014adaptive} studied discretized versions of the estimators from \cite{mammen1997locally}. The framework from Section \ref{sec:class_of_estimators} is a particular instance of the generalized lasso estimation studied in \cite{tibshirani2011solution}.
In one dimension, bounded variation is well defined as follows. A function $f\,:\, [0,1] \,\rightarrow \,\mathbb{R}$ has bounded variation if there exists a constant $C$ such that for any set of points $0 \,\leq\, a_1 \,\leq \,\ldots\,\leq a_r \leq 1$, $r \in \mathbb{N}$, it holds that
\begin{equation}
\label{eqn:one_d_bv}
\displaystyle \sum_{l=1}^{r-1} \, \vert f(a_l) \,-\,f(a_{l+1}) \vert \,\,<\,\,C.
\end{equation}
When passing to higher dimensions (in particular dimension two), the definition of bounded variation is not unique. An early work from \cite{clarkson1933definitions} discussed multiple definitions of bounded variation. Although these days there is a widespread convention in the definition of bounded variation in the field of mathematics, the statistics community continues to rely on early definitions. For instance, perhaps implicitly \cite{sadhanala2016total} defined the canonical class of bounded variation by taking (\ref{eqn:one_d_bv}) and assuming it holds through each horizontal and vertical chain graph of a 2D grid graph. We now do something similar for the case of graphons.
\begin{assumption}
\label{as5}
We assume the data is generated as in the model implied by (\ref{eqn:model1}). Moreover, we assume that the functions
\[
g_1(u) \,\,=\,\, \displaystyle \int_0^1\, f_0(u,v)dv,\,\,\,\,\,\,\,\,\,\,\,\,\, g_2(u) \,\,=\,\, \displaystyle \int_0^1\, f_0(v,u)dv,
\]
satisfy the following:
\begin{enumerate}
\item There exists some positive constant $L_1$ such that
\begin{equation}
\label{eqn:bilip}
L_1 \, \vert x \,-\, y \vert \,\,\leq \,\, \vert g_l(x) \,-\,g_l(y) \vert , \,\,\,\,\,\,\,\,\forall x,y \in [0,1], \forall l \in \{1,2\}.
\end{equation}
\item \textbf{Piecewise-Monotonic:} For $l\in \{1,2\}$ there exists a partition $0\,<\,b_1^l\,< \ldots\, <\, b_r^l \,<\, 1$ such that $g_l$ is monotone in each of the intervals
$(0,b_1^l), (b_1^l,b_2^l),\ldots, (b_r^l,1)$.
\item The function $f_0$ has \textbf{bounded variation} in the following sense. There exists a positive constant $C>0$ such that if $ 0\,\leq\, a_0 \,\leq \, a_1 \,\leq \, \ldots \,\leq \,a_s \,\leq \, 1$ with $s \in \mathbb{N}$, then, for all $t \in [0,1]$,
\[
\displaystyle \sum_{l=1}^{s-1} \, \vert f_0(a_l,t) \,-\,f_0(a_{l+1},t) \vert \,\,<\,\,C,
\]
and
\[
\displaystyle \sum_{l=1}^{s-1} \, \vert f_0(t,a_l) \,-\,f_0(t,a_{l+1}) \vert \,\,<\,\,C.
\]
\end{enumerate}
\end{assumption}
Importantly, we allow for flexibility of the graphon by only requiring that it has bounded variation. The most restrictive assumption is perhaps that $g_1$ and $g_2$ are piecewise monotonic. As for the condition expressed by (\ref{eqn:bilip}), we acknowledge that this requirement appeared in the analysis of \cite{chan2014consistent}.
\begin{theorem}
\label{thm:bv_class}
Suppose that Assumption \ref{as5} holds. Let $\hat{\tau}$ be constructed as in Section \ref{sec:class_of_estimators} by setting $\hat{d} \,:=\, \hat{d}_1$ (see Equation \ref{eqn:distance_sorting}). Then for an appropriate choice of $\lambda$, the corresponding estimator $\hat{\theta}$ in (\ref{eqn:estimator}) satisfies
\[
\displaystyle \frac{1}{n^2} \sum_{ i,j \in [n-m], \,i < j } \left( \hat{\theta}_{ \hat{\tau}(i), \hat{\tau}(j) } \,-\, \theta^*_{\hat{\tau}(i), \hat{\tau}(j) } \right)^2 \,\,=\,\,
O_{\mathbb{P}} \left( \frac{ \, \log^{ \frac{3}{2} } n }{\sqrt{n} \,} \right).
\]
\end{theorem}
Thus, on the class of functions from Assumption \ref{as5}, the SAS-FL estimator attains the rate $n^{-1/2}$, which matches the theoretical result from \cite{zhang2015estimating} but for the class of piecewise Lipschitz functions.
To the best of our knowledge, this is the first study of graphon estimation under a bounded variation assumption.
\section{Experiments}
\label{sec:experiments}
The purpose of this section is to shed some light on the empirical performance of the class of estimators proposed in this paper. Evaluations of performance are presented next on both simulated and real networks.
\subsection{Network denoising}
We begin by considering examples of simulated data that are intended to test the validity of our general class of methods on qualitatively different scenarios. The specifications of $\hat{d}$ that we consider are those described in Section \ref{sec:d}.
As benchmarks we consider the following approaches. The neighborhood smoothing method (NS) from \cite{zhang2015estimating}, universal singular value thresholding (USVT) algorithm from \cite{chatterjee2015matrix}, and the sort and smooth (SAS) method from \cite{chan2014consistent}.
In all comparisons, the MSE is used as a measure of performance. Four different scenarios are constructed. In the first scenario the ground truth is the stochastic block model with 12 communities. In our second example, $f_0$ is taken as piecewise smooth, where locally the function behaves like linear combinations of the $\sqrt{\cdot}$ function applied to each coordinate.
We also consider a piecewise constant model (not a stochastic block model). In the latter, the degree function behaves locally as a constant, making estimation difficult for both SAS and SAS-FL. Our final example consists of $f_0$ being a polynomial of two variables. Figures \ref{fig:ex1}-\ref{fig:ex4} offer a visualization of the examples.
\begin{figure}[bp!]
\begin{center}
\includegraphics[width=1.55in,height= 1.59in]{ex1_P_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_sas_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_usvt_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_ns_K12.png}\\
\includegraphics[width=1.55in,height= 1.59in]{ex1_A_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_sfl_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_nnfl_K12.png}
\includegraphics[width=1.55in,height= 1.59in]{ex1_l1fl_K12.png}\\
\caption{\label{fig:ex1}
The top left in the first row shows a realization of the matrix of probabilities $P$ for Example 1, here $n= 500$. Then from left to right the panels in the first row correspond to the methods SAS, USVT, and NS. In the second row the leftmost plot corresponds to a realization of the adjacency matrix $A$ drawn with the parameters in $P$ from the first row. Then from left to right the remaining plots are associated with SAS-FL, NN-FL, and $\ell_1$-FL.
}
\end{center}
\end{figure}
\begin{figure}[bp!]
\begin{center}
\includegraphics[width=1.55in,height= 1.59in]{ex2_P_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_sas_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_usvt_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_ns_K6.png}\\
\includegraphics[width=1.55in,height= 1.59in]{ex2_A_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_sfl_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_nnfl_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex2_l1fl_K6.png}\\
\caption{\label{fig:ex2}
The top left in the first row shows a realization of the matrix of probabilities $P$ for Example 2, here $n= 500$. Then from left to right the panels in the first row correspond to the methods SAS, USVT, and NS. In the second row the leftmost plot corresponds to a realization of the adjacency matrix $A$ drawn with the parameters in $P$ from the first row. Then from left to right the remaining plots are associated with SAS-FL, NN-FL, and $\ell_1$-FL.
}
\end{center}
\end{figure}
\begin{figure}[bp!]
\begin{center}
\includegraphics[width=1.55in,height= 1.59in]{ex3_P_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_sas_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_usvt_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_ns_K6.png}\\
\includegraphics[width=1.55in,height= 1.59in]{ex3_A_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_sfl_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_nnfl_K11.png}
\includegraphics[width=1.55in,height= 1.59in]{ex3_l1fl_K11.png}\\
\caption{\label{fig:ex3}
The top left in the first row shows a realization of the matrix of probabilities $P$ for Example 3, here $n= 500$. Then from left to right the panels in the first row correspond to the methods SAS, USVT, and NS. In the second row the leftmost plot corresponds to a realization of the adjacency matrix $A$ drawn with the parameters in $P$ from the first row. Then from left to right the remaining plots are associated with SAS-FL, NN-FL, and $\ell_1$-FL.
}
\end{center}
\end{figure}
\begin{figure}[bp!]
\begin{center}
\includegraphics[width=1.55in,height= 1.59in]{ex4_P_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_sas_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_usvt_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_ns_K6.png}\\
\includegraphics[width=1.55in,height= 1.59in]{ex4_A_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_sfl_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_nnfl_K6.png}
\includegraphics[width=1.55in,height= 1.59in]{ex4_l1fl_K6.png}\\
\caption{\label{fig:ex4}
The top left in the first row shows a realization of the matrix of probabilities $P$ for Example 4, here $n= 500$. Then from left to right the panels in the first row correspond to the methods SAS, USVT, and NS. In the second row the leftmost plot corresponds to a realization of the adjacency matrix $A$ drawn with the parameters in $P$ from the first row. Then from left to right the remaining plots are associated with SAS-FL, NN-FL, and $\ell_1$-FL.
}
\end{center}
\end{figure}
\begin{table}[t!]
\centering
\caption{ \label{tab:sim}Simulation results for Examples 1, 2, 3 and 4, see Figures \ref{fig:ex1}--\ref{fig:ex4} in that order. Comparisons between the true and estimated probability matrices for different methods given samples from each example. The acronyms here are explained in the text. The mean squared error (MSE) is multiplied by 1000. }
\begin{subtable}{1\textwidth}
\centering
\centering
\caption{\label{tab:sim1}Mean squared error, times 1000, averaging over 50 Monte Carlo simulations, for different methods given samples from Example 1. }
\smallskip
\begin{small}
\begin{tabular}{p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} }
n & NN-FL & L1-FL & SAS-FL & SAS & USVT & NS \\
500 &\textbf{2.8} & 4.9 & 13.5 & 15.2 & 5.1 & 3.9 \\
1000 &\textbf{1.5} & 2.7 & 10.5 & 13.1 & 2.1 & 2.5 \\
2000 &\textbf{1.2} & 1.6 &9.2 &12.3 &1.6 &1.9\\
\end{tabular}
\end{small}
\end{subtable}
\begin{subtable}{1\textwidth}
\centering
\centering
\caption{\label{tab:sim2} Mean squared error, times 1000, averaging over 50 Monte Carlo simulations, for different methods given samples from Example 2. }
\medskip
\begin{small}
\begin{tabular}{p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} }
n & NN-FL & L1-FL & SAS-FL & SAS & USVT & NS \\
500 &1.7 & 4.2 & \textbf{0.9} & 1.0 &1.9 & 2.9 \\
1000 &1.0 & 2.4 &\textbf{0.47} &0.53 &0.82 & 1.7 \\
2000 &0.88 &1.7 &\textbf{0.33} &0.35 &0.55 &1.3 \\
\end{tabular}
\end{small}
\end{subtable}
\begin{subtable}{1\textwidth}
\centering
\centering
\caption{\label{tab:sim3} Mean squared error, times 1000, averaging over 50 Monte Carlo simulations, for different methods given samples from Example 3. }
\medskip
\begin{small}
\begin{tabular}{p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} }
n & NN-FL & L1-FL & SAS-FL & SAS & USVT & NS \\
500 &\textbf{8.2} &9.9 &56.5 &60.7 &9.4 &8.7 \\
1000 &\textbf{5.7} &6.3 &47.9 &59.7 &6.4 &6.4 \\
2000 &5.1 &\textbf{5.0}&44.6 &59.4 &5.3 &5.3 \\
\end{tabular}
\end{small}
\end{subtable}
\begin{subtable}{1\textwidth}
\centering
\centering
\caption{\label{tab:sim4}Mean squared error, times 1000, averaging over 50 Monte Carlo simulations, for different methods given samples from Example 4. }
\medskip
\begin{small}
\begin{tabular}{p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} }
n & NN-FL & L1-FL & SAS-FL & SAS & USVT & NS \\
500 &2.0 &4.3 &\textbf{1.3 } &1.4 &1.8 &3.6 \\
1000 &1.3 &2.5 &\textbf{0.67} &0.71 &0.88 &2.2 \\
2000 &1.1 &1.9 &\textbf{0.48} &0.50 &0.59 &1.7 \\
\end{tabular}
\end{small}
\end{subtable}
\end{table}
For the methods based on fused lasso denoising, namely SAS-FL, NN-FL, and $\ell_1$-FL, we choose the tuning parameter $\lambda$ by cross-validation. This is done by selecting the best value of $\lambda$ out of 30 candidates: we erase $20\%$ of the data points, replacing them with zeros, and then predict the erased entries based on the remaining data.
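The masking-based cross-validation just described can be sketched as follows. This is our own illustration, not the paper's implementation: `fit_denoiser` is a placeholder for any of the fused-lasso estimators, and the toy shrinkage "denoiser" exists only to make the example runnable.

```python
import numpy as np

def cv_select_lambda(A, fit_denoiser, candidates, holdout_frac=0.2, seed=0):
    """Select lambda by erasing a fraction of entries (replaced by zeros),
    fitting on the remainder, and scoring predictions on the erased set."""
    rng = np.random.default_rng(seed)
    mask = rng.random(A.shape) < holdout_frac      # True = erased entry
    A_train = np.where(mask, 0.0, A)               # erased entries set to zero
    best_lam, best_err = None, np.inf
    for lam in candidates:
        P_hat = fit_denoiser(A_train, lam)         # fitted probability matrix
        err = np.mean((P_hat[mask] - A[mask]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Toy usage with a stand-in "denoiser" that shrinks toward the global mean:
A = (np.random.default_rng(1).random((50, 50)) < 0.3).astype(float)
shrink = lambda M, lam: (1 - lam) * M + lam * M.mean()
lam_star = cv_select_lambda(A, shrink, candidates=np.linspace(0.0, 1.0, 30))
```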
The results for each scenario are given in Table \ref{tab:sim}. These are obtained by averaging over 50 Monte Carlo simulations, for values of $n \in \{500,1000,2000\}$. In Table \ref{tab:sim}, we see that in most cases the best method is either NN-FL or SAS-FL. Even for the stochastic block model example, NN-FL seems to have the best performance.
Moreover, in Example 3, we see that SAS and SAS-FL suffer greatly due to the nearly constant behavior of the degree function ($g_1$, $g_2$ in Assumption \ref{as5}). In contrast, NN-FL and $\ell_1$-FL are not affected by the degree issue and offer strong performance.
Figures \ref{fig:ex1}-\ref{fig:ex4} allow us to visualize the comparisons of different methods in the examples considered. In Figure \ref{fig:ex1}, we can see that NN-FL gives a more detailed recovery of the blocks compared to all the other methods. As for Figure \ref{fig:ex2}, we see that with $n = 500$, all the competing methods are comparable, although, based on MSE, SAS-FL is the best approach. In Figure \ref{fig:ex3}, we clearly see the effect of the degree on the performance of SAS and SAS-FL.
\subsection{Link prediction}
We now validate the methods studied in this paper in the task of link prediction. To that end, we consider two different datasets. For our first example we use the Protein230 dataset from \cite{butland2005interaction}. This consists of 230 proteins and their interactions encoded in 595 edges. In our second example, we use the Yeast protein interaction network from \cite{butland2005interaction}. This is a larger network consisting of 2361 nodes and 6646 edges.
Using the data described above, we evaluate the prediction performance of different methods as follows. In each case, we remove some observations of the matrix $A \in \mathbb{R}^{n \times n}$, thus rather than observing $A$, we assume that the data is $\tilde{A}$ where
\[
\begin{array}{lll}
\tilde{A}_{i,j} & = & \rho_{i,j}\,A_{i,j},\\
\rho_{i,j} & \stackrel{\mathrm{i.i.d.}}{\sim} & \text{Bernoulli}(0.8), \quad \forall\, i,j \in [n].
\end{array}
\]
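The observation model displayed above can be simulated in a few lines. This is a sketch with made-up variable names; the toy Erd\H{o}s--R\'enyi matrix is only for illustration.

```python
import numpy as np

def observe(A, keep_prob=0.8, seed=0):
    """Return (A_tilde, rho): each entry of A is kept independently
    with probability keep_prob and zeroed otherwise."""
    rng = np.random.default_rng(seed)
    rho = (rng.random(A.shape) < keep_prob).astype(A.dtype)  # rho_ij ~ Bernoulli(0.8)
    return rho * A, rho

# Toy symmetric adjacency matrix, for illustration only:
rng = np.random.default_rng(1)
U = (rng.random((100, 100)) < 0.05).astype(float)
A = np.triu(U, 1) + np.triu(U, 1).T
A_tilde, rho = observe(A)
```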
For each data example we generate $50$ trials of $\tilde{A}$, and for each instance of $\tilde{A}$ we fit the different estimators. For each estimator we compute the area under the curve (AUC) of the receiver operating characteristic (ROC), and we report the average over the $50$ trials, referring to it simply as AUC-ROC.
With the setting described above,
Table \ref{tab:real} reports the average AUC-ROC of the competing methods in each of the considered examples. We can see that in both examples, NN-FL and $\ell_1$-FL are the most competitive estimators. As a sanity check, we also computed the area under the precision-recall curve, and found that in both cases the best approach was NN-FL.
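For concreteness, the AUC-ROC score on held-out entries can be computed as below. This is a self-contained sketch using the Mann-Whitney rank formulation of the AUC; `P_hat` in the usage comment is a placeholder for any fitted probability matrix.

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney rank statistic; tied scores get
    their average rank."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                    # average ranks over ties
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Typical use: score the held-out entries of a fitted matrix P_hat, e.g.
#   auc = auc_roc(P_hat[rho == 0], A[rho == 0])
print(auc_roc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # -> 0.75
```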
\begin{table}[t!]
\centering
\caption{ \label{tab:real} Average AUC-ROC for the competing methods under Examples 1 and 2. }
\centering
\smallskip
\begin{small}
\begin{tabular}{p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} p{3.5pc} }
Example & NN-FL & $\ell_1$-FL & SAS-FL & SAS & USVT & NS \\
1 &\textbf{0.84} & 0.83 &0.76 &0.84 &0.60 &0.69 \\
2 & 0.92 &\textbf{0.93} & 0.80 &0.82 & 0.39 & 0.92 \\
\end{tabular}
\end{small}
\end{table}
\section{Conclusion}
We have studied a novel class of graphon estimators based on a two-step approach that combines the nearest neighbor algorithm with the graph fused lasso.
Overall, the estimators perform competitively on both simulated and real data.
Statistical guarantees have been provided, although some questions remain open. For instance, we have not studied the statistical performance when the graphon is piecewise H\"{o}lder with exponent in the interval $[1/2,1]$.
We also leave for future work understanding the convergence properties of $\ell_1$-FL, which also appears to be a reasonable approach, at least in the examples considered here.
\newenvironment{newfigure}{\begin{figure}\vspace*{1em}}
{\vspace{-.25em}\mbox{}\hfill\rule{\textwidth}{.6pt}\hfill\mbox{}\end{figure}}
\newcommand{\Sp}{\ensuremath{\mathcal{S}}}
\newcommand{\T}{\ensuremath{\mathcal{T}}}
\newcommand{\w}[1]{\ensuremath{\widetilde{#1}}}
\newcommand{\tup}[1]{\ensuremath{\mathbf{#1}}}
\makeatletter
\def\rangeref#1#2{page\@ifundefined{r@#1}{s {\bf ?--?}\@warning
{Reference `#1' on page \thepage \space
undefined}}{\@ifundefined{r@#2}{s {\bf ?--?}\@warning
{Reference `#2' on page \thepage \space
undefined}}{\edef\@tempa{\@nameuse{r@#1}}\expandafter
\@tempcnta\expandafter\@cdr\@tempa\@nil\relax
\edef\@tempa{\@nameuse{r@#2}}\expandafter
\@tempcntb\expandafter\@cdr\@tempa\@nil\relax
\ifnum\@tempcnta=\@tempcntb \textilde\pageref{#1}\else
s \pageref{#1}--\pageref{#2}\fi}}}
\makeatother
\newtheorem{definitionx}{Definition}
\newenvironment{definition}[1]{\begin{definitionx}[#1]\rm}{\end{definitionx}}
\newtheorem{theoremx}{Theorem}
\newenvironment{theorem}[1]{\begin{theoremx}[#1]{\rm}}{\end{theoremx}}
\begin{document}
\maketitle
\begin{abstract}
Auxiliary variables are often needed for verifying that an implementation is
correct with respect to a higher-level specification. They augment the formal
description of the implementation without changing its semantics---that is,
the
set of behaviors that it describes. This paper explains rules for adding
history, prophecy, and stuttering variables to \tlaplus\ specifications,
ensuring that the augmented specification is equivalent to the original one.
The rules are explained with toy examples, and they are used to verify
the correctness of a simplified version of a snapshot algorithm due to
Afek et al.
\end{abstract}
\tableofcontents
\newpage
\section{Introduction}
With state-based methods, checking that an implementation satisfies a
higher-level specification requires describing how the higher-level
concepts in the specification are represented by the lower-level data
structures of the implementation. This approach was first proposed in
the domain of sequential systems by Hoare in 1972~\cite{hoare:proof}.
Hoare called the description an \emph{abstraction function}. The
generalization to concurrent systems was called a \emph{refinement
mapping} by Abadi and Lamport~\cite{abadi:existence}. They observed
that constructing a refinement mapping may require adding auxiliary
variables to the implementation---variables that do not alter the
behavior of the actual variables and need not be implemented.
This paper is about adding auxiliary variables to \tlaplus\
specifications. The ideas we present should be applicable to other
state-based specification methods, but we make no attempt to translate
them into those other methods. We hope that a future paper will
present the basic ideas in a language-independent way and will contain
soundness and completeness proofs. Our goal here is to teach
engineers writing \tlaplus\ specifications how to add auxiliary
variables when they need them.
We assume the reader can understand \tlaplus\
specifications. A basic understanding of refinement mappings will be
helpful but isn't necessary. \tlaplus\ and refinement mappings are
explained in the book \emph{Specifying
Systems}~\cite{lamport:tla-book} and in material listed on the TLA web
page~\cite{lamport:tla-webpage}.
This is a long paper, in part because it contains 25 figures
with actual \tlaplus\ specifications.
The paper contains hyperlinks, and we recommend
reading the pdf version on line.
If you are doing that, you can download the source files for all
the \tlaplus\ specifications described in this paper by
\hyperref{http://research.microsoft.com/en-us/um/people/lamport/tla/auxiliary/auxiliary.html}{}{}{clicking here}.
Otherwise, you can find the URL in the reference
list~\cite{lamport:auxiliary-variables-web}.
We expect that engineers will have to study the specifications
carefully to learn how to add auxiliary variables to their
specifications.
We explain three kinds of auxiliary variables: history, prophecy, and
stuttering variables. History variables record information about the
system's past behavior. They have been used since at least the
1970s~\cite{owicki:verifying}. They were sometimes called ``ghost''
variables. Prophecy variables predict the future behavior of the
system. They were introduced by Abadi and Lamport in
1991~\cite{abadi:existence}. The need for them was also implicit in
an example presented in Herlihy and Wing's classic paper on
linearizability~\cite{herlihy:axioms}. We found the original prophecy
variables very difficult to use in practice. The prophecy variables
described here are new, and our experience with them so far indicates
that they are reasonably easy to use in practice. Stuttering
variables add ``stuttering'' steps---ones that leave the
specification's actual variables unchanged. Abadi and Lamport
originally used prophecy variables to add stuttering steps, but we
have found it better to introduce stuttering steps with a separate
kind of variable.
We will mostly ignore liveness and consider only safety
specifications. The canonical form of a \tlaplus\ specification
consists of a safety specification of the form
\tlabox{Init /\ [][Next]_{vars}}
conjoined with a liveness condition. An auxiliary variable is added
by modifying the safety specification, but leaving the liveness
condition unchanged. Liveness therefore poses no problem for auxiliary
variables and is discussed only briefly.
\section{Refinement Mappings}
We will illustrate refinement mappings with a simple, useless example.
A user presents a server with a sequence of integer inputs. The
server responds to each input value $i$ with one of the following
outputs: $Hi$ if $i$ is the largest number input so far, $Lo$ if it's
the smallest number input so far, $Both$ if it's both, and $None$ if
it's neither. We declare $Hi$, $Lo$, $Both$, and $None$ in a
\textsc{constants} statement. They are assumed not to be integers.
\subsection{Specification \emph{MinMax}1}
Our first specification appears in a module named $MinMax1$. It
describes the interaction of the user and the server with two
variables: a variable $x$ to hold an input or a response, and a
variable $turn$ that indicates whether it's the user's turn to input a
value or the server's turn to respond. The specification also uses a
variable $y$ to hold the set of values input so far. The initial
predicate is
\begin{display}
\begin{notla}
Init == /\ x = None
/\ turn = "input"
/\ y = {}
\end{notla}
\begin{tlatex}
\@x{ Init \.{\defeq}\@s{4.1} \.{\land} x \.{=}None
\@x{\@s{39.80} \.{\land} turn \.{=}\@w{input}
\@x{\@s{39.80} \.{\land} y \.{=} \{ \}
\end{tlatex}
\end{display}
The next-state relation $Next$ equals $InputNum \/ Respond$ where
$InputNum$ is the user's input action and $Respond$ is the server's output
action. The definition of $InputNum$ is simple:
\begin{display}
\begin{notla}
InputNum == /\ turn = "input"
/\ turn' = "output"
/\ x' \in Int
/\ y' = y
\end{notla}
\begin{tlatex}
\@x{ InputNum \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{input}
\@x{\@s{68.01} \.{\land} turn \.{'} \.{=}\@w{output}
\@x{\@s{68.01} \.{\land} x \.{'} \.{\in} Int
\@x{\@s{68.01} \.{\land} y \.{'}\@s{0.10} \.{=} y
\end{tlatex}
\end{display}
To define the $Respond$ action, we must first define operators
$setMax$ and $setMin$ so that, for any finite nonempty set $S$ of
integers, $setMax(S)$ and $setMin(S)$ are the maximum and minimum
element, respectively, of $S$. The definitions are:
\begin{display}
\begin{notla}
setMax(S) == CHOOSE t \in S : \A s \in S : t >= s
setMin(S) == CHOOSE t \in S : \A s \in S : t =< s
\end{notla}
\begin{tlatex}
\@x{ setMax ( S )\@s{4.1} \.{\defeq}\@s{4.1} {\CHOOSE} t \.{\in} S \.{:} \A\,
s \.{\in} S \.{:} t \.{\geq} s
\@pvspace{4.0pt
\@x{ setMin ( S )\@s{5.59} \.{\defeq}\@s{4.09} {\CHOOSE} t \.{\in} S \.{:}
\A\, s \.{\in} S \.{:} t \.{\leq} s
\end{tlatex}
\end{display}
The definition of $Respond$ is:
\begin{display}
\begin{notla}
Respond == /\ turn = "output"
/\ turn' = "input"
/\ y' = y \cup {x}
/\ x' = IF x = setMax(y')
THEN IF x = setMin(y') THEN Both ELSE Hi
ELSE IF x = setMin(y') THEN Lo ELSE None
\end{notla}
\begin{tlatex}
\@x{ Respond \.{\defeq} \.{\land} turn \.{=}\@w{output}
\@x{\@s{55.83} \.{\land} turn \.{'} \.{=}\@w{input}
\@x{\@s{55.83} \.{\land} y \.{'}\@s{0.10} \.{=} y \.{\cup} \{ x \}
\@x{\@s{55.83} \.{\land} x \.{'} \.{=} {\IF} x \.{=} setMax ( y \.{'} )
\@x{\@s{97.13} \.{\THEN} {\IF} x \.{=} setMin ( y \.{'} ) \.{\THEN} Both
\.{\ELSE} Hi
\@x{\@s{97.13} \.{\ELSE} {\IF} x \.{=} setMin ( y \.{'} ) \.{\THEN}
Lo\@s{9.84} \.{\ELSE} None
\end{tlatex}
\end{display}
Note that action $InputNum$ is enabled iff $turn$ equals $"input"$,
and action $Respond$ is enabled iff $turn$ equals $"output"$.
The complete specification is the formula
\[ Spec == Init /\ [][Next]_{vars}\]
where $vars$ is the tuple $<<x, turn, y>>$ of variables. The module
$MinMax1$ we have written thus far is shown in
\lref{targ:MinMax1}{Figure~\ref{fig:MinMax1}}.
\begin{figure} \target{targ:MinMax1}
\begin{notla}
----------------------------- MODULE MinMax1 -----------------------------
EXTENDS Integers
setMax(S) == CHOOSE t \in S : \A s \in S : t >= s
setMin(S) == CHOOSE t \in S : \A s \in S : t =< s
CONSTANTS Lo, Hi, Both, None
ASSUME {Lo, Hi, Both, None} \cap Int = { }
VARIABLES x, turn, y
vars == <<x, turn, y>>
Init == /\ x = None
/\ turn = "input"
/\ y = {}
InputNum == /\ turn = "input"
/\ turn' = "output"
/\ x' \in Int
/\ y' = y
Respond == /\ turn = "output"
/\ turn' = "input"
/\ y' = y \cup {x}
/\ x' = IF x = setMax(y')
THEN IF x = setMin(y') THEN Both ELSE Hi
ELSE IF x = setMin(y') THEN Lo ELSE None
Next == InputNum \/ Respond
Spec == Init /\ [][Next]_vars
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} MinMax1}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers
\@pvspace{8.0pt
\@x{ setMax ( S ) \.{\defeq} {\CHOOSE} t \.{\in} S \.{:} \A\, s \.{\in} S
\.{:} t \.{\geq} s
\@x{ setMin ( S )\@s{1.49} \.{\defeq} {\CHOOSE} t \.{\in} S \.{:} \A\, s
\.{\in} S \.{:} t \.{\leq} s
\@pvspace{8.0pt
\@x{ {\CONSTANTS} Lo ,\, Hi ,\, Both ,\, None
\@x{ {\ASSUME} \{ Lo ,\, Hi ,\, Both ,\, None \} \.{\cap} Int \.{=} \{ \}
\@pvspace{8.0pt
\@x{ {\VARIABLES} x ,\, turn ,\, y
\@x{ vars \.{\defeq} {\langle} x ,\, turn ,\, y {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq}\@s{4.1} \.{\land} x \.{=} None
\@x{\@s{41.82} \.{\land} turn \.{=}\@w{input}
\@x{\@s{41.82} \.{\land} y \.{=} \{ \}
\@pvspace{8.0pt
\@x{ InputNum \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{input}
\@x{\@s{68.01} \.{\land} turn \.{'} \.{=}\@w{output}
\@x{\@s{68.01} \.{\land} x \.{'} \.{\in} Int
\@x{\@s{68.01} \.{\land} y \.{'}\@s{0.10} \.{=} y
\@pvspace{8.0pt
\@x{ Respond \.{\defeq} \.{\land} turn\@s{1.52} \.{=}\@w{output}
\@x{\@s{55.83} \.{\land} turn \.{'} \.{=}\@w{input}
\@x{\@s{55.83} \.{\land} y \.{'}\@s{0.10} \.{=} y \.{\cup} \{ x \}
\@x{\@s{55.83} \.{\land} x \.{'} \.{=} {\IF} x \.{=} setMax ( y \.{'} )
\@x{\@s{97.13} \.{\THEN} {\IF} x \.{=} setMin ( y \.{'} ) \.{\THEN} Both
\.{\ELSE} Hi
\@x{\@s{97.13} \.{\ELSE} {\IF} x \.{=} setMin ( y \.{'} ) \.{\THEN}
Lo\@s{9.84} \.{\ELSE} None
\@pvspace{8.0pt
\@x{ Next \.{\defeq} InputNum \.{\lor} Respond
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{MinMax}1.} \label{fig:MinMax1}
\end{figure}
\subsection{The Hiding Operator \protect\EE}
Recall that a behavior is a sequence of states, where a state is an
assignment of values to all possible variables. For specification
$Spec$ of module $MinMax1$, the interesting part of the state is the
assignment of values to $x$, $turn$, and $y$. Our specification
allows all other variables to have any value at any state of any
behavior.
The purpose of this specification is to describe the interaction of
the user and the server. This interaction is described by the values
of $x$ and $turn$. The value of $y$ is needed only to describe how
the values of $x$ and $turn$ can change. We consider $x$ and $turn$
to be the externally visible or observable values of the specification and $y$
to be an internal variable. A philosophically correct specification of
our user/server system would allow only behaviors in which the values
of $x$ and $turn$ are as specified by $Spec$, but would not constrain
the value of $y$. We can write such a specification in terms of the
temporal-logic operator~$\EE$.
For any temporal formula $F$ and variable $v$, the formula
\tlabox{\EE v : F}
is defined approximately as follows. A behavior $\sigma$ satisfies
\tlabox{\EE v : F}
iff there exists a behavior $\tau$ satisfying $F$ such that $\tau$ is
identical to $\sigma$ except for the values its states assign to $v$.
The precise definition is more complicated because a temporal formula
of \tlaplus\ may neither require nor prohibit stuttering steps, but we
will use the approximate definition for now. The operator $\EE$
is much like the ordinary existential quantifier $\E$ except that
\tlabox{\EE v : F}
asserts the existence not of a single value for $v$ that makes $F$
true but rather of a sequence of values, one for each state in the
behavior, that makes $F$ true on the behavior. This temporal existential
quantifier $\EE$ satisfies most of the properties of ordinary quantification.
For example, if the variable $v$ does not occur in formula $F$, then
\tlabox{\EE v : F} is equivalent to $F$. We sometimes read the formula
\tlabox{\EE v : F} as ``$F$ with $v$ hidden''.
The philosophically correct specification of the $MinMax1$ system
should consist of formula $Spec$ with $y$ hidden. The obvious way to
write this specification is \tlabox{\EE y : Spec}. However, we can't
do that for the following reason. Suppose a module $M$ defines $exp$
to equal some expression. \tlaplus\ does not allow the expression
\begin{equation} \label{eq1}
\{v \in exp : v^{2}>42\}
\end{equation}
to appear at any point in module $M$ where $v$ is already declared or
defined. Since $exp$ must be defined for the expression to have a
meaning, this means that (\ref{eq1}) is illegal if $v$ is a declared
variable that appears in the definition of $exp$. Similarly, the formula
\tlabox{\EE y : Spec} is illegal because $y$ appears in the definition
of $Spec$.\footnote{There
are languages for writing math precisely that allow expression
(\ref{eq1}) even if $v$ is already declared. In such a language,
\tlabox{\EE y : Spec} would be equivalent to \tlabox{\EE w : Spec} for
any identifier $w$, which means it would be equivalent to $Spec$.}
There are ways to write the formula $Spec$ with $v$ hidden in
\tlaplus. The most convenient ones involve writing it in another
module that instantiates module $MinMax1$. Chapter~4 of
\emph{Specifying Systems}~\cite{lamport:tla-book} explains one way to
do this. However, there's little reason to do it since the \tlaplus\
tools cannot check specifications written with $\EE$. (The TLAPS
proof system may eventually be able to reason about it.) Instead, we
take the formula \tlabox{\EE y : Spec} to be an abbreviation for the
formula \tlabox{\EE y : \Mmap{Spec}}, where \Mmap{Spec} is the formula
obtained from $Spec$ by expanding all definitions. Formula
\Mmap{Spec} contains only: \tlaplus\ primitives; the constants $Hi$,
$Lo$, $Both$, and $None$; and the variables $x$, $turn$, and $y$.
Thus \tlabox{\EE y : Spec} is meaningful in a context in which $x$ and
$turn$ are declared variables. If used in a context in which $y$
already has a meaning, we interpret \tlabox{\EE y : Spec} to be the
formula obtained from \tlabox{\EE y : \Mmap{Spec}} by replacing $y$
everywhere with a new symbol.
What
\label{pageMmap}
it means to expand all definitions in an expression is not
as simple as it might seem. Consider the following definition:
\begin{equation}
NotUnique(a) == \E i : i # a \NOTLA\label{eq:subs-1}
\end{equation}
It's clear that the following theorem is true:
\begin{equation}
\THEOREM \tlabox{\A a : NotUnique(a)} \NOTLA\label{eq:subs-2}
\end{equation}
Now suppose we follow the definition of $NotUnique$ with:
\begin{equation}
\begin{noj}
\CONSTANT i \V{.2}
\THEOREM NotUnique(i)
\end{noj}
\NOTLA\label{eq:subs-3}
\end{equation}
Theorem (\ref{eq:subs-2})
obviously implies the theorem of (\ref{eq:subs-3}). However, a naive
expansion of the definition of $NotUnique$ tells us that
\Mmap{NotUnique(i)} equals
\tlabox{\E i : i # i},
which equals \FALSE. The problem is clear: the bound identifier $i$ in
the definition of $NotUnique$ is not the same $i$ as the one declared
in the \textsc{constant} declaration. The following definition of
$NotUnique$ is equivalent to (\ref{eq:subs-1})
\[ NotUnique(a) == \E jku : jku # a
\]
and with the naive expansion of this definition,
\Mmap{NotUnique(i)} equals the true formula
\tlabox{\E jku : jku # i}
of (\ref{eq:subs-3}).
The easiest way to define the meaning of expanding all definitions in
an expression is to consider (\ref{eq:subs-1}) to define
$NotUnique(a)$ to equal something like
\tlabox{\E v\_743 : v\_743 # a},
where $v\_743$ is an identifier that cannot be used anywhere else. In
general, every bound identifier in a definition is replaced by some
unique identifier.
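The capture problem and its fix by renaming bound identifiers can be illustrated outside of TLA\super{+}. The following Python sketch (our own illustration, not part of the paper) performs naive and capture-avoiding substitution on a tiny formula tree, reusing the paper's $v\_743$ as the fresh name:

```python
# Formulas are tuples: ("var", name) | ("neq", f, g) | ("exists", bound, body)

def subst_naive(formula, var, value):
    """Replace occurrences of `var` by `value`, ignoring binders."""
    kind = formula[0]
    if kind == "var":
        return ("var", value) if formula[1] == var else formula
    if kind == "neq":
        return ("neq", subst_naive(formula[1], var, value),
                       subst_naive(formula[2], var, value))
    return ("exists", formula[1], subst_naive(formula[2], var, value))

def subst_safe(formula, var, value, fresh="v_743"):
    """Alpha-rename the bound identifier to a fresh name, then substitute.
    (Handles a single binder, which is all this example needs.)"""
    if formula[0] == "exists":
        body = subst_naive(formula[2], formula[1], fresh)   # rename i -> v_743
        return ("exists", fresh, subst_naive(body, var, value))
    return subst_naive(formula, var, value)

# NotUnique(a) == \E i : i # a, then instantiate a with the constant i:
not_unique = ("exists", "i", ("neq", ("var", "i"), ("var", "a")))

print(subst_naive(not_unique, "a", "i"))
# -> ('exists', 'i', ('neq', ('var', 'i'), ('var', 'i')))   the constant is captured
print(subst_safe(not_unique, "a", "i"))
# -> ('exists', 'v_743', ('neq', ('var', 'v_743'), ('var', 'i')))
```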
Recursive definitions are not a problem for complete expansion of
definitions because in \tlaplus, a recursive definition is just an
abbreviation for a non-recursive one. For example
\[ f[i \in Nat] == \IF {i=0}\THEN 1 \ELSE i*f[i-1]
\]
is an abbreviation for
\[ f == \CHOOSE f : f = [i\in Nat |-> \IF {i=0}\THEN 1 \ELSE i*f[i-1]]
\]
so the bound identifier $f$ to the right of the ``$\deq$'' is not the
same symbol as the $f$ being defined. (A recursive operator
definition is an abbreviation for a much more complicated ordinary
definition.)\label{pageMmapx}
\subsection{Specification \emph{MinMax}2}
The specification of our system in module $MinMax1$ uses the variable
$y$ to remember the set of all values that the user has input. Module
$MinMax2$ specifies the same user/server interaction, but it
remembers only the smallest and largest values input so far, using the
variables $min$ and $max$. Representing the initial state, before any
values have been input, is a little tricky. It would be simpler if
the standard $Integers$ module defined a value $\infty$ such that
$-\infty < i < \infty$ for all integers $i$. So, we will write the
spec pretending that it did. Afterwards, we'll describe how to obtain
an actual \tlaplus\ spec.
The initial predicate of the specification is:
\begin{display}
\begin{notla}
Init == /\ x = None
/\ turn = "input"
/\ min = Infinity
/\ max = MinusInfinity
\end{notla}
\begin{tlatex}
\@x{ Init\@s{4.1} \.{\defeq}\@s{4.1} \.{\land} x \.{=}None
\@x{\@s{43.90} \.{\land} turn \.{=}\@w{input}
\@x{\@s{43.90} \.{\land} min\@s{1.49} \.{=} \infty
\@x{\@s{43.90} \.{\land} max \.{=} -\infty
\end{tlatex}
\end{display}
The user's $InputNum$ action is the same as in the $MinMax1$ specification,
except that it leaves $min$ and $max$, rather than $y$, unchanged:
\begin{display}
\begin{notla}
InputNum == /\ turn = "input"
/\ turn' = "output"
/\ x' \in Int
/\ UNCHANGED <<min, max>>
\end{notla}
\begin{tlatex}
\@x{ InputNum\@s{4.1} \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{input}
\@x{\@s{72.11} \.{\land} turn \.{'} \.{=}\@w{output}
\@x{\@s{72.11} \.{\land} x \.{'} \.{\in} Int
\@x{\@s{72.11} \.{\land} {\UNCHANGED} {\langle} min ,\, max {\rangle}
\end{tlatex}
\end{display}
Here is the system's $Respond$ action:
\begin{display}
\begin{notla}
Respond == /\ turn = "output"
/\ turn' = "input"
/\ min' = IF x =< min THEN x ELSE min
/\ max' = IF x >= max THEN x ELSE max
/\ x' = IF x = max'
THEN IF x = min' THEN Both ELSE Hi
ELSE IF x = min' THEN Lo ELSE None
\end{notla}
\begin{tlatex}
\@x{ Respond\@s{4.1} \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{output}
\@x{\@s{64.03} \.{\land} turn \.{'} \.{=}\@w{input}
\@x{\@s{64.03} \.{\land} min \.{'}\@s{1.49} \.{=} {\IF} x \.{\leq}
min\@s{1.49} \.{\THEN} x \.{\ELSE} min
\@x{\@s{64.03} \.{\land} max \.{'} \.{=} {\IF} x \.{\geq} max \.{\THEN} x
\.{\ELSE} max
\@x{\@s{64.03} \.{\land} x \.{'} \.{=} {\IF} x \.{=} max \.{'}
\@x{\@s{105.33} \.{\THEN} {\IF} x \.{=} min \.{'} \.{\THEN} Both \.{\ELSE}
Hi
\@x{\@s{105.33} \.{\ELSE} {\IF} x \.{=} min \.{'} \.{\THEN} Lo\@s{9.84}
\.{\ELSE} None
\end{tlatex}
\end{display}
As usual, the complete specification is
\[ Spec == Init \;/\ \; [][Next]_{vars}\]
where this time $vars$ is the tuple $<<x, turn, min, max>>$ of variables.
To turn this into a \tlaplus\ specification, we replace $\infty$ and
$-\infty$ by two constants $Infinity$ and $MinusInfinity$. In the definition
of $Respond$, we replace $x\leq min$ and $x\geq max$ by
$IsLeq(x, min)$ and $IsGeq(x, max)$, where $IsLeq$ and $IsGeq$ are defined
by
\begin{display}
\begin{notla}
IsLeq(i, j) == (j = Infinity) \/ (i =< j)
IsGeq(i, j) == (j = MinusInfinity) \/ (i >= j)
\end{notla}
\begin{tlatex}
\@x{ IsLeq ( i ,\, j )\@s{6.07} \.{\defeq}\@s{4.1} ( j \.{=} Infinity )
\.{\lor} ( i \.{\leq} j )
\@pvspace{4.0pt
\@x{ IsGeq ( i ,\, j )\@s{4.09} \.{\defeq}\@s{4.10} ( j \.{=} MinusInfinity )
\.{\lor} ( i \.{\geq} j )
\end{tlatex}
\end{display}
These definitions must be preceded by declarations or definitions of
$Infinity$ and $MinusInfinity$. They can equal any values, except that
they must not be equal and neither of them should be an
integer---otherwise, the spec wouldn't mean what we want it to mean.
We could declare $Infinity$ and $MinusInfinity$ in a \textsc{constant}
declaration and then add an \textsc{assume} statement asserting that
they aren't in $Int$. However, we prefer to define them like this:
\begin{display}
\begin{notla}
Infinity == CHOOSE n : n \notin Int
MinusInfinity == CHOOSE n : n \notin (Int \cup {Infinity})
\end{notla}
\begin{tlatex}
\@x{ Infinity\@s{31.21} \.{\defeq}\@s{4.1} {\CHOOSE} n \.{:} n \.{\notin}
Int
\@pvspace{4.0pt
\@x{ MinusInfinity\@s{4.10} \.{\defeq}\@s{4.10} {\CHOOSE} n \.{:} n
\.{\notin} ( Int \.{\cup} \{ Infinity \} )
\end{tlatex}
\end{display}
This completes module $MinMax2$, which is shown in its entirety
in \lref{targ:MinMax2}{Figure~\ref{fig:MinMax2}}.
\begin{figure} \target{targ:MinMax2}
\begin{notla}
----------------------------- MODULE MinMax2 -----------------------------
EXTENDS Integers, Sequences
CONSTANTS Lo, Hi, Both, None
ASSUME {Lo, Hi, Both, None} \cap Int = { }
Infinity == CHOOSE n : n \notin Int
MinusInfinity == CHOOSE n : n \notin (Int \cup {Infinity})
IsLeq(i, j) == (j = Infinity) \/ (i =< j)
IsGeq(i, j) == (j = MinusInfinity) \/ (i >= j)
VARIABLES x, turn, min, max
vars == <<x, turn, min, max>>
Init == /\ x = None
/\ turn = "input"
/\ min = Infinity
/\ max = MinusInfinity
InputNum == /\ turn = "input"
/\ turn' = "output"
/\ x' \in Int
/\ UNCHANGED <<min, max>>
Respond == /\ turn = "output"
/\ turn' = "input"
/\ min' = IF IsLeq(x, min) THEN x ELSE min
/\ max' = IF IsGeq(x, max) THEN x ELSE max
/\ x' = IF x = max' THEN IF x = min' THEN Both ELSE Hi
ELSE IF x = min' THEN Lo ELSE None
Next == InputNum \/ Respond
Spec == Init /\ [][Next]_vars
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} MinMax2}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers ,\, Sequences
\@pvspace{8.0pt
\@x{ {\CONSTANTS} Lo ,\, Hi ,\, Both ,\, None
\@x{ {\ASSUME} \{ Lo ,\, Hi ,\, Both ,\, None \} \.{\cap} Int \.{=} \{ \}
\@pvspace{8.0pt
\@x{ Infinity\@s{27.11} \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} Int
\@x{ MinusInfinity \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} ( Int \.{\cup}
\{ Infinity \} )
\@pvspace{8.0pt
\@x{ IsLeq ( i ,\, j )\@s{1.97} \.{\defeq} ( j \.{=} Infinity ) \.{\lor} ( i
\.{\leq} j )
\@x{ IsGeq ( i ,\, j ) \.{\defeq} ( j \.{=} MinusInfinity ) \.{\lor} ( i
\.{\geq} j )
\@pvspace{8.0pt
\@x{ {\VARIABLES} x ,\, turn ,\, min ,\, max
\@x{ vars \.{\defeq} {\langle} x ,\, turn ,\, min ,\, max {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq}\@s{4.1} \.{\land} x \.{=} None
\@x{\@s{41.82} \.{\land} turn \.{=}\@w{input}
\@x{\@s{41.82} \.{\land} min\@s{1.49} \.{=} Infinity
\@x{\@s{41.82} \.{\land} max \.{=} MinusInfinity
\@pvspace{8.0pt
\@x{ InputNum \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{input}
\@x{\@s{68.01} \.{\land} turn \.{'} \.{=}\@w{output}
\@x{\@s{68.01} \.{\land} x \.{'} \.{\in} Int
\@x{\@s{68.01} \.{\land} {\UNCHANGED} {\langle} min ,\, max {\rangle}
\@pvspace{8.0pt
\@x{ Respond\@s{8.07} \.{\defeq}\@s{4.1} \.{\land} turn \.{=}\@w{output}
\@x{\@s{68.01} \.{\land} turn \.{'} \.{=}\@w{input}
\@x{\@s{68.01} \.{\land} min \.{'}\@s{1.49} \.{=} {\IF} IsLeq ( x ,\, min
)\@s{3.47} \.{\THEN} x \.{\ELSE} min
\@x{\@s{68.01} \.{\land} max \.{'} \.{=} {\IF} IsGeq ( x ,\, max ) \.{\THEN}
x \.{\ELSE} max
\@x{\@s{68.01} \.{\land} x \.{'} \.{=} {\IF} x \.{=} max \.{'} \.{\THEN}
{\IF} x \.{=} min \.{'} \.{\THEN} Both \.{\ELSE} Hi
\@x{\@s{154.37} \.{\ELSE} {\IF} x \.{=} min \.{'} \.{\THEN} Lo\@s{9.84}
\.{\ELSE} None
\@pvspace{8.0pt
\@x{ Next \.{\defeq} InputNum \.{\lor} Respond
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{MinMax}2.} \label{fig:MinMax2}
\end{figure}
As before, we consider $x$ and $turn$ to be the externally visible
variables, and $min$ and $max$ to be internal variables. The
philosophically correct specification, which hides the internal
variables $min$ and $max$, is \tlabox{\EE min, max : Spec}. Of
course, this is an abbreviation for
\tlabox{\EE min : (\EE max : Spec)},
which is equivalent to
\tlabox{\EE max : (\EE min : Spec)}.
\subsection{The Relation Between the Two Specifications} \label{sec:relation}
Using the standard \tlaplus\ naming convention, we have given the two
specifications the same name $Spec$. To distinguish them, let
$Spec_{1}$ be the specification $Spec$ of module $MinMax1$ and
$Spec_{2}$ be the $Spec$ of module $MinMax2$.
It should be clear that both specifications describe the same behavior
of the external variables $x$ and $turn$. This means that if we hide
the internal variable $y$ of $Spec_{1}$ and the internal variables $min$
and $max$ of $Spec_{2}$, we should obtain equivalent specifications.
More precisely, we expect this to be true:
\begin{equation} \label{eqT1}
(\EE y : Spec_{1}) \ \equiv \ (\EE min, max : Spec_{2})
\end{equation}
This formula is equivalent to the conjunction of these two formulas
\begin{eqnarray}
(\EE y : Spec_{1}) \ => \ (\EE min, max : Spec_{2}) \label{eqT1a} \V{.2}
(\EE min, max : Spec_{2}) \ => \ (\EE y : Spec_{1}) \label{eqT1b}
\end{eqnarray}
We verify (\ref{eqT1}) by separately verifying
(\ref{eqT1a}) and (\ref{eqT1b}). We first consider (\ref{eqT1a}).
Formula (\ref{eqT1a}) asserts of a behavior $\sigma$ that if there
exists some way of assigning values to $y$ in the states of $\sigma$
to make it satisfy $Spec_{1}$, then $\sigma$ satisfies
\tlabox{\EE min, max : Spec_{2}}.
Since the variable $y$ does not appear in \tlabox{\EE min, max : Spec_{2}},
changing the values of $y$ in the states of $\sigma$ doesn't affect
whether it satisfies that formula. This implies that to verify
(\ref{eqT1a}), it suffices to show that any behavior $\sigma$ that
satisfies $Spec_{1}$ also satisfies \tlabox{\EE min, max : Spec_{2}}.
In other words, to verify (\ref{eqT1a}), it suffices to verify
\begin{equation} \label{eqT1abis}
Spec_{1} \ => \ (\EE min, max : Spec_{2})
\end{equation}
To verify (\ref{eqT1abis}), we must show that for any behavior
$\sigma$ that satisfies $Spec_{1}$, there exists a way of assigning
values to the variables $min$ and $max$ in the states of $\sigma$ that
makes the resulting behavior satisfy $Spec_{2}$. A standard way of
doing that is to find explicit expressions \ov{min} and \ov{max} such
that, if in each state of a behavior we assign to the variables
$min$ and $max$ the values of \ov{min} and \ov{max} in that
state, then the resulting behavior satisfies $Spec_{2}$. We do this by
showing that any behavior satisfying $Spec_{1}$ satisfies the formula
obtained by substituting \ov{min} for $min$ and \ov{max} for $max$ in
$Spec_{2}$. Let's write that formula
$\ov{\Mmap{Spec_{2}}}$,
emphasizing that we must expand all definitions in $Spec_{2}$ before
substituting \ov{min} for $min$ and \ov{max} for $max$. So,
we verify (\ref{eqT1abis}) by verifying
\begin{equation} \label{eqT1abis2}
Spec_{1} \ => \ \ov{\Mmap{Spec_{2}}}
\end{equation}
We can write formula $\ov{\Mmap{Spec_{2}}}$ (or more precisely, a
formula equivalent to it) in module $MinMax1$ as
follows. We first add the statement
\[ M \ == \ \INSTANCE MinMax2 \WITH min <- \ov{min}, \ max <- \ov{max}\]
For every defined symbol $def$ in module $MinMax2$, this statement
defines $M!def$ to be equivalent to $\ov{\Mmap{def}}$, the formula
obtained by substituting \ov{min} for $min$ and
\ov{max} for $max$ in the formula obtained by expanding all
definitions in the definition of $def$ in
$MinMax2$.\footnote{Note that the declared constants $Hi$, $Lo$, $Both$,
and $None$ of module $MinMax2$ have been implicitly instantiated
by the constants of the same name declared in $MinMax1$.}
This \textsc{instance} statement therefore defines $M!Spec$ to
be equivalent to $\ov{\Mmap{Spec_{2}}}$, allowing us to write
(\ref{eqT1abis2}) in module $MinMax1$ as the theorem:
\[ \THEOREM Spec \ => \ M!Spec
\]
We can write a \tlaplus\ proof of this theorem and check it with the
TLAPS theorem prover. We can also have TLC check this theorem by
creating a model for module $MinMax1$ with specification $Spec$ that
substitutes a finite set of integers for $Int$ and checks the property
$M!Spec$. But before we can do that, we have to determine what the
expressions \ov{min} and \ov{max} are in the \textsc{instance}
statement.
Formula (\ref{eqT1abis2}) asserts that in a behavior $\sigma$
satisfying $Spec_{1}$, if in each state of $\sigma$ we assign to $min$
and $max$ the values of \ov{min} and \ov{max} in that state, then the
resulting behavior satisfies $Spec_{2}$. One way of thinking about
this is that in a behavior satisfying $Spec_{1}$, the values of
\ov{min} and \ov{max} simulate the values that $Spec_{2}$ requires
$min$ and $max$ to assume.
A little thought reveals that \ov{min} and \ov{max} should be defined
as indicated in this statement:
\begin{display}
\begin{notla}
M == INSTANCE MinMax2
WITH min <- IF y = {} THEN Infinity ELSE setMin(y),
max <- IF y = {} THEN MinusInfinity ELSE setMax(y)
\end{notla}
\begin{tlatex}
\@x{ M \.{\defeq} {\INSTANCE} MinMax2
\@x{\@s{37.69} {\WITH} min\@s{1.49} \.{\leftarrow} {\IF} y \.{=} \{ \}
\.{\THEN} Infinity\@s{27.11} \.{\ELSE} setMin ( y ) ,\,
\@x{\@s{68.67} max \.{\leftarrow} {\IF} y \.{=} \{ \} \.{\THEN} MinusInfinity
\.{\ELSE} setMax ( y )
\end{tlatex}
\end{display}
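For example, suppose (purely for illustration) that the values 3 and 5
have been input and responded to, so a behavior of $Spec_{1}$ reaches a
state in which $y = \{3, 5\}$. In that state the substitution assigns
\[ \ov{min} \ = \ setMin(\{3, 5\}) \ = \ 3 \qquad
   \ov{max} \ = \ setMax(\{3, 5\}) \ = \ 5 \]
which are exactly the values that $Spec_{2}$ requires $min$ and $max$ to
have at that point; and in the initial state, where $y = \{\}$, it
assigns them the values $Infinity$ and $MinusInfinity$ that $Spec_{2}$
assigns them initially.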
Of course, we need to define $Infinity$ and $MinusInfinity$ before we
can write that statement. They should be defined to be the same
values as in $MinMax2$, so we just copy the definitions from that
module into module $MinMax1$. We have added the statements in
\lref{targ:MinMax1a}{Figure~\ref{fig:MinMax1a}} to the bottom of the module in
\lref{targ:MinMax1}{Figure~\ref{fig:MinMax1}}.
\begin{figure} \target{targ:MinMax1a}
\vspace*{1em}
\begin{notla}
Infinity == CHOOSE n : n \notin Int
MinusInfinity == CHOOSE n : n \notin (Int \cup {Infinity})
M == INSTANCE MinMax2
WITH min <- IF y = {} THEN Infinity ELSE setMin(y),
max <- IF y = {} THEN MinusInfinity ELSE setMax(y)
\end{notla}
\begin{tlatex}
\@x{ Infinity\@s{27.11} \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} Int
\@x{ MinusInfinity \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} ( Int \.{\cup}
\{ Infinity \} )
\@pvspace{8.0pt
\@x{ M \.{\defeq} {\INSTANCE} MinMax2
\@x{\@s{41.79} {\WITH} min\@s{1.49} \.{\leftarrow} {\IF} y \.{=} \{ \}
\.{\THEN} Infinity\@s{27.11} \.{\ELSE} setMin ( y ) ,\,
\@x{\@s{72.77} max \.{\leftarrow} {\IF} y \.{=} \{ \} \.{\THEN} MinusInfinity
\.{\ELSE} setMax ( y )
\end{tlatex}
\caption{Additions to module \emph{MinMax}1.} \label{fig:MinMax1a}
\end{figure}
\subsection{Refinement In General}
In general, we have two specs: $Spec_{1}$ with variables
$x_{1},\ldots,x_{m}, y_{1},\ldots,y_{n}$,
and $Spec_{2}$ with variables
$x_{1},\ldots,x_{m}, z_{1},\ldots,z_{p}$. For compactness
let \textbf{x} denote $x_{1},\ldots,x_{m}$, let \textbf{y} denote
$y_{1},\ldots,y_{n}$ and let \textbf{z} denote $z_{1},\ldots,z_{p}$.
We consider \textbf{x} to be the externally visible variables of both
specifications, and we consider \textbf{y} and \textbf{z} to be internal
variables.
The specifications with their internal variables hidden are written
\tlabox{\EE \textbf{y}:Spec_{1}} and \tlabox{\EE \textbf{z}:Spec_{2}}.
To verify that
\tlabox{\EE \textbf{y}:Spec_{1}} implements \tlabox{\EE \textbf{z}:Spec_{2}},
we must show that for each behavior satisfying $Spec_{1}$, there is
some way to assign values of the variables \textbf{z} in each state so
that the resulting behavior satisfies $Spec_{2}$. We do that by
explicitly specifying those values of \textbf{z} in terms of the values
of \textbf{x} and \textbf{y}. More precisely,
for each $z_{i}$ we define an expression \ov{z_{i}} in terms of the
variables \textbf{x} and \textbf{y} and show that $Spec_{1}$
implements $\ov{\Mmap{Spec_{2}}}$, the specification obtained by
expanding all definitions in $Spec_{2}$ and substituting
$z_{1}<-\ov{z_{1}},\,\ldots,\,z_{p}<-\ov{z_{p}}$ in the resulting
formula. This substitution is
called a \emph{refinement mapping}; and if $Spec_{1}$
implements $\ov{\Mmap{Spec_{2}}}$, then we say that
$Spec_{1}$ implements $Spec_{2}$ under the refinement mapping.
The assertion that $Spec_{1}$ implements $Spec_{2}$ under the
refinement mapping
$z_{1}<-\ov{z_{1}},\,\ldots,\,z_{p}<-\ov{z_{p}}$ can be expressed
in \tlaplus\ as follows. Suppose $Spec_{1}$ is formula $Spec1$ in a
module named $Mod1$, and $Spec_{2}$ is formula $Spec2$ in a module
named $Mod2$. ($Spec1$ and $Spec2$ can be the same
identifier.) For some identifier $Id$, we add the following statement
to $Mod1$:\footnote{If $Mod2$ has declared \textsc{constants},
then the statement must
also specify expressions to be substituted for those constants. A
substitution of the form $id \leftarrow id$ for an identifier $id$ can be
omitted from the \textsc{with} clause.}
\[ Id == \INSTANCE Mod2\ \WITH \
z_{1}<-\ov{z_{1}},\,\ldots,\,z_{p}<-\ov{z_{p}} \]
The assertion that $Spec_{1}$ implements $Spec_{2}$ under the
refinement mapping can then be expressed in module $Mod1$ by the
following theorem:
\[ \THEOREM \ Spec1 => Id!Spec2
\]
This theorem asserts that in any behavior satisfying $Spec_{1}$, the
values of the expressions $\ov{z_{i}}$ are values that $Spec_{2}$
permits the variables $z_{i}$ to have.
The shape of the theorem makes it explicit that in \tlaplus, implementation is
implication.
The correctness of the theorem
can be checked (but seldom completely verified) with TLC for module
$Mod1$ having $Spec1$ as the specification and $Id!Spec2$ as the
temporal property to be checked.
As we will see, it is sometimes the case that \tlabox{\EE
\textbf{y}:Spec_{1}} implements \tlabox{\EE \textbf{z}:Spec_{2}} but
there does not exist a refinement mapping under which $Spec_{1}$
implements $Spec_{2}$. In that case, it is almost always possible to
construct the necessary refinement mapping by adding auxiliary
variables to $Spec_{1}$. Adding auxiliary variables \textbf{a} to the
specification $Spec_{1}$ means finding a specification
$Spec_{1}^{\mathbf{a}}$ such that \tlabox{\EE \mathbf{a}:Spec_{1}^{\mathbf{a}}}
is equivalent to $Spec_{1}$. Showing that
\tlabox{\EE \mathbf{y},\mathbf{a}:Spec_{1}^{\mathbf{a}}}
implements \tlabox{\EE \mathbf{z}:Spec_{2}} shows that
\tlabox{\EE \textbf{y}:Spec_{1}} implements
\tlabox{\EE \textbf{z}:Spec_{2}}, since
\tlabox{\EE \mathbf{y},\mathbf{a}:Spec_{1}^{\mathbf{a}}}
equals
\tlabox{\EE \mathbf{y} : \EE \mathbf{a}:Spec_{1}^{\mathbf{a}}}
which is equivalent to \tlabox{\EE \textbf{y}:Spec_{1}}. Even though
we can't define the expressions \ov{z_{i}} in terms of \textbf{x} and
\textbf{y}, we may be able to define them in terms of \textbf{x},
\textbf{y}, and \textbf{a}.
We will define three kinds of auxiliary variables: history variables
that remember what happened in the past, prophecy variables that
predict what will happen in the future, and stuttering variables
that add stuttering steps (ones that don't change \textbf{x} and \textbf{y}).
\section{History Variables} \label{sec:history}
\subsection{Equivalence of \emph{MinMax}1 and \emph{MinMax}2}
Let us return to the notation of Section~\ref{sec:relation}, so
$Spec_{1}$ is specification $Spec$ of module $MinMax1$ and $Spec_{2}$
is specification $Spec$ of module $MinMax2$. We observed that
\tlabox{\EE y : Spec_{1}} and \tlabox{\EE min, max : Spec_{2}} are
equivalent, meaning that each implements (implies) the other. We
found a refinement mapping under which $Spec_{1}$ implements
$Spec_{2}$. To prove the converse implication, we want to find a
refinement mapping under which $Spec_{2}$ implements $Spec_{1}$. This
means defining an expression \ov{y} in terms of the variables $x$,
$turn$, $min$, and $max$ such that the values of $x$, $turn$, and $\ov{y}$ in
any behavior allowed by $Spec_{1}$ are values of $x$, $turn$, and $y$
allowed by $Spec_{2}$.
In a behavior of $Spec_{1}$, the value of $y$ is the set of all values
input by the user. However, in a behavior of $Spec_{2}$, the
variables $min$ and $max$ record only the smallest and largest input
values. There is no way to reconstruct the set of all values input
from the variables of $MinMax2$. So, there is no refinement mapping
under which $Spec_{2}$ implements $Spec_{1}$. To solve this problem,
we write another spec $Spec_{2}^{h}$ that is the same as $Spec_{2}$,
except that it also constrains the behavior of another variable $h$.
More precisely, if we hide $h$ in $Spec_{2}^{h}$, then we get a
specification that's equivalent to $Spec_{2}$. Expressed
mathematically, this means \tlabox{\EE h:Spec_{2}^{h}} is equivalent
to $Spec_{2}$.
The initial predicate and next-state action of $Spec_{2}^{h}$ are the
same as those of $Spec_{2}$, except they also describe the values that
$h$ may assume. In particular, the value of $h$ records information
about previous values of the variable $x$, but does not affect the
current or future values of $x$ or any of the other variables $turn$,
$min$, and $max$ of $Spec_{2}$. Thus \tlabox{\EE h:Spec_{2}^{h}} is
equivalent to $Spec_{2}$. We call $h$ a \emph{history} variable.
We write $Spec_{2}^{h}$ as follows in a \tlaplus\ module $MinMax2H$.
The module begins with the statement
\[ \EXTENDS MinMax2 \]
that simply imports all the declarations and definitions from
$MinMax2$, defining $Spec$ to be the specification we are calling
$Spec_{2}$. The module declares the variable $h$ and defines the
initial predicate $InitH$ of $Spec_{2}^{h}$ by
\[ InitH == Init \, /\ \, (h = \{\,\})\]
The next-state action $NextH$ is defined to equal
$InputNumH \/ RespondH$
where $InputNumH$ and $RespondH$ are defined as follows:
\begin{display}
\begin{notla}
InputNumH == /\ InputNum
/\ h' = h
RespondH == /\ Respond
/\ h' = h \cup {x}
\end{notla}
\begin{tlatex}
\@x{ InputNumH \.{\defeq} \.{\land} InputNum
\@x{\@s{72.21} \.{\land} h \.{'} \.{=} h
\@pvspace{8.0pt
\@x{ RespondH\@s{8.33} \.{\defeq} \.{\land} Respond
\@x{\@s{72.21} \.{\land} h \.{'} \.{=} h \.{\cup} \{ x \}
\end{tlatex}
\end{display}
The specification $Spec_{2}^{h}$ is the following formula defined
in the module:
\begin{display}
\begin{notla}
SpecH == InitH /\ [][NextH]_varsH
\end{notla}
\begin{tlatex}
\@x{ SpecH\@s{4.1} \.{\defeq}\@s{4.1} InitH\@s{4.1} \.{\land}\@s{4.1} {\Box}
[ NextH ]_{varsH}
\end{tlatex}
\end{display}
where $varsH$ equals $<<vars, h>>$. (Because $vars$ equals
$<<x, turn, min, max>>$, we can also
define $varsH$ to equal $<<x, turn, min, max, h>>$; the two
definitions give equivalent expressions $\UNCHANGED varsH$.)
It's easy to see that this specification asserts that $h$ is always
equal to the set of all values that the user has input thus far, which
is exactly what $Spec_{1}$ asserts about $y$. Therefore,
$Spec_{2}^{h}$ implements $Spec_{1}$ under the refinement mapping
$y <- h$---that is, with $\ov{y}$ equal to the expression $h$. We
express this in module $MinMax2H$ by
\begin{display}
\begin{notla}
M == INSTANCE MinMax1 WITH y <- h
THEOREM SpecH => M!Spec
\end{notla}
\begin{tlatex}
\@x{ M\@s{4.1} \.{\defeq}\@s{4.1} {\INSTANCE} MinMax1\@s{4.1} {\WITH}\@s{4.1}
y \.{\leftarrow} h
\@pvspace{4.0pt
\@x{ {\THEOREM}\@s{4.1} SpecH \.{\implies} M {\bang} Spec
\end{tlatex}
\end{display}
The complete module $MinMax2H$ is in \lref{targ:MinMax2H}{Figure~\ref{fig:MinMax2H}}.
\begin{figure} \target{targ:MinMax2H}
\begin{notla}
---------------------------- MODULE MinMax2H ----------------------------
EXTENDS MinMax2
VARIABLE h
varsH == <<vars, h>>
InitH == Init /\ (h = {})
InputNumH == /\ InputNum
/\ h' = h
RespondH == /\ Respond
/\ h' = h \cup {x}
NextH == InputNumH \/ RespondH
SpecH == InitH /\ [][NextH]_varsH
M == INSTANCE MinMax1 WITH y <- h
THEOREM SpecH => M!Spec
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} MinMax2H}\moduleRightDash\@xx{
\@x{ {\EXTENDS} MinMax2
\@pvspace{8.0pt
\@x{ {\VARIABLE} h
\@x{ varsH \.{\defeq} {\langle} vars ,\, h {\rangle}
\@pvspace{8.0pt
\@x{ InitH\@s{2.14} \.{\defeq} Init\@s{9.42} \.{\land} ( h \.{=} \{ \} )
\@pvspace{8.0pt
\@x{ InputNumH \.{\defeq} \.{\land} InputNum
\@x{\@s{72.21} \.{\land} h \.{'} \.{=} h
\@pvspace{8.0pt
\@x{ RespondH\@s{8.33} \.{\defeq} \.{\land} Respond
\@x{\@s{72.21} \.{\land} h \.{'} \.{=} h \.{\cup} \{ x \}
\@pvspace{8.0pt
\@x{ NextH \.{\defeq} InputNumH \.{\lor} RespondH
\@pvspace{8.0pt
\@x{ SpecH\@s{1.08} \.{\defeq} InitH \.{\land} {\Box} [ NextH ]_{ varsH}
\@pvspace{8.0pt
\@x{ M \.{\defeq} {\INSTANCE} MinMax1 {\WITH} y \.{\leftarrow} h
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecH \.{\implies} M {\bang} Spec
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{MinMax2H}.} \label{fig:MinMax2H}
\end{figure}
\subsection{Disjunctive Representation}
The generalization from the $MinMax$ example is intuitively clear. We
add a history variable $h$ to a specification by conjoining
$h=exp_{Init}$ to its initial predicate and $(h'=exp_{A})$ to each
subaction $A$ of its next-state action, where expression $exp_{Init}$
can contain the spec's variables and each expression $exp_{A}$ can
contain the spec's variables, both primed and unprimed, and $h$
(unprimed).
To make this precise, we have to state exactly what a \emph{subaction}
is.
In general, there may be many different ways to define the
subactions of a next-state action. In defining $SpecH$ in module
$MinMax2H$, we took $InputNum$ and $Respond$ to be the subactions of
the next-state action $Next$ of $MinMax2$. However, we can also
consider $Next$ itself to be a subaction of $Next$. We can do this
and get an equivalent specification $SpecH$ by defining $NextH$ as
follows:
\begin{display}
\begin{notla}
NextH == /\ Next
/\ \/ (turn = "input") /\ (h' = h)
\/ (turn = "output") /\ (h' = h \cup {x})
\end{notla}
\begin{tlatex}
\@x{ NextH\@s{4.1} \.{\defeq}\@s{4.1} \.{\land} Next
\@x{\@s{56.15} \.{\land} \.{\lor} ( turn \.{=}\@w{input} )\@s{6.22} \.{\land}
( h \.{'} \.{=} h )
\@x{\@s{67.26} \.{\lor} ( turn \.{=}\@w{output} ) \.{\land} ( h \.{'} \.{=} h
\.{\cup} \{ x \} )
\end{tlatex}
\end{display}
The two specifications are equivalent because the two definitions of
$NextH$ are equivalent. Their equivalence is asserted by adding the
following theorem to module $MinMax2H$. The TLAPS proof system easily
checks its \textsc{by} proof.
\begin{display}
\begin{notla}
THEOREM NextH = /\ InputNum \/ Respond
/\ \/ (turn = "input") /\ (h' = h)
\/ (turn = "output") /\ (h' = h \cup {x})
BY DEF NextH, Next, InputNumH, RespondH, InputNum, Respond
\end{notla}
\begin{tlatex}
\@x{ {\THEOREM} NextH \.{=} \.{\land} InputNum \.{\lor} Respond
\@x{\@s{89.22} \.{\land} \.{\lor} ( turn \.{=}\@w{input} )\@s{6.22} \.{\land}
( h \.{'} \.{=} h )
\@x{\@s{100.33} \.{\lor} ( turn \.{=}\@w{output} ) \.{\land} ( h \.{'} \.{=}
h \.{\cup} \{ x \} )
\@x{ {\BY} {\DEF} NextH ,\, Next ,\, InputNumH ,\, RespondH ,\, InputNum ,\,
Respond
\end{tlatex}
\end{display}
To define what a subaction is, we introduce the concept of a
\emph{disjunctive representation}. A disjunctive representation of a
formula $N$ is a way of writing $N$ in terms of subactions
$A_{1}$, \ldots, $A_{m}$ using only the operators $\/ $ and
\tlabox{\E k \in K},
for some identifiers $k$ and expressions $K$. For example,
consider the formula:
\begin{equation} {\NOTLA\label{eq:dis-rep}}
B \; \/ \; C \; \/\; D \; \/ \; (\begin{noj}
\E i \in S, j \in T : \\ \s{1}
(\E q \in U : E) \; \/ \; (\E r \in W : {F}))
\end{noj}
\end{equation}
where $B$, $C$, $D$, $E$, and $F$ can be any formulas.
Here is one of the 36 possible disjunctive representations of formula
(\ref{eq:dis-rep}), where each boxed formula is a
subaction:
\[ \fbox{$B$} \; \/ \; \fbox{$C \; \/\; D$} \; \/ \;
(\begin{noj}
\E i \in S, j \in T : \\ \s{1}
\fbox{$(\E q \in U : E)$} \; \/ \; (\E r \in W : \fbox{$F$}\,))
\end{noj}
\]
In other words, this disjunctive representation of (\ref{eq:dis-rep})
has the four subactions $B$, $C \/ D$,
\tlabox{\E q \in U : E}, and $F$.
Each subaction of a disjunctive representation has a \emph{context},
which is a pair $<<\textbf{k};\textbf{K}>>$, where $\mathbf{k}$ is an
$n$-tuple of identifiers and $\textbf{K}$ is an $n$-tuple of
expressions, for some $n$. The contexts of the subactions in the
disjunctive representation of (\ref{eq:dis-rep}) defined above are:
\[ \begin{noj2}
\mbox{\underline{subaction}}\;\; & \mbox{\underline{context}} \V{.2}
B & << >> \\
C \lor D & << >> \\
\E q \in U : E \s{1}& <<i,j;\,S,T>> \\
F & <<i,j,r;\,S,T,W>>
\end{noj2}
\]
We let \mbox{$\E \langle i,j;\,S,T\rangle$} be an abbreviation for
\mbox{$\E i\in S,\,j\in T$} and similarly for
\mbox{$\A \langle i,j;\,S,T\rangle$}.
The generalization of this example should be clear.
The tuple of identifiers in the context of a subaction $A$ are all the
bound identifiers of existential quantifiers within whose scope $A$
lies.\footnote{Everything we do extends easily to handle unbounded
quantification if we pretend that the unbounded quantifiers $\E v$ and
$\A v$ are written $\E v\in\Omega$ and $\A v\in\Omega$, and we define
$e\in\Omega$ to equal $\TRUE$ for every expression $e$. Since
unbounded quantification seldom occurs in specifications, we will not
discuss this further.} If $<<\textbf{k};\textbf{K}>>$ is the empty
context $<<;>>$, we let \tlabox{\E <<\textbf{k};\textbf{K}>>: F} and
\tlabox{\A <<\textbf{k};\textbf{K}>>: F} equal $F$.
We can now define precisely what it means to add a history variable to
a specification. The definition is contained in the hypotheses of this
theorem:
\begin{theorem}{History Variable}
Let $Spec$ equal \tlabox{Init \,/\ \,[][Next]_{vars}} and let
$Spec^{h}$ equal
\tlabox{Init^{h}\, /\ \,[][Next^{h}]_{vars^{h}}},
where:
\begin{itemize}
\item $Init^{h}$ equals $Init /\ (h=exp_{Init})$, for some expression
$exp_{Init}$ that may contain the specification's
(unprimed) variables.
\item $Next^{h}$ is obtained from $Next$ by replacing each subaction
$A$ of a disjunctive representation of $Next$ with
\tlabox{A /\ (h'=exp_{A})},
for some expression $exp_{A}$ that may contain primed and unprimed
specification variables, identifiers in the context of $A$, and
constant parameters.
\item $vars^{h}$ equals $<<vars, h>>$.
\end{itemize}
Then $Spec$ is equivalent to \tlabox{\EE h : Spec^{h}}.
\end{theorem}
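For instance, the definitions in module $MinMax2H$ satisfy these
hypotheses for the disjunctive representation of $Next$ with subactions
$InputNum$ and $Respond$, taking
\[ exp_{Init} = \{\,\} \qquad exp_{InputNum} = h \qquad
   exp_{Respond} = h \cup \{x\} \]
so the theorem implies that $Spec$ is equivalent to
\tlabox{\EE h : SpecH}.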
The hypotheses of this theorem are purely syntactic ones: conditions
on the definitions of $Init^{h}$, $Next^{h}$, and $vars^{h}$ plus
conditions on what variables and identifiers may appear in
$exp_{Init}$ and $exp_{A}$. By a variable or identifier appearing
in an expression $exp$, we mean that it appears in the expression \Mmap{exp}
obtained by expanding all definitions in $exp$.
(See the discussion of definition expansion on page~\pageref{pageMmap}.)
\subsection{Equivalence of Next-State Actions}
When adding an auxiliary variable, it is often useful to rewrite a
specification $Spec$---that is, to replace $Spec$ with a different but
equivalent formula. This is most often done by rewriting the
next-state action $Next$, which is done by rewriting one or more of
the subactions in a disjunctive representation of $Next$. We now consider
when we can replace a subaction $A$ in a disjunctive representation
of $Next$ by the subaction $B$.
We can obviously replace $A$ by $B$ if $A$ and $B$ are equivalent
formulas. However, this is often too stringent a requirement. For
example, the two actions
\[ \begin{noj}
(x' = \IF{y\geq 0} \THEN x+y \ELSE x - y) \ /\ \ (y'=y) \V{.2}
(x' = \IF{y> 0} \THEN x+y \ELSE x - y) \ /\ \ (y'=y)
\end{noj}
\]
are not equivalent. However, they are equivalent if $y$ is a number.
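To see why, observe that the two \textsc{if} expressions can differ
only when $y = 0$; and if $y$ is a number, then $y = 0$ implies
\[ x + y \ = \ x - y \ = \ x \]
so in that case both actions assign the same value to $x'$.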
Thus, we can replace one by the other in the next-state action
if $y\in Int$ is an invariant of the specification. The generalization
of this observation
is:
\begin{theorem}{Subaction Equivalence} \label{subactEquiv1}
\s{1}Let $A$ be a subaction with context
$<<\mathbf{k}; \mathbf{K}>>$
in a disjunctive representation of the next-state action of a
specification $Spec$ with tuple $vars$ of variables, let $Inv$ be an
invariant of $Spec$, and let $B$ be an action satisfying:
\begin{equation} \label{eqEqv1}
Inv \; => \;
\tlabox{\A <<\mathbf{k}; \mathbf{K}>> : A \equiv B}
\end{equation}
Then $Spec$ is equivalent to the specification obtained by replacing
$A$ with $B$ in the next-state action's disjunctive representation.
\end{theorem}
Formula (\ref{eqEqv1}) is an action formula, so it can be proved with
TLAPS but cannot be checked with TLC\@.
TLC can check directly that two specifications are equivalent by
checking that each specification implies the other. To check that
specification $Spec$ implies a specification $SpecB$, just run TLC
with a model having $Spec$ as the behavioral specification and $SpecB$
as the property to be checked. If one spec is obtained by a simple
modification of the other, it should suffice to use small models. But
in that case, it should not be hard to prove (\ref{eqEqv1})
with TLAPS, where $Inv$ is a simple type invariant.
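For the example above, (\ref{eqEqv1}) can be proved with the empty
context and the simple type invariant
\[ Inv \ == \ (y \in Int) \]
assuming, of course, that this formula is indeed an invariant of the
specification in question.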
\subsection{Discussion of History Variables}
As our example, we showed that specifications $MinMax1$ and $MinMax2$ are
equivalent. In practice, we rarely care about checking equivalence of
specifications. We almost always want to show that a specification {\Sp}
satisfies some property $\mathcal{P}$, which means that {\Sp} implies
$\mathcal{P}$.
For example, $\Sp => []Inv$ asserts that $Inv$ is an invariant of \Sp.
In (\ref{eqT1a}) and (\ref{eqT1b}), the property $\mathcal{P}$ is, like \Sp, a
complete system specification.
The most general form of correctness that TLC can check is that one
specification implies another. For example, the assertion that $Inv$
is an invariant of a specification {\Sp} with tuple $vars$ of
variables is equivalent to the assertion that {\Sp} implies the
specification
\[
Inv \ /\ \ [][Inv'\equiv Inv]_{vars}
\]
We often want to show that a specification {\Sp} implies a
higher-level, more abstract specification {\T}. The standard way of
doing this is to find a refinement mapping that expresses the values
of {\T}'s variables as functions of the values of {\Sp}'s variables.
This can't be done if specification {\T} remembers in its state
information that is forgotten by {\Sp}. In that case, we show that
{\Sp} implies \T\ by adding a history variable $h$ to \Sp\ to obtain
the specification $\Sp^{h}$, and we find a refinement mapping to show
$\Sp^{h}$ implies \T. Since \tlabox{\EE h:\Sp^{h}} is equivalent to
\Sp, this shows that \Sp\ implies \T.
One can argue that \T\ is not a good higher-level specification if it
keeps information about the past that doesn't have to be kept by its
implementation \Sp. However, sometimes that information about the
past can simplify the higher-level specification. We may also add a
history variable to a specification \Sp\ so we can state the property
we want to show that it satisfies, even if we aren't explicitly
constructing a refinement mapping. For example, the property that \Sp\
requires one kind of action to occur before another can be expressed
as an invariant if we add a history variable that remembers when
actions have occurred. Because it's a history variable, it doesn't
have to be implemented in the system being specified---that is,
in the actual hardware or software. Only the variables of \Sp\
must be implemented.
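As a sketch of this (with hypothetical subactions $A$ and $B$ of the
next-state action), we can let $h$ initially equal $\{\}$ and conjoin
$h' = h \cup \{\mbox{``a''}\}$ to $A$ and $h' = h \cup \{\mbox{``b''}\}$
to $B$. The requirement that no $B$ step occur before an $A$ step has
occurred is then expressed by the invariant
\[ (\mbox{``b''} \in h) \ => \ (\mbox{``a''} \in h) \]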
\subsection{Liveness} \label{sec:history-liveness}
A natural liveness requirement for our $MinMax$ specs is that every
input should produce an output. This requirement is added to both
specifications by conjoining the fairness requirement
$\WF_{vars}(Respond)$ to the formula $Spec$. (It is just a
peculiarity of our example that the fairness requirements are exactly
the same in both specifications.) Let us call the resulting
specifications $LSpec$.
The two specifications are still equivalent when the internal
variables are hidden. Formula (\ref{eqT1}), and hence formulas
(\ref{eqT1a}) and (\ref{eqT1b}), remain true if we replace $Spec_{1}$
and $Spec_{2}$ by $LSpec_{1}$ and $LSpec_{2}$, respectively---where
$LSpec_{1}$ and $LSpec_{2}$ are formulas $LSpec$ of $MinMax1$ and
$MinMax2$, respectively. We verify (\ref{eqT1a}) the same as before
by verifying the theorem \tlabox{LSpec => M!LSpec} of module
$MinMax1$.
To verify (\ref{eqT1b}), we need to add a history variable $h$ to
$LSpec_{2}$ rather than to $Spec_{2}$. We now drop the subscripts;
all the formulas we write will be ones defined in $MinMax2$ or
$MinMax2H$.
To add a history variable to the specification $LSpec$,
we add a history variable to its safety part and then conjoin the
liveness part of $LSpec$. The resulting specification is defined in
module $MinMax2H$ by
\[ HLSpec == SpecH /\ \WF_{vars}(Respond) \]
The equivalence of $LSpec$ and \tlabox{\EE h:HLSpec} follows from
the equivalence of $Spec$ and \tlabox{\EE h:SpecH} by this argument:
\begin{display}
\pflongnumbers
\afterPfSpace{.5em}
\beforePfSpace{.2em}
\begin{proof}
\step{1}{$Spec /\ \WF_{vars}(Respond) \ \equiv \
(\tlabox{\EE h : SpecH}) /\ \WF_{vars}(Respond) $}
\begin{proof}
\pf\ Because $Spec$ is equivalent to \tlabox{\EE h : SpecH}.
\end{proof}
\step{2}{$\begin{noj}
(\tlabox{\EE h : SpecH}) \ /\ \ \WF_{vars}(Respond)\V{.2}\s{2}
\equiv \ \ \EE h : (SpecH /\ \WF_{vars}(Respond))
\end{noj}$}
\begin{proof}
\pf\ For any formulas $F$ and $G$, if $h$ does not occur in $G$,
then \tlabox{(\EE h:F) /\ G} is equivalent to \tlabox{\EE h: (F /\ G)}.
\end{proof}
\qedstep
\begin{proof}
\pf\ By definition of $LSpec$ and $HLSpec$, 1 and 2 imply that
$LSpec$ is equivalent to \tlabox{\EE h:HLSpec}.
\end{proof}
\end{proof}
\end{display}
What we have done for this example generalizes in the obvious way.
For a specification written in the canonical form
\[ Init /\ [][Next]_{vars} /\ L
\]
with $L$ a liveness condition, we add a history variable by adding it
just to the safety part, keeping the same liveness condition.
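That is, if $Spec^{h}$ is obtained from \tlabox{Init /\ [][Next]_{vars}}
as in the History Variable theorem, then the specification with the
history variable added is
\[ Init^{h} \ /\ \ [][Next^{h}]_{vars^{h}} \ /\ \ L \]
and, since $h$ does not occur in $L$, hiding $h$ in this formula yields
a specification equivalent to \tlabox{Init /\ [][Next]_{vars} /\ L}.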
This method of adding a history variable to a specification with
liveness produces unusual specifications. In our example, if we
expand the definition of $SpecH$, we see that specification $HLSpec$
equals
\[ InitH \ /\ \ [][NextH]_{varsH} \ /\ \ \WF_{vars}(Respond) \]
This spec is unusual because it contains a fairness condition on the
action $Respond$ that is not a subaction of the next-state relation
$NextH$. Such specs can be weird. However, specs obtained in this
way by adding a history variable are not. Because (i)~a $Respond$ action
is enabled iff a $RespondH$ action is and (ii)~a $NextH$ step is a
$Respond$ step iff it is a $RespondH$ step, specification $HLSpec$ is
equivalent to the normal specification:
\begin{equation}
InitH \ /\ \ [][NextH]_{varsH} \ /\ \ \WF_{varsH}(RespondH)
\end{equation}
In general, if $Spec^{h}$ is obtained from a specification $Spec$ by
adding a history variable $h$, then replacing a fairness requirement
on a subaction $A$ by the same fairness requirement on $A^{h}$
produces a specification equivalent to $Spec^{h}$.
The unusual nature of specification $HLSpec$ affects neither the TLC
model checker nor our ability to reason about the specification. The
same refinement mapping as before shows that $HLSpec$ implements
$MinMax1$ with the added fairness condition when internal variables
are hidden.
\section{Prophecy Variables}
As we have observed, the fundamental task of verification is to show
that the specification $Spec_{1}$ of an implementation satisfies a
specification $Spec_{2}$ of what the implementation is supposed to do.
A history variable remembers the past. It is needed to find a
refinement mapping to show that $Spec_{1}$ implements $Spec_{2}$ when
$Spec_{2}$ remembers previous events longer than it has to. A
prophecy variable is one that predicts the future. It is needed to
find a refinement mapping to show that $Spec_{1}$ implements
$Spec_{2}$ when $Spec_{2}$ makes decisions before it has to.
\subsection{One-Prediction Prophecy Variables} \label{sec:simple-proph}
We begin by showing how to add a simple prophecy variable that makes a
single prediction at a time. Suppose a disjunctive representation of
the next-state relation contains a subaction $A$ such that
\begin{equation}
A => (\tlabox{\E i \in \Pi : Pred_{A}(i)})
\NOTLA \label{eq:Proph1}
\end{equation}
for some expression $Pred_{A}(i)$ and constant set
$\Pi$. Formula (\ref{eq:Proph1}) is equivalent to
\begin{equation}
A \ \equiv\ A \, /\ \, (\tlabox{\E i \in \Pi : Pred_{A}(i)})
\NOTLA \label{eq:Proph2}
\end{equation}
which means that any $A$ step is an \tlabox{A /\ Pred_{A}(i)} step for
some $i$ in $\Pi$. We introduce a one-prediction prophecy variable
$p$ whose value is an $i$ for which the next $A$ step is an
\tlabox{A /\ Pred_{A}(i)}
step---if there is a next $A$ step. (There could be more than one
such $i$, since we don't require $Pred_{A}(i) /\ Pred_{A}(j)$ to equal
\FALSE\ if $i#j$.) We give $p$ that meaning by replacing the
subaction $A$ with a subaction $A^{p}$ defined by
\begin{equation}
A^{p} == A \, /\ \, Pred_{A}(p) \, /\ \, Setp
\NOTLA \label{eq:gen-Ap}
\end{equation}
where $Setp$ determines the value of $p'$.
To ensure that adding the prophecy variable $p$ allows all the
behaviors of the other variables that the original spec does, we must
ensure that $p$ can always have any value in $\Pi$. We do this by
initializing $p$ to an arbitrary element of $\Pi$ and changing $p$
only by setting it to any arbitrary element of $\Pi$. Thus we
modify the spec's initial predicate $Init$ to equal
\tlabox{Init /\ (p\in\Pi)}
and we let $Setp$ equal $p'\in\Pi$, so
\begin{equation}
A^{p} == A \, /\ \, Pred_{A}(p) \, /\ \, (p' \in \Pi)
\NOTLA \label{eq:ApDef}
\end{equation}
For any other subaction $A$ of the next-state relation, one whose effect
is not being predicted by $p$, we let $A^{p}$ leave the prediction
unchanged, so it is defined simply as:
\begin{equation}
A^{p} == A /\ (p'=p)
\NOTLA \label{eq:ApDef2}
\end{equation}
\bigskip
\noindent We illustrate this with a simple example: a system in which
integers are sent and received, where sending an integer $i$ is
represented by setting the variable $x$ to $i$, and receiving a value
is represented by setting $x$ to a value $NotInt$ that is not an
integer. Our specification $SendInt2$ has the receiving action set an
internal variable $z$ to the next value to be sent. (The initial
value of $z$ is the first value to be sent.) This simple
specification is in \lref{targ:SendInt2}{Figure~\ref{fig:SendInt2}}.
\begin{figure} \target{targ:SendInt2}
\begin{notla}
----------------------------- MODULE SendInt2 -----------------------------
EXTENDS Integers
NotInt == CHOOSE n : n \notin Int
VARIABLE x, z
Init == /\ x = NotInt
/\ z \in Int
Send == /\ x = NotInt
/\ x' = z
/\ z' = NotInt
Rcv == /\ x \in Int
/\ x' = NotInt
/\ z' \in Int
Next == Send \/ Rcv
Spec == Init /\ [][Next]_<<x,z>>
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendInt2}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers
\@pvspace{8.0pt
\@x{ NotInt \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} Int
\@pvspace{8.0pt
\@x{ {\VARIABLE} x ,\, z
\@pvspace{8.0pt
\@x{ Init\@s{5.17} \.{\defeq} \.{\land} x \.{=} NotInt
\@x{\@s{40.87} \.{\land} z\@s{0.52} \.{\in} Int
\@pvspace{8.0pt
\@x{ Send \.{\defeq} \.{\land} x \.{=} NotInt
\@x{\@s{40.87} \.{\land} x \.{'} \.{=} z
\@x{\@s{40.87} \.{\land} z \.{'}\@s{0.52} \.{=} NotInt
\@pvspace{8.0pt
\@x{ Rcv \.{\defeq} \.{\land} x \.{\in} Int
\@x{\@s{35.94} \.{\land} x \.{'} \.{=} NotInt
\@x{\@s{35.94} \.{\land} z \.{'}\@s{0.52} \.{\in} Int
\@pvspace{8.0pt
\@x{ Next \.{\defeq} Send \.{\lor} Rcv
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ {\langle} x
,\, z {\rangle}}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendInt2}.} \label{fig:SendInt2}
\end{figure}
Of course, we can describe the behavior of the variable $x$ even more
simply, without using any internal variable. Such a specification is
in module $SendInt1$ of \lref{targ:SendInt1}{Figure~\ref{fig:SendInt1}}.
\begin{figure} \target{targ:SendInt1}
\begin{notla}
----------------------------- MODULE SendInt1 -----------------------------
EXTENDS Integers
NotInt == CHOOSE n : n \notin Int
VARIABLE x
Init == x = NotInt
Send == /\ x = NotInt
/\ x' \in Int
Rcv == /\ x \in Int
/\ x' = NotInt
Next == Send \/ Rcv
Spec == Init /\ [][Next]_x
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendInt1}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers
\@pvspace{8.0pt
\@x{ NotInt \.{\defeq} {\CHOOSE} n \.{:} n \.{\notin} Int
\@pvspace{8.0pt
\@x{ {\VARIABLE} x
\@pvspace{8.0pt
\@x{ Init\@s{5.17} \.{\defeq} x \.{=} NotInt
\@pvspace{8.0pt
\@x{ Send \.{\defeq} \.{\land} x \.{=} NotInt
\@x{\@s{40.87} \.{\land} x \.{'} \.{\in} Int
\@pvspace{8.0pt
\@x{ Rcv \.{\defeq} \.{\land} x \.{\in} Int
\@x{\@s{35.94} \.{\land} x \.{'} \.{=} NotInt
\@pvspace{8.0pt
\@x{ Next \.{\defeq} Send \.{\lor} Rcv
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ x}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendInt1}.} \label{fig:SendInt1}
\end{figure}
Let $Spec_{1}$ and $Spec_{2}$ be the formulas $Spec$ of modules
$SendInt1$ and $SendInt2$, respectively. It should be obvious that
$Spec_{1}$ is equivalent to \tlabox{\EE z:Spec_{2}}. To verify
\tlabox{(\EE z:Spec_{2}) => Spec_{1}}, we just have to verify
$Spec_{2}=>Spec_{1}$, which TLC easily checks for a model that
substitutes a finite set of integers for
$Int$. We now show how to verify
\tlabox{Spec_{1} => (\EE z:Spec_{2})}.
There is obviously no refinement mapping under which
$Spec_{1}$ implements $Spec_{2}$. Such a refinement mapping would be
an expression \ov{z} involving only the variable $x$ such that
\ov{z} equals the value of $z$ in every state of a behavior satisfying $Spec_{2}$.
This is impossible, since $z$ could equal any integer in a state in which
$x$ equals $NotInt$, so there is no way to express its value
as a function of the value of $x$.
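This impossibility is easy to see concretely in a Python sketch (a hypothetical illustration of the argument, not part of the specs): two initial states of $Spec_{2}$ agree on $x$ but differ on $z$, so no expression built from $x$ alone can equal $z$ in both.

```python
# Two initial states of SendInt2: both satisfy x = NotInt /\ z \in Int,
# agreeing on the visible variable x while disagreeing on the internal z.
NOT_INT = "NotInt"
s1 = {"x": NOT_INT, "z": 1}
s2 = {"x": NOT_INT, "z": 2}
# Any candidate z_bar is a function of x alone, so it yields the same
# value in both states; but z differs, so no such z_bar can work.
distinguishable_by_x = s1["x"] != s2["x"]
print(distinguishable_by_x, s1["z"] == s2["z"])
```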
The variable $z$ of $SendInt2$ is used to predict the value to be sent
before it actually is sent. To be able to define the value of \ov{z}
for a refinement mapping, we add a prophecy variable $p$ to $SendInt1$
that predicts what the next value to be sent is.
The prophecy variable $p$ must predict the value sent by action $Send$
of $SendInt1$. Therefore $Send^{p}$ must have the form of
(\ref{eq:ApDef}). A little thought shows that $p$ makes the
right prediction if we take $Pred_{Send}(p)$ to equal
$x'=p$. Since \tlaplus\ doesn't allow identifiers to have subscripts
we write $PredSend$ instead of $Pred_{Send}$ and define
\[ PredSend(i) == x' = i
\]
Condition (\ref{eq:Proph1}) becomes
\[ Send => \tlabox{\E i \in \Pi : PredSend(i)}
\]
which is obviously true by definition of $Send$, if we let $\Pi$
equal $Int$. Writing $Pi$ instead of $\Pi$ and $SendP$ instead of
$Send^{p}$, we make the definitions:
\[ \begin{noj}
Pi == Int \V{.2}
SendP == Send \, /\ \, PredSend(p) \, /\ \, (p' \in Pi)
\end{noj}
\]
(We could of course simply write $Int$ instead of $Pi$, but writing
$Pi$ will help us understand what's going on.)
For the receive action, $Rcv^{p}$ should have the form (\ref{eq:ApDef2}).
In a behavior of $SendInt2$, when $x$ equals $NotInt$, the value of
$z$ is the next value sent; and when $x$ equals an integer (the value
sent), then $z$ equals $NotInt$. Therefore, if $SpecP$ is the
specification obtained from specification $Spec$ of $SendInt1$ by
adding the prophecy variable $p$, then $SpecP$ implements the
specification $Spec$ of $SendInt2$ under this refinement mapping:
\[ \ov{z} \ <- \ \IF x = NotInt \THEN p \ELSE NotInt
\]
Note that $SpecP$ predicts the next value to be sent even before the
$SendInt2$ specification does---when the previous value is sent rather
than when it is received. Although it's not necessary, we'll see
later how we could defer the prediction until the $Rcv$ action is
executed.
The complete specification is contained in module $SendInt1P$ in
\lref{targ:SendInt1P}{Figure~\ref{fig:SendInt1P}}.
\begin{figure} \target{targ:SendInt1P}
\begin{notla}
----------------------------- MODULE SendInt1P -----------------------------
EXTENDS SendInt1
Pi == Int
PredSend(i) == x' = i
VARIABLE p
varsP == <<x, p>>
InitP == Init /\ (p \in Pi)
SendP == Send /\ PredSend(p) /\ (p' \in Pi)
RcvP == Rcv /\ (p' = p)
NextP == SendP \/ RcvP
SpecP == InitP /\ [][NextP]_varsP
-----------------------------------------------------------------------------
SI2 == INSTANCE SendInt2 WITH z <- IF x = NotInt THEN p ELSE NotInt
THEOREM SpecP => SI2!Spec
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendInt1P}\moduleRightDash\@xx{
\@x{ {\EXTENDS} SendInt1
\@pvspace{8.0pt
\@x{ Pi \.{\defeq} Int
\@pvspace{8.0pt
\@x{ PredSend ( i ) \.{\defeq} x \.{'} \.{=} i
\@pvspace{8.0pt
\@x{ {\VARIABLE} p
\@pvspace{8.0pt
\@x{ varsP\@s{2.93} \.{\defeq} {\langle} x ,\, p {\rangle}
\@pvspace{8.0pt
\@x{ InitP\@s{5.08} \.{\defeq}\@s{4.1} Init\@s{5.17} \.{\land} ( p \.{\in} Pi
)
\@pvspace{8.0pt
\@x{ SendP \.{\defeq}\@s{4.1} Send \.{\land} PredSend ( p ) \.{\land} ( p
\.{'} \.{\in} Pi )
\@pvspace{8.0pt
\@x{ RcvP \.{\defeq} Rcv \.{\land} ( p \.{'} \.{=} p )
\@pvspace{8.0pt
\@x{ NextP \.{\defeq} SendP \.{\lor} RcvP
\@pvspace{8.0pt
\@x{ SpecP\@s{1.08} \.{\defeq} InitP \.{\land} {\Box} [ NextP ]_{ varsP}
\@x{}\midbar\@xx{
\@x{ SI2 \.{\defeq} {\INSTANCE} SendInt2 {\WITH} z \.{\leftarrow} {\IF} x
\.{=} NotInt \.{\THEN} p \.{\ELSE} NotInt
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecP \.{\implies} SI2 {\bang} Spec
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendInt1P}. \label{fig:SendInt1P}}
\end{figure}
Observe that we defined $PredSend$ before the declaration of $p$ to
ensure that $PredSend$ is not defined in terms of $p$.
\begin{sloppypar}
The module's theorem asserts that $SpecP$ implements formula $Spec$ of
$SendInt2$ under the refinement mapping defined above. It can be
checked with TLC by creating a model with temporal specification $SpecP$
and having it check the temporal property $SI2!Spec$. The model will
have to substitute a finite set of integers for $Int$. The
specification is very simple and doesn't depend on any properties of
integers, so substituting a set with a few numbers will ensure that we
didn't make a mistake.
\end{sloppypar}
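TLC is the intended tool for this check, but the same verification can be sketched in a few lines of ordinary Python (a hypothetical stand-in for a TLC run, not part of the \tlaplus\ tools). The sketch enumerates every step allowed by $NextP$ over a three-element substitute for $Int$ and tests that the refinement mapping turns it into a legal $SendInt2$ step; the names \verb|INT| and \verb|NOT_INT| are our own stand-ins.

```python
# Hypothetical sketch (Python, not TLC): enumerate all steps of SpecP from
# module SendInt1P over a small stand-in for Int, and check that the
# refinement mapping  z_bar == IF x = NotInt THEN p ELSE NotInt
# turns every SendP or RcvP step into a legal SendInt2 step.
INT = {0, 1, 2}           # finite substitute for Int, as in a TLC model
NOT_INT = "NotInt"        # stand-in for the CHOOSE'd non-integer value

def next_p(x, p):
    """Steps of NextP: SendP == Send /\ (x' = p) /\ (p' \in Pi),
    RcvP == Rcv /\ (p' = p)."""
    if x == NOT_INT:
        return [(p, q) for q in INT]      # send the predicted value p
    return [(NOT_INT, p)]                 # receive; prediction unchanged

def z_bar(x, p):
    """Refinement mapping for the internal variable z of SendInt2."""
    return p if x == NOT_INT else NOT_INT

def sendint2_next(x, z, x2, z2):
    """Next-state relation Send \/ Rcv of module SendInt2."""
    send = x == NOT_INT and x2 == z and z2 == NOT_INT
    rcv = x in INT and x2 == NOT_INT and z2 in INT
    return send or rcv

ok = all(
    sendint2_next(x, z_bar(x, p), x2, z_bar(x2, p2))
    for x in INT | {NOT_INT}
    for p in INT
    for x2, p2 in next_p(x, p)
)
print(ok)
```

Exhaustive enumeration works here because the model's state space is tiny; TLC performs the analogous check, including the initial-state condition, automatically.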
\subsection{One-Prediction Prophecy Variables in General} \label{sec:simple-general}
We generalize our description of a one-prediction prophecy variable in two
ways. First, we can allow a prophecy variable to make predictions
about more than one action by replacing more than one subaction $A$ of
a disjunctive representation by an action $A^{p}$ of the form
(\ref{eq:gen-Ap}). If we do this for subactions $A_{1}$, $A_{2}$,
\ldots, then the value of $p$ makes a prediction about the next step
to occur that is an $A_{1}$ or $A_{2}$ or \ldots\ step. We can
express this a little more elegantly by generalizing (\ref{eq:gen-Ap})
to allow $Setp$ to depend on $A$ and then letting each action $A$ of
the disjunctive representation be replaced with an action $A^{p}$ defined by
\begin{equation}
A^{p} == A \, /\ \, Pred_{A}(p) \, /\ \, Setp_{A}
\NOTLA \label{eq:gen-Ap2}
\end{equation}
For an action $A$ about which no prediction is being made,
$Pred_{A}(p)$ is the expression $\TRUE$. We can then replace
(\ref{eq:ApDef}) and (\ref{eq:ApDef2}) by defining $Setp_{A}$ to be
one of the following:
\begin{equation}
\begin{noj2}
\mbox{(a)} & Setp_{A} == p'=p \V{.2}
\mbox{(b)} & Setp_{A} == p'\in\Pi
\end{noj2}
\NOTLA \label{eq:ApDef2bis}
\end{equation}
where possibility (a) is allowed only if $Pred_{A}(p)$ is the
expression $\TRUE$ (so $p$ is making no prediction about $A$). This
is more general because it allows an action that doesn't use the
prediction made by $p$ to make a new prediction.
Our second generalization is needed to handle subactions of a
disjunctive representation having a nonempty context. For
a subaction $A$ with context $<<\textbf{k};\textbf{K}>>$,
condition (\ref{eq:Proph1}) contains the identifiers \textbf{k}.
That condition need only hold for values of those identifiers
in the corresponding set in \textbf{K}. Thus (\ref{eq:Proph1})
can be generalized to
\begin{equation}
\tlabox{\A <<\textbf{k};\textbf{K}>> : A => (\E i \in \Pi : Pred_{A}(i))}
\NOTLA \label{eq:Proph1Gen}
\end{equation}
Condition~(\ref{eq:Proph1Gen}) is a condition on pairs of states.
It needn't hold for all pairs of states, only for pairs of states
that can occur in a behavior satisfying the original
specification $Spec$. We can therefore
replace (\ref{eq:Proph1Gen}) by the requirement%
\footnote{Formula (\ref{eq:Proph1GenTLA}) does not imply
that (\ref{eq:Proph1Gen}) is true for stuttering steps that are
allowed by $A$. It can be shown that this doesn't matter,
and condition (\ref{eq:Proph1GenTLA}) is strong enough.}
\begin{equation}
Spec\; => \; [][\tlabox{\A <<\textbf{k};\textbf{K}>> : A => (\E i \in \Pi : Pred_{A}(i))}]_{vars}
\NOTLA \label{eq:Proph1GenTLA}
\end{equation}
TLC can check this condition with a model having the temporal formula
$Spec$ as its behavioral spec and
\[
\tlabox{[][\A <<\textbf{k};\textbf{K}>> : A => (\tlabox{\E i \in \Pi : Pred_{A}(i)})]_{vars}}
\]
as a property to be checked.
\subsection{Prophecy Array Variables} \label{sec:proph-array}
Our next example is based on one created by Mart\'{\i}n
Abadi~\cite{abadi:undo}. It is similar to our $SendInt$
specifications in that a sender sends a value $v$ to a receiver with a
$Send$ action that sets the variable $x$ to $v$, and the receiver
receives the value by resetting $x$. Instead of sending integers, the
values sent are elements of an unspecified constant set $Data$, and we
let the initial value of $x$ be a value $NonData$ not in $Data$. A
variable $y$ contains a set of values to be sent. Those values are
chosen by a $Choose$ action, which adds a new data element to $y$.
The high-level specification is formula $Spec$ in module $SendSet$ of
\lref{targ:SendSet}{Figure~{\ref{fig:SendSet}}}.
\begin{figure} \target{targ:SendSet}
\begin{notla}
------------------------------ MODULE SendSet ------------------------------
CONSTANT Data
NonData == CHOOSE d : d \notin Data
VARIABLES x, y
vars == <<x, y>>
Init == (x = NonData) /\ (y = {})
Choose == /\ \E d \in Data \ y : y' = y \cup {d}
/\ x' = x
Send == /\ x = NonData
/\ x' \in y
/\ y' = y \ {x'}
Rcv == /\ x \in Data
/\ x' = NonData
/\ y' = y
Next == Choose \/ Send \/ Rcv
Spec == Init /\ [][Next]_vars
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSet}\moduleRightDash\@xx{
\@x{ {\CONSTANT} Data
\@pvspace{8.0pt
\@x{ NonData \.{\defeq} {\CHOOSE} d \.{:} d \.{\notin} Data
\@pvspace{8.0pt
\@x{ {\VARIABLES} x ,\, y
\@x{ vars \.{\defeq} {\langle} x ,\, y {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq} ( x \.{=} NonData ) \.{\land} ( y \.{=} \{ \}
)
\@pvspace{8.0pt
\@x{ Choose \.{\defeq} \.{\land} \E\, d \.{\in} Data \.{\,\backslash\,} y
\.{:} y \.{'} \.{=} y \.{\cup} \{ d \}
\@x{\@s{50.30} \.{\land} x \.{'} \.{=} x
\@pvspace{8.0pt
\@x{ Send \.{\defeq} \.{\land} x \.{=} NonData
\@x{\@s{40.87} \.{\land} x \.{'} \.{\in} y
\@x{\@s{40.87} \.{\land} y \.{'}\@s{0.10} \.{=} y \.{\,\backslash\,} \{ x
\.{'} \}
\@pvspace{8.0pt
\@x{ Rcv \.{\defeq} \.{\land} x \.{\in} Data
\@x{\@s{35.94} \.{\land} x \.{'} \.{=} NonData
\@x{\@s{35.94} \.{\land} y \.{'}\@s{0.10} \.{=} y
\@pvspace{8.0pt
\@x{ Next \.{\defeq} Choose \.{\lor} Send \.{\lor} Rcv
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendSet}.\label{fig:SendSet}}
\end{figure}
We consider the variable $x$ to be externally visible and $y$ to be
internal.
Our implementation adds to the specification of $SendSet$ an
\emph{undo} operation that removes elements from $y$. Abadi reports
that this example is a highly simplified abstraction of a real
system in which the implementation contains an undo operation not
present in the specification.
The implementation specification is
formula $SpecU$ in module $SendSetUndo$ of
\lref{targ:SendSetUndo}{Figure~\ref{fig:SendSetUndo}}.
\begin{figure} \target{targ:SendSetUndo}
\begin{notla}
---------------------------- MODULE SendSetUndo ----------------------------
EXTENDS SendSet
Undo(S) == /\ y' = y \ S
/\ x' = x
NextU == Next \/ (\E S \in (SUBSET y) : Undo(S))
SpecU == Init /\ [][NextU]_vars
============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSetUndo}\moduleRightDash\@xx{
\@x{ {\EXTENDS} SendSet
\@pvspace{8.0pt
\@x{ Undo ( S )\@s{4.1} \.{\defeq}\@s{4.1} \.{\land} y \.{'}\@s{0.10} \.{=} y
\.{\,\backslash\,} S
\@x{\@s{65.59} \.{\land} x \.{'} \.{=} x
\@pvspace{8.0pt
\@x{ NextU\@s{4.1} \.{\defeq}\@s{4.1} Next\@s{4.1} \.{\lor}\@s{4.1} ( \E\, S
\.{\in} ( {\SUBSET} y ) \.{:} Undo ( S ) )
\@pvspace{8.0pt
\@x{ SpecU\@s{5.18} \.{\defeq}\@s{4.09} Init\@s{4.1} \.{\land}\@s{4.1} {\Box}
[ NextU ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendSetUndo}.} \label{fig:SendSetUndo}
\end{figure}
Its initial predicate is the same as the initial predicate of the
specification $Spec$ of module $SendSet$, and its next-state action
$NextU$ is the same as the next-state action $Next$ of that module
except that it also allows $Undo(S)$ steps that remove from $y$ an
arbitrarily chosen subset $S$ of $y$.
It's clear that the specifications $Spec$ of module $SendSet$ and
$SpecU$ of module $SendSetUndo$ allow the same behaviors of the
variable $x$. Hence, \tlabox{\EE y:Spec} and \tlabox{\EE y:SpecU} are
equivalent. It's easy to show that $Spec$ implements $SpecU$ under the
identity refinement mapping in which \ov{y} is defined to equal $y$,
since $NextU$ allows all steps allowed by $Next$. This implies that
\tlabox{\EE y:Spec} implies \tlabox{\EE y:SpecU}. To construct a
refinement mapping under which $SpecU$ implements $Spec$, we must
define \ov{y} so that it contains a data value $d$ iff that value is
going to be sent by a $Send$ step rather than being removed from $y$
by an $Undo$ step. This involves predicting, when $d$ is added to $y$,
whether it will later be sent or ``undone''.
We add a prophecy array variable $p$ to $SpecU$ that makes this prediction,
setting $p[d]$ to either $"send"$ or $"undo"$ when $d$ is added to
$y$. So, we define our set $Pi$ of possible predictions by
\[ Pi == \{"send","undo"\}
\]
The value of $p$ in every state will be a function with
domain equal to the set $y$, with $p[d]\in Pi$ for all $d\in y$.
In other words
\tlabox{p \in [y -> Pi]}
will be an invariant of the spec $SpecUP$ obtained by adding the
prophecy variable $p$. The variable $p$ is therefore making a
prediction $p[d]$ for every $d$ in $y$, so $p$ is making an array of
prophecies. (This is a ``dynamic'' array, because the value of $y$
can change.)
We now define the specification $SpecUP$. As with a one-prediction prophecy
variable, we obtain the next-state relation of $SpecUP$ by replacing
each subaction $A$ in a disjunctive representation of the next-state
relation with a new action $A^{p}$. Instead of defining $A^{p}$ as in
(\ref{eq:gen-Ap2}), we define it to equal
\begin{equation}
A^{p} == A \, /\ \, Pred_{A}(p) \, /\ \, (p' \in NewPSet_{A})
\NOTLA \label{eq:ApDefFcn}
\end{equation}
for suitable expressions $Pred_{A}(p)$ and $NewPSet_{A}$. We need a
condition corresponding to condition (\ref{eq:Proph1GenTLA}) for a
one-prediction prophecy variable to assert that there is a possible value of
$p$ that makes $Pred_{A}(p)$ true. With an array prophecy variable,
$p$ is no longer an element of $Pi$ but a function in $[Dom -> Pi]$
for some domain $Dom$ that can change. In our example, $Dom$ equals
$y$. To make the generalization to an arbitrary spec easier, we
define $Dom$ to equal $y$ and write $Dom$ instead of $y$ where
appropriate. For our example, we can replace (\ref{eq:Proph1GenTLA})
with
\begin{equation}
SpecU\; =>
\; [][\A <<\textbf{k};\textbf{K}>> :
A => (\tlabox{\E f \in [Dom -> \Pi] : Pred_{A}(f)})]_{vars}
\NOTLA \label{eq:ProphFcn}
\end{equation}
where $<<\textbf{k};\textbf{K}>>$ is the context of $A$. (Remember
that for the empty context $<<\,;\,>>$, we define
\tlabox{\A <<\,;\,>> : F} to equal $F$, for any formula $F$.)
\begin{sloppypar}
We now define the formulas $Pred_{A}$ and $NewPSet_{A}$ for the
disjunctive representation of $NextU$ with subactions $Choose$,
$Send$, $Rcv$, and $Undo(S)$. The context of the first three
subactions is empty; the context of $Undo(S)$ is
$<<S; \,\SUBSET y>>$.
\end{sloppypar}
The variable $p$ does not make any prediction about the $Choose$
action, so $Pred_{Choose}(p)$ should equal \TRUE. The action adds an
element $d$ to its domain, so $Choose^{p}$ must allow $p'[d]$ to equal
any element of $Pi$. For every element $d$ already in the domain $Dom$
of $p$, the value of $p[d]$ is left unchanged. Our definitions
of $Pred_{Choose}(p)$ and $NewPSet_{Choose}(p)$
are then
\begin{display}
\begin{notla}
PredChoose(p) == TRUE
NewPSetChoose(p) == {f \in [Dom' -> Pi] : \A d \in Dom : f[d] = p[d]}
\end{notla}
\begin{tlatex}
\@x{ PredChoose ( p )\@s{19.31} \.{\defeq} {\TRUE}
\@x{ NewPSetChoose ( p ) \.{\defeq} \{ \,f \.{\in} [ Dom \.{'} \.{\rightarrow}
Pi ]\@s{4.1} \.{:}\@s{4.1} \A\, d \.{\in} Dom \.{:} f [ d ] \.{=} p [ d ] \,
\}
\end{tlatex}
\end{display}
The prophecy variable $p$ should predict that if the next action is
a $Send$ action, then it sends a value $d$ in $Dom$ such that $p[d]="send"$.
The value sent by the action is $x'$, so we define
\[ PredSend(p) == p[x'] = "send"
\]
The $Send$ action removes the sent element $d$ from $Dom$, thus erasing
the prediction $p$ made about $d$. The value of $p[d]$ is left
unchanged for all other elements $d$ in $Dom$. Thus
$NewPSet_{Send}(p)$ is defined as follows to be a set consisting of a
single function
\[ NewPSetSend(p) == \{\, [d \in Dom' |-> p[d]] \,\}
\]
No prediction is made about the $Rcv$ action, and it doesn't change
$Dom$, so we have:
\begin{display}
\begin{notla}
PredRcv(p) == TRUE
NewPSetRcv(p) == {p}
\end{notla}
\begin{tlatex}
\@x{ PredRcv ( p )\@s{19.31} \.{\defeq} {\TRUE}
\@x{ NewPSetRcv ( p ) \.{\defeq} \{ p \}
\end{tlatex}
\end{display}
The $Undo(S)^{p}$ action should be enabled only when $p$ has predicted
that none of the elements in $S$ will be sent---in other words,
when \tlabox{\A d \in S : p[d] = "undo"} is true. Since
the identifier $S$ appears in this formula, it must be an argument
of the definition of $Undo(S)^{p}$. Thus, we define
\[ PredUndo(p, S) == \A d \in S : p[d] = "undo"
\]
The $Undo(S)$ action removes from $Dom$ all the elements whose
predictions are used by $Undo(S)^{p}$---namely, all the elements
of $S$. We can define $NewPSet_{Undo(S)}$ the same way we defined
$NewPSet_{Send}$, without explicitly mentioning $S$:
\[ NewPSetUndo(p) == \{\, [d \in Dom' |-> p[d]] \,\}
\]
We can now declare the variable $p$ and define the specification
$SpecUP$ by
defining the initial predicate
$InitUP$ and defining the next-state relation $NextUP$
in terms of the subactions $A^{p}$, using (\ref{eq:ApDefFcn}).
The complete specification is in module $SendSetUndoP$
shown in \lref{targ:SendSetUndoP}{Figure~\ref{fig:SendSetUndoP}}.
\begin{figure} \target{targ:SendSetUndoP}
\begin{notla}
--------------------------- MODULE SendSetUndoP ---------------------------
EXTENDS SendSetUndo
Pi == {"send", "undo"}
Dom == y
PredChoose(p) == TRUE
NewPSetChoose(p) == {f \in [Dom' -> Pi] : \A d \in Dom : f[d] = p[d]}
PredSend(p) == p[x'] = "send"
NewPSetSend(p) == { [d \in Dom' |-> p[d]] }
PredRcv(p) == TRUE
NewPSetRcv(p) == {p}
PredUndo(p, S) == \A d \in S : p[d] = "undo"
NewPSetUndo(p) == { [d \in Dom' |-> p[d]] }
VARIABLE p
varsP == <<vars, p>>
InitUP == Init /\ (p = << >>)
ChooseP == Choose /\ PredChoose(p) /\ (p' \in NewPSetChoose(p))
SendP == Send /\ PredSend(p) /\ (p' \in NewPSetSend(p))
RcvP == Rcv /\ PredRcv(p) /\ (p' \in NewPSetRcv(p))
UndoP(S) == Undo(S) /\ PredUndo(p, S) /\ (p' \in NewPSetUndo(p))
NextUP == ChooseP \/ SendP \/ RcvP \/ (\E S \in SUBSET y : UndoP(S))
SpecUP == InitUP /\ [][NextUP]_varsP
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSetUndoP}\moduleRightDash\@xx{
\@x{ {\EXTENDS} SendSetUndo
\@pvspace{8.0pt
\@x{ Pi \.{\defeq} \{\@w{send} ,\,\@w{undo} \}
\@x{ Dom \.{\defeq} y
\@pvspace{8.0pt
\@x{ PredChoose ( p )\@s{19.31} \.{\defeq} {\TRUE}
\@x{ NewPSetChoose ( p ) \.{\defeq} \{ f \.{\in} [ Dom \.{'} \.{\rightarrow}
Pi ]\@s{4.1} \.{:}\@s{4.1} \A\, d \.{\in} Dom \.{:} f [ d ] \.{=} p [ d ]
\}
\@pvspace{8.0pt
\@x{ PredSend ( p )\@s{19.31} \.{\defeq} p [ x \.{'} ] \.{=}\@w{send}
\@x{ NewPSetSend ( p ) \.{\defeq} \{ [ d \.{\in} Dom \.{'} \.{\mapsto} p [ d
] ] \}
\@pvspace{8.0pt
\@x{ PredRcv ( p )\@s{19.31} \.{\defeq} {\TRUE}
\@x{ NewPSetRcv ( p ) \.{\defeq} \{ p \}
\@pvspace{8.0pt
\@x{ PredUndo ( p ,\, S )\@s{6.38} \.{\defeq} \A\, d\@s{0.55} \.{\in} S \.{:}
p [ d ] \.{=}\@w{undo}
\@x{ NewPSetUndo ( p ) \.{\defeq} \{ [ d \.{\in} Dom \.{'} \.{\mapsto} p [ d
] ] \}
\@pvspace{8.0pt
\@x{ {\VARIABLE} p
\@x{ varsP \.{\defeq} {\langle} vars ,\, p {\rangle}
\@pvspace{8.0pt
\@x{ InitUP \.{\defeq} Init \.{\land} ( p \.{=} {\langle} {\rangle} )
\@pvspace{8.0pt
\@x{ ChooseP\@s{7.20} \.{\defeq} Choose\@s{7.08} \.{\land} PredChoose ( p
)\@s{5.42} \.{\land} ( p \.{'} \.{\in} NewPSetChoose ( p ) )
\@x{ SendP\@s{16.91} \.{\defeq} Send\@s{16.51} \.{\land} PredSend ( p
)\@s{14.85} \.{\land} ( p \.{'} \.{\in} NewPSetSend ( p ) )
\@x{ RcvP\@s{21.89} \.{\defeq} Rcv\@s{21.44} \.{\land} PredRcv ( p
)\@s{19.77} \.{\land} ( p \.{'} \.{\in} NewPSetRcv ( p ) )
\@x{ UndoP ( S ) \.{\defeq} Undo ( S ) \.{\land} PredUndo ( p ,\, S )
\.{\land} ( p \.{'} \.{\in} NewPSetUndo ( p ) )
\@pvspace{8.0pt
\@x{ NextUP \.{\defeq} ChooseP \.{\lor} SendP \.{\lor} RcvP\@s{4.1}
\.{\lor}\@s{4.1} ( \E\, S \.{\in} {\SUBSET} y \.{:} UndoP ( S ) )
\@pvspace{8.0pt
\@x{ SpecUP\@s{1.08} \.{\defeq} InitUP \.{\land} {\Box} [ NextUP ]_{ varsP}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendSetUndoP}.} \label{fig:SendSetUndoP}
\end{figure}
Note that the initial value of $p$ is the unique function whose domain
is the empty set. We could write that function as
\tlabox{[d \in \{\} |-> exp]}
for any expression $exp$---for example, 42. However, it's
easier to write that function as $<< >>$ (the empty sequence).
We can now define the refinement mapping under which $SpecUP$
implements specification $Spec$ of module $SendSet$. The refinement
mapping defines \ov{y} to equal the set of elements $d$ in $y$ with
$p[d]="send"$. We thus add to the module
\begin{display}
\begin{notla}
SS == INSTANCE SendSet WITH y <- {d \in y : p[d] = "send"}
THEOREM SpecUP => SS!Spec
\end{notla}
\begin{tlatex}
\@x{ SS \.{\defeq} {\INSTANCE} SendSet {\WITH} y \.{\leftarrow} \{ d \.{\in}
y \.{:} p [ d ] \.{=}\@w{send} \}}\vspace{.2em
\@x{ {\THEOREM} SpecUP \.{\implies} SS {\bang} Spec
\end{tlatex}
\end{display}
We can have TLC check this theorem by creating a model having $SpecUP$
as the behavior spec and checking the property $SS!Spec$.
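Again TLC is the intended tool, but the claim can also be sketched in Python (a hypothetical simulation, not TLC; \verb|DATA|, \verb|NON_DATA|, and the state encoding are our own stand-ins). The sketch explores all states of $SpecUP$ reachable from $InitUP$ over a two-element data set and checks that, under the refinement mapping, every step is a $SendSet$ step or a stuttering step.

```python
# Hypothetical sketch (Python, not TLC): explore all reachable states of
# SpecUP from module SendSetUndoP over a tiny Data set, checking that the
# refinement mapping  y_bar == {d \in y : p[d] = "send"}  turns every step
# into a SendSet Next step or a stuttering step on <<x, y_bar>>.
from itertools import chain, combinations

DATA = frozenset({"a", "b"})
NON_DATA = "NonData"
PI = {"send", "undo"}

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def steps(x, y, p):
    """Successors under ChooseP, SendP, RcvP, and UndoP(S)."""
    out = []
    for d in DATA - y:                    # ChooseP: predict "send" or "undo"
        for v in PI:
            out.append((x, y | {d}, {**p, d: v}))
    if x == NON_DATA:                     # SendP: PredSend(p) == (p[x'] = "send")
        for d in y:
            if p[d] == "send":
                out.append((d, y - {d}, {e: p[e] for e in y if e != d}))
    if x in DATA:                         # RcvP: prediction unchanged
        out.append((NON_DATA, y, p))
    for s in subsets(y):                  # UndoP(S): all of S predicted "undo"
        if s and all(p[d] == "undo" for d in s):
            out.append((x, y - set(s), {e: p[e] for e in y if e not in s}))
    return out

def y_bar(y, p):
    return frozenset(d for d in y if p[d] == "send")

def sendset_next_or_stutter(x, yb, x2, yb2):
    stutter = x2 == x and yb2 == yb
    choose = x2 == x and any(yb2 == yb | {d} for d in DATA - yb)
    send = x == NON_DATA and x2 in yb and yb2 == yb - {x2}
    rcv = x in DATA and x2 == NON_DATA and yb2 == yb
    return stutter or choose or send or rcv

ok, seen = True, set()
frontier = [(NON_DATA, frozenset(), frozenset())]   # InitUP: y = {}, p = << >>
while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    x, y, pf = state
    p = dict(pf)
    for x2, y2, p2 in steps(x, y, p):
        ok = ok and sendset_next_or_stutter(x, y_bar(y, p), x2, y_bar(y2, p2))
        frontier.append((x2, frozenset(y2), frozenset(p2.items())))
print(ok)
```

Note that a $Choose^{p}$ step predicting $"undo"$ and an $Undo(S)^{p}$ step both leave \ov{y} unchanged, so they map to stuttering steps, which $[Next]_{vars}$ allows.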
We should also check that condition~(\ref{eq:ProphFcn}) holds for each
subaction $A$. To do this, we need to create a model with
specification $SpecU$ and have TLC check the property:
\begin{display}
\begin{notla}
[][ /\ Choose => \E f \in [Dom -> Pi] : PredChoose(f)
/\ Send => \E f \in [Dom -> Pi] : PredSend(f)
/\ Rcv => \E f \in [Dom -> Pi] : PredRcv(f)
/\ \A S \in SUBSET y :
Undo(S) => \E f \in [Dom -> Pi] : PredUndo(f,S)
]_vars
\end{notla}
\begin{tlatex}
\@x{ {\Box} [\@s{4.1} \.{\land} Choose \.{\implies} \E\, f \.{\in} [ Dom
\.{\rightarrow} Pi ] \.{:} PredChoose ( f )
\@x{\@s{14.35} \.{\land} Send\@s{9.42} \.{\implies} \E\, f \.{\in} [ Dom
\.{\rightarrow} Pi ] \.{:} PredSend ( f )
\@x{\@s{14.35} \.{\land} Rcv\@s{14.35} \.{\implies} \E\, f \.{\in} [ Dom
\.{\rightarrow} Pi ] \.{:} PredRcv ( f )
\@x{\@s{14.35} \.{\land} \A\, S \.{\in} {\SUBSET} y \.{:}
\@x{\@s{36.78} Undo ( S ) \.{\implies} \E\, f \.{\in} [ Dom \.{\rightarrow}
Pi ] \.{:} PredUndo ( f ,\, S )
\@x{\@s{7.47} ]_{ vars}
\end{tlatex}
\end{display}
However, there is a problem in doing this. TLC will not allow a model
for module $SendSetUndoP$ to have behavior specification $SpecU$
because that spec doesn't describe the behavior of the variable $p$.
We can solve this problem by modifying the specification---temporarily
inserting ``\verb|======|'' into the module before the declaration of
$p$ and then creating the necessary model. Alternatively, we can move
all the definitions before the declaration of $p$ into module
$SendSetUndo$ and check the condition in a model for that spec. This
is inelegant because those definitions aren't part of the
$SendSetUndo$ specification. The proper solution is to move those
definitions from $SendSetUndoP$ and put them in a new module that
extends $SendSetUndo$ and is extended by $SendSetUndoP$. We can then
check that the condition is satisfied using a model for that
specification that has behavior spec $SpecU$. We won't bother doing
this, instead leaving the definitions in module $SendSetUndoP$.
\bigskip
\noindent
Recall that in the $SendInt$ example of
Section~\ref{sec:simple-proph}, the one-prediction prophecy variable we used
predicted the next value to be sent when the previous value was sent,
while the specification $SendInt2$ didn't choose the next value to be
sent until the previous value was received. We can use an array
prophecy variable to defer the prediction until it's needed. We let
the domain $Dom$ of $p$ initially contain a single element---let's
take that element to be $"on"$. We let the $Send$ action set $Dom$ to
the empty set, and we let the $Rcv$ action set $Dom$ to $\{"on"\}$.
\subsection{Prophecy Data Structure Variables} \label{sec:proph-data-struct}
It is easy to generalize from the $SendSet$ example to an arbitrary
prophecy-array variable. However, it is useful to generalize still
further from an array to an arbitrary data structure. These prophecy
data structure variables are the most general ones that we consider.
We propose them as the standard way of defining prophecy variables in
\tlaplus, and we have created a module with definitions that
simplify adding these prophecy variables. Both one-prediction prophecy
variables and prophecy array variables are easily expressed as special cases.
As our example of a prophecy data structure variable, we modify the
specification $SendSet$, in which a set of items to be sent is chosen,
to a specification $SendSeq$ in which a sequence of items is chosen.
The value of the variable $y$ is changed from a set of data items to a
sequence of data items. The next item to be sent is the one at the
head of $y$, and each value chosen is appended to the tail of $y$.
The specification is in module $SendSeq$, shown in
\lref{targ:SendSeq}{Figure~\ref{fig:SendSeq}}.
\begin{figure} \target{targ:SendSeq}
\begin{notla}
------------------------------ MODULE SendSeq ------------------------------
EXTENDS Sequences, Integers
CONSTANT Data
NonData == CHOOSE v : v \notin Data
VARIABLES x, y
vars == <<x, y>>
Init == (x = NonData) /\ (y = <<>>)
Choose == /\ \E d \in Data : y' = Append(y, d)
/\ x' = x
Send == /\ x = NonData /\ y # << >>
/\ x' = Head(y)
/\ y' = Tail(y)
Rcv == /\ x \in Data
/\ x' = NonData
/\ y' = y
Next == Choose \/ Send \/ Rcv
Spec == Init /\ [][Next]_vars
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSeq}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Sequences ,\, Integers
\@pvspace{8.0pt
\@x{ {\CONSTANT} Data
\@pvspace{8.0pt
\@x{ NonData \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} Data
\@pvspace{8.0pt
\@x{ {\VARIABLES} x ,\, y
\@x{ vars \.{\defeq} {\langle} x ,\, y {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq} ( x \.{=} NonData ) \.{\land} ( y \.{=}
{\langle} {\rangle} )
\@pvspace{8.0pt
\@x{ Choose \.{\defeq} \.{\land} \E\, d \.{\in} Data \.{:} y \.{'} \.{=}
Append ( y ,\, d )
\@x{\@s{50.30} \.{\land} x \.{'} \.{=} x
\@pvspace{8.0pt
\@x{ Send \.{\defeq} \.{\land} x \.{=} NonData \.{\land} y \.{\neq} {\langle}
{\rangle}
\@x{\@s{40.87} \.{\land} x \.{'} \.{=} Head ( y )
\@x{\@s{40.87} \.{\land} y \.{'}\@s{0.10} \.{=} Tail ( y )
\@pvspace{8.0pt
\@x{ Rcv \.{\defeq} \.{\land} x \.{\in} Data
\@x{\@s{35.94} \.{\land} x \.{'} \.{=} NonData
\@x{\@s{35.94} \.{\land} y \.{'}\@s{0.10} \.{=} y
\@pvspace{8.0pt
\@x{ Next \.{\defeq} Choose \.{\lor} Send \.{\lor} Rcv
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendSeq}. \label{fig:SendSeq}}
\end{figure}
For our implementation, we add an undo action that removes an
arbitrary element from the sequence $y$. The specification
is in module $SendSeqUndo$ of \lref{targ:SendSeqUndo}{Figure~\ref{fig:SendSeqUndo}}. It
defines $RemoveEltFrom(i, seq)$ to be the sequence obtained from
a sequence $seq$ by removing its $i$\tth\ element,
assuming $1 \leq i \leq Len(seq)$.
\begin{figure} \target{targ:SendSeqUndo}
\begin{notla}
---------------------------- MODULE SendSeqUndo ----------------------------
EXTENDS SendSeq
RemoveEltFrom(i, seq) == [j \in 1..(Len(seq)-1) |-> IF j < i THEN seq[j]
ELSE seq[j+1]]
Undo(i) == /\ y' = RemoveEltFrom(i, y)
/\ x' = x
NextU == Next \/ (\E i \in 1..Len(y) : Undo(i))
SpecU == Init /\ [][NextU]_vars
============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSeqUndo}\moduleRightDash\@xx{
\@x{ {\EXTENDS} SendSeq
\@pvspace{8.0pt
\@x{ RemoveEltFrom ( i ,\, seq ) \.{\defeq} [ j \.{\in} 1 \.{\dotdot} ( Len (
seq ) \.{-} 1 ) \.{\mapsto} {\IF} j \.{<} i \.{\THEN} seq [ j ]
\@x{\@s{271.81} \.{\ELSE} seq [ j \.{+} 1 ] ]
\@pvspace{8.0pt
\@x{ Undo ( i ) \.{\defeq} \.{\land} y \.{'}\@s{0.10} \.{=} RemoveEltFrom ( i
,\, y )
\@x{\@s{54.66} \.{\land} x \.{'} \.{=} x
\@pvspace{8.0pt
\@x{ NextU \.{\defeq} Next \.{\lor} ( \E\, i \.{\in} 1 \.{\dotdot} Len ( y )
\.{:} Undo ( i ) )
\@pvspace{8.0pt
\@x{ SpecU\@s{1.08} \.{\defeq} Init \.{\land} {\Box} [ NextU ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Specification \emph{SendSeqUndo}. \label{fig:SendSeqUndo}}
\end{figure}
As before, we want to show that specification $SpecU$ of module
$SendSeqUndo$ implements \tlabox{\EE y : Spec}, where $Spec$ is the
specification of module $SendSeq$. Again, we need to add a prophecy
variable $p$ that predicts whether each element of $y$ will be
sent or ``undone''. We do this by having $p$ be an element
of $Seq(\{"send", "undo"\})$ that has the same length as
$y$. The $Choose^{p}$ action should append either $"send"$
or $"undo"$ to the end of $p$, the $Send^{p}$ action should remove the
head of $p$, and the $Undo(i)$ action should remove the
$i$\tth\ element of $p$.
As we did for the $SendSet$ example, we write a module $SendSeqUndoP$
that extends $SendSeqUndo$. In it, we define the formulas
$Pred_{A}(p)$ of (\ref{eq:ApDefFcn}) for each of the subactions
$Choose$, $Send$, $Rcv$, and $Undo(i)$. The definitions are:
\begin{display}
\begin{notla}
PredChoose(p) == TRUE
PredSend(p) == p[1] = "send"
PredRcv(p) == TRUE
PredUndo(p, i) == p[i] = "undo"
\end{notla}
\begin{tlatex}
\@x{ PredChoose ( p )\@s{4.1} \.{\defeq} {\TRUE}
\@x{ PredSend ( p )\@s{13.52} \.{\defeq} p [ 1 ] \.{=}\@w{send}
\@x{ PredRcv ( p )\@s{18.45} \.{\defeq} {\TRUE}
\@x{ PredUndo ( p ,\, i )\@s{1.41} \.{\defeq} p [ i ] \.{=}\@w{undo}
\end{tlatex}
\end{display}
We now need to define the expression $NewPSet_{A}$ of
(\ref{eq:ApDefFcn}) for each of these subactions.
Since a sequence of length $n$ is a function with domain $1\dd n$, the
value of $p$ is a function---just as in $SendSetUndoP$ above. This
time its domain $Dom$ is the set $1\dd Len(y)$. However, in that
example, if $d$ is in the domain $Dom$ of $p$ in two successive
states, then $p[d]$ represents the same prediction in both states.
This isn't true in the current example. If $s->t$ is a $Send$ step
and $Len(p)>1$ in state $s$, then the prediction made by $p[2]$ in
state $s$ is the prediction made by $p[1]$ in state $t$. If $s->t$ is
an $Undo(i)$ step and $j>i$, the prediction made by $p[j]$ in state
$s$ is the prediction made by $p[j-1]$ in state~$t$.
In general, an action $A$ defines a correspondence between some
elements in the domain $Dom$ of $p$ and elements in the domain $Dom'$
of $p'$. For example, for the
$Send$ action, each element $i>1$ in
$Dom$ corresponds to the element $i-1$ of $Dom'$. The formula $p' \in
NewPSet_{A}$ in (\ref{eq:ApDefFcn}) should ensure that if an element
$d$ of $Dom'$ either corresponds to an element $c$ of $Dom$ that makes
a prediction about $A$ or else does not correspond to any element of
$Dom$, then $p'[d]$ can assume any value in $\Pi$; but if $d$
corresponds to an element $c$ of $Dom$ that makes no prediction about
$A$, then $p'[d]$ equals $p[c]$. Instead of defining the formulas
$NewPSet_{A}$ directly, we will define them in terms of the
correspondence between elements of $Dom$ and $Dom'$ made by $A$ and
the set of elements $d$ in $Dom$ for which $p[d]$ makes a prediction
about $A$.
To express formally a correspondence between elements of $Dom$ and
$Dom'$, we introduce the concept of a partial injection. A
\emph{partial function} from a set $U$ to a set $V$ is a function from
a subset of $U$ to $V$. In other words, it is an element of $[D->V]$
for some subset $D$ of $U$. (Remember that $U$ is a subset of
itself.) An \emph{injection} is a function that maps different
elements in its domain to different values. In other words, a
function $f$ is an injection iff for all $a$ and $b$ in $\DOMAIN f$,
if $a#b$ then $f[a]#f[b]$. The set of all partial injections from
$U$ to $V$ is defined in \tlaplus\ by
\begin{display}
\begin{notla}
PartialInjections(U, V) ==
LET PartialFcns == UNION { [D -> V] : D \in SUBSET U}
IN {f \in PartialFcns : \A a, b \in DOMAIN f : (a # b) => (f[a] # f[b])}
\end{notla}
\begin{tlatex}
\@x{ PartialInjections ( U ,\, V ) \.{\defeq}
\@x{\@s{12.29} \.{\LET} PartialFcns \.{\defeq} {\UNION} \{ [ D
\.{\rightarrow} V ] \.{:} D \.{\in} {\SUBSET} U \}
\@x{\@s{12.29} \.{\IN} \{ f \.{\in} PartialFcns \.{:} \A\, a ,\, b \.{\in}
{\DOMAIN} f \.{:} ( a \.{\neq} b ) \.{\implies} ( f [ a ] \.{\neq} f [ b ] )
\}
\end{tlatex}
\end{display}
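For finite sets, the definition of $PartialInjections$ can be mirrored in Python by enumerating all functions on each subset of $U$ and keeping the injective ones. This brute-force sketch (our own illustration, feasible only for tiny sets) represents a function as a dictionary:

```python
from itertools import chain, combinations, product

def partial_injections(U, V):
    """All partial injections from U to V, each represented as a dict."""
    U, V = list(U), list(V)
    # D \in SUBSET U
    subsets = chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))
    fcns = []
    for D in subsets:
        for vals in product(V, repeat=len(D)):   # [D -> V]
            f = dict(zip(D, vals))
            if len(set(f.values())) == len(f):   # injectivity
                fcns.append(f)
    return fcns

# Partial injections from {1,2} to {"a"} are {}, {1: "a"}, and {2: "a"};
# the function {1: "a", 2: "a"} is excluded because it is not injective.
print(partial_injections({1, 2}, {"a"}))
```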
For each subaction $A$, we define a partial injection $DomInj_{A}$
from $Dom$ to $Dom'$ such that an element $c$ of $Dom$ corresponds to
an element $d$ of $Dom'$ iff $c$ is in the domain of $DomInj_{A}$
and $d=DomInj_{A}[c]$. Here are the definitions for the four subactions,
which are put in module $SendSeqUndoP$:
\begin{display}
\begin{notla}
DomInjChoose == [d \in Dom |-> d]
DomInjSend == [i \in 2..Len(y) |-> i-1]
DomInjRcv == [d \in Dom |-> d]
DomInjUndo(i) == [j \in 1..Len(y) \ {i} |-> IF j < i THEN j ELSE j-1]
\end{notla}
\begin{tlatex}
\@x{ DomInjChoose\@s{4.35} \.{\defeq} [ d\@s{2.05} \.{\in} Dom \.{\mapsto} d
]
\@x{ DomInjSend\@s{13.78} \.{\defeq} [ i\@s{2.05} \.{\in} 2 \.{\dotdot} Len (
y ) \.{\mapsto} i \.{-} 1 ]
\@x{ DomInjRcv\@s{18.71} \.{\defeq} [ d \.{\in} Dom \.{\mapsto} d ]
\@x{ DomInjUndo ( i ) \.{\defeq} [ j\@s{1.63} \.{\in} 1 \.{\dotdot} Len ( y )
\.{\,\backslash\,} \{ i \} \.{\mapsto} {\IF} j \.{<} i \.{\THEN} j \.{\ELSE}
j \.{-} 1 ]
\end{tlatex}
\end{display}
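The index shifts performed by $DomInjSend$ and $DomInjUndo(i)$ can be illustrated with Python dictionaries mapping old positions to new ones. This is a sketch of our own; indices are 1-based as in \tlaplus, and `len_y` stands for $Len(y)$:

```python
def dom_inj_send(len_y):
    # DomInjSend == [i \in 2..Len(y) |-> i-1]
    return {i: i - 1 for i in range(2, len_y + 1)}

def dom_inj_undo(i, len_y):
    # DomInjUndo(i) == [j \in 1..Len(y) \ {i} |-> IF j < i THEN j ELSE j-1]
    return {j: (j if j < i else j - 1)
            for j in range(1, len_y + 1) if j != i}

# With Len(y) = 4: after a Send step, old position 3 becomes new position 2;
# after Undo(2), old position 1 stays put and old position 4 becomes 3.
print(dom_inj_send(4))      # {2: 1, 3: 2, 4: 3}
print(dom_inj_undo(2, 4))   # {1: 1, 3: 2, 4: 3}
```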
For the prophecy array variable described in Section~\ref{sec:proph-array}
above, the function $DomInj_{A}$ maps each element $d$ in $Dom$ that is
also in $Dom'$ to itself. Thus, for each subaction $A$ used in defining
a prophecy array variable, we can define
\[ DomInj_{A} == [d \in Dom \cap Dom' |-> d]
\]
A function $f$ such that $f[x]=x$ for all $x\in\DOMAIN f$ is called an
\emph{identity} function. For convenience, the $Prophecy$ module
defines $IdFcn(S)$ to be the identity function with domain~$S$. But
we won't bother using it here. The $Prophecy$ module also defines
$EmptyFcn$ to be the (unique) function whose domain is the empty set.
Let us return to our prophecy data structure example. For a subaction
$A$, we can define $NewPSet_{A}$ in terms of $DomInj_{A}$ and the
subset $PredDom_{A}$ of $Dom$, which consists of the elements in $Dom$
such that $p[d]$ makes a prediction about $A$. We define these sets
$PredDom_{A}$ in module $SendSeqUndoP$ for our four subactions as
follows:
\begin{display}
\begin{notla}
PredDomChoose == {}
PredDomSend == {1}
PredDomRcv == {}
PredDomUndo(i) == {i}
\end{notla}
\begin{tlatex}
\@x{ PredDomChoose\@s{4.35} \.{\defeq} \{ \}
\@x{ PredDomSend\@s{13.78} \.{\defeq} \{ 1 \}
\@x{ PredDomRcv\@s{18.71} \.{\defeq} \{ \}
\@x{ PredDomUndo ( i ) \.{\defeq} \{ i \}
\end{tlatex}
\end{display}
(Since $PredDom_{Undo(i)}$ depends on the identifier $i$ in its
context, we must define $PredDomUndo$ to have a parameter.)
We can define $NewPSet_{A}$ to equal the set of all functions $q$ in
$[Dom'->\Pi]$ such that for every element $d$ in $Dom$ that is not in
$PredDom_{A}$ and has a corresponding element $DomInj_{A}[d]$ in
$Dom'$, the value of $q$ on that corresponding element equals
$p[d]$. More precisely, $NewPSet_{A}$ equals:
\[\begin{noj}
\{\,q \in [Dom' -> \Pi] : \\ \s{2}
\A d \in (\DOMAIN DomInj_{A}) \,\backslash\, PredDom_{A} :
q[DomInj_{A}[d]] = p[d] \,\}
\end{noj}
\]
We encapsulate definitions like this in a module $Prophecy$. We find
it most convenient to make this a constant module with constant
parameters $Pi$, $Dom$, and $DomPrime$. This module is meant to be
instantiated with the parameter $Pi$ instantiated by $\Pi$, with the
parameter $Dom$ instantiated by the appropriate state function $Dom$,
and with $DomPrime$ instantiated by $Dom'$. The following definition
from module $Prophecy$ allows us to define $NewPSet_{A}(p)$
to equal
\tlabox{NewPSet(p, DomInj_{A}, PredDom_{A})}:
\begin{display}
\begin{notla}
NewPSet(p, DomInj, PredDom) ==
{ q \in [DomPrime -> Pi] :
\A d \in (DOMAIN DomInj) \ PredDom : q[DomInj[d]] = p[d] }
\end{notla}
\begin{tlatex}
\@x{ NewPSet ( p ,\, DomInj ,\, PredDom ) \.{\defeq}
\@x{\@s{8.2} \{\@s{4.1} q \.{\in} [ DomPrime \.{\rightarrow} Pi ] \.{:}
\@x{\@s{30.98} \A\, d \.{\in} ( {\DOMAIN} DomInj ) \.{\,\backslash\,} PredDom
\.{:} q [ DomInj [ d ] ] \.{=} p [ d ]\@s{4.1} \}
\end{tlatex}
\end{display}
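Rather than enumerating the set, one can model membership of $q$ in \tlabox{NewPSet(p, DomInj, PredDom)} as a Python predicate. In this sketch (our own illustration), functions are dictionaries and `Pi`, `dom_prime` are finite sets:

```python
def in_new_pset(q, p, dom_inj, pred_dom, dom_prime, Pi):
    """Model of q \in NewPSet(p, DomInj, PredDom)."""
    # q must be a function in [DomPrime -> Pi] ...
    if set(q) != set(dom_prime) or not all(v in Pi for v in q.values()):
        return False
    # ... that agrees with p on corresponding, non-predicting elements.
    return all(q[dom_inj[d]] == p[d]
               for d in dom_inj if d not in pred_dom)

# After a Send step with p = <<"send", "undo">>, the new value q must
# satisfy q[1] = "undo", since old position 2 carries its prediction over.
p = {1: "send", 2: "undo"}
dom_inj = {2: 1}            # DomInjSend with Len(y) = 2
print(in_new_pset({1: "undo"}, p, dom_inj, {1}, {1}, {"send", "undo"}))  # True
print(in_new_pset({1: "send"}, p, dom_inj, {1}, {1}, {"send", "undo"}))  # False
```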
\begin{sloppypar} \noindent
For each action $A$ in our disjunctive decomposition of $NextU$, we
have written the definitions of $Pred_{A}$, $DomInj_{A}$, and
$PredDom_{A}$. This allows us to define $NewPSet_{A}$, and therefore,
by (\ref{eq:ApDefFcn}), to define $A^{p}$. The following operator
$ProphAction$ from the $Prophecy$ module allows us to write $A^{p}$
as
\tlabox{ProphAction(A, p, p', DomInj_{A}, PredDom_{A}, Pred_{A})}:
\end{sloppypar}
\begin{display}
\begin{notla}
ProphAction(A, p, pPrime, DomInj, PredDom, Pred(_)) ==
A /\ Pred(p) /\ (pPrime \in NewPSet(p, DomInj, PredDom))
\end{notla}
\begin{tlatex}
\@x{ ProphAction ( A ,\, p ,\, pPrime ,\, DomInj ,\, PredDom ,\,\@s{4.1} Pred
( \_ ) ) \.{\defeq}
\@x{\@s{16.4} A\@s{4.1} \.{\land}\@s{4.1} Pred ( p )\@s{4.1}
\.{\land}\@s{4.1} ( pPrime \.{\in} NewPSet ( p ,\, DomInj ,\, PredDom ) )
\end{tlatex}
\end{display}
In module $SendSeqUndoP$, we can define
\begin{display}
\begin{notla}
ChooseP == ProphAction(Choose, p, p', DomInjChoose,
PredDomChoose, PredChoose)
\end{notla}
\begin{tlatex}
\@x{ ChooseP \.{\defeq} ProphAction ( Choose ,\, p ,\, p \.{'} ,\,
DomInjChoose ,\,
\@x{\@s{116.48} PredDomChoose ,\, PredChoose )
\end{tlatex}
\end{display}
The definitions of $SendP$ and $RcvP$ are similar. However, there is
a problem with the definition of $Undo(i)^{p}$, which we write as
$UndoP(i)$. Operator $ProphAction$ requires its last argument, which
represents $Pred_{A}$, to be an operator with a single argument.
However, we defined $PredUndo$ to have two arguments: $p$ and its
context identifier $i$. Since we are defining $UndoP(i)$, the last
argument has to be an operator $Op$ so that $Op(p)$ equals
$PredUndo(p,i)$. So, we should define:
\begin{display}
\begin{notla}
UndoP(i) == LET Op(j) == PredUndo(j, i)
IN ProphAction(Undo(i), p, p', DomInjUndo(i),
PredDomUndo(i), Op)
\end{notla}
\begin{tlatex}
\@x{ UndoP ( i ) \.{\defeq} \.{\LET} Op ( j ) \.{\defeq} PredUndo ( j ,\, i
)
\@x{\@s{61.83} \.{\IN} ProphAction ( Undo ( i ) ,\, p ,\, p \.{'} ,\,
DomInjUndo ( i ) ,\,
\@x{\@s{141.36} PredDomUndo ( i ) ,\, Op )
\end{tlatex}
\end{display}
Using the \tlaplus\ \textsc{lambda} construct (added since
\emph{Specifying Systems} was published), this can also be written
as:
\begin{display}
\begin{notla}
UndoP(i) ==
ProphAction( Undo(i), p, p', DomInjUndo(i), PredDomUndo(i),
LAMBDA j : PredUndo(j, i))
\end{notla}
\begin{tlatex}
\@x{ UndoP ( i ) \.{\defeq}
\@x{\@s{16.4} ProphAction (\, Undo ( i ) ,\, p ,\, p \.{'} ,\, DomInjUndo ( i )
,\, PredDomUndo ( i ) ,\,
\@x{\@s{75.52} \, {\LAMBDA} j \.{:} PredUndo ( j ,\, i ) \,)
\end{tlatex}
\end{display}
It's now straightforward to complete our definition of specification
$SpecUP$. Doing so, gathering up the definitions made or implied
so far, and rearranging them a bit, we get the beginning of module
$SendSeqUndoP$ shown in
\lref{targ:SendSeqUndoP}{Figure~\ref{fig:SendSeqUndoP}}.
\begin{figure} \target{targ:SendSeqUndoP}
\begin{notla}
--------------------------- MODULE SendSeqUndoP ---------------------------
EXTENDS SendSeqUndo
Pi == {"send", "undo"}
Dom == DOMAIN y
INSTANCE Prophecy WITH DomPrime <- Dom'
PredDomChoose == {}
DomInjChoose == [d \in Dom |-> d]
PredChoose(p) == TRUE
PredDomSend == {1}
DomInjSend == [i \in 2..Len(y) |-> i-1]
PredSend(p) == p[1] = "send"
PredDomRcv == {}
DomInjRcv == [d \in Dom |-> d]
PredRcv(p) == TRUE
PredDomUndo(i) == {i}
DomInjUndo(i) == [j \in 1..Len(y) \ {i} |-> IF j < i THEN j ELSE j-1]
PredUndo(p, i) == p[i] = "undo"
-----------------------------------------------------------------------
VARIABLE p
varsP == <<vars, p>>
InitUP == Init /\ (p \in [Dom -> Pi])
ChooseP == ProphAction(Choose, p, p',
DomInjChoose, PredDomChoose, PredChoose)
SendP == ProphAction(Send, p, p', DomInjSend, PredDomSend, PredSend)
RcvP == ProphAction(Rcv, p, p', DomInjRcv, PredDomRcv, PredRcv)
UndoP(i) == ProphAction(Undo(i), p, p', DomInjUndo(i), PredDomUndo(i),
LAMBDA j : PredUndo(j, i))
NextUP == ChooseP \/ SendP \/ RcvP \/ (\E i \in 1..Len(y) : UndoP(i))
SpecUP == InitUP /\ [][NextUP]_varsP
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} SendSeqUndoP}\moduleRightDash\@xx{
\@x{ {\EXTENDS} SendSeqUndo
\@pvspace{8.0pt
\@x{ Pi \.{\defeq} \{\@w{send} ,\,\@w{undo} \}
\@x{ Dom \.{\defeq} {\DOMAIN} y
\@pvspace{8.0pt
\@x{ {\INSTANCE} Prophecy {\WITH} DomPrime \.{\leftarrow} Dom \.{'}
\@pvspace{8.0pt
\@x{ PredDomChoose \.{\defeq} \{ \}
\@x{ DomInjChoose\@s{7.14} \.{\defeq} [ d \.{\in} Dom \.{\mapsto} d ]
\@x{ PredChoose ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@pvspace{8.0pt
\@x{ PredDomSend \.{\defeq} \{ 1 \}
\@x{ DomInjSend\@s{7.14} \.{\defeq} [ i \.{\in} 2 \.{\dotdot} Len ( y )
\.{\mapsto} i \.{-} 1 ]
\@x{ PredSend ( p )\@s{7.31} \.{\defeq} p [ 1 ] \.{=}\@w{send}
\@pvspace{8.0pt
\@x{ PredDomRcv \.{\defeq} \{ \}
\@x{ DomInjRcv\@s{7.14} \.{\defeq} [ d \.{\in} Dom \.{\mapsto} d ]
\@x{ PredRcv ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@pvspace{8.0pt
\@x{ PredDomUndo ( i ) \.{\defeq} \{ i \}
\@x{ DomInjUndo ( i )\@s{7.14} \.{\defeq} [ j \.{\in} 1 \.{\dotdot} Len ( y )
\.{\,\backslash\,} \{ i \} \.{\mapsto} {\IF} j \.{<} i \.{\THEN} j \.{\ELSE}
j \.{-} 1 ]
\@x{ PredUndo ( p ,\, i )\@s{8.98} \.{\defeq} p [ i ] \.{=}\@w{undo}
\vspace{-2pt
\@x{}\midbar\@xx{
\vspace{-2pt
\@x{ {\VARIABLE} p
\@x{ varsP \.{\defeq} {\langle} vars ,\, p {\rangle}
\@pvspace{8.0pt
\@x{ InitUP \.{\defeq} Init \.{\land} ( p \.{\in} [ Dom \.{\rightarrow} Pi ]
)
\@pvspace{8.0pt
\@x{ ChooseP \.{\defeq} ProphAction ( Choose ,\, p ,\, p \.{'} ,\,
\@x{\@s{120.58} DomInjChoose ,\, PredDomChoose ,\, PredChoose )
\@pvspace{8.0pt
\@x{ SendP \.{\defeq} ProphAction ( Send ,\, p ,\, p \.{'} ,\, DomInjSend ,\,
PredDomSend ,\, PredSend )
\@pvspace{8.0pt
\@x{ RcvP \.{\defeq} ProphAction ( Rcv ,\, p ,\, p \.{'} ,\, DomInjRcv ,\,
PredDomRcv ,\, PredRcv )
\@pvspace{8.0pt
\@x{ UndoP ( i ) \.{\defeq} ProphAction ( Undo ( i ) ,\, p ,\, p \.{'} ,\,
DomInjUndo ( i ) ,\, PredDomUndo ( i ) ,\,
\@x{\@s{120.96} {\LAMBDA} j \.{:} PredUndo ( j ,\, i ) )
\@pvspace{8.0pt
\@x{ NextUP \.{\defeq} ChooseP \.{\lor} SendP \.{\lor} RcvP \.{\lor} ( \E\, i
\.{\in} 1 \.{\dotdot} Len ( y ) \.{:} UndoP ( i ) )
\@pvspace{8.0pt
\@x{ SpecUP\@s{1.08} \.{\defeq} InitUP \.{\land} {\Box} [ NextUP ]_{ varsP}
\end{tlatex}
\caption{Specification \emph{SpecUP}.} \label{fig:SendSeqUndoP}
\end{figure}
Finally, we have to define the refinement mapping under which $SpecUP$
implements specification $Spec$ of module $SendSeq$. The idea is
simple: we let \ov{y} be the subsequence of $y$ containing only those
elements for which the corresponding element of the sequence $p$
equals $"send"$. The following formal definition is a bit tricky.
It uses a local recursive definition of an operator $R$ such that
if $yseq$ is any sequence and $pseq$ is a sequence of the same length,
then $R(yseq, pseq)$ is the subsequence of $yseq$ that contains
$yseq[i]$ iff $pseq[i]$ equals $"send"$.
\begin{display}
\begin{notla}
yBar ==
LET RECURSIVE R(_, _)
R(yseq, pseq) ==
IF yseq = << >>
THEN yseq
ELSE IF Head(pseq) = "send"
THEN <<Head(yseq)>> \o R(Tail(yseq), Tail(pseq))
ELSE R(Tail(yseq), Tail(pseq))
IN R(y, p)
\end{notla}
\begin{tlatex}
\@x{ yBar \.{\defeq}
\@x{\@s{12.29} \.{\LET} {\RECURSIVE} R ( \_ ,\, \_ )
\@x{\@s{32.79} R ( yseq ,\, pseq ) \.{\defeq}
\@x{\@s{52.46} {\IF} yseq \.{=} {\langle}\, {\rangle}
\@x{\@s{60.66} \.{\THEN} yseq
\@x{\@s{60.66} \.{\ELSE} {\IF} Head ( pseq ) \.{=}\@w{send}
\@x{\@s{100.18} \.{\THEN} {\langle} Head ( yseq ) {\rangle} \.{\circ} R (
Tail ( yseq ) ,\, Tail ( pseq ) )
\@x{\@s{100.18} \.{\ELSE} R ( Tail ( yseq ) ,\, Tail ( pseq ) )
\@x{\@s{12.29} \.{\IN} R ( y ,\, p )
\end{tlatex}
\end{display}
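The recursion in the definition of $yBar$ is the familiar pattern of filtering one sequence by a parallel sequence. As an informal sketch in Python (not part of the \tlaplus\ modules):

```python
def y_bar(yseq, pseq):
    """Model of the recursive operator R: keep yseq[i] iff pseq[i] = "send".
    Assumes pseq has the same length as yseq."""
    if not yseq:
        return []
    head = [yseq[0]] if pseq[0] == "send" else []
    return head + y_bar(yseq[1:], pseq[1:])

print(y_bar(["a", "b", "c"], ["send", "undo", "send"]))  # ['a', 'c']
```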
We then instantiate module $SendSeq$ and state as follows the theorem
asserting that $SpecUP$ implements formula $Spec$ of that module under
the refinement mapping.
\begin{display}
\begin{notla}
SS == INSTANCE SendSeq WITH y <- yBar
THEOREM SpecUP => SS!Spec
\end{notla}
\begin{tlatex}
\@x{ SS \.{\defeq} {\INSTANCE} SendSeq {\WITH} y \.{\leftarrow} yBar
\@pvspace{2.0pt
\@x{ {\THEOREM} SpecUP \.{\implies} SS {\bang} Spec
\end{tlatex}
\end{display}
TLC can check this theorem in the usual way.
\subsection{Checking the Definitions}
We have shown how to define a specification $Spec^{p}$ for an
arbitrary specification $Spec$ by defining a state function $Dom$ and,
for every subaction $A$ of a disjunctive representation of the
next-state action of $Spec$, defining $Pred_{A}$, $DomInj_{A}$, and
$PredDom_{A}$. These definitions must satisfy certain conditions
to ensure that \tlabox{\EE p:Spec^{p}} is equivalent to $Spec$.
We now state those conditions.
The first condition is (\ref{eq:ProphFcn}). The $Prophecy$ module
defines this operator:
\begin{display}
\begin{notla}
ExistsGoodProphecy(Pred(_)) == \E q \in [Dom -> Pi] : Pred(q)
\end{notla}
\begin{tlatex}
\@x{ ExistsGoodProphecy ( Pred ( \_ ) ) \.{\defeq} \E\, q \.{\in} [ Dom
\.{\rightarrow} Pi ] \.{:} Pred ( q )
\end{tlatex}
\end{display}
For a subaction $A$ with an empty context, we can write
(\ref{eq:ProphFcn}) as
\[ Spec\; => \; [][
     A => ExistsGoodProphecy(Pred_{A})]_{vars}
\]
(Remember that the $Prophecy$ module will be instantiated with the
appropriate expression substituted for $Dom$.)
To see how this
definition is used if $A$ has a non-empty context, here is how
condition (\ref{eq:ProphFcn}) is expressed for the subaction
$Undo(i)$ of specification $SpecU$ in our $SendSeq$ example:
\begin{display}
\begin{notla}
SpecU => [][\,\A i \in Dom :
Undo(i) =>
ExistsGoodProphecy( LAMBDA p : PredUndo(p,i))]_vars
\end{notla}
\begin{tlatex}
\@x{ SpecU\@s{4.1}
\.{\implies}\@s{4.1} {\Box} [\, \A\, i \.{\in} Dom \.{:}
\@x{\@s{70.18} Undo ( i ) \.{\implies}
\@x{\@s{78.38} ExistsGoodProphecy ( {\LAMBDA} p \.{:} PredUndo ( p ,\, i ) )
\, ]_{ vars}
\end{tlatex}
\end{display}
The only condition we require of $DomInj_{A}$ is that it be a partial
injection from $Dom$ to $Dom'$. This is expressed as
$IsDomInj(DomInj_{A})$ using this definition from module
$Prophecy$:
\begin{display}
\begin{notla}
IsDomInj(DomInj) == DomInj \in PartialInjections(Dom, DomPrime)
\end{notla}
\begin{tlatex}
\@x{ IsDomInj ( DomInj ) \.{\defeq} DomInj \.{\in} PartialInjections ( Dom
,\, DomPrime )
\end{tlatex}
\end{display}
As with the $ExistsGoodProphecy$ condition, it needs to hold only
for $A$ steps in a behavior satisfying the specification $Spec$.
Hence the general requirement on $DomInj_{A}$ for an action $A$ with
context $<<\textbf{k};\textbf{K}>>$ is
\[ Spec\; =>
\; [][\A <<\textbf{k};\textbf{K}>> :
A => IsDomInj(DomInj_{A})]_{vars}
\]
Because $IsDomInj$ does not have an operator argument, no local
definition or \textsc{lambda} expression is needed even if the context
is nonempty. For example, if $A$ is subaction $Undo(i)$ of
specification $SpecU$ of the $SendSeq$ example, this condition is
written:
\[ \A i \in Dom : Undo(i) => IsDomInj(DomInjUndo(i))
\]
Finally, we need a condition on $PredDom_{A}$. Remember that
$PredDom_{A}$ should equal the set of elements $d$ of $Dom$ such
that $p[d]$ is making predictions about $A$. Actually, it suffices
that $PredDom_{A}$ contain all such elements. (It may contain
other elements as well.) This is equivalent to
the requirement that any element not in $PredDom_{A}$ does not make a
prediction about $A$. Making a prediction about $A$ means affecting
the value of $Pred_{A}$, so not making a prediction means not affecting
its value. Thus, $p[d]$ does not make a prediction about
$A$ iff setting $p[d]$ to any value in $\Pi$ does not change the
value of $Pred_{A}$. You should be able to convince yourself
that the value of $Pred_{A}$ does not depend
on the value of $p[d]$ for any $d$ not in $PredDom_{A}$ iff the
following formula is true:
\[ \begin{noj}
\A q, r \in [Dom -> Pi] : \\ \s{1}
(\A d \in PredDom_{A} : q[d] = r[d]) => (Pred_{A}(q) = Pred_{A}(r))
\end{noj}
\]
In addition to this requirement, to ensure that our formulas make
sense, we also make the obvious requirement that $PredDom_{A}$ is a
subset of $Dom$. The following definition appears in module $Prophecy$.
\begin{display}
\begin{notla}
IsPredDom(PredDom, Pred(_)) ==
/\ PredDom \subseteq Dom
/\ \A q, r \in [Dom -> Pi] :
(\A d \in PredDom : q[d] = r[d]) => (Pred(q) = Pred(r))
\end{notla}
\begin{tlatex}
\@x{ IsPredDom ( PredDom ,\, Pred ( \_ ) ) \.{\defeq}
\@x{\@s{8.2} \.{\land} PredDom \.{\subseteq} Dom
\@x{\@s{8.2} \.{\land} \A\, q ,\, r \.{\in} [ Dom \.{\rightarrow} Pi ] \.{:}
\@x{\@s{36.11} ( \A\, d \.{\in} PredDom \.{:} q [ d ] \.{=} r [ d ] )
\.{\implies} ( Pred ( q ) \.{=} Pred ( r ) )
\end{tlatex}
\end{display}
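For small finite $Dom$ and $Pi$, the second conjunct of $IsPredDom$ can be checked by brute force, mirroring the quantification over $[Dom -> Pi]$. In this illustrative sketch (our own), `pred` plays the role of $Pred$ and functions are dictionaries:

```python
from itertools import product

def is_pred_dom(pred_dom, pred, dom, Pi):
    """Brute-force model of IsPredDom(PredDom, Pred(_))."""
    dom, Pi = list(dom), list(Pi)
    if not set(pred_dom) <= set(dom):      # PredDom \subseteq Dom
        return False
    fcns = [dict(zip(dom, vals)) for vals in product(Pi, repeat=len(dom))]
    # Any two functions agreeing on PredDom must give Pred the same value.
    return all(pred(q) == pred(r)
               for q in fcns for r in fcns
               if all(q[d] == r[d] for d in pred_dom))

# PredSend looks only at p[1], so {1} is a valid prediction domain for
# it, while the empty set is not (here with Len(y) = 2).
dom, Pi = [1, 2], ["send", "undo"]
pred_send = lambda p: p[1] == "send"
print(is_pred_dom({1}, pred_send, dom, Pi))     # True
print(is_pred_dom(set(), pred_send, dom, Pi))   # False
```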
Remembering that the condition on $PredDom_{A}$ needs to hold only for
$A$ steps in a behavior satisfying the specification, we can express
it for an action $A$ with an empty context as:
\[ Spec \;=>\; [][A => IsPredDom(PredDom_{A}, Pred_{A})]_{vars}
\]
Because the second argument of $IsPredDom$ is an operator argument,
we again need to use a local definition or a \textsc{lambda} expression
to express the condition if the subaction $A$ has a nonempty context.
For example, here is how we can express it for the $Undo(i)$
subaction in our $SendSeq$ example using a local definition:
\begin{display}
\begin{notla}
[][\A i \in Dom :
Undo(i) => LET Op(p) == PredUndo(p, i)
IN IsPredDom(PredDomUndo, Op)]_vars
\end{notla}
\begin{tlatex}
\@x{ {\Box} [\, \A\, i \.{\in} Dom \.{:}
\@x{\@s{17.47} Undo ( i ) \,\.{\implies} \, \.{\LET} Op ( p ) \.{\defeq} PredUndo
( p ,\, i )
\@x{\@s{68.79} \,\, \.{\IN} IsPredDom ( PredDomUndo ,\, Op ) \,]_{ vars}
\end{tlatex}
\end{display}
Here is how it is written with a \textsc{lambda} expression:
\begin{display}
\begin{notla}
[][\A i \in Dom :
Undo(i) => IsPredDom(PredDomUndo,
LAMBDA p : PredUndo(p, i))]_vars
\end{notla}
\begin{tlatex}
\@x{ {\Box} [ \,\A\, i \.{\in} Dom \.{:}
\@x{\@s{21.57} Undo ( i ) \,\.{\implies}\, IsPredDom (\, PredDomUndo ,\,
\@x{\@s{126.02} \,\,\,{\LAMBDA} p \.{:} PredUndo ( p ,\, i ) \,) \,]_{ vars}
\end{tlatex}
\end{display}
The following definition from module $Prophecy$ allows us
to combine the three conditions.
\begin{display}
\begin{notla}
ProphCondition(A, DomInj, PredDom, Pred(_)) ==
A => /\ ExistsGoodProphecy(Pred)
/\ IsDomInj(DomInj)
/\ IsPredDom(PredDom, Pred)
\end{notla}
\begin{tlatex}
\@x{ ProphCondition ( A ,\, DomInj ,\, PredDom ,\,\@s{4.1} Pred ( \_ )
)\@s{4.1} \.{\defeq}
\@x{\@s{20.5} A\@s{4.1} \.{\implies}\@s{4.1} \.{\land} ExistsGoodProphecy (
Pred )
\@x{\@s{51.68} \.{\land} IsDomInj ( DomInj )
\@x{\@s{51.68} \.{\land} IsPredDom ( PredDom ,\, Pred )
\end{tlatex}
\end{display}
Using this definition, the requirements on the definitions are
expressed for specification $SendSeqUndo$ in
\lref{targ:proph-cond}{Figure~\ref{fig:proph-cond}}.
\begin{figure} \target{targ:proph-cond}
\begin{notla}
Condition ==
/\ ProphCondition(Choose, DomInjChoose, PredDomChoose, PredChoose)
/\ ProphCondition(Send, DomInjSend, PredDomSend, PredSend)
/\ ProphCondition(Rcv, DomInjRcv, PredDomRcv, PredRcv)
/\ \A i \in Dom :
ProphCondition(Undo(i), DomInjUndo(i), PredDomUndo(i),
LAMBDA p : PredUndo(p, i))
THEOREM SpecU => [][Condition]_vars
\end{notla}
\begin{tlatex}
\@x{ Condition \.{\defeq}
\@x{\@s{8.2} \.{\land} ProphCondition ( Choose ,\, DomInjChoose ,\,
PredDomChoose ,\, PredChoose )
\@x{\@s{8.2} \.{\land} ProphCondition ( Send ,\,\@s{9.4} DomInjSend
,\,\@s{9.4} PredDomSend ,\,\@s{9.4} PredSend )
\@x{\@s{8.2} \.{\land} ProphCondition ( Rcv ,\,\@s{14.2} DomInjRcv
,\,\@s{14.3} PredDomRcv ,\,\@s{14.2} PredRcv )
\@x{\@s{8.2} \.{\land} \A\, i \.{\in} Dom \.{:}
\@x{\@s{30.63} ProphCondition ( Undo ( i ) ,\, DomInjUndo ( i ) ,\,
PredDomUndo ( i ) ,\,
\@x{\@s{104.29} \,{\LAMBDA} p \.{:} PredUndo ( p ,\, i ) )
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecU \.{\implies} {\Box} [ Condition ]_{ vars}
\end{tlatex}
\caption{Action requirements for specification \emph{SendSeqUndo}.}
\label{fig:proph-cond}
\end{figure}
We encounter the same problem here that we encountered in checking
condition (\ref{eq:ProphFcn}) for the $SendSet$ example. We would
like to put these requirements in module $SendSeqUndoP$
(\lref{targ:SendSeqUndoP}{Figure~\ref{fig:SendSeqUndoP}}), right
before the declaration of the variable $p$. However, TLC can't check
the theorem in a model for that specification because $SpecU$ does not
specify the values of variable $p$. We can either move all the
definitions that are now in $SendSeqUndoP$ before the declaration of
$p$ into a separate module, put them at the end of module
$SendSeqUndo$, or end the module before the declaration of $p$
by adding ``\verb|=====|'' when checking the condition.
You should check these conditions when adding a prophecy variable.
They provide a good way to debug your definitions, before you try
checking that the specification with prophecy variable implements the
desired specification.
\subsection{Liveness}
As with our other auxiliary variables, we add a prophecy variable to
the safety part of a specification, but we keep the liveness part the
same. As we remarked in Section~\ref{sec:history-liveness}, this
produces unusual specifications in which the liveness property can
assert a fairness condition about an action that isn't a subaction of
the next-state action.
For history variables, although the form of the specifications is
unusual, the specifications are not. This is not the case for
prophecy variables. If $Spec$ has a liveness condition, the
specification $Spec^{p}$ obtained from it by adding a prophecy
variable can be weird. As an example, suppose that in the $SendInt$
specifications of Section~\ref{sec:simple-proph}, instead of taking $Pi$ to
equal $Int$, we let it equal $Int \cup \{\infty\}$ for some value
$\infty\notin Int$. Everything we did would work exactly as before,
and the theorem at the end of module $SendInt1P$ in
\lref{targ:SendInt1P}{Figure~\ref{fig:SendInt1P}} would still be true.
If a $SendP$ step set $p'$ to $\infty$, predicting that the next value
to be sent is $\infty$, then the system would simply halt (stutter
forever) before the next $SendP$ step because $Send /\ PredSend(\infty)$
equals \FALSE.
Now suppose we add a liveness condition $\WF_{vars}(Next)$ to
our $SendInt$ specifications, requiring that they never halt.
We would then have
\[ SpecP == InitP /\ [][NextP]_{varsP} /\ \WF_{vars}(Next)\]
This formula $SpecP$ implies that infinitely many $Next$ steps must
occur, so a behavior can't halt. The weak fairness conjunct therefore
requires that $SpecP$ not set $p'$ to $\infty$. This is weird.
The liveness property $\WF_{vars}(Next)$ doesn't just require that
something must eventually happen; it also prevents something (setting
$p$ to $\infty$) from ever happening. The technical term for this
weirdness is that the formula $SpecP$ is \emph{not
machine closed}~\cite{abadi:existence}, which means that its liveness
property affects safety as well as liveness.
Non-machine closed specs should never be used to describe \emph{how} a
system works. You can't understand how to implement a system if the
next-state action doesn't describe what it can and cannot do next. In
\emph{extremely} rare cases, a non-machine closed high-level spec is
the best way to describe \emph{what} a system should do, rather than
how it should do it. You are very unlikely to encounter such a
situation in practice.
While the non-machine closed spec we get by adding a prophecy variable
to a spec with liveness can be weird, this weirdness causes no
problem. We don't have to implement $SpecP$. We use it only to check
the correctness of $Spec$; and the presence of the liveness property
makes no difference in what we do. With our modified $SendInt$
specifications, we can check that $SpecP$ implies $SI2!Spec$ exactly as
we did before.
\section{Stuttering Variables}
\subsection{Adding Stuttering Steps to a Simple Action}
Suppose $Spec_{1}$ is a specification of a (24-hour) clock that
displays only the hour---a specification we can write as
\begin{equation}
Spec_{1} == (h = 0) /\ [][h' = (h+1)\,\%\, 24]_{h}
\NOTLA \label{eq:HourClock}
\end{equation}
Let $Spec_{2}$ be this specification of an hour-minute clock:
\begin{equation}
Spec_{2} == \begin{conj}
(h=0) /\ (m=0) \\
[][ \;\begin{conj}
m' = (m+1)\,\%\,60 \\
h' = \IF m'=0 \THEN (h+1)\,\%\,24 \LSE h \;]_{<<h,m>>}
\end{conj}
\end{conj}
\NOTLA \label{eq:HourMinClock}
\end{equation}
If we ignore the variable $m$ in $Spec_{2}$, then we get a clock that
displays only the hour. Thus, \tlabox{\EE m:Spec_{2}} should be
equivalent to $Spec_{1}$. (The 59 steps each hour that change only
$m$ are stuttering steps that are allowed by $Spec_{1}$.) It's easy to
see that $Spec_{2}$ implies $Spec_{1}$, so \tlabox{\EE m:Spec_{2}}
implies $Spec_{1}$. There is no refinement mapping with which we can
prove that $Spec_{1}$ implies \tlabox{\EE m:Spec_{2}}. To construct
the necessary refinement mapping we need to add an auxiliary variable
$s$ to $Spec_{1}$ to obtain a specification $Spec_{1}^{s}$ that adds
59 steps that change only $s$ to each step that increments $h$. Such
an auxiliary variable is called a \emph{stuttering} variable because
it changes $Spec_{1}$ only by requiring it to add steps that leave
the variable $h$ of $Spec_{1}$ unchanged.
To add such a variable $s$ to a specification $Spec$ to form
$Spec^{s}$, we let the next-state action of $Spec^{s}$ take
``normal'' steps that satisfy the next-state action of $Spec$ when $s$
equals $\top$ (usually read ``top''), which is some value that is not
a positive integer. The value of $s$ in the initial state equals $\top$.
When $s$ is set to a positive integer, the
specification $Spec^{s}$ allows only stuttering steps that decrement
$s$, leaving the variables of $Spec$ unchanged. When $s$ counts down
to zero, it is set equal to $\top$ again. We add these stuttering
steps before and/or after steps of some particular subaction of the
next-state action. Here, we assume that this is a ``simple''
subaction, meaning that its context is empty.
Suppose we want the specification $Spec^{s}$ to add stuttering steps
to each step of a particular subaction. We replace each
subaction $A$ by the action $A^{s}$ defined as follows. For each $A$
other than that particular subaction, we define $A^{s}$ by:
\begin{equation}
A^{s} == (s = \top) \,/\ \, A \,/\ \, (s'=s)
\NOTLA\label{eq:As}
\end{equation}
To add $initVal$ stuttering steps after a step of an
action $A$, for a positive
integer $initVal$ (whose value may depend on the variables of $Spec$),
we define
\[
A^{s} == \begin{noj}
\IF{s = \top} \\ \s{.5}\begin{noj}
\THEN A \,/\ \, (s'=initVal) \\
\ELSE \begin{conj}
vars'=vars \\
s' = \IF{s=1}\THEN \top \LSE s-1
\end{conj}
\end{noj}
\end{noj}
\]
We can generalize
this by replacing the set of natural numbers
with an arbitrary set $\Sigma$ having
a well-founded partial order $\prec$ with smallest element $\bot$ (read
``bottom'')\footnote{
This means that $\bot \prec \sigma$ for all $\sigma\in\Sigma$,
and any decreasing chain $\sigma_{1} \succ \sigma_{2} \succ \ldots$
of elements in $\Sigma$
must be finite.},
letting $initVal$ be an arbitrary element of $\Sigma$, replacing
$s=1$ with $s=\bot$, and replacing $s-1$ by $decr(s)$ for some
operator $decr$ such that \tlabox{decr(s)\prec s} for
all $s\in \Sigma$. The generalization is:
\begin{equation}
A^{s} == \begin{noj}
\IF{s = \top} \\ \s{.5}\begin{noj}
\THEN A \,/\ \, (s'=initVal) \\
\ELSE \begin{conj}
vars'=vars \\
s' = \IF{s=\bot}\THEN \top \LSE decr(s)
\end{conj}
\end{noj}
\end{noj}
\NOTLA \label{eq:A0sPost}
\end{equation}
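The countdown behavior of (\ref{eq:A0sPost}) can be sketched in Python for the instance where $\Sigma$ is the set of positive integers, $decr(s) = s-1$, and the countdown ends at $s = 1$. In this illustrative model (in the \tlaplus\ definition, $initVal$ may depend on the state; here it is a fixed argument):

```python
TOP = "top"   # stands for the value "top"

def a_s_step(s, init_val):
    """One step of A^s: returns (did_A, s')."""
    if s == TOP:
        return True, init_val                    # an A step starts the countdown
    else:
        return False, TOP if s == 1 else s - 1   # a stuttering step

# Starting from TOP with init_val = 3, an A step is followed by exactly
# 3 stuttering steps before the next A step is possible.
s, steps = TOP, []
for _ in range(5):
    did_a, s = a_s_step(s, 3)
    steps.append(did_a)
print(steps)  # [True, False, False, False, True]
```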
We can add $initVal$ stuttering steps before an $A$ step, rather than
after it, as follows. The stuttering steps should only be taken when
they can be followed by an $A$ step, which is the case only when an $A$
step is enabled. Remembering that $\ENABLED A$ is the state
predicate that is true iff an $A$ step is enabled, the stuttering
steps can be added with this definition of $A^{s}$:
\begin{equation}
A^{s} ==
\begin{conj}
\ENABLED A\\
\IF{s = \bot} \begin{noj}
\THEN A \,/\ \, (s'=\top) \\
\ELSE \begin{conj}
vars' = vars \\
s' = \IF{s=\top}\THEN initVal\LSE decr(s)
\end{conj}
\end{noj}
\end{conj}
\NOTLA \label{eq:A0sPre}
\end{equation}
We could generalize (\ref{eq:A0sPost}) and (\ref{eq:A0sPre}) to allow
putting stuttering steps both before and after an $A$ step. We
won't bother to do this because it is probably seldom needed, and it
wouldn't be significantly simpler than adding two separate stuttering
variables.
\subsection{Adding Stuttering Steps to Multiple Actions}
To generalize what we did in the preceding section, we first consider
how to add stuttering steps before or after an action $A$ that may
have a nonempty context. We assume that the set $\Sigma$, its $\bot$
element, and the operator $decr$ do not depend on the values of the
context variables. (This should be true in most real examples.)
However, $initVal$ may depend on them. We therefore must make sure
that $initVal$ is evaluated with the values of the context variables
for which the action $A$ is ``executed''. This is no problem for
stuttering steps added after the $A$ step, where $A^{s}$ is defined by
(\ref{eq:A0sPost}). (Since the stuttering steps do nothing but
decrement the value of $s$, it makes no difference for which value of
the context variables $A^{s}$ is ``executed'' when $s#\top$.)
However, it is a problem for stuttering steps added before an $A$
step, where $A^{s}$ is defined by (\ref{eq:A0sPre}).
To solve the problem for $A^{s}$ defined by (\ref{eq:A0sPre}), we let
the non-$\top$ values of $s$ be records with a $val$ component that
equals the value of $s$ described in (\ref{eq:A0sPre}), and a $ctxt$
component that equals the tuple of values of the context when
$initVal$ is evaluated. In other words, $ctxt$ is set by the
\textsc{else} clause in the second conjunct of (\ref{eq:A0sPre}). The
condition that $s.ctxt$ equals the values of the context variables is
added as a conjunct to the \textsc{then} clause to make sure that $A$
is executed only in that context. The precise definition is given
below.
We often need to add stuttering steps before or after more than one
subaction of the next-state action. We could do that by adding
separate stuttering variables, or we could introduce a stuttering
array variable. However, because the stuttering steps we add to a
subaction all occur immediately before or after that subaction, we can
add them all with the same stuttering variable~$s$. Stuttering steps
can be added to each such subaction with its own set $\Sigma$ and
hence its own values of $initVal$ and $\bot$ and of the operator
$decr$. We just let the value of $s$ indicate the action to which the
stuttering steps are being added. We do this by adding to the
non-$\top$ values of $s$ an additional $id$ component that identifies
the action for which the stuttering steps are being added. The
component is set when $s$ is first set to a non-$\top$ value, and
execution of the new subaction $A^{s}$ is enabled when $s#\top$ only
if $s.id$ equals the identifier of $A$.
We write the three definitions (\ref{eq:As}), (\ref{eq:A0sPost}), and
(\ref{eq:A0sPre}) in \tlaplus\ using the three operators $NoStutter$,
$PostStutter$, and $PreStutter$, respectively, shown in
\lref{targ:Stuttering}{Figure~\ref{fig:Stuttering}}.
\begin{figure} \target{targ:Stuttering}
\begin{notla}
----------------------------- MODULE Stuttering ----------------------------
top == [top |-> "top"]
VARIABLES s, vars
NoStutter(A) == (s = top) /\ A /\ (s' = s)
PostStutter(A, actionId, context, bot, initVal, decr(_)) ==
IF s = top THEN /\ A
/\ s' = [id |-> actionId, ctxt |-> context, val |-> initVal]
ELSE /\ s.id = actionId
/\ UNCHANGED vars
/\ s'= IF s.val = bot THEN top
ELSE [s EXCEPT !.val = decr(s.val)]
PreStutter(A, enabled, actionId, context, bot, initVal, decr(_)) ==
IF s = top
THEN /\ enabled
/\ UNCHANGED vars
/\ s' = [id |-> actionId, ctxt |-> context, val |-> initVal]
ELSE /\ s.id = actionId
/\ IF s.val = bot THEN /\ s.ctxt = context
/\ A
/\ s' = top
ELSE /\ UNCHANGED vars
/\ s' = [s EXCEPT !.val = decr(s.val)]
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} Stuttering}\moduleRightDash\@xx{
\@x{ top \.{\defeq} [ top \.{\mapsto}\@w{top} ]
\@pvspace{8.0pt
\@x{ {\VARIABLES} s ,\, vars
\@pvspace{8.0pt
\@x{ NoStutter ( A ) \.{\defeq} ( s \.{=} top ) \.{\land} A \.{\land} ( s
\.{'} \.{=} s )
\@pvspace{8.0pt
\@x{ PostStutter ( A ,\, actionId ,\, context ,\, bot ,\, initVal ,\, decr (
\_ ) ) \.{\defeq}
\@x{\@s{8.2} {\IF} s \.{=} top \.{\THEN} \.{\land} A
\@x{\@s{84.08} \.{\land} s \.{'} \.{=} [ id \.{\mapsto} actionId ,\,\@s{4.1}
ctxt \.{\mapsto} context ,\, val \.{\mapsto} initVal ]
\@x{\@s{52.77} \.{\ELSE} \.{\land} s . id \.{=} actionId
\@x{\@s{84.08} \.{\land} {\UNCHANGED} vars
\@x{\@s{84.08} \.{\land} s \.{'} \.{=} {\IF} s . val \.{=} bot \.{\THEN} top
\@x{\@s{176.19} \.{\ELSE} [ s {\EXCEPT} {\bang} . val \.{=} decr ( s . val )
]
\@pvspace{8.0pt
\@x{ PreStutter ( A ,\, enabled ,\, actionId ,\, context ,\, bot ,\, initVal
,\, decr ( \_ ) ) \.{\defeq}
\@x{\@s{8.2} {\IF} s \.{=} top
\@x{\@s{16.4} \.{\THEN} \.{\land} enabled
\@x{\@s{47.71} \.{\land} {\UNCHANGED} vars
\@x{\@s{47.71} \.{\land} s \.{'} \.{=} [ id \.{\mapsto} actionId ,\, ctxt
\.{\mapsto} context ,\, val \.{\mapsto} initVal ]
\@x{\@s{16.4} \.{\ELSE} \.{\land} s . id \.{=} actionId
\@x{\@s{47.71} \.{\land} {\IF} s . val \.{=} bot \.{\THEN} \.{\land} s . ctxt
\.{=} context
\@x{\@s{150.07} \.{\land} A
\@x{\@s{150.07} \.{\land} s \.{'} \.{=} top
\@x{\@s{118.76} \.{\ELSE} \.{\land} {\UNCHANGED} vars
\@x{\@s{150.07} \.{\land} s \.{'} \.{=} [ s {\EXCEPT} {\bang} . val \.{=}
decr ( s . val ) ]
\end{tlatex}
\caption{The beginning of the \emph{Stuttering} module.} \label{fig:Stuttering}
\end{figure}
The module should be instantiated by substituting the new stuttering
variable for $s$ and the tuple of variables of the original
specification for $vars$. It defines $\top$, which is written $top$,
to be a value that is different from the values assigned to $s$ by
$PostStutter$ and $PreStutter$ actions (and that TLC knows is
different from those values). The other values in (\ref{eq:A0sPost})
and (\ref{eq:A0sPre}) are provided by the following arguments to
$PostStutter$ and $PreStutter$.
\begin{describe}{$actionId$}
\item[$A$] The action to which stuttering steps
are being added.
\item[$actionId$] An identifier to distinguish that action $A$ from other
actions to which stuttering steps are added. We like to let it be the
name of the action (which is a string).
\item[$bot$] The $\bot$ (smallest) element of $\Sigma$.
\item[$initVal$] The value we have been calling by that name.
\item[$decr$] The operator we have been calling by that name.
It must take a single argument.
\item[$enabled$] A formula that should be equivalent to $\ENABLED A$.
We can often find such a formula that TLC can evaluate much more
efficiently than $\ENABLED A$. You can use TLC to check that
$enabled$ is equivalent to $\ENABLED A$ by checking that
$enabled \equiv \ENABLED A$
is an invariant of the original specification.
\item[$context$] The tuple of context identifiers of $A$. (You can use
$i$ instead of a 1-tuple $<<i>>$.) The $context$ argument is used in
the $PostStutter$ action only to set the $ctxt$ component of~$s$.
This component may be used in defining refinement mappings.
\end{describe}
Note that $PostStutter$ and $PreStutter$ add at least one stuttering
step, adding exactly one such step if $initVal=bot$. It is often more
convenient to use operators that add one fewer stuttering step. These
are the operators $MayPostStutter$ and $MayPreStutter$ defined in the
$Stuttering$ module as shown in
\lref{targ:Stuttering2}{Figure~\ref{fig:Stuttering2}}.
\begin{figure} \target{targ:Stuttering2}
\begin{notla}
MayPostStutter(A, actionId, context, bot, initVal, decr(_)) ==
IF s = top THEN /\ A
/\ s' = IF initVal = bot
THEN s
ELSE [id |-> actionId, ctxt |-> context,
val |-> initVal]
ELSE /\ s.id = actionId
/\ UNCHANGED vars
/\ s'= IF decr(s.val) = bot
THEN top
ELSE [s EXCEPT !.val = decr(s.val)]
MayPreStutter(A, enabled, actionId, context, bot, initVal, decr(_)) ==
IF s = top
THEN /\ enabled
/\ IF initVal = bot
THEN A /\ (s' = s)
ELSE /\ UNCHANGED vars
/\ s' = [id |-> actionId, ctxt |-> context,
val |-> decr(initVal)]
ELSE /\ s.id = actionId
/\ IF s.val = bot THEN /\ s.ctxt = context
/\ A
/\ s' = top
ELSE /\ UNCHANGED vars
/\ s' = [s EXCEPT !.val = decr(s.val)]
============================================================================
\end{notla}
\begin{tlatex}
\@x{ MayPostStutter ( A ,\, actionId ,\, context ,\, bot ,\, initVal ,\, decr
( \_ ) ) \.{\defeq}
\@x{\@s{8.2} {\IF} s \.{=} top \.{\THEN} \.{\land} A
\@x{\@s{84.08} \.{\land} s \.{'} \.{=} {\IF} initVal \.{=} bot
\@x{\@s{124.44} \.{\THEN} s
\@x{\@s{124.44} \.{\ELSE} [ id \.{\mapsto} actionId ,\, ctxt \.{\mapsto}
context ,\,
\@x{\@s{158.53} val \.{\mapsto} initVal ]
\@x{\@s{52.77} \.{\ELSE} \.{\land} s . id \.{=} actionId
\@x{\@s{84.08} \.{\land} {\UNCHANGED} vars
\@x{\@s{84.08} \.{\land} s \.{'} \.{=} {\IF} decr ( s . val ) \.{=} bot
\@x{\@s{124.44} \.{\THEN} top
\@x{\@s{124.44} \.{\ELSE} [ s {\EXCEPT} {\bang} . val \.{=} decr ( s . val )
]
\@pvspace{8.0pt
\@x{ MayPreStutter ( A ,\, enabled ,\, actionId ,\, context ,\, bot ,\,
initVal ,\, decr ( \_ ) ) \.{\defeq}
\@x{\@s{8.2} {\IF} s \.{=} top
\@x{\@s{16.4} \.{\THEN} \.{\land} enabled
\@x{\@s{47.71} \.{\land} {\IF} initVal \.{=} bot
\@x{\@s{67.02} \.{\THEN} A \.{\land} ( s \.{'} \.{=} s )
\@x{\@s{67.02} \.{\ELSE} \.{\land} {\UNCHANGED} vars
\@x{\@s{98.33} \.{\land} s \.{'} \.{=} [ id \.{\mapsto} actionId ,\, ctxt
\.{\mapsto} context ,\,
\@x{\@s{133.27} val \.{\mapsto} decr ( initVal ) ]
\@x{\@s{16.4} \.{\ELSE} \.{\land} s . id \.{=} actionId
\@x{\@s{47.71} \.{\land} {\IF} s . val \.{=} bot \.{\THEN} \.{\land} s . ctxt
\.{=} context
\@x{\@s{150.07} \.{\land} A
\@x{\@s{150.07} \.{\land} s \.{'} \.{=} top
\@x{\@s{118.76} \.{\ELSE} \.{\land} {\UNCHANGED} vars
\@x{\@s{150.07} \.{\land} s \.{'} \.{=} [ s {\EXCEPT} {\bang} . val \.{=}
decr ( s . val ) ]
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{The end of the \emph{Stuttering} module.} \label{fig:Stuttering2}
\end{figure}
Unlike the original definitions, the actions they define do not
execute any stuttering step when $initVal$ equals $bot$.
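The step-count difference between the two families of operators can be checked with a small Python calculation (an illustration only; the function names are our own):

```python
def chain_length(init_val, bot, decr):
    """Length of the sequence initVal, decr(initVal), ... , bot."""
    n, v = 1, init_val
    while v != bot:
        v, n = decr(v), n + 1
    return n

def stutters_added(init_val, bot, decr, may=False):
    """Stuttering steps added per A step: PostStutter/PreStutter walk the
    whole chain from initVal to bot; the May variants stop one step short."""
    n = chain_length(init_val, bot, decr)
    return n - 1 if may else n
```

With $bot=1$, $decr(s)=s-1$, and $initVal=3$ this gives 3 stuttering steps for $PostStutter$ and 2 for $MayPostStutter$; with $initVal=bot$ it gives 1 and 0, matching the text.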
As a simple example, let formula $Spec$ be defined as in
Figure~\ref{fig:hour} to equal the hour clock specification of
(\ref{eq:HourClock}).
\begin{figure}
\begin{notla}
---------------------------- MODULE Hour ----------------------------
EXTENDS Integers
VARIABLE h
Init == h=0
Next == h' = (h + 1) % 24
Spec == Init /\ [][Next]_h
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} Hour}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers
\@pvspace{8.0pt
\@x{ {\VARIABLE} h
\@pvspace{8.0pt
\@x{ Init\@s{4.12} \.{\defeq} h \.{=} 0
\@pvspace{8.0pt
\@x{ Next \.{\defeq} h \.{'} \.{=} ( h \.{+} 1 )\@s{4.1} \.{\%}\@s{4.1} 24
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ h}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{The hour clock specification.} \label{fig:hour}
\end{figure}
Let us suppose that a module $HourMin$ defines a formula $Spec$ to
equal the hour-minute clock specification $Spec_{2}$ of
(\ref{eq:HourMinClock}).
To construct a refinement mapping under which the hour clock
specification implements the hour-minute clock specification, we add
59 stuttering steps before each $Next$ step of the hour-clock
specification. The obvious way to do that is to let $\Sigma$ be the
set $1\dd 59$ ordered by \,$<$\,, with $\bot$ equal to 1 and $initVal$
equal to 59. However, our refinement mapping becomes simpler if we
use the reverse ordering $>$ of $1\dd 59$, with $\bot$ equal to $59$ and
$initVal$ equal to 1. The refinement mapping can then define \ov{m}
to equal 0 when $s=\top$ and $s.val$ when $s#\top$.
We use the operator $PreStutter$ of the $Stuttering$ module to define
$Next^{s}$. We instantiate that module with $vars$ equal to $h$ and
with $s$ equal to the stuttering variable, which we also call $s$.
For the arguments of $PreStutter$, observe that:
\begin{itemize}
\item $Next$ is always enabled, so $\ENABLED Next$ equals \TRUE.
\item Since we are adding stuttering steps to only one action, it doesn't
matter what constant we choose for the $actionId$ argument.
\item $Next$, which is the only subaction in the trivial disjunctive
representation of $Next$, has a null context. We can therefore let
the $context$ argument be any constant.
\end{itemize}
We therefore add the following to the end of module $Hour$.
\begin{display}
\begin{notla}
vars == h
VARIABLE s
INSTANCE Stuttering
InitS == Init /\ (s = top)
NextS == PreStutter(Next, TRUE, "Next", "", 59, 1, LAMBDA j : j+1)
SpecS == InitS /\ [][NextS]_<<vars, s>>
HM == INSTANCE HourMin WITH m <- IF s = top THEN 0 ELSE s.val
THEOREM SpecS => HM!Spec
\end{notla}
\begin{tlatex}
\@x{ vars \.{\defeq} h
\@x{ {\VARIABLE} s
\@x{ {\INSTANCE} Stuttering
\@pvspace{8.0pt
\@x{ InitS\@s{4.12} \.{\defeq} Init \.{\land} ( s \.{=} top )
\@pvspace{8.0pt
\@x{ NextS \.{\defeq} PreStutter ( Next ,\, {\TRUE} ,\,\@w{Next} ,\,\@w{\s{.15}} ,\,
59 ,\, 1 ,\, {\LAMBDA} j \.{:} j \.{+} 1 )
\@pvspace{8.0pt
\@x{ SpecS\@s{1.08} \.{\defeq} InitS \.{\land} {\Box} [ NextS ]_{ {\langle}
vars ,\, s {\rangle}}
\@pvspace{8.0pt
\@x{ HM \.{\defeq} {\INSTANCE} HourMin {\WITH} m \.{\leftarrow} {\IF} s \.{=}
top \.{\THEN} 0 \.{\ELSE} s . val
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecS \.{\implies} HM {\bang} Spec
\end{tlatex}
\end{display}
TLC can easily check this theorem.
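The theorem can also be checked by hand simulation; the following Python sketch (an illustration, not a substitute for TLC) executes $SpecS$ with the record-valued $s$ collapsed to its $val$ component, applies the refinement mapping, and verifies that the result is a behavior of the hour-minute clock:

```python
TOP = "top"  # the value written as top in module Stuttering

def next_s(h, s):
    """One NextS step: PreStutter(Next, TRUE, "Next", "", 59, 1, LAMBDA j: j+1)."""
    if s == TOP:
        return h, 1                # start stuttering with val = initVal = 1
    if s == 59:                    # val = bot, so take the Next step
        return (h + 1) % 24, TOP
    return h, s + 1                # stutter; decr is LAMBDA j : j + 1

def refine(h, s):
    """The refinement mapping m <- IF s = top THEN 0 ELSE s.val."""
    return h, 0 if s == TOP else s

h, s = 0, TOP                      # InitS
trace = [refine(h, s)]
for _ in range(150):
    h, s = next_s(h, s)
    trace.append(refine(h, s))
```

Every consecutive pair of states in `trace` is a legal hour-minute clock step: $m$ increments, and $h$ advances exactly when $m$ wraps from 59 to 0.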
\subsection{Correctness of Adding a Stuttering Variable}
How do we check that adding a stuttering variable using the operators
of the $Stuttering$ module produces a specification $Spec^{s}$ such
that
\tlabox{\EE s: Spec^{s}}
is equivalent to the original specification $Spec$? The construction
ensures that each behavior of $Spec^{s}$ is obtained by adding
stuttering steps to a behavior of $Spec$, so \tlabox{\EE s: Spec^{s}}
implies $Spec$. It can fail to be equivalent to $Spec$ only if it
adds an infinite sequence of stuttering steps or if we have
used an incorrect $enabled$ argument for $PreStutter$. It will be
equivalent to $Spec$ if the following conditions are satisfied for
every use of the $PostStutter$ and $PreStutter$ operators, with
arguments named as above, for some constant set $\Sigma$:
\begin{enumerate}
\item For every $\sigma$ in $\Sigma$, the sequence of values
$\sigma$, $decr(\sigma)$, $decr(decr(\sigma))$, \ldots\
is contained in $\Sigma$ and
eventually reaches $bot$.
\item $initVal$ is in $\Sigma$.
\item $enabled$ is equivalent to $\ENABLED A$ [for $PreStutter$ only].
\end{enumerate}
Condition~1 is a condition only on the constants $\Sigma$, $bot$, and
$decr$. It can be written as $StutterConstantCondition(\Sigma,bot,decr)$
using the following definition from the $Stuttering$ module:
\begin{display}
\begin{notla}
StutterConstantCondition(Sigma, bot, decr(_)) ==
LET InverseDecr(S) == {sig \in Sigma \ S : decr(sig) \in S}
R[n \in Nat] == IF n = 0 THEN {bot}
ELSE LET T == R[n-1]
IN T \cup InverseDecr(T)
IN Sigma = UNION {R[n] : n \in Nat}
\end{notla}
\begin{tlatex}
\@x{ StutterConstantCondition ( Sigma ,\, bot ,\, decr ( \_ ) ) \.{\defeq}
\@x{\@s{8.2} \.{\LET} InverseDecr ( S ) \.{\defeq} \{ sig \.{\in} Sigma
\.{\,\backslash\,} S \.{:} decr ( sig ) \.{\in} S \}
\@x{\@s{28.59} R [ n \.{\in} Nat ] \.{\defeq} {\IF} n \.{=} 0 \.{\THEN} \{
bot \}
\@x{\@s{134.14} \.{\ELSE} \.{\LET} T \.{\defeq} R [ n \.{-} 1 ]
\@x{\@s{165.45} \.{\IN} T \.{\cup} InverseDecr ( T )
\@pvspace{8.0pt
\@x{\@s{8.2} \.{\IN} Sigma \.{=} {\UNION} \{ R [ n ] \.{:} n \.{\in} Nat \}
\end{tlatex}
\end{display}
This condition can be checked by TLC by putting it into an \textsc{assume}
statement or else putting it in the \textsf{Evaluate Constant Expression}
field of a model's \emph{Model Checking Results} page. In either case,
the model must replace $Nat$ by $0\dd n$ for a
(sufficiently large) integer $n$,
and $\Sigma$ must also be replaced with a finite set if it is infinite.
The $Stuttering$ module defines $AltStutterConstantCondition$ to
be equivalent to $StutterConstantCondition$ if $\Sigma$ is finite,
and it doesn't require redefining $Nat$.
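For finite $\Sigma$, the same fixpoint is easy to state in Python (an illustrative transliteration of $StutterConstantCondition$; the \tlaplus\ definition is the authoritative one):

```python
def stutter_constant_condition(sigma, bot, decr):
    """Sigma equals the union of R[0] = {bot}, R[1] = R[0] + InverseDecr(R[0]),
    and so on -- i.e. iterating decr from any element of sigma reaches bot."""
    reached = {bot}
    while True:
        inverse_decr = {x for x in sigma - reached if decr(x) in reached}
        if not inverse_decr:
            return sigma == reached
        reached |= inverse_decr
```

For the hour-clock example, $\Sigma = 1\dd 59$ with the reverse ordering satisfies the condition with $bot = 59$ and $decr(j)=j+1$, but not with $bot = 1$ and that same $decr$.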
The last two conditions are ones that need only hold for behaviors
of $Spec$. They can be stated formally as follows, where
$A$ is a subaction with context~\tlabox{<<\mathbf{k}; \mathbf{K}>>}:
\begin{eqnarray}
Spec & => & [][\A <<\mathbf{k}; \mathbf{K}>> :
A => (initVal \in \Sigma)]_{vars}
{\NOTLA \label{eq:stuttering-cond}} \V{.4}
Spec & => & [](\A <<\mathbf{k}; \mathbf{K}>> :
enabled \equiv \ENABLED A)
\end{eqnarray}
TLC can check them in the obvious way.
\subsection{Adding Infinite Stuttering} \label{sec:infinite-stutter}
The type of stuttering variable we have been describing adds a finite
number of stuttering steps before or after a step of a subaction.
There is another type of stuttering variable that adds an infinite
number of stuttering steps not associated with an action. You are
unlikely ever to have to use one, but we include it for completeness.
Suppose we want to find a refinement mapping under which a spec
$Spec_{1}$ that allows only halting behaviors implements a spec
$Spec_{2}$ that allows behaviors in which externally visible variables
stop changing, but internal variables keep changing forever. We
obviously can't do that, because if all the variables of $Spec_{1}$
stop changing, then no expression defined in terms of those variables
can keep changing forever. None of the methods we have described thus
far for adding an auxiliary variable $a$ to $Spec_{1}$ can help us,
because they all have the property that if every behavior allowed by
$Spec_{1}$ halts, then so does every behavior allowed by
$Spec_{1}^{a}$.
It's hard to devise a practical example in which this problem would
arise. One possibility is for $Spec_{2}$ to have a server perform
internal actions looking for user input that may never arrive, while
in $Spec_{1}$ the server just waits for input. Although unlikely to
arise, the problem is easy to solve, so we briefly sketch a solution.
The solution is to add a stuttering variable that is required to
stutter forever. Let $Spec$ equal $Init /\ [][Next]_{vars}$ and let
$UC$ be the stuttering action \tlabox{\UNCHANGED vars}. Since
$[Next]_{vars}$ equals $Next \/ UC$, we can write $Spec$ as
$Init /\ [][Next \/ UC]_{vars}$.
We can therefore add the subaction $UC$ to any disjunctive
representation of $Next$. We add a history variable $s$ as described
in Section~\ref{sec:history}, using a disjunctive representation
containing the subaction $UC$. This defines an action $A^{s}$ for
every subaction $A$ and produces the specification
$Init^{s} /\ [][Next^{s}]_{<<vars, s>>}$.
(We choose $UC^{s}$ so it implies $s'#s$.) We then define $Spec^{s}$
to equal
\[ Init^{s} /\ [][Next^{s}]_{<<vars, s>>} /\ \WF_{<<vars, s>>}(UC^{s})\]
Since \tlabox{\ENABLED UC^{s}} equals $\TRUE$, the fairness
requirement $\WF_{<<vars,s>>}(UC^{s})$ implies that infinitely many
$UC^{s}$ steps occur. These are steps that leave the variables in
$vars$ unchanged, changing only $s$. Since we have added $s$ as
a history variable,
\tlabox{\EE s : Init^{s} /\ [][Next^{s}]_{<<vars, s>>}}
is equivalent to $Spec$. Since any \tlaplus\ spec allows stuttering
steps, this implies that
\tlabox{\EE s: Spec^{s}}
is equivalent to $Spec$.
\subsection{Liveness}
Liveness poses no problem when adding a stuttering variable. As with
other auxiliary variables, we obtain $Spec^{s}$ by adding the
stuttering variable to the safety part of $Spec$ and then conjoining
to it the liveness conjunct of $Spec$. (This is true as well for the
kind of stuttering variable described in
Section~\ref{sec:infinite-stutter}, where $Spec^{s}$ contains a liveness
conjunct.)
Although $Spec^{s}$ may have an unusual form, it isn't weird. If
$Spec$ is machine closed then $Spec^{s}$ is also machine closed.
However, putting it into a standard form with fairness conditions only
on subactions of $Next^{s}$ is not as simple as it is for history
variables.
\section{The Snapshot Problem}
We now consider an example of using auxiliary variables to show that
an algorithm satisfies its specification. Our example is based on an
algorithm of Afek et al.~\cite{afek:atomic-snap}. Their algorithm
implements what they call a \emph{single-writer atomic snapshot
memory}, which we will call simply a \emph{snapshot object}. Their
algorithm implements a snapshot object using an unbounded amount of
storage. They also present a second algorithm that uses a bounded
amount of storage and implements a more general type of object, but we
restrict ourselves to their first, simpler algorithm. Moreover, we
consider only a simplified version of this simpler algorithm; their
algorithm can be checked by adding the same auxiliary variables used
for the simplified version.
\subsection{Linearizability}
A snapshot algorithm is used to implement an atomic read of an array
of memory registers, each of which can be written by a different
process. Its specification is a special case of a linearizable
specification of a data object---a concept introduced by
Herlihy and Wing~\cite{herlihy:axioms}.
A data object, also called a state machine, executes commands from
user processes. It is described by an initial state of the
object and an operator $Apply$, where $Apply(i, cmd, st)$ describes
the output and new state of the object that results from process $i$
executing command $cmd$ when the object has state $st$. It is
specified formally by these declared constants:
\begin{display}
\begin{notla}
CONSTANTS Procs, Commands(_), Outputs(_), InitObj,
Apply(_, _, _)
\end{notla}
\begin{tlatex}
\@x{ {\CONSTANTS}\@s{4.1} Procs ,\, Commands ( \_ ) ,\, Outputs ( \_ ) ,\,
InitObj ,\,
\@x{\@s{58.85} Apply ( \_ ,\, \_ ,\, \_ )
\end{tlatex}
\end{display}
They have the following meanings:
\begin{describe}{$Apply(i,cmd,st)$}
\item[$Procs$] The set of processes.
\item[$Commands(i)$] The set of commands that process $i$ can issue.
\item[$Outputs(i)$] The set of outputs the commands issued by process $i$
can produce.
\item[$InitObj$] The initial state of the object.
\item[$Apply(i,cmd,st)$] A record with $output$ and $newState$ fields
describing the result of process $i$ executing command $cmd$ when
the object is in state~$st$.
\end{describe}
A linearizable implementation of the data object
is one in which the state of the object is internal, the only
externally visible actions being the issuing of the command and the
return of its output. More precisely, a process $i$ executes a
command $cmd$ with a $BeginOp(i, cmd)$ step, followed by a $DoOp(i)$
step that modifies the state of the object, followed by an $EndOp(i)$
step. The $BeginOp$ and $EndOp$ steps are externally visible, meaning
that they modify externally visible variables (and perhaps internal
variables), while the $DoOp$ step modifies only internal
variables---including an internal variable describing the state of the
object.
To simplify the specification, we assume that the sets of commands and
of outputs are disjoint. We can then use a single externally visible
variable $interface$, letting $BeginOp(i, cmd)$ set $interface[i]$ to
the command $cmd$ and letting $EndOp(i)$ set it to the command's
output. We also introduce an internal variable $istate$ to hold the
internal state of the processes---needed to remember, while a process
is executing a command, whether or not it has performed the $DoOp$
step and, if it has, what output was produced. We do this by letting
$BeginOp(i, cmd)$ set $istate[i]$ to $cmd$, and letting $DoOp(i)$
set it to the command's output. Here is the definition of the
next-state action and its subactions.
\begin{display}
\begin{notla}
BeginOp(i, cmd) == /\ interface[i] \in Outputs(i)
/\ interface' = [interface EXCEPT ![i] = cmd]
/\ istate' = [istate EXCEPT ![i] = cmd]
/\ object' = object
DoOp(i) == /\ interface[i] \in Commands(i)
/\ istate[i] = interface[i]
/\ LET result == Apply(i, interface[i], object)
IN /\ object' = result.newState
/\ istate' = [istate EXCEPT ![i] = result.output]
/\ interface' = interface
EndOp(i) == /\ interface[i] \in Commands(i)
/\ istate[i] \in Outputs(i)
/\ interface' = [interface EXCEPT ![i] = istate[i]]
/\ UNCHANGED <<object, istate>>
Next == \E i \in Procs : \/ \E cmd \in Commands(i) : BeginOp(i, cmd)
\/ DoOp(i)
\/ EndOp(i)
\end{notla}
\begin{tlatex}
\@x{ BeginOp ( i ,\, cmd ) \.{\defeq} \.{\land} interface [ i ] \.{\in}
Outputs ( i )
\@x{\@s{93.61} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} cmd ]
\@x{\@s{93.61} \.{\land} istate \.{'}\@s{1.47} \.{=} [ istate {\EXCEPT}
{\bang} [ i ] \.{=} cmd ]
\@x{\@s{93.61} \.{\land} object \.{'} \.{=} object
\@pvspace{8.0pt
\@x{ DoOp ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} Commands ( i )
\@x{\@s{56.82} \.{\land} istate [ i ] \.{=} interface [ i ]
\@x{\@s{56.82} \.{\land} \.{\LET} result \.{\defeq} Apply ( i ,\, interface [
i ] ,\, object )
\@x{\@s{67.93} \.{\IN} \.{\land} object \.{'} \.{=} result . newState
\@x{\@s{88.33} \.{\land} istate \.{'}\@s{1.47} \.{=} [ istate {\EXCEPT}
{\bang} [ i ] \.{=} result . output ]
\@x{\@s{56.82} \.{\land} interface \.{'} \.{=} interface
\@pvspace{8.0pt
\@x{ EndOp ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} Commands ( i )
\@x{\@s{61.67} \.{\land} istate [ i ] \.{\in} Outputs ( i )
\@x{\@s{61.67} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} istate [ i ] ]
\@x{\@s{61.67} \.{\land} {\UNCHANGED} {\langle} object ,\, istate {\rangle}
\@pvspace{8.0pt
\@x{ Next \.{\defeq} \E\, i \.{\in} Procs \.{:} \.{\lor} \E\, cmd \.{\in}
Commands ( i ) \.{:} BeginOp ( i ,\, cmd )
\@x{\@s{97.40} \.{\lor} DoOp ( i )
\@x{\@s{97.40} \.{\lor} EndOp ( i )
\end{tlatex}
\end{display}
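The three subactions translate directly into Python step functions (a sketch with helper names of our own choosing; dictionaries play the role of the \tlaplus\ functions, and the preconditions become assertions):

```python
def begin_op(i, cmd, interface, istate, obj, outputs):
    """BeginOp(i, cmd): enabled only when process i is idle,
    i.e. its interface value is an output."""
    assert interface[i] in outputs(i)
    return {**interface, i: cmd}, {**istate, i: cmd}, obj

def do_op(i, interface, istate, obj, commands, apply_op):
    """DoOp(i): the internal step that applies the command to the object."""
    assert interface[i] in commands(i) and istate[i] == interface[i]
    result = apply_op(i, interface[i], obj)
    return interface, {**istate, i: result["output"]}, result["newState"]

def end_op(i, interface, istate, obj, commands, outputs):
    """EndOp(i): makes the command's output externally visible."""
    assert interface[i] in commands(i) and istate[i] in outputs(i)
    return {**interface, i: istate[i]}, istate, obj
```

For example, with a one-process counter object whose only command `"inc"` increments the state and outputs the new value (commands and outputs disjoint, as assumed), the sequence $BeginOp$, $DoOp$, $EndOp$ leaves the interface showing the output 1 and the object in state 1.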
Initially, $interface[i]$ and $istate[i]$ equal some output, for each
$i$. We let that equal $InitOutput(i)$ for some \textsc{constant}
operator $InitOutput$. We also add a fairness requirement to imply
that any command that has begun (with a $BeginOp$ step) eventually
completes (with an $EndOp$ step). The complete specification (with
the action definitions above elided) is in module $Linearizability$,
shown in \lref{targ:Linearizability}{Figure~\ref{fig:Linearizability}}.
\begin{figure} \target{targ:Linearizability}
\begin{notla}
-------------------------- MODULE Linearizability --------------------------
CONSTANTS Procs, Commands(_), Outputs(_), InitOutput(_),
ObjValues, InitObj, Apply(_, _, _)
ASSUME LinearAssumps ==
/\ InitObj \in ObjValues
/\ \A i \in Procs : InitOutput(i) \in Outputs(i)
/\ \A i \in Procs : Outputs(i) \cap Commands(i) = { }
/\ \A i \in Procs, obj \in ObjValues :
\A cmd \in Commands(i) :
/\ Apply(i, cmd, obj).output \in Outputs(i)
/\ Apply(i, cmd, obj).newState \in ObjValues
VARIABLES object, interface, istate
vars == <<object, interface, istate>>
Init == /\ object = InitObj
/\ interface = [i \in Procs |-> InitOutput(i)]
/\ istate = [i \in Procs |-> InitOutput(i)]
BeginOp(i, cmd) == ...
DoOp(i) == ...
EndOp(i) == ...
Next == ...
SafeSpec == Init /\ [][Next]_vars
Fairness == \A i \in Procs : WF_vars(DoOp(i)) /\ WF_vars(EndOp(i))
Spec == Init /\ [][Next]_vars /\ Fairness
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} Linearizability}\moduleRightDash\@xx{
\@x{ {\CONSTANTS}\@s{4.1} Procs ,\, Commands ( \_ ) ,\, Outputs ( \_ ) ,\,
InitOutput ( \_ ) ,\,
\@x{\@s{58.85} ObjValues ,\, InitObj ,\, Apply ( \_ ,\, \_ ,\, \_ )
\@pvspace{8.0pt
\@x{ {\ASSUME} LinearAssumps \.{\defeq}
\@x{\@s{46.44} \.{\land} InitObj \.{\in} ObjValues
\@x{\@s{46.44} \.{\land} \A\, i \.{\in} Procs \.{:} InitOutput ( i ) \.{\in}
Outputs ( i )
\@x{\@s{46.44} \.{\land} \A\, i \.{\in} Procs \.{:} Outputs ( i ) \.{\cap}
Commands ( i ) \.{=} \{ \}
\@x{\@s{46.44} \.{\land} \A\, i \.{\in} Procs ,\, obj \.{\in} ObjValues
\.{:}
\@x{\@s{65.75} \A\, cmd \.{\in} Commands ( i ) \.{:}
\@x{\@s{77.07} \.{\land}\@s{3.71} Apply ( i ,\, cmd ,\, obj ) . output
\.{\in} Outputs ( i )
\@x{\@s{77.07} \.{\land}\@s{3.71} Apply ( i ,\, cmd ,\, obj ) . newState
\.{\in} ObjValues
\@pvspace{8.0pt
\@x{ {\VARIABLES} object ,\, interface ,\, istate
\@x{ vars \.{\defeq} {\langle} object ,\, interface ,\, istate {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq} \.{\land} object \.{=} InitObj
\@x{\@s{37.72} \.{\land} interface \.{=} [ i \.{\in} Procs \.{\mapsto}
InitOutput ( i ) ]
\@x{\@s{37.72} \.{\land} istate \.{=} [ i \.{\in} Procs \.{\mapsto}
InitOutput ( i ) ]
\@pvspace{8.0pt
\@x{ BeginOp ( i ,\, cmd ) \.{\defeq} \.{\dots}
\@x{ DoOp ( i ) \.{\defeq} \.{\dots}
\@x{ EndOp ( i ) \.{\defeq} \.{\dots}
\@x{ Next \.{\defeq} \.{\dots}
\@pvspace{8.0pt
\@x{ SafeSpec \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@pvspace{8.0pt
\@x{ Fairness\@s{0.49} \.{\defeq} \A\, i \.{\in} Procs \.{:} {\WF}_{ vars} (
DoOp ( i ) ) \.{\land} {\WF}_{ vars} ( EndOp ( i ) )
\@pvspace{2.0pt
\@x{ Spec \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}\@s{4.1} \.{\land}
Fairness
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{Linearizability}.} \label{fig:Linearizability}
\end{figure}
Any particular linearizable object can be specified by instantiating
the module with the appropriate constants. The module includes an
assumption named $LinearAssumps$, to check that the instantiated
constants satisfy the properties they should for the module to specify
a linearizable object. To state all those properties, the
specification introduces a constant $ObjValues$ to describe the set of
all possible states of the object. This set could be defined to equal
the following rather complicated expression. Trying to understand it
provides a good lesson in set theory.
\begin{display}
\begin{notla}
LET ApplyProcTo(i,S) ==
{Apply(i, cmd, x).newState : x \in S, cmd \in Commands(i)}
ApplyTo(S) == UNION {ApplyProcTo(i, S) : i \in Procs}
ApplyITimes[i \in Nat] ==
IF i = 0 THEN {InitObj}
ELSE ApplyTo(ApplyITimes[i-1])
IN UNION {ApplyITimes[i] : i \in Nat}
\end{notla}
\begin{tlatex}
\@x{ \.{\LET} ApplyProcTo ( i ,\, S ) \.{\defeq}
\@x{\@s{32.69} \{ Apply ( i ,\, cmd ,\, x ) . newState \.{:}\@s{4.1} x
\.{\in} S ,\, cmd \.{\in} Commands ( i ) \}
\@pvspace{4.0pt
\@x{\@s{20.39} ApplyTo ( S ) \.{\defeq} {\UNION} \{ ApplyProcTo ( i ,\, S )
\.{:} i \.{\in} Procs \}
\@pvspace{4.0pt
\@x{\@s{20.19} ApplyITimes [ i \.{\in} Nat ] \.{\defeq}
\@x{\@s{40.89} {\IF} i \.{=} 0 \.{\THEN} \{ InitObj \}
\@x{\@s{75.47} \.{\ELSE} ApplyTo ( ApplyITimes [ i \.{-} 1 ] )
\@pvspace{4.0pt
\@x{ \.{\IN} {\UNION} \{ ApplyITimes [ i ] \.{:} i \.{\in} Nat \}
\end{tlatex}
\end{display}
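When the set of reachable states is finite, the union over $ApplyITimes$ can be evaluated by an iterative fixpoint; the following Python sketch (our illustration of the expression above) computes it by breadth-first search:

```python
def obj_values(init_obj, procs, commands, apply_op):
    """All states reachable from InitObj: the union of ApplyITimes[0] = {InitObj},
    ApplyITimes[1] = ApplyTo(ApplyITimes[0]), and so on, terminating once no
    new states appear (hence the finiteness assumption)."""
    states, frontier = {init_obj}, {init_obj}
    while frontier:
        frontier = {apply_op(i, cmd, st)["newState"]
                    for st in frontier
                    for i in procs
                    for cmd in commands(i)} - states
        states |= frontier
    return states
```

For a counter modulo 3 with a single `"inc"` command, this returns the set of states 0, 1, and 2.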
\subsection{The Linearizable Snapshot Specification}
By a snapshot object, we mean what Afek et al.~\cite{afek:atomic-snap}
called an \emph{atomic snapshot memory}. In a snapshot object, the
processes are either readers or writers. Reader and writer should be
thought of as roles; the same physical process can act as both a
reader and a writer. A snapshot object is an array of registers, one
per writer. A write operation writes a value to the writer's register
and produces as output some fixed value that is not a possible
register value. A read operation has a single command that produces
the object's state (an array of register values) as output and leaves
that state unchanged.
The specification declares four constants: the sets $Readers$ and
$Writers$ of reader and writer processes; the set $RegVals$ of
possible register values; and a value $InitRegVal$ in $RegVals$ that
is the initial value of a register. We call the snapshot object a
\emph{memory} and use different names for some of the parameters of
the $Linearizability$ module, including $MemVals$ and $InitMem$ for
$ObjValues$ and $InitObj$. We define $NotMemVal$ be the single reader
command and $NotRegVal$ to be the single write command output. The
complete specification is in module $LinearSnapshot$ of
\lref{targ:LinearSnapshot}{Figure~\ref{fig:LinearSnapshot}}. (The
\textsc{assume} is added at the end of the module so TLC will check
that the assumption $LinearAssumps$ of module $Linearizability$ is
true under the instantiation.)
\begin{figure} \target{targ:LinearSnapshot}
\begin{notla}
--------------------------- MODULE LinearSnapshot ---------------------------
CONSTANTS Readers, Writers, RegVals, InitRegVal
ASSUME /\ Readers \cap Writers = {}
/\ InitRegVal \in RegVals
Procs == Readers \cup Writers
MemVals == [Writers -> RegVals]
InitMem == [i \in Writers |-> InitRegVal]
NotMemVal == CHOOSE v : v \notin MemVals
NotRegVal == CHOOSE v : v \notin RegVals
Commands(i) == IF i \in Readers THEN {NotMemVal}
ELSE RegVals
Outputs(i) == IF i \in Readers THEN MemVals
ELSE {NotRegVal}
InitOutput(i) == IF i \in Readers THEN InitMem ELSE NotRegVal
Apply(i, cmd, obj) == IF i \in Readers
THEN [newState |-> obj, output |-> obj]
ELSE [newState |-> [obj EXCEPT ![i] = cmd],
output |-> NotRegVal]
VARIABLES mem, interface, istate
INSTANCE Linearizability WITH ObjValues <- MemVals, InitObj <- InitMem,
object <- mem
ASSUME LinearAssumps
============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} LinearSnapshot}\moduleRightDash\@xx{
\@x{ {\CONSTANTS} Readers ,\, Writers ,\, RegVals ,\, InitRegVal
\@pvspace{8.0pt
\@x{ {\ASSUME} \.{\land} Readers \.{\cap} Writers \.{=} \{ \}
\@x{\@s{38.24} \.{\land} InitRegVal \.{\in} RegVals
\@pvspace{8.0pt
\@x{ Procs \.{\defeq} Readers \.{\cup} Writers
\@pvspace{8.0pt
\@x{ MemVals \.{\defeq} [ Writers \.{\rightarrow} RegVals ]
\@x{ InitMem\@s{2.60} \.{\defeq} [ i \.{\in} Writers \.{\mapsto} InitRegVal
]
\@pvspace{8.0pt
\@x{ NotMemVal \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} MemVals
\@x{ NotRegVal\@s{6.27} \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} RegVals
\@pvspace{8.0pt
\@x{ Commands ( i ) \.{\defeq} {\IF} i \.{\in} Readers \.{\THEN} \{ NotMemVal
\}
\@x{\@s{144.52} \.{\ELSE} RegVals
\@pvspace{8.0pt
\@x{ Outputs ( i ) \.{\defeq} {\IF} i \.{\in} Readers \.{\THEN} MemVals
\@x{\@s{130.21} \.{\ELSE} \{ NotRegVal \}
\@pvspace{8.0pt
\@x{ InitOutput ( i ) \.{\defeq} {\IF} i \.{\in} Readers \.{\THEN} InitMem
\.{\ELSE} NotRegVal
\@pvspace{8.0pt
\@x{ Apply ( i ,\, cmd ,\, obj ) \.{\defeq} {\IF} i \.{\in} Readers
\@x{\@s{110.27} \.{\THEN} [ newState \.{\mapsto} obj ,\, output \.{\mapsto}
obj ]
\@x{\@s{110.27} \.{\ELSE} [ newState \.{\mapsto} [ obj {\EXCEPT} {\bang} [ i
] \.{=} cmd ] ,\,
\@x{\@s{144.36} output\@s{11.04} \.{\mapsto} NotRegVal ]
\@pvspace{8.0pt
\@x{ {\VARIABLES} mem ,\, interface ,\, istate
\@pvspace{8.0pt
\@x{ {\INSTANCE} Linearizability {\WITH} ObjValues \.{\leftarrow} MemVals ,\,
InitObj \.{\leftarrow} InitMem ,\,
\@x{\@s{140.48} object \.{\leftarrow} mem
\@pvspace{8.0pt
\@x{ {\ASSUME} LinearAssumps
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{LinearSnapshot}.} \label{fig:LinearSnapshot}
\end{figure}
\subsection{The Simplified Afek et al. Snapshot Algorithm}
The snapshot algorithm of Afek et al.\ uses an internal variable $imem$
whose value is an array with $imem[i]$ a pair consisting of the value
of the $i$\tth\ register and an integer whose value is the number of
times the register has been written. It assumes that the entire pair
can be read and written atomically.
A write operation writes the register value $cmd$ in the obvious way,
the $DoOp(i)$ action setting $imem[i]$ to $<<cmd, imem[i][2]+1>>$.
A read operation first performs the following \emph{scan} procedure:
\begin{display}
It reads all the elements $imem[i]$ once, in any order. It then reads
them a second time, again in any order. If it reads the same values
both times for all $i$, it outputs the array of register values it
read.
\end{display}
If the values obtained for each element $imem[i]$ by the two reads are
not all the same, so the scan procedure does not produce an output,
then the procedure is repeated. The scan procedure is repeated again
and again until it produces an output.
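The double-collect scan can be illustrated by the following Python sketch. It is not part of the \tlaplus\ specification; the names $collect$, $scan$, and the $between$ hook are hypothetical, and $between$ merely models a writer step interleaved between the two collects.

```python
def collect(imem, writers):
    # Read all the elements imem[i] once, in some order.
    return {i: imem[i] for i in writers}

def scan(imem, writers, between=lambda: None):
    """One scan attempt: two collects, with an optional concurrent
    step (`between`) interleaved to model a writer."""
    first = collect(imem, writers)
    between()                      # a writer may run here
    second = collect(imem, writers)
    if first == second:
        # Output the array of register values (first pair component).
        return {i: first[i][0] for i in writers}
    return None                    # scan failed; the caller repeats it
```

With no interleaved write the two collects agree and the scan succeeds; a write between the collects makes it fail, so the reader must try again.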
The actual algorithm has an alternative method of producing an output
that can be used when it has read three different values for
$imem[i]$, for some writer~$i$. By using this method, termination of
the read is assured. However, for simplicity, we use an algorithm
that keeps performing the scan procedure until it succeeds in
producing an output. Thus, a read need never terminate, so the
algorithm does not satisfy the liveness requirement of a snapshot
algorithm---namely, it does not satisfy the weak fairness requirement
of the $DoOp(i)$ action for a reader~$i$. However, it does satisfy
the safety requirement. The correctness of the complete algorithm
(including liveness) can be verified by essentially the same method
used for our simplified version; but the complete algorithm is more
complicated, so the refinement mapping is more complicated and model
checking takes longer. We therefore consider only the simplified
algorithm.
To specify the algorithm in \tlaplus, we declare the same constants
$Readers$, $Writers$, $RegVals$, and $InitRegVal$ and make the same
definitions of $MemVals$, $InitMem$, $NotMemVal$, and $NotRegVal$ as
in module $LinearSnapshot$ above. We also define:
\begin{display}
\begin{notla}
IRegVals == RegVals \X Nat
IMemVals == [Writers -> IRegVals]
InitIMem == [i \in Writers |-> <<InitRegVal, 0>> ]
\end{notla}
\begin{tlatex}
\@x{ IRegVals\@s{6.27} \.{\defeq} RegVals \.{\times} Nat
\@x{ IMemVals \.{\defeq} [ Writers \.{\rightarrow} IRegVals ]
\@x{ InitIMem\@s{2.60} \.{\defeq} [ i \.{\in} Writers \.{\mapsto} {\langle}
InitRegVal ,\, 0 {\rangle} ]
\end{tlatex}
\end{display}
We declare five variables, with the following meanings:
\begin{display}
\begin{description}
\item[$interface:$] The same as in $LinearSnapshot$.
\item[$imem:$] Like $mem$ in $LinearSnapshot$, except $imem[i]$ is an
ordered pair in $RegVals\X Nat$, the first component representing
$mem[i]$ and the second the number of times $mem[i]$ has been written.
The initial value of $imem$ is $InitIMem$.
\item[$wrNum:$] A function with domain $Writers$, where $wrNum[i]$ is
the number of $BeginWr(i)$ steps that have been taken.
\item[$rdVal1$, $rdVal2:$] They are functions such that $rdVal1[i]$
and $rdVal2[i]$ describe the values read so far by reader $i$ in the
two reads of the \emph{scan} procedure. Both $rdVal1[i]$ and
$rdVal2[i]$ are functions whose domain is the set of writers $j$ for
which the first or second read of $imem[j]$ has been performed,
mapping each such $j$ to the value read. They are set initially to
the empty function (the function with empty domain), which we write
$<<\,>>$.
\end{description}
\end{display}
The writer actions are straightforward. Note that because $wrNum[i]$
counts the number of $BeginWr(i,cmd)$ steps and $imem[i][2]$ is set to
$wrNum[i]$ by the $DoWr(i)$ action, the $EndWr(i)$ action should be
enabled and $DoWr(i)$ disabled when $imem[i][2]$ equals $wrNum[i]$.
\begin{display}
\begin{notla}
BeginWr(i, cmd) == /\ interface[i] = NotRegVal
/\ wrNum' = [wrNum EXCEPT ![i] = wrNum[i] + 1]
/\ interface' = [interface EXCEPT ![i] = cmd]
/\ UNCHANGED <<imem, rdVal1, rdVal2>>
DoWr(i) == /\ interface[i] \in RegVals
/\ imem[i][2] # wrNum[i]
/\ imem' = [imem EXCEPT ![i] = <<interface[i], wrNum[i]>>]
/\ UNCHANGED <<interface, wrNum, rdVal1, rdVal2>>
EndWr(i) == /\ interface[i] \in RegVals
/\ imem[i][2] = wrNum[i]
/\ interface' = [interface EXCEPT ![i] = NotRegVal]
/\ UNCHANGED <<imem, wrNum, rdVal1, rdVal2>>
\end{notla}
\begin{tlatex}
\@x{ BeginWr ( i ,\, cmd ) \.{\defeq} \.{\land} interface [ i ] \.{=}
NotRegVal
\@x{\@s{95.48} \.{\land} wrNum \.{'} \.{=} [ wrNum {\EXCEPT} {\bang} [ i ]
\.{=} wrNum [ i ] \.{+} 1 ]
\@x{\@s{95.48} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} cmd ]
\@x{\@s{95.48} \.{\land} {\UNCHANGED} {\langle} imem ,\, rdVal1 ,\, rdVal2
{\rangle}
\@pvspace{8.0pt
\@x{ DoWr ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} RegVals
\@x{\@s{58.69} \.{\land} imem [ i ] [ 2 ] \.{\neq} wrNum [ i ]
\@x{\@s{58.69} \.{\land} imem \.{'} \.{=} [ imem {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} interface [ i ] ,\, wrNum [ i ] {\rangle} ]
\@x{\@s{58.69} \.{\land} {\UNCHANGED} {\langle} interface ,\, wrNum ,\,
rdVal1 ,\, rdVal2 {\rangle}
\@pvspace{8.0pt
\@x{ EndWr ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} RegVals
\@x{\@s{63.55} \.{\land} imem [ i ] [ 2 ] \.{=} wrNum [ i ]
\@x{\@s{63.55} \.{\land} interface \.{'}\@s{3.73} \.{=} [ interface {\EXCEPT}
{\bang} [ i ] \.{=} NotRegVal ]
\@x{\@s{63.55} \.{\land} {\UNCHANGED} {\langle} imem ,\, wrNum ,\, rdVal1 ,\,
rdVal2 {\rangle}
\@pvspace{8.0pt
\end{tlatex}
\end{display}
The $BeginRd(i)$ action is straightforward.
\begin{display}
\begin{notla}
BeginRd(i) == /\ interface[i] \in MemVals
/\ interface' = [interface EXCEPT ![i] = NotMemVal]
/\ UNCHANGED <<imem, wrNum, rdVal1, rdVal2>>
\end{notla}
\begin{tlatex}
\@x{ BeginRd ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} MemVals
\@x{\@s{68.09} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} NotMemVal ]
\@x{\@s{68.09} \.{\land} {\UNCHANGED} {\langle} imem ,\, wrNum ,\, rdVal1 ,\,
rdVal2 {\rangle}
\end{tlatex}
\end{display}
The definitions of the actions that perform the \emph{scan} procedure
use the following definition. We define $AddToFcn(f, x, v)$ to
be the function $g$ obtained from the function $f$ by adding $x$ to its
domain and defining $g[x]$ to equal $v$. Using operators defined in
the $TLC$ module, it can be defined to equal $f@@(x :> v)$. However,
it's easy enough to define it directly as:
\begin{display}
\begin{notla}
AddToFcn(f, x, v) ==
[y \in (DOMAIN f) \cup {x} |-> IF y = x THEN v ELSE f[y]]
\end{notla}
\begin{tlatex}
\@x{ AddToFcn ( f ,\, x ,\, v ) \.{\defeq}
\@x{\@s{20.5} [ y \.{\in} ( {\DOMAIN} f ) \.{\cup} \{ x \} \.{\mapsto} {\IF}
y \.{=} x \.{\THEN} v \.{\ELSE} f [ y ] ]
\end{tlatex}
\end{display}
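Since \tlaplus\ functions with finite domains behave like dictionaries, $AddToFcn$ has a direct Python analog. The name $add\_to\_fcn$ below is hypothetical; the sketch is only meant to illustrate the definition.

```python
def add_to_fcn(f, x, v):
    """Python analog of AddToFcn(f, x, v): a new function (dict)
    equal to f except that x is (also) mapped to v."""
    g = dict(f)  # copy f, leaving the original unchanged
    g[x] = v     # add x to the domain (or overwrite f[x]) with value v
    return g
```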
Using $AddToFcn$, we define the $Rd1$ action that performs the scan's
first read of $imem$ and the $Rd2$ action that performs its second
read.
\begin{display}
\begin{notla}
Rd1(i) == /\ interface[i] = NotMemVal
/\ \E j \in Writers \ DOMAIN rdVal1[i] :
rdVal1' = [rdVal1 EXCEPT
![i] = AddToFcn(rdVal1[i], j, imem[j])]
/\ UNCHANGED <<interface, imem, wrNum, rdVal2>>
Rd2(i) == /\ interface[i] = NotMemVal
/\ DOMAIN rdVal1[i] = Writers
/\ \E j \in Writers \ DOMAIN rdVal2[i] :
rdVal2' = [rdVal2 EXCEPT
![i] = AddToFcn(rdVal2[i], j, imem[j])]
/\ UNCHANGED <<interface, imem, wrNum, rdVal1>>
\end{notla}
\begin{tlatex}
\@x{ Rd1 ( i ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{48.67} \.{\land} \E\, j \.{\in} Writers \.{\,\backslash\,} {\DOMAIN}
rdVal1 [ i ] \.{:}
\@x{\@s{67.01} rdVal1 \.{'} \.{=} [ rdVal1 {\EXCEPT}
\@x{\@s{119.21} {\bang} [ i ] \.{=} AddToFcn ( rdVal1 [ i ] ,\, j ,\, imem [
j ] ) ]
\@x{\@s{48.67} \.{\land} {\UNCHANGED} {\langle} interface ,\, imem ,\, wrNum
,\, rdVal2 {\rangle}
\@pvspace{8.0pt
\@x{ Rd2 ( i ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{48.67} \.{\land} {\DOMAIN} rdVal1 [ i ] \.{=} Writers
\@x{\@s{48.67} \.{\land} \E\, j \.{\in} Writers \.{\,\backslash\,} {\DOMAIN}
rdVal2 [ i ] \.{:}
\@x{\@s{67.01} rdVal2 \.{'} \.{=} [ rdVal2 {\EXCEPT}
\@x{\@s{119.21} {\bang} [ i ] \.{=} AddToFcn ( rdVal2 [ i ] ,\, j ,\, imem [
j ] ) ]
\@x{\@s{48.67} \.{\land} {\UNCHANGED} {\langle} interface ,\, imem ,\, wrNum
,\, rdVal1 {\rangle}
\end{tlatex}
\end{display}
Finally, we define $TryEndRd(i)$ to be an action that is enabled when
the reader's scan operation has completed. It compares the values
read by the two sets of reads and, if they are equal, it performs
the $EndOp$ for the read. Otherwise, it enables the next scan to begin.
\begin{display}
\begin{notla}
TryEndRd(i) == /\ interface[i] = NotMemVal
/\ DOMAIN rdVal1[i] = Writers
/\ DOMAIN rdVal2[i] = Writers
/\ IF rdVal1[i] = rdVal2[i]
THEN interface' =
[interface EXCEPT
![i] = [j \in Writers |-> rdVal1[i][j][1]] ]
ELSE interface' = interface
/\ rdVal1' = [rdVal1 EXCEPT ![i] = << >>]
/\ rdVal2' = [rdVal2 EXCEPT ![i] = << >>]
/\ UNCHANGED <<imem, wrNum>>
\end{notla}
\begin{tlatex}
\@x{ TryEndRd ( i ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{76.65} \.{\land} {\DOMAIN} rdVal1 [ i ] \.{=} Writers
\@x{\@s{76.65} \.{\land} {\DOMAIN} rdVal2 [ i ] \.{=} Writers
\@x{\@s{76.65} \.{\land} {\IF} rdVal1 [ i ] \.{=} rdVal2 [ i ]
\@x{\@s{95.96} \.{\THEN} interface \.{'} \.{=}
\@x{\@s{139.57} [ interface {\EXCEPT}
\@x{\@s{146.45} {\bang} [ i ] \.{=} [ j \.{\in} Writers \.{\mapsto} rdVal1 [
i ] [ j ] [ 1 ] ] ]
\@x{\@s{95.96} \.{\ELSE} interface \.{'} \.{=} interface
\@x{\@s{76.65} \.{\land} rdVal1 \.{'} \.{=} [ rdVal1 {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} {\rangle} ]
\@x{\@s{76.65} \.{\land} rdVal2 \.{'} \.{=} [ rdVal2 {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} {\rangle} ]
\@x{\@s{76.65} \.{\land} {\UNCHANGED} {\langle} imem ,\, wrNum {\rangle}
\end{tlatex}
\end{display}
The complete specification is in module $AfekSimplified$, shown in
\lref{targ:AfekSimplified}{Figure~\ref{fig:AfekSimplified}} with the action definitions
above elided.
\begin{figure} \target{targ:AfekSimplified}
\begin{notla}
--------------------------- MODULE AfekSimplified ---------------------------
EXTENDS Integers
CONSTANTS Readers, Writers, RegVals, InitRegVal
MemVals == [Writers -> RegVals]
InitMem == [i \in Writers |-> InitRegVal]
NotMemVal == CHOOSE v : v \notin MemVals
NotRegVal == CHOOSE v : v \notin RegVals
IRegVals == RegVals \X Nat
IMemVals == [Writers -> IRegVals]
InitIMem == [i \in Writers |-> <<InitRegVal, 0>> ]
VARIABLES imem, interface, wrNum, rdVal1, rdVal2
vars == <<imem, interface, wrNum, rdVal1, rdVal2>>
Init == /\ imem = InitIMem
/\ interface = [i \in Readers \cup Writers |->
IF i \in Readers THEN InitMem ELSE NotRegVal]
/\ wrNum = [i \in Writers |-> 0]
/\ rdVal1 = [i \in Readers |-> << >>]
/\ rdVal2 = [i \in Readers |-> << >>]
BeginWr(i, cmd) == ...
DoWr(i) == ...
EndWr(i) == ...
BeginRd(i) == ...
AddToFcn(f, x, v) == ...
Rd1(i) == ...
Rd2(i) == ...
TryEndRd(i) == ...
Next == \/ \E i \in Readers : BeginRd(i) \/ Rd1(i) \/ Rd2(i) \/ TryEndRd(i)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWr(i, cmd)
\/ DoWr(i) \/ EndWr(i)
Spec == Init /\ [][Next]_vars
=============================================================================
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} AfekSimplified}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers
\@pvspace{8.0pt
\@x{ {\CONSTANTS} Readers ,\, Writers ,\, RegVals ,\, InitRegVal
\@pvspace{8.0pt
\@x{ MemVals \.{\defeq} [ Writers \.{\rightarrow} RegVals ]
\@x{ InitMem\@s{2.60} \.{\defeq} [ i \.{\in} Writers \.{\mapsto} InitRegVal
]
\@x{ NotMemVal \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} MemVals
\@x{ NotRegVal\@s{6.27} \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} RegVals
\@pvspace{8.0pt
\@x{ IRegVals\@s{6.27} \.{\defeq} RegVals \.{\times} Nat
\@x{ IMemVals \.{\defeq} [ Writers \.{\rightarrow} IRegVals ]
\@x{ InitIMem\@s{2.60} \.{\defeq} [ i \.{\in} Writers \.{\mapsto} {\langle}
InitRegVal ,\, 0 {\rangle} ]
\@pvspace{8.0pt
\@x{ {\VARIABLES} imem ,\, interface ,\, wrNum ,\, rdVal1 ,\, rdVal2
\@x{ vars \.{\defeq} {\langle} imem ,\, interface ,\, wrNum ,\, rdVal1 ,\,
rdVal2 {\rangle}
\@pvspace{8.0pt
\@x{ Init\@s{2.02} \.{\defeq} \.{\land} imem \.{=} InitIMem
\@x{\@s{37.72} \.{\land} interface \.{=} [ i \.{\in} Readers \.{\cup} Writers
\.{\mapsto}
\@x{\@s{107.47} {\IF} i \.{\in} Readers \.{\THEN} InitMem \.{\ELSE} NotRegVal
]
\@x{\@s{37.72} \.{\land} wrNum \.{=} [ i \.{\in} Writers \.{\mapsto} 0 ]
\@x{\@s{37.72} \.{\land} rdVal1 \.{=} [ i \.{\in} Readers \.{\mapsto}
{\langle} {\rangle} ]
\@x{\@s{37.72} \.{\land} rdVal2 \.{=} [ i \.{\in} Readers \.{\mapsto}
{\langle} {\rangle} ]
\@pvspace{8.0pt
\@x{ BeginWr ( i ,\, cmd ) \.{\defeq} \.{\dots}
\@x{ DoWr ( i ) \.{\defeq} \.{\dots}
\@x{ EndWr ( i ) \.{\defeq} \.{\dots}
\@x{ BeginRd ( i ) \.{\defeq} \.{\dots}
\@x{ AddToFcn ( f ,\, x ,\, v ) \.{\defeq} \.{\dots}
\@x{ Rd1 ( i ) \.{\defeq} \.{\dots}
\@x{ Rd2 ( i ) \.{\defeq} \.{\dots}
\@x{ TryEndRd ( i ) \.{\defeq} \.{\dots}
\@pvspace{8.0pt
\@x{ Next \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} BeginRd ( i )
\.{\lor} Rd1 ( i ) \.{\lor} Rd2 ( i ) \.{\lor} TryEndRd ( i )
\@x{\@s{39.83} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWr ( i ,\, cmd )
\@x{\@s{118.73} \.{\lor} DoWr ( i ) \.{\lor} EndWr ( i )
\@pvspace{8.0pt
\@x{ Spec\@s{1.46} \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars}
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{AfekSimplified}} \label{fig:AfekSimplified}
\end{figure}
\subsection{Another Snapshot Specification} \label{sec:another-snapshot}
The algorithm in module $AfekSimplified$ satisfies the safety
specification in $LinearSnapshot$, but we now show that it does not
implement that safety specification under any refinement mapping. Let
$Spec_{A}$ be the algorithm's specification and let $SSpec_{L}$ be the
specification $SafeSpec$ of $LinearSnapshot$. We assume there is a
refinement mapping $mem<-\ov{mem}$ and $istate <- \ov{istate}$ under
which $Spec_{A}$ implements $SSpec_{L}$ and obtain a contradiction.
Let $\ov{F}$ be the formula obtained from a formula $F$ of module
$LinearSnapshot$ by replacing $mem$ with $\ov{mem}$ and $istate$ with
$\ov{istate}$. Consider a behavior satisfying $Spec_{A}$ that begins
with the following three sequences of steps.
\begin{enumerate}
\item
\begin{sloppypar}
Reader $i$ does a $BeginRd(i)$ step, completes its first scan
(so $\DOMAIN rdVal1{[i]}$ equals $Writers$) and begins its second scan by
reading $imem[j]=<<v_{1},0>>$ for some writer $j$ and $v_{1}$ in
$RegVals$ (so $\DOMAIN rdVal2{[i]}$ equals $\{j\}$ and $rdVal2{[i]}[j]$ equals
$<<v_{1},0>>$).
\end{sloppypar}
\item Writer $j$ then does a complete write operation, writing a new
value $v_{2}$ different from $v_{1}$.
\item Reader $i$ completes its second scan, executes its $TryEndRd(i)$
action, finding $rdVal2[i]$ equal to $rdVal1[i]$, and completing the
read operation by setting $interface[i]$ to a value $M$ with
$M[j]=v_{1}$.
\end{enumerate}
The behavior satisfies \ov{SSpec_{L}}, so this sequence of actions
must start with a \ov{BeginRd(i)} step, contain a \ov{DoRd(i)} step,
and end with an \ov{EndRd(i)} step. The reader has not determined the
value to be output by the read command until it has finished its
second scan, so the \ov{DoRd(i)} step must occur in sequence~3. The
three steps of the write of $v_{2}$ by writer $j$ occur in sequence~2,
so the \ov{DoWr(j)} step for that write must occur in that sequence,
therefore preceding the \ov{DoRd(i)} step. Hence, the
$LinearSnapshot$ spec implies that the \ov{DoRd(i)} step must set
$\ov{istate}[i][j]$ to $v_{2}$. However, in the last step of
sequence~3, the reader sets the value of $interface[i][j]$ to
$v_{1}$, which implies that the \ov{DoRd(i)} step set
$\ov{istate}[i][j]$ to $v_{1}$. Since $v_{1} \neq v_{2}$, this is a
contradiction, showing that the refinement mapping cannot exist.
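The three sequences of steps can be replayed concretely in the following self-contained Python sketch (the variable names mirror the spec, but the sketch itself is hypothetical). Writer $j$'s write of $v_{2}$ completes entirely inside the read, yet the reader's two collects still agree and its output contains the old value $v_{1}$.

```python
# Two writers j and k; imem[i] is a (register value, write count) pair.
imem = {"j": ("v1", 0), "k": ("u", 0)}

# 1. Reader i completes its first collect, then begins the second
#    collect by reading imem[j] = ("v1", 0).
rdVal1 = dict(imem)
rdVal2 = {"j": imem["j"]}

# 2. Writer j performs a complete write (BeginWr, DoWr, EndWr) of v2.
imem["j"] = ("v2", 1)

# 3. Reader i finishes the second collect; the two collects agree,
#    so TryEndRd outputs the array of register values from rdVal1.
rdVal2["k"] = imem["k"]
assert rdVal1 == rdVal2
output = {w: rdVal1[w][0] for w in rdVal1}
# output["j"] equals "v1", although the write of "v2" has completed.
```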
This behavior of $Spec_{A}$ is allowed by \tlabox{\EE
mem,istate:SSpec_{L}}, since we can choose values of $mem$ and $istate$
for which $SSpec_{L}$ is satisfied---namely, values for which the
\ov{DoRd(i)} step occurs before the \ov{DoWr(j)} step. However,
choosing those values requires knowing what steps occur after the
\ov{DoWr(j)} step. The linearizability specification $SSpec_{L}$
chooses the value returned by a read sooner than it has to. This
tells us that to find a refinement mapping that shows $Spec_{A}$
implements \tlabox{\EE mem,istate:SSpec_{L}}, we must add a prophecy
variable to $Spec_{A}$.
Instead of adding a prophecy variable to $Spec_{A}$, we write a new
snapshot specification $Spec_{NL}$ that allows the same externally
visible behaviors as specification $Spec_{L}$ of $LinearSnapshot$, and
whose safety specification $SSpec_{NL}$ allows the same visible
behaviors as $SSpec_{L}$. However, in $Spec_{NL}$ we make a reader
wait as long as possible before choosing its output value. We can
then find a refinement mapping to show that $Spec_{A}$ implements
$SSpec_{NL}$ without using a prophecy variable.
We will still need a prophecy variable to show that $SSpec_{NL}$
allows the same externally visible behavior as $SSpec_{L}$. The
advantage of introducing $Spec_{NL}$ is that the specification of what
an algorithm is supposed to do is generally much simpler than the
algorithm. Prophecy variables are the most complicated kind of
auxiliary variables, and it is easier to add one to a high-level
specification than to a lower-level algorithm. (This same idea of
modifying the high-level specification to avoid adding a prophecy
variable to the algorithm can be applied to the queue example of
Herlihy and Wing~\cite{herlihy:axioms}.)
Specification $Spec_{NL}$ records in its internal state all values of
the memory $mem$ that a read operation is allowed to return. The
$EndRd$ operation nondeterministically chooses one of those values as
its output. Its internal state therefore remembers much more about
what happened in the past than a reasonable implementation would. This
means that defining a refinement mapping under which an algorithm
implements $Spec_{NL}$ will require adding a history variable to the
algorithm's spec. Adding a history variable is much easier than
adding a prophecy variable.
We write specification $Spec_{NL}$ (and $SSpec_{NL}$) in module
$NewLinearSnapshot$. It has the same declarations of $Readers$,
$Writers$, $RegVals$, and $InitRegVal$ and the same definitions of
$MemVals$, $InitMem$, $NotMemVal$, and $NotRegVal$ as in module
$LinearSnapshot$. It has the same variables $interface$ and $mem$ as
module $LinearSnapshot$, plus these two internal variables:
\begin{describe}{$wstate$}
\item[$wstate$] A function with domain $Writers$ such that the value
$wstate[i]$ is the same as the value of $istate[i]$ in
$LinearSnapshot$, for each writer $i$.
\item[$rstate$] A function with domain $Readers$ such that, for each
reader $i$ currently executing a read operation, $rstate[i]$ is the
sequence of values that $mem$ has assumed thus far while the operation
has been executing. The first element of $rstate[i]$ is therefore the
value $mem$ had when the $BeginRd(i)$ step occurred. The value of
$rstate[i]$ is the empty sequence $<<\,>>$ when $i$ is not executing a
read operation.
\end{describe}
The $BeginWr$ command is essentially the same as in $LinearSnapshot$.
\begin{display}
\begin{notla}
BeginWr(i, cmd) == /\ interface[i] = NotRegVal
/\ interface' = [interface EXCEPT ![i] = cmd]
/\ wstate' = [wstate EXCEPT ![i] = cmd]
/\ UNCHANGED <<mem, rstate>>
\end{notla}
\begin{tlatex}
\@x{ BeginWr ( i ,\, cmd ) \.{\defeq} \.{\land} interface [ i ] \.{=}
NotRegVal
\@x{\@s{95.48} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} cmd ]
\@x{\@s{95.48} \.{\land} wstate \.{'} \.{=} [ wstate {\EXCEPT} {\bang} [ i ]
\.{=} cmd ]
\@x{\@s{95.48} \.{\land} {\UNCHANGED} {\langle} mem ,\, rstate {\rangle}
\end{tlatex}
\end{display}
The $BeginRd(i)$ command, which sets $rstate[i]$ to a one-element sequence
containing the current value of $mem$, is:
\begin{display}
\begin{notla}
BeginRd(i) == /\ interface[i] \in MemVals
/\ interface' = [interface EXCEPT ![i] = NotMemVal]
/\ rstate' = [rstate EXCEPT ![i] = <<mem>>]
/\ UNCHANGED <<mem, wstate>>
\end{notla}
\begin{tlatex}
\@x{ BeginRd ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} MemVals
\@x{\@s{68.09} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} NotMemVal ]
\@x{\@s{68.09} \.{\land} rstate \.{'} \.{=} [ rstate {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} mem {\rangle} ]
\@x{\@s{68.09} \.{\land} {\UNCHANGED} {\langle} mem ,\, wstate {\rangle}
\end{tlatex}
\end{display}
The writer executes a $DoWr$ that is the same as in $LinearSnapshot$, except
that it also appends the new value of $mem$ to the end of $rstate[j]$
for every reader $j$ currently executing a read operation.
\begin{display}
\begin{notla}
DoWr(i) == /\ interface[i] \in RegVals
/\ wstate[i] = interface[i]
/\ mem' = [mem EXCEPT ![i] = interface[i]]
/\ wstate' = [wstate EXCEPT ![i] = NotRegVal]
/\ rstate' = [j \in Readers |->
IF rstate[j] = << >>
THEN << >>
ELSE Append(rstate[j], mem')]
/\ interface' = interface
\end{notla}
\begin{tlatex}
\@x{ DoWr ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} RegVals
\@x{\@s{58.69} \.{\land} wstate [ i ] \.{=} interface [ i ]
\@x{\@s{58.69} \.{\land} mem \.{'} \.{=} [ mem {\EXCEPT} {\bang} [ i ] \.{=}
interface [ i ] ]
\@x{\@s{58.69} \.{\land} wstate \.{'} \.{=} [ wstate {\EXCEPT} {\bang} [ i ]
\.{=} NotRegVal ]
\@x{\@s{58.69} \.{\land} rstate \.{'}\@s{2.42} \.{=} [ j \.{\in} Readers
\.{\mapsto}
\@x{\@s{125.17} {\IF} rstate [ j ] \.{=} {\langle} {\rangle}
\@x{\@s{133.37} \.{\THEN} {\langle} {\rangle}
\@x{\@s{133.37} \.{\ELSE} Append ( rstate [ j ] ,\, mem \.{'} ) ]
\@x{\@s{58.69} \.{\land} interface \.{'} \.{=} interface
\end{tlatex}
\end{display}
A reader $i$ has no internal actions, only the externally visible
$BeginRd(i)$ and $EndRd(i)$ actions. Its $EndRd(i)$ action outputs an
arbitrarily chosen element of $rstate[i]$.
\begin{display}
\begin{notla}
EndRd(i) == /\ interface[i] = NotMemVal
/\ \E j \in 1..Len(rstate[i]) :
interface' = [interface EXCEPT ![i] = rstate[i][j]]
/\ rstate' = [rstate EXCEPT ![i] = << >>]
/\ UNCHANGED <<mem, wstate>>
\end{notla}
\begin{tlatex}
\@x{ EndRd ( i ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{61.19} \.{\land} \E\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:}
\@x{\@s{83.62} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang} [ i ]
\.{=} rstate [ i ] [ j ] ]
\@x{\@s{61.19} \.{\land} rstate \.{'} \.{=} [ rstate {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} {\rangle} ]
\@x{\@s{61.19} \.{\land} {\UNCHANGED} {\langle} mem ,\, wstate {\rangle}
\end{tlatex}
\end{display}
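The interplay of $BeginRd$, $DoWr$, and $EndRd$ on $rstate$ can be sketched in Python as follows. This is an illustrative sketch, not the specification: the helper names are hypothetical, and $random.choice$ stands in for the nondeterministic choice made by $EndRd$.

```python
import random

def begin_rd(rstate, i, mem):
    # BeginRd(i): record the current memory as a one-element sequence.
    rstate[i] = [dict(mem)]

def do_wr(mem, rstate, i, val):
    # DoWr(i): update mem and append the new mem to rstate[j] for
    # every reader j currently executing a read operation.
    mem[i] = val
    for r in rstate:
        if rstate[r]:
            rstate[r].append(dict(mem))

def end_rd(rstate, i):
    # EndRd(i): output an arbitrarily chosen recorded memory value
    # and reset rstate[i] to the empty sequence.
    out = random.choice(rstate[i])
    rstate[i] = []
    return out
```

A read that spans one write may thus return either the memory value at the $BeginRd$ step or the value after the write, matching linearizability.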
The writer's $EndWr$ action is essentially the same as in
$LinearSnapshot$.
\begin{display}
\begin{notla}
EndWr(i) == /\ interface[i] \in RegVals
/\ wstate[i] = NotRegVal
/\ interface' = [interface EXCEPT ![i] = wstate[i]]
/\ UNCHANGED <<mem, rstate, wstate>>
\end{notla}
\begin{tlatex}
\@x{ EndWr ( i ) \.{\defeq} \.{\land} interface [ i ] \.{\in} RegVals
\@x{\@s{63.55} \.{\land} wstate [ i ] \.{=} NotRegVal
\@x{\@s{63.55} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} wstate [ i ] ]
\@x{\@s{63.55} \.{\land} {\UNCHANGED} {\langle} mem ,\, rstate ,\, wstate
{\rangle}
\end{tlatex}
\end{display}
The complete module, minus the action definitions above, is in
\lref{targ:NewLinearSnapshot}{Figure~\ref{fig:NewLinearSnapshot}}.
\begin{figure} \target{targ:NewLinearSnapshot}
\begin{notla}
------------------------- MODULE NewLinearSnapshot -------------------------
EXTENDS Integers, Sequences
CONSTANTS Readers, Writers, RegVals, InitRegVal
ASSUME /\ Readers \cap Writers = {}
/\ InitRegVal \in RegVals
InitMem == [i \in Writers |-> InitRegVal]
MemVals == [Writers -> RegVals]
NotMemVal == CHOOSE v : v \notin MemVals
NotRegVal == CHOOSE v : v \notin RegVals
VARIABLES mem, interface, rstate, wstate
vars == <<mem, interface, rstate, wstate>>
Init == /\ mem = InitMem
/\ interface = [i \in Readers \cup Writers |->
IF i \in Readers THEN InitMem ELSE NotRegVal]
/\ rstate = [i \in Readers |-> << >>]
/\ wstate = [i \in Writers |-> NotRegVal]
BeginRd(i) == ...
BeginWr(i, cmd) == ...
DoWr(i) == ...
EndRd(i) == ...
EndWr(i) == ...
Next == \/ \E i \in Readers : BeginRd(i) \/ EndRd(i)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWr(i, cmd)
\/ DoWr(i) \/ EndWr(i)
SafeSpec == Init /\ [][Next]_vars
Fairness == /\ \A i \in Readers : WF_vars(EndRd(i))
/\ \A i \in Writers : WF_vars(DoWr(i)) /\ WF_vars(EndWr(i))
Spec == Init /\ [][Next]_vars /\ Fairness
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} NewLinearSnapshot}\moduleRightDash\@xx{
\@x{ {\EXTENDS} Integers ,\, Sequences
\@pvspace{8.0pt
\@x{ {\CONSTANTS} Readers ,\, Writers ,\, RegVals ,\, InitRegVal
\@pvspace{8.0pt
\@x{ {\ASSUME} \.{\land} Readers\@s{4.1} \.{\cap} Writers \.{=} \{ \}
\@x{\@s{38.24} \.{\land} InitRegVal \.{\in} RegVals
\@pvspace{8.0pt
\@x{ InitMem\@s{2.60} \.{\defeq} [ i \.{\in} Writers \.{\mapsto} InitRegVal
]
\@x{ MemVals \.{\defeq} [ Writers \.{\rightarrow} RegVals ]
\@x{ NotMemVal \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} MemVals
\@x{ NotRegVal\@s{6.27} \.{\defeq} {\CHOOSE} v \.{:} v \.{\notin} RegVals
\@pvspace{8.0pt
\@x{ {\VARIABLES} mem ,\, interface ,\, rstate ,\, wstate
\@x{ vars \.{\defeq} {\langle} mem ,\, interface ,\, rstate ,\, wstate
{\rangle}
\@pvspace{16.0pt
\@x{ Init\@s{2.02} \.{\defeq} \.{\land} mem \.{=} InitMem
\@x{\@s{37.72} \.{\land} interface \.{=} [ i \.{\in} Readers \.{\cup} Writers
\.{\mapsto}
\@x{\@s{107.47} {\IF} i \.{\in} Readers \.{\THEN} InitMem \.{\ELSE} NotRegVal
]
\@x{\@s{37.72} \.{\land} rstate\@s{2.42} \.{=} [ i \.{\in} Readers
\.{\mapsto} {\langle} {\rangle} ]
\@x{\@s{37.72} \.{\land} wstate \.{=} [ i \.{\in} Writers\@s{0.49}
\.{\mapsto} NotRegVal ]
\@pvspace{8.0pt
\@x{ BeginRd ( i ) \.{\defeq} \.{\dots}
\@x{ BeginWr ( i ,\, cmd ) \.{\defeq} \.{\dots}
\@x{ DoWr ( i ) \.{\defeq} \.{\dots}
\@x{ EndRd ( i )\@s{2.35} \.{\defeq} \.{\dots}
\@x{ EndWr ( i ) \.{\defeq} \.{\dots}
\@pvspace{8.0pt
\@x{ Next \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} BeginRd ( i )
\.{\lor} EndRd ( i )
\@x{\@s{39.83} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWr ( i ,\, cmd )
\@x{\@s{118.73} \.{\lor} DoWr ( i ) \.{\lor} EndWr ( i )
\@pvspace{8.0pt
\@x{ SafeSpec \.{\defeq}\@s{4.1} Init \.{\land} {\Box} [ Next ]_{ vars}
\@pvspace{8.0pt
\@x{ Fairness\@s{0.49} \.{\defeq} \.{\land} \A\, i \.{\in} Readers \.{:}
{\WF}_{ vars} ( EndRd ( i ) )
\@x{\@s{56.76} \.{\land} \A\, i \.{\in} Writers\@s{0.49} \.{:} {\WF}_{ vars}
( DoWr ( i ) ) \.{\land} {\WF}_{ vars} ( EndWr ( i ) )
\@pvspace{8.0pt
\@x{ Spec \.{\defeq} Init \.{\land} {\Box} [ Next ]_{ vars} \.{\land}
Fairness
\end{tlatex}
\caption{Module \emph{NewLinearSnapshot}\label{fig:NewLinearSnapshot}}
\end{figure}
\subsection{\emph{NewLinearSnapshot} Implements \emph{LinearSnapshot}}
For compactness, in the following discussion we let:
\[ \begin{noj3}
\Sp_{L} & == & \tlabox{\EE mem, istate:Spec_{L}} \V{.2}
\Sp_{NL} & == & \tlabox{\EE mem, rstate, wstate : Spec_{NL}}
\end{noj3}\]
Specifications $\Sp_{L}$ and $\Sp_{NL}$ are equivalent. However, our
goal is to prove that $Spec_{A}$ implements $\Sp_{L}$, for which it
suffices to show that it implements specification $\Sp_{NL}$ and that
$\Sp_{NL}$ implements $\Sp_{L}$. So, we won't bother showing
equivalence of the two specs; we just show here that $\Sp_{NL}$
implements $\Sp_{L}$. We show in Section~\ref{sec:Afek-implements}
below that $Spec_{A}$ implements $\Sp_{NL}$.
To show that $\Sp_{NL}$ implements $\Sp_{L}$, we
add to $Spec_{NL}$ a prophecy variable $p$ and
then a stuttering variable $s$ to obtain a specification
$Spec_{NL}^{ps}$ such that
\tlabox{\EE s, p : Spec_{NL}^{ps}}
is equivalent to $Spec_{NL}$. We then show that
\tlabox{\EE s, p : Spec_{NL}^{ps}} implements $\Sp_{L}$ by showing
that
$Spec_{NL}^{ps}$ implements $Spec_{L}$ under
a suitable refinement mapping $mem<-\ov{mem}$, $istate <- \ov{istate}$.
The two auxiliary variables we add to $Spec_{NL}$ have the following
functions:
\begin{describe}{$s$}
\item[$p$] A prophecy variable that predicts for each reader $i$
which element of the sequence of memory values $rstate[i]$ will
be chosen as the output.
\item[$s$] A stuttering variable that adds:
\begin{itemize}
\item A single stuttering step after a $BeginRd(i)$ step if $p[i]$
predicts that the read will return the current value of memory. The
refinement mapping will be defined so that this stuttering step will be a
\ov{DoRd(i)} step.
\item Stuttering steps after a $DoWr(i)$ step that will implement
the \ov{DoRd(j)} step of every current read operation that
returns the value of $mem$ immediately after the $DoWr(i)$ step.
\end{itemize}
\end{describe}
Both these variables are added in a single module named
$NewLinearSnapshotPS$.
\subsubsection{Adding the Prophecy Variable}
The prophecy variable $p$ is a prophecy data structure variable as
described in Section~\ref{sec:proph-data-struct}. Its domain $Dom$
is the set of readers that are currently executing a read, which can be
described as the set of readers $i$ such that $rstate[i]$ is a
nonempty sequence. The value of $p[i]$ is a positive integer that
predicts which element of the list $rstate[i]$ will be chosen as the
output. This value can be arbitrarily large, since arbitrarily many
writes can occur during a read operation, so $\Pi$ is the set
$Nat \setminus \{0\}$.  Module $NewLinearSnapshotPS$ therefore begins
\begin{display}
\begin{notla}
EXTENDS NewLinearSnapshot
Pi == Nat \ {0}
Dom == {r \in Readers : rstate[r] # << >>}
INSTANCE Prophecy WITH DomPrime <- Dom'
\end{notla}
\begin{tlatex}
\@x{ {\EXTENDS} NewLinearSnapshot
\@pvspace{5.0pt
\@x{ Pi \.{\defeq} Nat \.{\,\backslash\,} \{ 0 \}
\@x{ Dom \.{\defeq} \{ r \.{\in} Readers \.{:} rstate [ r ] \.{\neq}
{\langle} {\rangle} \}
\@x{ {\INSTANCE} Prophecy {\WITH} DomPrime \.{\leftarrow} Dom \.{'}
\end{tlatex}
\end{display}
\begin{sloppypar} \noindent
It is most convenient to define $p$ in terms of a disjunctive representation
in which $EndRd(i)$ is decomposed into
\tlabox{\,\E \, j \in 1\dd Len(rstate[i]) : IEndRd(i,j)\,},
where $IEndRd$ can be defined by:
\end{sloppypar}
\begin{display}
\begin{notla}
IEndRd(i, j) == /\ interface[i] = NotMemVal
/\ interface' = [interface EXCEPT ![i] = rstate[i][j]]
/\ rstate' = [rstate EXCEPT ![i] = << >>]
/\ UNCHANGED <<mem, wstate>>
\end{notla}
\begin{tlatex}
\@x{ IEndRd ( i ,\, j ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{75.67} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} rstate [ i ] [ j ] ]
\@x{\@s{75.67} \.{\land} rstate \.{'} \.{=} [ rstate {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} {\rangle} ]
\@x{\@s{75.67} \.{\land} {\UNCHANGED} {\langle} mem ,\, wstate {\rangle}
\end{tlatex}
\end{display}
We could make the change in our original specification $NewLinearSnapshot$,
but instead we define a new next-state action $Nxt$ that is equivalent
to $Next$:
\begin{display}
\begin{notla}
Nxt == \/ \E i \in Readers : \/ BeginRd(i)
\/ \E j \in 1..Len(rstate[i]) : IEndRd(i,j)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWr(i, cmd)
\/ DoWr(i) \/ EndWr(i)
\end{notla}
\begin{tlatex}
\@x{ Nxt \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} \.{\lor} BeginRd (
i )
\@x{\@s{114.13} \.{\lor} \E\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:} IEndRd ( i ,\, j )
\@x{\@s{35.23} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWr ( i ,\, cmd )
\@x{\@s{114.13} \.{\lor} DoWr ( i ) \.{\lor} EndWr ( i )
\end{tlatex}
\end{display}
It's easy to see that $Nxt$ is equivalent to formula $Next$ of
$NewLinearSnapshot$, and TLAPS can easily check the following proof of their equivalence:
\begin{display}
\begin{notla}
THEOREM Next = Nxt
BY DEF Next, Nxt, EndRd, IEndRd
\end{notla}
\begin{tlatex}
\@x{ {\THEOREM} Next \.{=} Nxt
\@x{ {\BY} {\DEF} Next ,\, Nxt ,\, EndRd ,\, IEndRd
\end{tlatex}
\end{display}
A prediction is made for reader $i$ when the element $i$ is added to
$Dom$, which is done by a $BeginRd(i)$ step. The prediction is used
by the $IEndRd(i, j)$ action, allowing it to be performed only if
$p[i]$ has predicted that the $j$\tth\ item in the sequence
$rstate[i]$ will be output. The definitions of $Pred_{A}$, $PredDom_{A}$,
and $DomInj_{A}$ for the subactions $A$ are given along with the
beginning of the module in \lref{targ:NewLinearSnapshotPS1}{Figure~\ref{fig:NewLinearSnapshotPS1}}.
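To get an intuitive feel for how the prediction constrains a read
before wading through these definitions, here is a toy Python sketch
of one read overlapping several writes.  (This is only an illustration
of the idea, with hypothetical names; it is not a rendering of the
TLA+ semantics.)

```python
def read_with_prophecy(prediction, writes=(1, 2, 3)):
    """Toy model of one read overlapping several writes.

    `prediction` plays the role of p[i]: a 1-based index into the
    sequence rstate[i], fixed when the read begins.
    """
    mem = 0
    # BeginRd: record the current memory value and make the prediction.
    rstate = [mem]
    # DoWr steps occurring while the read is in progress append the
    # new memory value to rstate.
    for v in writes:
        mem = v
        rstate.append(mem)
    # IEndRd(i, j) is enabled only for the predicted index j = p[i].
    if not 1 <= prediction <= len(rstate):
        raise ValueError("the prophecy was never fulfilled")
    return rstate[prediction - 1]
```

With prediction $1$ the read returns the memory value current when it
began; with prediction $4$ it returns the value written by the last of
the three writes.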
\begin{figure} \target{targ:NewLinearSnapshotPS1}
\begin{notla}
------------------------ MODULE NewLinearSnapshotPS ------------------------
EXTENDS NewLinearSnapshot
Pi == Nat \ {0}
Dom == {r \in Readers : rstate[r] # << >>}
INSTANCE Prophecy WITH DomPrime <- Dom'
IEndRd(i, j) == /\ interface[i] = NotMemVal
/\ interface' = [interface EXCEPT ![i] = rstate[i][j]]
/\ rstate' = [rstate EXCEPT ![i] = << >>]
/\ UNCHANGED <<mem, wstate>>
Nxt == \/ \E i \in Readers : \/ BeginRd(i)
\/ \E j \in 1..Len(rstate[i]) : IEndRd(i,j)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWr(i, cmd)
\/ DoWr(i) \/ EndWr(i)
THEOREM Next = Nxt
BY DEF Next, Nxt, EndRd, IEndRd
PredBeginRd(p) == TRUE
PredDomBeginRd == {}
DomInjBeginRd == IdFcn(Dom)
PredIEndRd(p, i, j) == j = p[i]
PredDomIEndRd(i) == {i}
DomInjIEndRd == IdFcn(Dom')
PredBeginWr(p) == TRUE
PredDomBeginWr == {}
DomInjBeginWr == IdFcn(Dom)
PredDoWr(p) == TRUE
PredDomDoWr == {}
DomInjDoWr == IdFcn(Dom)
PredEndWr(p) == TRUE
PredDomEndWr == {}
DomInjEndWr == IdFcn(Dom)
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE}
NewLinearSnapshotPS}\moduleRightDash\@xx{
\@x{ {\EXTENDS} NewLinearSnapshot
\@pvspace{8.0pt
\@x{ Pi \.{\defeq} Nat \.{\,\backslash\,} \{ 0 \}
\@x{ Dom \.{\defeq} \{ r \.{\in} Readers \.{:} rstate [ r ] \.{\neq}
{\langle} {\rangle} \}
\@x{ {\INSTANCE} Prophecy {\WITH} DomPrime \.{\leftarrow} Dom \.{'}
\@pvspace{8.0pt
\@x{ IEndRd ( i ,\, j ) \.{\defeq} \.{\land} interface [ i ] \.{=} NotMemVal
\@x{\@s{75.67} \.{\land} interface \.{'} \.{=} [ interface {\EXCEPT} {\bang}
[ i ] \.{=} rstate [ i ] [ j ] ]
\@x{\@s{75.67} \.{\land} rstate \.{'} \.{=} [ rstate {\EXCEPT} {\bang} [ i ]
\.{=} {\langle} {\rangle} ]
\@x{\@s{75.67} \.{\land} {\UNCHANGED} {\langle} mem ,\, wstate {\rangle}
\@pvspace{8.0pt
\@x{ Nxt \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} \.{\lor} BeginRd (
i )
\@x{\@s{114.13} \.{\lor} \E\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:} IEndRd ( i ,\, j )
\@x{\@s{35.23} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWr ( i ,\, cmd )
\@x{\@s{114.13} \.{\lor} DoWr ( i ) \.{\lor} EndWr ( i )
\@pvspace{8.0pt
\@x{ {\THEOREM} Next \.{=} Nxt
\@x{ {\BY} {\DEF} Next ,\, Nxt ,\, EndRd ,\, IEndRd
\@pvspace{8.0pt
\@x{ PredBeginRd ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@x{ PredDomBeginRd \.{\defeq} \{ \}
\@x{ DomInjBeginRd\@s{7.14} \.{\defeq} IdFcn ( Dom )
\@pvspace{8.0pt
\@x{ PredIEndRd ( p ,\, i ,\, j ) \.{\defeq} j \.{=} p [ i ]
\@x{ PredDomIEndRd ( i ) \.{\defeq} \{ i \}
\@x{ DomInjIEndRd\@s{4.1} \.{\defeq} IdFcn ( Dom \.{'} )
\@pvspace{8.0pt
\@x{ PredBeginWr ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@x{ PredDomBeginWr \.{\defeq} \{ \}
\@x{ DomInjBeginWr\@s{7.14} \.{\defeq} IdFcn ( Dom )
\@pvspace{8.0pt
\@x{ PredDoWr ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@x{ PredDomDoWr \.{\defeq} \{ \}
\@x{ DomInjDoWr\@s{7.14} \.{\defeq} IdFcn ( Dom )
\@pvspace{8.0pt
\@x{ PredEndWr ( p )\@s{7.31} \.{\defeq} {\TRUE}
\@x{ PredDomEndWr \.{\defeq} \{ \}
\@x{ DomInjEndWr\@s{7.14} \.{\defeq} IdFcn ( Dom )
\end{tlatex}
\caption{Module \emph{NewLinearSnapshotPS}, part 1.}
\label{fig:NewLinearSnapshotPS1}
\end{figure}
The module next defines the temporal formula $Condition$, which should
be implied by $Spec$. There follows the definition of the
specification $SpecP$ obtained by adding the prophecy variable $p$ to
$Spec$. TLC can check that $Condition$ is implied by $Spec$, which
implies that \tlabox{\EE p: SpecP} is equivalent to $Spec$. These
definitions appear in \lref{targ:NewLinearSnapshotPS2}{Figure~\ref{fig:NewLinearSnapshotPS2}}.
\begin{figure} \target{targ:NewLinearSnapshotPS2}
\begin{notla}
Condition ==
[][ /\ \A i \in Readers :
/\ ProphCondition(BeginRd(i), DomInjBeginRd,
PredDomBeginRd, PredBeginRd)
/\ \A j \in 1..Len(rstate[i]) :
ProphCondition(IEndRd(i, j), DomInjIEndRd,
PredDomIEndRd(i),
LAMBDA p : PredIEndRd(p, i, j))
/\ \A i \in Writers :
/\ \A cmd \in RegVals :
ProphCondition(BeginWr(i, cmd), DomInjBeginWr,
PredDomBeginWr, PredBeginWr)
/\ ProphCondition(DoWr(i), DomInjDoWr, PredDomDoWr,
PredDoWr)
/\ ProphCondition(EndWr(i), DomInjEndWr, PredDomEndWr,
PredEndWr)
]_vars
VARIABLE p
varsP == <<vars, p>>
InitP == Init /\ (p = EmptyFcn)
BeginRdP(i) == ProphAction(BeginRd(i), p, p', DomInjBeginRd,
PredDomBeginRd, PredBeginRd)
BeginWrP(i, cmd) == ProphAction(BeginWr(i, cmd), p, p', DomInjBeginWr,
PredDomBeginWr, PredBeginWr)
DoWrP(i) == ProphAction(DoWr(i), p, p', DomInjDoWr,
PredDomDoWr, PredDoWr)
IEndRdP(i, j) == ProphAction(IEndRd(i, j), p, p', DomInjIEndRd,
PredDomIEndRd(i),
LAMBDA q : PredIEndRd(q, i, j))
EndWrP(i) == ProphAction(EndWr(i), p, p', DomInjEndWr,
PredDomEndWr, PredEndWr)
NextP == \/ \E i \in Readers : \/ BeginRdP(i)
\/ \E j \in 1..Len(rstate[i]) : IEndRdP(i,j)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWrP(i, cmd)
\/ DoWrP(i) \/ EndWrP(i)
SpecP == InitP /\ [][NextP]_varsP /\ Fairness
\end{notla}
\begin{tlatex}
\@x{ Condition \.{\defeq}
\@x{\@s{4.1} {\Box} [ \.{\land} \A\, i \.{\in} Readers \.{:}
\@x{\@s{33.66} \.{\land} ProphCondition ( BeginRd ( i ) ,\, DomInjBeginRd
,\,
\@x{\@s{118.43} PredDomBeginRd ,\, PredBeginRd )
\@x{\@s{33.66} \.{\land} \A\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:}
\@x{\@s{51.99} ProphCondition ( IEndRd ( i ,\, j ) ,\, DomInjIEndRd ,\,
\@x{\@s{125.66} PredDomIEndRd ( i ) ,\,
\@x{\@s{125.66} {\LAMBDA} p \.{:} PredIEndRd ( p ,\, i ,\, j ) )
\@x{\@s{14.35} \.{\land} \A\, i \.{\in} Writers \.{:}
\@x{\@s{33.66} \.{\land} \A\, cmd \.{\in} RegVals \.{:}
\@x{\@s{51.99} ProphCondition ( BeginWr ( i ,\, cmd ) ,\, DomInjBeginWr ,\,
\@x{\@s{125.66} PredDomBeginWr ,\, PredBeginWr )
\@x{\@s{33.66} \.{\land} ProphCondition ( DoWr ( i ) ,\, DomInjDoWr ,\,
PredDomDoWr ,\,
\@x{\@s{118.43} PredDoWr )
\@x{\@s{33.66} \.{\land} ProphCondition ( EndWr ( i ) ,\, DomInjEndWr ,\,
PredDomEndWr ,\,
\@x{\@s{118.43} PredEndWr )
\@x{\@s{11.57} ]_{ vars}
\@pvspace{4.0pt
\@x{ {\VARIABLE} p
\@x{ varsP \.{\defeq} {\langle} vars ,\, p {\rangle}
\@pvspace{4.0pt
\@x{ InitP\@s{2.14} \.{\defeq} Init \.{\land} ( p \.{=} EmptyFcn )
\@pvspace{4.0pt
\@x{ BeginRdP ( i ) \.{\defeq} ProphAction ( BeginRd ( i ) ,\, p ,\, p \.{'}
,\, DomInjBeginRd ,\,
\@x{\@s{133.99} PredDomBeginRd ,\, PredBeginRd )
\@pvspace{4.0pt
\@x{ BeginWrP ( i ,\, cmd ) \.{\defeq} ProphAction ( BeginWr ( i ,\, cmd )
,\, p ,\, p \.{'} ,\, DomInjBeginWr ,\,
\@x{\@s{161.33} PredDomBeginWr ,\, PredBeginWr )
\@pvspace{4.0pt
\@x{ DoWrP ( i ) \.{\defeq} ProphAction ( DoWr ( i ) ,\, p ,\, p \.{'} ,\,
DomInjDoWr ,\,
\@x{\@s{124.54} PredDomDoWr ,\, PredDoWr )
\@pvspace{4.0pt
\@x{ IEndRdP ( i ,\, j ) \.{\defeq} ProphAction ( IEndRd ( i ,\, j )
,\,\@s{4.1} p ,\, p \.{'} ,\, DomInjIEndRd ,\,
\@x{\@s{141.57} PredDomIEndRd ( i ) ,\,
\@x{\@s{141.57} {\LAMBDA} q \.{:} PredIEndRd ( q ,\, i ,\, j ) )
\@pvspace{4.0pt
\@x{ EndWrP ( i ) \.{\defeq} ProphAction ( EndWr ( i ) ,\, p ,\, p \.{'} ,\,
DomInjEndWr ,\,
\@x{\@s{129.40} PredDomEndWr ,\, PredEndWr )
\@pvspace{4.0pt
\@x{ NextP \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} \.{\lor} BeginRdP
( i )
\@x{\@s{125.59} \.{\lor} \E\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:} IEndRdP ( i ,\, j )
\@x{\@s{46.69} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWrP ( i ,\, cmd )
\@x{\@s{125.59} \.{\lor} DoWrP ( i ) \.{\lor} EndWrP ( i )
\@pvspace{4.0pt
\@x{ SpecP\@s{1.08} \.{\defeq} InitP \.{\land} {\Box} [ NextP ]_{ varsP}
\.{\land} Fairness
\end{tlatex}
\caption{Module \emph{NewLinearSnapshotPS}, part 2.}
\label{fig:NewLinearSnapshotPS2}
\end{figure}
\subsubsection{Adding the Stuttering Variable}
The module next adds the stuttering variable $s$ to $SpecP$.  We need
to add a single stuttering step after a $BeginRdP(i)$ step iff the
reader will output the current value of $mem$, which is the case iff
the step sets $p[i]$ to 1. We also need to add a stuttering step
after ${DoWrP(i)}$ for every currently reading reader $j$ for which
$p[j]$ predicts that the value {of $mem$} that the step appends to
$rstate[j]$ is the one that the read will output. These steps are
added by letting the values of $s.val$ be subsets of readers, ordered
by the subset relation, with the decrement operation removing an
element from the set chosen with the \textsc{choose} operator.
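The decrement operation can be sketched in Python as follows, using
`min` as one concrete deterministic choice in place of \textsc{choose}.
(This sketch is our own illustration; the names are hypothetical.)
Repeatedly decrementing any finite set reaches the empty set, which is
the well-foundedness property the stuttering-variable construction
requires.

```python
def decrement(S):
    # Mirrors  S \ {CHOOSE x \in S : TRUE}  from the spec.  CHOOSE is
    # deterministic, and min() is one concrete deterministic choice.
    return S - {min(S)}

def steps_to_empty(S):
    """Number of decrements needed to reach the bottom element {}."""
    n = 0
    while S:
        S = decrement(S)
        n += 1
    return n
```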
The specification $SpecPS$ obtained by adding the stuttering variable
$s$ to $SpecP$ is defined in the part of module $NewLinearSnapshotPS$
shown in
\lref{targ:NewLinearSnapshotPS3}{Figure~\ref{fig:NewLinearSnapshotPS3}}.
The two theorems at the beginning are conditions
(\ref{eq:stuttering-cond}) for adding the stuttering steps to
$BeginRdP(i)$ and $DoWrP(i)$ steps. They can be checked by
temporarily ending the module immediately after those theorems and
running TLC on a model having $SpecP$ as its specification. Two
\textsc{assume} statements have been added to check the constant
conditions on the arguments of the $MayPostStutter$ operators used to
add those stuttering steps.
\begin{figure} \target{targ:NewLinearSnapshotPS3}
\begin{notla}
THEOREM SpecP => [][\A i \in Readers : BeginRdP(i) =>
(IF p'[i] = 1 THEN 1 ELSE 0) \in {0,1}]_varsP
THEOREM SpecP => [][\A i \in Writers, cmd \in RegVals :
DoWrP(i) =>
{j \in Readers : (rstate[j] # << >>)
/\ (p[j] = Len(rstate'[j]))}
\in (SUBSET Readers)]_varsP
------------------------------------------------------------------------
VARIABLE s
varsPS == <<vars, p, s>>
INSTANCE Stuttering WITH vars <- varsP
InitPS == InitP /\ (s = top)
BeginRdPS(i) == MayPostStutter(BeginRdP(i), "BeginRd", i, 0,
IF p'[i] = 1 THEN 1 ELSE 0,
LAMBDA j : j-1)
ASSUME StutterConstantCondition({0,1}, 0, LAMBDA j : j-1)
BeginWrPS(i, cmd) == NoStutter(BeginWrP(i, cmd))
DoWrPS(i) == MayPostStutter(DoWrP(i), "DoWr", i, {},
{j \in Readers :
(rstate[j] # << >>) /\ (p[j] = Len(rstate'[j]))},
LAMBDA S : S \ {CHOOSE x \in S : TRUE})
ASSUME StutterConstantCondition(SUBSET Readers, {},
LAMBDA S : S \ {CHOOSE x \in S : TRUE})
IEndRdPS(i, j) == NoStutter(IEndRdP(i, j))
EndWrPS(i) == NoStutter(EndWrP(i))
NextPS == \/ \E i \in Readers : \/ BeginRdPS(i)
\/ \E j \in 1..Len(rstate[i]) : IEndRdPS(i,j)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWrPS(i, cmd)
\/ DoWrPS(i) \/ EndWrPS(i)
SafeSpecPS == InitPS /\ [][NextPS]_varsPS
SpecPS == SafeSpecPS /\ Fairness
\end{notla}
\begin{tlatex}
\@x{ {\THEOREM} SpecP \.{\implies} {\Box} [ \A\, i \.{\in} Readers \.{:}
BeginRdP ( i ) \.{\implies}
\@x{\@s{107.54} ( {\IF} p \.{'} [ i ] \.{=} 1 \.{\THEN} 1 \.{\ELSE} 0 )
\.{\in} \{ 0 ,\, 1 \} ]_{ varsP}
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecP \.{\implies} {\Box} [ \A\, i \.{\in} Writers ,\, cmd
\.{\in} RegVals \.{:}
\@x{\@s{107.54} DoWrP ( i ) \.{\implies}
\@x{\@s{115.74} \{ j \.{\in} Readers \.{:} ( rstate [ j ] \.{\neq} {\langle}
{\rangle} )
\@x{\@s{193.83} \.{\land}\@s{6.10} ( p [ j ] \.{=} Len ( rstate \.{'} [ j ] )
) \}
\@x{\@s{124.84} \.{\in} ( {\SUBSET} Readers ) ]_{ varsP}
\@x{}\midbar\@xx{
\@x{ {\VARIABLE} s
\@x{ varsPS \.{\defeq} {\langle} vars ,\, p ,\, s {\rangle}
\@pvspace{8.0pt
\@x{ {\INSTANCE} Stuttering {\WITH} vars \.{\leftarrow} varsP
\@pvspace{8.0pt
\@x{ InitPS \.{\defeq} InitP \.{\land} ( s \.{=} top )
\@pvspace{8.0pt
\@x{ BeginRdPS ( i ) \.{\defeq} MayPostStutter ( BeginRdP ( i )
,\,\@w{BeginRd} ,\, i ,\, 0 ,\,
\@x{\@s{153.64} {\IF} p \.{'} [ i ] \.{=} 1 \.{\THEN} 1 \.{\ELSE} 0 ,\,
\@x{\@s{153.64} {\LAMBDA} j \.{:} j \.{-} 1 )
\@x{ {\ASSUME} StutterConstantCondition ( \{ 0 ,\, 1 \} ,\, 0 ,\, {\LAMBDA} j
\.{:} j \.{-} 1 )
\@pvspace{8.0pt
\@x{ BeginWrPS ( i ,\, cmd ) \.{\defeq} NoStutter ( BeginWrP ( i ,\, cmd ) )
\@pvspace{8.0pt
\@x{ DoWrPS ( i ) \.{\defeq} MayPostStutter ( DoWrP ( i ) ,\,\@w{DoWr} ,\, i
,\, \{ \} ,\,
\@x{\@s{144.19} \{ j \.{\in} Readers \.{:}
\@x{\@s{153.29} ( rstate [ j ] \.{\neq} {\langle} {\rangle} ) \.{\land} ( p [
j ] \.{=} Len ( rstate \.{'} [ j ] ) ) \} ,\,
\@x{\@s{144.19} {\LAMBDA} S \.{:} S \.{\,\backslash\,} \{ {\CHOOSE} x \.{\in}
S \.{:} {\TRUE} \} )
\@x{ {\ASSUME} StutterConstantCondition ( {\SUBSET} Readers ,\, \{ \} ,\,
\@x{\@s{155.21} {\LAMBDA} S \.{:} S \.{\,\backslash\,} \{ {\CHOOSE} x \.{\in}
S \.{:} {\TRUE} \} )
\@pvspace{8.0pt
\@x{ IEndRdPS ( i ,\, j ) \.{\defeq} NoStutter ( IEndRdP ( i ,\, j ) )
\@pvspace{8.0pt
\@x{ EndWrPS ( i ) \.{\defeq} NoStutter ( EndWrP ( i ) )
\@pvspace{8.0pt
\@x{ NextPS \.{\defeq} \.{\lor} \E\, i \.{\in} Readers \.{:} \.{\lor}
BeginRdPS ( i )
\@x{\@s{131.39} \.{\lor} \E\, j \.{\in} 1 \.{\dotdot} Len ( rstate [ i ] )
\.{:} IEndRdPS ( i ,\, j )
\@x{\@s{52.48} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\,
cmd \.{\in} RegVals \.{:} BeginWrPS ( i ,\, cmd )
\@x{\@s{131.39} \.{\lor} DoWrPS ( i ) \.{\lor} EndWrPS ( i )
\@pvspace{8.0pt
\@x{ SafeSpecPS \.{\defeq} InitPS\@s{3.04} \.{\land} {\Box} [ NextPS ]_{
varsPS}
\@x{ SpecPS \.{\defeq} SafeSpecPS \.{\land} Fairness
\end{tlatex}
\caption{Module \emph{NewLinearSnapshotPS}, part 3.}
\label{fig:NewLinearSnapshotPS3}
\end{figure}
\subsubsection{The Refinement Mapping}
Let's again use the abbreviations $Spec_{L}$ for formula $Spec$ of
$LinearSnapshot$ and $Spec_{PS}$ for formula $SpecPS$ of module
$NewLinearSnapshotPS$.  We now define the state functions \ov{mem} and
\ov{istate} such that $Spec_{PS}$ implements $Spec_{L}$ under the
refinement mapping
$mem \leftarrow \ov{mem}$, $istate \leftarrow \ov{istate}$.
We want writer actions of $Spec_{L}$ to be simulated
by the corresponding writer actions of $Spec_{PS}$. Hence, we
let \ov{mem} equal $mem$ and we let \ov{istate[i]} equal $wstate[i]$
for every writer~$i$. The problem is defining \ov{istate[i]}
for readers~$i$.
In $Spec_{L}$, for any process $i$ not executing a read or write,
$istate[i]$ equals $interface[i]$. Hence, we can define
\ov{istate[i]} to equal $interface[i]$ for any reader $i$ not
currently reading. We now consider the case when $i$ is currently
reading, which is true iff $rstate[i] \neq \langle\,\rangle$, which implies $p[i]$
is a positive integer.  There are two possibilities:
\begin{describe}{$p[i]=1$}
\item[{$p[i]=1$}] In this case, the $DoRd(i)$ step of $Spec_{L}$ is
simulated by the stuttering step added to $BeginRd(i)$. The $DoRd(i)$
step changes $istate[i]$ from $NotMemVal$ to the memory value to be
output, so \ov{istate[i]} should equal $rstate[i][1]$ when
$rstate[i] \neq \langle\,\rangle$, except after the $BeginRd(i)$ step and before the
stuttering step added immediately after it.
\item[{$p[i]>1$}] In this case, the $DoRd(i)$ step of $Spec_{L}$ is
simulated by one of the stuttering steps added to the $DoWr(j)$
step for the writer that appends the $p[i]$\tth\ element to
$rstate[i]$. We let it be the stuttering step that removes $i$ from
$s.val$, so \ov{istate[i]} equals $NotMemVal$ until $p[i]\leq
Len(rstate[i])$ holds and it is not the case that $i$ is an element of
$s.val$ while some writer is performing a stuttering step added after
its $DoWr$ step.
\end{describe}
The definition of \ov{istate}, under the name $istateBar$,
appears near the end of the module, shown in
\lref{targ:NewLinearSnapshotPS4}{Figure~\ref{fig:NewLinearSnapshotPS4}}.
\begin{figure} \target{targ:NewLinearSnapshotPS4}
\begin{notla}
istateBar == [i \in Readers \cup Writers |->
IF i \in Writers
THEN wstate[i]
ELSE IF rstate[i] = << >>
THEN interface[i]
ELSE IF p[i] = 1
THEN IF /\ s # top
/\ s.id = "BeginRd"
/\ s.ctxt = i
THEN NotMemVal
ELSE rstate[i][1]
ELSE IF \/ p[i] > Len(rstate[i])
\/ /\ s # top
/\ s.id = "DoWr"
/\ i \in s.val
THEN NotMemVal
ELSE rstate[i][p[i]] ]
LS == INSTANCE LinearSnapshot WITH istate <- istateBar
THEOREM SafeSpecPS => LS!SafeSpec
THEOREM SpecPS => LS!Spec
=============================================================================
\end{notla}
\begin{tlatex}
\@x{ istateBar \.{\defeq} [ i \.{\in} Readers \.{\cup} Writers \.{\mapsto}
\@x{\@s{66.70} {\IF} i \.{\in} Writers
\@x{\@s{74.90} \.{\THEN} wstate [ i ]
\@x{\@s{74.90} \.{\ELSE} {\IF} rstate [ i ] \.{=} {\langle} {\rangle}
\@x{\@s{114.41} \.{\THEN} interface [ i ]
\@x{\@s{114.41} \.{\ELSE} {\IF} p [ i ] \.{=} 1
\@x{\@s{153.92} \.{\THEN} {\IF} \.{\land} s \.{\neq} top
\@x{\@s{197.39} \.{\land} s . id\@s{4.1} \.{=}\@w{BeginRd}
\@x{\@s{197.39} \.{\land} s . ctxt \.{=} i
\@x{\@s{193.43} \.{\THEN} NotMemVal
\@x{\@s{193.43} \.{\ELSE} rstate [ i ] [ 1 ]
\@x{\@s{153.92} \.{\ELSE} {\IF} \.{\lor} p [ i ]\@s{0.63} \.{>} Len ( rstate
[ i ] )
\@x{\@s{197.39} \.{\lor} \.{\land} s \.{\neq} top
\@x{\@s{208.50} \.{\land} s . id \.{=}\@w{DoWr}
\@x{\@s{208.50} \.{\land} i \.{\in} s . val
\@x{\@s{193.43} \.{\THEN} NotMemVal
\@x{\@s{193.43} \.{\ELSE} rstate [ i ] [ p [ i ] ] ]
\@pvspace{8.0pt
\@x{ LS \.{\defeq} {\INSTANCE} LinearSnapshot {\WITH} istate \.{\leftarrow}
istateBar
\@pvspace{8.0pt
\@x{ {\THEOREM} SafeSpecPS \.{\implies} LS {\bang} SafeSpec
\@pvspace{4.0pt
\@x{ {\THEOREM} SpecPS \.{\implies} LS {\bang} Spec
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{Module \emph{NewLinearSnapshotPS}, part 4.}
\label{fig:NewLinearSnapshotPS4}
\end{figure}
The theorems at the end of the module can be checked with TLC. In
fact, TLC checks that $SpecPS$ satisfies property $LS!Spec$ by
checking that the safety part of $SpecPS$ satisfies both (a)~the
safety part of $LS!Spec$ and (b)~the property that the liveness part
of $SpecPS$ implies the liveness part of $LS!Spec$. Therefore, having
TLC check that $SpecPS$ satisfies $LS!Spec$ checks both theorems.
\subsection{\emph{AfekSimplified} Implements \emph{NewLinearSnapshot}}
\label{sec:Afek-implements}
We now finish checking the correctness of the algorithm $Spec_{A}$ of
$AfekSimplified$ by showing that it implements
\tlabox{\EE mem, rstate, wstate : SSpec_{NL}},
where $SSpec_{NL}$ is the safety specification of $NewLinearSnapshot$.
As we suggested in Section~\ref{sec:another-snapshot}, finding a
refinement mapping to show this requires adding a history variable to
$Spec_{A}$ that captures the information remembered by $SSpec_{NL}$ in
the variable $rstate$. This is straightforward. We just add a
history variable $h$ such that $h[i]$ is changed by $BeginRd(i)$,
$DoWr(i)$, and an ending $TryEndRdH(i)$ action (one executed with
$rdVal1[i] = rdVal2[i]$) the same way $rstate[i]$ is changed by the
corresponding $BeginRd(i)$, $DoWr(i)$, and $EndRd(i)$ action of
$SSpec_{NL}$.
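How $h$ mimics $rstate$ can be paraphrased in Python as follows (an
informal sketch of the update rules, with hypothetical names, not the
TLA+ definitions themselves):

```python
def begin_rd(h, i, mem):
    """BeginRdH(i): reader i starts a read, recording the current memory."""
    h = dict(h)
    h[i] = [mem]
    return h

def do_wr(h, new_mem):
    """DoWrH: append the new memory value for every reader mid-read."""
    return {j: seq + [new_mem] if seq else [] for j, seq in h.items()}

def try_end_rd(h, i, rd_val1, rd_val2):
    """TryEndRdH(i): the read ends only if the two collects agree."""
    if rd_val1 == rd_val2:
        h = dict(h)
        h[i] = []
    return h
```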
The specification $SpecH$, obtained by adding the
history variable $h$ to specification $Spec$ of module $AfekSimplified$,
is defined in module $AfekSimplifiedH$ as shown in
\lref{targ:AfekSimplifiedH-beg}{Figure~\ref{fig:AfekSimplifiedH-beg}}.
\begin{figure} \target{targ:AfekSimplifiedH-beg}
\begin{notla}
-------------------------- MODULE AfekSimplifiedH --------------------------
EXTENDS AfekSimplified, Sequences
VARIABLE h
varsH == <<vars, h>>
InitH == Init /\ (h = [i \in Readers |-> << >>])
memBar == [i \in Writers |-> imem[i][1]]
BeginWrH(i, cmd) == BeginWr(i, cmd) /\ (h' = h)
DoWrH(i) == /\ DoWr(i)
/\ h' = [j \in Readers |->
IF h[j] = << >>
THEN << >>
ELSE Append(h[j], memBar')]
EndWrH(i) == EndWr(i) /\ (h' = h)
BeginRdH(i) == /\ BeginRd(i)
/\ h' = [h EXCEPT ![i] = <<memBar>>]
Rd1H(i) == Rd1(i) /\ (h' = h)
Rd2H(i) == Rd2(i) /\ (h' = h)
TryEndRdH(i) == /\ TryEndRd(i)
/\ h' = IF rdVal1[i] = rdVal2[i]
THEN [h EXCEPT ![i] = << >>]
ELSE h
NextH ==
\/ \E i \in Readers : BeginRdH(i) \/ Rd1H(i) \/ Rd2H(i) \/ TryEndRdH(i)
\/ \E i \in Writers : \/ \E cmd \in RegVals : BeginWrH(i, cmd)
\/ DoWrH(i) \/ EndWrH(i)
SpecH == InitH /\ [][NextH]_varsH
\end{notla}
\begin{tlatex}
\@x{}\moduleLeftDash\@xx{ {\MODULE} AfekSimplifiedH}\moduleRightDash\@xx{
\@x{ {\EXTENDS} AfekSimplified ,\, Sequences
\@pvspace{8.0pt
\@x{ {\VARIABLE} h
\@x{ varsH \.{\defeq} {\langle} vars ,\, h {\rangle}
\@pvspace{8.0pt
\@x{ InitH\@s{2.14} \.{\defeq} Init \.{\land} ( h \.{=} [ i \.{\in} Readers
\.{\mapsto} {\langle} {\rangle} ] )
\@pvspace{8.0pt
\@x{ memBar \.{\defeq} [ i \.{\in} Writers \.{\mapsto} imem [ i ] [ 1 ] ]
\@pvspace{8.0pt
\@x{ BeginWrH ( i ,\, cmd ) \.{\defeq} BeginWr ( i ,\, cmd ) \.{\land} ( h
\.{'} \.{=} h )
\@pvspace{8.0pt
\@x{ DoWrH ( i ) \.{\defeq} \.{\land} DoWr ( i )
\@x{\@s{66.69} \.{\land} h \.{'} \.{=} [ j \.{\in} Readers \.{\mapsto}
\@x{\@s{124.54} {\IF} h [ j ] \.{=} {\langle} {\rangle}
\@x{\@s{132.74} \.{\THEN} {\langle} {\rangle}
\@x{\@s{132.74} \.{\ELSE} Append ( h [ j ] ,\, memBar \.{'} ) ]
\@pvspace{8.0pt
\@x{ EndWrH ( i ) \.{\defeq} EndWr ( i ) \.{\land} ( h \.{'} \.{=} h )
\@pvspace{8.0pt
\@x{ BeginRdH ( i ) \.{\defeq} \.{\land} BeginRd ( i )
\@x{\@s{76.13} \.{\land} h \.{'} \.{=} [ h {\EXCEPT} {\bang} [ i ] \.{=}
{\langle} memBar {\rangle} ]
\@pvspace{8.0pt
\@x{ Rd1H ( i ) \.{\defeq} Rd1 ( i ) \.{\land} ( h \.{'} \.{=} h )
\@pvspace{8.0pt
\@x{ Rd2H ( i ) \.{\defeq} Rd2 ( i ) \.{\land} ( h \.{'} \.{=} h )
\@pvspace{8.0pt
\@x{ TryEndRdH ( i ) \.{\defeq} \.{\land} TryEndRd ( i )
\@x{\@s{84.69} \.{\land} h \.{'} \.{=} {\IF} rdVal1 [ i ] \.{=} rdVal2 [ i ]
\@x{\@s{126.02} \.{\THEN} [ h {\EXCEPT} {\bang} [ i ] \.{=} {\langle}
{\rangle} ]
\@x{\@s{126.02} \.{\ELSE} h
\@pvspace{8.0pt
\@x{ NextH \.{\defeq}
\@x{\@s{8.2} \.{\lor} \E\, i \.{\in} Readers \.{:} BeginRdH ( i ) \.{\lor}
Rd1H ( i ) \.{\lor} Rd2H ( i ) \.{\lor} TryEndRdH ( i )
\@x{\@s{8.2} \.{\lor} \E\, i \.{\in} Writers\@s{0.49} \.{:} \.{\lor} \E\, cmd
\.{\in} RegVals \.{:} BeginWrH ( i ,\, cmd )
\@x{\@s{87.10} \.{\lor} DoWrH ( i ) \.{\lor} EndWrH ( i )
\@pvspace{8.0pt
\@x{ SpecH \.{\defeq} InitH \.{\land} {\Box} [ NextH ]_{ varsH}
\end{tlatex}
\caption{Beginning of module \emph{AfekSimplifiedH}.}
\label{fig:AfekSimplifiedH-beg}
\end{figure}
The definition should be easy to understand by comparing the
definition of the initial predicate $InitH$ and of the actions in the
module to the corresponding definitions in module $NewLinearSnapshot$.
Note that the action definitions use $memBar$ where the corresponding
action definitions in $NewLinearSnapshot$ use $mem$. The module
defines $memBar$ to equal the memory value obtained from $imem$ in the
obvious way, by letting $memBar[i]$ equal the first element of
$imem[i]$. The expression $memBar$ is, of course, the value substituted
for $mem$ by the refinement mapping.
The rest of the refinement mapping is defined at the end of the module,
\begin{figure} \target{targ:AfekSimplifiedH-end}
\begin{notla}
wstateBar == [i \in Writers |->
IF (interface[i] = NotRegVal) \/ (wrNum[i] = imem[i][2])
THEN NotRegVal
ELSE interface[i]]
NLS == INSTANCE NewLinearSnapshot
WITH mem <- memBar, rstate <- h, wstate <- wstateBar
THEOREM SpecH => NLS!SafeSpec
=============================================================================
\end{notla}
\begin{tlatex}
\@x{ wstateBar \.{\defeq} [ i \.{\in} Writers \.{\mapsto}
\@x{\@s{70.29} {\IF} ( interface [ i ] \.{=} NotRegVal ) \.{\lor} ( wrNum [ i
] \.{=} imem [ i ] [ 2 ] )
\@x{\@s{78.49} \.{\THEN} NotRegVal
\@x{\@s{78.49} \.{\ELSE} interface [ i ] ]
\@pvspace{8.0pt
\@x{ NLS \.{\defeq} {\INSTANCE} NewLinearSnapshot
\@x{\@s{47.61} {\WITH} mem \.{\leftarrow} memBar ,\, rstate \.{\leftarrow} h
,\, wstate \.{\leftarrow} wstateBar
\@pvspace{8.0pt
\@x{ {\THEOREM} SpecH \.{\implies} NLS {\bang} SafeSpec
\@x{}\bottombar\@xx{
\end{tlatex}
\caption{End of module \emph{AfekSimplifiedH}.}
\label{fig:AfekSimplifiedH-end}
\end{figure}
shown in
\lref{targ:AfekSimplifiedH-end}{Figure~\ref{fig:AfekSimplifiedH-end}}.
It substitutes $h$ for $rstate$. The expression $wstateBar$ is
substituted for $wstate$. To understand it, remember that in
$SSpec_{NL}$, the value of $wstate[i]$ for a writer $i$ is $NotRegVal$
except between a $BeginWr(i,cmd)$ action and a $DoWr(i)$ action, when
it equals $interface[i]$ (which equals $cmd$).
The theorem was checked by TLC in about 10 hours on a circa 2012
laptop, using a model with: two readers, two writers, and two register
values; symmetry of readers and writers (register values aren't
symmetric because $IntRegVal$ equals one of them); a state constraint
limiting each writer to at most three writes; and two worker threads.
\section{Introduction}
In this paper, we prove the following extremal result, which resembles an entropy power inequality:
\begin{theorem}\label{thm:vector}
For $n\times n$ positive definite matrices $\Sigma_X, \Sigma_Z$, let $\mathbf{X}\sim N(\mu_X,\Sigma_X)$ and $\mathbf{Z}\sim N(\mu_Z,\Sigma_Z)$ be independent $n$-dimensional Gaussian vectors, and define $\mathbf{Y} = \mathbf{X} + \mathbf{Z}$. For any $U,V$ such that $U-\mathbf{X}-\mathbf{Y}-V$ form a Markov chain, the following inequality holds:
\begin{align}
2^{-\frac{2}{n} (I(\mathbf{Y};U)+ I(\mathbf{X};V))} &\geq \frac{| \Sigma_X |^{1/n}}{| \Sigma_X + \Sigma_Z |^{1/n}} ~2^{-\frac{2}{n} (I(\mathbf{X};U)+I(\mathbf{Y};V))} + 2^{-\frac{2}{n}( I(\mathbf{X};\mathbf{Y})+I(U;V))}.\label{introVec}%
\end{align}
\end{theorem}
\noindent In the simplest case, where $\mathbf{Y} = \rho \mathbf{X} + \mathbf{Z}$, $\Sigma_X = I_n$ and $\Sigma_Z = (1-\rho^2) I_n$, Theorem \ref{thm:vector} implies
\begin{align}
2^{-\frac{2}{n} (I(\mathbf{Y};U)+ I(\mathbf{X};V))} &\geq (1-\rho^2)2^{-\frac{2}{n} I(V;U)} + \rho^2 2^{-\frac{2}{n} (I(\mathbf{X};U)+I(\mathbf{Y};V))}. \label{introWhite}%
\end{align}
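To get a feel for \eqref{introWhite}, it can be checked numerically
for one concrete family of auxiliaries of our own choosing: $U = \mathbf{X} + N_1$
and $V = \mathbf{Y} + N_2$ with independent Gaussian noises, for which every
mutual information appearing in \eqref{introWhite} has a closed form.
(The snippet below is purely illustrative and plays no role in the
proofs.)  For this Gaussian family the two sides turn out to coincide,
consistent with Gaussian auxiliaries being extremal.

```python
import math

def mi_bits(corr_sq):
    """I(A;B) in bits for jointly Gaussian scalars with corr(A,B)^2 = corr_sq."""
    return -0.5 * math.log2(1.0 - corr_sq)

def both_sides(rho, su2, sv2):
    """Both sides of the n = 1 case of the inequality, for X ~ N(0,1),
    Y = rho X + Z, U = X + N(0, su2), V = Y + N(0, sv2), all noises
    independent (so U - X - Y - V is a Markov chain)."""
    vu, vv = 1.0 + su2, 1.0 + sv2         # Var(U), Var(V); Var(X) = Var(Y) = 1
    I_YU = mi_bits(rho ** 2 / vu)         # Cov(Y,U) = rho
    I_XV = mi_bits(rho ** 2 / vv)         # Cov(X,V) = rho
    I_XU = mi_bits(1.0 / vu)              # Cov(X,U) = 1
    I_YV = mi_bits(1.0 / vv)              # Cov(Y,V) = 1
    I_UV = mi_bits(rho ** 2 / (vu * vv))  # Cov(U,V) = rho
    lhs = 2.0 ** (-2.0 * (I_YU + I_XV))
    rhs = (1.0 - rho ** 2) * 2.0 ** (-2.0 * I_UV) \
        + rho ** 2 * 2.0 ** (-2.0 * (I_XU + I_YV))
    return lhs, rhs
```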
If $V$ is degenerate, \eqref{introWhite} further simplifies to an inequality shown by Oohama in \cite{Oohama1997}, which proved to be instrumental in establishing the rate-distortion region for the one-helper quadratic Gaussian source coding problem. Together with Oohama's work, the sum-rate constraint established by Wagner \emph{et al}. in their \emph{tour de force} \cite{WagnerRateRegion2008} completely characterized the rate-distortion region for the two-encoder quadratic Gaussian source coding problem. It turns out that the sum-rate constraint of Wagner \emph{et al}. can be recovered as an immediate corollary to \eqref{introWhite}, thus unifying the works of Oohama and Wagner \emph{et al}. under a common inequality. The entire argument is given as follows.
\subsection{Recovery of the scalar-Gaussian sum-rate constraint}
Using the Markov relationship $U-\mathbf{X}-\mathbf{Y}-V$, we can rearrange the exponents in \eqref{introWhite} to obtain the equivalent inequality
\begin{align}
2^{-\frac{2}{n} (I(\mathbf{X};U,V) +I(\mathbf{Y};U,V))} \geq 2^{-\frac{2}{n}I(\mathbf{X},\mathbf{Y};U,V) } \left( 1-\rho^2 + \rho^2 2^{-\frac{2}{n}I(\mathbf{X},\mathbf{Y};U,V) }\right) . \label{LRHSmonotone}
\end{align}
The left- and right-hand sides of \eqref{LRHSmonotone} are monotone decreasing in $\frac{1}{n}(I(\mathbf{X};U,V) +I(\mathbf{Y};U,V))$ and $\frac{1}{n}I(\mathbf{X},\mathbf{Y};U,V)$, respectively. Therefore, if
\begin{align}
&\frac{1}{n} (I(\mathbf{X};U,V) +I( \mathbf{Y};U,V) ) \geq \frac{1}{2} \log \frac{1}{D} \mbox{~~~and~~~}\frac{1}{n}I(\mathbf{X},\mathbf{Y};U,V) \leq R \label{eqnRD}
\end{align}
for some pair $(R,D)$, then we have $D \geq 2^{-2 R } \left( 1-\rho^2 + \rho^2 2^{-2 R }\right)$,
which is a quadratic inequality with respect to the term $2^{-2 R }$. This is easily solved using the quadratic formula to obtain:
\begin{align}
2^{-2R} \leq \frac{2D}{(1-\rho^2)\beta(D)} \quad \Rightarrow \quad R \geq \frac{1}{2}\log \frac{(1-\rho^2)\beta(D)}{2D}, \label{RDineq}
\end{align}
where $\beta(D)\triangleq 1 + \sqrt{1 + \frac{4\rho^2 D}{(1-\rho^2)^2}}$. Note that Jensen's inequality and the maximum-entropy property of Gaussians imply
\begin{align}
\frac{1}{n} (I(\mathbf{X};U,V) +I( \mathbf{Y};U,V) ) \geq \frac{1}{2} \log \frac{1}{\mathsf{mmse}(\mathbf{X}|U,V) \mathsf{mmse}(\mathbf{Y}|U,V) },\label{mmseEq}
\end{align}
where $\mathsf{mmse}(\mathbf{X}|U,V) \triangleq \frac{1}{n} \mathbb{E}\| \mathbf{X} - \mathbb{E}[\mathbf{X}|U,V] \|^2$, and $\mathsf{mmse}(\mathbf{Y}|U,V)$ is defined similarly. Put $U = f_x(\mathbf{X})$ and $V = f_y(\mathbf{Y})$, where $f_x : \mathbb{R}^n \rightarrow [1:2^{nR_x}]$ and $f_y : \mathbb{R}^n \rightarrow [1:2^{nR_y}]$. Supposing $\mathsf{mmse}(\mathbf{X}|U,V)\leq d_x$ and $\mathsf{mmse}(\mathbf{Y}|U,V)\leq d_y$, inequalities \eqref{eqnRD}-\eqref{mmseEq} together imply
\begin{align}
R_x + R_y \geq \frac{1}{2}\log \frac{(1-\rho^2)\beta(d_x d_y)}{2d_x d_y},\label{recoveredSumRate}
\end{align}
which is precisely the sum-rate constraint for the two-encoder quadratic Gaussian source coding problem.
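The quadratic-formula step deserves a quick numerical sanity check
(illustrative only; the function names are ours): for $R$ on the
boundary of \eqref{RDineq}, the quadratic constraint
$D \geq 2^{-2R}(1-\rho^2+\rho^2 2^{-2R})$ should hold with equality,
confirming that \eqref{RDineq} extracts the correct root.

```python
import math

def beta(D, rho):
    """The quantity beta(D) defined in the text."""
    return 1.0 + math.sqrt(1.0 + 4.0 * rho ** 2 * D / (1.0 - rho ** 2) ** 2)

def check_boundary(D, rho):
    """Evaluate 2^{-2R} (1 - rho^2 + rho^2 2^{-2R}) at the boundary rate
    R = (1/2) log2((1 - rho^2) beta(D) / (2D)); the result should be D."""
    R = 0.5 * math.log2((1.0 - rho ** 2) * beta(D, rho) / (2.0 * D))
    t = 2.0 ** (-2.0 * R)  # equals 2D / ((1 - rho^2) beta(D))
    return t * (1.0 - rho ** 2 + rho ** 2 * t)
```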
\subsection{Distributed compression of minimal-volume ellipsoids}
Above, recovery of the quadratic Gaussian sum-rate constraint \eqref{recoveredSumRate} demonstrated the utility of Theorem \ref{thm:vector} in proving nontrivial results. Now, we consider a new problem which, to the authors' knowledge, is not a consequence of known results in the open literature. In particular, we study the problem of compressing ellipsoids that cover a set of points and, subject to rate constraints, have approximately minimal volume. Such ellipsoids are similar to L\"owner-John ellipsoids, which are defined as the (unique) ellipsoid of minimal volume that covers a finite set of points \cite{henk2012lowner}. These minimum-volume ellipsoids and their approximations play a prominent role in the fields of optimization, data analysis, and computational geometry (e.g., \cite{boyd2004convex}).
To begin, we recall that an $n$-dimensional ellipsoid $\mathcal{E}$ can be parameterized by a symmetric positive definite matrix $A\in \mathbb{R}^{n\times n}$ and a vector $b\in \mathbb{R}^n$ as follows:
\begin{align}
\mathcal{E} = \mathcal{E}(A,b) = \left\{ x\in \mathbb{R}^n : \|Ax - b\|\leq 1 \right\}.\label{ellParam}
\end{align}
The volume of $\mathcal{E}(A,b)$ is related to the determinant of $A$ by
\begin{align}
\operatorname{vol}\left(\mathcal{E}(A,b)\right) = \frac{c_n}{|A|},
\end{align}
where $c_n \sim \frac{1}{\sqrt{n\pi}}\left(\frac{2\pi e}{n}\right)^{n/2}$ is the volume of the $n$-dimensional unit ball.
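For context (and not as part of the coding problem), the minimum-volume covering ellipsoid of a finite point set can be computed by Khachiyan's barycentric coordinate-descent algorithm. The sketch below (our own; the function name \texttt{mvee} is ours) returns a pair $(M,c)$ describing the ellipsoid $\{x : (x-c)^T M (x-c)\leq 1\}$, which matches the parameterization \eqref{ellParam} with $A = M^{1/2}$ and $b = M^{1/2}c$:

```python
import numpy as np

def mvee(points, tol=1e-3, max_iter=20000):
    """Khachiyan's algorithm: minimum-volume ellipsoid {x: (x-c)^T M (x-c) <= 1}
    covering the rows of `points`."""
    m, d = points.shape
    Q = np.hstack([points, np.ones((m, 1))]).T            # homogeneous coordinates
    u = np.full(m, 1.0 / m)                               # barycentric weights
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        lev = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)  # leverage scores
        j = int(np.argmax(lev))
        step = (lev[j] - d - 1.0) / ((d + 1.0) * (lev[j] - 1.0))
        if step < tol:
            break
        u *= (1.0 - step)
        u[j] += step
    c = points.T @ u                                      # center
    M = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
    return M, c

pts = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1], [0.5, 1.5],
                [-0.5, 0.5], [1.5, 0.5], [0.2, 0.3], [0.8, 0.9], [0.5, -0.4]])
M, c = mvee(pts)
forms = ((pts - c) @ M * (pts - c)).sum(axis=1)
print(forms.max())   # ≈ 1: the extreme points lie on the ellipsoid boundary
```

Consistent with the L\"owner-John characterization, at least $d+1$ of the input points end up (numerically) on the boundary of the returned ellipsoid.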
Fix $\rho\in(0,1)$, and let
$\{\Sigma_n : n\geq 1\}$ be a sequence of positive definite $n\times n$ matrices. Suppose $(\mathbf{X}_1, \mathbf{Y}_1), \dots, (\mathbf{X}_k, \mathbf{Y}_k)$ are $k$ independent pairs of jointly Gaussian vectors, each equal in distribution to $(\mathbf{X},\mathbf{Y})$, where $\mathbb{E}[\mathbf{X} \mathbf{X}^T] = \mathbb{E}[\mathbf{Y} \mathbf{Y}^T]=\Sigma_n$, and $\mathbb{E}[\mathbf{X} \mathbf{Y}^T] =\rho\Sigma_n$.
An $(n,R_x,R_y,\nu_x,\nu_y,k,\Sigma_n, \epsilon)$-code consists of encoding functions
\begin{align}
f_x &: \mathbb{R}^{k n} \rightarrow \{1,2,\dots,2^{kn R_x }\}\\
f_y &: \mathbb{R}^{k n} \rightarrow \{1,2,\dots,2^{kn R_y }\}
\end{align}
and a decoding function
\begin{align}
\psi : \left(f_x(\mathbf{X}_1, \dots, \mathbf{X}_k), f_y(\mathbf{Y}_1, \dots, \mathbf{Y}_k)\right) \mapsto (A_x, A_y, b_x, b_y)
\end{align}
such that
\begin{align}
\max_{1\leq i\leq k} \Pr\left\{ \mathbf{X}_i \notin \mathcal{E}(A_x,b_x) \right\} < \epsilon \mbox{~~and~~}
\max_{1\leq i\leq k} \Pr\left\{ \mathbf{Y}_i \notin \mathcal{E}(A_y,b_y) \right\} < \epsilon,
\end{align}
and
\begin{align}
\Big( \operatorname{vol}(\mathcal{E}(A_x,b_x) ) \Big)^{1/n} &\leq (1+\epsilon) {c_n^{1/n}}{\sqrt{{n \nu_x} |\Sigma_n|^{1/n}}} \label{vol1}\\
\Big( \operatorname{vol}(\mathcal{E}(A_y,b_y) ) \Big)^{1/n} &\leq (1+\epsilon) {c_n^{1/n}}{\sqrt{{n \nu_y} |\Sigma_n|^{1/n}}}.\label{vol2}
\end{align}%
We remark that $\sqrt{n}c_n^{1/n}\rightarrow \sqrt{2 \pi e}$ as $n\rightarrow \infty$ by Stirling's approximation, which explains the normalization factor of $\sqrt{n}$ in the volume constraint. In particular, \eqref{vol1}-\eqref{vol2} can be replaced with
\begin{align}
\Big( \operatorname{vol}(\mathcal{E}(A_x,b_x) ) \Big)^{1/n} &\leq (1+\epsilon) {\sqrt{{(2\pi e) \nu_x} |\Sigma_n|^{1/n}}} \label{vol1a}\\
\Big( \operatorname{vol}(\mathcal{E}(A_y,b_y) ) \Big)^{1/n} &\leq (1+\epsilon) {\sqrt{{(2\pi e) \nu_y} |\Sigma_n|^{1/n}}}.\label{vol2a}
\end{align}
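The normalization can be checked against the exact unit-ball volume $c_n = \pi^{n/2}/\Gamma(n/2+1)$; the short computation below (ours) confirms $\sqrt{n}\,c_n^{1/n} \rightarrow \sqrt{2\pi e}$:

```python
import math

def log_unit_ball_vol(n):
    # log c_n, with c_n = pi^(n/2) / Gamma(n/2 + 1) the n-dim unit-ball volume
    return 0.5 * n * math.log(math.pi) - math.lgamma(0.5 * n + 1.0)

for n in (20, 200, 2000):
    # converges to sqrt(2*pi*e) ~ 4.1327
    print(math.sqrt(n) * math.exp(log_unit_ball_vol(n) / n))
```
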
\begin{definition}
For a sequence $\{\Sigma_n : n\geq 1\}$ of positive definite $n\times n$ matrices, a tuple $(R_x,R_y,\nu_x,\nu_y,k)$ is $\{\Sigma_n : n\geq 1\}$-achievable if there exists a sequence of $(n,R_x,R_y,\nu_x,\nu_y,k,\Sigma_n,\epsilon_n)$ codes satisfying $\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$.
\end{definition}
If $(R_x,R_y,\nu_x,\nu_y,k)$ is a Pareto-optimal $\{\Sigma_n : n\geq 1\}$-achievable point, the corresponding ellipsoids $\mathcal{E}(A_x,b_x) ,\mathcal{E}(A_y,b_y)$ can be viewed as the best approximations to L\"owner-John ellipsoids subject to rate-constrained descriptions of the data. That is, the two ellipsoids cover the $k$ points observed at their respective encoders, and are (essentially) the minimum-volume such ellipsoids that can be computed from rate-constrained descriptions of the data. The general problem setup is illustrated in Figure \ref{fig:ellipse}.
\begin{figure}
\def\svgwidth{\textwidth}
\input{figs/Ellipse.pdf_tex}
\caption{Computation of covering ellipsoids from compressed descriptions of the observed data points ($k=4$). Note that the decoder computes only the ellipsoids $\mathcal{E}(A_x,b_x),\mathcal{E}(A_y,b_y)$; the data points at the output of the decoder are shown only for reference.}\label{fig:ellipse}
\end{figure}
\begin{theorem}\label{thm:Ellipsoid}
For any sequence $\{\Sigma_n : n\geq 1\}$ of positive definite $n\times n$ matrices, a tuple $(R_x,R_y,\nu_x,\nu_y,k)$ is $\{\Sigma_n : n\geq 1\}$-achievable if and only if
\begin{align}
R_x &\geq\frac{1}{2}\log \left[\frac{1}{\nu_x}\left(1-\rho^2 + \rho^2 2^{-2 R_y} \right) \right]\label{RxConstratint}\\%
R_y &\geq\frac{1}{2}\log \left[\frac{1}{\nu_y}\left(1-\rho^2 + \rho^2 2^{-2 R_x} \right) \right]\label{RyConstratint}\\
R_x + R_y &\geq \frac{1}{2}\log \frac{(1-\rho^2)\beta(\nu_x \nu_y)}{2\nu_x \nu_y},\label{RxyConstratint}
\end{align}
where $\beta(z)\triangleq 1 + \sqrt{1 + \frac{4\rho^2 z}{(1-\rho^2)^2}}$.
\end{theorem}
\begin{remark}
As we will see, the direct part of Theorem \ref{thm:Ellipsoid} follows from an application of the achievability scheme for the two-encoder quadratic Gaussian source coding problem. However, the converse does not appear to follow in a similar manner from known results, since the matrices $A_x,A_y$ describing the principal axes of the ellipsoids are allowed to depend on the source realizations. %
Nonetheless, with Theorem \ref{thm:vector} at our disposal, the proof of the converse is fairly routine.
\end{remark}
Since the primary focus of this paper is on the extremal inequality \eqref{introVec}, we defer the proof of Theorem \ref{thm:Ellipsoid} until Appendix \ref{app:ellipsoidProof}. The remainder of this paper is divided into two parts: a treatment of the scalar version of \eqref{introVec} is given in Section \ref{sec:scalar}, and the vector generalization is considered in Section \ref{sec:vector}. Closing remarks are provided in Section \ref{sec:conclusion}.
\section{Scalar Setting}\label{sec:scalar}
We begin the journey toward our main result by studying the scalar version of Theorem \ref{thm:vector}. Most of our effort will carry over to the vector setting, but the notation in the scalar case is less cumbersome. Therefore, for the remainder of this section, we will assume that $X,Y$ are jointly Gaussian, each with unit variance and correlation $\rho$. Our main result in this section is the following rearrangement of \eqref{introVec}.
\begin{theorem}\label{thm:scalar}
Suppose $X,Y$ are jointly Gaussian, each with unit variance and correlation $\rho$. Then, for any $U,V$ satisfying $U-X-Y-V$, the following inequality holds:
\begin{align}
2^{-2I(Y;U)} 2^{-2I(X;V|U)} \geq (1-\rho^2) + \rho^2 2^{-2I(X;U)} 2^{-2I(Y;V|U)} .\label{main}
\end{align}
\end{theorem}
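Before turning to the proof, it is instructive to check \eqref{main} numerically. For Gaussian test channels $U = X + N(0,s_u)$ and $V = Y + N(0,s_v)$ with independent noises (so that $U-X-Y-V$ holds), the two sides in fact coincide. The helper \texttt{gauss\_cmi} below is our own; it evaluates Gaussian (conditional) mutual information from a covariance matrix:

```python
import numpy as np

def gauss_cmi(cov, A, B, C=()):
    # I(A;B|C) in bits for jointly Gaussian coordinates with covariance `cov`
    def ld(idx):
        return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1] if idx else 0.0
    return 0.5 * (ld(A + C) + ld(B + C) - ld(C) - ld(A + B + C)) / np.log(2)

rho, su, sv = 0.8, 0.5, 1.5              # correlation and noise variances (arbitrary)
# Joint covariance of (X, Y, U, V) with U = X + N(0,su), V = Y + N(0,sv)
cov = np.array([[1.0, rho, 1.0,    rho   ],
                [rho, 1.0, rho,    1.0   ],
                [1.0, rho, 1 + su, rho   ],
                [rho, 1.0, rho,    1 + sv]])
X, Y, U, V = (0,), (1,), (2,), (3,)
lhs = 2.0 ** (-2 * (gauss_cmi(cov, Y, U) + gauss_cmi(cov, X, V, U)))
rhs = (1 - rho**2) + rho**2 * 2.0 ** (-2 * (gauss_cmi(cov, X, U) + gauss_cmi(cov, Y, V, U)))
print(lhs - rhs)   # ~0: Gaussian test channels meet (main) with equality
```
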
\subsection{Proof of Theorem \ref{thm:scalar}}
Instead of working directly with inequality \eqref{main}, it will be convenient to consider a dual form. To this end, for $\lambda\geq 0$, define
\begin{align}
F(\lambda)\triangleq \inf_{U,V: U-X-Y-V}\Big\{ I(X;U)-\lambda I(Y;U) + I(Y;V|U)-\lambda I(X;V|U)\Big\}. \label{funL}
\end{align}
The remainder of this section is devoted to characterizing the function $F(\lambda)$. We remark that the infimum in \eqref{funL} is attained for any $\lambda$. The proof of this is routine, and deferred to Appendix \ref{app:InfimaObtained}. The bulk of the work ahead is devoted to establishing the existence of valid minimizers $U,V$ for which $X|\{U = u\}$ is normal for almost every $u$.
To accomplish this, we now describe a simple construction that will be used throughout much of the sequel. This construction was first introduced for proving extremal inequalities in \cite{GengNairPrivateMessages2014}. Suppose $U,X,Y,V$ satisfy the Markov relationship $U-X-Y-V$, and consider two independent copies of $U,X,Y,V$, which will be denoted by the same variables with subscripts $1$ and $2$. Define
\begin{align}
&\xh1 = \frac{X_1 + X_2}{\sqrt{2}} &\xh2 = \frac{X_1 - X_2}{\sqrt{2}}.
\end{align}
In a similar manner, define $\yh1, \yh2$. Note that $(\xh1,\xh2,\yh1,\yh2)$ and $(X_1,X_2,Y_1,Y_2)$ are equal in distribution. Let $g:\mathbb{R}^2\rightarrow \mathbb{R}$ be a one-to-one measurable transformation\footnote{Every uncountable Polish space is Borel isomorphic to $\mathbb{R}$ \cite{srivastava1998course}.}. Define $\hat{U} = g(U_1,U_2)$ and $\hat{V} = g(V_1,V_2)$.
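The distributional invariance just claimed is a purely second-order fact: the map from $(X_1,Y_1,X_2,Y_2)$ to the normalized sums and differences is orthogonal and preserves the block-diagonal covariance, hence the joint Gaussian law. A short check (ours):

```python
import numpy as np

rho = 0.6
# Covariance of (X1, Y1, X2, Y2): two independent copies of a correlated pair
Sigma = np.array([[1.0, rho, 0.0, 0.0],
                  [rho, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, rho],
                  [0.0, 0.0, rho, 1.0]])
# Rows give the normalized sums/differences as functions of (X1, Y1, X2, Y2)
T = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, -1, 0],
              [0, 1, 0, -1]]) / np.sqrt(2)
print(np.allclose(T @ Sigma @ T.T, Sigma))  # True: same joint Gaussian law
```
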
\begin{lemma} \label{lem:candidateMinimizers}
If $U,X,Y,V$ minimize the functional \eqref{funL}, and $\xh1,\xh2,\yh1,\yh2,\hat{U},\hat{V}$ are constructed as above, then
\begin{enumerate}
\item For almost every $y$, $\hat{U},\xh1,\yh1,\hat{V}$ conditioned on $\{\yh2=y\}$ is a valid minimizer of \eqref{funL}.
\item For almost every $y$, $\hat{U},\xh2,\yh2,\hat{V}$ conditioned on $\{\yh1=y\}$ is a valid minimizer of \eqref{funL}.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\phi_1$ be such that $(\xh1,\yh1)$ is independent of $\phi_1$, and the Markov chain $\hat{U}-\xh1-\yh1 - \hat{V}$ holds conditioned on $\phi_1$. Valid assignments of $\phi_1$ include any nonempty subset of $\{\xh2, \yh2\}$. Let $\phi_2$ be defined similarly, with the roles of the indices $1$ and $2$ interchanged.
Now, observe that we can write:
\begin{align}
2 I(X;U) &= I(\xh1,\xh2;\hat{U}) = I(\xh1;\hat{U}) + I(\xh2;\hat{U} | \xh1) \\
&= I(\xh1;\hat{U}) + I(\xh2;\hat{U} , \xh1) \\
&=I(\xh1;\hat{U}) + I(\xh2;\hat{U}) + I(\xh2; \xh1|\hat{U}) \\
&=I(\xh1;\hat{U}|\phi_1) + I(\xh2;\hat{U}|\phi_2) - I(\xh1;\phi_1|\hat{U}) - I(\xh2;\phi_2|\hat{U}) + I(\xh1; \xh2|\hat{U}).
\end{align}
Similarly,
\begin{align}
2 I(Y;U)
&=I(\yh1;\hat{U}|\phi_1) + I(\yh2;\hat{U}|\phi_2) - I(\yh1;\phi_1|\hat{U}) - I(\yh2;\phi_2|\hat{U}) + I(\yh1; \yh2|\hat{U}).
\end{align}
Also, we have
\begin{align}
2I(Y;V|U) &= I(\yh1,\yh2;\hat{V}|\hat{U}) = I(\yh1;\hat{V}|\hat{U}) + I(\yh2;\hat{V} |\hat{U}) -I(\yh2;\yh1|\hat{U}) + I(\yh2;\yh1|\hat{U},\hat{V}) \\
&= I(\yh1; \phi_1,\hat{V}|\hat{U}) + I(\yh2;\phi_2,\hat{V} |\hat{U}) - I(\yh1;\phi_1|\hat{U},\hat{V}) - I(\yh2;\phi_2|\hat{U},\hat{V}) \\
&\quad-I(\yh2;\yh1|\hat{U}) + I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
&= I(\yh1; \hat{V}|\hat{U},\phi_1) + I(\yh2;\hat{V} |\hat{U},\phi_2) \\
&\quad- I(\yh1;\phi_1|\hat{U},\hat{V}) - I(\yh2;\phi_2|\hat{U},\hat{V}) + I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
&\quad + I(\yh1; \phi_1|\hat{U}) + I(\yh2; \phi_2|\hat{U}) -I(\yh1;\yh2|\hat{U}).\notag
\end{align}
And, similarly,
\begin{align}
2I(X;V|U) &= I(\xh1; \hat{V}|\hat{U},\phi_1) + I(\xh2;\hat{V} |\hat{U},\phi_2) \\
&\quad- I(\xh1;\phi_1|\hat{U},\hat{V}) - I(\xh2;\phi_2|\hat{U},\hat{V}) + I(\xh2;\xh1|\hat{U},\hat{V}) \notag\\
&\quad + I(\xh1; \phi_1|\hat{U}) + I(\xh2; \phi_2|\hat{U}) -I(\xh1;\xh2|\hat{U}).\notag
\end{align}
Assume $U,V$ minimize the functional \eqref{funL} subject to the Markov constraint $U-X-Y-V$; the existence of such $U,V$ was established in Lemma \ref{lem:InfAttained}. Then, combining the identities above, we have
\begin{align}
2F(\lambda) =& I(\xh1;\hat{U}|\phi_1) + I(\xh2;\hat{U}|\phi_2) - I(\xh1;\phi_1|\hat{U}) - I(\xh2;\phi_2|\hat{U}) + I(\xh1; \xh2|\hat{U})\\
& -\lambda\left( I(\yh1;\hat{U}|\phi_1) + I(\yh2;\hat{U}|\phi_2) - I(\yh1;\phi_1|\hat{U}) - I(\yh2;\phi_2|\hat{U}) + I(\yh1; \yh2|\hat{U}) \right) \notag\\
& + I(\yh1; \hat{V}|\hat{U},\phi_1) + I(\yh2;\hat{V} |\hat{U},\phi_2) \notag\\
&\quad- I(\yh1;\phi_1|\hat{U},\hat{V}) - I(\yh2;\phi_2|\hat{U},\hat{V}) + I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
&\quad + I(\yh1; \phi_1|\hat{U}) + I(\yh2; \phi_2|\hat{U}) -I(\yh1;\yh2|\hat{U})\notag\\
& -\lambda \Big( I(\xh1; \hat{V}|\hat{U},\phi_1) + I(\xh2;\hat{V} |\hat{U},\phi_2) \notag\\
&\quad- I(\xh1;\phi_1|\hat{U},\hat{V}) - I(\xh2;\phi_2|\hat{U},\hat{V}) + I(\xh2;\xh1|\hat{U},\hat{V}) \notag\\
&\quad + I(\xh1; \phi_1|\hat{U}) + I(\xh2; \phi_2|\hat{U}) -I(\xh1;\xh2|\hat{U})\Big) \notag\\
=& I(\xh1;\hat{U}|\phi_1) - \lambda I(\yh1;\hat{U}|\phi_1) + I(\yh1;\hat{V} |\hat{U},\phi_1) - \lambda I(\xh1;\hat{V} |\hat{U},\phi_1) \\
& + I(\xh2;\hat{U}|\phi_2) -\lambda I(\yh2;\hat{U}|\phi_2) + I(\yh2;\hat{V} |\hat{U},\phi_2) - \lambda I(\xh2;\hat{V} |\hat{U},\phi_2)\notag\\
& -(\lambda+1) I(\xh1;\phi_1|\hat{U}) -(\lambda+1) I(\xh2;\phi_2|\hat{U}) + (\lambda+1) I(\xh1; \xh2|\hat{U}) \notag\\
& +(\lambda+1) I(\yh1;\phi_1|\hat{U}) +(\lambda+1) I(\yh2;\phi_2|\hat{U}) - (\lambda+1) I(\yh1; \yh2|\hat{U}) \notag\\
& - I(\yh1;\phi_1|\hat{U},\hat{V}) - I(\yh2;\phi_2|\hat{U},\hat{V}) + I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
& +\lambda I(\xh1;\phi_1|\hat{U},\hat{V}) +\lambda I(\xh2;\phi_2|\hat{U},\hat{V}) - \lambda I(\xh2;\xh1|\hat{U},\hat{V}) \notag\\
\geq& 2 F(\lambda) \\
& -(\lambda+1) I(\xh1;\phi_1|\hat{U}) -(\lambda+1) I(\xh2;\phi_2|\hat{U}) + (\lambda+1) I(\xh1; \xh2|\hat{U}) \notag\\
& +(\lambda+1) I(\yh1;\phi_1|\hat{U}) +(\lambda+1) I(\yh2;\phi_2|\hat{U}) - (\lambda+1) I(\yh1; \yh2|\hat{U}) \notag\\
& - I(\yh1;\phi_1|\hat{U},\hat{V}) - I(\yh2;\phi_2|\hat{U},\hat{V}) + I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
& +\lambda I(\xh1;\phi_1|\hat{U},\hat{V}) +\lambda I(\xh2;\phi_2|\hat{U},\hat{V}) - \lambda I(\xh2;\xh1|\hat{U},\hat{V}) .\notag
\end{align}
The last inequality follows since $\hat{U}-\xh1-\yh1 - \hat{V}$ conditioned on $\phi_1$ is a candidate minimizer of the functional, and likewise for $\hat{U}-\xh2-\yh2 - \hat{V}$ conditioned on $\phi_2$. Hence, the following must hold:
\begin{align}
& (\lambda+1) I(\xh1;\phi_1|\hat{U}) +(\lambda+1) I(\xh2;\phi_2|\hat{U}) - (\lambda+1) I(\xh1; \xh2|\hat{U}) \notag\\
& + I(\yh1;\phi_1|\hat{U},\hat{V}) + I(\yh2;\phi_2|\hat{U},\hat{V}) - I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
\geq&(\lambda+1) I(\yh1;\phi_1|\hat{U}) +(\lambda+1) I(\yh2;\phi_2|\hat{U}) - (\lambda+1) I(\yh1; \yh2|\hat{U}) \label{mainIneq}\\
& +\lambda I(\xh1;\phi_1|\hat{U},\hat{V}) +\lambda I(\xh2;\phi_2|\hat{U},\hat{V}) - \lambda I(\xh2;\xh1|\hat{U},\hat{V}) .\notag
\end{align}
Now, set $\phi_1 = \xh2, \phi_2 = \yh1$. The LHS of \eqref{mainIneq} is given by
\begin{align}
& (\lambda+1) I(\xh1;\phi_1|\hat{U}) +(\lambda+1) I(\xh2;\phi_2|\hat{U}) - (\lambda+1) I(\xh1; \xh2|\hat{U}) \notag\\
& + I(\yh1;\phi_1|\hat{U},\hat{V}) + I(\yh2;\phi_2|\hat{U},\hat{V}) - I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
=& (\lambda+1) I(\xh1;\xh2|\hat{U}) +(\lambda+1) I(\xh2;\yh1|\hat{U}) - (\lambda+1) I(\xh1; \xh2|\hat{U}) \\
& + I(\yh1;\xh2|\hat{U},\hat{V}) + I(\yh2;\yh1|\hat{U},\hat{V}) - I(\yh2;\yh1|\hat{U},\hat{V}) \notag\\
=& (\lambda+1) I(\yh1;\xh2|\hat{U}) + I(\yh1;\xh2|\hat{U},\hat{V}) .
\end{align}
Also, the RHS of \eqref{mainIneq} can be expressed as
\begin{align}
&(\lambda+1) I(\yh1;\phi_1|\hat{U}) +(\lambda+1) I(\yh2;\phi_2|\hat{U}) - (\lambda+1) I(\yh1; \yh2|\hat{U}) \notag\\
& +\lambda I(\xh1;\phi_1|\hat{U},\hat{V}) +\lambda I(\xh2;\phi_2|\hat{U},\hat{V}) - \lambda I(\xh2;\xh1|\hat{U},\hat{V}) \notag\\
= & (\lambda+1) I(\yh1;\xh2|\hat{U}) +(\lambda+1) I(\yh2;\yh1|\hat{U}) - (\lambda+1) I(\yh1; \yh2|\hat{U}) \\
& +\lambda I(\xh1;\xh2|\hat{U},\hat{V}) +\lambda I(\xh2;\yh1|\hat{U},\hat{V}) - \lambda I(\xh2;\xh1|\hat{U},\hat{V}) \notag\\
= & (\lambda+1) I(\yh1;\xh2|\hat{U}) +\lambda I(\yh1;\xh2|\hat{U},\hat{V}).
\end{align}
Substituting into \eqref{mainIneq}, we find that $(\lambda-1) I(\yh1;\xh2|\hat{U},\hat{V}) \leq 0$, which for $\lambda > 1$ forces $I(\yh1;\xh2|\hat{U},\hat{V})=0$ by nonnegativity of mutual information. Therefore, \eqref{mainIneq} is met with equality, and it follows that:
\begin{align}
F(\lambda) & = I(\xh1;\hat{U}|\xh2) - \lambda I(\yh1;\hat{U}|\xh2) + I(\yh1;\hat{V}|\hat{U},\xh2) - \lambda I(\xh1;\hat{V}|\hat{U},\xh2) \\
&= I(\xh2;\hat{U}|\yh1) - \lambda I(\yh2;\hat{U}|\yh1) + I(\yh2;\hat{V}|\hat{U},\yh1) - \lambda I(\xh2;\hat{V}|\hat{U},\yh1).
\end{align}
Since $\hat{U},\xh2,\yh2,\hat{V}$ conditioned on $\{\yh1=y\}$ is a candidate minimizer of \eqref{funL}, the second assertion of the claim follows.
By a symmetric argument, if we set $\phi_1 = \yh2, \phi_2 = \xh1$, the roles of the indices are reversed, and we find that
\begin{align}
F(\lambda) & = I(\xh1;\hat{U}|\yh2) - \lambda I(\yh1;\hat{U}|\yh2) + I(\yh1;\hat{V}|\hat{U},\yh2) - \lambda I(\xh1;\hat{V}|\hat{U},\yh2) \\
&= I(\xh2;\hat{U}|\xh1) - \lambda I(\yh2;\hat{U}|\xh1) + I(\yh2;\hat{V}|\hat{U},\xh1) - \lambda I(\xh2;\hat{V}|\hat{U},\xh1).
\end{align}
This establishes the first assertion of the claim and completes the proof.
\end{proof}
\begin{lemma} \label{lem:ConserveDerivative}
If $U_{\lambda},V_{\lambda}$ are valid minimizers of the functional \eqref{funL} for parameter $\lambda$, then
\begin{align}
I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) = -F'(\lambda) ~~~\mbox{for a.e.\ $\lambda$.}
\end{align}
\end{lemma}
\begin{proof}
To begin, let $U_{\lambda+\Delta},V_{\lambda+\Delta}$ be arbitrary, valid minimizers of the functional \eqref{funL} for parameter $\lambda+\Delta$, and let $U_{\lambda-\Delta},V_{\lambda-\Delta}$ be arbitrary, valid minimizers of the functional \eqref{funL} for parameter $\lambda-\Delta$. Next, note that $F(\lambda)$ is concave (being an infimum of affine functions of $\lambda$) and nonincreasing in $\lambda$, and hence $F'(\lambda)$ exists for a.e. $\lambda$. Thus, for any $\Delta > 0$,
\begin{align}
\frac{F(\lambda+\Delta)-F(\lambda)}{\Delta} =& \frac{1}{\Delta} \Big(I(X;U_{\lambda+ \Delta}) + I(Y;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) - (\lambda+\Delta) \left( I(Y;U_{\lambda+ \Delta}) + I(X;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) \right) \Big)\notag\\
& - \frac{1}{\Delta} \Big(I(X;U_{\lambda}) + I(Y;V_{\lambda}|U_{\lambda}) - \lambda \left( I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) \right) \Big) \notag\\
=& -\Big( I(Y;U_{\lambda+ \Delta}) + I(X;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) \Big) \\
& + \frac{1}{\Delta} \Big(I(X;U_{\lambda+ \Delta}) + I(Y;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) - \lambda \left( I(Y;U_{\lambda+ \Delta}) + I(X;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) \right) \Big)\notag\\
& - \frac{1}{\Delta} \Big(I(X;U_{\lambda}) + I(Y;V_{\lambda}|U_{\lambda}) - \lambda \left( I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) \right) \Big) \notag\\
\geq& -\Big( I(Y;U_{\lambda+ \Delta}) + I(X;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) \Big),
\end{align}
where the last inequality follows since $U_{\lambda + \Delta},V_{\lambda + \Delta}$ is a candidate minimizer of \eqref{funL} with parameter $\lambda$.
Similarly,
\begin{align}
\frac{F(\lambda)-F(\lambda-\Delta)}{\Delta} =& \frac{1}{\Delta} \Big(I(X;U_{\lambda}) + I(Y;V_{\lambda}|U_{\lambda}) - \lambda \left( I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) \right) \Big)\notag\\
& - \frac{1}{\Delta} \Big(I(X;U_{\lambda-\Delta}) + I(Y;V_{\lambda-\Delta}|U_{\lambda-\Delta}) - (\lambda-\Delta) \left( I(Y;U_{\lambda-\Delta}) + I(X;V_{\lambda-\Delta}|U_{\lambda-\Delta}) \right) \Big)\notag\\
=& -\Big( I(Y;U_{\lambda- \Delta}) + I(X;V_{\lambda- \Delta}|U_{\lambda- \Delta}) \Big) \\
& + \frac{1}{\Delta} \Big(I(X;U_{\lambda}) + I(Y;V_{\lambda}|U_{\lambda}) - \lambda \left( I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) \right) \Big)\notag\\
& - \frac{1}{\Delta} \Big(I(X;U_{\lambda-\Delta}) + I(Y;V_{\lambda-\Delta}|U_{\lambda-\Delta}) - \lambda \left( I(Y;U_{\lambda-\Delta}) + I(X;V_{\lambda-\Delta}|U_{\lambda-\Delta}) \right) \Big)\notag\\
\leq& -\Big( I(Y;U_{\lambda- \Delta}) + I(X;V_{\lambda- \Delta}|U_{\lambda- \Delta}) \Big),
\end{align}
where the last inequality follows since $U_{\lambda - \Delta},V_{\lambda - \Delta}$ is a candidate minimizer of \eqref{funL} with parameter $\lambda$. Recalling concavity of $F(\lambda)$, we have shown
\begin{align}
I(Y;U_{\lambda+ \Delta}) + I(X;V_{\lambda+ \Delta}|U_{\lambda+ \Delta}) \geq -F'(\lambda) \geq I(Y;U_{\lambda- \Delta}) + I(X;V_{\lambda- \Delta}|U_{\lambda- \Delta}).%
\end{align}
As $F'$ is monotone and well-defined up to a set of measure zero, we are justified in writing
\begin{align}
- \lim_{z\rightarrow \lambda^+}F'(z)\geq I(Y;U_{\lambda}) + I(X;V_{\lambda}|U_{\lambda}) \geq - \lim_{z\rightarrow \lambda^-}F'(z).
\end{align}
Since $F'$ is monotone, it is almost everywhere continuous, and so the LHS and RHS above coincide with $-F'(\lambda)$ for almost every $\lambda$.
\end{proof}
Since the derivative $F'(\lambda)$ is just a function of $F$ itself, and not of a particular minimizer, we have the following corollary.
\begin{corollary}\label{cor:Stability}
If $U_{\lambda},V_{\lambda}$ are valid minimizers of the functional \eqref{funL} for parameter $\lambda$, then
\begin{align}
I(X;{U}_{\lambda}|Y) + I(Y;{V}_{\lambda}|X) = F(\lambda)-(\lambda-1)F'(\lambda) \mbox{~~for a.e. $\lambda$}.
\end{align}
\end{corollary}
\begin{proof}
Suppose $U_{\lambda},V_{\lambda}$ are valid minimizers. Then, we can write:
\begin{align}
F(\lambda) &= I(X;U_{\lambda}) - \lambda I(Y;U_{\lambda}) + I(Y;V_{\lambda}|U_{\lambda})-\lambda I(X;V_{\lambda}|U_{\lambda})\\
&= I(X;U_{\lambda}) - \lambda I(Y;U_{\lambda}) + I(Y;V_{\lambda})-\lambda I(X;V_{\lambda}) + (\lambda-1)I(U_{\lambda};V_{\lambda})\\
&= I(X;U_{\lambda}|Y) + I(Y;V_{\lambda}|X) +(\lambda-1)\Big(I(U_{\lambda};V_{\lambda}) - I(Y;U_{\lambda}) - I(X;V_{\lambda})\Big)\\
&=I(X;U_{\lambda}|Y) + I(Y;V_{\lambda}|X) +(\lambda-1)F'(\lambda),
\end{align}
where the last line follows from Lemma \ref{lem:ConserveDerivative}.
\end{proof}
\begin{lemma} \label{lem:inductionLemma}
If $U,V$ are valid minimizers of the functional \eqref{funL}, and $\hat{U},\xh1,\yh1,\xh2,\yh2,\hat{V}$ are constructed as described above, then there exist valid minimizers $\widetilde{U},\widetilde{V}$ such that
\begin{align}
I(X;U|Y) \geq I(X;\widetilde{U}|Y) + \frac{1}{2} I(\xh1;\xh2 | \hat{U}, \yh1,\yh2).
\end{align}
\end{lemma}
\begin{proof}
To begin, note that:
\begin{align}
I(\xh1;\xh2|\hat{U},\yh1,\yh2) &= I(\xh1;\xh2,\hat{U}|\yh1,\yh2) - I(\xh1;\hat{U}|\yh1,\yh2)\\
&=I(\xh1;\hat{U}|\yh1,\yh2,\xh2) - I(\xh1;\hat{U}|\yh1,\yh2)\\
&=I(\xh1,\xh2;\hat{U}|\yh1,\yh2) - I(\xh1;\hat{U}|\yh1,\yh2) - I(\xh2;\hat{U}|\yh1,\yh2) \\
&=2 I(X;U|Y) - I(\xh1;\hat{U}|\yh1,\yh2) - I(\xh2;\hat{U}|\yh1,\yh2).
\end{align}
Thus, without loss of generality (relabeling indices 1 and 2 if necessary), we can assume
\begin{align}
I(X;U|Y) \geq I(\xh1;\hat{U}|\yh1,\yh2) + \frac{1}{2}I(\xh1;\xh2|\hat{U},\yh1,\yh2).%
\end{align}
Lemma \ref{lem:candidateMinimizers} asserts that, for almost every $y$, the tuple $\hat{U},\xh1,\yh1,\hat{V}$ conditioned on $\{\yh2=y\}$ is a valid minimizer of \eqref{funL}. Hence, there must exist a $y^*$ such that
\begin{align}
I(X;U|Y) \geq I(\xh1;\hat{U}|\yh1,\yh2=y^*) + \frac{1}{2}I(\xh1;\xh2|\hat{U},\yh1,\yh2),
\end{align}
and $\hat{U},\xh1,\yh1,\hat{V}$ conditioned on $\{\yh2=y^*\}$ is a valid minimizer of \eqref{funL}.
Therefore, the claim follows by letting $\widetilde{U},X,Y,\widetilde{V}$ be equal in distribution to $\hat{U},\xh1,\yh1,\hat{V}$ conditioned on $\{\yh2=y^*\}$.
\end{proof}
\begin{corollary} \label{cor:MIEqZero}
There exist $U,V$ which are valid minimizers of the functional \eqref{funL}, and satisfy
\begin{align}
I(\xh1;\xh2 | \hat{U}, \yh1,\yh2)=0,
\end{align}
where $\hat{U},\xh1,\yh1,\xh2,\yh2,\hat{V}$ are constructed as described above.
\end{corollary}
\begin{proof}
Applying Lemma \ref{lem:inductionLemma}, we can inductively construct a sequence of valid minimizers $\{U^{(k)},X,Y,V^{(k)}\}_{k\geq 1}$ which satisfy
\begin{align}
I(X;U^{(k)}|Y) \geq I(X;U^{(k+1)}|Y) + \frac{1}{2}I(\xh1;\xh2|\hat{U}^{(k)},\yh1,\yh2) \mbox{~~for $k=1,2,\dots$,}
\end{align}
where $\hat{U}^{(k)},\xh1,\yh1,\xh2,\yh2,\hat{V}^{(k)} $ are constructed from two independent copies of $U^{(k)},X,Y,V^{(k)}$.
By Corollary \ref{cor:Stability}, we must also have
\begin{align}
I(X;U^{(k)}|Y) + I(Y;V^{(k)}|X) = F(\lambda)-(\lambda-1)F'(\lambda)
\end{align}
for all $k=1,2,\dots$. Therefore, for any $n$, we have:
\begin{align}
F(\lambda)-(\lambda-1)F'(\lambda) &= \frac{1}{n}\sum_{k=1}^n \Big( I(X;U^{(k)}|Y) + I(Y;V^{(k)}|X) \Big)\\
&\geq \frac{1}{n}\sum_{k=2}^n \Big( I(X;U^{(k)}|Y) + I(Y;V^{(k)}|X) \Big) + \frac{1}{n}\Big( I(X;U^{(n+1)}|Y) + I(Y;V^{(1)}|X)\Big) \\
&~~+ \frac{1}{2n}\sum_{k=1}^n I(\xh1;\xh2|\hat{U}^{(k)},\yh1,\yh2)\notag\\
&\geq \frac{n-1}{n}\Big(F(\lambda)-(\lambda-1)F'(\lambda) \Big) + \frac{1}{2n}\sum_{k=1}^n I(\xh1;\xh2|\hat{U}^{(k)},\yh1,\yh2),
\end{align}
and thus
\begin{align}
\sum_{k=1}^n I(\xh1;\xh2|\hat{U}^{(k)},\yh1,\yh2) \leq 2\Big(F(\lambda)-(\lambda-1)F'(\lambda) \Big). \label{sumConverges}
\end{align}
Hence, the sum on the LHS of \eqref{sumConverges} must converge as $n\rightarrow \infty$, implying
\begin{align}
\lim_{k\rightarrow \infty} I(\xh1;\xh2|\hat{U}^{(k)},\yh1,\yh2) = 0.
\end{align}
Arguing as in the proof of Lemma \ref{lem:InfAttained} in Appendix \ref{app:InfimaObtained}, we can conclude that there exists an optimizer $U,V$ for which $ I(\xh1;\xh2|\hat{U},\yh1,\yh2)$ is exactly zero.
\end{proof}
\begin{lemma}\cite{ghurye1962characterization}\label{SDcharacterization}
Let $\mathbf{A}_1$ and $\mathbf{A}_2$ be mutually independent $n$-dimensional random vectors. If $\mathbf{A}_1+\mathbf{A}_2$ is independent of $\mathbf{A}_1-\mathbf{A}_2$, then $\mathbf{A}_1$ and $\mathbf{A}_2$ are normally distributed.
\end{lemma}
\begin{corollary} \label{existUV_XuGauss}
There exist optimizers $U,V$ such that $X|\{U = u\}$ is Gaussian for a.e. $u$.
\end{corollary}
\begin{proof}
By construction and Corollary \ref{cor:MIEqZero}, we can conclude that there exist optimizers $U,V$ for which
\begin{align}
I(X_1;X_2|U_1,U_2,Y_1,Y_2) = I(\xh1;\xh2|U_1,U_2,Y_1,Y_2) =0.
\end{align}
Therefore, by Lemma \ref{SDcharacterization}, there exist optimizers $U,V$ such that $X|\{U,Y = u,y\}$ is Gaussian for a.e. $u,y$.
Letting $P(x,y,u,v)$ denote the joint distribution of the above $X,Y,U,V$, we can use Markovity to write:
\begin{align}
P(x,y,u,v) &=P(u)P(y|u)P(x|u,y)P(v|y) \\
&= P(u)P(x|u)P(y|x)P(v|y).
\end{align}
Taking logarithms and rearranging, we have the identity
\begin{align}
\log(P(x|u)) = \log(P(y|u))+\log(P(x|u,y))-\log(P(y|x)). \label{fromUVtoU}
\end{align}
Since $X|\{U,Y = u,y\}$ is Gaussian for a.e. $u,y$, and $X,Y$ are jointly Gaussian by assumption, the RHS of \eqref{fromUVtoU} is a quadratic function of $x$ for a.e. $u,y$. Hence, $\log(P(x|u))$ is quadratic in $x$ for a.e. $u$, and the claim follows.
\end{proof}
\begin{lemma}\label{OohamaLemma} \cite{Oohama1997}%
\label{lem:epi}
For any $U$ satisfying $U-X-Y$, the following inequality holds:
\begin{align}
2^{-2 I(Y;U)} \geq 1-\rho^2+\rho^2 2^{-2I(X;U)}.\label{oohamaEPI}
\end{align}
\end{lemma}
\begin{proof}
Consider any $U$ satisfying $U-X-Y$. Let $Y_u, X_u$ denote the random variables $X,Y$ conditioned on $U=u$. By Markovity and definition of $X,Y$, we have that $Y_u = \rho X_u + Z$, where $Z\sim N(0,1-\rho^2)$ is independent of $X_u$. Hence, the conditional entropy power inequality implies that
\begin{align*}
2^{2h(Y|U)} &\geq \rho^2 2^{2h(X|U)} + 2 \pi e(1-\rho^2) = 2 \pi e \rho^2 2^{-2I(X;U)} + 2 \pi e(1-\rho^2).
\end{align*}
From here, the lemma easily follows.
\end{proof}
\begin{lemma}\label{OohamaExplicit}
\begin{align}
&\inf_{U: U-X-Y} \Big\{ I(X;U)-\lambda I(Y;U) \Big\} =
\begin{cases}
\frac{1}{2}\left[\log\left(\frac{\rho^2(\lambda-1)}{1-\rho^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho^2)}\right)\right] & \mbox{if $\lambda \geq 1/\rho^2$}\\
0 & \mbox{if $0 \leq \lambda \leq 1/\rho^2$.}
\end{cases}\label{uonly}
\end{align}
\end{lemma}
\begin{proof}
The claim follows from Lemma \ref{OohamaLemma} and Lemma \ref{calculusLemma} in Appendix \ref{app:Calculus} by identifying $a_1 \leftarrow \rho^2$ and $a_2 \leftarrow (1-\rho^2)$.
\end{proof}
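Because Gaussian test channels meet \eqref{oohamaEPI} with equality, the infimum in \eqref{uonly} reduces to a one-dimensional minimization over $a = 2^{-2I(X;U)}\in(0,1]$, which can be checked against the closed form numerically (script and names ours):

```python
import numpy as np

rho, lam = 0.8, 2.0            # lam >= 1/rho^2 = 1.5625, so the first case applies

def objective(a):
    # I(X;U) - lam*I(Y;U) for a Gaussian test channel with a = 2^{-2 I(X;U)},
    # using 2^{-2 I(Y;U)} = (1 - rho^2) + rho^2*a (equality case of Oohama's bound)
    return 0.5 * (lam * np.log2(1 - rho**2 + rho**2 * a) - np.log2(a))

a = np.linspace(1e-4, 1.0, 400001)
grid_min = objective(a).min()

closed = 0.5 * (np.log2(rho**2 * (lam - 1) / (1 - rho**2))
                - lam * np.log2((lam - 1) / (lam * (1 - rho**2))))
print(grid_min, closed)        # both ≈ -0.0589
```
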
\begin{lemma} \label{lem:characterizeF}
\begin{align}
F(\lambda) =\inf_{U: U-X-Y} \Big\{ I(X;U)-\lambda I(Y;U) \Big\}.
\end{align}
\end{lemma}
\begin{proof}
We will assume $\lambda \geq 1/\rho^2$. The claim that $F(\lambda)=0$ in the complementary case will then follow immediately by monotonicity of $F(\lambda)$. To this end, let $U,V$ be optimizers such that $X|\{U = u\}$ is Gaussian for a.e. $u$. The existence of such $U,V$ is guaranteed by Corollary \ref{existUV_XuGauss}. Let $X_u,Y_u$ denote the random variables $X,Y$ conditioned on $U=u$. By Markovity, $X_u,Y_u$ are jointly Gaussian with
\begin{align}
Y_u = \rho X_u + Z,
\end{align}
where $Z\sim N(0,1-\rho^2)$ is independent of $X_u$. Letting $\sigma_u^2$ be the variance of $X_u$, the variance of $Y_u$ is $\rho^2 \sigma_u^2 + (1-\rho^2)$. Moreover, the squared linear correlation of $X_u$ and $Y_u$ is given by
\begin{align}
\rho_u^2 \triangleq \frac{\rho^2 \sigma_u^2}{\rho^2 \sigma_u^2 + (1-\rho^2)}.
\end{align}
By Lemma \ref{OohamaExplicit},
\begin{align}
&\inf_{V: V-Y_u-X_u} \Big\{ I(Y_u;V)-\lambda I(X_u;V) \Big\} = \frac{1}{2}\left[\log\left(\frac{\rho_u^2(\lambda-1)}{1-\rho_u^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho_u^2)}\right)\right] \label{simpleOpt}
\end{align}
whenever $\lambda \geq 1/\rho_u^2$, and the infimum is equal to zero otherwise.
By definition, we have
\begin{align}
F(\lambda) &= I(X;U)- \lambda I(Y;U) + I(Y;V|U) - \lambda I(X;V|U) \\
&= \int \Big(h(X)-h(X|u) - \lambda(h(Y) -h(Y|U=u)) + I(Y;V|U=u) - \lambda I(X;V|U=u) \Big)dP_U(u) \\
&= \int \Big(-\frac{1}{2}\log \sigma_u^2 + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2)) + I(Y;V|U=u) - \lambda I(X;V|U=u) \Big)dP_U(u). \label{integrandScalar}
\end{align}
If $\lambda \geq 1/\rho_u^2$, we can apply \eqref{simpleOpt} to bound the integrand in \eqref{integrandScalar} as follows
\begin{align}
-\frac{1}{2}\log \sigma_u^2& + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2)) + I(Y;V|U=u) - \lambda I(X;V|U=u)\notag\\
\geq & -\frac{1}{2}\log \sigma_u^2 + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2)) + \frac{1}{2}\left[\log\left(\frac{\rho_u^2(\lambda-1)}{1-\rho_u^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho_u^2)}\right)\right] \\
=& \frac{1}{2}\left[\log\left(\frac{\rho^2(\lambda-1)}{1-\rho^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho^2)}\right)\right].
\end{align}
On the other hand, if $\lambda \leq 1/\rho_u^2$, then we can bound the integrand in \eqref{integrandScalar} by
\begin{align}
-\frac{1}{2}\log \sigma_u^2 & + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2)) + I(Y;V|U=u) - \lambda I(X;V|U=u)\notag\\
\geq & -\frac{1}{2}\log \sigma_u^2 + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2)) \\
\geq& \frac{1}{2}\left[\log\left(\frac{\rho^2(\lambda-1)}{1-\rho^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho^2)}\right)\right],
\end{align}
where the final inequality follows since $\lambda \leq 1/\rho_u^2\Rightarrow \sigma_u^2 \leq \frac{1-\rho^2}{\rho^2(\lambda-1)}$, and $-\frac{1}{2}\log \sigma_u^2 + \frac{\lambda}{2}\log( \rho^2 \sigma_u^2 +(1-\rho^2))$ is monotone decreasing in $\sigma_u^2$ for $ \sigma_u^2 \leq \frac{1-\rho^2}{\rho^2(\lambda-1)}$. Therefore, we have established the inequality
\begin{align}
F(\lambda) \geq \frac{1}{2}\left[\log\left(\frac{\rho^2(\lambda-1)}{1-\rho^2}\right)-\lambda\log
\left(\frac{\lambda-1}{\lambda(1-\rho^2)}\right)\right]. %
\end{align}
The definition of $F(\lambda)$ together with Lemma \ref{OohamaExplicit} implies the reverse inequality,
completing the proof.
\end{proof}
Since \eqref{uonly} is a dual characterization of the inequality \eqref{oohamaEPI}, we have proved Theorem \ref{thm:scalar}.
\begin{remark}
Although Lemma \ref{lem:characterizeF} implies that the functional \eqref{funL} is minimized when either $U$ or $V$ is degenerate, there are also minimizers for which this is not the case. For example, if $-1\leq \rho_u,\rho_v \leq 1$ satisfy
\begin{align}
(1-\rho^2)(1-\rho^2\rho_u^2\rho_v^2) = \rho^2(\lambda-1)(1-\rho_u^2)(1-\rho_v^2),
\end{align}
then $U,V$ defined according to
\begin{align}
U&= \rho_u X + Z_u\\
V&= \rho_v Y + Z_v,
\end{align}
where $Z_u \sim N(0,1-\rho_u^2)$ and $Z_v \sim N(0,1-\rho_v^2)$ are independent of everything else, also minimize \eqref{funL}.
\end{remark}
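The claim in the preceding remark can be verified numerically: choosing $\rho_u,\rho_v$ on the stated curve and evaluating the objective of \eqref{funL} in closed form recovers exactly the value $F(\lambda)$ from Lemma \ref{lem:characterizeF}. The helper \texttt{gauss\_cmi} is our own:

```python
import numpy as np

def gauss_cmi(cov, A, B, C=()):
    # I(A;B|C) in bits for jointly Gaussian coordinates with covariance `cov`
    def ld(idx):
        return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1] if idx else 0.0
    return 0.5 * (ld(A + C) + ld(B + C) - ld(C) - ld(A + B + C)) / np.log(2)

rho, lam = 0.8, 2.0
ru2 = 0.2                                    # pick rho_u^2, then solve the
Aa, Bb = 1 - rho**2, rho**2 * (lam - 1)      # remark's constraint for rho_v^2
rv2 = (Bb * (1 - ru2) - Aa) / (Bb * (1 - ru2) - Aa * rho**2 * ru2)
ru, rv = np.sqrt(ru2), np.sqrt(rv2)

# Covariance of (X, Y, U, V) with U = ru*X + Z_u, V = rv*Y + Z_v (unit variances)
cov = np.array([[1.0,      rho,      ru,         rho * rv  ],
                [rho,      1.0,      rho * ru,   rv        ],
                [ru,       rho*ru,   1.0,        ru*rho*rv ],
                [rho*rv,   rv,       ru*rho*rv,  1.0       ]])
X, Y, U, V = (0,), (1,), (2,), (3,)
value = (gauss_cmi(cov, X, U) - lam * gauss_cmi(cov, Y, U)
         + gauss_cmi(cov, Y, V, U) - lam * gauss_cmi(cov, X, V, U))
F = 0.5 * (np.log2(rho**2 * (lam - 1) / (1 - rho**2))
           - lam * np.log2((lam - 1) / (lam * (1 - rho**2))))
print(value, F)   # both ≈ -0.0589: these nondegenerate U, V also attain F(lambda)
```
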
\section{Vector Setting}\label{sec:vector}
Now, we turn our attention to the vector case. Throughout the remainder of this section, let $\Sigma_X, \Sigma_Z$ be positive definite $n\times n$ matrices. Suppose $\mathbf{X}\sim N(\mu_X,\Sigma_X)$ and $\mathbf{Z}\sim N(\mu_Z,\Sigma_Z)$ are independent $n$-dimensional Gaussian vectors, and define $\mathbf{Y} = \mathbf{X} + \mathbf{Z}$. We recall the statement of Theorem \ref{thm:vector} here, along with conditions for equality:
\noindent \textbf{Theorem \ref{thm:vector}.}
\emph{
For any $U,V$ such that $U-\mathbf{X}-\mathbf{Y}-V$,
\begin{align}
2^{-\frac{2}{n} (I(\mathbf{Y};U)+ I(\mathbf{X};V|U))} &\geq \frac{| \Sigma_X |^{1/n}}{| \Sigma_X + \Sigma_Z |^{1/n}} ~2^{-\frac{2}{n} (I(\mathbf{X};U)+I(\mathbf{Y};V|U))} + 2^{-\frac{2}{n} I(\mathbf{X};\mathbf{Y})}.\label{VecMain}%
\end{align}
Moreover, equality holds iff $\mathbf{X}|\{U=u\} \sim N(\mu_u ,\Sigma_{X|U} )$ for all $u$, where $\mu_u \triangleq \mathbb{E}[\mathbf{X}|U=u]$, and $\Sigma_{X|U}$ is proportional to $\Sigma_Z$.}
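As a numerical spot check (ours), the inequality can be verified for Gaussian test channels $U = \mathbf{X}+\mathbf{W}_u$, $V = \mathbf{Y}+\mathbf{W}_v$ in dimension $n=2$; the covariance matrices below are arbitrary positive definite choices, and \texttt{gauss\_cmi} is our helper for Gaussian conditional mutual information:

```python
import numpy as np

def gauss_cmi(cov, A, B, C=()):
    # I(A;B|C) in bits for jointly Gaussian coordinates with covariance `cov`
    def ld(idx):
        return np.linalg.slogdet(cov[np.ix_(idx, idx)])[1] if idx else 0.0
    return 0.5 * (ld(A + C) + ld(B + C) - ld(C) - ld(A + B + C)) / np.log(2)

n = 2
Sx = np.array([[2.0, 0.5], [0.5, 1.0]])       # Sigma_X
Sz = np.array([[1.0, 0.2], [0.2, 0.8]])       # Sigma_Z
Su = np.diag([0.5, 0.7])                      # test-channel noise covariances
Sv = np.eye(2)
Sy = Sx + Sz
# Joint covariance of (X, Y, U, V), with Y = X + Z, U = X + Wu, V = Y + Wv
cov = np.block([[Sx, Sx, Sx,      Sx     ],
                [Sx, Sy, Sx,      Sy     ],
                [Sx, Sx, Sx + Su, Sx     ],
                [Sx, Sy, Sx,      Sy + Sv]])
X, Y, U, V = (0, 1), (2, 3), (4, 5), (6, 7)
lhs = 2 ** (-(2 / n) * (gauss_cmi(cov, Y, U) + gauss_cmi(cov, X, V, U)))
ratio = (np.linalg.det(Sx) / np.linalg.det(Sy)) ** (1 / n)
ixy = 0.5 * np.log2(np.linalg.det(Sy) / np.linalg.det(Sz))   # I(X;Y) in bits
rhs = (ratio * 2 ** (-(2 / n) * (gauss_cmi(cov, X, U) + gauss_cmi(cov, Y, V, U)))
       + 2 ** (-(2 / n) * ixy))
print(lhs >= rhs)   # True, consistent with the theorem
```

Here the equality condition fails ($\Sigma_{X|U}$ is not proportional to $\Sigma_Z$ for these choices), so the inequality is strict.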
\subsection{Proof of Theorem \ref{thm:vector}}
Instead of working directly with inequality \eqref{VecMain}, it will be convenient to consider a dual form. As before, for $\lambda\geq 0$, define
\begin{align}
\mathbf{F}(\lambda)\triangleq \inf_{U,V: U-\mathbf{X}-\mathbf{Y}-V}\Big\{ I(\mathbf{X};U)-\lambda I(\mathbf{Y};U) + I(\mathbf{Y};V|U)-\lambda I(\mathbf{X};V|U)\Big\}. \label{funLVector}
\end{align}
The remainder of this section is devoted to bounding the function $\mathbf{F}(\lambda)$.
To begin, we remark that the results up to Lemma \ref{lem:epi} in the scalar setting generalize immediately to the present vector case; the proofs carry over verbatim. Namely, we have the key observation:
\begin{corollary} \label{existUV_XuGaussVector}
There exist $U,V$ which minimize the functional \eqref{funLVector} such that $\mathbf{X}|\{U = u\}$ is Gaussian for a.e. $u$.
\end{corollary}
Therefore, we pick up at this point and prove a vector version of Lemma \ref{lem:epi}.
\begin{lemma}\label{lem:vecEPI}
For any $U$ such that $U-\mathbf{X}-\mathbf{Y}$,
\begin{align}
2^{-2 I(\mathbf{Y};U)/n} &\geq \frac{| \Sigma_X |^{1/n}}{| \Sigma_X + \Sigma_Z |^{1/n}} ~2^{-2 I(\mathbf{X};U)/n} + 2^{-2 I(\mathbf{X};\mathbf{Y})/n}.%
\end{align}
Moreover, equality holds iff $\mathbf{X}|\{U=u\} \sim N(\mu_u ,\Sigma )$ for all $u$, where $\mu_u \triangleq \mathbb{E}[\mathbf{X}|U=u]$, and $\Sigma \propto \Sigma_Z$.
\end{lemma}
\begin{proof}
Without loss of generality, we can assume $\mu_X=\mu_Z=0$. Note that $\Sigma_Y = \Sigma_X + \Sigma_Z$. We can diagonalize the covariance matrices as
\begin{align}
\Sigma_X &= U_X \Lambda_X U_X^T\\
\Sigma_Y &= U_Y \Lambda_Y U_Y^T,%
\end{align}
where $U_X$ and $U_Y$ are unitary.
Define $\mathbf{Y}' = \Lambda_Y^{-1/2} U_Y^T \mathbf{Y}$, implying $\mathbf{Y}'\sim N(0,I_n)$ and
\begin{align}
\mathbf{Y}' = \Lambda_Y^{-1/2} U_Y^T \mathbf{X} + \Lambda_Y^{-1/2} U_Y^T \mathbf{Z}.
\end{align}
Further, define $\mathbf{X}' = \Lambda_X^{-1/2} U_X^T \mathbf{X}$, so that $\mathbf{X}'\sim N(0,I_n)$ and
\begin{align}
\mathbf{Y}' = \Lambda_Y^{-1/2} U_Y^T U_X \Lambda_X^{1/2} \mathbf{X}' + \Lambda_Y^{-1/2} U_Y^T \mathbf{Z} = B \mathbf{X}' + W,
\end{align}
where $B \triangleq \Lambda_Y^{-1/2} U_Y^T U_X \Lambda_X^{1/2}$ and $W\triangleq \Lambda_Y^{-1/2} U_Y^T \mathbf{Z}$, implying $W\sim N (0,\Lambda_Y^{-1/2} U_Y^T \Sigma_Z U_Y \Lambda_Y^{-1/2} )$.
For any $U$ such that $U - \mathbf{X} - \mathbf{Y}$, $\mathbf{X}'$ and $W$ are independent given $U$. Thus, the conditional entropy power inequality implies
\begin{align}
2^{2 h(\mathbf{Y}'|U)/n} &\geq 2^{2 h( B\mathbf{X}'|U)/n} + 2^{2 h( W | U)/n}\\
&= |B|^{2/n} 2^{2 h(\mathbf{X}'|U)/n} + 2 \pi e | \Lambda_Y^{-1/2} U_Y^T \Sigma_Z U_Y \Lambda_Y^{-1/2} |^{1/n} \\
&= | \Lambda_Y^{-1/2} U_Y^T U_X \Lambda_X^{1/2} |^{2/n} 2^{2 h(\mathbf{X}'|U)/n} + 2 \pi e \frac{|\Sigma_Z|^{1/n}}{|\Sigma_Y|^{1/n}} \\
&= \frac{|\Sigma_X|^{1/n}}{|\Sigma_Y|^{1/n}} 2^{2 h(\mathbf{X}'|U)/n} + 2 \pi e \frac{|\Sigma_Z|^{1/n}}{|\Sigma_Y|^{1/n}}.
\end{align}
Multiplying both sides by $2^{-2 h(\mathbf{Y}')/n} = 2^{-2 h(\mathbf{X}')/n} =\frac{1}{2\pi e}$, we obtain:
\begin{align}
2^{-2 I(\mathbf{Y}';U)/n} &\geq \frac{|\Sigma_X|^{1/n}}{|\Sigma_Y|^{1/n}} 2^{-2 I(\mathbf{X}';U)/n} + \frac{|\Sigma_Z|^{1/n}}{|\Sigma_Y|^{1/n}}.
\end{align}
Since mutual information is invariant under one-to-one transformations of support, we also have
\begin{align}
2^{-2 I(\mathbf{Y};U)/n} &\geq \frac{|\Sigma_X|^{1/n}}{|\Sigma_Y|^{1/n}} 2^{-2 I(\mathbf{X};U)/n} + \frac{|\Sigma_Z|^{1/n}}{|\Sigma_Y|^{1/n}}\\
&=\frac{| \Sigma_X |^{1/n}}{|\Sigma_X + \Sigma_Z|^{1/n}} 2^{-2 I(\mathbf{X};U)/n} + 2^{-2 I(\mathbf{X};\mathbf{Y})/n}.%
\end{align}
The condition for equality follows from the necessary conditions for equality in the conditional entropy power inequality.
\end{proof}
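When $U$ is jointly Gaussian with $\mathbf{X}$, every quantity in Lemma \ref{lem:vecEPI} is available in closed form, so the inequality and its equality condition can be spot-checked numerically. The sketch below (ours; taking $U = \mathbf{X} + W$ with Gaussian $W$ is just one admissible choice of auxiliary) verifies strict inequality for generic noise and near-equality when $\Sigma_{X|U} \propto \Sigma_Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_pd(n):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)   # random positive definite matrix

Sx, Sz = rand_pd(n), rand_pd(n)
Sy = Sx + Sz

def mi(S1, S2, C12):
    """Mutual information (nats) of jointly Gaussian vectors, cross-cov C12."""
    joint = np.block([[S1, C12], [C12.T, S2]])
    return 0.5 * np.log(np.linalg.det(S1) * np.linalg.det(S2)
                        / np.linalg.det(joint))

def sides(Sw):
    """LHS and RHS of the lemma for U = X + W, W ~ N(0, Sw) independent."""
    Su = Sx + Sw
    N = lambda I: np.exp(-2 * I / n)          # entropy-power-like term
    lhs = N(mi(Sy, Su, Sx))                   # Cov(Y, U) = Sigma_X
    rhs = ((np.linalg.det(Sx) / np.linalg.det(Sy))**(1 / n)
           * N(mi(Sx, Su, Sx)) + N(mi(Sx, Sy, Sx)))
    return lhs, rhs

lhs1, rhs1 = sides(rand_pd(n))                # generic W: strict inequality
# Choose W so that Sigma_{X|U} = alpha * Sigma_Z (the equality condition).
alpha = 0.9 * np.linalg.eigvalsh(Sx).min() / np.linalg.eigvalsh(Sz).max()
Sw = np.linalg.inv(np.linalg.inv(alpha * Sz) - np.linalg.inv(Sx))
lhs2, rhs2 = sides(Sw)
```

The scaling of `alpha` guarantees $\alpha \Sigma_Z \prec \Sigma_X$, so the constructed noise covariance `Sw` is positive definite and $\Sigma_{X|U} = (\Sigma_X^{-1} + \Sigma_W^{-1})^{-1} = \alpha\Sigma_Z$, matching the equality condition of the lemma.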
\begin{lemma} \label{lem:MatrixExplicit}Let $U$ be such that $U-\mathbf{X}-\mathbf{Y}$.
\begin{enumerate}
\item If $\lambda \geq 1 + |\Sigma_X^{-1} \Sigma_Z|^{1/n}$, then
\begin{align}
&I(\mathbf{X};U) - \lambda I(\mathbf{Y};U) \geq
\frac{n}{2}\left[\log\left(\frac{|\Sigma_X |^{1/n}(\lambda-1)}{|\Sigma_Z|^{1/n}}\right)-\lambda\log
\left(\frac{| \Sigma_X + \Sigma_Z|^{1/n}(\lambda-1)}{ |\Sigma_Z|^{1/n} \lambda }\right)\right]. \label{matrixLBa}
\end{align}
\item If $0 \leq \lambda \leq 1 + |\Sigma_X^{-1} \Sigma_Z|^{1/n}$, then
\begin{align}
&I(\mathbf{X};U) - \lambda I(\mathbf{Y};U) \geq -\frac{\lambda n}{2}\log \left( \frac{|\Sigma_{X}+\Sigma_Z|^{1/n}}{|\Sigma_{X}|^{1/n} + |\Sigma_{Z}|^{1/n}}\right). \label{matrixLB2a}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
The claim follows from Lemma \ref{calculusLemma} in Appendix \ref{app:Calculus} by identifying $a_1 \leftarrow \frac{|\Sigma_X|^{1/n}}{|\Sigma_X+\Sigma_Z|^{1/n}}$ and $a_2 \leftarrow 2^{-2 I(\mathbf{X};\mathbf{Y})/n} = \frac{|\Sigma_Z|^{1/n}}{|\Sigma_X+\Sigma_Z|^{1/n}}$. The hypothesis that $a_1 + a_2 \leq 1$ is satisfied since the Minkowski determinant theorem \cite{marcus1992survey} asserts that $|\Sigma_X|^{1/n} + |\Sigma_Z|^{1/n} \leq |\Sigma_X+\Sigma_Z|^{1/n}$.
\end{proof}
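The Minkowski determinant inequality invoked in the proof is easy to confirm numerically. The following sketch (ours) draws two random positive definite matrices and checks both the inequality and the resulting hypothesis $a_1 + a_2 \leq 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
Sx = A @ A.T + np.eye(n)          # random positive definite matrices
Sz = B @ B.T + np.eye(n)

lhs = np.linalg.det(Sx)**(1 / n) + np.linalg.det(Sz)**(1 / n)
rhs = np.linalg.det(Sx + Sz)**(1 / n)
a1 = np.linalg.det(Sx)**(1 / n) / rhs   # the a_1, a_2 identified above
a2 = np.linalg.det(Sz)**(1 / n) / rhs
```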
Note that the lower bound \eqref{matrixLBa} is achieved if $U$ can be chosen such that $\mathbf{X}|\{U=u\} \sim N (\mu_u , \Sigma_{X|U})$ for each $u$, and $ \Sigma_{X|U} = \alpha \Sigma_{Z}$. In this case
\begin{align}
\frac{1}{n}I(\mathbf{X};U) - \frac{\lambda}{n} I(\mathbf{Y};U) &= \frac{1}{2n}\log\left( \frac{|\Sigma_{X}|}{|\Sigma_{X|U}|}\right) - \frac{\lambda}{2n}\log \left( \frac{|\Sigma_{X}+\Sigma_Z|}{|\Sigma_{X|U} + \Sigma_Z |} \right) \\
&= \frac{1}{2n}\log\left(
\frac{|\Sigma_{X}|}
{\alpha^n |\Sigma_Z|}
\right)
- \frac{\lambda}{2n}\log \left(
\frac{|\Sigma_{X}+\Sigma_Z|}
{(1+\alpha)^n |\Sigma_Z |}
\right) \\
&= \frac{1}{2}\left[\log\left(\frac{| \Sigma_X |^{1/n}(\lambda-1)}{|\Sigma_Z|^{1/n}}\right)-\lambda\log
\left(\frac{|\Sigma_X + \Sigma_Z|^{1/n}(\lambda-1)}{ |\Sigma_Z|^{1/n} \lambda }\right)\right],
\end{align}
where we set $\alpha = \frac{1}{\lambda-1}$ to arrive at the final equality. The lower bound \eqref{matrixLB2a} is only attainable if $\Sigma_X$ and $\Sigma_Z$ are proportional. In this case, the RHS of \eqref{matrixLB2a} is precisely zero.
\begin{lemma} \label{lem:FVecExplicit}
~
\begin{enumerate}
\item If $\lambda \geq 1 + |\Sigma_X^{-1} \Sigma_Z|^{1/n}$, then
\begin{align}
&\mathbf{F}(\lambda) \geq
\frac{n}{2}\left[\log\left(\frac{|\Sigma_X |^{1/n}(\lambda-1)}{|\Sigma_Z|^{1/n}}\right)-\lambda\log
\left(\frac{| \Sigma_X + \Sigma_Z|^{1/n}(\lambda-1)}{ |\Sigma_Z|^{1/n} \lambda }\right)\right]. \label{matrixLB}
\end{align}
\item If $0 \leq \lambda \leq 1 + |\Sigma_X^{-1} \Sigma_Z|^{1/n}$, then
\begin{align}
&\mathbf{F}(\lambda) \geq -\frac{\lambda n}{2}\log \left( \frac{|\Sigma_{X}+\Sigma_Z|^{1/n}}{|\Sigma_{X}|^{1/n} + |\Sigma_{Z}|^{1/n}}\right). \label{matrixLB2}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
Similar to the scalar setting, it is sufficient to restrict our attention to the setting where $\lambda \geq 1 + |\Sigma_X^{-1} \Sigma_Z|^{1/n}$. Therefore, we assume this throughout the proof.
Let $U,V$ be valid minimizers of \eqref{funLVector}, where $\mathbf{X}|\{U=u\}$ is Gaussian for a.e. $u$. The existence of such $U,V$ is guaranteed by Corollary \ref{existUV_XuGaussVector}. By definition, we have
\begin{align}
\mathbf{F}(\lambda) &= I(\mathbf{X};U)- \lambda I(\mathbf{Y};U) + I(\mathbf{Y};V|U) - \lambda I(\mathbf{X};V|U) \\
&= \int \Big(h(\mathbf{X})-h(\mathbf{X}|u) - \lambda(h(\mathbf{Y}) -h(\mathbf{Y}|u)) + I(\mathbf{Y};V|U=u) - \lambda I(\mathbf{X};V|U=u) \Big)dP_U(u). \label{MatrixIntegrand}
\end{align}
Let $\mathbf{X}_u$ and $\mathbf{Y}_u$ denote the random variables $\mathbf{X},\mathbf{Y}$ conditioned on $\{U=u\}$. Suppose $\mathbf{X}_u \sim N(\mu_u, \Sigma_{X_u})$. Then $\mathbf{X}_u,\mathbf{Y}_u$ are jointly normal, and we can write
$ \mathbf{X}_u = B \mathbf{Y}_u + W$, where $W\sim N(0, \Sigma_{X_u }-\Sigma_{X_u Y_u} \Sigma^{-1}_{ Y_u} \Sigma_{ Y_u X_u} )$ is independent of $\mathbf{Y}_u$, and $B = \Sigma_{X_u Y_u} \Sigma^{-1}_{ Y_u}$. Note that Markovity implies $\mathbf{Y}_u = \mathbf{X}_u + \mathbf{Z}$, so that $\Sigma_{X_u Y_u} = \Sigma_{Y_u X_u} = \Sigma_{X_u}$, which simplifies $\Sigma_{X_u Y_u} \Sigma^{-1}_{ Y_u} \Sigma_{ Y_u X_u}$ to $\Sigma_{X_u} \Sigma^{-1}_{ Y_u} \Sigma_{X_u}$, and also implies $\Sigma_{Y_u}-\Sigma_{X_u}=\Sigma_Z$.
Suppose $\Sigma_{X_u}$ is such that $\lambda \geq 1 + \frac{|\Sigma_{X_u }-\Sigma_{X_u } \Sigma^{-1}_{ Y_u} \Sigma_{ X_u}|^{1/n}}{|\Sigma_{X_u } \Sigma^{-1}_{ Y_u} \Sigma_{ X_u}|^{1/n}} = 1 + \frac{|\Sigma_Z|^{1/n}}{|\Sigma_{X_u}|^{1/n}}$. Then, Lemma \ref{lem:MatrixExplicit} allows us to
lower bound the integrand in \eqref{MatrixIntegrand} as:
\begin{align}
h(\mathbf{X}) - h(\mathbf{X}|u)& - \lambda(h(\mathbf{Y}) - h(\mathbf{Y}|u)) + I(\mathbf{Y};V|U=u) - \lambda I(\mathbf{X};V|U=u) \notag\\
\geq&
\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}}{|\Sigma_{X_u}|^{1/n}}\right) - \lambda \log\left( \frac{|\Sigma_Y|^{1/n}}{|\Sigma_{Y_u}|^{1/n}}\right)\right]\\
&+\frac{n}{2}\left[\log\left(\frac{|B \Sigma_{Y_u} B^T |^{1/n}(\lambda-1)}{|\Sigma_W |^{1/n}}\right)-\lambda\log
\left(\frac{|B \Sigma_{Y_u} B^T+ \Sigma_W |^{1/n}(\lambda-1)}{ | \Sigma_W |^{1/n} \lambda }\right)\right]\notag\\
=&\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}}{|\Sigma_{X_u}|^{1/n}}\right) - \lambda \log\left( \frac{|\Sigma_Y|^{1/n}}{|\Sigma_{Y_u}|^{1/n}}\right)\right]\\
&+\frac{n}{2}\left[\log\left(\frac{|\Sigma_{X_u } \Sigma^{-1}_{ Y_u} \Sigma_{ X_u}|^{1/n}(\lambda-1)}{|\Sigma_{X_u }-\Sigma_{X_u } \Sigma^{-1}_{ Y_u} \Sigma_{ X_u}|^{1/n}}\right)
-\lambda\log
\left(\frac{|\Sigma_{X_u}|^{1/n}(\lambda-1)}{ |\Sigma_{X_u }-\Sigma_{X_u } \Sigma^{-1}_{ Y_u} \Sigma_{ X_u} |^{1/n} \lambda }\right)\right]\notag\\
=&\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}}{|\Sigma_{X_u}|^{1/n}}\right) - \lambda \log\left( \frac{|\Sigma_Y|^{1/n}}{|\Sigma_{Y_u}|^{1/n}}\right)\right]\\
&+\frac{n}{2}\left[\log\left(\frac{|\Sigma_{X_u } |^{1/n}(\lambda-1)}{|\Sigma_{Y_u }-\Sigma_{X_u } |^{1/n}}\right)
-\lambda\log
\left(\frac{|\Sigma_{Y_u}|^{1/n}(\lambda-1)}{ |\Sigma_{Y_u }-\Sigma_{X_u } |^{1/n} \lambda }\right)\right]\notag\\
=&\frac{n}{2}\left[
\log\left(
\frac{|\Sigma_X|^{1/n} (\lambda-1)}
{ |\Sigma_{Z }|^{1/n}}\right)
-\lambda\log\left(
\frac{|\Sigma_{X}+\Sigma_Z|^{1/n}(\lambda-1)}
{ |\Sigma_{Z} |^{1/n} \lambda }
\right)\right]. \label{finalDesired}
\end{align}
On the other hand, suppose $\Sigma_{X_u}$ is such that $0 \leq \lambda \leq 1 + \frac{|\Sigma_Z|^{1/n}}{|\Sigma_{X_u}|^{1/n}}$. Then Lemma \ref{lem:MatrixExplicit} allows us to
lower bound the integrand in \eqref{MatrixIntegrand} as:
\begin{align}
h(\mathbf{X}) - h(\mathbf{X}|u)& - \lambda(h(\mathbf{Y}) - h(\mathbf{Y}|u)) + I(\mathbf{Y};V|U=u) - \lambda I(\mathbf{X};V|U=u) \notag \\
\geq&
\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}}{|\Sigma_{X_u}|^{1/n}}\right) - \lambda \log\left( \frac{|\Sigma_Y|^{1/n}}{|\Sigma_{Y_u}|^{1/n}}\right)\right]\\
&-\frac{\lambda n}{2}\log \left( \frac{|\Sigma_{X_u}|^{1/n}}{|\Sigma_{X_u}\Sigma_{Y_u}^{-1}\Sigma_{X_u}|^{1/n} + |\Sigma_{X_u} - \Sigma_{X_u}\Sigma_{Y_u}^{-1}\Sigma_{X_u}|^{1/n}}\right)\notag\\
=&
\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}}{|\Sigma_{X_u}|^{1/n}}\right)
- \lambda \log \left( \frac{|\Sigma_Y|^{1/n}}{|\Sigma_{X_u} |^{1/n} + |\Sigma_{Z}|^{1/n}}\right)\right]\\
\geq &
\frac{n}{2}\left[ \log\left( \frac{|\Sigma_X|^{1/n}(\lambda-1) }{|\Sigma_{Z}|^{1/n}}\right)
- \lambda \log \left( \frac{|\Sigma_Y|^{1/n}(\lambda-1)}{ |\Sigma_{Z}|^{1/n} \lambda }\right)\right],
\end{align}
where the last line follows since $ -\log\left({|\Sigma_{X_u}|^{1/n}}\right) +
\lambda \log \left( {|\Sigma_{X_u} |^{1/n} + |\Sigma_{Z}|^{1/n}}\right)$ is a monotone decreasing function of $|\Sigma_{X_u} |^{1/n}$ provided $\lambda \leq 1 + \frac{|\Sigma_Z|^{1/n}}{|\Sigma_{X_u}|^{1/n}}$. Thus, setting $|\Sigma_{X_u} |^{1/n} = \frac{1}{\lambda-1}|\Sigma_{Z} |^{1/n}$ only weakens the inequality.
\end{proof}
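The monotonicity claim in the last step can be checked directly: writing $s$ for $|\Sigma_{X_u}|^{1/n}$ and $c$ for $|\Sigma_Z|^{1/n}$, the function $s \mapsto -\log s + \lambda \log(s+c)$ is nonincreasing precisely on $\{s : \lambda \leq 1 + c/s\} = (0, c/(\lambda-1)]$. A quick numerical sketch (ours, with arbitrary illustrative constants):

```python
import numpy as np

c, lam = 1.0, 1.5                            # c plays the role of |Sigma_Z|^{1/n}
s = np.linspace(0.05, c / (lam - 1), 400)    # region where lam <= 1 + c/s
f = -np.log(s) + lam * np.log(s + c)
monotone = np.all(np.diff(f) <= 1e-12)       # f is nonincreasing on this region
```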
The combination of Lemmas \ref{lem:vecEPI}-\ref{lem:FVecExplicit} proves Theorem \ref{thm:vector}.
\section{Closing Remarks}\label{sec:conclusion}
The focus of this paper was on the extremal result asserted by Theorem \ref{thm:vector}, and not on operational coding problems. However, since the entropy-power-like inequality of Theorem \ref{thm:vector} leads to what is arguably the simplest solution for the two-encoder quadratic Gaussian source coding problem (an archetypical problem in network information theory), we have little doubt that it will find other interesting applications. We provided Theorem \ref{thm:Ellipsoid} as one such example. As another example, Theorem \ref{thm:scalar} can be applied to show that jointly Gaussian auxiliaries exhaust the rate region for multiterminal source coding under logarithmic loss \cite{bib:CourtadeWeissman_LogLoss_TransIT2014} when the sources are Gaussian. This leads to yet another solution for the two-encoder quadratic Gaussian source coding problem, and unifies the two problems under the paradigm of compression under logarithmic loss.
\section*{Acknowledgement}
The authors acknowledge several conversations with Tsachy Weissman, Kartik Venkat, and Vignesh Ganapathi-Subramanian which led to deeper intuition and insight. %
The first author also wishes to acknowledge an inspiring discussion with Chandra Nair that took place at the 2014 International Zurich Seminar on Communications following the presentation of \cite{bib:CourtadeIZS}. This conversation and the following exchanges generated the spark which led to a successful proof of Theorem \ref{thm:scalar}, a long-held conjecture of the first author that was first made public on the \emph{Information Theory b-Log} in March, 2013 \cite{BLOG, ITSOCnewsletter}.
\appendices
\section{Proof of Theorem \ref{thm:Ellipsoid}}\label{app:ellipsoidProof}
\noindent \textbf{Converse Part:}
Define $U=f_x(\mathbf{X}_1, \dots, \mathbf{X}_k)$, and $V=f_y(\mathbf{Y}_1, \dots, \mathbf{Y}_k)$, and suppose that $U,V$ are such that we can determine an ellipsoid $\mathcal{E}(A_x,b_x)$ having volume bounded by
\begin{align}
\Big( \operatorname{vol}(\mathcal{E}(A_x,b_x) ) \Big)^{1/n}= \frac{c_n^{1/n}}{|A_x|^{1/n}} \leq {c_n^{1/n}}{\sqrt{{n \nu_x} |\Sigma_n|^{1/n}}} \label{VolSupp}
\end{align}
and containing the points $\{\mathbf{X}_i\}_{i=1}^k$ with probability at least $1-\epsilon_n$, where $\lim_{n\rightarrow\infty} \epsilon_n= 0$.
For $i\in \{1,\dots,k\}$, define the indicator random variable $E_i = \mathbf{1}_{\{\mathbf{X}_i\notin \mathcal{E}(A_x,b_x) \}}$, and note that
\begin{align}
&\left| \mathbb{E}\left[ \left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)\left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)^T \Big| E_i=0\right]\right|^{1/n} \notag\\
&\leq \frac{1}{n}{\operatorname{Tr}\left( \mathbb{E}\left[ \left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)\left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)^T \Big| E_i=0 \right] \right)}\label{AGineq}\\
&= \frac{1}{n}\mathbb{E}\left[ \left\| A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right\|^2 \Big| E_i=0 \right]\\
& \leq \frac{1}{n}\mathbb{E}\left[ \left\| A_x \mathbf{X}_i -b_x\right\|^2 \Big| E_i=0 \right]\label{mmseEst}\\
&\leq \frac{1}{n},\label{EllipsHyp}
\end{align}
where \eqref{AGineq} follows from the inequality of arithmetic and geometric means, \eqref{mmseEst} follows since conditional expectation minimizes mean square error, and \eqref{EllipsHyp} follows since $\{E_i=0\}\Rightarrow \{\mathbf{X}_i \in \mathcal{E}(A_x,b_x)\}\Rightarrow \{\left\| A_x \mathbf{X}_i -b_x\right\|\leq 1\}$. Summarizing the above, we have established that, for each $i=1,\dots,k$,
\begin{align}
\left| \mathbb{E}\left[ \left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)\left(A_x \mathbf{X}_i - \mathbb{E}[A_x\mathbf{X}_i|U,V,E_i] \right)^T \Big| E_i=0\right]\right|^{1/n} \leq \frac{1}{n}. \label{maxEntLem}
\end{align}
Applying \eqref{maxEntLem} in conjunction with the maximum-entropy property of Gaussians, we have
\begin{align}
\frac{1}{n}I(\mathbf{X}_i;U,V|E_i=0) &= \frac{1}{n}h(\mathbf{X}_i|E_i=0)- \frac{1}{n}h(\mathbf{X}_i|U,V,E_i=0)\\
&=\frac{1}{n}h(\mathbf{X}_i|E_i=0)- \frac{1}{n}h\left(A_x \mathbf{X}_i|U,V,E_i=0\right) + \frac{1}{n}\log |A_x | \\
&\geq \frac{1}{n}h(\mathbf{X}_i|E_i=0) - \frac{1}{2n}\log\left( (2\pi e)^n\right) - \frac{1}{2}\log\left( \frac{1}{n}\right) + \frac{1}{2n}\log \frac{1}{n^n \nu_x^n |\Sigma_n|} \label{keyVolIneq}\\
&= \frac{1}{n}\Big( h(\mathbf{X}_i|E_i=0)-h(\mathbf{X}_i)\Big) + \frac{1}{2}\log \frac{1}{\nu_x}.
\end{align}
Since $I(\mathbf{X}_i;U,V)\geq I(\mathbf{X}_i;U,V|E_i)-H(E_i)\geq \Pr\{E_i=0\}I(\mathbf{X}_i;U,V|E_i=0) - H(E_i)$ and $\Pr\{E_i=0\} \geq 1-\epsilon_n \rightarrow 1$, it follows by Lemma \ref{lem:hXE_hX} (see Appendix \ref{app:Calculus}) that, for any $\delta>0$,
\begin{align}
\frac{1}{n} I(\mathbf{X}_i;U,V) \geq \frac{1}{2}\log \frac{1}{\nu_x\sqrt{1+\delta}}
\end{align}
for $n$ sufficiently large. Since $\mathbf{X}_1, \dots, \mathbf{X}_k$ are mutually independent, we have
\begin{align}
\frac{1}{k n} I(\mathbf{X}_1, \dots, \mathbf{X}_k;U,V) \geq \frac{1}{k n} \sum_{i=1}^k I(\mathbf{X}_i;U,V) \geq \frac{1}{2}\log \frac{1}{\nu_x\sqrt{1+\delta}}\label{xCons}
\end{align}
for all $n$ sufficiently large.
By a symmetric argument, it also holds that
\begin{align}
\frac{1}{k n} I(\mathbf{Y}_1, \dots, \mathbf{Y}_k;U,V) \geq \frac{1}{2}\log \frac{1}{\nu_y\sqrt{1+\delta}}\label{yCons}
\end{align}
for $n$ sufficiently large. Thus, since
\begin{align}
R_x+R_y \geq \frac{1}{k n} H(U,V) \geq \frac{1}{k n} I(\mathbf{X}_1,\mathbf{Y}_1, \dots, \mathbf{X}_k,\mathbf{Y}_k;U,V),
\end{align}
it follows from \eqref{eqnRD} and \eqref{RDineq} that
\begin{align}
R_x + R_y \geq \frac{1}{2}\log \frac{(1-\rho^2)\beta((1+\delta)\nu_x \nu_y)}{2(1+\delta)\nu_x \nu_y}.
\end{align}
Since $\delta>0$ can be taken arbitrarily small, \eqref{RxyConstratint} must hold.
Likewise, since \eqref{introWhite} implies
\begin{align}
2^{\frac{2}{kn} \left(I(\mathbf{X}_1, \dots, \mathbf{X}_k;U|V)-I(\mathbf{X}_1, \dots, \mathbf{X}_k;U,V)\right)} = 2^{-\frac{2}{kn} I(\mathbf{X}_1, \dots, \mathbf{X}_k;V)} &\geq (1-\rho^2) + \rho^2 2^{-\frac{2}{kn} I(\mathbf{Y}_1, \dots, \mathbf{Y}_k;V)},%
\end{align}
it follows that
\begin{align}
2^{\frac{2}{kn} \left(H(U)-I(\mathbf{X}_1, \dots, \mathbf{X}_k;U,V)\right)} &\geq (1-\rho^2) + \rho^2 2^{-\frac{2}{kn} H(V)}.%
\end{align}
Therefore, \eqref{xCons} also implies \eqref{RxConstratint}. By a symmetric argument, we obtain \eqref{RyConstratint}.
\medskip
\noindent \textbf{Direct Part:}
Fix $\epsilon>0$, and assume \eqref{RxConstratint}-\eqref{RxyConstratint} are satisfied. Further, diagonalize $\Sigma = U_{\Sigma} \Lambda U_{\Sigma}^T$ (throughout the proof, we suppress the explicit dependence of the covariance matrices on $n$ for convenience).
Suppose $(X^n,Y^n)$ is a sequence of independent pairs of random variables, where the pairs $(X_j,Y_j)$ are jointly normal with linear correlation $\rho$, and $X_j,Y_j$ each have unit variance.
For $n$ sufficiently large, the achievability result for the two-encoder quadratic Gaussian source coding problem (e.g., \cite[Theorem 1]{WagnerRateRegion2008}) implies that there exist functions $f_x : \mathbb{R}^{n} \rightarrow \{1,2,\dots,2^{n R_x }\}$ and $f_y : \mathbb{R}^{n} \rightarrow \{1,2,\dots,2^{n R_y }\}$
for which
\begin{align}
\Pr\left\{ \mathbb{E} \left[ \| X^n - \mathbb{E}\left[ X^n |f_x(X^n),f_y(Y^n)\right] \|^2\right] > n\nu_x \right\} &< \epsilon\quad\mbox{and} \\
\Pr\left\{ \mathbb{E} \left[ \| Y^n - \mathbb{E}\left[ Y^n |f_x(X^n),f_y(Y^n)\right] \|^2\right] > n\nu_y \right\} &< \epsilon.
\end{align}
Therefore, the rates $(R_x,R_y)$ are sufficient to communicate $k$ ellipsoids of the form
\begin{align}
\mathcal{E}_i = \{ x : \|A_x x - b_i\| \leq 1 \} \mbox{~~~~for $i=1,2,\dots, k$,}
\end{align}
where $A_x \triangleq \frac{1}{\sqrt{n \nu_x } }\Lambda^{-1/2}U_{\Sigma}^T$, $b_i \triangleq A_x \mathbb{E}\left[ {\mathbf{X}_i}|f_x({\mathbf{X}_i}),f_y({\mathbf{Y}_i})\right]$, and $\mathbf{X}_i \in \mathcal{E}_i$ with probability greater than $1-\epsilon$ for $i=1,2,\dots, k$. This follows since the pair of vectors $(\Lambda^{-1/2}U_{\Sigma}^T \mathbf{X}_i, \Lambda^{-1/2}U_{\Sigma}^T \mathbf{Y}_i)$ is equal in distribution to $(X^n,Y^n)$.
Let $\delta>0$ satisfy $\sqrt{\delta/\nu_x} < \epsilon$, and define $\tau = 1-2\sqrt{\delta}$, and $\gamma = (1-\delta)\tau$ for convenience. Further, let $u_1, \dots, u_k$ be an orthonormal basis for the vector space spanned by $ b_1, \dots, b_k$, and define
\begin{align}
\matr{\mathbf{B_x}} \triangleq
\left(\tau I_n - \gamma \sum_{i=1}^k u_i u_i^T\right)A_x.
\end{align}
We remark that $\matr{\mathbf{B_x}}$ is a random matrix, and is a function of $\{f_x({\mathbf{X}_i}),f_y({\mathbf{Y}_i})\}_{i=1}^k$. Note that
\begin{align}
\matr{\mathbf{B_x}} \mathbf{X}_i &= \left(\tau I_n - \gamma \sum_{j=1}^k u_j u_j^T\right) A_x \mathbf{X}_i = \left(\tau I_n - \gamma \sum_{j=1}^k u_j u_j^T\right) \widetilde{\mathbf{X}}_i ,
\end{align}
where $\widetilde{\mathbf{X}}_i \triangleq A_x \mathbf{X}_i$ for $i=1,2, \dots, k$. Define $\mathbf{Z}_i=\widetilde{\mathbf{X}}_i - b_i$, and continue with
\begin{align}
\matr{\mathbf{B_x}} \mathbf{X}_i
= \left(\tau I_n - \gamma \sum_{j=1}^k u_j u_j^T\right) \widetilde{\mathbf{X}}_i
&=\tau ( \widetilde{\mathbf{X}}_i - b_i ) + \tau b_i - \gamma \sum_{j=1}^k u_j u_j^T\left(\mathbf{Z}_i +b_i\right) \\
&=\tau ( \widetilde{\mathbf{X}}_i - b_i ) + (\tau-\gamma) b_i - \gamma \sum_{j=1}^k u_j u_j^T \mathbf{Z}_i,\label{orthProj}
\end{align}
where \eqref{orthProj} follows since $\sum_{j=1}^k u_j u_j^T$ is an orthogonal projection onto the space spanned by $b_1, \dots, b_k$.
Applying the triangle inequality, we can conclude
\begin{align}
\|\matr{\mathbf{B_x}} \mathbf{X}_i \| &\leq \tau \| \widetilde{\mathbf{X}}_i - b_i \| + (\tau-\gamma) \|b_i \| + \gamma \left\| \sum_{j=1}^k u_j u_j^T \mathbf{Z}_i \right\|\\
&= \tau \| \widetilde{\mathbf{X}}_i - b_i \| + (\tau-\gamma) \|b_i \| + \gamma \sqrt{\sum_{j=1}^k (u_j^T \mathbf{Z}_i)^2}\\
&\leq \tau \| \widetilde{\mathbf{X}}_i - b_i \| + \delta \|b_i \| + \sqrt{\sum_{j=1}^k (u_j^T \mathbf{Z}_i)^2}.
\end{align}
Using Jensen's and Markov's inequalities, we can bound
\begin{align}
\Pr\left\{ \|b_i \| \geq \frac{1}{\sqrt{\delta}} \right\} &= \Pr\left\{ \left\|\mathbb{E}\left[ {\widetilde{\mathbf{X}}_i}|f_x({\mathbf{X}_i}),f_y({\mathbf{Y}_i})\right] \right\| \geq \frac{1}{\sqrt{\delta}} \right\}\\
&\leq \Pr\left\{ \mathbb{E}\left[ \left\|{\widetilde{\mathbf{X}}_i} \right\| \Big| f_x({\mathbf{X}_i}),f_y({\mathbf{Y}_i})\right] \geq \frac{1}{\sqrt{\delta}} \right\}\\
&\leq \sqrt{\delta} \mathbb{E}\left[ \left\|{\widetilde{\mathbf{X}}_i} \right\|\right] \\
&\leq \sqrt{\delta\mathbb{E}\left[ \left\|{\widetilde{\mathbf{X}}_i} \right\|^2 \right] }\\
&= \sqrt{\frac{\delta}{\nu_x}}\label{usedXDist}\\
&\leq \epsilon,
\end{align}
where \eqref{usedXDist} follows since $\widetilde{\mathbf{X}}_i \sim N\left(0,\frac{1}{n\nu_x} I_n\right)$, so that $\mathbb{E}\big[\|\widetilde{\mathbf{X}}_i\|^2\big] = \frac{1}{\nu_x}$.
Next, since $b_i$ is the MMSE estimate of $\widetilde{\mathbf{X}}_i$ given $f_x({\mathbf{X}_i}),f_y({\mathbf{Y}_i})$ by construction, we have
\begin{align}
\mathbb{E} \left[ \sum_{j=1}^k (u_j^T \mathbf{Z}_i)^2 \right] &=\sum_{j=1}^k u_j^T\Sigma_{\mathbf{Z}_i} u_j \leq \frac{k}{n \nu_x},
\end{align}
where the inequality follows since the middle quantity is upper bounded by the sum of the $k$ largest eigenvalues of $\Sigma_{\mathbf{Z}_i}$, which are themselves upper bounded by the $k$ largest eigenvalues of $\Sigma_{\widetilde{\mathbf{X}}_i} = \frac{1}{n\nu_x}I_n$. Therefore, proceeding with Markov's inequality, we have
\begin{align}
\Pr\left\{ \sum_{j=1}^k (u_j^T \mathbf{Z}_i)^2 \geq \delta \right\} \leq \frac{k }{\delta n \nu_x},
\end{align}
which is upper-bounded by $\epsilon$ for $n$ sufficiently large.
Also by construction, $\| \widetilde{\mathbf{X}}_i - b_i \| = \| A_x {\mathbf{X}}_i - b_i \| \leq 1$ with probability greater than $1-\epsilon$. Therefore for $n$ sufficiently large, we can conclude that
\begin{align}
\Pr \Big\{ \|\matr{\mathbf{B_x}} \mathbf{X}_i \| \leq 1 \Big\} > 1-3\epsilon .%
\end{align}
Finally, note that
\begin{align}
\left| \matr{\mathbf{B_x}}\right| &= \left| A_x \right| \left| \tau I_n - \gamma \sum_{i=1}^k u_i u_i^T \right| = \left| A_x\right| \tau^{n-k}(\tau-\gamma)^k,
\end{align}
and so
\begin{align}
\left| \matr{\mathbf{B_x}}\right| ^{1/n} = \tau \delta^{k/n} |A_x|^{1/n} = \frac{(1-2\sqrt{\delta}) \delta^{k/n}}{\sqrt{n \nu_x|\Sigma|^{1/n}}}.
\end{align}
Since $\delta$ can be taken arbitrarily small, a symmetric argument involving the $\mathbf{Y}_i$'s completes the proof.
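The determinant evaluation above relies on the identity $\big|\tau I_n - \gamma \sum_{i=1}^k u_i u_i^T\big| = \tau^{n-k}(\tau-\gamma)^k$ for orthonormal $u_1,\dots,u_k$: the matrix has eigenvalue $\tau-\gamma$ with multiplicity $k$ on $\operatorname{span}\{u_i\}$ and eigenvalue $\tau$ on its orthogonal complement. A small numerical sketch (ours, with arbitrary illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
tau, gamma = 0.8, 0.3
Q, _ = np.linalg.qr(rng.normal(size=(n, k)))   # orthonormal u_1, ..., u_k
P = Q @ Q.T                                    # projection onto span{u_i}
detval = np.linalg.det(tau * np.eye(n) - gamma * P)
target = tau**(n - k) * (tau - gamma)**k
```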
\section{Existence of Minimizers}\label{app:InfimaObtained}
We will require the following result on lower semicontinuity of relative entropy.
\begin{lemma} \cite[Lemma 1.4.3]{dupuis2011weak}\label{lem:DSemiCont}
Let $\mathcal{X}$ be a Polish space, and let $\mathcal{P}(\mathcal{X})$ denote the set of probability measures on $\mathcal{X}$. The relative entropy $D(P \|Q)$ is a lower semicontinuous function of $(P,Q) \in \mathcal{P}(\mathcal{X})\times \mathcal{P}(\mathcal{X})$ with respect to the weak topology.
\end{lemma}
\begin{lemma}\label{lem:InfAttained}
The infimum in \eqref{funL} is attained.
\end{lemma}
\begin{proof}
We can assume $\lambda>1$; otherwise, the data processing inequality implies that $F(\lambda)\geq 0$, and this bound is attained by taking $U,V$ constant.
First, we show that if $\{U_n,X_n,Y_n,V_n\}_{n\geq 1}$ is a sequence of candidate minimizers\footnote{i.e., $(X_n,Y_n)$ equal $(X,Y)$ in distribution, and $U_n-X_n-Y_n-V_n$ for each $n$.} of \eqref{funL} which converge weakly to $(U^*,X^*,Y^*,V^*)$, then $(U^*,X^*,Y^*,V^*)$ is also a candidate minimizer. To see this, note that Lemma \ref{lem:DSemiCont} asserts that relative entropy is lower semicontinuous with respect to the weak topology, and hence
\begin{align}
0 =\lim_{n\rightarrow \infty} D(P_{X_n Y_n} \| P_{XY}) \geq D(P_{X^* Y^*} \| P_{XY}) \geq 0.
\end{align}
Therefore, we see that $(X^*,Y^*)$ must be equal to $(X,Y)$ in distribution. Similarly, by recognizing that (unconditional) mutual information is a relative entropy, lower semicontinuity also yields
\begin{align}
I(X;Y) =\lim_{n\rightarrow \infty} I(U_n,X_n;Y_n,V_n) \geq I(U^*,X^*;Y^*,V^*) \geq I(X^*;Y^*) = I(X;Y).
\end{align}
Hence, we can conclude that $U^*-X^*-Y^*-V^*$, and therefore $(U^*,X^*,Y^*,V^*)$ is a candidate minimizer as desired.
Suppose again that $\{U_n,X_n,Y_n,V_n\}_{n\geq 1}$ is a sequence of candidate minimizers that converges weakly to $(U^*,X^*,Y^*,V^*)$. As established previously, $(X_n,Y_n) = (X^*,Y^*) = (X,Y)$ in distribution. Fix an arbitrary conditional distribution $P_{Z | Y}$. Let $P_{Y} \rightarrow P_{Z | Y} \rightarrow P_{Z}$.
By lower semicontinuity of relative entropy, we have:
\begin{align}
\liminf_{n\rightarrow \infty} D( P_{X_n Y_n U_n} P_{Z} \| P_{X_n U_n} P_{Y Z}) \geq
D( P_{X^* Y^* U^*} P_{Z} \| P_{X^* U^*} P_{Y Z}).
\end{align}
Thus, there exists $\delta(n)\rightarrow 0$ such that
\begin{align}
D( P_{X_n Y_n U_n} P_{Z} \| P_{X_n U_n} P_{Y Z}) \geq
D( P_{X^* Y^* U^*} P_{Z} \| P_{X^* U^*} P_{Y Z}) -\delta(n),
\end{align}
which implies
\begin{align}
\inf_{P_{Z|Y}} D( P_{X_n Y_n U_n} P_{Z} \| P_{X_n U_n} P_{Y Z}) \geq
\inf_{P_{Z|Y}} D( P_{X^* Y^* U^*} P_{Z} \| P_{X^* U^*} P_{Y Z}) -\delta(n).
\end{align}
By the variational representation of mutual information, the infima on the LHS and RHS are attained when $P_{Z|Y} = P_{U_n|Y}$ and $P_{Z|Y} = P_{U^*|Y}$, respectively. Therefore,
\begin{align}
I(X_n ; Y_n | U_n) \geq I(X^* ; Y^* | U^* ) -\delta(n).
\end{align}
Since $\delta(n)$ vanishes, we can conclude that
\begin{align}
\liminf_{n\rightarrow \infty} I(X_n ; Y_n | U_n) &\geq I(X^* ; Y^* | U^* ). \label{semi1}%
\end{align}
Now, we can add $2\lambda I(X;Y)$ to the functional being considered in \eqref{funL} without changing the nature of the optimization problem. Therefore, we aim to show that the infimum of the functional
\begin{align}
&I(X;U)-\lambda I(Y;U) + I(Y;V|U)-\lambda I(X;V|U) + 2 \lambda I(X;Y)\\
=&I(X;U) + \lambda I(X;Y|U) + I(Y;V) + \lambda I(X;Y|V) + (\lambda-1) I(U;V).
%
\end{align}
is attained. By our previous observation, if $\{U_n,X_n,Y_n,V_n\}_{n\geq 1}$ is a sequence of candidate minimizers which approach the infimum of \eqref{funL} and converge weakly to $(U^*,X^*,Y^*,V^*)$, then we can apply lower semicontinuity again together with \eqref{semi1} and its symmetric variant $\liminf_{n\rightarrow \infty} I(X_n ; Y_n | V_n) \geq I(X^* ; Y^* | V^* )$ to obtain
\begin{align}
F(\lambda) + 2 \lambda I(X;Y) &= \lim_{n\rightarrow \infty} I(X_n;U_n) + \lambda I(X_n;Y_n|U_n) + I(Y_n;V_n) + \lambda I(X_n;Y_n|V_n) + (\lambda-1) I(U_n;V_n) \notag\\
&\geq I(X^*;U^*) + \lambda I(X^*;Y^*|U^*) + I(Y^*;V^*) + \lambda I(X^*;Y^*|V^*) + (\lambda-1) I(U^*;V^*)\\
&\geq F(\lambda) + 2 \lambda I(X;Y),
\end{align}
implying equality throughout, and optimality of $(U^*,X^*,Y^*,V^*)$.
Thus, we only need to show that if $\{U_n,X_n,Y_n,V_n\}_{n\geq 1}$ is a sequence of candidate minimizers, then there exists a subsequence $\{U_{n_k},X_{n_k},Y_{n_k},V_{n_k}\}_{k\geq 1}$ which converges weakly to some $(U^*,X^*,Y^*,V^*)$. Since mutual information is invariant under one-to-one transformations of support, we can assume without loss of generality that $U_n,V_n$ are each supported on the interval $[0,1]$ for all $n$. Recalling Prohorov's theorem \cite[Theorem 3.9.2]{durrett2010probability}, we only need to show that the sequence of measures $P_n$ is tight, where $P_n$ is the joint distribution of $(U_n,X_n,Y_n,V_n)$. To this end, note that for any $\epsilon>0$, we can choose $t$ sufficiently large so that
\begin{align}
P_n \left\{ (U_n,X_n,Y_n,V_n) \notin [-t,t]^4\right\} = P_n \left\{ (X_n,Y_n) \notin [-t,t]^2 \right\} < \epsilon.
\end{align}
Thus, the claim is proved.
\end{proof}
\section{Auxiliary Lemmas}\label{app:Calculus}
\begin{lemma}\label{lem:hXE_hX}
For $n= 1, 2, \dots$, suppose $X^n \sim N(0,\Sigma_n)$, where $\Sigma_n \in \mathbb{R}^{n\times n}$ is positive definite, and let $E_n\in\{0,1\}$ be correlated with $X^n$. If $\lim_{n\rightarrow \infty} \Pr\{E_n = 0\} = 1$, then
\begin{align}
\lim_{n\rightarrow \infty } \frac{1}{n}\Big( h(X^n|E_n=0) - h(X^n) \Big) = 0. \label{normalizeByn}
\end{align}
\end{lemma}
\begin{proof}
For the proof, we suppress the explicit dependence of $X^n$ and $\Sigma_n$ on $n$, and simply write $\mathbf{X}$ and $\Sigma$.
Note that
\begin{align}
H(E_n)\geq I(\mathbf{X};E_n) &= \Pr\{E_n=0\}D(P_{\mathbf{X}|E_n=0} \| P_{\mathbf{X}} ) + \Pr\{E_n=1\}D(P_{\mathbf{X}|E_n=1} \| P_{\mathbf{X}} ) \\
&\geq \Pr\{E_n=0\}D(P_{\mathbf{X}|E_n=0} \| P_{\mathbf{X}} ),
\end{align}
and therefore $D(P_{\mathbf{X}|E_n=0} \| P_{\mathbf{X}} )\rightarrow 0$. Define $Z_n = \frac{1}{n}\mathbf{X}^T \Sigma^{-1} \mathbf{X}$, and $W_n = \frac{1}{n}\mathbf{X}^T \Sigma^{-1} \mathbf{X} |\{E_n=0\}$. By the data processing theorem for relative entropy, $D(P_{W_n} \| P_{Z_n} )\rightarrow 0$ as well. Thus, for any $\varepsilon>0$, Pinsker's inequality and the WLLN together imply
\begin{align}
\Pr\{ |W_n-1| \geq \varepsilon \} &= \Pr\{ |W_n-1| \geq \varepsilon \} - \Pr\{ |Z_n-1| \geq \varepsilon \} + \Pr\{ |Z_n-1| \geq \varepsilon \}\\
&\leq \| P_{W_n}-P_{Z_n} \|_{TV} + \Pr\{ |Z_n-1| \geq \varepsilon \}\\
&\leq \varepsilon
\end{align}
for all $n$ sufficiently large, establishing that $W_n\rightarrow 1$ in probability. For any bounded measurable function $f : \mathbb{R}^n\rightarrow \mathbb{R}$, we have%
\begin{align}
\mathbb{E}\left[ \mathbf{X}^T \Sigma^{-1} \mathbf{X} f(\mathbf{X}) \right] = \Pr\{E_n=0\} \mathbb{E}\left[ \mathbf{X}^T \Sigma^{-1} \mathbf{X} f(\mathbf{X}) | E_n=0 \right]
+ \Pr\{E_n=1\} \mathbb{E}\left[ \mathbf{X}^T \Sigma^{-1} \mathbf{X} f(\mathbf{X}) | E_n=1 \right], \notag
\end{align}
so it follows that
\begin{align}
\Pr\{E_n = 0\} \mathbb{E}\left[ W_n 1_{\{W_n \geq K\}}\right] \leq \mathbb{E}\left[ Z_n 1_{\{Z_n \geq K\}}\right] \label{unifIntegra}
\end{align}
by non-negativity of $W_n$ and $Z_n$. The Cauchy-Schwarz and Markov inequalities together imply
\begin{align}
\left| \mathbb{E}\left[ Z_n 1_{\{Z_n \geq K\}}\right] \right|^2 \leq \mathbb{E}\left[ Z_n^2 \right] \Pr\{Z_n\geq K\} =\frac{n+2}{n} \Pr\{Z_n\geq K\}\leq \frac{3}{K},
\end{align}
and therefore \eqref{unifIntegra} implies that $\{W_n\}_{n\geq 1}$ is uniformly integrable. It follows that $\mathbb{E}[W_n]\rightarrow 1 = \mathbb{E}[Z_n]$ (e.g., \cite[Theorem 5.5.2]{durrett2010probability}).
To conclude, we observe that
\begin{align}
\frac{1}{n}\Big( h(\mathbf{X}|E_n=0) - h(\mathbf{X}) \Big)
&= -\frac{1}{n}D(P_{\mathbf{X}|E_n=0} || P_\mathbf{X} ) + \frac{1}{n}\int_{\mathbb{R}^n} \Big( P_\mathbf{X}(\mathbf{x}) - P_{\mathbf{X}|E_n=0}(\mathbf{x}) \Big) \log(P_\mathbf{X}(\mathbf{x})) d\mathbf{x} \\
&= -\frac{1}{n}D(P_{\mathbf{X}|E_n=0} || P_\mathbf{X} ) -\frac{1}{2n} \mathbb{E}\left[ \mathbf{X}^T \Sigma^{-1} \mathbf{X} \right] + \frac{1}{2n} \mathbb{E}\left[ \mathbf{X}^T \Sigma^{-1} \mathbf{X} | E_n=0 \right] \\
&= -\frac{1}{n}D(P_{\mathbf{X}|E_n=0} || P_\mathbf{X} ) +\frac{1}{2}\left(\mathbb{E}[W_n] - \mathbb{E}[Z_n]\right),
\end{align}
which completes the proof.
\end{proof}
Lemma \ref{lem:hXE_hX} can be viewed as a regularity property enjoyed by Gaussian vectors. Though not needed elsewhere in this paper, it is interesting to note that Lemma \ref{lem:hXE_hX} is sharp in the following sense:
\begin{proposition}
For $n= 1, 2, \dots$, let $X^n \sim N(0,\Sigma_n)$, where $\Sigma_n \in \mathbb{R}^{n\times n}$ is positive definite. For any function $g(n)\rightarrow \infty$ arbitrarily slowly, there exists a sequence of random variables $E_n\in\{0,1\}$ such that $\lim_{n\rightarrow \infty} \Pr\{E_n = 0\} = 1$ and
\begin{align}
\lim_{n\rightarrow \infty } \frac{g(n)}{n}\Big( h(X^n|E_n=0) - h(X^n) \Big) = \infty.
\end{align}
In particular, the normalization by $1/n$ in \eqref{normalizeByn} is essential.
\end{proposition}
\begin{proof}
From the proof of Lemma \ref{lem:hXE_hX}, we can assume without loss of generality that $X^n$ is iid $N(0,1)$. Let $E_n = 0$ if there are at least $f(n)$ different $X_i$'s for which $|X_i| \geq \sqrt{2}$, and let $E_n = 1$ otherwise.
Since the $X_i$'s are independent, we can see that
\begin{align}
\mathbb{E}\left[ \sum_i X_i^2 \Big| E_n = 0 \right] \geq 2 f(n) + (n-f(n)) = f(n) + n.
\end{align}
On the other hand,
\begin{align}
\mathbb{E}\left[ \sum_i X_i^2 \right] = n.
\end{align}
Now, we have that $\Pr\{E_n = 0\} = \Pr\{ B(n,p) \geq f(n) \}$, where $B(n,p)$ is a Binomial random variable consisting of $n$ trials with bias $p = \Pr\{ |X_i| \geq \sqrt{2} \}$. Continuing, we have
\begin{align}
\Pr\{E_n = 0\} = \Pr\{ B(n,p) \geq f(n) \} = \Pr\left\{ \frac{1}{\sqrt{n}}(B(n,p) - np)\geq \sqrt{n}\left( \frac{1}{n} f(n) - p\right) \right\}.
\end{align}
By the CLT, $\frac{1}{\sqrt{n}}(B(n,p) - np) \rightarrow N(0,p(1-p))$ in distribution. Therefore, $\Pr\{E_n = 0\} \rightarrow 1$ provided $f(n) = o(n)$. Recalling the proof of Lemma \ref{lem:hXE_hX}, we have
\begin{align}
h(X^n | E_n =0 ) - h(X^n) \geq \frac{1}{2}f(n) +o(1).
\end{align}
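This inequality follows from the final display in the proof of Lemma \ref{lem:hXE_hX}, written without the $1/n$ normalization and with $\Sigma = I$:
\begin{align}
h(X^n | E_n=0 ) - h(X^n) = -D(P_{X^n|E_n=0} \,\|\, P_{X^n}) + \frac{1}{2}\left( \mathbb{E}\left[ \sum_i X_i^2 \,\Big|\, E_n = 0 \right] - n \right) \geq -D(P_{X^n|E_n=0} \,\|\, P_{X^n}) + \frac{1}{2} f(n),
\end{align}
where the relative entropy term vanishes asymptotically since $\Pr\{E_n=0\}\rightarrow 1$ forces $H(E_n)\rightarrow 0$.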
Thus, the claim is proved by putting $f(n) = n/\sqrt{g(n)}$, where $g(n)\rightarrow \infty$ arbitrarily slowly.
\end{proof}
\begin{lemma} \label{calculusLemma}
Consider a function $f : [0,\infty) \rightarrow \mathbb{R}$ defined implicitly by:
\begin{align}
2^{-2 t} = a_1 2^{-2 f(t)} + a_2. \label{implicit}
\end{align}
If $a_1 + a_2 \leq 1$, then
\begin{align}
\min_{t\geq 0} \Big\{ \max\{f(t),0\} - \lambda t \Big\}= \begin{cases}
\frac{1}{2}\log \left(\frac{a_1 (\lambda-1)}{a_2} \right) -\frac{\lambda}{2}\log \left(\frac{\lambda-1}{a_2 \lambda}\right) & \mbox{if $\lambda \geq \frac{a_1+a_2}{a_1}$} \\
-\frac{\lambda}{2}\log\left(\frac{1}{a_1 + a_2}\right) & \mbox{if $0 \leq \lambda \leq \frac{a_1+a_2}{a_1}$}.
\end{cases}\label{dualCases}
\end{align}
\end{lemma}
\begin{proof}
Note that $f'(t) = \frac{2^{-2t}}{2^{-2t}-a_2}$, and therefore $f'(t) = \lambda \Rightarrow t = \frac{1}{2}\log \frac{\lambda-1}{a_2 \lambda}$. Now, suppose $a_1 + a_2 \leq 1$. Then $f(t) = 0 \Rightarrow t = \frac{1}{2}\log\frac{1}{a_1 + a_2}\geq 0$.
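For later use, the value of $f$ at the stationary point $t^* = \frac{1}{2}\log \frac{\lambda-1}{a_2 \lambda}$ follows directly from \eqref{implicit}:
\begin{align}
a_1 2^{-2 f(t^*)} = 2^{-2t^*} - a_2 = \frac{a_2 \lambda}{\lambda-1} - a_2 = \frac{a_2}{\lambda-1},
\end{align}
so that $f(t^*) = \frac{1}{2}\log\left(\frac{a_1(\lambda-1)}{a_2}\right)$, which is the intercept appearing in the first case of \eqref{dualCases}.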
Define $f_+(t) = \max\{f(t),0\}$. Like $f(t)$, $f_+(t)$ is a convex increasing function. For $\lambda > f'\left(\frac{1}{2}\log\frac{1}{a_1 + a_2}\right) = \frac{a_1+a_2}{a_1}$, $\lambda$ is a derivative of $f_+(t)$ at $t = \frac{1}{2}\log \frac{\lambda-1}{a_2 \lambda}$. On the other hand, if $0 \leq \lambda \leq \frac{a_1+a_2}{a_1}$, then $\lambda$ is a subderivative of $f_+(t)$ at $t = \frac{1}{2}\log\frac{1}{a_1 + a_2}$. Therefore, we can conclude that
\begin{align}
\min_{t\geq 0} \{ f_+(t) - \lambda t \}= \begin{cases}
\frac{1}{2}\log \left(\frac{a_1 (\lambda-1)}{a_2} \right) -\frac{\lambda}{2}\log\left( \frac{\lambda-1}{a_2 \lambda}\right) & \mbox{if $\lambda \geq \frac{a_1+a_2}{a_1}$} \\
-\frac{\lambda}{2}\log\left( \frac{1}{a_1 + a_2} \right)& \mbox{if $0 \leq \lambda \leq \frac{a_1+a_2}{a_1}$}.
\end{cases}
\end{align}
\end{proof}
\vspace{-10pt}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:introduction}
The Higgs boson is the last ingredient of the standard model (SM)
to be probed at experiments.
Precision measurements of the electroweak parameters
with logarithmic dependence on the Higgs boson mass
give an indirect but tantalizing upper limit of $m_H < 186~{\rm GeV}$
at the 95\% confidence level (C.L.)\cite{EWPD:mh}.
Direct searches by the four LEP collaborations, ALEPH, DELPHI, L3 and OPAL,
yielded no significant signal.
A lower bound on the Higgs boson mass is established to be 114.4 GeV
at the 95\% C.L.\,\cite{LEP},
which is applicable to the SM and its extensions that preserve the
nature of the SM Higgs boson, \textit{e.g.}, minimal supersymmetric SM (MSSM)
in most parameter space.
In some other extensions, however, if the nature of the light Higgs boson
is drastically modified,
the limit from direct search at LEP becomes weaker.
Phenomenologically, evading the LEP data is possible when the
Higgs boson coupling $g_{ZZH}$ with the $Z$ boson is reduced and/or
the Higgs boson decays into non-SM light particles.
In the CP-conserving MSSM, for example,
the lower bound on $m_H$ can be
in the vicinity of 93 GeV at the 95\% C.L.\,\cite{MSSM:Higgs:search}.
If we further allow CP violation,
the result becomes more dramatic:
no absolute limit can
be set on the Higgs boson mass\,\cite{CPX_mh}.
Since the Higgs mass bound has far-reaching implications on the
Higgs search at the LHC,
the examination of the LEP bound on $m_H$ in other new models
is of great significance.
Recently, little Higgs models have drawn a lot of interests
as they can solve the little hierarchy problem
between the electroweak scale and the 10 TeV cut-off scale
$\Lambda$\,\cite{LH}.
A relatively light Higgs boson mass
compared to $\Lambda \sim 10~{\rm TeV}$ can be explained if
the Higgs boson is a pseudo-Nambu-Goldstone boson (pNGB) of
an enlarged global symmetry.
Quadratically divergent Higgs boson mass at one-loop level,
through the gauge, Yukawa, and self-couplings of the Higgs boson,
is prohibited by the collective symmetry breaking
mechanism.
According to the
global symmetry breaking pattern, there are various models with the
little Higgs mechanism\,\cite{various}.
Detailed studies have also been made, such as
their implications for electroweak precision data (EWPD)\,\cite{EWPD}
and phenomenologies
at high energy colliders\,\cite{phenomenology}.
Considering the possibility of evading the LEP data on the Higgs mass,
the simplest little Higgs model\,\cite{simplest}
is attractive as it accommodates a light pseudoscalar boson $\eta$,
which the Higgs boson can dominantly decay into.
The model is based on [SU(3) $\times$ U(1)$_X]^2$ global symmetry
with its diagonal subgroup SU(3) $\times$ U(1)$_X$ gauged.
The vacuum expectation value (VEV) of
two SU(3)-triplet scalar fields,
$\langle\Phi_{1,2}\rangle = (0,0,f_{1,2})^T$,
spontaneously breaks both the global symmetry
and the gauge symmetry.
Here $f_{1,2}$ are at the TeV scale.
Uneaten pNGB's consist of a SU(2)$_L$ doublet
$h$ and a pseudoscalar $\eta$.
Loops of gauge bosons and fermions generate
the Coleman-Weinberg (CW) potential $V_{CW}$
which contains terms such as $h^\dagger h$ and $(h^\dagger h)^2$:
The Higgs boson mass and its self-coupling are
radiatively generated.
However the CW potential with non-trivial operators of
$|\Phi_1^\dagger \Phi_2|^n$ does not have the dependence of $\eta$
which is only a phase of sigma fields $\Phi_{1,2}$\,\cite{CW,smoking}.
This $\eta$ becomes massless,
which is problematic for $\eta$ production in rare $K$ and $B$
decays, $\bar{B}$-$B$ mixing, and $\Upsilon \to \eta\gamma$,
as well as for the cosmological axion limit.
One of the simplest remedies was suggested
by introducing a $-\mu^2(\Phi_1 ^\dagger \Phi_2+ h.c.)$
term into the scalar potential by hand.
Even though this breaks
the global SU(3) symmetry and thus damages the little Higgs mechanism,
its contribution to the Higgs boson mass is numerically insignificant.
This $\mu$ sets the scale of the $\eta$ mass.
By requiring negative Higgs mass-squared parameter for electroweak
symmetry breaking (EWSB),
we show that the $\mu$ (and thus $m_\eta$)
is of the order of 10 GeV.
Thus, we have light pseudoscalar particles.
In addition,
the $\mu$ term also generates the
$\lambda' h^\dagger h \,\eta^2$ term in the CW potential.
As the $h$ field develops the VEV $v$,
$H$-$\eta$-$\eta$ coupling emerges with the strength proportional to $v\mu^2/f^2$,
with $f = \sqrt{f_1^2+ f_2^2}$ at the TeV scale.
The Higgs boson can then decay into two $\eta$ bosons.
Furthermore, this light $\eta$ opens a new decay channel
of $H \to Z\eta$.
Indeed, these two new decay channels can be dominant,
as shall be shown later.
Another issue that we investigate thoroughly is the condition
for successful EWSB.
The model with the $\mu$ term is determined by four parameters: $f $,
$\tan\beta (= f_2/f_1)$, $x_{\lambda}$, and $\mu$.
Here $x_{\lambda}$ is the
ratio of two Yukawa couplings in the third generation quark sector.
The radiatively generated Higgs VEV $v$ is also determined by
these four parameters:
The SM EWSB condition $v =246~{\rm GeV}$ fixes one parameter, {\it e.g.,} $\tan\beta$.
For $x_{\lambda} \in [1,15]$, $\mu \sim \mathcal{O}(10)~{\rm GeV}$,
and $f=2-4~{\rm TeV}$,
the $v=246~{\rm GeV}$ condition restricts $\tan\beta$ to around 10.
This large $\tan\beta$
reduces the effective $g_{ZZH}$ coupling in this model.
With smaller $g_{ZZH}$ and $B(H\to b\bar b)$ than in the SM,
the LEP Higgs boson mass bound based on the limit $(g_{ZZH}/g^{\rm SM}_{ZZH})^2
\, B(H \to b\bar b)$ can be reduced \cite{LEP}.
Yet there was a general search by the DELPHI collaboration\,\cite{DELPHI}
in the channel $ e^+ e^- \to Z H \to Z (AA) \to Z + 4b$. The $\eta$ boson
in the present model is similar to the $A$ boson.
We shall apply the limit obtained in the DELPHI analysis to the present
model, which shall be shown to be entirely unconstrained.
The organization of the paper is as follows. In the next section, we
highlight the essence of the original SU(3) simplest little Higgs model, in
particular the Higgs sector. We will show that the original model can
accommodate proper EWSB as well as the Higgs mass $\sim 100~{\rm GeV}$.
After explicit demonstration of no $\eta$ dependence on the scalar potential,
we will discuss the problem of the massless pseudoscalar $\eta$.
In Sec.\ref{sec:mu}, we introduce the $\mu$ term and discuss the EWSB implication
as well as the mass spectra of the Higgs boson and $\eta$.
In Sec. \ref{sec:BR}, we calculate the branching ratio $H \to \eta \eta$ and
discuss its impact on the Higgs boson mass bound.
We discuss further possibilities to investigate this scenario and then
conclude in Sec. \ref{sec:Conclusions}.
\section{SU(3) simplest group model without the $\mu$ term}
\label{sec:simplest:nomu}
The SU(3) simplest little Higgs model is based on
$[\,$SU(3) $\times$ U(1)$_X]^2$ global symmetry
with its diagonal subgroup SU(3) $\times$ U(1)$_X$ gauged.
The pNGB multiplet is parameterized by two complex SU(3) triplet
scalar fields
$\Phi_{1,2}$:
\beq
\Phi_1 = e^{i t_\beta \Theta} \Phi_{1}^{(0)}
, \quad
\Phi_2 = e^{-i \Theta/t_\beta } \Phi_{2}^{(0)}
,
\eeq
where $t_{\beta}\equiv\tan\beta$ and
\begin{equation}
\Theta = \frac{1}{f} \left[
\left( \begin{array}{cc}
\begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}
& h \\
h^{\dagger} & 0 \end{array} \right)
+ \frac{\eta}{\sqrt{2}}
\left( \begin{array}{ccr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \end{array} \right) \right]
\equiv \frac{1}{f} \,\mathbb{H} +\frac{\eta}{\sqrt{2} f} {\mathbb I \/}_3 .
\end{equation}
The kinetic term for $\Phi_{1,2}$ is
\begin{equation}
\label{eq:LgPhi}
\mathcal{L}_{\Phi} = \sum_{i=1,2} \left| \left( \partial_{\mu}
+ i g A_{\mu}^a T^a - \frac{i g_x}{3} B_{\mu}^x \right)
\Phi_i \right|^2,
\end{equation}
where $T^a$
are the SU(3) generators, while $A^a_\mu$ and $B^x_\mu$ are the SU(3) and U(1)$_X$
gauge fields, respectively.
Two gauge couplings of $g$ and $g_x$ are fixed by the SM gauge couplings
such that
SU(3) gauge coupling $g$ is just the SM SU(2)$_L$ gauge coupling
and $ g_x = {g'}/{\sqrt{1 - t_W^2/3}}$.
Each of the SM fermionic doublets is promoted
to a SU(3) triplet.
Focusing on the third generation quarks,
we introduce a $\mathbf{3}$ representation of SU(3),
$\chi_{L} = (t_{L},b_{L},iU_{L})^T$,
as well as two weak-singlet quarks, $U_{R1}$ and $U_{R2}$.
The Yukawa
interaction is
\beq
\label{eq:Yuk3}
{\cal L} =
i \lambda_1 U^{\dagger }_{R1} \Phi_1^\dagger \chi_L \,
+\,i \lambda_2 U^{\dagger} _{R2} \Phi_2^\dagger \chi^{}_L \,+\, {\rm h.c.},
\eeq
where the complex number $i$'s guarantee positive masses for fermions.
Depending on the SU(3) representations assigned to the first two
generation quarks and to the leptons of all generations,
there are two versions
of fermion embedding.
This variation in model building is possible
since light quarks and leptons
make very little contributions to the radiative Higgs mass.
The first fermion embedding is called ``universal" embedding\,\cite{smoking},
where all three generations have identical quantum numbers.
The other is the ``anomaly-free"
embedding where anomaly-cancellation is required for easier UV
completion\,\cite{kong}:
The third generation quarks and all leptons are put into {$\mathbf{3}$} representations of SU(3),
while the first two generation quarks into {$\mathbf{\bar 3}$}.
For the Yukawa couplings of light quarks and leptons in both embeddings,
we refer the reader to Ref.~\cite{smoking}.
When $\Phi_{1}$ and $\Phi_2$ develop the aligned VEV of
\beq
\langle \Phi_{1} \rangle =
\Phi_1^{(0)}=
(0,0,f\cos\beta)^T,
\quad
\langle \Phi_{2} \rangle = \Phi_2^{(0)}=
(0,0,f\sin\beta)^T,
\eeq
two kinds of symmetry breaking occur.
First,
the global symmetry is spontaneously broken into its subgroup of
$[\,$SU(2) $\times$ U(1)$]^2$,
giving rise to ten Nambu-Goldstone bosons.
Second, the gauge symmetry SU(3) $\times$ U(1)$_X$ is broken into the SM
SU(2)$_L \times$ U(1)$_Y$, as five Nambu-Goldstone bosons are eaten.
Five new gauge bosons and one heavy top-like quark $T$ appear
with heavy mass of order $f \sim ~{\rm TeV}$.
The heavy gauge bosons include
a $Z'$ gauge boson (a linear combination of $A^8_\mu$ and $B^x_\mu$)
and a complex SU(2) doublet $(Y^0,X^-)$
with masses of
\beq
M_{Z'}=\sqrt{\dfrac{2}{3-t_W^2}}\,g\, f, \quad M_{X^\pm}=M_Y=\dfrac{g f}{\sqrt{2}}
\,.
\eeq
The new heavy $T$ quark mass is
\beq
M_T
=\sqrt{2} \frac{t_{\beta}^2+x_{\lambda}^2}{(1+t_{\beta}^2)x_{\lambda}} \,\frac{m_t}{v} \,f
\,,
\eeq
where $x_{\lambda} = \lambda_1/\lambda_2$.
Brief comments on the EWPD constraint on $f$ are in order here.
According to Ref.~\cite{simplest},
the anomaly-free model is less constrained.
The strongest bound comes
from atomic parity violation with $f>1.7~{\rm TeV}$ at the 95\%
C.L.
A more recent analysis in Ref.~\cite{Marandella:2005wd}
gives a stronger bound of $f>4.5~{\rm TeV}$ at 99\% C.L.
The main contribution comes from the oblique parameter $\hat{S}$
due to the $Z'$ gauge boson.
There the $Z'$ was integrated out by solving its equation of motion.
Considering both analyses, we take $f=2-4~{\rm TeV}$ as reasonable choices.
The gauge and Yukawa interactions of the Higgs boson explicitly break the SU(3) global
symmetry,
generating the Higgs mass at loop level.
In the CW potential
up to dimension four operators,
only the $|\Phi_1^\dagger \Phi_2|^2$ term
leads to a non-trivial result for the pNGB's.
A remarkable observation is that this
$|\Phi_1^\dagger \Phi_2|^2$ term does not have any dependence on
$\eta$\cite{Dias}.
This can be easily seen by the expansion of, \textit{e.g.}, $\Phi_{1}$ as
\beq
\label{eq:Phi12:matrix}
\Phi_1 = \exp\left( i\frac{t_{\beta}\eta}{\sqrt{2} f}\right)
\exp \left( i \frac{t_{\beta} }{f}\mathbb{H} \right)\Phi_{1}^{(0)},
\eeq
where we have used the Baker-Campbell-Hausdorff formula
together with $[\mathbb{H}, {\mathbb I \/}_3]=0$.
This compact form is very useful
when calculating the $\Phi_1^\dagger \Phi_2$:
\beq
\label{eq:Phi:dagger:Phi2}
\Phi_1^\dagger \Phi_2 =
f^2 s_{\beta}c_{\beta} e^{-i \left(t_{\beta}+\frac{1}{t_{\beta}} \right) \frac{\eta}{\sqrt{2} f} }
\cos \left(
\frac{h_0}{f s_{\beta} c_{\beta}}
\right),
\eeq
where $h_0 \equiv \sqrt{h^\dagger h}$.
The $|\Phi_1^\dagger \Phi_2|^2$ term
or the CW potential has no dependence on $\eta$.
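Explicitly, the pure phase in Eq.~(\ref{eq:Phi:dagger:Phi2}) drops out upon taking the modulus:
\beq
|\Phi_1^\dagger \Phi_2|^2 = f^4 s_{\beta}^2 c_{\beta}^2 \cos^2 \left( \frac{\sqrt{h^\dagger h}}{f s_{\beta} c_{\beta}} \right),
\eeq
so no potential for $\eta$ is generated by any power of $|\Phi_1^\dagger \Phi_2|$.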
Thus, the pseudoscalar $\eta$ remains massless in the original model.
On the contrary, the Higgs boson mass
is radiatively generated with one-loop logarithmic divergence
and two-loop quadratic divergence.
The troublesome one-loop quadratic divergence
is eliminated by the little Higgs mechanism.
The CW potential is
\begin{equation}
\label{eq:VCW:nomu}
V_{\rm CW} = -m_0^2 \, h^\dagger h +\lambda_0 (h^\dagger h)^2
,
\end{equation}
where
\begin{eqnarray}
\label{eq:msq}
m_0^2 &=& \frac{3}{8 \pi^2} \left[
\lambda_t^2 M_T^2 \ln \frac{\Lambda^2}{M_T^2}
- \frac{g^2}{4} M_X^2 \ln \frac{\Lambda^2}{M_X^2}
- \frac{g^2}{8} (1 + t_W^2) M^2_{Z^{\prime}}
\ln \frac{\Lambda^2}{M^2_{Z^{\prime}}} \right],
\\ \label{eq:lambda}
\lambda_0 &=& \frac{1}{3 s_{\beta}^2 c_{\beta}^2} \frac{m_0^2}{f^2}
+ \frac{3}{16 \pi^2} \left[
\lambda_t^4 \ln\frac{M_T^2}{m_t^2}
- \frac{g^4}{8} \ln \frac{M_X^2}{m_W^2}
- \frac{g^4}{16} (1 + t_W^2)^2
\ln \frac{M^2_{Z^{\prime}}}{m^2_Z} \right]\,.
\end{eqnarray}
Here $\lambda_t = \sqrt{2}m_t/v$ and $\Lambda \simeq 4 \pi f$.
The negative mass-squared term for the Higgs doublet in Eq.\,(\ref{eq:VCW:nomu})
generates the VEV for the Higgs boson as
$\langle h \rangle =v_0 /\sqrt{2}$,
which then triggers the EWSB and generates the Higgs boson mass $m_{H0}$,
given by
\bea
\label{eq:v:mH:meta}
v_{0}^2 = \frac{m_0^2}{\lambda_0}, \quad m_{H0}^2 = 2 m_0^2
\,.
\eea
This CW potential alone has been considered
insufficient to explain the EWSB,
due to excessively large soft mass-squared $m_0^2$.
If $f=2~{\rm TeV}$ and $x_{\lambda}=t_{\beta}=2$, for example,
$m_0\simeq 710~{\rm GeV}$ and thus $m_H \simeq 1~{\rm TeV}$.
In addition, the quartic coupling
$\lambda_0$ is also small since it is generated by logarithmically divergent diagrams,
not by quadratically divergent ones.
In the ordinary parameter space of $t_{\beta}$ and $x_{\lambda}$ of the order of one,
the $v_{0} \simeq 246~{\rm GeV}$ condition cannot be satisfied.
However, this flaw
in the original model without the $\mu$ term
is not as serious as
usually considered in the literature.
If we extend the parameter space allowing $x_{\lambda}$ and $t_{\beta}$ up to $\simeq 10$,
the $v_{0} \simeq 246~{\rm GeV}$ condition can be met easily.
Reducing $m_0^2$ in Eq.~(\ref{eq:msq}) is possible
if the heavy $T$ mass decreases.
As discussed in Ref.~\cite{SingleT}, the heavy $T$ mass is minimized when $t_{\beta}=x_{\lambda}$
and $t_{\beta}$ increases.
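This is transparent from the explicit form of $M_T$: at fixed $t_{\beta}$, the mass is minimized at $x_{\lambda}=t_{\beta}$, where
\beq
M_T \Big|_{x_{\lambda}=t_{\beta}} = \frac{2\sqrt{2}\, t_{\beta}}{1+t_{\beta}^2}\, \frac{m_t}{v}\, f
\;\simeq\; \frac{2\sqrt{2}\, m_t f}{v\, t_{\beta}} \quad \mbox{for } t_{\beta} \gg 1,
\eeq
so that $M_T$, and with it the top-quark contribution to $m_0^2$, decreases as $t_{\beta}$ grows.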
Larger $t_{\beta}$ can help to satisfy $v_0 \simeq 246~{\rm GeV}$.
In addition,
large $t_{\beta}$ suppresses the new contributions to the EWPD\,\cite{SingleT}.
When we require that the radiatively generated Higgs VEV be equal
to the SM Higgs VEV,
an important question is what the SM Higgs VEV is in this model.
A definite way is to require that the SM Higgs VEV $v$ should explain
the observed SM $W$ gauge boson mass.
In this model, the $W$ gauge boson mass is modified into
\beq
\label{eq:mW}
m_W
=\frac{g v}{2} \left[
1-\frac{v^2}{12 f^2} \frac{t_{\beta}^4-t_{\beta}^2+1}{t_{\beta}^2} + \mathcal{O} \left( \frac{v^4}{f^4}\right)
\right].
\eeq
The Higgs boson VEV that reproduces the observed $m_W$, which we denote by $v_W$,
is obtained by inverting Eq.~(\ref{eq:mW}):
\beq
\label{eq:v}
v_W = \bar{v}
\left[ 1+ \frac{\bar{v}^2}{12 f^2}\frac{t_{\beta}^4-t_{\beta}^2+1}{t_{\beta}^2}
+ \mathcal{O} \left( \frac{\bar{v}^4}{f^4}\right)\right]
\,,
\eeq
where $\bar{v} \equiv 2 m_W/g = 246.26 ~{\rm GeV}$ (we reserve $v_0$ for the radiatively generated VEV of Eq.~(\ref{eq:v:mH:meta})).
With the observed $m_W$,
the $v_W$ in this model depends on $t_{\beta}$ and $f$.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=0.8]{v_mu.eps}
\end{center}
\caption {Allowed parameter space of $(x_\lambda,\; t_\beta)$ for $\mu=0,~30~{\rm GeV}$
required by valid
electroweak symmetry breaking. The red and blue (or thin) lines are
the contours of $m^2=0$ and $v=v_W$ for $\mu=0$, respectively.
The black and green (or thick) lines satisfy
$m^2=0$ and $v=v_W$ for $\mu=30~{\rm GeV}$, respectively.
}
\label{fig:v-mu}
\end{figure}
In Fig. \ref{fig:v-mu},
we present the contours of
$m_0^2=0$ and $v_0=v_W$ (lines for $\mu=0$).
In the upper right corner, $m^2_0$ becomes negative, so that EWSB is not
possible. This is because too large a $t_{\beta}$, and thus too small an
$M_T$, makes $m_0^2$ negative.
The $\lambda_0 <0$ region is contained within the region excluded by $m_0^2<0$.
The thin lines are for the $\mu=0$ case:
we do have a considerably large parameter space, particularly around
$t_{\beta} \simeq 10$, to explain appropriate EWSB.
Apparently the EWSB condition does not really need the extra $\mu$ term
if we can take large
$t_{\beta}$ around 10.
The most serious problem is
the presence of \emph{massless} pseudoscalar $\eta$.
Any term in the CW potential,
proportional to $|\Phi_i^\dagger \Phi_i|^n$ or $|\Phi_1^\dagger \Phi_2|^n$,
cannot accommodate the $\eta$ dependence.
Even though lower bounds on CP-odd scalar masses from
the $b$-physics signal\,\cite{Hiller} and cosmology\,\cite{PDG} are not very stringent,
any pseudoscalar particle should be massive:
The $\eta$ mass can be as low as $\mathcal{O}(100)$ MeV
from $b$-physics observables such as rare $K$, $B$ and radiative $\Upsilon$ decays
with the $\eta$ in the final state, $B_s\to\mu^+ \mu^-$ and
$B$-$\bar{B}$ mixing;
the cosmological bound is also weak but finite, as low as 10 MeV.
We should, therefore, extend the model to cure this massless
pseudoscalar problem.
\section{SU(3) model with the $\mu$ term}
\label{sec:mu}
The simplest solution to the massless $\eta$ problem
as well as generically large $m_0^2$ problem is to introduce
a new term of $-\mu^2 (\Phi^\dagger_1 \Phi_2 + h.c.)$
into the scalar potential
by hand\,\cite{simplest,Kaplan:Schmaltz,Kilian:pseudo-scalar}.
Unfortunately, this explicitly breaks the global SU(3) symmetry.
The little Higgs mechanism is lost as the Higgs loop
generates the one-loop quadratically divergent corrections to the Higgs mass.
Since this correction is numerically insignificant,
we adopt this extension.
Since the new term can be written as
\beq
-\mu^2 (\Phi^\dagger_1 \Phi_2 + h.c.)
=
- 2 \mu^2 f^2 s_{\beta}c_{\beta} \cos\left( \frac{\eta}{\sqrt{2} s_{\beta}c_{\beta} f} \right)
\cos \left(
\frac{\sqrt{h^\dagger h}}{f s_{\beta} c_{\beta}}
\right),
\eeq
the scalar potential becomes
\beq
\label{eq:VCW}
V = - m^2 h^\dagger h + \lambda (h^\dagger h)^2 + \frac{1}{2} m_\eta^2 \eta^2
+\lambda' h^\dagger h \,\eta^2 + \cdots,
\eeq
where
\beq
\label{eq:msq:lambda}
m^2 = m_0^2 - \frac{\mu^2}{s_{\beta} c_\beta}, \quad
\lambda =\lambda_0 - \frac{\mu^2}{12 f^2 s_{\beta}^3 c_{\beta}^3},
\quad
\lambda' = - \frac{\mu^2}{4 f^2 s_{\beta}^3 c_{\beta}^3}.
\eeq
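These coefficients follow from expanding the product of cosines; for instance, the $h^\dagger h\,\eta^2$ cross term arises as
\beq
-2\mu^2 f^2 s_{\beta}c_{\beta}
\left( -\frac{\eta^2}{4 s_{\beta}^2 c_{\beta}^2 f^2} \right)
\left( -\frac{h^\dagger h}{2 s_{\beta}^2 c_{\beta}^2 f^2} \right)
= -\frac{\mu^2}{4 f^2 s_{\beta}^3 c_{\beta}^3}\, h^\dagger h\, \eta^2 ,
\eeq
reproducing $\lambda'$ above.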
The Higgs VEV $v$, the Higgs mass $m_H$, and $\eta$ mass $m_\eta$
are then
\beq
\label{eq:vsq:mH:meta}
v^2 = \frac{ m^2}{\lambda} ,
\quad
m_H^2 = 2 m^2 ,
\quad
m_\eta^2 = \frac{\mu^2}{s_{\beta} c_\beta}
\cos\left(
\frac{v}{\sqrt{2} f s_{\beta} c_\beta}
\right).
\eeq
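As a rough numerical illustration (taking $t_{\beta}=10$, the value suggested by the $v=v_W$ condition, rather than solving that condition exactly), for $f=2~{\rm TeV}$ and $\mu=10~{\rm GeV}$ we obtain
\beq
s_{\beta}c_{\beta} = \frac{t_{\beta}}{1+t_{\beta}^2} \simeq 0.099,
\qquad
m_\eta \simeq \frac{\mu}{\sqrt{s_{\beta}c_{\beta}}}
\sqrt{\cos\left(\frac{v}{\sqrt{2} f s_{\beta}c_{\beta}}\right)}
\simeq 25~{\rm GeV},
\eeq
confirming that $m_\eta$ is naturally of order tens of GeV for $\mu \sim \mathcal{O}(10)~{\rm GeV}$.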
The CW potential as well as the masses of new heavy particles
depend on the following four parameters:
\beq
\label{eq:parameters}
f,\quad x_{\lambda}, \quad t_{\beta},\quad \mu\,.
\eeq
As before, the $v=v_W$ condition removes one parameter.
In Fig.~\ref{fig:v-mu}, we present the contours of $v=v_W$ for $\mu=30~{\rm GeV}$
and $f=2~{\rm TeV}$.
Increasing $\mu$ reduces
the allowed value of $t_{\beta}$ by $\sim 10\%$.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=0.6]{mu_f2.eps}
\includegraphics[scale=0.6]{mu_f4.eps}
\end{center}
\caption {Allowed parameter space of $(t_{\beta},\; \mu)$ for $x_{\lambda}=3,~6,~10$
by requiring positive Higgs mass-squared parameter $m^2$.
We consider $f=2~{\rm TeV}$ and $f=4~{\rm TeV}$.
Upper right corner is excluded since $m^2<0$.
}
\label{fig:mu:scale}
\end{figure}
Unfortunately there is no a priori
information even about the scale of $\mu$.
Nevertheless an upper bound on $\mu$ can be imposed,
since $\mu$ contributes negatively to the Higgs mass-squared parameter $m^2$.
If $m^2$ becomes negative due to too large $\mu$,
the EWSB cannot occur.
In Fig.~\ref{fig:mu:scale},
we present the allowed parameter space of $(t_{\beta},\; \mu)$ for $x_{\lambda}=3,~6,~10$
and $f=2,4~{\rm TeV}$
by requiring $m^2>0$.
The upper right corner where $m^2 < 0$ is excluded due to the EWSB condition.
Since the $v=v_W$ condition prefers $t_{\beta}\simeq 10$ as in Fig.~\ref{fig:v-mu},
the scale of $\mu$ is about $\mathcal{O}(10)$ GeV.
With the constraint of $v=v_W$,
two parameters of $x_{\lambda}$ and $\mu$
determine the masses of the Higgs boson and $\eta$ at a given $f$.
In Fig.~\ref{fig:xlm:mheta},
we plot, as a function of $x_{\lambda}$, the $m_H$ (solid lines) and $m_\eta$ (dashed line)
for $\mu=0,~10,~30~{\rm GeV}$ and $f=2,4~{\rm TeV}$.
Note that $m_\eta=0$ for $\mu=0$.
For non-zero $\mu$,
the $\eta$ mass is around $\sim \mathcal{O}(10)~{\rm GeV}$.
And the Higgs boson mass is generically around $\sim $100 GeV.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=0.8]{mheta_xlm_f2.eps}
\includegraphics[scale=0.8]{mheta_xlm_f4.eps}
\end{center}
\caption {The masses of the Higgs boson (solid line) and $\eta$ (dashed line)
as a function of $x_{\lambda}$
for $f=2~{\rm TeV}$ and $f=4~{\rm TeV}$.
The value of $t_{\beta}$ is determined by the $v=v_W$ condition.
}
\label{fig:xlm:mheta}
\end{figure}
In addition, we find some other interesting features.
First, both $m_H$ and $m_\eta$ attain a minimum with a given $f$,
which occurs when $\mu=0$.
This minimum of the Higgs boson mass is close to the LEP bound of $114.4~{\rm GeV}$,
and decreases as $f$ increases.
For example, $m_H^{(\mathrm{min})}=114.5~{\rm GeV}$ for $f=2~{\rm TeV}$,
and $m_H^{(\mathrm{min})}=88.9 \,(88.8) ~{\rm GeV}$ for $f=3\,(4)~{\rm TeV}$.
Investigation of the LEP bound on the Higgs boson mass is
of great significance in this model.
Second, $\mu$ increases both $m_H$ and $m_\eta$.
Since $m_\eta \propto \mu$ as in Eq.~(\ref{eq:vsq:mH:meta}),
increasing $m_\eta$ with $\mu$ is easy to understand.
However $m_H$ has negative contribution from increasing $\mu$
as in Eq.~(\ref{eq:msq:lambda}):
Increasing $m_H$ with $\mu$ seems strange.
This behavior is due to the $t_{\beta}$ value determined by the $v=v_W$ condition.
With high $\mu$, the $t_{\beta}$ value for $v=v_W$
is reduced as in Fig.\ref{fig:v-mu}:
Smaller $t_{\beta}$ raises the $M_T$, and thus also raises
its radiative contribution to
the Higgs boson mass.
Another important point is that $m_\eta$ can be quite light.
In principle,
$m_\eta$ can be as light as the current $b$ physics and/or cosmological bounds allow.
In this paper, however, we adopt the generic mass scale for $\eta$,
around $\mathcal{O}(10)~{\rm GeV}$.
This light pseudoscalar particle can have
a significant implication on the phenomenology of the Higgs boson.
The $\lambda' h^\dagger h \eta^2$ term in the scalar potential of Eq.~(\ref{eq:VCW})
leads to the $H$-$\eta$-$\eta$ coupling:
If the $\eta$ boson is light enough, the Higgs boson can decay into a pair of $\eta$'s,
and the Higgs discovery strategy should be reexamined.
In Fig.\ref{fig:mratio},
we present, with $f=2,4~{\rm TeV}$, the parameter space of $(\mu,x_{\lambda})$
where $2 m_\eta < m_H$ (to the left-hand side of the contours).
If $\mu$ is too large, $H\to \eta\eta$ decay is kinematically prohibited
unless $x_{\lambda}$ is smaller than a certain value.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=0.8]{mratio.eps}
\end{center}
\caption {The contours of $m_H = 2 m_\eta$ in the parameter
space $(\mu,x_{\lambda})$ for $f=2,4~{\rm TeV}$. To the left-hand (right-hand) side of
the contour, $2 m_\eta < \;( > )\, m_H$.
}
\label{fig:mratio}
\end{figure}
\section{$H \to \eta\eta$ Decay and LEP implications}
\label{sec:BR}
\subsection{Branching ratios}
In this model, major decay modes of the Higgs boson
are SM-like ones with the partial decay rates as
\bea
\label{eq:Gamma}
\Gamma(H \to f\bar{f}) &=& \frac{N_C g^2 m_f^2}{32 \pi m_W^2} (1-x_f)^{3/2} m_H,
\qquad \mbox{for $f=t,b,c,\tau$},
\\ \nonumber
\Gamma(H \to W^+W^-) &=& \frac{g^2}{64 \pi} \frac{m_H^3}{m_W^2}
\sqrt{1-x_W} \left(1-x_W + \frac{3}{4}x_W^2 \right), \\ \nonumber
\Gamma(H \to ZZ) &=& \frac{g^2}{128 \pi} \frac{m_H^3}{m_Z^2}
\sqrt{1-x_Z} \left(1-x_Z + \frac{3}{4}x_Z^2 \right),
\eea
where $x_i = 4 m_i^2/m_H^2$, and $N_C = 3\,(1)$ for $f$ being a quark (lepton).
New decay channels are
\bea
\label{eq:Gamma:new}
\Gamma(H \to \eta\eta) &=& \frac{{\lambda'}^2}{8\pi}\frac{v^2}{m_H} \sqrt{1-x_\eta}
=\frac{m_\eta^4 }{8 \pi v^2 m_H}\sqrt{1-x_\eta}
,\\ \nonumber
\Gamma( H \to Z \eta) &=& \frac{m_H^3}{32 \pi f^2}
\left( t_\beta - \frac{1}{t_\beta} \right)^2 \,
\lambda^{3/2} \left(1, \frac{m_Z^2}{m_H^2}, \frac{m_\eta^2}{m_H^2}
\right ),
\eea
where $\lambda (1,x,y) = (1-x-y)^2 - 4 xy$.
The last decay mode was mentioned in Ref. \cite{Kilian:pseudo-scalar},
which could be dominant and phenomenologically quite interesting.
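Note the $(t_{\beta}-1/t_{\beta})^2$ prefactor in $\Gamma(H \to Z\eta)$: for the values $t_{\beta}\simeq 10$ preferred by the EWSB condition, it amounts to
\beq
\left( t_{\beta} - \frac{1}{t_{\beta}} \right)^2 \Bigg|_{t_{\beta}=10} \simeq 98,
\eeq
which compensates the $1/f^2$ suppression and is why the $Z\eta$ mode can compete with the SM channels.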
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=.58]{BReta-f2.eps}
\includegraphics[scale=.58]{BReta-f4.eps}
\end{center}
\caption {Contours of $B(H \to \eta \eta)=0.3,0.5,0.7$
in the parameter space $(x_{\lambda},\mu)$
for $f=2,4~{\rm TeV}$.
}
\label{fig:BReta:contour}
\end{figure}
The search strategy for the Higgs boson depends sensitively
on its branching ratios (BR):
In the SM, the major decay mode for $m_H < 2 m_W$
is into $b\bar{b}$
while that for $m_H \gsim 2 m_W$ is into $W^+ W^-$.
In this model, there are two new decay modes for the Higgs boson,
$H \to \eta \eta$ and $H \to Z \eta$.
In Fig.~\ref{fig:BReta:contour}, we present the
contours of $B(H \to \eta \eta)=0.3,0.5,0.7$
in the parameter space $(x_{\lambda},\mu)$
for $f=2,4~{\rm TeV}$.
Quite sizable portions of the parameter space can accommodate
dominant decay of $H \to \eta\eta$.
For $f=2~{\rm TeV}$,
$B(H \to \eta \eta)> 0.5$ requires $x_{\lambda} \in [6,14]$
and $\mu \in [16,30]$ GeV.
A smaller $\mu$ increases the 2-body phase-space factor
since $\mu$ is proportional
to the produced $\eta$ mass,
while it reduces the $H$-$\eta$-$\eta$ coupling.
The optimal $\mu$ for large $B(H \to \eta \eta)$
is around 20 GeV.
The size of parameter space for $f=4$ TeV is relatively smaller
with $x_{\lambda} \in [5.6,6.6]$ and $\mu \in [10,22]~{\rm GeV}$. In this case, the
optimal $\mu$ is also around 20 GeV.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=.58]{BRZeta-f2.eps}
\includegraphics[scale=.58]{BRZeta-f4.eps}
\end{center}
\caption {Contours of $B(H \to Z \eta)=0.3,0.5,0.7$
in the parameter space $(x_{\lambda},\mu)$
for $f=2,4~{\rm TeV}$.
}
\label{fig:BRZeta:contour}
\end{figure}
Figure \ref{fig:BRZeta:contour} shows the same contours
for $B(H \to Z \eta)$,
which
depend quite sensitively on $f$.
For $f=2~{\rm TeV}$,
sizable parameter space of
$x_{\lambda} \gsim 6 $ and $\mu\lsim 10~{\rm GeV}$
can allow dominant decay of $H \to Z\eta$.
When $f=4~{\rm TeV}$,
only a small region around $x_{\lambda} \simeq 6$ and $\mu\lsim 15~{\rm GeV}$
can accommodate dominant $H \to Z\eta$.
This is mainly due to the $\eta$ mass:
as can be seen in Fig.~\ref{fig:xlm:mheta},
$\eta$ for $f=4~{\rm TeV}$ is heavier than that for $f=2~{\rm TeV}$.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=.8]{BR-f2tev-mu20.eps}
\includegraphics[scale=.8]{BR-f4tev-mu20.eps}
\end{center}
\caption {Branching ratios of the Higgs boson
in the simplest little Higgs model with the $\mu$ term
as a function of $m_H$ for $f=2~{\rm TeV}$ and $f=4~{\rm TeV}$.
We fix $\mu =20~{\rm GeV}$ but vary $x_{\lambda}$.
}
\label{fig:BR}
\end{figure}
In order to see the $m_H$ dependence of each branching ratio,
we present the branching ratios
as a function of $m_H$ for $f=2,4~{\rm TeV}$
in Fig.~\ref{fig:BR}.
We fix $\mu=20$ GeV for both $f=2,4$ TeV
while varying $x_{\lambda}$ to generate various $m_H$.
The different pattern of BRs for $f=2~{\rm TeV}$ compared with $f=4~{\rm TeV}$
is mainly due to the Higgs mass range.
In the $f=2~{\rm TeV}$ case,
the $Z \eta$ mode is the single dominant mode for $m_H$ from the $Z\eta$ threshold
to $2m_W$.
Even for $m_H > 2 m_W$, $B(H \to Z\eta)$
is almost the same as $B(H \to W^+ W^-)$.
In the $f=4~{\rm TeV}$ case,
$H \to \eta\eta$ is dominant for $140 \lsim m_H \lsim 160$ GeV,
while $H \to b\bar{b}$ becomes dominant if $m_H$ is below about 140 GeV.
For $m_H$ above $WW$ threshold,
$H \to W W$ is the leading decay mode, but not as dominant as in the SM
because of the presence of the $Z \eta$ mode.
The second most important decay mode is into $Z \eta$,
which is very different from a
SM-like Higgs boson~\cite{Kilian:pseudo-scalar}.
Brief comments on the decay of $\eta$ are in order here.
If $m_\eta<2 \, m_W$, the decay pattern is very similar to that of the
SM Higgs boson
with the main decay mode into a SM fermion pair
via the coupling $c (m_f/f) i \bar f \gamma_5 f$, where
$c \sim O(t_\beta)$ and $m_f$ is the mass of the fermion.
Although this coupling is suppressed by $1/f$, the decay is still prompt
in collider experiments for $f \sim O({\rm TeV})$.
Therefore, the light $\eta$ boson mainly
decays into a $b \bar{b}$ pair\,\cite{Kilian:pseudo-scalar} if kinematically
allowed.
This characteristic feature of $\eta$ decay is useful to probe $\eta$
at high energy colliders.
\subsection{LEP bound on $m_H$}
Due to the presence of dominant decay of $H \to \eta\eta$, one may
expect that the LEP bound on the Higgs mass can be loosened to some extent.
The four LEP collaborations~\cite{LEP} searched for the Higgs boson via
\beq
\label{eq:LEP:process}
e^+ e^- \to Z H \to (l^+ l^-,q\bar{q},\nu\bar{\nu}) + b\bar{b}.
\eeq
Here the main decay mode of the SM Higgs boson into $b\bar{b}$ dominates
the width of the Higgs boson, with a branching fraction about 90\% for
most of the mass range and down to about 74\% at $m_H= 115$ GeV.
There is also a search using a minor mode of $H \to \tau^+ \tau^-$.
Nevertheless, the combined
limit is almost the same as that using just the $b\bar b$ mode.
The mass bound on the SM Higgs boson is 114.4 GeV \cite{LEP}.
For model-independent limits the
LEP collaborations presented the upper bound on
$[g_{ZZH}/g^{\rm SM}_{ZZH} ]^2 \times B( H \to b\bar b)$
at the 95\% C.L., as shown by the rugged curve in
Fig. \ref{fig:LEP2}.
In the simplest little Higgs scenario with the $\mu$ term,
one anticipates that the LEP bound on $m_H$ would be reduced, because of
(i) sizable decay rate of $H\to \eta\eta$ such that $B(H\to b\bar b)$
is substantially reduced as shown in Fig. \ref{fig:BR}, and
(ii) the reduced coupling $g_{ZZH}$ in the simplest little Higgs model, especially
when $t_\beta$ is large.
In this model,
the $g_{ZZH}$ deviates from the SM value by
\beq
\frac{g_{ZZH}}{g_{ZZH}^{\rm SM}} =
\left[
1-\frac{v_0^2}{4 f^2}\left\{t_{\beta}^2-1+\frac{1}{t_{\beta}^2} +(1-t_W^2)^2 \right\}
\right]\,.
\eeq
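For orientation, the deviation formula above is easy to evaluate numerically. The sketch below is only illustrative: the inputs $v_0 = 246$ GeV and $\sin^2\theta_W \simeq 0.231$ are standard values assumed here (not fixed by the text), and the sample $(f, t_\beta)$ points are arbitrary.

```python
import math

V0 = 246.0  # GeV, electroweak vev (assumed standard value)
# weak mixing angle: tan^2(theta_W) from sin^2(theta_W) ~ 0.231 (assumption)
TW2 = 0.231 / (1.0 - 0.231)

def gZZH_ratio(f, t_beta):
    """g_ZZH / g_ZZH^SM as given in the text, for scale f (GeV) and tan(beta)."""
    corr = t_beta ** 2 - 1.0 + 1.0 / t_beta ** 2 + (1.0 - TW2) ** 2
    return 1.0 - (V0 ** 2 / (4.0 * f ** 2)) * corr

for f in (2000.0, 3000.0, 4000.0):
    for tb in (3.0, 10.0):
        print(f"f = {f/1000:.0f} TeV, tan(beta) = {tb:>4}: "
              f"g_ZZH/g_ZZH^SM = {gZZH_ratio(f, tb):.4f}")
```

As stated in the text, the suppression grows with $t_\beta$ and weakens as $f$ increases.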
In Fig.~\ref{fig:LEP2}, we present the prediction of
$[g_{ZZH}/g^{\rm SM}_{ZZH} ]^2 \times B( H \to b\bar b)$
for $f=2,3,4~{\rm TeV}$, and compare to the 95\% C.L. upper limit obtained
by the LEP collaborations.
We found that $\mu = 14\;(15)$ GeV for $f=3\;(4)$ TeV
gives the smallest prediction of
$[g_{ZZH}/g^{\rm SM}_{ZZH} ]^2 \times B( H \to b\bar b)$.
The $f=2~{\rm TeV}$ case is safe because the minimum value of $m_H$ predicted
is already above 114 GeV.
For $f=3,4~{\rm TeV}$, however,
the Higgs boson mass bound is restricted by the data as follows:
\bea
m_H &>& 109 ~{\rm GeV} ~~ \quad \mathrm{for}~~ f = 3 ~{\rm TeV}, \\ \nonumber
m_H &>& 111 ~{\rm GeV}~~\quad \mathrm{for}~~ f = 4 ~{\rm TeV}.
\eea
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=1]{LEP-limit.eps}
\end{center}
\caption {Upper bound on $[g_{ZZH}/g^{\rm SM}_{ZZH} ]^2 \times B( H \to b\bar b)$
established by the LEP collaborations,
and the corresponding values in the simplest little Higgs model
that we are considering.
}
\label{fig:LEP2}
\end{figure}
\subsection{DELPHI limit on $C^2_{Z(AA\to4b)}$}
The DELPHI collaboration~\cite{DELPHI} has searched for
the process $e^+ e^- \to ZH \to Z(AA) \to Z + 4b$
for $m_H>2 m_A$.
Here $A$ is a CP-odd scalar particle, for which $\eta$ is a good candidate.
The DELPHI collaboration parameterized the cross section by
\beq
\sigma_{(AA)Z \to 4 b+jets} = \sigma_{HZ}^{\rm SM} \times B(Z \to \mathrm{hadrons}) \times
C^2_{Z(AA\to4b)},
\eeq
where
\beq
C^2_{Z(AA\to4b)} = \left( \frac{g_{ZZH}}{g_{ZZH}^{\rm SM}} \right)^2
\times
B(H\to A A) \times B(A\to b\bar{b})^2.
\eeq
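The parameterization above translates directly into code. The sketch below mirrors the definition of $C^2_{Z(AA\to4b)}$; the numerical inputs are purely illustrative placeholders, not predictions of the model.

```python
def C2_ZAA4b(g_ratio, br_H_AA, br_A_bb):
    """C^2_{Z(AA->4b)} = (g_ZZH/g_ZZH^SM)^2 * B(H->AA) * B(A->bb)^2."""
    return g_ratio ** 2 * br_H_AA * br_A_bb ** 2

# illustrative inputs (assumptions for demonstration only)
example = C2_ZAA4b(g_ratio=0.9, br_H_AA=0.5, br_A_bb=0.9)
print(f"C^2_Z(AA->4b) = {example:.4f}")  # coupling and BR suppressions compound
```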
As no convincing evidence for a signal was found,
the upper bound on $C^2_{Z(AA\to4b)}$ was presented~\cite{DELPHI}.
\begin{figure}[th!]
\begin{center}
\includegraphics[scale=1]{delphi.eps}
\end{center}
\caption {
Upper bound on $C^2_{Z(AA\to4b)}$ by the DELPHI collaboration,
and the values of $C^2_{Z(AA\to4b)}$ in our model for $f=2,3,4$ TeV.
}
\label{fig:delphi}
\end{figure}
We show the values of $C^2_{Z(AA\to4b)}$ predicted in
our model for $f=2,3,4$ TeV in Fig.~\ref{fig:delphi}.
Here we fix $\mu=20,\;14,\;15~{\rm GeV}$ for $f=2,3,4$ TeV, respectively.
We also show the upper bounds on $C^2_{Z(AA\to4b)}$
for various combinations of $m_H$ and $m_A$ obtained
by the DELPHI collaboration.\footnote{The mass ranges of the DELPHI data are
$12 \,{\rm GeV} < m_A < 55 \,{\rm GeV}$ and
$2\, m_A < m_H < 110\,{\rm GeV}$.
} For all three cases
the $C^2_{Z(AA\to4b)}$ values in this model are much smaller than the
experimental upper bound.
The DELPHI searches do not constrain the model at all.
For the $f=2~{\rm TeV}$ case, this is because the Higgs boson mass is already above
the lower bound of 114.4 GeV.
For $f=3,4~{\rm TeV}$, smaller $m_H$ can evade the DELPHI search
since $g_{ZZH}$ decreases substantially for large $t_{\beta}$
and $H\to b\bar{b}$ is still dominant for
$m_H\lsim 100~{\rm GeV}$ as discussed before.
The kinks in the curves are due to the onset of the $Z \eta$ mode when
$m_H > m_Z + m_\eta$.
\section{Conclusions}
\label{sec:Conclusions}
Little Higgs models provide a very interesting perspective on
addressing the little hierarchy problem.
By attributing the lightness
of the Higgs boson to its being a pseudo Nambu-Goldstone boson, the
collective symmetry breaking mechanism removes the quadratically divergent
radiative corrections to the Higgs mass at one-loop level.
As a prototypical ``simple group'' model, the SU(3)
simplest little Higgs model has drawn a lot of interest due to its
minimal fine-tuning associated with electroweak symmetry
breaking\,\cite{fine-tuning}. In its original framework, this simplest
model cannot avoid the presence of a massless pseudoscalar particle
$\eta$.
The cosmological lower bound on the axion mass requires
extending the model.
One of the simplest choices is to add the so-called $\mu$ term
in the scalar potential by hand.
Then $\eta$ acquires a mass of order $\mu$, and
the $H$-$\eta$-$\eta$ coupling is also generated of the order of $v\mu^2/f^2$.
In order to accommodate the EWSB,
this $\mu$ has a natural scale of a few tens of GeV,
which leads to a relatively light $\eta$.
This allows a substantial branching ratio for the $H \to \eta\eta$ decay.
In addition,
the $H$-$Z$-$\eta$ coupling, which is present in the original model without the $\mu$ term,
leads to $H \to Z\eta$ decay.
We found that the $H\to \eta\eta$ decay can be dominant for $m_H$
below the $WW$ threshold for $\mu \simeq 15-20$ GeV, while
$H \to Z \eta$ is dominant if $140 ~{\rm GeV} \lsim m_H \lsim 2 m_W$.
For $m_H$ even above $2 m_W$,
the $H \to Z \eta$ decay can be as important as $H \to W^+ W^-$.
We have investigated the LEP bound on $[g_{ZZH}/g^{\rm SM}_{ZZH}]^2
B(H \to b \bar{b})$
in the search for the SM Higgs boson.
In the $f=2~{\rm TeV}$ case,
the model restricts $m_H$ above the LEP bound.
For the $f=3\,(4)~{\rm TeV}$ cases,
a lowering in the Higgs boson mass bound occurs:
$m_H > 109\; (111)~{\rm GeV}$, respectively. This is the main result of our work.
A few comments are in order here.
\begin{itemize}
\item
This new and dominant decay channel can have important implications
for the LEP search for the neutral Higgs boson.
The DELPHI collaboration examined, in extended models, the process of
$e^+ e^- \to H Z \to (AA) Z \to (b \bar{b} b \bar{b})Z$,
and presented the upper bound on
$[g_{ZZH}/g^{\rm SM}_{ZZH}]^2 B(H\to\eta\eta) B(\eta\to b \bar{b})^2 $.
Our models with $f=2,3,4~{\rm TeV}$ are not constrained by this bound.
\item
Further probes of the scenario are possible at LEP, at the Tevatron, and
at the LHC. The LEP collaborations can investigate the scenario
by searching for
\[
e^+ e^- \to ZH \to Z\,(\eta \eta) \to Z ( 4b,\, 2b\, 2\tau, 4\tau) \;,
\]
where $Z \to \ell^+ \ell^-,\, \nu\bar \nu,\,q\bar q$. This mode may suffer
from the fact that the coupling $g_{ZZH}$ is reduced relative to the SM
one because of the little Higgs corrections.
At the Tevatron, similar channels such as
\[
p \bar p \to W H, ZH \to W/Z + (4b,\, 2b \,2\tau,\, 4 \tau)
\]
can be searched for.
At the LHC, the two-photon decay mode of the intermediate Higgs boson will
suffer because of the dominance of the $H \to \eta \eta$ mode in that mass
range: the branching ratio into $\gamma\gamma$ is reduced. On
the other hand, $gg \to H \to \eta \eta \to 4b,\, 2b \,2\tau ,\, 4 \tau$ opens up,
which may provide interesting modes to search for the Higgs boson. However,
a detailed study is needed to establish the feasibility.
\item The $Z \eta$ decay mode of the Higgs boson is unique to this simplest
little Higgs model.
In fact it
dominates for $140 \;{\rm GeV} < m_H < 2 \, m_W$. Even for $2 \, m_W < m_H$ the
$Z \eta$ mode is as important as $WW$ mode.
This is very different from a SM-like Higgs boson, for which the
$ZZ$ mode usually comes second.
Since the $ZZ \to \ell^+ \ell^- \ell^+ \ell^-$ is the golden mode for
Higgs discovery, the emergence of the $Z \eta$ mode will affect the
Higgs detection significantly. Careful studies of the $Z \eta$ mode are
therefore important for Higgs searches.
\item
Another possibility to probe the $\eta$
is the direct production of the $\eta$ boson in $gg$
fusion \cite{SingleT} or the associated production with a heavy quark
pair. Although the production is suppressed by $1/f$ in the coupling
of the $\eta$ to the SM fermion pair, this remains as an interesting possibility
because the coupling to the heavy top quark is not suppressed.
\end{itemize}
We end with an emphasis that the $4b$, $2b\, 2\tau$, and $4\tau$ modes should be
seriously searched for in the pursuit of the Higgs boson; we have clearly
demonstrated that in the simplest little Higgs model the decays
$H\to \eta \eta$ and $H \to Z \eta$ can be dominant.
\acknowledgments
We thank the Physics division of the KIAS for hospitality during the
initial stage of the work.
K.C. also thanks K.S. Cheng and the Centre of Theoretical and Computational
Physics at the University of Hong Kong for hospitality.
We would like to express our special gratitude to Alex G. Dias
for correcting our mistakes.
We also appreciate the valuable comment from Juergen Reuter.
The work of JS is supported by KRF under grant No. R04-2004-000-10164-0.
The work of KC is supported by
the National Science Council of Taiwan under grant no.
95-2112-M-007-001- and by the National Center for Theoretical Sciences.
\section{the standard example}
Although we shall consider quotients of $S^2\times{S^2}$ briefly in \S3,
our main concern is with 4-manifolds $M$ covered by $S^2\times{R}^2$.
We shall identify $S^2$ with $CP^1=\mathbb{C}\cup\{\infty\}$,
via stereographic projection from
$(0,1)\in{S^2}\subset\mathbb{C}\times\mathbb{R}$.
Under this identification the antipodal map $a$ is given by
$a(z)=-z/|z|^2$ (i.e., $a([z_0:z_1])=[-\overline{z_1}:\overline{z_0}]$),
and rotation through an angle $\theta$ about the axis through $0$ and
$\infty$ is given by $R_\theta(z)=e^{i\theta}z$.
(Care! Multiplication by $-1$ in $CP^1$ is $R_\pi$,
not $a$!)
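These chart formulas can be checked numerically. The following sketch is a quick consistency check, not part of the argument; the sample points are arbitrary. It verifies that $a(z)=-z/|z|^2$ is exactly the stereographic image of the antipodal map on $S^2$, that $a$ is a fixed-point-free involution, and that $a$ commutes with every rotation $R_\theta$.

```python
import cmath
import random

def stereo(w, t):
    """Stereographic projection from (0,1): (w,t) on S^2 in C x R maps to z in C."""
    return w / (1 - t)

def a(z):
    """Antipodal map in the chart: a(z) = -z/|z|^2 (equivalently -1/conj(z))."""
    return -z / abs(z) ** 2

random.seed(0)
for _ in range(100):
    # random point (w,t) on S^2, away from the poles
    x, y, t = (random.gauss(0, 1) for _ in range(3))
    r = (x * x + y * y + t * t) ** 0.5
    w, t = complex(x, y) / r, t / r
    z = stereo(w, t)
    tol = 1e-9 * (1.0 + abs(z) + 1.0 / abs(z))
    # a(z) agrees with projecting the antipode (-w,-t)
    assert abs(a(z) - stereo(-w, -t)) < tol
    # a is an involution without fixed points (|a(z)-z| = |z| + 1/|z| >= 2)
    assert abs(a(a(z)) - z) < tol and abs(a(z) - z) > 1.0
    # a commutes with the rotation R_theta(z) = e^{i*theta} z
    theta = random.uniform(0, 2 * cmath.pi)
    rot = cmath.exp(1j * theta)
    assert abs(a(rot * z) - rot * a(z)) < tol
print("chart identities verified")
```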
We shall identify the groups $\mathbb{Z}^\times=\{\pm1\}$,
$Z/2Z$ and $\mathbb{F}_2$, where appropriate.
Let $M$ be a closed 4-manifold with $\pi_2(M)\cong{Z}$
and $\pi=\pi_1(M)\not=1$,
and let $u:\pi\to{Aut(\pi_2(M))}=\mathbb{Z}^\times$ be the natural action.
Let $U\in{H^1(\pi;\mathbb{F}_2)}=Hom(\pi,Z/2Z)$ be
the cohomology class corresponding to the epimorphism $u$.
Then $M$ has universal cover $\widetilde{M}\cong{S^2}\times{R^2}$ and
$\kappa=\mathrm{Ker}(u)$ is a $PD_2$-group,
and $w=w_1(M)$ is determined by the pair $(\pi,u)$.
In particular, $w_1(M)|_\kappa=w_1(\kappa)$,
since $\kappa$ acts trivially on $\pi_2(M)$.
(See Chapter 10 of \cite{Hi}.
Note that if $u$ is nontrivial $\pi$ may have automorphisms
that do not preserve $u$.)
Let $[M]\in{H_4(M;Z^w)}\cong{Z}$ be a fundamental class.
If $\pi$ is torsion-free then $M$ is TOP $s$-cobordant
to the total space of an $S^2$-bundle over an aspherical surface.
If $\pi\cong\kappa\times{Z/2Z}$ then any 4-manifold $M$ with
$\pi_1(M)\cong\pi$ and $\pi_2(M)\cong{Z^u}$ is simple homotopy equivalent
to the total space of an $RP^2$-bundle over $K(\kappa,1)$.
For each $PD_2$-group $\kappa$ there are two such bundles,
distinguished by whether $v_2(M)=0$ or not.
As these cases are well-understood, we shall
usually assume that $M$ is not homotopy equivalent to a bundle space.
If $\pi$ has torsion but is not a direct product
then $u$ is nontrivial and $\pi\cong\kappa\rtimes{Z/2Z}$.
Moreover $\pi$ is the orbifold fundamental group of a $\mathbb{E}^2$-
or $\mathbb{H}^2$-orbifold $B$.
Since $\kappa$ is torsion free the singular locus $\Sigma{B}$
consists of cone points of order 2 and reflector curves.
The surface $K(\kappa,1)$ has an involution $\zeta$
corresponding to the action of $\pi/\kappa\cong{Z/2Z}$.
The ``standard" example of a closed 4-manifold realizing $(\pi,u)$ is
\[M_{st}=S^2\times{K(\kappa,1)}/(s,k)\sim(-s,\zeta(k)).\]
This is a $\mathbb{S}^2\times\mathbb{E}^2$-manifold if $\chi(\pi)=0$,
and is a $\mathbb{S}^2\times\mathbb{H}^2$-manifold otherwise.
Projection onto the first factor induces a bundle projection
from $M_{st}$ to $RP^2$, with fibre $F=K(\kappa,1)$.
In particular, $U^3=0$, since $U$ is induced from the generator of
$H^1(RP^2;\mathbb{F}_2)$.
Projection onto the second factor induces an orbifold
bundle projection $p_{st}:M_{st}\to{B}$ with regular fibre $F\cong{S^2}$.
The algebraic 2-type $[\pi,\pi_2(M),k_1(M)]$ determines $P_2(M)$,
the second stage of the Postnikov tower for $M$,
and the homotopy type of $M$ is determined by the
image of $[M]$ in $H_4(P_2(M);Z^w)$,
modulo the action of $Aut(P_2(M))$.
There are at most two possible values for this image,
up to sign and automorphisms of the algebraic 2-type,
by Theorem 10.6 of \cite{Hi}.
It is clear from this Theorem that the homotopy type of $M$
is in fact detected by the image of $[M]$ in $H_4(P;\mathbb{F}_2)$.
We shall construct a model for $P_2(M_{st})$ in \S6.
\section{local models for orbifold bundles}
A cone point of order 2 in a 2-orbifold has a regular neighbourhood
which is orbifold-homeomorphic to $D(2)=D^2/d\sim-d$.
Let $\mathbb{J}=[0,1]=[-1,1]/x\sim-x$ be the compact connected 1-orbifold
with one reflector point.
A reflector curve (with no corner points) in a 2-orbifold
has a regular neighbourhood which is orbifold-homeomorphic
to $\mathbb{J}\times{S^1}$.
However there are two possible surjections
$u:{\pi^{orb}(\mathbb{J}\times{S^1})}\to{Z/2Z}$ with torsion-free kernel.
We shall say that the curve is {\it $u$-twisted}
if the cover is the M\"obius band $Mb={[-1,1]\times{S^1}/(x,u)\sim(-x,-u)}$
with the involution $[x,u]\mapsto[-x,u]=[x,-u]$;
if the cover is $[-1,1]\times{S^1}$ with involution $(x,u)\mapsto(-x,u)$
we shall say that the curve is untwisted.
(Note that this notion involves both the reflector curve and the action.)
For example, as the quotient of an involution of the torus $T$
the ``silvered annulus" $\mathbb{A}=S^1\times{S^1}/(u,v)\sim(u,\bar{v})$
has two untwisted reflector curves.
However it is also the quotient of an involution of the Klein bottle $Kb$,
and the reflector curves are then both twisted.
On the other hand, the ``silvered M\"obius band"
$\mathbb{M}b={S^1\times{S^1}/(u,v)\sim(v,u)}$
has two distinct (but isomorphic) nonsingular covers,
but in both cases the reflector curve is untwisted.
Models for regular neighbourhoods of the exceptional fibres
of such orbifold bundles may be constructed as follows.
Let
\[E(2)=S^2\times{D^2}/(z,w)\sim(a(z),-w),\]
\[\mathbb{E}=S^2\times[-1,1]\times{S^1}/(z,x,u)\sim(a(z),-x,u)\]
and
\[\mathbb{E}'=S^2\times[-1,1]\times{S^1}/(z,x,u)\sim(a(z),-x,u)\sim(z,-x,-u).\]
Then $p_2([z,w])=[w]$, $p_\mathbb{E}([z,x,u])=[u,x]$
and $p_{\mathbb{E}'}([z,x,u])=[x,u]$
define bundle projections $p_2:E(2)\to{D(2)}$,
$p_\mathbb{E}:\mathbb{E}\to{\mathbb{J}\times{S^1}}$
(with untwisted reflector curve)
and $p_{\mathbb{E}'}:\mathbb{E}'\to{\mathbb{J}}\times{S^1}$
(with twisted reflector curve).
Any $S^2$-bundle over $\mathbb{J}\times{S^1}$ or $D(2)$
with nonsingular total space must be of this form.
The other local models for nontrivial actions on the fibre have base
$Mb$ and total space $S^2\times{Mb}$ (non-orientable) or
$S^2\times[-1,1]\times[0,1]/(z,t,0)\sim(a(z),-t,1)$ (orientable).
It is also convenient to let
$D(2,2)=[-1,1]\times{S^1}/(x,u)\sim(-x,\bar{u})$
be the disc with two cone points of order 2 and
\[E(2,2)=S^2\times[-1,1]\times{S^1}/(z,x,u)\sim(a(z),-x,\bar{u}),\]
with projection $p_{2,2}([z,x,u])=[x,u]$.
Then $D(2,2)$ is the boundary-connected-sum of two copies of $D(2)$,
and $E(2,2)$ is the corresponding fibre sum of two copies of $E(2)$.
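The identifications above can also be sanity-checked in coordinates. The sketch below is an illustrative check only, with $u$ modelled as a unit complex number and $z$ in the complex chart of \S1; it verifies that the gluing map defining $E(2,2)$ is a fixed-point-free involution and that $p_{2,2}$ descends to $D(2,2)$.

```python
import random

def a(z):
    """Antipodal map in the complex chart: a(z) = -z/|z|^2."""
    return -z / abs(z) ** 2

def glue(z, x, u):
    """The identification (z,x,u) ~ (a(z), -x, conj(u)) defining E(2,2)."""
    return a(z), -x, u.conjugate()

def base_glue(x, u):
    """The identification (x,u) ~ (-x, conj(u)) defining D(2,2)."""
    return -x, u.conjugate()

random.seed(1)
for _ in range(100):
    z = complex(random.gauss(0, 1), random.gauss(0, 1))
    x = random.uniform(-1, 1)
    u = complex(random.gauss(0, 1), random.gauss(0, 1))
    u /= abs(u)  # u lies on the unit circle S^1
    # the gluing map is an involution ...
    z2, x2, u2 = glue(*glue(z, x, u))
    assert abs(z2 - z) < 1e-9 * (1 + abs(z)) and x2 == x and u2 == u
    # ... acting freely, since a has no fixed points
    assert abs(a(z) - z) > 1.0
    # p_{2,2}([z,x,u]) = [x,u] is well defined: identified points upstairs
    # project to identified points downstairs
    zg, xg, ug = glue(z, x, u)
    assert (xg, ug) == base_glue(x, u)
print("E(2,2) identification checks pass")
```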
The manifolds $E(2)$ and $\mathbb{E}'$ have boundary $S^2\tilde\times{S^1}$,
and $p_2|_{\partial{E(2)}}$ and $p|_{\partial\mathbb{E}'}$
are nontrivial $S^2$-bundles over $S^1$.
In all the other cases the restriction of the fibration over
the boundary of the base orbifold is trivial.
(When the base is $B=Mb$ or $D(2,2)$ this can be seen by
noting that $\partial{B}$ is homotopic to the product of two generators
of $\pi_1^{orb}(B)$, and considering the action on $\pi_2(E)\cong{Z}$.)
For later use we may need to choose homeomorphisms
$\partial{E}\cong{S^2}\times{S^1}$.
Let $\alpha,\beta$ and $\tau$ be the self-homeomorphisms
of $S^2\times{S^1}$ defined by $\alpha(z,u)=(a(z),u)$,
$\beta(z,u)=(z,\bar{u})$ and
$\tau(z,u)=(uz,u)$, for all $(z,u)\in{S^2\times{S^1}}$.
The images of $\alpha,\beta$ and $\tau$ generate
$\pi_0(Homeo(S^2\times{S^1}))\cong(Z/2Z)^3$.
The group $\pi_0(Homeo(S^2\tilde\times{S^1}))\cong(Z/2Z)^2$
is generated by the involution $\tilde\beta([z,u])=[z,\bar{u}]$
and the twist $\xi([z,u])=[uz,u]$.
\begin{lemma}
\begin{enumerate}
\item The self-homeomorphisms $\alpha$ and $\beta$ of $S^1\times{S^2}$
extend to fibre-preserving self-homeomorphisms of $S^2\times{D^2}$ and $E(2,2)$.
\item Every self-homeomorphism of $S^1\times{S^2}$ extends
to a fibre-preserving self-homeomorphism of $\mathbb{E}$.
\item The self-homeomorphism $\tilde\beta$ of $S^2\tilde\times{S^1}$
extends to fibre-preserving self-homeomorphisms of $E(2)$
and $\mathbb{E}'$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is sufficient to check that the above representatives of
the isotopy classes extend, which in each case is clear.
\end{proof}
However $\tau$ does not extend across $S^2\times{D^2}$ or $E(2,2)$,
as we shall see.
Nor does $\xi$ extend across $E(2)$ or $\mathbb{E}'$.
\section{general results on orbifold bundles}
Let $M$ be a closed 4-manifold which is the total space of
an orbifold bundle $p:M\to{B}$ with regular fibre $F\cong{S^2}$
over the 2-orbifold $B$.
Then $\pi_1^{orb}(B)\cong\pi_1(M)$.
Let $\Sigma{B}$ be the singular locus of $B$.
For brevity, we shall say that $M$ is an {\it $S^2$-orbifold bundle space}
and $p$ is an {\it $S^2$-orbifold bundle}.
\begin{lemma}
The singular locus $\Sigma{B}$ consists of cone points of
order $2$ and reflector curves (with no corner points).
The number of cone points plus the number of $u$-twisted reflector curves
is even.
In particular, the base orbifold must be good.
There is a cone point if and only if $\pi=\pi_1^{orb}(B)$
has an element $x$ of order $2$ such that $w(x)\not=0$,
and there is a reflector curve if and only if $\pi$ has
an element $x$ of order $2$ such that $w(x)=0$.
\end{lemma}
\begin{proof}
The first assertion holds since the stabilizer of a point
in the base orbifold must act freely on the fibre $S^2$.
Let $N$ be a regular neighbourhood of $\Sigma{B}$,
and let $V$ be the restriction of $U$ to $B\setminus{N}$.
Then $V(\partial{N})=0$.
The action $u$ is trivial on boundary components of $N$
parallel to untwisted reflector curves,
but is nontrivial on all other boundary components.
Therefore $V(\partial{N})$ is the sum of the number of cone points
and the number of $u$-twisted reflector curves, modulo $(2)$.
Thus this number must be even, and $B$ cannot be $S(2)$,
which is the only bad orbifold in which all point stabilizers
have order at most 2.
The final assertions follow since an involution of a surface
with a fixed point is either locally a rotation
about an isolated fixed point or locally a reflection across a
fixed arc.
\end{proof}
If $B$ is spherical then $\widetilde{M}\cong{S^2}\times{S^2}$;
otherwise $\widetilde{M}\cong{S^2}\times{R^2}$.
\begin{lemma}
Let $q:E\to{F}$ be an $S^2$-bundle over a surface with nonempty boundary.
If $q$ is nontrivial but $q|_{\partial{E}}$ is trivial then there is a
non-separating simple closed curve $\gamma$
in the interior of $F$ such that the restriction of the bundle over
$F\setminus\gamma$ is trivial.
\end{lemma}
\begin{proof}
The bundle is determined by the action of $\pi_1(F)$ on $\pi_2(E)$,
and thus by a class $u\in{H^1(F;\mathbb{F}_2)}$.
Since $u|_{\partial{F}}=0$ and $u\not=0$
the Poincar\'e-Lefschetz dual of $u$ is represented
by a simple closed curve $\gamma$ in the interior of $F$,
and $u$ restricts to 0 on $F\setminus\gamma$.
\end{proof}
The restrictions to each fibre of a bundle automorphism of an $S^2$-bundle
over a connected base must either all preserve the orientation
of the fibre or reverse the orientation of the fibre.
As every $S^2$-orbifold bundle has a fibre-preserving
self-homeomorphism which is the involution on each fibre,
it shall suffice to consider the fibre-orientation-preserving
automorphisms.
\begin{lemma}
Let $q:E\to{F}$ be an $S^2$-bundle over a surface such that
$q|_{\partial{E}}$ is trivial.
If $\partial{E}$ has boundary components $\{C_i\mid1\leq{i}\leq{d}\}$
for some $d>0$ and if $\phi_i$ is an orientation-preserving
bundle automorphism of $q|_{C_i}$ for $i<d$
then there is a bundle automorphism $\phi$ of $q$ such that
$\phi|_{q^{-1}(C_i)}=\phi_i$ for $i<d$.
\end{lemma}
\begin{proof}
We may clearly assume that $d\geq2$.
Suppose first that $q$ is trivial.
We may obtain $F$ by identifying in pairs $2k$ sides of a $(2k+d)$-gon $P$.
(The remaining sides correspond to the boundary components $C_i$.)
A bundle automorphism of a trivial $S^2$-bundle over $X$
is determined by a map from $X$ to $Homeo(S^2)$.
Let $[\phi_i]$ be the image of $\phi_i$ in $\pi_1(Homeo(S^2))=Z/2Z$,
for $i<d$,
and define $\phi_d$ on $q^{-1}(C_d)$ so that $[\phi_d]=\Sigma_{i<d}[\phi_i]$.
Let $\phi$ be the identity on the images of the other sides of $P$.
Then $[\phi|_{\partial{P}}]=0$ and so $\phi|_{\partial{P}}$ extends across $P$.
This clearly induces a bundle automorphism $\phi$ of $q$
compatible with the data.
If $q$ is nontrivial let $\gamma$ be a simple closed curve in $F$
as in the previous lemma,
and let $N$ be an open regular neighbourhood of $\gamma$.
If $q$ is trivial let $N=\emptyset$.
Then the restriction of $q$ over $F'=F\setminus{N}$ is
trivial, and so $E'=q^{-1}(F')\cong{F'}\times{S^2}$.
If $N\cong\gamma\times(-1,1)$ then $\partial{E'}$ has $d+2$ components;
if $N\cong{Mb}$ then $\partial{E'}$ has $d+1$ components.
In either case, we let $\phi$ be the identity on the new boundary components,
and proceed as before.
\end{proof}
By Lemma 2 the number of components of $\partial{N}$ over which
the restriction of $p$ is nontrivial is even.
We may use the following lemmas to simplify the treatment of
such components.
Let $D_{oo}=S^2\setminus3intD^2$ be the ``pair of pants",
with boundary $\partial{D_{oo}}=C_1\cup{C_1}\cup{C_3}$.
\begin{lemma}
Let $F$ be a compact surface with at least $2$ boundary components
$C$ and $C'$.
Then there is a simple closed curve $\gamma$ in the interior of $F$
such that $F=X\cup{Y}$,
where $X\cong{D_{oo}}$ and $\partial{X}=C\cup{C'}\cup\gamma$.
\end{lemma}
\begin{proof}
Let $\alpha$ be an arc from $C$ to $C'$.
Then we may take $X$ to be a regular
neighbourhood of $C\cup\alpha\cup{C'}$.
\end{proof}
The two exceptional fibres in $E(2,2)$ have regular neighbourhoods
equivalent to $E(2)$. If we delete the interiors of two such
neighbourhoods we obtain the $S^2$-bundle over $D_{oo}$
which is trivial over exactly one component of $\partial{D_{oo}}$.
Since $D_{oo}\simeq{S^1\vee{S^1}}$ this bundle is well-defined
up to isomorphism.
\begin{lemma}
Let $q:E\to{D_{oo}}$ be the $S^2$-bundle which is nontrivial
over $C_1$ and $C_2$ and trivial over $C_3$.
If $\phi\in {Aut}(q)$ is an automorphism of $q$ let $\phi_i$ be
the restriction of $\phi$ to $E_i=q^{-1}(C_i)$, and let $b_i$ be
the underlying self-homeomorphism of $C_i$, for $i\leq3$.
Then
\begin{enumerate}
\item the $b_i$ either all preserve or all reverse orientation;
\item If $\psi$ is an automorphism of $S^2\tilde\times{S^1}$ then there is an
automorphism $\phi$ of $q$ such that $\phi_1=\phi_2=\psi$,
and such that $\phi_3$ extends across $S^2\times{D^2}$;
\item if $\phi\in {Aut}(q)$ then $\phi_1$ and $\phi_2$ are isotopic
if and only if $\phi_3$ extends across $S^2\times{D^2}$;
\item there is a $\phi\in {Aut}(q)$ such that $\phi_1=id$, $\phi_2=\xi$ and
$\phi_3=\tau$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $L=S^2\times[0,1]^2/\sim$, where $(z,x,0)\sim(a(z),x,1)$
for all $z\in{S^2}$ and $0\leq{x}\leq1$.
Then $L$ is the total space of the nontrivial $S^2$-bundle over the annulus
$A=[0,1]\times{S^1}$, with projection $p_L:L\to{A}$ given by
$p_L([z,x,y])=(x,e^{2\pi{iy}})$.
The boundary components of $L$ are each homeomorphic to $S^2\tilde\times{S^1}$.
Let $k=(\frac12,1)\in{A}$, $D=\{(x,u)\in{A}\mid d((x,u),k)<\frac14\}$,
$B=A\setminus{D}$ and $E=L\setminus{p_L^{-1}(D)}$.
Then $p_L|_E$ is a model for $q$.
The first assertion is clear, since $D_{oo}$ is orientable.
The automorphism $id_{[0,1]}\times\psi$ of $p_L$ restricts
to an automorphism $\phi$ of $q$ with the desired boundary behaviour.
If $\phi_3$ extends across $S^2\times{D^2}$ then $\phi_1$ and $\phi_2$
together bound an automorphism of $p_L$, and so must be isotopic.
Conversely, if $\phi_1$ and $\phi_2$ are isotopic we may assume that
they are isotopic to the identity, by (2).
The automorphism $\phi$ then extends to an automorphism of $E(2,2)$.
Now $E(2,2)\cup_\tau{S^2}\times{D^2}$
is not homeomorphic to $E(2,2)\cup{S^2}\times{D^2}$.
(See \S4 below).
Therefore $\tau$ does not extend across $E(2,2)$,
and so $\phi_3$ must extend across $S^2\times{D^2}$.
Let $P=(0,-1)$, $Q=(1,-1)$, $R=(\frac34,1)$ and $S=(1,1)$ be points in $B$
and let $B'=B\setminus(PQ\cup{RS})\times(-\varepsilon,\varepsilon)$.
Then $B'\cong{D^2}$, and so the restriction $q'=q|_{B'}$ is trivial.
We may clearly define a bundle automorphism of $q'$
which rotates the fibre once as we go along each of the arcs
corresponding to $\{1\}\times{S^1}$ and $\partial{D}$
and is the identity over the rest of the boundary.
Since the automorphisms agree along the pairs of arcs corresponding to
$PQ$ and $RS$, we obtain the desired automorphism of $q$.
\end{proof}
Let $j:S^2\times{D^2}\to{M}$ be a fibre-preserving embedding of a
closed regular neighbourhood of a regular fibre of $p$,
and let $N$ be the image of $j$.
The {\it Gluck reconstruction} of $p$ is the orbifold bundle
$p^\tau:M^\tau\to{B}$ with total space
$M^\tau=M\setminus{intN}\cup_{j\tau}{S^2\times{D^2}}$
and projection given by $p$ on $M\setminus{intN}$
and by projection to the second factor on $S^2\times{D^2}$.
\begin{theorem}
Let $p:M\to{B}$ and $p':M'\to{B}$ be $S^2$-orbifold bundles
over the same base $B$ and with the same action
$u:\pi_1^{orb}(B)\to\mathbb{Z}^\times$.
If $\Sigma{B}$ is nonempty then $p'$ is isomorphic to $p$ or $p^\tau$,
and so $M'\cong{M}$ or $M^\tau$.
\end{theorem}
\begin{proof}
The base $B$ has a suborbifold $N$ which contains $\Sigma{B}$
and is a disjoint union of copies of regular neighbourhoods
of reflector curves and copies of $D(2,2)$, by Lemma 2.
If $C$ is a reflector curve, with regular neighbourhood
$N(C)\cong{J}\times{S^1}$,
then $p^{-1}(N(C))\cong\mathbb{E}$ or $\mathbb{E}'$,
while if $D(2,2)\subset{B}$ then $p^{-1}(D(2,2))\cong{E(2,2)}$.
Since $N$ is nonempty and the restrictions of $p$ and $p'$
over $B\setminus{N}$ are $S^2$-bundles with the same data, they are isomorphic.
Moreover the bundles are trivial over the boundary components of
$B\setminus{N}$.
After composing with a fibrewise involution, if necessary,
we may assume that the bundle isomorphism restricts
to orientation-preserving homeomorphisms of these boundary components.
Let $R$ be a regular neighbourhood of a regular fibre $S^2$.
Using Lemmas 4 and 6 we may construct a fibre-preserving homeomorphism $h$
from $M\setminus{p^{-1}(R)}$ to $M'\setminus{p'^{-1}(R)}$.
If $h|_{\partial{R}}$ extends across $R$ then $p'\cong{p}$;
otherwise $p'\cong{p^\tau}$.
\end{proof}
If $u$ is nontrivial the standard geometric 4-manifold $M_{st}$
realizing $\pi=\pi_1^{orb}(B)$ is the total space of an orbifold bundle
$p_{st}$ with regular fibre $S^2$, base $B$ and action $u$.
\begin{cor}[A]
Every $S^2$-orbifold bundle is either geometric or is the
Gluck reconstruction of a standard geometric orbifold bundle.
\qed
\end{cor}
\begin{cor}[B]
If $\Sigma{B}$ contains a reflector curve then
every $S^2$-orbifold bundle over $B$ is a standard geometric bundle.
\qed
\end{cor}
We may also realize actions with base a non-compact hyperbolic
2-orbifold by geometric orbifold bundles.
\begin{cor}[C]
If $B$ has a nontrivial decomposition into hyperbolic pieces
then $M$ has a proper geometric decomposition.
\qed
\end{cor}
In particular, if $B$ is hyperbolic (and not $T(2,2)$ or $Kb(2,2)$)
then either $M$ is geometric or it has a proper geometric decomposition.
Let $B$ and $\overline{B}$ be 2-orbifolds and
let $u$ and $\bar{u}$ be actions of $\pi=\pi^{orb}(B)$ and
$\overline\pi=\pi^{orb}(\overline{B})$ on $Z$ with torsion-free kernels.
An orbifold map $f:B\to\overline{B}$ is {\it compatible with the actions\/}
$u$ and $\bar{u}$ if it induces an epimorphism $f_*:\pi\to\bar\pi$
such that $u=\bar{u}f$.
If $p:\overline{M}\to\overline{B}$ is an $S^2$-orbifold bundle
realizing $(\overline\pi,\bar{u})$ then the pullback $f^*p$
is an $S^2$-orbifold bundle realizing $(\pi,u)$.
If moreover $f$ is an isomorphism over a non-empty open subset
of $\overline{B}$ then $(f^*p)^\tau=f^*(p^\tau)$.
In his dissertation Vogt classified $S^2$-orbifold bundles
over 2-orbifolds with no reflector curves.
While he expected that (in our terminology) Gluck reconstruction
should change the homeomorphism type of the total space,
he left this question open \cite{Vo70}.
\section{spherical base orbifold}
If the base orbifold is spherical then it must be one of $S^2$,
$RP^2$, $S(2,2)$, $\mathbb{D}$ or $\mathbb{D}(2)$, by Lemma 2.
There are two $S^2$-bundle spaces over $S^2$, and four over $RP^2$.
The latter are quotients of $S^2\times{S^2}$ by involutions of the form
$(A,-I)$, where $A\in{GL(3,\mathbb{Z})}$ is a diagonal matrix,
and projection to the quotient of the second factor by
the antipodal map induces the bundle projection.
If $A=diag[-1,-1,1]=R_\pi$ or $diag[1,1,-1]=aR_\pi$
then projection to the first factor induces an orbifold bundle
(over $S(2,2)$ or $\mathbb{D}$, respectively) with general fibre $S^2$.
The geometric orbifold bundle over $S(2,2)$
has total space $E(2,2)\cup{S^2}\times{D^2}$.
It is also the total space of an $S^2$-bundle over $RP^2$.
There is another $S^2$-orbifold bundle over $S(2,2)$,
with total space $RP^4\#_{S^1}RP^4=E(2,2)\cup_\tau{S^2}\times{D^2}$.
(Note that by Lemma 6 there is a bundle automorphism
of $E(2,2)\setminus{E(2)}$ which is the twist $\tau$
on $\partial{E(2,2)}$ and the twist $\xi$ on $\partial{E(2)}$.
Hence $E(2,2)\cup_\tau{S^2}\times{D^2}\cong{E(2)}\cup_\xi{E(2)}$.
The latter model for $RP^4\#_{S^1}RP^4$ is used in \cite{KKR}.)
The total spaces of these two $S^2$-bundles over $S(2,2)$
are not homotopy equivalent,
since the values of the $q$-invariant of \cite{KKR} differ.
Thus $\tau$ does not extend to a homeomorphism of $E(2,2)$.
The $S^2$-orbifold bundle over $\mathbb{D}=S^2/z\sim{aR_\pi(z)}$
given by this construction is the unique such bundle,
by Corollary B of Theorem 7.
(The reflector curve is untwisted.)
The total space is orientable and has $v_2=0$.
Finally, $\mathbb{D}(2)$ is the quotient of $S^2$ by the group $(Z/2Z)^2$
generated by $a$ and $R_\pi$.
Since these generators commute, $R_\pi$ induces an involution of $RP^2$
which fixes $RP^1$ and a disjoint point.
The corresponding $S^2$-orbifold bundle space
is $S^2\times{S^2}/(x,y)\sim(x,-y)\sim(-x,R_\pi(y))$.
This is again the unique such bundle,
by Corollary B of Theorem 7.
(The reflector curve is now $u$-twisted.)
It is also the total space of the nontrivial $RP^2$-bundle over $RP^2$.
\section{the $k$-invariant}
If $\pi=\pi_1(M)$ is torsion-free then $c.d.\pi=2$, and so $H^3(\pi;Z^u)=0$.
Hence $k_1(M)=0$.
Therefore in this section we may assume that $\pi$
has an element $x$ of order 2.
Let $P=P_2(M_{st})$.
The image of $H_4(CP^\infty;\mathbb{F}_2)$ in $H_4(P;\mathbb{F}_2)$
is fixed under the action of $Aut(P)$,
and so $Aut(P)$ acts on this homology group through
a quotient of order at most 2.
Since $M_{st}$ is geometric $Aut(\pi)$ acts isometrically.
More generally, if $M$ is the total space of an orbifold bundle
then $Aut(\pi)$ acts by orbifold automorphisms of the base.
The antipodal map on the fibres defines a self-homeomorphism which
induces $-1$ on $\pi_2(M)$.
These automorphisms clearly fix $H_4(P;\mathbb{F}_2)$.
Thus it shall be enough to consider the action of the subgroup
of $Aut(P)$ which acts trivially on $\pi_1$ and $\pi_2$.
Since $P$ is a connected cell-complex with $\pi_i(P)=0$ for $i>2$
this subgroup is isomorphic to $H^2(\pi;Z^u)$ \cite{Ru92}.
\begin{theorem}
Let $M_o=M_{st}\setminus{intD^4}$ be the complement of an open disc in $M_{st}$.
Then $M_{st}^\tau\simeq{M_o\cup_fD^4}$ for some $f:S^3\to{M_o}$.
\end{theorem}
\begin{proof}
Since
$S^2\times{D^2}=(D^2\times{D^2})\cup(D^2\times{D^2})=(D^2\times{D^2})\cup{D^4}$,
we may obtain each of $M_{st}$ and $M_{st}^\tau$ from $M_{st}\setminus{N}$
(up to homotopy) by first adding a 2-cell and then a 4-cell.
The attaching maps for the 2-cells are the inclusions $u\mapsto(1,u)$
and $u\mapsto(u,u)$ of $S^1$ into $\partial{N}=S^2\times{S^1}$,
respectively.
Since these are clearly homotopic,
$M_{st}^\tau$ may be obtained from $M_{st}$
by changing the attaching map for the top cell
of $M_{st}=M_o\cup{D^4}$.
\end{proof}
(It can be shown that the attaching maps differ
by the image of the Hopf map $\eta$ in $\pi_3(M_o)$.)
\begin{cor}
The inclusions of $M_o$ into $M_{st}$ and $M_{st}^\tau$
induce isomorphisms of cohomology in degrees $\leq3$.
\qed
\end{cor}
This theorem also implies that $P_2(M_{st}^\tau)\simeq{P_2(M_{st})}$,
since each may be constructed by adjoining cells to $M_o$ to
kill the higher homotopy.
However the Corollary of Theorem 10 below is stronger,
in that it does not assume the manifolds under consideration
are $S^2$-orbifold bundle spaces.
If $M$ is {\it any} closed 4-manifold with $\widetilde{M}\simeq{S^2}$
then the $u$-twisted Bockstein $\beta^u$ maps $H^2(\pi;\mathbb{F}_2)$
onto $H^3(\pi;Z^u)$, and the restriction of $k_1(M)$
to each subgroup of order 2 in $\pi$ is nontrivial,
by Lemma 10.4 of \cite{Hi}.
On looking at the structure of such groups and applying
Mayer-Vietoris arguments to compute these cohomology groups,
we can show that there is only one possible $k$-invariant.
\begin{lemma}
Let $\alpha=*^kZ/2Z=
\langle{x_i, 1\leq{i}\leq{k}}\mid{x_i^2=1~\forall~i}\rangle$
and let $u(x_i)=-1$ for all $i$.
Then restriction from $\alpha$ to $\phi=\mathrm{Ker}(u)$
induces an epimorphism from $H^1(\alpha;Z^u)$ to $H^1(\phi;Z)$.
\end{lemma}
\begin{proof}
Let $x=x_1$ and $y_i=x_1x_i$ for all $i>1$.
Then $\phi=\mathrm{Ker}(u)$ is free with basis $\{y_2,\dots,y_k\}$
and so $\alpha\cong{F(k-1)\rtimes{Z/2Z}}$.
If $k=2$ then $\alpha$ is the infinite dihedral group $D$ and
the lemma follows by direct calculation with resolutions.
In general, the subgroup $D_i$ generated by $x$ and $y_i$ is an
infinite dihedral group, and is a retract of $\alpha$.
The retraction is compatible with $u$,
and so restriction maps $H^1(\alpha;Z^u)$ onto $H^1(D_i;Z^u)$.
Hence restriction maps $H^1(\alpha;Z^u)$ onto each summand
$H^1(\langle{y_i}\rangle;Z)$ of $H^1(\phi;Z)$, and the result follows.
\end{proof}
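In terms of crossed homomorphisms (a sketch of the same computation, using the identification of $H^1$ with derivations modulo principal derivations): a derivation $f:\alpha\to{Z^u}$ satisfies $f(gh)=f(g)+u(g)f(h)$, so $f(x_i^2)=f(x_i)-f(x_i)=0$ holds automatically, and $f$ is determined by the arbitrary values $f(x_i)$. Restriction to $\phi$ gives
\[f(y_i)=f(x_1x_i)=f(x_1)-f(x_i),\quad 2\leq{i}\leq{k},\]
and these values may also be prescribed arbitrarily (take $f(x_1)=0$),
so restriction from $H^1(\alpha;Z^u)$ to $H^1(\phi;Z)=Hom(\phi,Z)$
is again seen to be onto.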
In particular, if $k$ is even then $z=\Pi{x_i}$ generates
a free factor of $\phi$, and restriction
maps $H^1(\alpha;Z^u)$ onto $H^1(\langle{z}\rangle;Z)$.
Let $S(2_k)$ be the sphere with $k$ cone points of order 2.
\begin{theorem}
Let $B$ be an aspherical $2$-orbifold,
and let $u:\pi=\pi_1^{orb}(B)\to\mathbb{Z}^\times$ be an
epimorphism with torsion-free kernel $\kappa$.
Suppose that $\Sigma{B}\not=\emptyset$, and that
$B$ has $r$ reflector curves and $k$ cone points.
Then $H^2(\pi;Z^u)\cong(Z/2Z)^r$ if $k>0$ and
$H^2(\pi;Z^u)\cong{Z}\oplus(Z/2Z)^{r-1}$ if $k=0$.
In all cases $\beta^u(U^2)$ is the unique element of $H^3(\pi;Z^u)$
which restricts non-trivially to each subgroup of order $2$.
\end{theorem}
\begin{proof}
Suppose first that $B$ has no reflector curves.
Then $B$ is the connected sum of a closed surface $G$
with $S(2_k)$, and $k$ is even, by Lemma 2.
If $B=S(2_k)$ then $k\geq4$, since $B$ is aspherical.
Hence $\pi\cong\mu*_Z\nu$,
where $\mu=*^{k-2}Z/2Z$ and $\nu=Z/2Z*Z/2Z$
are generated by cone point involutions.
Otherwise $\pi\cong\mu*_Z\nu$,
where $\mu=*^kZ/2Z$ and $\nu=\pi_1(G\setminus{D^2})$
is a non-trivial free group.
Every non-trivial element of finite order in such a generalized
free product must be conjugate to one of the involutions.
In each case a generator of the amalgamating subgroup is identified with
the product of the involutions which generate the factors of $\mu$
and which is in $\phi=\mathrm{Ker}(u|_\mu)$.
Restriction from $\mu$ to $Z$ induces an epimorphism from
$H^1(\mu;Z^u)$ to $H^1(Z;Z)$, by Lemma 9,
and so
\[H^2(\pi;Z^u)\cong{H^2(\mu;Z^u)}\oplus{H^2(\nu;Z^u)}=0,\]
by the Mayer-Vietoris sequence with coefficients $Z^u$.
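In more detail: since the amalgamating subgroup lies in $\mathrm{Ker}(u)$,
the relevant segment of this sequence is
\[H^1(\mu;Z^u)\oplus{H^1(\nu;Z^u)}\to{H^1(Z;Z)}\to{H^2(\pi;Z^u)}\to
{H^2(\mu;Z^u)}\oplus{H^2(\nu;Z^u)}\to{H^2(Z;Z)}=0,\]
and the surjectivity of the first map implies that the connecting
homomorphism to $H^2(\pi;Z^u)$ is trivial.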
Similarly,
\[H^2(\pi;\mathbb{F}_2)\cong
{H^2(\mu;\mathbb{F}_2)}\oplus{H^2(\nu;\mathbb{F}_2)},\]
by the Mayer-Vietoris sequence with coefficients $\mathbb{F}_2$.
Let $e_i\in{H^2(\pi;\mathbb{F}_2)}=Hom(H_2(\pi;\mathbb{F}_2),\mathbb{F}_2)$
correspond to restriction to the $i$th cone point.
Then $\{e_1,\dots,e_k\}$ forms a basis for
$H^2(\pi;\mathbb{F}_2)\cong\mathbb{F}_2^k$,
and $\Sigma{e_i}$ is clearly the only element with nonzero
restriction to all the cone point involutions.
Since $H^2(\pi;Z^u)=0$ the $u$-twisted Bockstein maps
$H^2(\pi;\mathbb{F}_2)$ isomorphically onto $H^3(\pi;Z^u)$,
and so there is an unique possible $k$-invariant.
Suppose now that $r>0$.
Then $B=r\mathbb{J}\cup{B_o}$, where $B_o$ is a connected 2-orbifold
with $r$ boundary components and $k$ cone points.
Hence $\pi=\pi\mathcal{G}$,
where $\mathcal{G}$ is a graph of groups with underlying graph
a tree having one vertex of valency $r$ with group
$\nu=\pi_1^{orb}(B_o)$, $r$ terminal vertices,
with groups $\gamma_i\cong\pi_1^{orb}(\mathbb{J})={Z}\oplus{Z/2Z}$,
and $r$ edge groups $\omega_i\cong{Z}$.
If $k>0$ then restriction maps $H^1(\nu;Z^u)$ onto $\oplus{H^1(\omega_i;Z)}$
and then $H^2(\pi;Z^u)\cong\oplus{H^2(\gamma_i;Z^u)}\cong(Z/2Z)^r$.
However if $k=0$ then $H^2(\pi;Z^u)\cong{Z}\oplus(Z/2Z)^{r-1}$.
The Mayer-Vietoris sequence with coefficients $\mathbb{F}_2$
gives an isomorphism
$H^2(\pi;\mathbb{F}_2)\cong{H^2(\nu;\mathbb{F}_2)}\oplus
(H^2(Z\oplus{Z/2Z};\mathbb{F}_2))^r\cong\mathbb{F}_2^{2r+k}$.
The generator of the second summand of $H^2(Z\oplus{Z/2Z};\mathbb{F}_2)$
is in the image of reduction modulo $(2)$ from $H^2(Z\oplus{Z/2Z};Z^u)$,
and so is in the kernel of $\beta^u$.
Therefore the image of $\beta^u$ has a basis corresponding to the
cone points and reflector curves,
and we again find an unique $k$-invariant.
Since $\beta^u(U^2)$ restricts to the generator of $H^3(Z/2Z;Z^u)$ at
each involution in $\pi$,
we must have $k_1(M)=\beta^u(U^2)$.
\end{proof}
\begin{cor}
If $M$ is a closed $4$-manifold with $\pi_2(M)\cong{Z}$
and $\pi_1(M)\cong\pi^{orb}(B)$ then $P_2(M)\simeq{P_2(M_{st})}$,
where $M_{st}$ is the standard geometric $4$-manifold with the
same fundamental group.
\qed
\end{cor}
\section{the image of $[M]$ in $H_4(P_2(M);\mathbb{F}_2)$}
As in \cite{Hi09} it is useful to begin this section
by considering first the simpler case when $u$ is trivial.
The group $\pi$ is then a $PD_2$-group, and so $k_1(M)=0$.
Let $F$ be a closed surface with $\pi_1(F)=\pi$,
and let $P={CP^\infty}\times{F}\simeq\Omega{K(Z,3)}\times{F}$.
The natural inclusion $f_{st}:M_{st}=S^2\times{F}\to{P}$ is 3-connected,
and so it is the second stage of the Postnikov tower for $M_{st}$.
The nontrivial bundle space with this group and action
is the Gluck reconstruction $M_{st}^\tau$.
We may assume that the neighbourhood $N$ of a fibre is a
product $S^2\times{D^2}$, where $D^2\subset{F}$.
Let $h:M_{st}^\tau\to{CP^2\times{F}}\subset{P}$ be the map defined by
$h(m)=f_{st}(m)$ for all $m\in{M_{st}\setminus{N}}$
and $h([z_0:z_1],d)=([dz_0:z_1:(1-|d|)z_0],d)$ for all $[z_0:z_1]\in{S^2=CP^1}$
and $d\in{D^2}$.
(The two definitions agree on $S^2\times{S^1}$,
since $\tau([z_0:z_1],u)=([uz_0:z_1],u)$ for $u\in{S^1}$.)
Then $h$ is 3-connected, and so is the second stage of
the Postnikov tower for $M_{st}^\tau$.
By the K\"unneth Theorem,
\[H_4(P;\mathbb{F}_2)\cong{H_4(CP^\infty;\mathbb{F}_2)}\oplus
(H_2(CP^\infty;\mathbb{F}_2)\otimes{H_2(F;\mathbb{F}_2))}\cong\mathbb{F}_2^2.\]
Homotopy classes of self-maps of $P$ which induce the identity on $\pi$
and $\pi_2$ are represented by maps $(c,f)\mapsto (c.s(f),f)$,
where $s:F\to\Omega{K(Z,3)}$ and we use the loop space multiplication
on $\Omega{K(Z,3)}$.
It is not hard to see that these act trivially on $H_4(P;\mathbb{F}_2)$.
Since automorphisms of $\pi$ and $\pi_2$ are realized
by self-homeomorphisms of $F$ and $CP^\infty$, respectively,
$Aut(P)$ acts trivially on $H_4(P;\mathbb{F}_2)$.
Let $q:P\to{CP^\infty}$ be the projection to the first factor.
Then $qf_{st}$ factors through the inclusion of $CP^1$, and so has degree 0.
On the other hand,
if $(w,d)$ is in the open subset $U=\mathbb{C}\times{intD^2}$
with $z_0\not=0$ and $|d|<1$ then $qh(w,d)=[d:w:1-|d|]$,
and ${(qh)^{-1}([a:b:1])}=(b/(1+|a|),a/(1+|a|))$.
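To verify the formula for the inverse: if $d=a/(1+|a|)$ and $w=b/(1+|a|)$
then $1-|d|=1/(1+|a|)$, and so
\[qh(w,d)=[d:w:1-|d|]=
[\frac{a}{1+|a|}:\frac{b}{1+|a|}:\frac1{1+|a|}]=[a:b:1].\]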
Hence $qh$ maps $U$ bijectively onto the dense open subset
$CP^2\setminus{CP^1}$,
and collapses $M_{st}^\tau\setminus{h(U)}=M\setminus{intN}$ onto $CP^1$.
Therefore $qh:M_{st}^\tau\to{CP^2}$ has degree 1.
Thus the images of $[M_{st}]$ and $[M_{st}^\tau]$
in $H_4(P;\mathbb{F}_2)$ are not equivalent under the action of $Aut(P)$.
This is not surprising, as $v_2(M_{st})=0$,
but twisting the neighbourhood of a regular fibre changes the
{\it mod}-(2) self-intersection number of a section to the bundle,
and so $v_2(M_{st}^\tau)\not=0$.
If $M$ is an $S^2$-orbifold bundle space with exceptional fibres
then the image of a regular fibre in $H_2(M;\mathbb{F}_2)$ is trivial,
since the inclusion factors through the covering $S^2\to{RP^2}$,
up to homotopy.
Therefore the {\it mod}-(2) Hurewicz homomorphism is trivial,
and Gluck reconstruction does not change the {\it mod}-(2)
self-intersection pairing.
In particular, $H^2(\pi;\mathbb{F}_2)\cong {H^2(M;\mathbb{F}_2)}$,
and $v_2(M_{st}^\tau)=v_2(M_{st})$.
Although we cannot expect to detect the effect of twisting through the Wu class,
we may adapt the argument above to $S^2$-orbifold bundles with $u\not=1$.
Then
\[K(\pi,1)\simeq{S^\infty}\times{K(\kappa,1)}/(s,k)\sim(-s,\zeta(k)).\]
(If $\pi$ is torsion-free we do not need the $S^\infty$ factor.)
The antipodal map of $CP^1=S^2$ extends to involutions on $CP^n$
given by
\[[z_0:z_1:z_2:\dots:z_n]\mapsto
[-\overline{z_1}:\overline{z_0}:\overline{z_2}:\dots:\overline{z_n}].\]
(Here only the first two homogeneous coordinates change position or sign.)
Since these are compatible with the inclusions of $CP^n$ into $CP^{n+1}$
given by $[z_0:\dots:z_n]\mapsto[z_0:\dots:z_n:0]$,
they give rise to an involution $\sigma$ on $CP^\infty=\varinjlim{CP^n}$.
Let
\[P=CP^\infty\times{S^\infty}\times{K(\kappa,1)}/
(z,s,k)\sim(\sigma(z),-s,\zeta(k)).\]
Then $\pi_1(P)\cong\pi$, $\pi_2(P)\cong{Z^u}$ and $\pi_j(P)=0$ for $j>2$.
We shall exclude the case of $RP^2$-bundle spaces, with
$\pi\cong\kappa\times{Z/2Z}$, as these are well understood.
(The self-intersection number argument does apply in this case.)
\begin{theorem}
Let $\pi$ be a group with an epimorphism $u:\pi\to{Z/2Z}$
such that $\kappa=\mathrm{Ker}(u)$ is a $PD_2$-group, and
suppose that $\pi$ is not a direct product $\kappa\times{Z/2Z}$.
Let $M_{st}$ be the standard geometric $4$-manifold corresponding
to $(\pi,u)$ and $P=P_2(M_{st})$.
Then the images of $[M_{st}]$ and $[M_{st}^\tau]$ in $H_4(P;\mathbb{F}_2)$
are distinct.
\end{theorem}
\begin{proof}
The diagonal map from $S^2$ to ${S^2}\times{S^2}=CP^1\times{S^2}$
determines a 3-connected map $f_{st}:M_{st}\to{P}$
by $f_{st}([s,k])=[s,s,k]$.
This is the second stage of the Postnikov tower for $M_{st}$,
and embeds $M_{st}$ as a submanifold of
${CP^1\times{S^2}\times{K(\kappa,1)}/\sim}$ in $P$.
We again have $H_4(P;\mathbb{F}_2)\cong\mathbb{F}_2^2$,
with generators the images of $[M_{st}]$ and $[CP^2]$.
The projection of $CP^\infty\times{S^\infty}\times{K(\kappa,1)}$
onto its first two factors induces a map
$g:P\to{Q}=CP^\infty\times{S^\infty}/(z,s)\sim(\sigma(z),-s)$
which is in fact a bundle projection with fibre $K(\kappa,1)$.
Since $gf_{st}$ factors through $S^2$ the image of $[M_{st}]$
in $H_4(Q;\mathbb{F}_2)$ is trivial.
Since $\pi$ is not a direct product,
$M_{st}$ is the total space of an $S^2$-orbifold bundle $p_{st}$.
Let $v:S^2\times{D^2}\to{V}\subset{M_{st}}$ be a fibre-preserving
homeomorphism onto a regular neighbourhood of a regular fibre.
Since $V$ is 1-connected $f_{st}|_V$ factors through
${CP^\infty\times{S^\infty}\times{K(\kappa,1)}}$.
Let $f_1$ and $f_2$ be the composites of a fixed lift of
$f_{st}v\tau:S^2\times{S^1}\to{P}$ with the projections to $CP^\infty$
and $S^\infty$, respectively.
Let $F_1$ be the extension of $f_1$ given by
\[F_1([z_0:z_1],d)=[dz_0:z_1:(1-|d|)z_0]\]
for all $[z_0:z_1]\in{S^2}=CP^1$ and $d\in{D^2}$.
Since $f_2$ maps $S^2\times{S^1}$ to $S^2$ it is nullhomotopic in $S^3$,
and so extends to a map $F_2:S^2\times{D^2}\to{S^3}$.
Then the map $F:M_{st}^\tau\to{P}$ given by $f_{st}$
on $M_{st}\setminus{N}$ and $F(s,d)=[F_1(s),F_2(s),d]$
for all $(s,d)\in{S^2\times{D^2}}$ is 3-connected,
and so it is the second stage of the Postnikov tower for $M_{st}^\tau$.
Now $F_1$ maps the open subset $U=\mathbb{C}\times{intD^2}$
with $z_0\not=0$ bijectively onto its image in $CP^2$,
and maps $V$ onto $CP^2$.
Let $\Delta$ be the image of $CP^1$ under the diagonal embedding in
$CP^1\times{CP^1}\subset{CP^2\times{S^3}}$.
Then $(F_1,F_2)$ carries $[V,\partial{V}]$ to the image of $[CP^2,CP^1]$ in
$H_4(CP^2\times{S^3},\Delta;\mathbb{F}_2)$.
The image of $[V,\partial{V}]$ generates
$H_4(M_{st}^\tau,M_{st}^\tau\setminus{U};\mathbb{F}_2)$.
A diagram chase now shows that $[M_{st}^\tau]$ and $[CP^2]$ have the same
image in $H_4(Q;\mathbb{F}_2)$, and so $[M_{st}^\tau]\not=[M_{st}]$
in $H_4(P;\mathbb{F}_2)$.
\end{proof}
It remains to consider the action of $Aut(P)$.
Since $M$ is geometric $Aut(\pi)$ acts isometrically.
The antipodal map on the fibres defines a self-homeomorphism which
induces $-1$ on $\pi_2(M)$.
These automorphisms clearly fix $H_4(P;\mathbb{F}_2)$.
Thus it is enough to consider the action of $G=H^2(\pi;Z^u)$
on $H_4(P;\mathbb{F}_2)$.
\begin{cor}
Every $4$-manifold realizing $(\pi,u)$ is homotopy equivalent
to $M$ or $M^\tau$.
If $B=X/\pi$ has no reflector curves then $M^\tau\not\simeq{M}$.
\end{cor}
\begin{proof}
The first assertion holds since the image of the fundamental class
in $H_4(P_2(M);\mathbb{F}_2)$ must generate {\it mod} $[CP^2]$,
and so be $[M]$ or $[M]+[CP^2]$.
If $B$ is nonsingular then Gluck reconstruction changes the
self-intersection of a section,
and hence changes the Wu class $v_2(M)$.
If $B$ has cone points but no reflector curves then $H^2(\pi;Z^u)=0$,
by Theorem 10, and so $M^\tau\not\simeq{M}$, by Theorem 11.
\end{proof}
Is there a more explicit invariant?
The $q$-invariant of \cite{KKR} is 0 for every
orbifold bundle with regular fibre $S^2$ over an aspherical base.
A closed 4-manifold $M$ is strongly minimal if
the equivariant intersection pairing on $\pi_2(M)$ is 0.
Every group $G$ with $c.d.G\leq2$ is the fundamental group of a
strongly minimal 4-manifold, and every closed 4-manifold
with fundamental group $G$ admits a 2-connected degree-1
map to a strongly minimal 4-manifold \cite{Hi09}.
However, if we allow torsion but assume that $v.c.d.G=2$ and
$G$ has one end then $\pi\cong\kappa\rtimes{Z/2Z}$,
with $\kappa$ a $PD_2$-group, by Theorem 4 of \cite{Hi09}.
When does a closed 4-manifold $N$ with $\pi_1(N)\cong\kappa\rtimes{Z/2Z}$
admit a 2-connected degree-1 map to an $RP^2$-bundle space or to
an $S^2$-orbifold bundle space?
\section{the main result}
We may now summarize our results in the following theorem.
\begin{theorem}
Let $M$ be a closed $4$-manifold with $\pi_2(M)\cong{Z}$,
and let $\kappa=\mathrm{Ker}(u)$,
where $u:\pi=\pi_1(M)\to{Aut(\pi_2(M))}=\mathbb{Z}^\times$
is the natural action.
Then
\begin{enumerate}
\item{if} $\pi=1$ then $M\simeq{CP^2}$;
\item{if} $\pi\cong\kappa\times{Z/2Z}$ then $M$ is homotopy equivalent to the
total space of an $RP^2$-bundle over an aspherical surface
$F\simeq{K(\kappa,1)}$;
\item{if} $\pi\not=1$ and $\pi\not\cong\kappa\times{Z/2Z}$
then $M$ is an $S^2$-orbifold bundle space over an aspherical
$2$-orbifold $B$ with $\pi^{orb}(B)\cong\pi$.
If $B$ has a reflector curve then $M\simeq{M_{st}}$;
otherwise there are two homotopy types.
\end{enumerate}
\end{theorem}
\begin{proof}
If $\pi=1$ then $P_2(M)\simeq{CP^\infty}$,
and the classifying map $f_M:M\to{P_2(M)}$
factors through $CP^2$, by general position.
This map induces isomorphisms on cohomology,
by the nonsingularity of Poincar\'e duality,
and so is a homotopy equivalence.
If $\pi\cong\kappa\times{Z/2Z}$ then $M$ is homotopy equivalent to the
total space of an $RP^2$-bundle over an aspherical surface $F$,
by Theorem 5.16 of \cite{Hi}.
Clearly $\pi_1(F)\cong\pi$.
If $\pi$ is nontrivial and not a product with $Z/2Z$ then
$k_1(M)$ is determined by $(\pi,u)$, by Theorem 10, and so
there are at most two possible homotopy types,
by Theorem 10.6 of \cite{Hi}.
These are represented by the $S^2$-orbifold bundle spaces
$M_{st}$ and $M_{st}^\tau$, by Theorem 11.
If moreover $B$ has a reflector curve
then $M_{st}^\tau$ and $M_{st}$ are diffeomorphic, by Corollary B of
Theorem 7.
Otherwise, $H^2(\pi;Z^u)=0$ and so these orbifold bundle spaces
are not homotopy equivalent.
\end{proof}
\begin{cor} [A]
Let $M_\kappa$ be the double cover associated to $\kappa$.
If $\pi\not=1$ and $\pi\not\cong\kappa\times{Z/2Z}$
then $M_\kappa\simeq{S^2}\times{K(\kappa,1)}$.
\end{cor}
\begin{proof}
The double cover of $M_{st}$ is ${S^2}\times{K(\kappa,1)}$,
and the double cover of $M_{st}^\tau$ may be obtained from
this by two Gluck reconstructions.
Hence these covers are homeomorphic, and the assertion follows.
\end{proof}
The quotient of the total space of any $S^2$-bundle over
a closed surface $F$ by the fibrewise antipodal involution
is an $RP^2$-bundle over $F$.
Thus the corollary fails if $\pi\cong\kappa\times{Z/2Z}$.
\begin{cor} [B]
If $M$ is orientable and $\pi$ has torsion then $M\simeq{M_{st}}$.
\end{cor}
\begin{proof}
The double cover $M_\kappa$ is an $S^2$-bundle over a surface $F$.
Since $M$ is orientable and $\kappa$ acts trivially,
$F$ must also be orientable and the covering involution of $F$
over the base orbifold $B$ must be orientation-reversing.
Since $\pi$ has torsion $\Sigma{B}$ is a non-empty union of reflector curves,
by Lemma 2.
\end{proof}
If $M$ is orientable then the base $B$ is non-orientable.
In fact all $S^2$-orbifold bundle spaces over non-orientable bases are geometric,
by the next result.
\begin{theorem}
Let $B$ be a $\mathbb{X}^2$-orbifold
and let $u:\pi=\pi^{orb}(B)\to{Z/2Z}$ be an epimorphism
with torsion-free kernel $\kappa$.
Then $M_{st}^\tau$ is geometric if and only if either
$B$ has a reflector curve or $\pi$ is not generated by involutions.
\end{theorem}
\begin{proof}
If $\pi$ is torsion-free then all $S^2$-bundle spaces over $B$ are geometric,
by Theorems 10.8 and 10.9 of \cite{Hi},
while if $\Sigma{B}$ has a reflector curve then $M_{st}^\tau\cong{M_{st}}$,
by Theorem 7.
Therefore we may assume that $\Sigma{B}$
is a non-empty finite set of cone points of order $2$.
If $B$ has no reflector curves and $\pi=\pi^{orb}(B)$
is generated by involutions then $B$
is the quotient of an orientable surface
by the hyperelliptic involution.
Since involutions of $R^2$ have fixed points,
the involutions in $\pi$ must act without fixed points on the $S^2$ factor.
Therefore every geometric 4-manifold
with group $\pi$ is diffeomorphic to $M_{st}$,
and so $M_{st}^\tau$ is not geometric.
If $\pi$ is not generated by involutions
then $B\cong{S(2_{2k})}\#{G}$,
where $G$ is a closed surface other than $S^2$.
The action $u$ is trivial on the separating curve of the connected sum,
and so defines an action $u_G$ of $\pi_1(G)$ on $Z$.
The Gluck reconstruction of the standard $S^2$-orbifold bundle over $B$
may be achieved by modifying the $S^2$-bundle over $G$.
If $G$ is aspherical the Gluck reconstruction of the standard bundle over $G$
again has geometric total space,
and the two bundles realizing the action $u_G$ are
distinguished by the representation $\rho$ of $\pi_1(G)$ in $O(3)$,
as in Theorems 10.8 and 10.9 of \cite{Hi}.
We may clearly modify the standard representation of $\pi=\pi^{orb}(B)$
to show that $M_{st}^\tau$ is also geometric.
Otherwise, $G=RP^2$ and $B\cong{S(2_{2(k-1)})}\#{P(2,2)}$,
and a similar argument applies.
\end{proof}
\section{the second wu class}
If $M$ is an $S^2$-bundle space (with $\pi$ torsion-free)
Gluck reconstruction changes the second Wu class $v_2(M)$.
Similarly, if $M$ is an $RP^2$-bundle space we may change
$v_2(M)$ by reattaching a product neighbourhood of a fibre.
However we shall show here that $v_2(M)$ is determined by $\pi$
if $M$ is an $S^2$-orbifold bundle space and the base orbifold
has singularities.
If $\widetilde{M}\simeq{S^2}$ and $x\in\pi$ has order 2
then the generator of $\pi_2(M)$ factors through
$\widetilde{M}/\langle{x}\rangle\simeq{RP^2}$,
and so the {\it mod}-(2) Hurewicz homomorphism is trivial.
Hence $H^i(\pi;\mathbb{F}_2)\cong{H^i(M;\mathbb{F}_2)}$ for $i\leq2$.
\begin{lemma}
The restriction $Res_\pi^\kappa:H^2(\pi;\mathbb{F}_2)\to
{H^2(\kappa;\mathbb{F}_2)}=\mathbb{F}_2$
is surjective,
and cup-product with $U$ maps $H^1(\pi;\mathbb{F}_2)$
onto $\mathrm{Ker}(Res_\pi^\kappa)$.
\end{lemma}
\begin{proof}
Let $\theta$ be the automorphism of $H^1(\kappa;\mathbb{F}_2)$
given by $\theta(A)(k)=A(xkx)$ for all $A\in{H^1(\kappa;\mathbb{F}_2)}$
and $k\in\kappa$.
Let $r=\mathrm{dim}_{\mathbb{F}_2}\mathrm{Ker}(\theta+1)$
and $s=\mathrm{dim}_{\mathbb{F}_2}\mathrm{Im}(\theta+1)$.
Then
$\mathrm{dim}_{\mathbb{F}_2}H^1(Z/2Z;H^1(\kappa;\mathbb{F}_2))=r-s$
and $\beta_1(\kappa;\mathbb{F}_2)=r+s$.
It follows from the LHS spectral sequence that
$\beta_1(\pi;\mathbb{F}_2)=1+r$ and
$\beta_2(\pi;\mathbb{F}_2)=1+r-s+\delta$,
where $\delta=\mathrm{dim}_{\mathbb{F}_2}\mathrm{Im}(Res_\pi^\kappa)\leq1$.
Since $\chi(M)=2-2\beta_1(\pi;\mathbb{F}_2)+\beta_2(\pi;\mathbb{F}_2)$
and also $\chi(M)=\chi(\kappa)=2-\beta_1(\kappa;\mathbb{F}_2)$,
we see that in fact $\delta=1$.
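(Explicitly, equating the two expressions for $\chi(M)$ gives
\[2-2(1+r)+(1+r-s+\delta)=2-(r+s),\]
that is, $1-r-s+\delta=2-r-s$, and so $\delta=1$.)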
Therefore $Res_\pi^\kappa$ is surjective.
Certainly $Res_\pi^\kappa(U\cup{A})=0$ for all $A\in{H^1(\pi;\mathbb{F}_2)}$,
and $U^2\not=0$.
Suppose that $A\in{H^1(\pi;\mathbb{F}_2)}$ is such that $A(x)=0$.
If $U\cup{A}=0$ there is a function $f:\pi\to\mathbb{F}_2$ such that
$U(g)A(h)=f(g)+f(h)+f(gh)$ for all $g,h\in\pi$.
If $g\in\kappa$ then $U(g)=0$ and so $f|_\kappa$ is a homomorphism.
Taking $g=x$ we have $A(h)=f(x)+f(h)+f(xh)$, for all $h\in\pi$,
while taking $h=x$ we have $f(gx)=f(g)+f(x)$ for all $g\in\pi$.
In particular, $f(xhx)=f(xh)+f(x)$, for all $h\in\pi$.
Therefore $A(h)=f(h)+f(xhx)$, for all $h\in\pi$,
and so $A\in\mathrm{Im}(\theta+1)$.
Thus $\mathrm{dim}_{\mathbb{F}_2}\mathrm{Ker}(U\cup-)\leq{s}$,
and so the image of cup-product with $U$ has rank
at least $r-s+1=\mathrm{dim}_{\mathbb{F}_2}\mathrm{Ker}(Res_\pi^\kappa)$.
\end{proof}
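In the notation of the proof, the final count is this:
$\beta_1(\pi;\mathbb{F}_2)=1+r$ and $\beta_2(\pi;\mathbb{F}_2)=2+r-s$
(since $\delta=1$), and so
\[\dim_{\mathbb{F}_2}\mathrm{Im}(U\cup-)\geq\beta_1(\pi;\mathbb{F}_2)-s
=1+r-s=\beta_2(\pi;\mathbb{F}_2)-1
=\dim_{\mathbb{F}_2}\mathrm{Ker}(Res_\pi^\kappa).\]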
If $r>s$ then there are classes $A,B\in{H^1(\pi;\mathbb{F}_2)}$
such that $Res_\pi^\kappa(A\cup{B})\not=0$.
However if $r=s$ then $U\cup{H^1(\pi;\mathbb{F}_2)}=\langle{U^2}\rangle$.
The image of $U^3$ in $H^3(M_{st};\mathbb{F}_2)$ is 0,
since $H^3(RP^2;\mathbb{F}_2)=0$.
Therefore $U^3$ also has image 0 in $H^3(M_{st}^\tau;\mathbb{F}_2)$,
by the Corollary to Theorem 8.
(Can we see this for any 4-manifold $M$ with $\widetilde{M}\simeq{S^2}$
more directly, without invoking Theorem 12?)
\begin{theorem}
Let $p:M\to{B}$ be an $S^2$-orbifold bundle, and suppose that
$\Sigma{B}\not=\emptyset$.
\begin{enumerate}
\item{If} $Res_\pi^\kappa(A)^2=0$ for all $A\in{H^1(\pi;\mathbb{F}_2)}$
and $B$ has a reflector curve then $v_2(M)=0$;
\item{if} $Res_\pi^\kappa(A)^2=0$ for all $A\in{H^1(\pi;\mathbb{F}_2)}$
and $B$ has a cone point of order $2$ then $v_2(M)=U^2$;
\item{if} $Res_\pi^\kappa(A)^2\not=0$ for some $A\in{H^1(\pi;\mathbb{F}_2)}$
then $v_2(M)=UW$.
\end{enumerate}
\end{theorem}
\begin{proof}
If $A\in{H^1(\pi;\mathbb{F}_2)}$ then $U^2\cup{UA}=0$, since $U^3=0$.
Let $\sigma:RP^2\to{M}$ be an exceptional fibre.
Then $U^2(\sigma_*[RP^2])=1$,
and so $H^2(\pi;\mathbb{F}_2)$ is generated by
$U\cup{H^1(\pi;\mathbb{F}_2)}$ and
the Poincar\'e dual $\phi$ of $\sigma_*[RP^2]$, by Lemma 14.
(In particular,
$\phi$ has nonzero restriction to $H^2(\kappa;\mathbb{F}_2)$.)
If $\sigma$ is a fibre over a point on a reflector curve of $B$
then it has self-intersection 0, and $\phi^2=0$.
If $\sigma$ is a fibre over a cone point of order 2
it has a regular neighbourhood isomorphic to $E(2)$.
Let $\sigma_t[\pm{s}]=[s,(tx,ty)]\in{E(2)}$
for $s=(x,y,z)\in{S^2}$ and $|t|<1$.
Then $\sigma=\sigma_0$ and $\sigma_t$ is an isotopy of
embeddings with $\sigma_t.\sigma_0=1$ if $t\not=0$.
In this case $\sigma$ has self-intersection 1, and $\phi^2\not=0$.
Suppose that $Res_\pi^\kappa(A^2)=0$
for all $A\in{H^1(\pi;\mathbb{F}_2)}$.
Then $v_2(M)=0$ (if $\phi^2=0$) or $U^2$ (otherwise),
by the nonsingularity of Poincar\'e duality.
The first two assertions now follow.
On the other hand,
if there is an $A\in{H^1(\pi;\mathbb{F}_2)}$ with
$Res_\pi^\kappa(A)^2\not=0$ then $H^2(\pi;\mathbb{F}_2)$
is generated by $U\cup{H^1(\pi;\mathbb{F}_2)}$ and $A^2$.
Since $Res_\pi^\kappa(W)=w_1(\kappa)$,
we then have $Res_\pi^\kappa(A)^2=Res_\pi^\kappa(AW)$.
In particular, $w_1(\kappa)\not=0$, and so $W\not=0$ and $W\not=U$.
Poincar\'e duality now gives
$v_2(M)=UW+\delta{U^2}$, where $\delta=0$ (if $A^3=0$
or if $UA^3$ and $A^4$ are nonzero) or 1 (otherwise).
We may determine $\delta$ by passing to suitable 2-fold covers.
If $B$ has a reflector curve then so does the 2-fold cover $B^+$
associated to $\mathrm{ker}(W)$, and so
$v_2(M^+)=\delta{U^2}$ must be 0, by (1).
If $B$ has cone points we consider instead the
covering spaces $M_V$ and $B_V$ on which $U=W$.
The orbifold $B_V$ now has cone points, and so $v_2(M_V)=(1+\delta)U^2$
must be $U^2$, by (2).
In each case $\delta$ must be 0, and so $v_2(M)=UW$.
\end{proof}
If $\kappa$ is orientable then $Res_\pi^\kappa(A^2)=0$
for all $A\in{H^1(\pi;\mathbb{F}_2)}$.
However the converse is false: if $\pi=Z*_ZD=\pi^{orb}(P(2,2))$
then $\kappa=\pi_1(Kb)$ is non-orientable but
$Res_\pi^\kappa(A^2)=0$ for all $A\in{H^1(\pi;\mathbb{F}_2)}$.
Is it easy to see directly that if $B$
has both cone points and reflector curves then
this condition does not hold?
Whereas regular fibres in an $S^2$-orbifold bundle
over a connected base are isotopic,
exceptional fibres over distinct components
of the singular locus of $B$ are usually not even homologous.
An arc $\gamma$ in $B$ connecting two such components is
in fact a reflector interval, and the
restriction of the fibration over $\gamma$
has total space $RP^3\#RP^3$.
The fibres over the reflector points represent independent
generators of $H_2(RP^3\#RP^3;\mathbb{F}_2)$.
Thus it should not be surprising that fibres over reflector curves have
self-intersection 0,
whereas fibres over cone points have self-intersection 1.
The calculation of $v_2(M)$ when $\pi=(Z\oplus(Z/2Z))*_ZD$
given in Theorem 10.16 of \cite{Hi} is wrong.
(In fact $Res_\pi^\kappa(S^2)\not=0$, in the notation of \cite{Hi}.)
\section{$\mathbb{S}^2\times\mathbb{E}^2$-manifolds}
In this section we shall assume that $M$ is a closed 4-manifold with
$\chi(M)=0$ and $\pi_2(M)\cong{Z}$
(equivalently, that $\pi$ is virtually $Z^2$).
In Chapter 10 of \cite{Hi} it is shown that there are between
21 and 24 possible homotopy types of such 4-manifolds.
Ten are total spaces of $S^2$-bundles over $T$ or $Kb$,
four are total spaces of $RP^2$-bundles,
and four are mapping tori of self-homeomorphisms of $RP^3\#{RP^3}$.
These bundle spaces are all $\mathbb{S}^2\times\mathbb{E}^2$-manifolds,
and their homotopy types are detected by the fundamental groups
and Stiefel-Whitney classes.
The uncertainty relates to the three possible fundamental groups
with finite abelianization.
In each case, the action is unique up to an automorphism of the group.
There is one geometric manifold for each of the groups $D*_ZD$
and $(Z\oplus(Z/2Z))*_ZD$, and two for $Z*_ZD$.
By Theorem 13 there is one other (non-geometric)
orbifold bundle over $S(2,2,2,2)$ (with group $D*_ZD$),
and these five homotopy types are distinct.
Thus there are in fact 23 homotopy types in all.
If $M$ is an orbifold bundle over a flat base then
it follows from Lemma 2 that either
\begin{enumerate}
\item $M$ is an $S^2$- or $RP^2$-bundle over $T$ or $Kb$; or
\item $B=\mathbb{A}$ or $\mathbb{M}b$; or
\item $B=S(2,2,2,2)$, $P(2,2)$ or $\mathbb{D}(2,2)$.
\end{enumerate}
There are two $S^2$-orbifold bundles with base
$S(2,2,2,2)=D(2,2)\cup{D(2,2)}$.
The double of $E(2,2)$ is geometric,
whereas $E(2,2)\cup_\tau{E(2,2)}$ is not.
There is just one $S^2$-orbifold bundle with base $\mathbb{D}(2,2)$.
It has geometric total space.
The orbifold $P(2,2)=D(2,2)\cup{Mb}$ is the quotient of
the plane $\mathbb{R}^2$ by the group of euclidean isometries
generated by the glide-reflection $t=(\frac12\mathbf{j},
\left(\begin{smallmatrix}
-1&0\\
0&1
\end{smallmatrix}\right) ) $ and the rotation
$x=(\frac12(\mathbf{i}+\mathbf{j}),-I)$.
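As a check on this description (writing an isometry $(v,A)$ for
the map $w\mapsto{Aw+v}$), a direct computation gives
\[t^2=(\mathbf{j},I),\qquad x^2=\mathrm{id},
\qquad (xt)^2=(\mathbf{i},I),\]
so $t$ is a glide-reflection, $x$ is a rotation of order $2$,
and $t^2$ and $(xt)^2$ are the translations by $\mathbf{j}$ and $\mathbf{i}$.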
There are two $S^2$-orbifold bundles with base $P(2,2)$.
If we fix identifications of $\partial{Mb}$ with $S^1$ and
$\partial{E(2,2)}$ with $S^2\times{S^1}$ then one has total space
$M=E(2,2)\cup{S^2}\times{Mb}$ and the other has total space
$M'=E(2,2)\cup_\tau{S^2}\times{Mb}$.
(The bundles with total space $E(2,2)\cup_{(\tau)}S^2\tilde\times{Mb}$
are each equivalent to one of these via the automorphism of the base induced
by reflection of $\mathbb{R}^2$ across the principal diagonal.)
The total spaces of these two $S^2$-orbifold bundles are the
two affinely distinct $\mathbb{S}^2\times\mathbb{E}^2$-manifolds
with fundamental group $Z*_ZD$.
Let $T=(\theta,t)$ and $X=(a,x)$, where $\theta=\pm1\in{S^1}$.
(Equivalently, $\theta=I_3$ or $R_\pi=diag[-1,-1,1]\in{O(3)}$.)
Then $\{t,x\}$ generates a free, discrete, cocompact isometric action
of $Z*_ZD$ on $S^2\times{R}^2$.
The subgroup $\kappa\cong{Z}\rtimes_{-1}Z$ is generated by $T$ and $(XT)^2$.
These manifolds are not homotopy equivalent, by Theorem 12.
\section{surgery}
If $\pi$ is a surface group or has a surface group as an
index-2 subgroup then $Wh(\pi)=0$, by Theorem 6.1 of \cite{Hi}.
Therefore homotopy equivalences of manifolds with such
fundamental groups are simple.
Let $M$ be a closed 4-manifold with $\pi_2(M)\cong{Z}$ and $\chi=0$.
Then there are nine possibilities for $\pi$.
The relevant surgery obstruction groups
can be computed (or shown to be not finitely generated) in most cases,
via the Shaneson-Wall exact sequences and the results of \cite{CD}
on $L_n(D,w)$.
L\"uck has settled the one case in which such reductions do not easily apply
\cite{Lu10}.
(The groups $L(\pi)\otimes\mathbb{Z}[\frac12]$ are computed for all
aspherical 2-orbifold groups $\pi$ when $w$ is trivial in \cite{LS00}.)
If $\pi\cong{Z^2}$ or $Z\rtimes_{-1}Z$ then $M$ is homeomorphic
to the total space of an $S^2$-bundle over $T$ or $Kb$.
(See Theorem 6.11 of \cite{Hi}.)
If $\pi\cong{Z^2\times{Z/2Z}}$ then $|S_{TOP}(M)|=8$,
while if $\pi\cong{Z\rtimes_{-1}Z}\times{Z/2Z}$
then $8\leq|S_{TOP}(M)|\leq32$.
(See Theorems 6.13 and 6.14 of \cite{Hi}.)
If $\pi\cong{D}\times{Z^-}$ then $L_1(\pi,w)=0$ and $|S_{TOP}(M)|\leq16$.
In each of the remaining cases the structure sets are infinite.
Let $\sigma$ be the automorphism of $D=Z/2Z*Z/2Z$
which interchanges the factors.
Let $I_\pi:\pi/\pi'\to{L_1(\pi)}$ be the natural transformation described in
\S6.2 of \cite{Hi}. Then we have
\begin{enumerate}
\item $L_1(D\times{Z})\cong{L_1(D)}\cong{Z^3}$ \cite{CD}.
The direct summand $L_1(Z)\cong{Z}$ is the image of $I_\pi$.
\item $L_1(D\rtimes_\sigma{Z})\cong\mathrm{Ker}(1-L_0(\sigma))\cong{Z^2}$.
The direct summand $L_1(Z)\cong{Z}$ is the image of $I_\pi$.
\item $L_1(D\rtimes_\sigma{Z^-},w)\cong\mathrm{Ker}(1+L_0(\sigma))\cong{Z}$.
\item $D*_ZD$ retracts onto $D(-,-)=Z/2Z^-*Z/2Z^-$,
compatibly with $w$. Hence $L_1(\pi,w)$ is not finitely generated \cite{CD}.
\item $(Z\oplus(Z/2Z))*_ZD$ retracts onto $D(-,-)=Z/2Z^-*Z/2Z^-$,
compatibly with $w$. Hence $L_1(\pi,w)$ is not finitely generated \cite{CD}.
\item
$L_1(Z*_ZD,w)$ has an infinite $UNil$ summand,
of exponent 4 \cite{Lu10}.
(However $Z*_ZD$ does not surject to $D$.)
\end{enumerate}
In order to estimate the number of homeomorphism types
within each homotopy type we must consider the
actions of the groups $E(M)$ of homotopy classes of
self-homotopy equivalences.
(The image of $I_\pi$ acts trivially in $S_{TOP}(M)$,
by Theorem 6.7 of \cite{Hi}.)
Let $M$ be a closed 4-manifold with $\widetilde{M}\simeq{S^2}$.
As observed above, if $M$ is the total space of an orbifold bundle
then $Aut(\pi)$ and $Aut(\pi_2(M))$ act on $M$ via homeomorphisms.
Thus in order to understand the action of $E(M)$ on $S_{TOP}(M)$
it is sufficient to consider the action of the subgroup $K_\pi(M)$
of self-homotopy equivalences which induce the identity
on $\pi$ and $\pi_2(M)$.
(Note also that if $f:M\to{M}$ is a self-map such that $\pi_2(f)=id$ then
lifts of $f$ to $\widetilde{M}$ are homotopic to the identity,
and so $\pi_k(f)=id$ for all $k\geq2$.)
We may assume that $M_o=M\setminus{intD^4}$ is homotopy equivalent
to a 3-complex.
Fix a basepoint $*\in{M_o}$.
Let $P_3(M)=M\cup{e^{\geq5}}$
be the 3-stage of the Postnikov tower for $M$.
(Thus $\pi_i(M)\cong\pi_i(P_3(M))$ for $i\leq3$ and $\pi_j(P_3(M))=0$ for all $j>3$).
If $(X,*)$ is a based space let $E_*(X)$ be the group
of based homotopy classes of based self-homotopy equivalences.
If $f\in{E_*(M)}$ is in the kernel of the natural homomorphism from
$E_*(M)$ to $E_*(P_3(M))$ then we may assume that $f|_{M_o}$ is the identity,
by cellular approximation.
Thus $f$ differs from $id_M$ by at most a pinch map corresponding to
$\eta{S\eta}\in\pi_4(\widetilde{M})=Z/2Z$.
Let $K_\#$ be the kernel of the natural homomorphism
from $E_*(P_3(M))$ to $\Pi_{j\leq3}{Aut(\pi_j)}$.
Let $P=P_2(M)$ be the 2-stage of the Postnikov tower for $M$.
Then $K_\#(M)$ maps onto $K_\#$, with kernel of order $\leq2$.
There is an exact sequence
\begin{equation*}
\begin{CD}
H^1(\pi;Z^u)@>\Delta>>{H^3(P;\mathbb{Z})}\to{K_\#}
\to{H^2(\pi;Z^u)}@>\rho>>{H^3(P;\mathbb{Z})},
\end{CD}
\end{equation*}
and the image of ${H^3(P;\mathbb{Z})}$ under the second homomorphism is central.
The homomorphism $\Delta$ involves the second $k$-invariant
$k_2(M)\in{H^4(P;\mathbb{Z})}$
and factors through the finite group $H^3(\pi;\mathbb{Z})$.
The kernel of $\rho$ is the isotropy subgroup of $k_2(M)$
under the action of $H^2(\pi;Z^u)$ on $P$.
(See Corollary 2.9 of \cite{Ru92}.)
Since $v.c.d.\pi=2$, spectral sequence arguments show that
$H^i(\pi;Z^u)$ is commensurable
with $H^0(Z/2Z;H^i(\kappa;\mathbb{Z})\otimes{Z^u})$, for all $i$,
and $H^3(P;\mathbb{Z})$ is commensurable with $H^1(\pi;\mathbb{Z})$.
Thus $K_\#$ is a finitely generated, nilpotent group.
In particular, if $\pi/\pi'$ is finite then $K_\#$ is finite,
and so there are infinitely many homeomorphism types within
each such homotopy type.
However, if $\pi\cong{D}\times{Z}$ or $D\rtimes{Z}$ then $K_\#$ is infinite,
and it is not clear how this group acts on $S_{TOP}(M)$.
\section{surface bundles over $RP^2$}
Let $F$ be a closed aspherical surface and
$p:M\to{RP^2}$ be a bundle with fibre $F$,
and such that $\pi_2(M)\cong{Z}$.
(This condition is automatic if $\chi(F)<0$.)
Then $\pi=\pi_1(M)$ acts nontrivially on $\pi_2(M)$.
The covering space $M_\kappa$ associated to the kernel $\kappa$
of the action is an $F$-bundle over $S^2$,
and so $M_\kappa\cong{S^2}\times{F}$,
since all such bundles are trivial.
The projection admits a section if and only if $\pi\cong\kappa\rtimes{Z/2Z}$.
The product $RP^2\times{F}$ is easily characterized.
\begin{theorem}
Let $M$ be a closed $4$-manifold with fundamental group $\pi$,
and let $F$ be an aspherical closed surface.
Then the following are equivalent.
\begin{enumerate}
\item$M\simeq{RP^2}\times{F}$;
\item$\pi\cong{Z/2Z}\times\pi_1(F)$, $\chi(M)=\chi(F)$ and $v_2(M)=0$;
\item$\pi\cong{Z/2Z}\times\pi_1(F)$, $\chi(M)=\chi(F)$ and $M\simeq{E}$,
where $E$ is the total space of an $F$-bundle over $RP^2$.
\end{enumerate}
\end{theorem}
\begin{proof}
Clearly $(1)\Rightarrow(2)$ and (3).
If (2) holds then $M$ is homotopy equivalent to the total space of an
$RP^2$-bundle over $F$, by Theorem 5.16 of \cite{Hi}.
This bundle must be trivial since $v_2(M)=0$.
If (3) holds then there are maps $q:M\to{F}$ and $p:M\to{RP^2}$
such that $\pi_1(p)$ and $\pi_1(q)$ are the projections of $\pi$ onto its
factors and $\pi_2(p)$ is surjective.
The map $(p,q):M\to{RP^2}\times{F}$ is then a homotopy equivalence.
\end{proof}
The implication $(3)\Rightarrow(1)$ fails if $F=RP^2$ or $S^2$.
We may assume henceforth that $\pi$ is not a product.
The fixed points of an involution of an orientable surface must be all cone
points (if the involution is orientation-preserving)
or all on reflector curves (if the involution is orientation-reversing).
\begin{theorem}
A closed orientable $4$-manifold $M$ is homotopy equivalent to
the total space of an $F$-bundle over $RP^2$ with a section
if and only if $\pi=\pi_1(M)$ has an element of order $2$,
$\pi_2(M)\cong{Z}$ and
$\kappa=\mathrm{Ker}(u)\cong\pi_1(F)$,
where $u$ is the natural action of $\pi$ on $\pi_2(M)$.
\end{theorem}
\begin{proof}
The conditions are clearly necessary.
If they hold, then $M$ is homotopy equivalent to an $S^2$-orbifold bundle space
(since it is not homotopy equivalent to an $RP^2$-bundle space).
The base orbifold must have a reflector curve,
by Lemma 2.
Therefore $M\simeq{M_{st}}$,
which is the total space of an $F$-bundle over $RP^2$ with a section.
\end{proof}
Orientability is used here mainly to ensure that
the base orbifold has a reflector curve.
When $\pi$ is torsion-free $M$ is homotopy equivalent to
the total space of an $S^2$-bundle over a surface $B$,
with $\pi=\pi_1(B)$ acting nontrivially on the fibre.
Inspection of the geometric models for such bundle spaces
shows that if also $v_2(M)\not=0$ then the bundle space fibres over $RP^2$.
(See Theorems 10.8 and 10.9 of \cite{Hi}.)
Is the condition $v_2(M)\not=0$ necessary?
The standard $\mathbb{S}^2\times\mathbb{E}^2$-manifold with group $Z*_ZD$
fibres over $RP^2$, with fibre $Kb$.
Does the other example (constructed using $\theta=-1$)
also fibre over $RP^2$?
\section{Introduction}
At extremely high temperatures and densities, strongly interacting matter can undergo a phase transition
and form a hot, thermalized medium called the quark-gluon plasma (QGP), in which quarks and gluons are no longer confined but propagate over distances larger than the typical size of a hadron~\cite{Lee:1974ma,Collins:1974ky}. A few microseconds after the Big Bang, the QGP filled the whole early universe. As the universe expanded and cooled down, the primordial QGP went through a phase transition and formed hadrons, including protons and neutrons, the basic building blocks of our current visible world. The QGP can also be created at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC), where ultra-relativistic collisions of heavy ions allow us to achieve the extreme conditions needed for the QCD phase transitions and for the formation of the QGP~\cite{Lee:1974ma,Collins:1974ky,Baumgardt:1975qv}.
Since RHIC began running in 2000, strong evidence has gradually accumulated for the creation of the QGP in high energy nucleus-nucleus collisions~\cite{Arsene:2004fa,Back:2004je,Adams:2005dq,Adcox:2004mh,Gyulassy:2004vg,Muller:2006ee}. The observation of strong collective flow and its successful description by hydrodynamics reveal that the QGP is a strongly-coupled system and behaves like an almost perfect liquid~\cite{Gyulassy:2004vg,Muller:2006ee,Kolb:2003dz}. It was also realized that, since the nucleons inside the colliding nuclei constantly change their positions, the created QGP fireballs fluctuate event by event~\cite{Alver:2006wh,Miller:2003kd,Alver:2008zza}. The collective expansion of the hot systems transforms the initial spatial inhomogeneities and deformations into anisotropic momentum distributions of the final produced particles~\cite{Ollitrault:1992bk,Voloshin:1994mz}, which can be quantitatively evaluated with various flow observables~\cite{Voloshin:2008dg,Snellings:2011sz,Heinz:2013th,Gale:2013da,Song:2013gia,Luzum:2013yya,Jia:2014jca}.
For example, the elliptic flow $v_2$ is associated with the elliptic deformation of the initial fireball, the triangular flow $v_3$ is mainly controlled by the event-by-event fluctuations of the systems, and the quadrangular flow $v_4$ is driven by both the initial spatial deformation and the inhomogeneities of the created fireball~\cite{Alver:2010dn,ALICE:2011ab,Gardim:2011xv,ATLAS:2012at}. Besides these individual flow harmonics, other flow observables, such as $v_n$ in ultra-central collisions~\cite{Luzum:2012wu,CMS:2013bza}, the distributions of event-by-event flow harmonics~\cite{Aad:2013xma,Gale:2012rq}, the event-plane correlations~\cite{Aad:2014fla,Qiu:2012uy}, the correlations between different flow harmonics~\cite{Aad:2015lwa,ALICE:2016kpq,Giacalone:2016afq,Zhu:2016puf,Qian:2016pau}, and the de-correlation of the flow vector~\cite{Heinz:2013bua,Gardim:2012im,Khachatryan:2015oea}, have also been intensively measured and studied in high energy Pb--Pb collisions at the LHC. Together with sophisticated event-by-event simulations from hydrodynamic and hybrid models, these flow observables provide important information on the properties of the QGP fireball and help to constrain the initial conditions of the colliding systems~\cite{Voloshin:2008dg,Snellings:2011sz,Heinz:2013th,Luzum:2013yya,Gale:2013da,Jia:2014jca,Song:2013gia}.
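In practice, the flow coefficients are extracted from the azimuthal particle distributions via $Q$-vectors and multi-particle correlations. As a purely illustrative toy Monte Carlo (not any experiment's analysis code), the following sketch recovers an input $v_2=0.1$ from the two-particle cumulant $v_2\{2\}=\sqrt{\langle|Q_2|^2-M\rangle/\langle M(M-1)\rangle}$:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_event(mult, v2, psi2):
    """Sample azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2(phi - psi2))
    by rejection sampling (toy model, not experimental analysis code)."""
    phis = []
    while len(phis) < mult:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi2)):
            phis.append(phi)
    return np.array(phis)

# Two-particle cumulant v2{2} from Q-vectors, averaged over events.
num, den = 0.0, 0.0
for _ in range(200):
    phis = sample_event(mult=500, v2=0.10, psi2=rng.uniform(0.0, 2.0 * np.pi))
    m = len(phis)
    q2 = np.sum(np.exp(2j * phis))
    num += abs(q2) ** 2 - m          # |Q2|^2 - M removes self-correlations
    den += m * (m - 1)
v2_est = np.sqrt(num / den)          # recovers the input v2 = 0.10
```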
The measurements of azimuthal correlations in small systems, e.g. in p--Pb and p--p collisions at the LHC, were originally aimed at providing reference data for the high-energy nucleus-nucleus collisions. However, many unexpected phenomena were discovered in experiments, which indicate the development of collective flow in these small systems. At LHC energies, the multiplicities in ``ultra-central'' p--Pb and p--p collisions are comparable to those in peripheral Pb--Pb collisions, so the final state interactions may become sufficient to develop collective expansion. A comparison of the
two-particle correlations in high multiplicity p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV and in peripheral Pb--Pb collisions at $\sqrt{s_{\rm NN}}=$ 2.76 TeV shows surprisingly similar correlation structures for events with similar multiplicity cuts~\cite{CMS:2012qk,Abelev:2012ola,Aad:2013fja,Khachatryan:2015waa}. Besides, a change of sign of the 4-particle cumulants~\cite{Aad:2013fja,Abelev:2014mda,Khachatryan:2015waa}, a $v_{2}$ mass-ordering of identified hadrons~\cite{ABELEV:2013wsa,Khachatryan:2014jra} and other flow-like signals have also been observed in high multiplicity p--Pb collisions. The related hydrodynamic simulations have successfully reproduced many of these experimental data, which strongly supports the observation of collective flow in high-multiplicity p--Pb collisions~\cite{Bozek:2011if,Bozek:2012gr,Bozek:2013ska,Bzdak:2013zma,Qin:2013bha,Werner:2013ipa,Schenke:2014zha}. For p--p collisions at $\sqrt{s_{\rm NN}}=$ 7 TeV and 13 TeV, similar results, but with smaller magnitudes, have been obtained for many of these flow-like observables~\cite{Khachatryan:2010gv,Li:2012hc,Aad:2015gqa,Khachatryan:2015lva,Khachatryan:2016txc,Dusling:2015gta}. Although these measurements may be associated with collective expansion in the small p--p systems, more detailed investigations are still needed to further understand the underlying physics.
In this paper, we will review the recent progress on collective flow and hydrodynamics in large and small systems at the LHC. In Sec.~II and Sec.~III, we will introduce hydrodynamics, hybrid models and flow measurements. In Sec.~IV, we will review recent progress on extracting the QGP viscosity from the flow data at the LHC. In Sec.~V, we will focus on initial state fluctuations and final state correlations in Pb--Pb collisions at 2.76 A TeV. In Sec.~VI, we will review the correlations and collective flow in small systems. Sec.~VII will briefly summarize and conclude this paper.
\section{Hydrodynamics and hybrid model}
\subsection{Viscous hydrodynamics}
\quad Viscous hydrodynamics is a widely used tool to describe the expansion of the QGP fireball and to study the soft hadron data for the heavy ion collisions at RHIC and the LHC~\cite{Teaney:2009qa,Romatschke:2009im,Huovinen:2013wma,Heinz:2013th,Gale:2013da,Song:2013gia,Romatschke:2007mq,Luzum:2008cw,
Song:2007fn,Song:2007ux,Song:2009gc,Dusling:2007gi,Molnar:2008xj,Bozek:2009dw,Chaudhuri:2009hj,
Schenke:2010rr}. It solves the transport equations for the energy-momentum tensor and the net charge current, which are written as
\begin{subequations}
\begin{eqnarray}
&& \partial_\mu T^{\mu \nu}(x)=0, \\
&& \partial_\mu N^{\mu} (x)=0\, .
\end{eqnarray}
\end{subequations}
If the system is very close to local equilibrium, the energy momentum tensor and the net baryon charge current can be decomposed as $T^{\mu \nu}=(e+p) u^{\mu}u^{\nu}-p g^{\mu\nu}$ and $N^{\mu}=nu^{\mu}$. The fourteen variables in $T^{\mu\nu}$ and $N^{\mu}$ are thus reduced to six independent unknowns: the energy density $e$, the pressure $p$, the net baryon density $n$, and the three independent components of the four-velocity $u^\mu$. Relativistic hydrodynamics is then simplified to ideal hydrodynamics. With an additional input, the equation of state (EoS) $p=p(n,e)$, and chosen initial and final conditions, the ideal hydrodynamic equations can be solved numerically to simulate the evolution of the bulk matter in relativistic heavy ion collisions~\cite{Kolb:2003dz}.
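In the simplest boost-invariant (Bjorken) limit, the ideal-hydrodynamic conservation law reduces to $de/d\tau=-(e+p)/\tau$, which for a constant speed of sound $p=c_s^2 e$ has the analytic solution $e(\tau)=e_0(\tau_0/\tau)^{1+c_s^2}$. The sketch below (an illustrative example with schematic numbers, not taken from the text) integrates this equation numerically and checks it against the analytic result:

```python
import numpy as np

# Bjorken (0+1D) ideal hydrodynamics: de/dtau = -(e + p)/tau.  With a
# constant speed of sound, p = cs2*e, the analytic solution is
# e(tau) = e0*(tau0/tau)**(1 + cs2).  Numbers below are illustrative only.
cs2, e0, tau0, tau1 = 1.0 / 3.0, 30.0, 0.6, 10.0   # GeV/fm^3 and fm/c

def rhs(tau, e):
    return -(1.0 + cs2) * e / tau

# Classical fourth-order Runge-Kutta integration.
n = 10000
h = (tau1 - tau0) / n
tau, e = tau0, e0
for _ in range(n):
    k1 = rhs(tau, e)
    k2 = rhs(tau + 0.5 * h, e + 0.5 * h * k1)
    k3 = rhs(tau + 0.5 * h, e + 0.5 * h * k2)
    k4 = rhs(tau + h, e + h * k3)
    e += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    tau += h

e_analytic = e0 * (tau0 / tau1) ** (1.0 + cs2)
```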
For a near-equilibrium system, one needs to implement relativistic viscous hydrodynamics (the so-called relativistic dissipative fluid dynamics). In the Landau frame, $T^{\mu\nu}$ and $N^{\mu}$ are expressed as $T^{\mu \nu}=(e+p+\Pi) u^{\mu}u^{\nu}-(p +\Pi)g^{\mu\nu}+\pi^{\mu \nu}$ and $N^{\mu}=nu^{\mu}-\frac{n}{e+p}q^{\mu}$. Here, $\pi^{\mu \nu}$ is the shear stress tensor, $\Pi$ is the bulk pressure and $q^{\mu}$ is the heat flow. From the second law of thermodynamics or from kinetic theory, one obtains the evolution equations for $\pi^{\mu \nu}$, $\Pi$ and $q^\mu$, which are expressed as~\cite{Israel:1976tn,Muronga:2004sf}:
\begin{subequations}
\begin{eqnarray}
&&\Delta ^{\mu \alpha} \Delta ^{\nu \beta}\dot{ \pi}_{\alpha \beta}
=-\frac{1}{\tau_{\pi}}\bigg[\pi^{\mu\nu}-2\eta \nabla ^{\langle \mu}u
^{\nu\rangle}-l_{\pi q} \nabla^{\langle\mu} q^{\nu\rangle}
+ \pi^{\mu\nu} \eta T
\partial_{\alpha} \big( \frac{\tau_\pi u^{\alpha}}{2 \eta T} \big)
\bigg], \ \ \ \\
&&\dot{\Pi}=-\frac{1}{\tau_{\Pi}}\bigg[\Pi+\zeta \theta -l_{\Pi q}
\nabla_{\mu} q^{\mu}+\Pi \zeta T
\partial_{\mu} \big( \frac{\tau_\Pi u^{\mu}}{2\zeta T} \big) \bigg],
\quad \\
&&\Delta^{\mu}_{\nu}\dot{q}^{\nu}=-\frac{1}{\tau_{q}}\bigg[q^{\mu}+\lambda
\frac{n T^2}{e+p}\nabla ^{\mu}\frac{\nu}{T} + l_{q \pi}\nabla_\nu
\pi^{\mu\nu}
+ l_{q \Pi}\nabla^\mu \Pi - \lambda T^2 q^{\mu}
\partial_{\alpha}\big( \frac{\tau_q u^{\alpha}}{2\lambda T^2}
\big)\bigg],
\label{dissi-trans-b}
\end{eqnarray}
\end{subequations}
where $\Delta^{\mu\nu} =g^{\mu\nu}{-}u^\mu u^\nu$,
$\nabla ^{\langle \mu}u ^{\nu\rangle}=\frac{1}{2}(\nabla^\mu u^\nu+\nabla^\nu
u^\mu)-\frac{1}{3}\Delta^{\mu\nu}\partial_\alpha u^\alpha$ and
$\theta=\partial \cdot u$. $\eta$ is the shear viscosity, $\zeta$ is the bulk viscosity, $\lambda$ is the heat conductivity,
and $\tau_{\pi}$, $\tau_{\Pi}$ and $\tau_{q}$ are the corresponding relaxation times.
The above Israel-Stewart formalism can also be obtained from the kinetic
theory~\cite{Baier:2006um,Baier:2007ix,Betz:2008me,Denicol:2012cn,Denicol:2012es} or from
the conformal symmetry constraints~\cite{Baier:2007ix}~\footnote{The traditional second-order viscous hydrodynamics works for a near-equilibrium system with an isotropic momentum distribution. It cannot be applied to an anisotropic system at very early times~\cite{Martinez:2010sc,Florkowski:2010cf,Jeon:2016uym} or to a correlated fluctuating system near the QCD critical point~\cite{Stephanov:2008qz,Stephanov:2011pb,Jiang:2015hri,Jiang:2015cnt}, where the traditional expansion of the microscopic distribution function fails. For recent developments on anisotropic hydrodynamics or chiral hydrodynamics, please refer to~\cite{Martinez:2010sc,Florkowski:2010cf,Jeon:2016uym,Martinez:2012tu,Florkowski:2013lza,Ryblewski:2012rr,
Bazow:2013ifa,Bazow:2015cha,Strickland:2016ezq} and~\cite{Paech:2003fe,Nahrgang:2011mg,Nahrgang:2011mv,Herold:2013bi,Herold:2016uvv}.}.
These different derivations give different higher order terms for the second-order viscous equations. In general, the contributions of the higher order terms are rather small or even negligible for a hydrodynamic evolution with small shear and bulk viscosities, and they do not significantly influence the final flow observables~\footnote{Note that, to obtain good agreement with the microscopic kinetic theory, a proper resummation of the irreducible moments is essential
for the computation of the transport coefficients, especially for a fluid dynamics with heat flow included. Please refer to~\cite{Denicol:2012vq} for details.}.
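A common way to build intuition for these relaxation-type equations is their 0+1D boost-invariant reduction. The sketch below uses a deliberately simplified Maxwell-Cattaneo form with constant $\eta$ and $\tau_\pi$ (schematic values, assumptions of this sketch rather than anything from the text): the shear stress relaxes toward its Navier-Stokes value $4\eta/3\tau$ and slows the cooling relative to the ideal case.

```python
import numpy as np

# 0+1D boost-invariant Israel-Stewart toy model with p = e/3 and, for
# simplicity, constant transport coefficients (schematic values):
#   de/dtau   = -(e + p - Phi)/tau
#   dPhi/dtau = -(Phi - 4*eta/(3*tau))/tau_pi
eta, tau_pi = 1.0, 0.5            # schematic, in units GeV/fm^2 and fm/c
e0, tau0, tau1 = 30.0, 0.6, 10.0  # GeV/fm^3 and fm/c

def evolve(viscous):
    tau, e, phi = tau0, e0, 0.0
    n = 20000
    h = (tau1 - tau0) / n
    for _ in range(n):            # simple Euler stepping is enough here
        de = -(e + e / 3.0 - phi) / tau
        dphi = -(phi - 4.0 * eta / (3.0 * tau)) / tau_pi if viscous else 0.0
        e, phi, tau = e + h * de, phi + h * dphi, tau + h
    return e, phi

e_visc, phi_f = evolve(viscous=True)
e_ideal, _ = evolve(viscous=False)
# The shear stress stays positive and viscous entropy production makes the
# system cool more slowly than in the ideal case.
```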
\vspace{0.2cm}
\underline{\emph{The equations of state (EoS)}}:
\vspace{0.10cm}
Besides these hydrodynamic equations, one needs to input an EoS to close the system for numerical simulations or analytical solutions. Currently, many groups use a state-of-the-art EoS, called s95p-PCE, which combines a parameterized lattice EoS for the baryon-free QGP phase with a hadronic EoS with effective chemical potentials for
the partially chemically equilibrated hadronic phase~\cite{Huovinen:2009yb,Shen:2010uy}.
Ref.~\cite{Huovinen:2009yb} also compared hydrodynamic calculations using various equations of state constructed with different speeds of sound, and found that the spectra and elliptic flow are only slightly influenced by the input EoS. The main uncertainties of the hydrodynamic calculations come from the initial conditions, which will be introduced and discussed below.
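In practice the EoS enters a hydrodynamic code as a table that is interpolated at run time, with the speed of sound $c_s^2=\partial p/\partial e$ obtained from the tabulated data. The toy example below (an ideal massless gas with a bag constant, NOT the s95p-PCE parameterization) illustrates the mechanics:

```python
import numpy as np

# A toy tabulated EoS: an ideal massless gas with a bag constant,
# p = (e - 4B)/3 (purely illustrative -- NOT the s95p-PCE parameterization).
B = 0.24                                   # GeV/fm^3, schematic bag constant
e_tab = np.linspace(1.0, 100.0, 400)       # GeV/fm^3
p_tab = (e_tab - 4.0 * B) / 3.0

def pressure(e):
    """Look up p(e) by interpolation, as a hydro code does in every cell."""
    return np.interp(e, e_tab, p_tab)

cs2_tab = np.gradient(p_tab, e_tab)        # speed of sound squared, dp/de
```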
\vspace{0.2cm}
\underline{\emph{Initial conditions}}:
\vspace{0.10cm}
The initial condition is a necessary input for hydrodynamic simulations. As an open issue related to the creation and thermalization of the QGP, it brings some uncertainties, more or less, to many flow observables in the hydrodynamic calculations. Many types of initial condition models have been developed by different groups. The traditional {\tt Glauber model} assumes zero transverse velocity at the starting time and constructs the initial entropy/energy density profiles from a mixture of the wounded nucleon and binary collision densities~\cite{Kolb:2000sd}. The {\tt KLN} model treats the degrees of freedom of the initial systems as gluons and calculates the initial density profiles from the $k_T$ factorization formula~\cite{Kharzeev:2000ph}. In the later developed Monte-Carlo versions ({\tt MC-Glauber} and {\tt MC-KLN})~\cite{Miller:2007ri,Drescher:2006ca,Hirano:2009ah}, the event-by-event fluctuations are built in through the position fluctuations of individual nucleons inside each colliding nucleus. For the {\tt AMPT} initial conditions, the initial profiles are constructed from the energy and momentum decompositions of individual partons, which fluctuate in both momentum and position space~\cite{Bhalerao:2015iya,Pang:2012he,Xu:2016hmp}. With an additional Gaussian smearing factor, the fluctuation scales related to the energy decompositions become adjustable, which helps to balance the initial eccentricities at different orders. As a successful initial condition model, {\tt IP-Glasma}~\cite{Schenke:2012fw} includes both the nucleon position fluctuations and the color charge fluctuations. It uses the IP-Sat model to generate the wave-functions of the high energy nuclei/nucleons and then implements classical Yang-Mills dynamics to simulate the pre-equilibrium evolution of the early glasma stage.
Another successful initial condition model, {\tt EKRT}~\cite{Paatelainen:2013eea,Niemi:2015qia}, combines pQCD minijet production with gluon saturation and generates the energy density profiles after a pre-equilibrium Bjorken free streaming. The recently developed {\tt T\raisebox{-.5ex}{R}ENTo} model~\cite{Moreland:2014oya} is a parametric initial condition model based on eikonal entropy deposition via a reduced thickness function. With an introduced entropy deposition parameter, the {\tt{T\raisebox{-.5ex}{R}ENTo}} model can reproduce the initial eccentricities of various initial condition models belonging to different classes, such as {\tt MC-KLN}, {\tt MC-Glauber}, {\tt EKRT} and {\tt IP-Glasma}.
Many initial condition models neglect the initial flow from the pre-equilibrium stage. Recently, the effects of pre-equilibrium evolution were estimated in Ref.~\cite{Liu:2015nwa} by evolving free streaming particles from the MC-Glauber and MC-KLN models, which demonstrated that such pre-equilibrium dynamics significantly increases the initial flow and reduces the initial spatial eccentricities. More sophisticated investigations of pre-equilibrium dynamics can, in principle, be carried out within the framework of dynamical models like {\tt EPOS}~\cite{Werner:2010ny}, {\tt AMPT}~\cite{Bhalerao:2015iya,Pang:2012he,Xu:2016hmp}, {\tt EKRT}~\cite{Paatelainen:2013eea,Niemi:2015qia}, {\tt IP-Glasma}~\cite{Schenke:2012fw} and {\tt URQMD}~\cite{Petersen:2009vx,Petersen:1900zz}. After matching the energy-momentum tensor at a switching point, one could in principle obtain 3+1-d fluctuating profiles of the initial energy density and initial flow for the succeeding hydrodynamic simulations. However, many past studies focus on the initial state fluctuations in the transverse plane and neglect the fluctuation patterns along the longitudinal direction. The AMPT + ideal hydrodynamic simulations of Ref.~\cite{Pang:2012he} demonstrate that evolving early hot spots in the longitudinal direction can dissipate part of the transverse energy, which leads to a suppression of the final flow anisotropy. Recently, the IP-Glasma model has been extended to three dimensions with the explicit small-$x$ evolution of the gluon distributions~\cite{Schenke:2016ksl}. Although the resulting energy momentum tensors can in principle be used in succeeding hydrodynamic simulations, additional work is still required to extend the distributions to the large rapidity regime with the consideration of large-$x$ effects.
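The basic steps of a Monte-Carlo Glauber construction can be sketched in a few lines: sample nucleon positions from a Woods-Saxon distribution, identify wounded nucleons by a geometric criterion, and compute the participant-plane eccentricities $\varepsilon_n=|\langle r^n e^{in\phi}\rangle|/\langle r^n\rangle$. The parameters below are schematic, and the Gaussian smearing of the sources is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal MC-Glauber sketch (schematic parameters, not a tuned model).
A, R, a = 208, 6.62, 0.546        # Pb: mass number, Woods-Saxon radius/diffuseness (fm)
sigma_nn = 6.4                    # fm^2 (~64 mb inelastic NN cross section)
d2_max = sigma_nn / np.pi         # wounding criterion: d^2 < sigma/pi

def sample_nucleus(x_shift):
    """Sample A nucleon positions from a Woods-Saxon density, keep the
    transverse (x, y) projection, and shift by half the impact parameter."""
    pos, n = np.empty((A, 2)), 0
    while n < A:                  # rejection sampling in a bounding cube
        v = rng.uniform(-3.0 * R, 3.0 * R, 3)
        if rng.uniform() < 1.0 / (1.0 + np.exp((np.linalg.norm(v) - R) / a)):
            pos[n] = v[:2]
            n += 1
    pos[:, 0] += x_shift
    return pos

nA, nB = sample_nucleus(-3.0), sample_nucleus(+3.0)   # impact parameter b = 6 fm
d2 = np.sum((nA[:, None, :] - nB[None, :, :]) ** 2, axis=-1)
part = np.vstack([nA[d2.min(axis=1) < d2_max], nB[d2.min(axis=0) < d2_max]])

# Participant-plane eccentricities eps_n = |<r^n e^{i n phi}>| / <r^n>.
z = (part[:, 0] - part[:, 0].mean()) + 1j * (part[:, 1] - part[:, 1].mean())
eps = {n: abs(np.mean(np.abs(z) ** n * np.exp(1j * n * np.angle(z))))
          / np.mean(np.abs(z) ** n)
       for n in (2, 3)}
```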
\vspace{0.2cm}
\underline{\emph{Freeze-out / decoupling }}:
\vspace{0.10cm}
Pure hydrodynamic simulations assume that free-streaming hadrons are directly emitted from a decoupling surface defined by a constant temperature, energy density or other kinetic variables~\cite{Kolb:2003dz,Teaney:2009qa}. The momentum distributions of the emitted thermal hadrons can be calculated with the Cooper-Frye formula~\cite{Cooper-Frye}, using the information on the freeze-out surface (for details of the Cooper-Frye formula, please refer to~\cite{Cooper-Frye,Kolb:2003dz} and the following Section II.B). With the corresponding decay channels, the unstable hadron resonances decay into stable ones, whose momentum distributions can be further analyzed and compared with the experimental data. In the constant temperature decoupling scenario, the decoupling temperature $T_{dec}$
strongly depends on the EoS and other hydrodynamic inputs. For s95p-PCE, $T_{dec}$ is generally set to 100-120 MeV in order to fit the mean $p_T$ of various hadrons with a sufficient build-up of radial flow~\cite{Kolb:2003dz,Shen:2010uy}.
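As a minimal illustration of the decoupling kinematics, the sketch below computes the mean $p_T$ of pions and protons emitted from a static thermal source at $T_{dec}=120$ MeV with a Boltzmann distribution. It ignores radial flow entirely, so it is not a blast-wave fit; it only shows how a Cooper-Frye-type momentum-space integral is evaluated and that heavier hadrons carry a larger thermal mean $p_T$:

```python
import numpy as np

T = 0.120  # GeV; a typical decoupling temperature for s95p-PCE

def mean_pt(mass):
    """Mean p_T for a static Boltzmann source, dN/d^3p ~ exp(-E/T).
    In (p_T, y) variables: dN/(dp_T dy) ~ p_T m_T cosh(y) exp(-m_T cosh(y)/T)."""
    pt = np.linspace(1e-3, 5.0, 2000)[:, None]      # GeV
    y = np.linspace(-8.0, 8.0, 801)[None, :]        # rapidity grid
    mt = np.sqrt(pt ** 2 + mass ** 2)
    w = pt * mt * np.cosh(y) * np.exp(-mt * np.cosh(y) / T)
    dndpt = w.sum(axis=1)                           # integrate over y (constants cancel)
    return float((pt[:, 0] * dndpt).sum() / dndpt.sum())

pt_pion, pt_proton = mean_pt(0.140), mean_pt(0.938)
# Even without radial flow the heavier hadron has a larger mean p_T;
# radial flow makes this mass ordering much stronger.
```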
\subsection{Hybrid models}
\quad A hybrid model matches the hydrodynamic description of the QGP fluid to a hadron cascade simulation for the evolution of the hadron resonance gas at a switching temperature near $T_c$. Early ideal hydrodynamics + hadron
cascade hybrid model simulations showed that the hadronic matter is highly viscous, which largely suppresses the elliptic flow when compared with pure hydrodynamic calculations using a partially chemically equilibrated EoS~\cite{Hirano:2005wx}. Motivated by this, different groups have extended 2+1-d or 3+1-d viscous hydrodynamics with a hadronic afterburner~\cite{Song:2010aq,Ryu:2012at,Karpenko:2013ama}. Such hybrid models give a more realistic description of the hadronic evolution of the QGP fireball, and naturally incorporate the off-equilibrium chemical and thermal freeze-out procedures of the various hadron species.
The key component of a hybrid model is the particle event generator that converts the hydrodynamic
output on the switching hyper-surface into various hadrons for the succeeding hadron cascade simulations.
More specifically, such a Monte Carlo event generator produces particles
with specific momentum and position information according to the differential Cooper-Frye
formula~\cite{Song:2010aq}:
\begin{eqnarray}
E\frac{d^3N_i}{d^3p}(x) &=& \frac{g_i}{(2\pi)^3}
p\cdot d^3\sigma(x)\, f_i(x,p)
\end{eqnarray}
where $f_i(x,p)$ is the distribution function of hadron species $i$,
$g_i$ is the corresponding degeneracy factor and
$d^3\sigma_\mu(x)$ is a surface element on the hyper-surface
$\Sigma$, e.g., defined by a constant switching temperature $T_{sw}$.
Generally, the switching temperature $T_{sw}$ is set to around 160 MeV,
which is close to the phase transition temperature of the QCD matter at zero
chemical potential~\cite{Ding:2015ona}. For a viscous QGP fluid,
the distribution function $f(x,p)$ includes an ideal part and an off-equilibrium part,
$f=f_0+\delta f$, where $\delta f$ generally takes the form
$\delta f=f_0 \bigl(1{\mp}f_0\bigr)\frac{p^\mu p^\nu \pi_{\mu\nu}}{2T^2\left(e{+}p\right)}$~\cite{
Romatschke:2007mq,Luzum:2008cw,Song:2007fn,Song:2007ux,Song:2009gc,Dusling:2007gi}
~\footnote{The full off-equilibrium distribution includes contributions from the shear stress tensor, the bulk pressure and the heat flow: $\delta f=\delta f_{shear}+ \delta f_{bulk}+ \delta f_{heat}$. For the bulk viscous correction, there
are several proposed forms of $\delta f_{bulk}$~\cite{Dusling:2011fd,Noronha-Hostler:2013gga}, which brings a certain amount of uncertainty to some related flow observables. Considering this complexity, as well as the negligible heat conductivity, one generally takes this simple form of $\delta f$ with only the shear viscous correction for viscous hydrodynamic and hybrid model calculations at top RHIC and LHC energies.}.
After converting the fluid into individual hadrons of various species, the hybrid model implements a hadron cascade model to simulate the
microscopic evolution of the hadron resonance gas. The hadron cascade model, for example
the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model~\cite{Bass:1998ca,Bleicher:1999xi}, solves a large set of
Boltzmann equations for various hadron species:
\begin{eqnarray}
\frac{d f_i(x,p)}{dt}= C_i(x,p)
\end{eqnarray}
where $f_i(x,p)$ is the distribution function and $C_i(x,p)$ is the collision term for hadron
species $i$. With such equations, the hadron cascade model propagates the various hadrons along
classical trajectories, together with elastic scatterings, inelastic scatterings and resonance decays.
After all collisions and decays cease, the system stops evolving and outputs the information
of the produced hadrons, which can be further analyzed and compared with the experimental data~\cite{Bass:1998ca,Bleicher:1999xi}.
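The elementary building block of such a cascade is an elastic $2\to2$ scattering: boost the pair to its center-of-mass frame, redistribute the momentum direction (isotropically, in this schematic sketch), and boost back, conserving four-momentum exactly. The example below uses massless kinematics for simplicity; it is an illustration, not UrQMD's actual collision routine:

```python
import numpy as np

rng = np.random.default_rng(1)

def boost(p, b):
    """Lorentz boost of a four-vector p = (E, px, py, pz) with velocity b."""
    g = 1.0 / np.sqrt(1.0 - b @ b)
    bp = b @ p[1:]
    return np.array([g * (p[0] - bp),
                     *(p[1:] + ((g - 1.0) * bp / (b @ b) - g * p[0]) * b)])

def elastic_scatter(p1, p2):
    """Schematic elastic 2->2 scattering of two massless particles:
    boost to the pair CM frame, pick an isotropic new direction, boost back."""
    ptot = p1 + p2
    beta = ptot[1:] / ptot[0]                 # CM velocity
    q = boost(p1, beta)                       # particle 1 in the CM frame
    cth = rng.uniform(-1.0, 1.0)              # isotropic scattering angle
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sth = np.sqrt(1.0 - cth ** 2)
    k = q[0] * np.array([sth * np.cos(phi), sth * np.sin(phi), cth])
    q1 = np.array([q[0], *k])                 # back-to-back in the CM frame
    q2 = np.array([q[0], *(-k)])
    return boost(q1, -beta), boost(q2, -beta)

p1 = np.array([2.0, 1.5, 0.5, np.sqrt(4.0 - 1.5 ** 2 - 0.5 ** 2)])
p2 = np.array([1.0, -0.3, 0.2, np.sqrt(1.0 - 0.3 ** 2 - 0.2 ** 2)])
q1, q2 = elastic_scatter(p1, p2)              # four-momentum is conserved
```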
Compared with pure hydrodynamic calculations, the hybrid model improves the description of the
hadronic evolution and the decoupling procedure, which leads to a nice description of the flow harmonics of identified hadrons, especially the mass-splitting between pions and protons~\cite{Song:2013qma,Heinz:2011kt}. Meanwhile, the
baryon-antibaryon annihilations in the hadronic cascade sector also largely reduce the production of
protons and antiprotons, which helps to achieve a nice fit of the particle yields of various
hadron species~\cite{Song:2013qma,Song:2012ua}.
\vspace{0.2cm}
\underline{\emph{2+1-d vs 3+1-d model}}:
\vspace{0.10cm}
For hydrodynamics or hybrid models, 2+1-d simulations with longitudinal boost invariance are more
computationally efficient than full 3+1-d simulations. Before 2010, many of the developed viscous hydrodynamic codes were (2+1)-dimensional, using the Bjorken approximation~\cite{Romatschke:2007mq,Luzum:2008cw,Song:2007fn,Song:2007ux,Song:2009gc,Dusling:2007gi,
Molnar:2008xj,Bozek:2009dw,Chaudhuri:2009hj}. The published VISHNU code is also basically a (2+1)-d hybrid code, since it implements (2+1)-d viscous hydrodynamic simulations for the evolution of the QGP phase. Although the succeeding UrQMD afterburner is (3+1)-dimensional, the longitudinal boost invariance is still approximately preserved at mid-rapidity after the hadronic evolution~\cite{Song:2010aq}. Recently, several groups~\cite{Schenke:2010rr,Bozek:2011ua,Vredevoogd:2012ui,Nonaka:2013uaa,DelZanna:2013eua,Karpenko:2013wva} have further developed full (3+1)-d viscous hydrodynamics or hybrid models without longitudinal boost invariance. Such full (3+1)-d simulations
provide full space-time evolution profiles for the electromagnetic and hard probes. They can also be widely used to investigate longitudinal fluctuations, to study the physics of asymmetric collision systems, such as p+Pb, d+Au and Cu+Au, and to provide more realistic calculations and predictions for heavy ion collisions at lower collision energies.
\subsection{Event-by-event simulations}
\quad As introduced in Sec.~II A, the initial profiles of the created QGP fireballs fluctuate event by event, which leads to
final state correlations and collective flow in nucleus--nucleus collisions at RHIC and the LHC~\cite{Alver:2006wh,Miller:2003kd,Alver:2008zza}. For computational efficiency, early hydrodynamic or hybrid model simulations input smooth initial profiles, obtained by averaging a large number of events generated from some specific fluctuating initial condition model, and then implement the so-called \texttt{single-shot simulations}. An alternative approach is the \texttt{event-by-event simulations}, which run a large number of simulations with individually fluctuating initial profiles as input. Past research has shown that, due to the approximately linear hydrodynamic response $v_2 \propto \varepsilon_2$ and $v_3 \propto \varepsilon_3$, the elliptic and triangular flow can be nicely described by single-shot hydrodynamic simulations with properly chosen initial conditions and well-tuned parameter sets.
However, such single-shot simulations fail to describe the higher-order flow harmonics due to mode-coupling effects. Furthermore, some flow observables, such as the event-by-event flow harmonics~\cite{Aad:2013xma,Gale:2012rq}, the event-plane correlations~\cite{Aad:2014fla,Qiu:2012uy}, and the correlations between different flow harmonics~\cite{Aad:2015lwa,Qian:2016pau,ALICE:2016kpq,Zhu:2016puf,Giacalone:2016afq}, cannot be directly calculated by single-shot hydrodynamic or hybrid models, which makes event-by-event simulations necessary (please also refer to Sec.~V for details).
Since 2010, many groups have developed event-by-event hydrodynamic / hybrid models to study the initial state fluctuations, the hydrodynamic response and the corresponding final state correlations~\cite{Schenke:2012fw,Gale:2012rq,Pang:2012he,Petersen:2010cw,Qin:2010pf,Holopainen:2010gz,Qiu:2011iv,Qiu:2012uy,
Shen:2014vra,Paatelainen:2013eea,Niemi:2015qia,Bhalerao:2015iya,Xu:2016hmp}.
In general, such event-by-event simulations are computationally expensive. For example, the iEBE-VISHNU simulations for the
correlations between flow harmonics used 30\,000 CPU hours at the Tianhe-1A National Supercomputing Center in Tianjin, China.
Recently, the OSU-Kent group developed massively parallel simulations of 3+1-d viscous hydrodynamics on graphics processing units with CUDA and demonstrated that such GPU simulations are approximately two orders of magnitude faster than the corresponding CPU simulations~\cite{Bazow:2016yra}. With the development of computer science and the decreasing cost of GPUs, GPU-based simulations will become a popular choice for massive hydrodynamic calculations in the future.
\section{Flow method}
\label{sec:method}
Anisotropic flow quantifies the anisotropy of the particle momentum distribution with respect to the
flow symmetry plane $\Psi_{n}$~\cite{Ollitrault:1992bk}. The various characteristic patterns of the anisotropic flow can be obtained from a Fourier expansion of the event-averaged azimuthal particle distribution~\cite{Voloshin:1994mz}:
\begin{equation}
\frac{{\rm d} N}{{\rm d} \varphi} \propto 1+ 2 \sum_{n=1}^{\infty} v_{n} \cos\left[ n(\varphi-\Psi_{n}) \right],
\end{equation}
where $v_{n} = \langle \cos\left[ n(\varphi - \Psi_n)\right] \rangle$ is the anisotropic flow and $\Psi_{n}$ is the corresponding flow symmetry plane.
Since the flow symmetry plane is not a direct observable, the anisotropic flow $v_{n}$ cannot
be measured directly. A popular approach is the event-plane method~\cite{Poskanzer:1998yz}, which has been widely used to calculate the azimuthal correlation of emitted particles with respect to the event plane. However, it was found that the results of the event-plane method strongly depend on the resolution of the event plane, which introduces an uncontrolled bias in the measurement~\cite{Luzum:2012da}.
As an alternative, the multi-particle azimuthal correlation method~\cite{Bilandzic:2010jr,Bilandzic:2013kga} has been developed and improved over the past ten years; it allows an unambiguous measurement of the underlying anisotropic flow and eliminates the detector bias.
\vspace{0.2cm}
\underline{\emph{2- and multi-particle correlations}}
\vspace{0.10cm}
Azimuthal correlations of two or more particles are calculated in two steps~\cite{Bilandzic:2010jr,Bilandzic:2013kga}. First, one obtains an average over all particles in a single event, and then one calculates an average over all events. The single-event 2-particle correlation is defined as:
\begin{eqnarray}
\langle \cos n(\varphi_1 - \varphi_2) \rangle = \langle e^{in(\varphi_1 - \varphi_2)} \rangle \qquad \qquad \qquad \qquad \qquad
\label{eq:2pc}
\end{eqnarray}
Here, $\langle ... \rangle$ denotes an average over all particles in a single event.
The average of the 2-particle correlation over all events is denoted by $\langle \langle ... \rangle \rangle= \langle \langle e^{in(\varphi_1 - \varphi_2)} \rangle \rangle$. Such correlations can serve as an estimate of the flow harmonics $v_n$ without knowledge of the symmetry plane, as can be demonstrated as follows:
\begin{eqnarray}
\qquad \langle \langle e^{in(\varphi_1 - \varphi_2)} \rangle \rangle &=& \langle \langle e^{in(\varphi_1 - \Psi_n - \varphi_2 + \Psi_n)} \rangle \rangle \nonumber \\
&=&\langle \langle e^{in(\varphi_1 - \Psi_n)} \rangle \langle e^{in(\varphi_2 - \Psi_n)} \rangle + \delta_n \rangle \approx \langle v_n^2 \rangle + \delta_n, \quad
\label{eq:22-pc}
\end{eqnarray}
where $\delta_n$ is the so-called non-flow. It is a term arising from statistical fluctuations, which imply $\langle AB \rangle \neq \langle A \rangle \langle B \rangle$, or from 2-particle correlations that are not associated with the collective expansion~\cite{Snellings:2011sz}.
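As a quick numerical illustration of Eq.~(\ref{eq:22-pc}), the Python sketch below (all parameters are invented toy values) samples events from the Fourier distribution above with a fixed $v_n$ and a random symmetry plane per event; the event-averaged 2-particle correlation then converges to $v_n^2$, since the sampled events contain no jets or resonance decays and the non-flow term $\delta_n$ is negligible by construction:

```python
import numpy as np

rng = np.random.default_rng(42)
n, v_n, M, n_events = 2, 0.1, 150, 300   # toy harmonic, flow value, multiplicity, sample size

def sample_event(psi):
    """Rejection-sample M angles from dN/dphi ~ 1 + 2 v_n cos(n(phi - psi))."""
    out = []
    while len(out) < M:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v_n) < 1.0 + 2.0 * v_n * np.cos(n * (phi - psi)):
            out.append(phi)
    return np.array(out)

corr2 = []
for _ in range(n_events):
    phi = sample_event(psi=rng.uniform(0.0, 2.0 * np.pi))  # random symmetry plane
    # single-event <2>: average of cos(n(phi_1 - phi_2)) over all distinct pairs
    c = np.cos(n * (phi[:, None] - phi[None, :]))
    np.fill_diagonal(c, 0.0)
    corr2.append(c.sum() / (M * (M - 1)))

avg2 = np.mean(corr2)   # <<2>> ~ v_n^2 up to statistical noise
```

With genuine data the same average would contain the additional non-flow contribution $\delta_n$.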
The formulas above can be extended to a generic notation for the single-event averaged $k$-particle correlators with mixed harmonics:
\begin{eqnarray}
\langle \cos(n_1\varphi_1\! + \!n_2\varphi_2\!+\!\cdots\!+\!n_k\varphi_k) \rangle \,(n_1\geq n_2 \geq \cdots \geq n_k)
\qquad \qquad
\label{eq:mm-pc}
\end{eqnarray}
Here, the azimuthal angle $\varphi_i$ belongs to the reconstructed particle $i$. The self-correlations must be removed completely, so that only genuine multi-particle correlations remain. For simplicity, we also denote this $k$-particle correlator by $\langle k \rangle_{n_{1}, n_{2}, ..., n_{k}}$ in the following. As in the case of the 2-particle correlator,
the subsequent average over all events can be obtained in a similar way as described in Eq.~(\ref{eq:22-pc}). For details, please refer to~\cite{Bhalerao:2011yg}.
Calculating the single-event averaged multi-particle correlators requires a large amount of computational resources, and the cost increases significantly for higher-order correlations. These correlators can, however, be calculated in a single loop over the particles of one event with the help of $Q$-vectors, which are introduced in the following.
\vspace{0.2cm}
\underline{\emph{Q-cumulant method}}
\vspace{0.10cm}
In the Q-cumulant method~\cite{Bilandzic:2010jr}, the single-event averaged correlations are calculated in terms of the $Q_{n}$-vector, which is defined as:
\begin{equation}
Q_{n} \equiv \sum_{i=1}^M e^{in\varphi_i}\,, \qquad \qquad \qquad \qquad
\label{a:Qvector}
\end{equation}
where $M$ is the number of particles in a given event, and $\varphi_i$ is the azimuthal angle of the $i$-th particle.
For azimuthal correlations involving only a single harmonic, the single-event averaged 2- and 4-particle azimuthal correlations can be calculated as:
\begin{eqnarray}
&&\langle 2 \rangle_{n,-n} = \frac{|Q_{n}|^{2} - M} {M(M-1)} \label{Eq:2pc}\\
&&\langle 4 \rangle_{n,n,-n,-n} = \frac{ \, |Q_{n}|^{4} + |Q_{2n}|^{2} - 2 \cdot {\rm{Re}} \left( Q_{2n}Q_{n}^{*}Q_{n}^{*} \right)- 2 [ 2(M-2) \cdot |Q_{n}|^{2} - M(M-3) ] \, } {[ M(M-1)(M-2)(M-3) ]} \nonumber.
\end{eqnarray}
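These $Q$-vector expressions are exact algebraic identities: for any set of angles they reproduce the brute-force averages over all pairs and quadruplets of distinct particles. A short Python check with toy angles (the multiplicity is kept tiny so that the brute-force sums stay cheap):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)
n, M = 2, 8                                # toy harmonic and a tiny toy event
phi = rng.uniform(0.0, 2.0 * np.pi, M)

Qn = np.sum(np.exp(1j * n * phi))          # Q_n
Q2n = np.sum(np.exp(2j * n * phi))         # Q_{2n}

# Q-vector expressions for the single-event <2> and <4>
two_q = (np.abs(Qn) ** 2 - M) / (M * (M - 1))
four_q = ((np.abs(Qn) ** 4 + np.abs(Q2n) ** 2
           - 2.0 * np.real(Q2n * np.conj(Qn) ** 2)
           - 2.0 * (2.0 * (M - 2) * np.abs(Qn) ** 2 - M * (M - 3)))
          / (M * (M - 1) * (M - 2) * (M - 3)))

# brute-force averages over ordered tuples of distinct particles
two_bf = np.mean([np.cos(n * (phi[i] - phi[j]))
                  for i, j in permutations(range(M), 2)])
four_bf = np.mean([np.cos(n * (phi[i] + phi[j] - phi[k] - phi[l]))
                   for i, j, k, l in permutations(range(M), 4)])
```

Both pairs of numbers agree to machine precision, confirming that the self-correlations are removed exactly.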
After averaging the correlators over the whole event sample, one obtains the 2- and 4-particle cumulants:
\begin{eqnarray}
c_{n}\{2\} = \langle \langle 2 \rangle \rangle_{n,-n}, \qquad
c_{n}\{4\} = \langle \langle 4 \rangle \rangle_{n,n,-n,-n} - 2 \, \langle \langle 2 \rangle \rangle_{n,-n}^{2}. \qquad \qquad
\label{Eq:c46}
\end{eqnarray}
Eventually, the reference (integrated) flow harmonics can be obtained from the 2- and 4-particle cumulants as:
\begin{eqnarray}
v_n\{2\} = \sqrt{c_n\{2\}}, \qquad v_n\{4\} = \sqrt[4]{-c_n\{4\}}. \qquad \qquad \qquad \qquad \qquad \qquad
\end{eqnarray}
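The full chain from azimuthal angles to $v_n\{2\}$ and $v_n\{4\}$ via Eqs.~(\ref{Eq:2pc})--(\ref{Eq:c46}) can be sketched in a few lines of Python. The event sample below is a toy model with an invented flow signal and no non-flow, so both estimators should recover the input value of $v_n$:

```python
import numpy as np

rng = np.random.default_rng(2016)
n, v_n, M, n_events = 2, 0.12, 300, 400    # invented toy parameters

def sample_event(psi):
    """Vectorized rejection sampling from dN/dphi ~ 1 + 2 v_n cos(n(phi - psi))."""
    phi = np.empty(0)
    while phi.size < M:
        cand = rng.uniform(0.0, 2.0 * np.pi, 2 * M)
        keep = (rng.uniform(0.0, 1.0 + 2.0 * v_n, 2 * M)
                < 1.0 + 2.0 * v_n * np.cos(n * (cand - psi)))
        phi = np.concatenate([phi, cand[keep]])
    return phi[:M]

two, four = [], []
for _ in range(n_events):
    phi = sample_event(rng.uniform(0.0, 2.0 * np.pi))
    Qn = np.sum(np.exp(1j * n * phi))
    Q2n = np.sum(np.exp(2j * n * phi))
    two.append((np.abs(Qn) ** 2 - M) / (M * (M - 1)))
    four.append(((np.abs(Qn) ** 4 + np.abs(Q2n) ** 2
                  - 2.0 * np.real(Q2n * np.conj(Qn) ** 2)
                  - 2.0 * (2.0 * (M - 2) * np.abs(Qn) ** 2 - M * (M - 3)))
                 / (M * (M - 1) * (M - 2) * (M - 3))))

c2 = np.mean(two)                            # c_n{2}
c4 = np.mean(four) - 2.0 * np.mean(two) ** 2 # c_n{4}
vn_2 = np.sqrt(c2)                           # v_n{2}
vn_4 = (-c4) ** 0.25                         # v_n{4}; c4 < 0 for a genuine flow signal
```

For this idealized sample without flow fluctuations and non-flow, $v_n\{2\}$ and $v_n\{4\}$ coincide; in real data, flow fluctuations push $v_n\{2\}$ above $v_n\{4\}$.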
The differential flow harmonics of identified or all charged hadrons can be obtained from single-event correlators averaged only over the particles of interest within an event. Owing to space limitations, we do not reproduce the lengthy formulas here, but refer to~\cite{Bilandzic:2010jr} for details.
As pointed out above, non-flow effects, originating from resonance decays, jets, etc., can strongly influence the calculated flow harmonics, especially those obtained from 2-particle correlations. In order to largely suppress the non-flow contribution, the method of applying a $|\Delta \eta|$ gap to the 2-particle correlations has been developed. In this method, each analyzed event is divided into two sub-events separated by a certain $|\Delta \eta|$ gap. After obtaining the $Q$-vectors of each sub-event separately, the single-event averaged 2-particle correlation with $|\Delta \eta|$ gap can be calculated as:
\begin{equation}
\langle 2 \rangle_{n,-n}^{|\Delta \eta|} = \frac{Q_{n}^{A}\cdot Q_{n}^{B*}}{M_{A}\, M_{B}}, \qquad \qquad \qquad \qquad
\end{equation}
where $A$ and $B$ denote the two sub-events. The corresponding flow harmonics are usually denoted by $v_n\{2, |\Delta \eta| > X\}$ and can be obtained in the same way as the reference flow without $|\Delta \eta|$ gap above.
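In code, the sub-event method amounts to two independent $Q$-vectors; since $A$ and $B$ share no particles, no self-correlation term needs to be subtracted (the real part is taken, as the imaginary part vanishes on average). A sketch with a single toy event and an invented acceptance:

```python
import numpy as np

rng = np.random.default_rng(7)
n, N = 2, 1000
eta = rng.uniform(-2.5, 2.5, N)               # toy pseudorapidities
phi = rng.uniform(0.0, 2.0 * np.pi, N)        # toy azimuthal angles

gap = 1.0                                      # require |Delta eta| > 1.0 between sub-events
phi_A = phi[eta > gap / 2]                     # sub-event A
phi_B = phi[eta < -gap / 2]                    # sub-event B

QA = np.sum(np.exp(1j * n * phi_A))
QB = np.sum(np.exp(1j * n * phi_B))
two_gap = np.real(QA * np.conj(QB)) / (phi_A.size * phi_B.size)

# cross-check: identical to the average over all A-B particle pairs
two_pairs = np.mean(np.cos(n * (phi_A[:, None] - phi_B[None, :])))
```

The $|\Delta\eta|$ separation suppresses short-range non-flow correlations because decay products and jet fragments rarely end up in both sub-events.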
\vspace{0.2cm}
\underline{\emph{Generic framework}}
\vspace{0.10cm}
In 2013, a generic framework was developed~\cite{Bilandzic:2013kga} which enables the exact and efficient evaluation of multi-particle azimuthal correlations of arbitrary order. This framework can be used together with a correction framework for the systematic biases of anisotropic flow analyses caused by non-uniform acceptance (NUA) and non-uniform efficiency (NUE) effects.
For an event with multiplicity $M$, one constructs two sets, one for the azimuthal angles of the particles, $\{\varphi_1,\varphi_2,\ldots,\varphi_M \}$, and one for the corresponding weights, $\{w_1,w_2,\ldots,w_M \}$.
With these two sets, one can calculate the weighted $Q_{n}$-vectors of each event, defined as:
\begin{equation}
Q_{n,p} \equiv \sum_{i=1}^{M}w_i^p\,e^{in\varphi_i} \,, \qquad \qquad \qquad \qquad
\label{eq:Qvector}
\end{equation}
where $w_i$ is the weight of particle $i$ and $p$ is the power of the weight. Correspondingly, the $m$-particle correlator is defined as:
\begin{eqnarray}
&&\Num{m}{n_1,n_2,\ldots,n_m}\equiv
\displaystyle\sum_{\begin{subarray}{c}i_1,i_2,\ldots,i_m=1\\i_1\neq i_2\neq \ldots\neq i_m\end{subarray}}^{M}\!\!\!\!\!w_{i_1}w_{i_2}\cdots w_{i_m}\,e^{i(n_1\varphi_{i_1}+n_2\varphi_{i_2}+\cdots+n_m\varphi_{i_m})}
\label{eq:num}
\end{eqnarray}
Here, the $m$-particle correlator is denoted by $\Num{m}{n_1,n_2,\ldots,n_m}$ for convenience. One can also introduce the shorthand $\Den{m}{n_1,n_2,\ldots,n_m}= \Num{m}{0,0,\ldots,0}$ and then calculate the single-event average of the multi-particle azimuthal correlations via:
\begin{equation}
\langle m \rangle_{n_1,n_2,\ldots,n_m} = \frac{\Num{m}{n_1,n_2,\ldots,n_m}}{\Den{m}{n_1,n_2,\ldots,n_m}}.
\qquad \qquad \qquad \qquad \qquad
\label{eq:MPCGF}
\end{equation}
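The definitions of Eqs.~(\ref{eq:Qvector})--(\ref{eq:MPCGF}) translate directly into code. The brute-force sum over distinct index tuples below costs $O(M^m)$ and serves only to validate the $Q$-vector shortcut given in the following equations; the weights are toy values standing in for NUA/NUE corrections:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(10)
M = 7                                       # tiny toy multiplicity: brute force is O(M^m)
phi = rng.uniform(0.0, 2.0 * np.pi, M)
w = rng.uniform(0.5, 1.5, M)                # toy particle weights

def num_bruteforce(ns):
    """N<m>_{n1..nm}: weighted sum over ordered tuples of m distinct particles."""
    return sum(np.prod(w[list(i)]) * np.exp(1j * np.dot(ns, phi[list(i)]))
               for i in permutations(range(M), len(ns)))

avg2 = num_bruteforce([2, -2]) / num_bruteforce([0, 0])   # single-event <2>_{2,-2}

# the same quantities from weighted Q-vectors, Q_{n,p} = sum_i w_i^p exp(i n phi_i)
def Q(nn, p):
    return np.sum(w ** p * np.exp(1j * nn * phi))

num_q = Q(2, 1) * Q(-2, 1) - Q(0, 2)        # N<2>_{2,-2}
den_q = Q(0, 1) ** 2 - Q(0, 2)              # D<2>_{2,-2}
```

For realistic multiplicities the brute-force sum is hopeless, while the $Q$-vector evaluation needs only one pass over the particles.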
Based on this generic framework, one can explicitly write out the results for the 2- and 4-particle correlators, which are expressed analytically in terms of the $Q_{n,p}$-vectors defined above. The single-event averaged 2- and 4-particle correlations are then calculated as:
\begin{eqnarray}
&&\langle 2 \rangle_{n_{1}, n_{2}} = \frac{\Num{2}{n_1,n_2}}{\Den{2}{n_1,n_2}}, \qquad
\langle 4 \rangle_{n_{1}, n_{2}, n_{3}, n_{4}} = \frac{\Num{4}{n_1, n_2, n_3, n_4}}{\Den{4}{n_1, n_2, n_3, n_4}}.
\label{eq:Ev4PC}
\end{eqnarray}
Here $\Num{2}{n_1,n_2}$ and $\Den{2}{n_1,n_2}$ are obtained as:
\begin{subequations}
\begin{eqnarray}
&&\Num{2}{n_1,n_2}=Q_{n_1,1} Q_{n_2,1}-Q_{n_1+n_2,2}, \\
&&\Den{2}{n_1,n_2}=\Num{2}{0,0} = Q_{0,1}^2-Q_{0,2}\,. \qquad \qquad \qquad \qquad
\label{eq:2pCorrelation}
\end{eqnarray}
\end{subequations}
Similarly, one can calculate ${\rm N}\langle 4 \rangle_{n_1,n_2,n_3,n_4}$ and ${\rm D}\langle 4 \rangle_{n_1,n_2,n_3,n_4}$ as follows:
\begin{subequations}
\begin{eqnarray}
&&{\rm N}\langle4\rangle_{n_1, n_2, n_3, n_4}= Q_{n_1,1} Q_{n_2,1} Q_{n_3,1} Q_{n_4,1}-Q_{n_1+n_2,2} Q_{n_3,1} Q_{n_4,1}
-Q_{n_2,1} Q_{n_1+n_3,2} Q_{n_4,1}\nonumber\\
&&\quad -Q_{n_1,1} Q_{n_2+n_3,2} Q_{n_4,1}+2 Q_{n_1+n_2+n_3,3} Q_{n_4,1}
-Q_{n_2,1}Q_{n_3,1} Q_{n_1+n_4,2}+Q_{n_2+n_3,2} Q_{n_1+n_4,2}\nonumber\\
&&\quad -Q_{n_1,1} Q_{n_3,1} Q_{n_2+n_4,2}+Q_{n_1+n_3,2} Q_{n_2+n_4,2}+2 Q_{n_3,1} Q_{n_1+n_2+n_4,3}
-Q_{n_1,1} Q_{n_2,1} Q_{n_3+n_4,2}\nonumber\\
&&\quad +Q_{n_1+n_2,2}Q_{n_3+n_4,2} +2 Q_{n_2,1} Q_{n_1+n_3+n_4,3}+2 Q_{n_1,1} Q_{n_2+n_3+n_4,3}
-6 Q_{n_1+n_2+n_3+n_4,4}\,,\\
&&{\rm D}\langle4\rangle_{n_1, n_2, n_3, n_4}=\Num{4}{0,0,0,0}
=Q_{0, 1}^4 - 6 Q_{0, 1}^2 Q_{0, 2} + 3 Q_{0, 2}^2
+ 8 Q_{0, 1} Q_{0, 3} - 6 Q_{0, 4}\,.
\label{eq:4pCorrelation}
\end{eqnarray}
\end{subequations}
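The 15-term expression for ${\rm N}\langle4\rangle$ is easy to mistype, so it is worth validating numerically against the defining sum over ordered quadruplets of distinct particles (a tiny toy event with invented weights):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(11)
M = 7                                      # tiny toy event keeps the direct sum cheap
phi = rng.uniform(0.0, 2.0 * np.pi, M)
w = rng.uniform(0.5, 1.5, M)               # toy weights

def Q(nn, p):
    """Weighted Q-vector Q_{n,p} = sum_i w_i^p exp(i n phi_i)."""
    return np.sum(w ** p * np.exp(1j * nn * phi))

n1, n2, n3, n4 = 2, 2, -2, -2

# defining sum over ordered quadruplets of distinct particles
num_direct = sum(np.prod(w[list(i)])
                 * np.exp(1j * (n1 * phi[i[0]] + n2 * phi[i[1]]
                                + n3 * phi[i[2]] + n4 * phi[i[3]]))
                 for i in permutations(range(M), 4))

# the 15-term Q-vector expression for N<4>_{n1,n2,n3,n4}
num_q = (Q(n1,1)*Q(n2,1)*Q(n3,1)*Q(n4,1) - Q(n1+n2,2)*Q(n3,1)*Q(n4,1)
         - Q(n2,1)*Q(n1+n3,2)*Q(n4,1) - Q(n1,1)*Q(n2+n3,2)*Q(n4,1)
         + 2*Q(n1+n2+n3,3)*Q(n4,1) - Q(n2,1)*Q(n3,1)*Q(n1+n4,2)
         + Q(n2+n3,2)*Q(n1+n4,2) - Q(n1,1)*Q(n3,1)*Q(n2+n4,2)
         + Q(n1+n3,2)*Q(n2+n4,2) + 2*Q(n3,1)*Q(n1+n2+n4,3)
         - Q(n1,1)*Q(n2,1)*Q(n3+n4,2) + Q(n1+n2,2)*Q(n3+n4,2)
         + 2*Q(n2,1)*Q(n1+n3+n4,3) + 2*Q(n1,1)*Q(n2+n3+n4,3)
         - 6*Q(n1+n2+n3+n4,4))
```

The two evaluations agree to machine precision for any choice of angles, weights and harmonics.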
The analogous results for higher-order correlators and differential flow can be written out in a similar manner. The details can be found in~\cite{Bilandzic:2013kga}.
Last but not least, the generic framework not only corrects for the NUA and NUE effects exactly and efficiently, but can also be applied to multi-particle correlations of any order, including cases where a direct implementation was not feasible before. For instance, Eq.~(\ref{eq:Ev4PC}) can be used for the symmetric cumulant $SC(4,2)$ (discussed in Sec.~V) by calculating the 4-particle correlation $\langle 4 \rangle_{4,2,-4,-2}$ and the 2-particle correlations $\langle 2 \rangle_{2,-2}$ and $\langle 2 \rangle_{4,-4}$.
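As a sketch of how the symmetric cumulant is assembled from these correlators (unit weights and random toy angles, so no genuine correlation is present and $SC(4,2)$ is consistent with zero):

```python
import math
import numpy as np
from itertools import permutations

rng = np.random.default_rng(12)
M, n_events = 7, 150                       # toy sample: random angles, no flow

def corr(phi, ns):
    """Single-event <m>_{n1..nm} with unit weights, via the direct sum."""
    s = sum(np.exp(1j * np.dot(ns, phi[list(i)]))
            for i in permutations(range(phi.size), len(ns)))
    return np.real(s) / math.perm(phi.size, len(ns))

four, two2, two4 = [], [], []
for _ in range(n_events):
    phi = rng.uniform(0.0, 2.0 * np.pi, M)
    four.append(corr(phi, [4, 2, -4, -2]))
    two2.append(corr(phi, [2, -2]))
    two4.append(corr(phi, [4, -4]))

# SC(4,2) = <<4>_{4,2,-4,-2}> - <<2>_{4,-4}> <<2>_{2,-2}>
sc42 = np.mean(four) - np.mean(two4) * np.mean(two2)
```

With hydrodynamic events instead of random angles, the sign of $SC(4,2)$ encodes whether $v_4$ and $v_2$ fluctuations are correlated or anti-correlated.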
\section{Extracting the QGP viscosity from flow harmonics}\label{sec:artwork}
\subsection{Semi-quantitative extractions of the QGP shear viscosity}
\quad The hydrodynamic calculations from different groups have shown that the flow harmonics are sensitive to the QGP shear viscosity $\eta/s$, which can be used to study the transport properties of the hot QCD matter~\cite{Heinz:2013th,Gale:2013da,Teaney:2009qa,Song:2013gia,
Romatschke:2009im,Huovinen:2013wma,Romatschke:2007mq,Luzum:2008cw,Song:2007fn,Song:2007ux,Song:2009gc,Dusling:2007gi,Molnar:2008xj,Bozek:2009dw,Chaudhuri:2009hj,
Schenke:2010rr}. Around 2008, the INT group made an early extraction of the QGP shear viscosity
from the integrated and differential elliptic flow data in 200 A GeV Au--Au collisions, using
2+1-d viscous simulations with optical Glauber and KLN initializations~\cite{Romatschke:2007mq,Luzum:2008cw}. They found that these two initial conditions introduce a large uncertainty, of order 100\%, in the extracted value of $\eta/s$.
However, it is not reliable to read off the value of $\eta/s$ from such a direct model-to-data comparison, since these calculations neglected the highly viscous and even off-equilibrium hadronic evolution, treating that stage as a purely viscous fluid expansion in both chemical and thermal equilibrium. Refs.~\cite{Luzum:2008cw,Song:2008hj} further estimated the effects of the late hadronic evolution and
concluded that the extracted value of the specific QGP shear viscosity $(\eta/s)_{QGP}$ cannot
exceed an upper limit of about $5\times\frac{1}{4\pi}$.
\begin{figure*}[t]
\center
\includegraphics[width=0.9\linewidth, height=5.8cm]{Fig1-etaS}
\caption{(Color online) Eccentricity-scaled elliptic flow as a function of final multiplicity per
area. The theoretical results are calculated from VISHNU hybrid model calculations with MC-Glauber (left) and MC-KLN (right) initial conditions~\cite{Song:2010mg}. The experimental data are taken from Ref.~\cite{Ollitrault:2009ie}.}
\end{figure*}
For a realistic description of the evolution and decoupling of the hadronic matter, the OSU-LBL group further developed the VISHNU hybrid model~\cite{Song:2010aq}, which combines 2+1-d viscous hydrodynamics with the hadron cascade model {\tt UrQMD},
and then made a semi-quantitative extraction of the QGP shear viscosity from the integrated elliptic flow data in 200 A GeV Au--Au collisions~\cite{Song:2010mg,Song:2011hk}. Fig.~1 shows the eccentricity-scaled integrated elliptic flow calculated from VISHNU with MC-Glauber and MC-KLN initial conditions, together with a comparison with the corrected experimental data, from which the non-flow and fluctuation effects have been removed~\cite{Ollitrault:2009ie}. From Fig.~1, one finds $\frac{1}{4\pi}<(\eta/s)_{QGP}< 2.5\times\frac{1}{4\pi}$, where the main uncertainties of the extracted $(\eta/s)_{QGP}$ still come from the undetermined initial conditions. Meanwhile, the corresponding VISHNU simulations with both MC-Glauber and MC-KLN initial conditions nicely describe the $p_T$-spectra and differential elliptic flow $v_2(p_T)$ of all charged and identified hadrons in various centrality bins in 200 A GeV Au--Au collisions~\cite{Song:2011hk}. Compared with the early extraction in Ref.~\cite{Romatschke:2007mq}, the precision of the extracted $(\eta/s)_{QGP}$ is largely improved due to the better description of the highly viscous hadronic stage.
In Ref.~\cite{Song:2011qa}, the VISHNU simulations were further extrapolated to LHC energies to systematically investigate the soft hadron data in 2.76 A TeV Pb--Pb collisions. The related calculations showed that, with the same $(\eta/s)_{QGP}$ extracted at top RHIC energies, VISHNU slightly over-predicts the ALICE flow data at the LHC. After slightly increasing $(\eta/s)_{QGP}$ (for the MC-KLN initial conditions, from $\sim$0.16 to $\sim$0.20), VISHNU achieves a better description of the elliptic flow of all charged hadrons at various centralities~\cite{Song:2011qa}.
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=7.5cm,height=5.8cm]{Fig2-MUSIC-1}
\includegraphics[width=7.5cm,height=5.8cm]{Fig2-MUSIC-2}
\vspace{0.0cm}
\caption{(Color online) Root-mean-square anisotropic flow coefficients $\langle v_n^2 \rangle ^{1/2}$ and
$v_n(p_T)$ in 2.76 A TeV Pb--Pb collisions. The theoretical curves are calculated from
MUSIC with IP-Glasma initial conditions~\cite{Gale:2012rq}. The experimental data in the left and right panels are measured by the ALICE collaboration~\cite{ALICE:2011ab} and the ATLAS collaboration, respectively.}
\label{fig:vnCent}
\end{center}
\vspace{-0.5cm}
\end{figure*}
\begin{figure*}[htb]
\begin{centering}
\includegraphics[scale=0.82]{Fig3-vnPID}
\end{centering}
\vspace{-7mm}
\caption{(Color online) $v_{n}(p_T)$ ($n=2,3,4$) of pions, kaons and protons in 2.76 A TeV Pb--Pb collisions, calculated from {\tt{iEBE-VISHNU}} with {\tt{AMPT}} initial conditions~\cite{Xu:2016hmp}. The experimental data are taken from the ALICE paper~\cite{Abelev:2014pua,Mohammadi:2016umt}.
\label{fig:flowharmonics} }
\end{figure*}
Many of the early hydrodynamic or hybrid model simulations (including the 2+1-d hydrodynamic and VISHNU calculations mentioned above)~\cite{Song:2010aq,Song:2010mg,Song:2011hk,Song:2011qa,Song:2013qma,Heinz:2011kt,Zhu:2015dfa} belong to the category of single-shot simulations, which input smooth initial energy/entropy profiles from early initial condition models, or smoothed profiles obtained by averaging millions of events from some specific fluctuating initial condition model. Correspondingly, the effects of initial state fluctuations are neglected. Around 2012, the McGill group further developed event-by-event 3+1-d viscous hydrodynamic simulations with IP-Glasma pre-equilibrium dynamics (MUSIC + IP-Glasma) and calculated the flow harmonics of different orders at RHIC and the LHC~\cite{Gale:2012rq}. Fig.~2 shows the integrated and differential $v_n$ ($n=2,\ldots,5$) of all charged hadrons
in 2.76 A TeV Pb--Pb collisions. Impressively, these flow harmonic data are nicely described by the MUSIC simulations with $\eta/s=0.2$ or a temperature-dependent $\eta/s(T)$ at various centralities. Meanwhile, these simulations also show that the averaged QGP viscosity is slightly larger at the LHC than at RHIC, as found in~\cite{Song:2011qa}. Compared with the VISHNU simulations~\cite{Song:2010mg,Song:2011hk, Song:2011qa}, these MUSIC calculations are purely hydrodynamic, without a special treatment of the hadronic evolution by a hadronic afterburner. However, the main results are not expected to change significantly, since the flow harmonics at LHC energies are mainly developed (or even saturate) in the QGP phase.
For hydrodynamic simulations with IP-Glasma initial conditions, balanced initial eccentricities of different orders are generated at the beginning, which helps to achieve a simultaneous fit of the elliptic flow, the triangular flow and the higher-order harmonics. In contrast, hydrodynamic calculations with either MC-Glauber or MC-KLN initial conditions fail to describe all the flow harmonics $v_n$ ($n=2,\ldots,5$) simultaneously, although they can nicely fit the elliptic flow data with a well-tuned QGP shear viscosity. Therefore, the higher-order flow harmonic measurements disfavor these two initial conditions, which also motivated the later development of other initial condition models. In short, the extracted value of the QGP viscosity may be largely influenced by the initial conditions used in the hydrodynamic calculations. Meanwhile, the higher-order flow harmonics, as well as other flow observables (please also refer to Sec.~V for details), can put stringent constraints on the initial condition models and on the extracted value of the QGP shear viscosity.
Besides the flow data of all charged hadrons, the flow harmonics of identified hadrons can reveal more information on the hadronic evolution of the hot QCD matter and provide an additional test of the QGP transport coefficients extracted from the soft hadron data of all charged hadrons. Refs.~\cite{Song:2013qma} and~\cite{Heinz:2011kt} have shown that, with the constant QGP shear viscosity extracted from the elliptic flow in 2.76 A TeV Pb--Pb collisions, the VISHNU hybrid model nicely describes the differential elliptic flow data of pions, kaons and protons~\cite{Song:2013qma,Heinz:2011kt}. Meanwhile, it also roughly fits the elliptic flow data of strange and multi-strange hadrons ($\Lambda$, $\Xi$ and $\Omega$) measured at the LHC~\cite{Zhu:2015dfa}. Recently, the ALICE collaboration further measured the higher-order flow harmonics of identified hadrons in 2.76 A TeV Pb--Pb collisions, showing that the triangular and quadrangular flow of pions, kaons and protons exhibits a similar mass ordering to the elliptic flow~\cite{Adam:2016nfo}. In Ref.~\cite{Xu:2016hmp},
the PKU group implemented the iEBE-VISHNU hybrid model with AMPT initial conditions to investigate the flow harmonics of identified hadrons $v_n(p_T)$ ($n=$ 2,3,4) at the LHC. After tuning the Gaussian smearing factor for the initial energy deposition and the QGP shear viscosity, the differential $v_n$ of all charged hadrons is nicely described by the
iEBE-VISHNU simulations. As shown in Fig.~3, iEBE-VISHNU also nicely describes the $v_n$ data of pions, kaons and protons, and in particular reproduces the correct mass ordering of these different flow harmonics.
Ref.~\cite{Xu:2016hmp} also showed that pure hydrodynamic simulations do not generate enough mass splitting between the $v_n$ of pions and protons. The late hadronic evolution in iEBE-VISHNU
redistributes the anisotropic flow among the various hadron species through microscopic hadronic scatterings, which enhances
the $v_n$ mass splitting between pions and protons and leads to a nice description of the
experimental data~\cite{Xu:2016hmp}.
\vspace{0.15cm}
\underline{\emph{The issues of bulk viscosity}}
\vspace{0.10cm}
For simplicity, the early semi-quantitative extractions of the QGP shear viscosity at RHIC and the
LHC neglected the effects of bulk viscosity~\footnote{At the LHC and top RHIC energies, the heat
conductivity can be neglected due to the almost vanishing net baryon density.}.
The (0+1)-d viscous hydrodynamic calculations without transverse expansion
~\cite{Torrieri:2008ip,Rajagopal:2009yw} suggested that, for a uniform system
undergoing rapid boost-invariant longitudinal expansion, the bulk pressure can become
negative, leading to a mechanically unstable fluid with negative effective
pressure. The 2+1-d viscous hydrodynamic single-shot simulations showed that the bulk viscosity suppresses the elliptic flow, as the shear viscosity does~\cite{Song:2009rh,Song:2009je,Monnai:2009ad,Denicol:2009am,
Noronha-Hostler:2014dqa},
but with a smaller effect, due to the critical slowing down near the QCD phase transition~\cite{Song:2009rh}.
Recently, the 3+1-d event-by-event simulations with MUSIC found that the bulk viscosity largely influences the average transverse momentum of identified hadrons~\cite{Ryu:2015vwa}. For the MUSIC calculations with IP-Glasma initial conditions, the fit of the $p_T$ spectra is largely improved by a properly chosen bulk viscosity, which also leads to
a consistent description of other soft hadron data, such as the integrated and differential flow harmonics at various centralities in 2.76 A TeV Pb--Pb collisions.
\begin{figure}[tbh]
\vspace{5mm}
\begin{centering}
\includegraphics[scale=0.9]{Fig4-Duke-etas}
\end{centering}
\vspace{0mm}
\caption{(Color online) Estimated temperature dependence of the specific shear viscosity $(\eta/s)(T)$ above the QCD phase transition ($T > T_c \simeq 154$ MeV), obtained from a multi-parameter model-to-data comparison~\cite{Bernhard:2016tnd}.
\label{fig:etasDuke} }
\end{figure}
\subsection{Quantitative extractions of the QGP shear and bulk viscosity with massive data evaluations}
For the flow calculations and predictions at RHIC and the LHC, most hydrodynamic or hybrid model simulations, with different types of initial conditions, input a constant value of the specific QGP shear viscosity and neglect the effects
of bulk viscosity. The early model calculations also revealed that the averaged QGP shear viscosity changes with the collision energy and is slightly larger at the LHC than at RHIC~\cite{Song:2011qa,Gale:2013da,Karpenko:2013ama,Song:2012ua}.
It is thus very important to extract a temperature-dependent QGP shear viscosity $(\eta/s)_{QGP}(T)$ from the massive soft hadron data in relativistic heavy-ion collisions. For the purpose of massive data evaluation,
the Livermore group developed the CHIMERA algorithm (a comprehensive heavy-ion model evaluation and reporting algorithm) and extracted the initial temperature and the QGP shear viscosity from a simultaneous fit of the $p_T$ spectra, elliptic flow and HBT radii in 200 A GeV Au--Au collisions~\cite{Soltz:2012rk}. Note that these early massive hydrodynamic simulations, performed around 2012, assumed a constant QGP shear viscosity and zero bulk viscosity, together with the traditional MC-Glauber initial condition, which has been ruled out by later flow measurements.
To avoid the limitations of simultaneously tuning multiple free parameters in the early work~\cite{Soltz:2012rk}, the Duke-OSU group applied the Bayesian method to event-by-event hybrid model simulations~\cite{Bernhard:2015hxa} and quantitatively estimated the properties of the QGP through a multi-parameter model-to-data comparison, using the parametric T\raisebox{-.5ex}{R}ENTo initial conditions~\cite{Bernhard:2016tnd}. With the newly developed massive data evaluation techniques, the global fit of the multiplicity, transverse momentum and flow data at the LHC constrains the free parameters of the T\raisebox{-.5ex}{R}ENTo model and yields an extracted temperature-dependent specific shear viscosity and bulk viscosity.
\begin{figure*}[tbh]
\begin{centering}
\includegraphics[width=15.0cm,height=6.2cm]{Fig5-Duke}
\end{centering}
\vspace{0.0mm}
\caption{(Color online) Multiplicities, mean $p_T$ of all charged and identified hadrons
and the integrated $v_n$ ($n=$ 2,3,4) of all charged hadrons in 2.76 A TeV Pb--Pb collisions, calculated from
event-by-event hybrid model with the high-probability
parameters extracted from the massive data fitting~\cite{Bernhard:2016tnd}.
The data are from the ALICE experiment~\cite{Abelev:2013vea,ALICE:2011ab}.}
\vspace{-0.3cm}
\end{figure*}
Fig.~4 shows the temperature-dependent shear viscosity $(\eta/s)(T)$ estimated by the Duke-OSU group from the massive data fit in 2.76 A TeV Pb--Pb collisions. The blue line is the median, and the blue band shows the 90\% credible region. Correspondingly, a nonzero bulk viscosity with a peak near the QCD phase transition has also been extracted simultaneously (for details, please refer to~\cite{Bernhard:2016tnd}).
With these extracted QGP transport coefficients and the other most probable parameters, the event-by-event hybrid simulations give an excellent overall fit of the multiplicities and mean $p_T$ of all charged and identified hadrons
and the integrated $v_n$ ($n=$ 2,3,4) of all charged hadrons from the most central to peripheral Pb--Pb collisions at the LHC, as shown in Fig.~5.
Note that this extracted $\eta/s(T)$ is, within the uncertainty band, compatible with the well-known KSS bound $\eta/s \geq 1/4\pi$~\cite{Danielewicz:1984ww, Policastro:2001yc, Kovtun:2004de}, and it also supports several earlier semi-quantitative extractions of the QGP viscosity at RHIC and the LHC. For example, the specific QGP viscosity $\frac{1}{4\pi}<(\eta/s)_{QGP}< 2.5\times\frac{1}{4\pi}$ extracted from
the VISHNU calculations with MC-Glauber and MC-KLN initial conditions~\cite{Song:2010mg,Song:2011hk} and the
value $\eta/s = 0.095$ (with the same bulk viscosity parametrization) implemented in the MUSIC simulations with the IP-Glasma initialization~\cite{Ryu:2015vwa} are both consistent with the quantitative extraction of the Duke-OSU collaboration. The early EKRT viscous hydrodynamic calculations of the flow data at RHIC and the LHC also prefer a temperature-dependent $\eta/s(T)$ with a positive slope~\cite{Niemi:2015qia}.
Compared with the early extractions of the QGP viscosity with specific initial conditions, Ref.~\cite{Bernhard:2016tnd} implements the parametric {\tt T\raisebox{-.5ex}{R}ENTo} model, which smoothly interpolates among various
initial condition schemes through the related parameters. It is thus an ideal initial state model for massive model-to-data comparisons, which helps to constrain the initial conditions and the QGP transport
coefficients simultaneously. It was found that the initial entropy deposition of the constrained T\raisebox{-.5ex}{R}ENTo model with fixed parameters is approximately proportional to the geometric mean of the participant nuclear densities, which gives a scaling similar to that of the successful EKRT and IP-Glasma initial conditions.
In Ref.~\cite{Auvinen:2016tgi}, the Bayesian statistical analysis was extended to massive data fitting in Au--Au collisions at $\sqrt{s_{\rm NN}}=$ 19.6, 39 and 62.4 GeV. It was found that the extracted constant QGP specific shear viscosity $\eta/s$ decreases with increasing collision energy, similar to the result obtained from early hybrid model calculations~\cite{Karpenko:2013ama}. In the future, a combined massive data fit at RHIC (including the BES) and the LHC will give more precise temperature-dependent transport coefficients of the QGP.
\section{Initial state fluctuations and final state correlations}
The event-by-event initial state fluctuations of the created QGP fireballs lead to the final state correlations, which produce the elliptic flow, triangular flow and other higher-order flow harmonics as observed in the experiments at RHIC and the LHC~\cite{Alver:2010gr,Alver:2010dn,Adare:2011tg,Gardim:2011xv,Adamczyk:2013waa,ALICE:2011ab,Aamodt:2011by,ATLAS:2012at}.
The QGP viscosity largely suppresses the flow harmonics $v_n$ at different orders. As reviewed in the last section, the transport properties of the QGP fireball have been extracted from these flow data with event-by-event hydrodynamic / hybrid model simulations~\cite{Song:2011qa, Schenke:2012fw,Gale:2012rq}. In this section, we will review other flow observables, such as the event-by-event $v_n$ distribution, the event-plane correlations, the correlations of flow harmonics, etc., that are more sensitive to the details of the model calculations, which may provide additional constraints on the initial state models and
on the extracted QGP transport coefficients in the future.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=6.75cm,height=12cm]{Fig6-VnDis-1}
\includegraphics[width=6.75cm,height=12cm]{Fig6-VnDis-2}
\caption{(Color online) Scaled event-by-event distributions of $v_n$ ($n=2,\,3,\,4$) from MUSIC simulations with the IP-Glasma initial conditions~\cite{Gale:2012rq,Gale:2013da}, together with a comparison with the ATLAS data~\cite{Aad:2013xma}.}
\label{fig:vnenDist-20-25}
\end{center}
\vspace{-0.7cm}
\end{figure}
\vspace{0.2cm}
\underline{\emph{Event-by-event $v_n$ distribution}}:
\vspace{0.10cm}
The flow harmonics $v_{n}$ are generally measured on an event-averaged basis, which
mainly reflects the hydrodynamic response to the averaged initial eccentricity coefficients
$\varepsilon_{n}$ within some centrality bin. With the large number of particles produced per event at the LHC,
a direct measurement of the event-by-event $v_{n}$ distribution becomes possible, which provides
more information on the initial state fluctuations and the underlying probability density function.
Around 2012, the ATLAS Collaboration made the first measurement of the event-by-event distributions of $v_{n}$ ($n = 2,\ 3, \ 4$) in Pb--Pb collisions at $\sqrt{s_{\rm NN}}=$ 2.76 TeV~\cite{Aad:2013xma}. Fig.~\ref{fig:vnenDist-20-25} shows that the MUSIC hydrodynamic calculations with the IP-Glasma initial conditions nicely describe the ATLAS data. It also shows that, for $n=2$ and 3, the rescaled $v_{n} / \left< v_{n} \right>$ distributions mostly follow the ${\varepsilon_n}/{\left< \varepsilon_n \right>}$ distributions from the initial state, which are not sensitive to the details of the hydrodynamic evolution~\cite{Gale:2012rq}. Due to the mode-coupling effects for higher flow harmonics,
the distributions of $v_{4} / \left< v_{4} \right>$ do not necessarily follow ${\varepsilon_4}/{\left< \varepsilon_4 \right>}$, especially for non-central Pb--Pb collisions. The hydrodynamic evolution reshapes the distributions of $v_{4} / \left< v_{4} \right>$, leading to a nice description of the experimental data. In Ref.~\cite{Aad:2013xma}, the measured $v_n$ distributions were compared with the $\varepsilon_n$ distributions from the MC-Glauber and MC-KLN models, which demonstrated clear deviations between model and data for most of the centrality classes. The $v_n$ distributions thus provide strong constraints on the initial state models, which do not favor the MC-Glauber and MC-KLN initial conditions.
\begin{figure*}[tbh]
\centering
\includegraphics[width=15.0cm,height=5.0cm]{Fig7-ratio}
\caption{(Color online) The ratio $v_{n}\{2\}/v_{n}[2]$ at various centralities in 2.76 A TeV Pb--Pb collisions. The theoretical lines are calculated from VISH2+1 with MC-Glauber and MC-KLN initial conditions~\cite{Heinz:2013bua}; the experimental data are measured by the ALICE collaboration~\cite{Zhou:2014bba}.}
\label{fig:ptdepV}
\end{figure*}
The ATLAS measurements can also be used to examine the underlying $p.d.f.$ of the $v_{n}$ distributions. The most popular parameterization is the Bessel-Gaussian distribution~\cite{Voloshin:2007pc}:
\begin{equation}
p(v_{n}) = \frac{v_{n}}{\sigma^{2}} I_{0}\left( \frac{v_{0} v_{n}}{\sigma^{2}} \right) {\rm exp} \left( - \frac{ v_{0}^{2} + v_{n}^{2}} {2\sigma^{2}} \right) ,
\end{equation}
where $v_{0}$ is the anisotropic flow associated with the reaction plane $\Psi_{\rm RP}$ and $\sigma$ quantifies the anisotropic flow fluctuations.
It was reported that the Bessel-Gaussian distribution nicely describes the $v_{2}$ distributions for mid-central collisions~\cite{Voloshin:2007pc, Broniowski:2007ft}. Since it does not implement the constraint $\varepsilon_{2} <$ 1 for each event, however, it is not expected to work well in peripheral collisions~\cite{Yan:2014afa}.
To fix this problem, a new function, named the ``Elliptic Power" distribution, was proposed in~\cite{Yan:2014afa}, which is expressed as:
\begin{equation}
p(v_{n}) = \frac{\alpha \, v_{n}}{\pi} \left( 1- v_{0}^{2} \right)^{\alpha + \frac{1}{2}} \int_0^{2\pi} \frac{ \left( 1- v_{n}^{2} \right)^{\alpha - 1} \, \mathrm{d}\phi} { \left( 1- v_{0} \,v_{n} \cos \phi \right)^{2\alpha +1}},
\end{equation}
where $\alpha$ quantifies the fluctuations and $v_{0}$ has the same meaning as in the Bessel-Gaussian parameterization.
As a promising candidate for the underlying $p.d.f.$ of the $v_{n}$ distributions, the Elliptic-Power function nicely describes the event-by-event $v_{2}$ and $v_{3}$ distributions~\cite{Yan:2014afa,Zhou:2015eya}. However, it cannot provide an equally good fit for the distributions of the higher flow harmonics ($n\geqslant 4$), which are largely influenced by the non-linear hydrodynamic response. For details, please refer to~\cite{Yan:2014afa,Zhou:2015eya}.
\vspace{0.2cm}
\underline{\emph{De-correlations of the flow-vector $V_{n}$}}:
\vspace{0.10cm}
Recently, it was realized that the particles produced at different transverse momentum $p_T$ and rapidity $y$ do not share a common flow angle or event plane. Such transverse momentum and rapidity dependent flow angles fluctuate event-by-event, which
also breaks the factorization of the flow harmonics~\cite{Heinz:2013bua,Gardim:2012im}. To evaluate the de-correlations of the flow-vector, especially their transverse momentum dependence, two new observables, $v_{n}\{2\}/v_{n}[2]$ and the factorization ratio $r_{n}$, have been proposed, which are defined as:
\begin{eqnarray}
\frac{v_{n}\{2\}}{v_{n}[2]}(p_{\rm T}^{a}) &=& \frac{ \left< v_{n}^{a} v_{n} \cos \left[ n \left( \Psi_{n}^{a} - \Psi_{n} \right) \right] \right>} {\left< v_{n}^{a} v_{n}^{a} \right>^{1/2} \left< v_{n} v_{n} \right>^{1/2} }; \\
r_{n} &=& \frac{ \left< v_{n}^{a} v_{n}^{b} \cos \left[ n \left( \Psi_{n}^{a} - \Psi_{n}^{b} \right) \right] \right>} { \left< v_{n}^{a} v_{n}^{a} \right>^{1/2} \left< v_{n}^{b} v_{n}^{b} \right>^{1/2} },
\end{eqnarray}
where $v_{n}^{a}$, $\Psi_{n}^{a}$ (or $v_{n}^{b}$, $\Psi_{n}^{b}$) are the $n^{th}$-order flow harmonics and flow angle at the transverse momentum $p_{\rm T}^{a}$ (or $p_{\rm T}^{b}$).
The $p_{\rm T}$-dependent fluctuations of the flow angle and magnitude make $v_{n}\{2\}/v_{n}[2]$ and $r_n$ deviate from unity. As shown in Fig.~\ref{fig:ptdepV}, these deviations from unity have already been observed in experiment and are qualitatively described by the related hydrodynamic calculations~\cite{Heinz:2013bua}, which indicates the existence of $p_{\rm T}$-dependent fluctuations of the flow angle and magnitude.
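The tendency of $r_n$ to fall below unity can be illustrated with a toy Monte-Carlo. The Gaussian flow-angle twist and the uncorrelated magnitude fluctuations between the two $p_{\rm T}$ bins assumed below are purely illustrative choices, not outputs of a hydrodynamic model; by the Cauchy-Schwarz inequality, $r_n \leq 1$ with equality only for fully correlated flow vectors in the two bins.

```python
import numpy as np

rng = np.random.default_rng(2024)
n, n_events = 2, 200_000

# toy event-by-event flow magnitudes in the two p_T bins (illustrative values)
v_a = rng.normal(0.08, 0.02, n_events).clip(0.0)
v_b = rng.normal(0.05, 0.015, n_events).clip(0.0)
# event-by-event twist between the flow angles at p_T^a and p_T^b
delta_psi = rng.normal(0.0, 0.15, n_events)

# factorization ratio r_n: angle and magnitude fluctuations both suppress it
r_n = np.mean(v_a * v_b * np.cos(n * delta_psi)) / \
    np.sqrt(np.mean(v_a**2) * np.mean(v_b**2))
print(r_n)  # below unity
```

For a Gaussian twist of width $\sigma_\Psi$ alone, the suppression factor is $\langle\cos n\,\delta\Psi\rangle = e^{-n^2\sigma_\Psi^2/2}$; the uncorrelated magnitude fluctuations push $r_n$ down further.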
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.95\linewidth, height=7cm]{Fig8-EPcorr}
\caption{(Color online) Centrality dependent event-plane correlations, calculated from event-by-event $\tt{VISH2+1}$ hydrodynamic simulations with MC-Glauber and MC-KLN initial conditions~\cite{Qiu:2012uy}. The data are measured by the
ATLAS collaboration~\cite{Aad:2014fla}.}
\label{fig:heinz2}
\end{figure*}
The fluctuations in the longitudinal direction have also been investigated both in experiment and in theory~\cite{Khachatryan:2015oea,Pang:2014pxa,Pang:2015zrq,Xiao:2015dma,Ma:2016fve}. Ref.~\cite{Pang:2015zrq} found that the
final state de-correlations of the anisotropic flows in different pseudorapidity regions are associated with the
spatial longitudinal de-correlations of the initial state. It also predicted larger longitudinal
de-correlations at RHIC than at the LHC, which provides opportunities to further study
the longitudinal fluctuation structures of the initial stage.
\vspace{0.2cm}
\underline{\emph{Event-plane correlations}}:
\vspace{0.10cm}
The correlations between different flow vectors could reveal more information on the initial state fluctuations and the hydrodynamic response~\cite{Zhou:2016eiz}.
In Ref.~\cite{Aad:2014fla}, the ATLAS Collaboration has measured the event-plane correlations
among two or three event-plane angles, $\langle \cos(c_nn\Psi_n + c_mm\Psi_m) \rangle$ and $\langle \cos(c_nn\Psi_n + c_mm\Psi_m + c_hh\Psi_h) \rangle$, in 2.76 A TeV Pb--Pb collisions and observed several different centrality-dependent trends for these correlators. It was also reported that the MC-Glauber model, which only involves the correlations from the initial state, cannot reproduce the trends of many of these correlators~\cite{Aad:2014fla}. Using event-by-event hydrodynamics with MC-Glauber and MC-KLN initial conditions, Qiu and Heinz systematically calculated the event-plane correlations and demonstrated that the hydrodynamic evolution is essential for an overall qualitative description of the various flow angle correlations~\cite{Qiu:2012uy}. Fig.~\ref{fig:heinz2} presents the model-to-data comparisons for several selected correlation functions, which show that, although the correlation strength is sensitive to the initial conditions and the QGP shear viscosity, hydrodynamics successfully reproduces the centrality-dependent trends of these event-plane correlations. In contrast, the correlations of the initial eccentricity planes show large discrepancies with the measured and calculated event-plane correlations of the finally produced particles, in magnitude, in qualitative centrality dependence, and even in sign~\cite{Qiu:2012uy}. In~\cite{Aad:2014fla, Bhalerao:2013ina}, it was found that AMPT simulations are also able to roughly reproduce the ATLAS data with well-tuned parameters. These different model calculations involving final state interactions~\cite{Qiu:2012uy,Aad:2014fla,Bhalerao:2013ina} demonstrate that the observed event-plane correlations are not solely driven by the initial geometry, but are largely influenced by the complicated evolution of the QGP fireball.
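A minimal sketch of one such correlator, $\langle \cos(4(\Psi_2-\Psi_4))\rangle$, evaluated on toy event planes: the Gaussian alignment of $\Psi_4$ with $\Psi_2$ assumed below mimics the nonlinear feeding of $v_4$ by $v_2^2$, and the event-plane resolution corrections applied in the actual ATLAS analysis are deliberately ignored.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = 100_000
# toy planes: Psi_2 random event by event; Psi_4 follows Psi_2 up to a
# 0.2 rad Gaussian spread (an arbitrary illustrative choice)
psi2 = rng.uniform(0.0, np.pi, n_events)
psi4 = psi2 + rng.normal(0.0, 0.2, n_events)

corr = np.mean(np.cos(4.0 * (psi2 - psi4)))
print(corr)  # close to exp(-8 * 0.2**2) for a Gaussian spread
```

A correlator near unity signals tightly locked planes; it decreases toward zero as the relative angle fluctuations grow, which is the centrality trend visible in several of the measured correlators.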
\begin{figure*}[t]
\centering
\includegraphics[width=0.49\textwidth]{Fig9-linear_nonlinear_422}
\includegraphics[width=0.49\textwidth]{Fig9-linear_nonlinear_235}
\caption{(Color online) The separate contributions from linear, non-linear and combined response to the event-plane correlations~\cite{Teaney:2013dta}, together with a comparison with the ATLAS data~\cite{Aad:2014fla}.}
\label{fig:yan}
\end{figure*}
Using a nonlinear response formalism, Ref.~\cite{Teaney:2013dta} calculated the event plane correlations from the initial energy density expanded with the cumulant method, which roughly reproduces the centrality-dependent trends of several selected correlators. It was also found that the non-linear response of the medium has a strong influence on these correlators. As shown in Fig.~\ref{fig:yan}, the linear response alone is not able to describe the $\langle \cos(4(\Psi_2 - \Psi_4)) \rangle$ and $\langle \cos(2\Psi_2 + 3\Psi_3 - 5\Psi_5) \rangle$ correlators, while a good description of the data can be achieved after combining the contributions of both the linear and the non-linear response.
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.95\textwidth]{Fig10-aliceSC}
\caption{(Color online) The centrality dependence of symmetric cumulants $SC(4,2)$ and $SC(3,2)$ in 2.76 A TeV Pb--Pb collisions~\cite{ALICE:2016kpq}. }
\label{fig:sc_ALICE}
\end{figure*}
\begin{figure*}[tbh]
\centering
\includegraphics[width=0.95\linewidth,height=10.0cm]{Fig11-vnvm}
\caption{(Color online) The centrality dependence of normalized symmetric cumulants ${\rm NSC}(m,n)$ and the corresponding normalized symmetric cumulants of the initial eccentricity coefficients ${\rm NSC}^{\varepsilon}(m,n)$ in 2.76 A TeV Pb--Pb collisions, calculated from event-by-event VISH2+1 simulations with MC-Glauber, MC-KLN and AMPT initial conditions~\cite{Zhu:2016puf}.}
\label{fig:sc_hydro4}
\end{figure*}
\vspace{0.2cm}
\underline{\emph{Correlations of flow harmonics}}:
\vspace{0.10cm}
Besides the event-plane correlations, the correlations between different flow harmonics are other important observables closely related to the correlations of the flow vectors, which could further reveal the initial state correlations and the hydrodynamic response. Using the Event-Shape Engineering (ESE)~\cite{Schukraft:2012ah}, the ATLAS Collaboration first measured the correlations between flow harmonics based on 2-particle correlations
and found that $v_{2}$ and $v_{3}$ are anti-correlated, while $v_{2}$ and $v_{4}$ are correlated~\cite{Aad:2015lwa}~\footnote{For the related qualitative investigations from hydrodynamics, please refer to~\cite{Qian:2016pau}.}. Recently, a new observable, called the Symmetric Cumulant $SC^{v}(m, n)$, was proposed as an alternative approach to measure the correlations between different flow harmonics. It is defined as $SC^{v}(m, n)= \left< v_{m}^{2} \, v_{n}^{2} \right> - \left< v_{m}^{2} \right> \left< v_{n}^{2} \right>$ and can be measured with the multi-particle cumulant method. The related Monte-Carlo model simulations imply that $SC^{v}(m, n)$ is insensitive to non-flow effects~\cite{ALICE:2016kpq}. Besides, $SC^{v}(m, n)$ is independent of the symmetry plane correlations by design~\cite{Bilandzic:2013kga}.
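The definition of $SC^{v}(m,n)$ can be made concrete with a toy event sample. The correlation structure imposed below ($v_4$ fed by $v_2^2$, $v_3$ anti-correlated with $v_2$) is an illustrative assumption chosen to reproduce the measured signs; it is not a dynamical model, and all magnitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)
n_events = 200_000
# toy event-by-event flow magnitudes (illustrative values and correlations)
v2 = rng.normal(0.10, 0.03, n_events).clip(0.0)
v3 = (0.06 - 0.2 * (v2 - 0.10) + rng.normal(0.0, 0.01, n_events)).clip(0.0)
v4 = (0.03 + 0.5 * v2**2 + rng.normal(0.0, 0.005, n_events)).clip(0.0)

def sc(vm, vn):
    """Symmetric cumulant SC(m,n) = <v_m^2 v_n^2> - <v_m^2><v_n^2>."""
    return np.mean(vm**2 * vn**2) - np.mean(vm**2) * np.mean(vn**2)

def nsc(vm, vn):
    """Normalized symmetric cumulant NSC(m,n) = SC(m,n) / (<v_m^2><v_n^2>)."""
    return sc(vm, vn) / (np.mean(vm**2) * np.mean(vn**2))

print(sc(v4, v2) > 0, sc(v3, v2) < 0)  # signs as in the data: SC(4,2)>0, SC(3,2)<0
print(nsc(v4, v2), nsc(v3, v2))
```

By construction, $SC^{v}(m,n)$ vanishes for statistically independent $v_m$ and $v_n$, so its sign directly encodes whether the two harmonics fluctuate together or against each other.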
Fig.~\ref{fig:sc_ALICE} (left) shows the centrality dependent symmetric cumulants $SC^{v}(4, 2)$ and $SC^{v}(3, 2)$ in 2.76 A TeV Pb--Pb collisions, measured by ALICE~\cite{ALICE:2016kpq} and calculated from the EKRT event-by-event hydrodynamics~\cite{Niemi:2015qia}. The positive values of $SC^{v}(4, 2)$ and negative values of $SC^{v}(3, 2)$ are consistent with the early observation from ATLAS~\cite{Aad:2015lwa}, and again illustrate that $v_{2}$ is anti-correlated with $v_{3}$, but correlated with $v_{4}$. A comparison between the model calculations and the experimental data in Fig.~\ref{fig:sc_ALICE} also shows that, although hydrodynamics successfully reproduces the integrated flow harmonics $v_{n}$, it can only qualitatively, but not quantitatively, describe the correlations between these harmonics.
In Ref.~\cite{Zhu:2016puf}, the symmetric cumulants $SC^{v}(m, n)$ and other related observables have been systematically calculated with the event-by-event viscous hydrodynamics VISH2+1, with a focus on the influences of different initial conditions and the QGP shear viscosity. As in the case of the early EKRT hydrodynamic simulations, all of these VISH2+1 simulations with MC-Glauber, MC-KLN and AMPT initial conditions could capture the sign and centrality dependence of $SC^{v}(4, 2)$ and $SC^{v}(3, 2)$, but were not able to achieve a simultaneous quantitative description of these two symmetric cumulants for all centrality intervals. Compared with the individual flow harmonics $v_2$ and $v_3$, the symmetric cumulants $SC^{v}(4,2)$ and $SC^{v}(3,2)$ are more sensitive to the details of the theoretical calculations. Ref.~\cite{Zhu:2016puf} also predicted other
symmetric cumulants, $SC^{v}(5, 2)$, $SC^{v}(5, 3)$, and $SC^{v}(4, 3)$, and found that $v_2$ and $v_5$ as well as $v_3$ and $v_5$ are correlated, while $v_3$ and $v_4$ are anti-correlated for various centralities.
In order to get rid of the influences from the individual flow harmonics, it was suggested to normalize $SC^{v}(m,n)$ by the product $\left<v_m^2\right>\left<v_n^2\right>$~\cite{ALICE:2016kpq}. Fig.~\ref{fig:sc_ALICE} (right) and Fig.~\ref{fig:sc_hydro4} (a,b,c,g,h) plot the normalized symmetric cumulants $NSC^{v}(n,m)$ ($NSC^{v}(n,m)$ = $SC^{v}(n,m)$/$\left<v_n^2\right>\left<v_m^2\right>$) in 2.76 A TeV Pb--Pb collisions. $NSC^{v}(4,2)$ exhibits a clear sensitivity to the initial conditions and the $\eta/s(T)$ parameterizations, which could provide additional constraints on the initial geometry and
the transport coefficients of the hot QCD matter. In contrast, $NSC^{v}(3,2)$ is insensitive to the detailed setting of $\eta/s$ and to the initial conditions used. Fig.~\ref{fig:sc_hydro4} also shows that the values of $NSC^{v}(3,2)$ are compatible with those of $NSC^{\varepsilon}(3,2)$ from the initial state, due to the linear response of $v_2$ ($v_3$) to $\varepsilon_2$ ($\varepsilon_3$). Note that the different $NSC^{v}(3, 2)$ curves in Fig.~\ref{fig:sc_hydro4} (g) almost overlap with each other and roughly fit the normalized ALICE data. In contrast, the predicted $NSC^{v}(4, 2)$, $NSC^{v}(5, 2)$, and $NSC^{v}(5, 3)$ are sensitive to both the initial conditions and $\eta/s$. Due to the non-linear hydrodynamic response,
$NSC^{v}(4, 3)$ does not necessarily follow the sign of $NSC^{\varepsilon}(4, 3)$ for certain initial conditions.
In a recent work~\cite{Giacalone:2016afq}, the $NSC^{v}(m,n)$ are expressed in terms of symmetry plane correlations and moments of $v_{2}$ and $v_{3}$. Considering that the relative flow fluctuations of $v_{3}$ are stronger than those of $v_{2}$, one expects smaller values for $NSC^{v}(5,2)$ compared to $NSC^{v}(5,3)$, as shown in Fig.~\ref{fig:sc_hydro4}.
On the other hand, it was predicted that the $NSC^{v}(m,n)$ involving $v_{4}$ and $v_{5}$ increase with $\eta/s$ in the same way as the symmetry plane correlations~\cite{Teaney:2012ke, Teaney:2012gu}, which qualitatively agrees with the results in Fig.~\ref{fig:sc_hydro4} from the most central to semi-peripheral collisions.
As discussed above, the lower flow harmonics, $v_{2}$ and $v_{3}$, are mainly determined by a linear response to the initial eccentricities $\varepsilon_{2}$ and $\varepsilon_{3}$, while the higher flow harmonics ($v_{n}$ with $n>$ 3) not only contain the contributions from the linear response to the corresponding $\varepsilon_{n}$, but also receive additional contributions from the lower-order initial anisotropy coefficients. These additional contributions are usually called the non-linear response of the higher flow harmonics~\cite{Bhalerao:2014xra,Yan:2015jma}. In Ref.~\cite{Giacalone:2016afq}, it was proposed that a direct connection between the symmetry plane correlations and the flow harmonic correlations $NSC^{v}(m,n)$ could be built from the non-linear hydrodynamic response of the higher flow harmonics. Besides, past hydrodynamic calculations have shown that the contributions of the non-linear response can explain the symmetry plane correlations and their centrality dependence~\cite{Yan:2015jma,Qian:2016pau}. Recently, the proposed non-linear hydrodynamic response coefficients~\cite{Yan:2015jma} have been systematically studied and measured~\cite{Qian:2016pau,ALICE:NLR,CMS:NLR}, which could be used to further constrain the initial conditions and $\eta/s$, and to provide a better understanding of the correlations between different flow harmonics.
\section{Correlations and Collective flow in small systems}
\subsection{p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV }
High energy proton-lead (p--Pb) collisions at the LHC were originally aimed at studying cold nuclear matter effects and providing the corresponding reference data for Pb--Pb collisions at the LHC. However, many unexpected collective phenomena have been observed in experiments. For example, the measured two particle correlations showed a symmetric double ridge structure on both the near and away side in high multiplicity p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV~\cite{CMS:2012qk,Abelev:2012ola,Aad:2013fja,Khachatryan:2015waa}.
Besides, negative 4- and 8-particle cumulants and positive 6-particle cumulants have been observed in the high multiplicity events~\cite{Aad:2013fja,Abelev:2014mda,Khachatryan:2015waa}. In particular, all the multi-particle cumulants (including the 4-, 6- and 8-particle cumulants) are compatible with the ones obtained from all-particle correlations with the Lee-Yang Zeros (LYZ) method, which corresponds to $v_{2}\{4\} \approx v_{2}\{6\} \approx v_{2}\{8\} \approx v_{2}\{{\rm LYZ}\}$~\cite{Khachatryan:2015waa}, as shown in Fig.~\ref{fig:CMSpPb} (this observation has also been confirmed by the later ATLAS~\cite{Aad:2013fja} and ALICE~\cite{Abelev:2014mda} measurements).
Meanwhile, the $v_{2}$ obtained from two- or four-particle cumulants is comparable to the one from Pb--Pb collisions at 2.76 TeV~\cite{Aad:2013fja,Chatrchyan:2013nka,ABELEV:2013wsa,Khachatryan:2015waa}. Recently,
the ALICE collaboration has extended the investigation of anisotropic collectivity via azimuthal correlations of identified hadrons~\cite{ABELEV:2013wsa,Khachatryan:2014jra}. A typical mass-ordering feature among the $v_{2}$ of pions, kaons and protons is observed in high multiplicity p--Pb collisions~\cite{ABELEV:2013wsa}. Similarly, the CMS Collaboration found a $v_2$ mass-ordering between ${\rm K_{S}^{0}}$ and $\Lambda(\overline{\Lambda})$~\cite{Khachatryan:2014jra}.
Many theoretical efforts attempt to explain the flow-like behavior of p--Pb collisions. In general, they can be divided into two big categories: those that do not involve any final-state evolution of the medium but only account for initial-state effects~\cite{Dusling:2012iga,Dusling:2012cg,Dusling:2012wy,
Dusling:2013qoz,Dusling:2014oha,Kovner:2012jm,Dumitru:2014dra,Dumitru:2014vka,Noronha:2014vva}, and those that include final-state interactions, such as hydrodynamic or kinetic model descriptions~\cite{Bozek:2011if,Bozek:2012gr,Bozek:2013ska,Bzdak:2013zma,Qin:2013bha,
Werner:2013ipa,Schenke:2014zha,Bzdak:2014dia,Ma:2014pva,Bozek:2015swa,Koop:2015wea,Li:2016ubw,Zhou:2015iba}.
In this section, we will focus on reviewing the hydrodynamic calculations as well as the kinetic model investigations of the flow-like signals in the small p--Pb systems.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{Fig12-CMS}
\caption{(Color online) Multiplicity dependence of $v_2$, obtained from Fourier decomposition of 2-particle azimuthal correlations, from multi-particle cumulants, and via LYZ method, in Pb--Pb collisions at $\sqrt{s_{\rm NN}}=$ 2.76 TeV (left) and p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV (right)~\cite{Khachatryan:2015waa}.}
\label{fig:CMSpPb}
\end{figure*}
\begin{figure*}[t]
\includegraphics[angle=0,width=0.49 \textwidth, height=6.2cm]{Fig13-Bozek-v23cms}
\includegraphics[angle=0,width=0.49 \textwidth, height=6.2cm]{Fig13-Bozek-v2id}
\caption{(Color online) The hydrodynamic calculations of the elliptic and triangular flow coefficients of all charged particles (left panel) and the elliptic flow of identified hadrons (right panel) in p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV~\cite{Bozek:2013ska}, together with a comparison with the CMS~\cite{Chatrchyan:2013nka} and ALICE~\cite{ABELEV:2013wsa} data.
\label{fig:v23}}
\end{figure*}
\vspace{0.2cm}
\underline{\emph{Results from hydrodynamic simulations}}:
\vspace{0.10cm}
Hydrodynamics is a useful tool to simulate the collective expansion of the created systems and to quantitatively study and predict the final flow observables. Recently, holographic duality calculations have shown that the size of the produced droplet is $\sim 1/T_{eff}$~\cite{Chesler:2015bba,Chesler:2016ceu}, which indicates that hydrodynamics is possibly applicable to the small systems created in high energy p--Pb and p--p collisions. Using 3+1-d hydrodynamic or hybrid model simulations, different groups have systematically studied the multiplicities, mean $p_T$, final state correlations and related flow data
in p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV~\cite{Bozek:2012gr,Bozek:2011if,Bozek:2013ska,Bzdak:2013zma,Qin:2013bha,Werner:2013ipa,Schenke:2014zha}.
In general, these hydrodynamic calculations could semi-quantitatively describe these different soft hadron data, which supports the observation of collective flow in high energy p--Pb collisions.
Fig.~\ref{fig:v23} (left) presents the hydrodynamic calculations of the flow coefficients $v_2$ and $v_3$ of all charged hadrons in high multiplicity p--Pb collisions, which roughly fit the data from the CMS collaboration~\cite{Bozek:2013ska}. It was also found that such fluid evolution develops radial flow, which
leads to flatter transverse momentum spectra for the various hadron species. As shown in Ref.~\cite{Bozek:2013ska}, the average transverse momenta of the identified hadrons in p--Pb collisions can be consistently fitted by the hydrodynamic simulations. In contrast, the HIJING model without any collective expansion fails to describe the data. In the hydrodynamic language, the interplay between radial and elliptic flow re-distributes the total momentum anisotropy among the various hadron species, leading to a mass-ordering of the flow harmonics. Fig.~\ref{fig:v23} (right) shows that the hydrodynamic simulations roughly reproduce the $v_2$ mass-ordering of pions, kaons and protons. Note that other hydrodynamic calculations with different initial conditions and transport coefficients also obtained similar results. For details, please refer to~\cite{Bzdak:2013zma,Qin:2013bha,Werner:2013ipa,Schenke:2014zha}.
Ref.~\cite{Bozek:2013uha} has shown that, in order to reproduce the multiplicity distribution of p--Pb collisions with
hydrodynamic calculations using Glauber initial conditions, additional negative binomial fluctuations
must be implemented. Correspondingly, the initial eccentricities are also modified, which leads to a simultaneous fit of the $v_2\{2\}$ and $v_2\{4\}$ data. In contrast, the early IP-Glasma initial condition generates initial energy distributions with an imprinted spherical shape of the proton, which yields a very small $v_2$ for the p--Pb collision systems~\cite{Schenke:2014zha}. This motivates the recent investigations of the proton structure within the saturation framework, which indicate that the shape of the proton also fluctuates event-by-event~\cite{Mantysaari:2016ykx,Mantysaari:2016jaz}.
Note that flow-like signals have also been observed in d--Au and $^3\mathrm{He}$--Au collisions at RHIC. Compared to the p--A collisions at the LHC, the d--Au and $^3\mathrm{He}$--Au collisions provide controlled initial geometry deformations, which are less sensitive to the details of the initial state models and are helpful to check the hydrodynamic calculations. Recently, the STAR and PHENIX collaborations have measured the elliptic flow $v_2$ in d--Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV and the elliptic and triangular flow $v_2$ and $v_3$ in $^3\mathrm{He}$--Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV~\cite{Adare:2013piz,Adare:2014keg,Adamczyk:2015xjc,Adare:2015ctn}. The hydrodynamic calculations from different groups, using various initial conditions and values of the QGP shear viscosity, roughly described these extracted flow data. It was also found that $v_2$ and $v_3$ follow $\varepsilon_2$ and $\varepsilon_3$ from the initial state, which supports the collective expansion in these small systems created at RHIC~\cite{Bzdak:2013zma,Qin:2013bha,Koop:2015trj,Bozek:2015qpa,Romatschke:2015gxa}.
Compared with the case of Pb--Pb collisions, the initial sizes of the systems created in p--Pb collisions are much smaller. The subsequent collective expansion is expected to enlarge the size of the fireball, and the corresponding radii at freeze-out can be measured by Hanbury-Brown--Twiss (HBT) correlations. In Ref.~\cite{Adam:2015pya}, the ALICE collaboration has measured the three-dimensional pion femtoscopic radii in p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV, which showed that the size of the p--Pb systems lies in between the ones obtained from p--p collisions and peripheral Pb--Pb collisions. In general, the hydrodynamic calculations could roughly describe the HBT measurements, while the quantitative values from different model calculations are sensitive to the initial conditions and the imprinted initial sizes of the created fireball~\cite{Romatschke:2015gxa,Bozek:2014fxa,Shapoval:2013jca}.
In~\cite{Niemi:2014wta}, the validity of hydrodynamics for large Pb--Pb and small p--Pb systems at the LHC has been evaluated by tracing the space-time evolution of the Knudsen number. It was found that, for Pb--Pb collisions, hydrodynamic simulations with $\eta/s \sim 1/4\pi$ always stay within the validity regime, with Knudsen numbers well below one. However, the related simulations for the smaller p--Pb systems show that the hydrodynamic description has broken down at the $T_{dec} = 100 \ \mathrm{MeV}$ freeze-out boundary, even when a minimal QGP
shear viscosity is used as input. Although such investigations do not preclude collective flow and final state
interactions, it is worthwhile to explore the physics of the small p--Pb systems within other frameworks beyond hydrodynamics.
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth,height=5.6cm]{Fig14-ppb-fig3}
\includegraphics[width=0.35\textwidth,height=5.6cm]{Fig14-ppb-fig4}
\caption{(Color online) Centrality dependence of $c_{2}\{2\}$ (left) and $c_{2}\{4\}$ (right), calculated from UrQMD~\cite{Zhou:2015iba} and measured by ALICE~\cite{Abelev:2014mda}.}
\label{figure:c224}
\end{figure*}
\vspace{0.2cm}
\underline{\emph{Results from other approaches}}:
\vspace{0.10cm}
Without final state interactions, the long range rapidity correlations in high energy
p--p and p--Pb collisions have been calculated within the framework of the Color Glass Condensate (CGC),
which shows a good agreement with the di-hadron data from CMS, ATLAS and ALICE~\cite{Dusling:2012iga,Dusling:2012cg,Dusling:2012wy,Dusling:2013qoz}. However, the odd harmonics data
disfavor these early CGC calculations without rescattering contributions~\cite{Dusling:2014oha}. Without a proper hadronization procedure, such calculations can
also not predict the flow data of identified hadrons.
Recently, it was proposed that the presence of colored domains inside the proton and the nucleus breaks
rotational invariance, which helps to generate elliptic and triangular flow during the scatterings between
a dilute projectile of valence quarks and the
nucleus~\cite{Kovner:2012jm,Dumitru:2014dra,Dumitru:2014vka,Noronha:2014vva}.
An alternative approach is the classical Yang-Mills simulations, which treat
both proton and nucleus as dense QCD objects with high gluon
occupancy and are more appropriate to describe the early
time evolution of the created p--Pb systems in the high multiplicity events.
Within such a framework, Schenke and his collaborators have calculated the
single and double inclusive gluon distributions and extracted the associated $p_T$-dependent
elliptic and triangular flow of gluons in high energy p--A collisions~\cite{Schenke:2015aqa}. They found that the final state
effects in the classical Yang-Mills evolution build up a non-zero triangular flow, but only slightly
modify the large elliptic flow of gluons created from the initial state~\cite{Schenke:2015aqa}. Although this investigation
only focuses on the flow anisotropy of gluons,
the obtained large values of $v_2$ and $v_3$ indicate that such pre-equilibrium dynamics
should be combined with model calculations of the final state interactions, such as hydrodynamics
or Boltzmann simulations.
The flow signals in p--Pb collisions have also been investigated within the framework of a multiphase transport model (AMPT)~\cite{Bzdak:2014dia,Ma:2014pva,Bozek:2015swa,Koop:2015wea,Li:2016ubw}. With tuned cross-sections within the allowed range $\sigma\sim 1.5-3 \ \mathrm{mb}$, AMPT nicely fits the two particle correlations and the extracted $v_2$ and $v_3$ coefficients in high energy p--Pb collisions~\cite{Bzdak:2014dia,Ma:2014pva}. Refs.~\cite{Bzdak:2014dia,Li:2016ubw} have shown that AMPT generates a mass-ordering of $v_2$ and $v_3$ for various hadron species with the coalescence process turned on. It was also surprisingly observed that the collective behavior in AMPT is built up by a small number of interactions, where each parton undergoes only two collisions on average. The escape mechanism proposed in~\cite{Li:2016ubw, Li:2016flp} seems to be responsible for the anisotropy buildup in AMPT, but is dramatically different from the traditional picture of hydrodynamic flow development driven by strong interactions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\textwidth]{Fig15-ppb-fig9a}
\includegraphics[width=0.4\textwidth]{Fig15-ppb-fig9b}
\caption{(Color online) $v_{2}(p_{\rm T})$ of pions, kaons and protons in p--Pb collisions at $\sqrt{s_{_{\rm NN}}} =$ 5.02 TeV, calculated from {\tt UrQMD} with and without M-M and M-B collisions~\cite{Zhou:2015iba}.}
\label{fig:pidv2Gap02}
\end{figure*}
With the assumption that high energy p--Pb collisions do not reach the threshold to create the QGP, but only produce purely hadronic systems, Ref.~\cite{Zhou:2015iba} systematically investigated the 2- and 4-particle correlations of all charged and identified hadrons, using the hadron cascade model Ultra-relativistic Quantum Molecular Dynamics (UrQMD)~\cite{Bass:1998ca,Bleicher:1999xi,Petersen:2008kb}. Fig.~\ref{figure:c224} shows the two- and four-particle cumulants $c_{2}\{2\}$ and $c_{2}\{4\}$ of all charged hadrons, calculated from UrQMD and measured by ALICE. In general, $c_{2}\{2\}$ decreases with increasing pseudorapidity gap, which agrees with the expectation that a large pseudorapidity gap suppresses the non-flow effects. However, UrQMD still presents a strong centrality dependence of $c_{2}\{2\}$ for $|\Delta \eta|>1.0$, which indicates that the remaining non-flow effects are still strong there. In Fig.~\ref{figure:c224} (right), the $c_{2}\{4\}$ from ALICE exhibits a transition from positive to negative values, which indicates the creation of flow-dominated systems in the high multiplicity events. In contrast, $c_{2}\{4\}$ from the UrQMD simulations remains positive for all multiplicity classes, which illustrates that the p--Pb systems created by UrQMD are non-flow dominated.
However, a feature generally attributed to collective expansion, the mass-ordering of $v_{2}(p_{\rm T})$, is reproduced in
the UrQMD simulations. Fig.~\ref{fig:pidv2Gap02} shows that these high multiplicity events from UrQMD
present a clear $v_2$ mass-ordering among pions, kaons and protons, which qualitatively agrees with the corresponding ALICE measurement~\cite{ABELEV:2013wsa}. In UrQMD, the meson-baryon (M-B) cross sections from AQM are about 50\% larger than the meson-meson (M-M) ones, which leads to the $v_{2}$ splitting between mesons and baryons in the UrQMD simulations. Fig.~\ref{fig:pidv2Gap02} also shows that, after switching off the M-B and M-M interaction channels, the characteristic $v_{2}$ mass-ordering disappears. Therefore, even without sufficient flow generation, the hadronic interactions still lead to a $v_{2}$ mass-ordering for a hadronic p--Pb system.
In Ref.~\cite{Romatschke:2015dha}, the created p--Pb systems are described by non-interacting free-streaming particles, followed by a hadronization procedure and a hadronic cascade evolution. Such non-hydrodynamic simulations showed that,
although the elliptic flow is under-predicted, the triangular and quadrupolar flow are raised by the free-streaming
evolution to values comparable to the ones obtained from hydrodynamic simulations. Meanwhile, the $v_n$ mass-orderings
among pions, kaons and protons have also been observed in such non-hydrodynamic p--Pb systems, due to
the hadronic interactions during the late evolution.
\subsection{p--p collisions at $\sqrt{s_{\rm NN}}=$ 7 TeV and 13 TeV}
As in high energy p--Pb collisions, long-range two-particle azimuthal correlations with a large pseudo-rapidity separation have also been observed in high-multiplicity p--p collisions at the LHC, which provides new insights into the novel dynamics of small QCD systems~\cite{Khachatryan:2010gv,Li:2012hc,Khachatryan:2015lva,Aad:2015gqa,Khachatryan:2016txc}.
For p--Pb collisions at $\sqrt{s_{\rm NN}}=$ 5.02 TeV, the extensive measurements of 2-particle and multi-particle correlations, the extracted flow harmonics of all charged and identified hadrons, as well as the supportive hydrodynamic calculations, strongly indicate that collective expansion develops in the small p--Pb systems. However, for high-energy p--p collisions at the LHC, the nature of the observed long-range correlations is still an open question (for different theoretical interpretations, please refer to~\cite{Dusling:2013qoz,Dusling:2012iga,Dumitru:2010iy,Levin:2011fb,Tribedy:2011aa,
Bozek:2010pb,Bzdak:2013zma,Werner:2010ss,Schenke:2014zha,Schenke:2016lrs,Dusling:2015gta}).
Recently, the ATLAS Collaboration has measured the Fourier coefficients $v_{n}$ in p--p collisions at $\sqrt{s_{\rm NN}}=$ 13 TeV, using two-particle correlations as a function of the relative azimuthal angle and pseudo-rapidity~\cite{Aad:2015gqa}. It was found that the extracted $v_{2}$ is approximately constant as a function of multiplicity and that its $p_{\rm T}$ dependence is very similar to the one measured in p--Pb and Pb--Pb collisions~\cite{Aad:2015gqa}.
The CMS collaboration further measured the $v_n$ coefficients of all charged hadrons, as well as of $K_S^0$ and $\Lambda/ \overline{\Lambda}$, in p--p collisions at $\sqrt{s_{\rm NN}}=$ 5, 7 and 13 TeV, and
observed a clear $v_2$ mass-ordering among all charged hadrons, $K_S^0$ and $\Lambda/ \overline{\Lambda}$~\cite{Khachatryan:2016txc}.
Furthermore, the CMS collaboration has measured the multi-particle cumulants, the key observables to probe anisotropic collectivity. A negative sign of $c_{2}\{4\}$ and a positive sign of $c_{2}\{6\}$ appeared in high multiplicity p--p collisions at $\sqrt{s_{\rm NN}}=$ 13 TeV~\cite{Khachatryan:2016txc}, which seems to indicate the development of anisotropic collectivity in high energy p--p collisions. However, the ATLAS Collaboration reported at the Hard Probes 2016 conference that multiplicity fluctuations could significantly bias the measurements of multi-particle cumulants~\cite{ATLAS:HP2016}, which indicates that non-flow might mimic the flow signal by pushing $c_2\{4\}$ to negative values. In order to avoid the bias from multiplicity fluctuations, the so-called ``Method1'', which uses the same multiplicity selection for the calculations of the cumulants and $N_{\rm trk}$, is applied. The obtained $c_{2}\{4\}$, which is less affected by multiplicity fluctuations, does not show a negative sign in the multiplicity regions where negative values of $c_{2}\{4\}$ were reported by CMS.
For small systems, it is also very important to address and evaluate the non-flow effects. Generally, the multi-particle cumulants, e.g. $c_2\{4\}$, are able to suppress the non-flow of two-particle correlations in traditional Au+Au or Pb+Pb collisions. However, non-flow contributions to the multi-particle correlations still remain and might play a non-negligible role in the small p--p collision systems. Recently, the ALICE and ATLAS Collaborations have proposed new 4-particle cumulant methods with a $|\Delta\eta|$ gap separation, using 2 or 3 subevents~\cite{ALICE:QM2017,ATLAS:QM2017}. By selecting particles from different regions separated by a $|\Delta\eta|$ gap, it is possible to further suppress the non-flow contributions in the multi-particle cumulants, which has been verified in PYTHIA simulations~\cite{Jia:2017hbm}. The preliminary measurements in p--p collisions at 13 TeV, reported at QM2017~\cite{ALICE:QM2017,ATLAS:QM2017}, have shown that the non-flow effects are indeed suppressed with these new 4-particle cumulant methods. A negative sign of the 4-particle cumulant was observed by the ATLAS collaboration after implementing the 3-subevent method, while ALICE has not confirmed the negative sign of $c_2\{4\}$ with a $|\Delta\eta|$ gap separation, due to the limited statistics and relatively smaller acceptance.
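As a methodological aside, the standard (no-subevent) Q-cumulant construction behind $c_2\{2\}$ and $c_2\{4\}$ can be sketched numerically as follows. This is a minimal illustration using the direct Q-cumulant expressions, not the experimental implementations used by the collaborations; multiplicities, weights and acceptance effects are deliberately ignored.

```python
import numpy as np

def flow_vector(phis, n):
    # Q_n = sum_j exp(i n phi_j) over the particles of one event
    return np.sum(np.exp(1j * n * np.asarray(phis)))

def c2_cumulants(events, n=2):
    """Event-averaged c_n{2} and c_n{4} from per-event lists of azimuthal
    angles, via the direct Q-cumulant expressions.  No eta-gap/subevents,
    so non-flow is NOT suppressed -- the issue discussed in the text."""
    s2 = s4 = w2 = w4 = 0.0
    for phis in events:
        M = len(phis)
        if M < 4:
            continue
        Qn  = flow_vector(phis, n)
        Q2n = flow_vector(phis, 2 * n)
        # <2> = (|Qn|^2 - M) / (M(M-1)), with self-correlations removed
        two = (abs(Qn)**2 - M) / (M * (M - 1))
        # <4> from |Qn|^4, |Q2n|^2 and the mixed term
        four = (abs(Qn)**4 + abs(Q2n)**2
                - 2.0 * np.real(Q2n * np.conj(Qn)**2)
                - 2.0 * (2.0 * (M - 2) * abs(Qn)**2 - M * (M - 3))) \
               / (M * (M - 1) * (M - 2) * (M - 3))
        w2e = M * (M - 1)                      # number of 2-plets
        w4e = M * (M - 1) * (M - 2) * (M - 3)  # number of 4-plets
        s2 += w2e * two;  w2 += w2e
        s4 += w4e * four; w4 += w4e
    avg2, avg4 = s2 / w2, s4 / w4
    return avg2, avg4 - 2.0 * avg2**2          # c_n{2}, c_n{4}
```

For pure flow of magnitude $v_2$, $c_2\{2\}\to v_2^2$ and $c_2\{4\}\to -v_2^4$, so a negative $c_2\{4\}$ is the collectivity signal; non-flow clusters shared among the 2- and 4-plets bias both quantities, which is precisely what the subevent variants are designed to suppress.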
Besides the multi-particle cumulants of single flow harmonics, the CMS Collaboration also measured the symmetric cumulants SC(m,n) and normalized symmetric cumulants NSC(m,n) in p--p, p--Pb and Pb--Pb collisions~\cite{CMS:QM2017}. It was found that NSC(3,2) is similar in p--Pb and Pb--Pb collisions, indicating that these two systems present similar initial state fluctuation patterns for the correlations between $\varepsilon_2$ and $\varepsilon_3$. In contrast, NSC(4,2) shows a certain ordering among the p--p, p--Pb and Pb--Pb collision systems, which may be associated with the different non-linear response and non-flow effects in the large and small systems.
In short, these recent measurements in p--p collisions at $\sqrt{s_{\rm NN}}=$ 13 TeV aim to evaluate whether or not collective flow is created in high multiplicity p--p collisions. Future investigations, from both the experimental and theoretical sides, are crucial to further address this question and to deepen the understanding of the underlying physics
in the small collision systems.
\section{Summary}
In this paper, we briefly reviewed collective flow and hydrodynamics in large and small systems at the LHC.
One of the important messages we would like to convey to readers is that hydrodynamics and hybrid models are
important and useful tools to study various flow observables in high energy nucleus-nucleus and nucleon-nucleus collisions.
With a properly chosen initial condition and well tuned QGP transport coefficients, hydrodynamics and hybrid models can quantitatively describe the flow harmonic coefficients $v_n$ of all charged hadrons and make very nice predictions for the flow data of identified hadrons. The massive-data fitting of the flow harmonics and other related soft hadron data, using sophisticated hybrid model simulations, has extracted the temperature-dependent QGP shear and bulk viscosities at the LHC, which demonstrated that the created QGP is an almost perfect fluid with a very small shear viscosity close to the KSS bound.
For some flow observables in high energy Pb--Pb collisions, e.g. the event plane correlations, the correlations between different flow harmonics, etc., hydrodynamic and hybrid models can qualitatively, but not quantitatively, describe the data.
However, such qualitative descriptions can still be considered a success of hydrodynamics, considering that the initial state fluctuations contain intrinsic patterns different from the ones extracted from the final state correlations. The succeeding hydrodynamic evolution drastically changes some of these initial state correlations, even their signs, which makes a quantitative description of the data challenging. On the other hand, these flow data are more sensitive to the details of the theoretical model calculations. A further study of these flow observables could reveal more information on the initial state fluctuations, the non-linear hydrodynamic response, etc., which could also help us to further constrain the initial state models and to precisely extract the QGP transport coefficients in the future.
As a hot research topic, the flow-like signals in high energy p--Pb and p--p collisions at the LHC have been widely investigated in both experiment and theory. For high multiplicity p--Pb collisions, the observed sign change of the 4-particle cumulants, the $v_2$ mass-orderings, and the supportive calculations from hydrodynamics, etc., strongly indicate the development of collective expansion in the small p--Pb systems. For high energy p--p collisions, similar results, but with smaller magnitudes, have been observed for many flow-like observables. Although these measurements may also be associated with collective expansion, more detailed investigations are still needed to further understand the physics of the small p--p systems.\\
\noindent\textbf{\emph{Acknowledgments}}
This work is supported by the NSFC and the MOST under grant Nos.11435001, 11675004 and 2015CB856900 and by the Danish Council for Independent Research, Natural Sciences, and the Danish National Research Foundation (Danmarks Grundforskningsfond).
\section{CRM Simulations of RCE}
We begin by studying precipitation change in one of the simplest systems in which the radiative constraint on precipitation \eqnref{rad_precip_constraint} operates, namely tropical oceanic radiative-convective equilibrium (RCE) with fixed sea-surface temperature. This system approximates the real tropics, where the majority of Earth's precipitation occurs \citep{simpson1988}, and like the GCMs exhibits precipitation increases of roughly 2--3\%~\ensuremath{\mathrm{K^{-1}}} \citep{romps2011, muller2011b}.
We simulate RCE using Das Atmosph\"arische Modell \citep[DAM,][]{romps2008}, a three-dimensional, fully-compressible, non-hydrostatic cloud-resolving model, coupled to radiation via the Rapid Radiative Transfer Model
\citep[RRTM,][]{mlawer1997}. DAM employs the six-class Lin-Lord-Krueger microphysics scheme \citep{lin1983, lord1984, krueger1995}, and relies on implicit LES \citep{margolin2006} for sub-grid scale transport, so no explicit sub-grid scale turbulence scheme is used.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{rhov.pdf}
\caption{Profiles of $\ensuremath{\rho_\mathrm{v}}(T)$ from our RCE simulations at various \ensuremath{T_\mathrm{s}}, with both linear and log scales. These profiles are `\ensuremath{T_\mathrm{s}}-invariant' in the sense that $\ensuremath{\rho_\mathrm{v}}(T)$ does not depend on \ensuremath{T_\mathrm{s}}, i.e. that the \ensuremath{\rho_\mathrm{v}}\ profiles at different \ensuremath{T_\mathrm{s}}\ collapse onto a single curve.
\label{rhov_fig}
}
\end{center}
\end{figure}
Our RCE simulations ran on a square doubly-periodic domain of horizontal dimension $L=72$ km, with a horizontal resolution of $dx=1$ km. The vertical grid stretched smoothly from 50 m resolution below 1000 m to 250 m resolution between 1000 m and 5000 m, and then to 500 m up to the model top at 30 km. We calculated surface heat and moisture fluxes using a bulk aerodynamic formula, and used a pre-industrial \ensuremath{\mathrm{CO_2}}\ concentration of 280 ppm with no ozone except where specified otherwise. To explore precipitation changes with warming we ran five experiments at surface temperatures of $\ensuremath{T_\mathrm{s}}=(280,290,300,310,320)$ K. Our runs branched off the equilibrated runs described in \cite{romps2014}, and were run for 60 days to iron out any artifacts from changing the domain and resolution. All vertical profiles are time-mean and domain-mean, averaged over the last 20 days of each run.
Since we run with prescribed \ensuremath{T_\mathrm{s}}, our warming experiments are somewhat artificial, in that the warming is not driven by increases in \ensuremath{\mathrm{CO_2}}. This has the advantage that we isolate part of the physics and thus have a better chance at arriving at a simple description, but has the disadvantage that we omit the direct effect of increased \ensuremath{\mathrm{CO_2}}\ on atmospheric cooling and hence precipitation, an effect of roughly $-1\ \ensuremath{\mathrm{W/m^2}}/\mathrm{K}$ \citep{pendergrass2014}. This omission does not affect our main conclusions about precipitation change.
\section{\ensuremath{T_\mathrm{s}}-invariance of Flux Divergences}
\label{Ts_invariance}
The simple behavior of radiative cooling alluded to above begins with the key fact that the water vapor density
\begin{equation}
\ensuremath{\rho_\mathrm{v}} = \ensuremath{\mathrm{RH}}\frac{\ensuremath{p^*_{\mathrm{v}}}(T)}{\ensuremath{R_\mathrm{v}} T} \;
\label{rhov}
\end{equation}
is (up to variations in relative humidity \ensuremath{\mathrm{RH}}) a function of temperature only. [Note that it has been shown recently that RH is itself a function of $T$ in RCE \citep{romps2014}. Also note that here $p_v^*$ is the saturation vapor pressure of water, and all other symbols have their usual meaning.] If we use $T$ as a vertical coordinate, Eqn. \eqnref{rhov} then tells us that the function $\ensuremath{\rho_\mathrm{v}}(T)$ does not depend on \ensuremath{T_\mathrm{s}}. This is what we mean by `\ensuremath{T_\mathrm{s}}-invariance'. We verify \ensuremath{T_\mathrm{s}}-invariance of $\ensuremath{\rho_\mathrm{v}}(T)$ in Fig. \ref{rhov_fig}, where indeed the \ensuremath{\rho_\mathrm{v}}\ profiles at different \ensuremath{T_\mathrm{s}}\ collapse onto a single curve when plotted in temperature coordinates.
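As a concrete numerical illustration of Eqn. \eqnref{rhov}, the following sketch evaluates $\ensuremath{\rho_\mathrm{v}}(T)$ assuming a simple Clausius-Clapeyron form for $\ensuremath{p^*_{\mathrm{v}}}$ with constant latent heat and a fixed $\ensuremath{\mathrm{RH}}=0.75$; these are illustrative choices, not the values diagnosed from DAM or RRTM.

```python
import numpy as np

def p_sat(T):
    # Saturation vapor pressure (Pa): Clausius-Clapeyron with constant
    # latent heat L, integrated from the triple point (an approximation)
    L, Rv, T0, e0 = 2.5e6, 461.5, 273.16, 611.0
    return e0 * np.exp(-(L / Rv) * (1.0 / T - 1.0 / T0))

def rho_v(T, RH=0.75):
    # Eqn (rhov): for fixed RH, rho_v is a function of T alone --
    # the Ts-invariance illustrated in Fig. 1
    Rv = 461.5
    return RH * p_sat(T) / (Rv * T)
```

By construction, evaluating this on the temperature range of any sounding traces out the same curve; different \ensuremath{T_\mathrm{s}}\ only change how far down the curve extends, which is the sense of `\ensuremath{T_\mathrm{s}}-invariance' used throughout.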
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{pptflw_tinv_dam.pdf}
\caption{LW flux divergence $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$, as diagnosed from RRTM coupled to our CRM RCE simulations at \ensuremath{T_\mathrm{s}}=(280,\ 290,\ 300,\ 310) K (the 320 K simulation is omitted for clarity). Fluxes are plotted from the lifting condensation level of each simulation to 22.5 km for clarity, and in height, pressure, and temperature coordinates to emphasize the \ensuremath{T_\mathrm{s}}-invariance of $(-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}})(T)$. The gray dotted line in the right panel plots $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}} = 0$, and shows the \ensuremath{T_\mathrm{s}}-invariance of $\ensuremath{T_\mathrm{tp}} \approx 185$ K.
\label{pptflw_tinv_dam}
}
\end{center}
\end{figure}
For wavenumbers $k$ outside of spectral bands where other trace gases (like \ensuremath{\mathrm{CO_2}}\ and \ensuremath{\mathrm{O_3}}) dominate, the optical depth $\ensuremath{\tau_k}$ is just
\begin{equation}
\tau_k(z) = \int_z^\infty \kappa(k) \ensuremath{\rho_\mathrm{v}}(z') \, dz' \; ,
\label{tauz}
\end{equation}
where $\kappa(k)$ is a mass absorption coefficient (units $\mathrm{m^2/kg}$) whose pressure-broadening and temperature scaling we neglect. Changing the integration variable to temperature $T'$ yields
\begin{equation}
\tau_k(T) \approx \int_{\ensuremath{T_\mathrm{tp}}}^T \kappa(k) \ensuremath{\rho_\mathrm{v}}(T') \, \frac{dT'}{\Gamma} \; ,
\label{tauT}
\end{equation}
where we neglect stratospheric water vapor and take the lower limit of the integral to be the tropopause temperature $\ensuremath{T_\mathrm{tp}} \approx 185$ K, where radiative cooling goes to 0 (see Figs. \ref{pptflw_tinv_dam} and \ref{pptfsw_tinv_dam}, which also show that \ensuremath{T_\mathrm{tp}}\ is \ensuremath{T_\mathrm{s}}-invariant). The only quantity in Eqn. \eqnref{tauT} that might still exhibit some \ensuremath{T_\mathrm{s}}-dependence is the moist lapse rate $\Gamma$, but Figure 2 of \cite{ingram2010} shows that when $\Gamma$ is considered a function of temperature, it too is fairly \ensuremath{T_\mathrm{s}}-invariant. Equation \eqnref{tauT} then implies that \ensuremath{\tau_k}\ profiles at any $k$ exhibit the same \ensuremath{T_\mathrm{s}}-invariance as \ensuremath{\rho_\mathrm{v}}. This argument was also made by \cite{ingram2010}, and its essence goes back to \cite{simpson1928}.
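The integral in Eqn. \eqnref{tauT} can be sketched numerically as follows. Here $\kappa$ and $\Gamma$ are taken as illustrative constants ($\kappa$ really varies strongly with wavenumber, and $\Gamma$ is itself a \ensuremath{T_\mathrm{s}}-invariant function of $T$), and $\ensuremath{\rho_\mathrm{v}}$ uses a simple Clausius-Clapeyron form at fixed RH.

```python
import numpy as np

def rho_v(T, RH=0.75):
    # simple Clausius-Clapeyron water-vapor density (kg/m^3), fixed RH
    L, Rv, T0, e0 = 2.5e6, 461.5, 273.16, 611.0
    return RH * e0 * np.exp(-(L / Rv) * (1.0 / T - 1.0 / T0)) / (Rv * T)

def tau(T_grid, kappa=0.05, Gamma=6.5e-3):
    """Eqn (tauT): tau(T) = int_{Ttp}^{T} kappa rho_v(T') dT' / Gamma.
    T_grid increases from the tropopause downward; kappa (m^2/kg) and
    Gamma (K/m) are illustrative constants, not fitted values."""
    rho = rho_v(T_grid)
    dtau = kappa * 0.5 * (rho[1:] + rho[:-1]) * np.diff(T_grid) / Gamma
    return np.concatenate([[0.0], np.cumsum(dtau)])
```

Because the integral starts at the (\ensuremath{T_\mathrm{s}}-invariant) tropopause temperature, the resulting $\tau(T)$ curve is itself independent of \ensuremath{T_\mathrm{s}}: warming the surface simply extends the same curve to higher $T$.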
To connect all this with radiative cooling, we invoke the cooling-to-space approximation \citep[e.g.,][]{thomas2002, rodgers1966}, which says that the spectrally resolved LW flux divergence in temperature coordinates $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}_k$ (units $\ensuremath{\mathrm{W/m^2}}/\mathrm{K}/\ensuremath{\mathrm{cm^{-1}}}$, minus sign introduced to maintain a consistent sign with $\ensuremath{\partial_z} \ensuremath{F^\mathrm{LW}}_k$) is approximately
\begin{equation}
-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}_k \approx - \pi B_k(T) \frac{d (e^{-\ensuremath{\tau_k}(T)})}{dT} \ ,
\label{cts_spectral}
\end{equation}
where the transmission function $e^{-\ensuremath{\tau_k}}$ gives the fraction of radiation emitted at a given height that travels unabsorbed out to space. Since the Planck function $B_k(T)$ is \ensuremath{T_\mathrm{s}}-invariant, and $\ensuremath{\tau_k}(T)$ is as well, we also expect $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}_k$ to be \ensuremath{T_\mathrm{s}}-invariant. Since this holds for all $k$ where water vapor dominates, it should also hold approximately for the spectrally integrated LW flux divergence $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$ ($\ensuremath{\mathrm{W/m^2}}/\mathrm{K}$). This is confirmed in Fig. \ref{pptflw_tinv_dam}, which plots $(-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}})(T)$ as diagnosed from RRTM coupled to our RCE simulations. That figure also plots $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$ as functions of $z$ and $p$, to emphasize that this invariance only holds when $T$ is used as the vertical coordinate.
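A toy evaluation of the cooling-to-space expression \eqnref{cts_spectral} for a single wavenumber illustrates the shape of the resulting cooling profile. The Planck function is exact, but the cubic $\ensuremath{\tau_k}(T)$ below is an arbitrary \ensuremath{T_\mathrm{s}}-invariant stand-in for the detailed spectroscopy handled by RRTM.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck_wn(nu, T):
    # Planck radiance per wavenumber nu (m^-1): 2 h c^2 nu^3 / (e^x - 1)
    x = h * c * nu / (kB * T)
    return 2.0 * h * c**2 * nu**3 / np.expm1(x)

T = np.linspace(185.0, 300.0, 2000)          # tropopause to warm surface
tau = 4.0 * ((T - 185.0) / 115.0)**3         # toy Ts-invariant tau(T)
trans = np.exp(-tau)                         # transmission to space
Tm = 0.5 * (T[1:] + T[:-1])
# Eqn (cts_spectral): -dF_k/dT ~ -pi B_k(T) d(exp(-tau))/dT, here at
# nu = 5e4 m^-1 (500 cm^-1, in the water-vapor rotation band)
cool = -np.pi * planck_wn(5e4, Tm) * np.diff(trans) / np.diff(T)
```

The product of the growing Planck emission, the growing $d\tau/dT$, and the decaying transmission gives a cooling profile that vanishes at the tropopause and peaks in the interior of the troposphere; since every factor depends on $T$ only, the whole curve is \ensuremath{T_\mathrm{s}}-invariant.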
A similar argument holds for the SW flux divergence. If $I_k$ is the incident solar flux at wavenumber $k$, and neglecting reflection and scattering in the near-infrared,
then without further approximation we have
\begin{equation}
-\ensuremath{\partial_T} \ensuremath{F^\mathrm{SW}}_k = - I_k \der{(e^{-\ensuremath{\tau_k}(T)})}{T} \;
\end{equation}
\citep[c.f.][eqn. 9.26]{thomas2002}. This equation is similar to \eqnref{cts_spectral} but with $B_k(T) \rightarrow I_k$, and since $I_k$ is also \ensuremath{T_\mathrm{s}}-invariant, we can argue as above that $(-\ensuremath{\partial_T} \ensuremath{F^\mathrm{SW}})(T)$ should be \ensuremath{T_\mathrm{s}}-invariant. This is confirmed in Fig. \ref{pptfsw_tinv_dam}, where again the simple behavior of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{SW}}$ in temperature coordinates is contrasted with that in height and pressure coordinates.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{pptfsw_tinv_dam.pdf}
\caption{As in Fig. \ref{pptflw_tinv_dam}, but for SW instead of LW.
\label{pptfsw_tinv_dam}
}
\end{center}
\end{figure}
The fluxes used in Figs. \ref{pptflw_tinv_dam} and \ref{pptfsw_tinv_dam} are all-sky fluxes, but the foregoing argument was for clear-sky fluxes. This is permissible because cloud fractions in our RCE simulations are low (attaining a maximum of $\sim 10 \%$ at the anvil height in our simulations), so it is the clear-sky physics which dominates. We will touch upon cloud radiative effects in section \ref{sec_GCMs}, when we assess how well these CRM results generalize to GCMs.
\section{A simple picture for column-integrated radiative cooling} \label{sec_simple_Q}
Now that we have established the \ensuremath{T_\mathrm{s}}-invariance of radiative flux divergences, we can construct a simple, quantitative picture of how column-integrated radiative cooling, and hence precipitation, changes with surface temperature.
Let $F$ denote radiative flux in a particular channel -- LW, SW, or Net (LW+SW) -- and $Q$ the associated column-integrated free-tropospheric radiative cooling. We consider the free troposphere, rather than the full troposphere, because the radiative constraint on precipitation
\begin{equation}
LP \approx \ensuremath{Q_\mathrm{net}}
\label{p_constraint}
\end{equation}
holds best for the free troposphere \citep{ogorman2012}. We define the free troposphere here as being above the lifting condensation level \ensuremath{T_\mathrm{LCL}}\ and below the tropopause \ensuremath{T_\mathrm{tp}}.
The basic idea is to write $Q$ as an integral of $-\ensuremath{\partial_T} F$ in temperature coordinates:
\begin{equation}
Q = \int_{\ensuremath{T_\mathrm{tp}}}^{\ensuremath{T_\mathrm{LCL}}} (-\partial_{T'} F) dT' \ .
\nonumber
\end{equation}
If we approximate the change in \ensuremath{T_\mathrm{LCL}}\ as equal to the change in \ensuremath{T_\mathrm{s}}, then the change in $Q$ with surface temperature is simply
\begin{equation}
\der{Q}{\ensuremath{T_\mathrm{s}}} \ =\ \left. -\ensuremath{\partial_T} F\right|_{\ensuremath{T_\mathrm{LCL}}} \; .
\label{dqdts}
\end{equation}
In other words, since the tropospheric cooling profile $(-\ensuremath{\partial_T} F)(T)$ is independent of \ensuremath{T_\mathrm{s}}, increasing \ensuremath{T_\mathrm{s}}\ just exposes more of this profile. The contribution of this new section of the $(-\ensuremath{\partial_T} F)(T)$ curve to $Q$ is given by \eqnref{dqdts}. A cartoon of this argument is given in Fig. \ref{dqdts_cartoon}. For finite changes in \ensuremath{T_\mathrm{s}}, Eqn. \eqnref{dqdts} approximates $(-\ensuremath{\partial_T} F)(T)$ in the newly exposed region as equal to $-\ensuremath{\partial_T} F$ at the LCL of the base state, but for small enough changes in \ensuremath{T_\mathrm{s}}\ this approximation should be adequate. Specializing Eqn. \eqnref{dqdts} to the Net channel and invoking \eqnref{p_constraint} then yields a \emph{prognostic} equation for precipitation change with surface warming.
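A toy model makes Eqn. \eqnref{dqdts} concrete: take a \ensuremath{T_\mathrm{s}}-invariant profile $-\ensuremath{\partial_T} F = a(T-\ensuremath{T_\mathrm{tp}})^\beta$, integrate it up to $\ensuremath{T_\mathrm{LCL}}=\ensuremath{T_\mathrm{s}}-\Delta T_\mathrm{LCL}$, and compare a finite-difference $dQ/d\ensuremath{T_\mathrm{s}}$ against the profile evaluated at the LCL. The constants $a$, $\beta$ and $\Delta T_\mathrm{LCL}$ are illustrative choices that roughly reproduce the \ensuremath{T_\mathrm{s}}=300 K numbers quoted later, not fitted values.

```python
import numpy as np

a, beta, Ttp, dT_lcl = 3e-4, 2.0, 185.0, 10.0

def profile(T):
    # Ts-invariant cooling profile -dF/dT (W/m^2/K)
    return a * (T - Ttp)**beta

def Q(Ts):
    # column cooling: integrate the profile from Ttp to T_LCL = Ts - dT_lcl
    T = np.linspace(Ttp, Ts - dT_lcl, 4001)
    return np.trapz(profile(T), T)

Ts = 300.0
dQ_fd = Q(Ts + 0.5) - Q(Ts - 0.5)   # centered finite difference, 1 K
dQ_eq = profile(Ts - dT_lcl)        # Eqn (dqdts): profile at the LCL
frac = dQ_eq / Q(Ts)                # fractional increase per K
```

With these illustrative numbers, $Q(300\,\mathrm{K}) \approx 116\ \ensuremath{\mathrm{W/m^2}}$ and $dQ/d\ensuremath{T_\mathrm{s}} \approx 3.3\ \ensuremath{\mathrm{W/m^2}}/\mathrm{K}$, i.e. roughly a 3\% increase per K: warming simply exposes more of the invariant curve, exactly as in the cartoon.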
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5,trim=0cm 0cm 0cm 5cm,clip=true]{dqdts_cartoon.pdf}
\caption{Cartoon depicting the increase in $Q$ with \ensuremath{T_\mathrm{s}}\ in Eqn. \eqnref{dqdts}. Increasing the temperature range of the troposphere exposes more of the \ensuremath{T_\mathrm{s}}-invariant curve $(\ensuremath{\partial_T} F)(T)$ (blue lines). The contribution of this newly exposed region to column-integrated cooling is given by Eqn. \eqnref{dqdts}.
\label{dqdts_cartoon}
}
\end{center}
\end{figure}
Let us test the predictive power of Eqn. \eqnref{dqdts}. The panels of Fig. \ref{Qnet_varsst} plot $Q(\ensuremath{T_\mathrm{s}})$ as diagnosed directly from our CRM simulations, along with estimates of the slope of this curve diagnosed via Eqn. \eqnref{dqdts}, for the SW, LW, and Net channels (\ensuremath{T_\mathrm{LCL}}\ is diagnosed as $T$ at the low-level maximum in cloud fraction). Precipitation $P$ is also plotted alongside $\ensuremath{Q_\mathrm{net}}$. Figure \ref{Qnet_varsst} shows that Eqn. \eqnref{dqdts} captures the changes in cooling in all channels. Furthermore, since $P$ tracks \ensuremath{Q_\mathrm{net}}\ closely for $290\leq \ensuremath{T_\mathrm{s}} \leq 310$ K, Eqn. \eqnref{dqdts} also captures precipitation changes, at least in this temperature regime.
We also see that Eqn. \eqnref{dqdts} predicts a \emph{decrease} in \ensuremath{Q_\mathrm{net}}\ with \ensuremath{T_\mathrm{s}}\ at \ensuremath{T_\mathrm{s}}=320 K; this is not an error in our diagnostic equation \eqnref{dqdts}, but rather a real effect due to the fact that $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$ tends towards zero with increasing $T$ while $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{SW}}$ stays roughly constant. (This behavior of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$ is likely related to runaway greenhouse physics, known to set in at roughly 310 K \citep{goldblatt2013}.) This leads to radiative heating, rather than cooling, in the lower troposphere, which violates the basic radiative-convective paradigm; it is perhaps then no surprise that the constraint \eqnref{p_constraint} appears to break down in this \ensuremath{T_\mathrm{s}}\ regime. An analogous high-\ensuremath{T_\mathrm{s}}\ breakdown of the radiative constraint on precipitation can also be found in energetically consistent experiments \citep{lehir2009, pierrehumbert1999}. The radiative constraint also breaks down at low \ensuremath{T_\mathrm{s}}\ (i.e. $\ensuremath{T_\mathrm{s}} \leq 280$ K), where sensible heat fluxes start to dominate over latent heat fluxes. Thus, Eqn. \eqnref{dqdts} has explanatory power for precipitation changes at temperatures somewhat greater than or equal to Earth's mean temperature of 288 K. Outside the $290\leq \ensuremath{T_\mathrm{s}} \leq 310$ K range, other constraints besides our purely radiative one seem to be required to predict changes in $P$.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{Qnet_varsst.pdf}
\caption{Column-integrated cooling $Q$ vs. \ensuremath{T_\mathrm{s}}\ (black circles), along with slopes $d Q/d \ensuremath{T_\mathrm{s}}$ (red lines) as diagnosed from \eqnref{dqdts}. These are shown for the SW (left), LW (center) and Net (right) channels. The black dashed lines connect the black circles and give a benchmark slope against which to compare the red lines. The `Net' panel also gives CRM-diagnosed precipitation values in blue stars. See text for discussion.
\label{Qnet_varsst}
}
\end{center}
\end{figure}
\section{Why does precipitation increase at 2--3\%~\ensuremath{\mathrm{K^{-1}}}?} \label{sec_1percent}
The results in Fig. \ref{Qnet_varsst} show that our framework has some predictive power for explaining changes in \ensuremath{Q_\mathrm{net}}\ and hence $P$ in RCE. Let us then try to use this framework to answer the question posed in the introduction, namely: why does mean precipitation increase at 2--3\%~\ensuremath{\mathrm{K^{-1}}}?
First, let us confirm in a back-of-the-envelope fashion that Eqn. \eqnref{dqdts} indeed gives a 2--3\%~\ensuremath{\mathrm{K^{-1}}} increase in $P$. Combining \eqnref{p_constraint} and \eqnref{dqdts} gives
\begin{equation}
\frac{d \ln P}{d \ensuremath{T_\mathrm{s}}} \ \approx\ \frac{(-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}})(\ensuremath{T_\mathrm{LCL}})}{\ensuremath{Q_\mathrm{net}}} \; .
\label{precip_estimate}
\end{equation}
For \ensuremath{T_\mathrm{s}}=300 K, where $(-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}})(\ensuremath{T_\mathrm{LCL}}) \approx 3 \ \ensuremath{\mathrm{W/m^2}}/\mathrm{K}$ and $\ensuremath{Q_\mathrm{net}} = 104\ \ensuremath{\mathrm{W/m^2}}$, we find $\frac{d \ln P}{d \ensuremath{T_\mathrm{s}}}= 3\%\ \ensuremath{\mathrm{K^{-1}}}$, as expected.
Now, suppose we take \ensuremath{T_\mathrm{s}}=300 K and try to simply parametrize the net cooling as $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}} \propto (T-\ensuremath{T_\mathrm{tp}})^\beta$. Further suppose (motivated by inspection of Figs. \ref{pptflw_tinv_dam} and \ref{pptfsw_tinv_dam}) that $\beta \approx 2$, i.e. that $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ is roughly quadratic in $(T-\ensuremath{T_\mathrm{tp}})$. Then the full tropospheric radiative cooling is $Q\sim (\ensuremath{T_\mathrm{s}}-\ensuremath{T_\mathrm{tp}})^{\beta+1}$, and hence
\begin{equation}
\frac{d \ln Q}{d \ensuremath{T_\mathrm{s}}} = \frac{\beta+1}{\ensuremath{T_\mathrm{s}}-\ensuremath{T_\mathrm{tp}}}\ . \label{dqdts_approx}
\end{equation}
Note that $\ensuremath{T_\mathrm{s}}-\ensuremath{T_\mathrm{tp}}$ is the \emph{depth of the troposphere expressed in temperature coordinates}. For \ensuremath{T_\mathrm{s}}= 300 K this depth is roughly 100 K, and so \eqnref{dqdts_approx} gives roughly 3\%~\ensuremath{\mathrm{K^{-1}}}, consistent with the result from Eqn. \eqnref{precip_estimate}.
On the other hand, if $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ were constant throughout the depth of the troposphere, i.e. $\beta=0$, then $Q$ would just scale with $\ensuremath{T_\mathrm{s}}-\ensuremath{T_\mathrm{tp}}$. But then it is clear that, since \emph{a 1 K increase in \ensuremath{T_\mathrm{s}}\ is a $1\%$ increase in tropospheric depth \ensuremath{T_\mathrm{s}}-\ensuremath{T_\mathrm{tp}}}, $Q$ should increase at 1\%~\ensuremath{\mathrm{K^{-1}}}. The fact that $Q$ increases somewhat faster than that can then be understood as a result of the fact that $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ is increasing, not constant, with $T$, i.e. that $\beta>0$ in Eqn. \eqnref{dqdts_approx}.
\section{Extension to GCMs} \label{sec_GCMs}
The framework presented so far has an appealing simplicity. But, the question remains as to whether our results generalize from RCE to much more realistic GCM simulations. Before trying to predict precipitation change in GCMs, we must first check whether \ensuremath{T_\mathrm{s}}-invariance holds in GCMs, in some sense. We do this by binning GCM columns by their local \ensuremath{T_\mathrm{s}}, computing an average $-\ensuremath{\partial_T} F$ profile for each bin, and then checking the \ensuremath{T_\mathrm{s}}-invariance of each of these profiles.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{fnet_ipsl.pdf}
\caption{ Profiles of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ for various \ensuremath{T_\mathrm{s}}\ bins for the AMIP and AMIP4K runs of IPSL-CM5A-LR. These profiles show that \ensuremath{T_\mathrm{s}}-invariance holds in this GCM in the mid and upper troposphere, but not near the surface (i.e. $T \lesssim \ensuremath{T_\mathrm{s}}$).
\label{fnet_ipsl}
}
\end{center}
\end{figure}
For this we utilize the AMIP and AMIP4K output in the CMIP5 archive. These experiments are atmosphere-only, and feature observed SSTs (AMIP) as well as uniform perturbations to those observed SSTs (AMIP4K), with no change in \ensuremath{\mathrm{CO_2}}\ concentration; as such they are good analogs to our CRM experiments. The AMIP4K experiment was part of the CFMIP protocol, which also requested the output of vertically-resolved radiative fluxes, rather than just surface and TOA fluxes, allowing us to compute $-\ensuremath{\partial_T} F$ profiles.
Six models participated in the AMIP and AMIP4K CFMIP experiments and provided the output we require. We begin by analyzing one of them, IPSL-CM5A-LR. Figure \ref{fnet_ipsl} shows profiles of average $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ for six of our \ensuremath{T_\mathrm{s}}\ bins, where for each \ensuremath{T_\mathrm{s}}\ bin the average is taken over all columns from the last 30 years of the simulation for which the lowest model-level air temperature lies in the range $(\ensuremath{T_\mathrm{s}},\ensuremath{T_\mathrm{s}} +2\ensuremath{\mathrm{K}})$. For the AMIP4K calculation in each panel the $\ensuremath{T_\mathrm{s}} +4\ensuremath{\mathrm{K}}$ bin is used, so as to compare roughly the same columns between the two simulations. More details on this calculation are given in the Appendix, which also shows the decomposition of these profiles into their LW and SW components.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{fnet_all.pdf}
\caption{ Profiles of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ for the \ensuremath{T_\mathrm{s}}=290 K (AMIP) and 294 K (AMIP4K) bins for all six CFMIP models. These profiles show that \ensuremath{T_\mathrm{s}}-invariance in the mid and upper troposphere holds across models.
\label{fnet_all}
}
\end{center}
\end{figure}
The take-away from Figure \ref{fnet_ipsl} is that for a given \ensuremath{T_\mathrm{s}}, \ensuremath{T_\mathrm{s}}-invariance seems to hold quite well in the mid and upper troposphere, but not in the lower troposphere. This appears to be due to cloud and circulation effects, such as inversion layers and their associated low clouds, which stay at a fixed pressure (rather than temperature) with warming. As a result, features in the $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ profiles shift down to higher temperatures with warming. This stands in contrast to the CRM $-\ensuremath{\partial_T} F$ profiles, which under warming reach the new \ensuremath{T_\mathrm{s}}\ not by \emph{shifting} down but by \emph{extending} downward.
Such a qualitative difference between the GCMs and our CRM means that Eqn. \eqnref{dqdts} cannot be applied to the GCMs, at least not without modification. At the same time, we do see that \ensuremath{T_\mathrm{s}}-invariance holds throughout much of the atmosphere for many \ensuremath{T_\mathrm{s}}\ regimes, and may thus still be a useful approximation for other problems. Of course, Fig. \ref{fnet_ipsl} only establishes this for one model, so robustness across models still needs to be checked. We do this in Fig. \ref{fnet_all}, which shows average $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ profiles for the $\ensuremath{T_\mathrm{s}}= 290$ (AMIP) and 294 (AMIP4K) bins for all six of our CFMIP models. These panels show that the IPSL model is not an outlier, and that \ensuremath{T_\mathrm{s}}-invariance in the mid and upper troposphere is robust across models.
\section{Summary and Discussion}
We summarize our findings as follows:
\begin{itemize}
\item Radiative cooling profiles in temperature coordinates are \ensuremath{T_\mathrm{s}}-invariant. This \ensuremath{T_\mathrm{s}}-invariance holds for the shortwave and longwave separately, as well as together (Figs. \ref{pptflw_tinv_dam}, \ref{pptfsw_tinv_dam}).
\item For RCE, this \ensuremath{T_\mathrm{s}}-invariance yields a simple model for how column-integrated cooling and precipitation change with \ensuremath{T_\mathrm{s}}\ [Eqn. \eqnref{dqdts}]. This model captures the simulated changes (Fig. \ref{Qnet_varsst}), and also leads to an even simpler model [Eqn. \eqnref{dqdts_approx}] which yields insight into why precipitation changes are $2 -3\%\ \ensuremath{\mathrm{K^{-1}}}$ in RCE.
\item For \ensuremath{T_\mathrm{s}}-binned $-\ensuremath{\partial_T} F$ profiles from GCMs, \ensuremath{T_\mathrm{s}}-invariance holds in the mid and upper troposphere, but not near the surface (Figs. \ref{fnet_ipsl}, \ref{fnet_all}).
\end{itemize}
An obvious question left unanswered here is whether the $2 -3\%\ \ensuremath{\mathrm{K^{-1}}}$ increase in precipitation found in GCMs is at all physically analogous to that in CRMs, or whether different physics is at play. Inspection of Fig. \ref{fnet_all} shows that the near-surface peak in $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ appears to stay at a fixed pressure with surface warming, but also increases in magnitude. Presumably this is due to an increase in (either cloudy or clear-sky) Planck emission, but further work would be needed to check this and model it in such a way as to give a prognostic expression for $d\ensuremath{Q_\mathrm{net}}/d\ensuremath{T_\mathrm{s}}$.
There are also unanswered questions regarding the argument given in Section \ref{Ts_invariance}. For instance, to what degree are optical depth profiles for water vapor lines actually \ensuremath{T_\mathrm{s}}-invariant, as claimed here and by \cite{ingram2010}? Would a line-by-line calculation verify this? Also, what are the conditions for the cooling-to-space approximation to be valid? And finally, why does the radiative tropopause temperature \ensuremath{T_\mathrm{tp}}\ appear to be fixed in our simulations? This bears a certain resemblance to the FAT hypothesis but is distinct from it, as the radiative tropopause and anvil peak are distinct features of the atmosphere and occur at quite different heights (approximately 17 km and 11 km, respectively, in the present day deep tropics).
There is also the question of robustness of our RCE results to choice of CRM. While CRMs do not employ as many parameterizations as GCMs, they must still choose sub-grid turbulence and microphysics schemes, which can lead to substantial uncertainties in some variables including cloud cover \citep[e.g.][]{tsushima2015, igel2014}. Since the arguments given here were clear-sky arguments and relied on the low values of cloud fraction exhibited by DAM, it is possible that the \ensuremath{T_\mathrm{s}}-invariance exhibited here may not hold as well in other CRMs. The upcoming RCE Model Intercomparison Project \citep[RCEMIP,][]{wing2017b} would make an ideal venue for investigating this.
Finally, we should note that the $7\%\ \ensuremath{\mathrm{K^{-1}}}$ Clausius-Clapeyron scaling of $\ensuremath{p^*_{\mathrm{v}}}(T)$ plays no role in setting the $2 -3\%\ \ensuremath{\mathrm{K^{-1}}}$ scaling of \ensuremath{Q_\mathrm{net}}\ and $P$. This can be seen most directly by appealing to Eqn. \eqnref{dqdts_approx}, which is a simple consequence of the \ensuremath{T_\mathrm{s}}-invariance of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ and \ensuremath{T_\mathrm{tp}}. The \ensuremath{T_\mathrm{s}}-invariance of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ follows from $\ensuremath{p^*_{\mathrm{v}}}(T)$ being a function of temperature only, with no requirement that $\ensuremath{p^*_{\mathrm{v}}}(T)$ even be exponential in $T$, let alone that $d\ln \ensuremath{p^*_{\mathrm{v}}}/dT \approx 0.07 \ \ensuremath{\mathrm{K^{-1}}}$. Our arguments thus suggest that this value could be doubled or halved without affecting the scaling of $P$. Thus, the Clausius-Clapeyron and mean precipitation scalings should be thought of as independent constraints, one thermodynamic and one radiative, with different physical origins. That they are independent and may thus be combined without circularity is what makes them powerful, allowing for e.g. a prediction of how convective mass fluxes change with warming \citep[][]{held2006}.
\section{Appendix}
This appendix describes in detail our calculation of bin-averaged flux divergence profiles from GCM output.
For a GCM column at a given longitude, latitude, and time, we must first identify a range of tropospheric model levels $k$ over which the temperature $T$ varies monotonically. We identify the uppermost of these levels \ensuremath{k_\mathrm{max}}\ as the minimum $k>10$ for which
$T[k+1]>T[k]$. If no such level exists (i.e. no stratospheric inversion) then $\ensuremath{k_\mathrm{max}}$ takes its highest possible value (i.e. the model top).
The minimum $k$ value $\ensuremath{k_\mathrm{min}}$ equals 1 if there is no inversion below \ensuremath{k_\mathrm{max}}, and otherwise is the largest $k< \ensuremath{k_\mathrm{max}}$ such that $T[k]>T[k-1]$. We then interpolate the column's SW and LW radiative fluxes over this $T$ range onto a uniform $T$ grid running from 150 to 350 K in increments of 2 K, and assign these interpolated profiles, weighted by column area, to the appropriate \ensuremath{T_\mathrm{s}}\ bin using $T[1]$ (where \ensuremath{T_\mathrm{s}}-binning is done with the same uniform grid as for vertical levels $T$). We repeat this for each GCM column over the last 30 years of each simulation, keeping track of the accumulated column area for each bin and $T$ level. This allows us to produce an area-weighted average flux profile in each bin, where in a given bin the total area represented at each $T$ level drops off at lower and higher $T$ (due to small variations in $T[\ensuremath{k_\mathrm{min}}]$ and $T[\ensuremath{k_\mathrm{max}}]$ within the bin). These average flux profiles (one per bin) may then be differentiated with respect to $T$, yielding the $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ profiles shown in Fig. \ref{fnet_ipsl} and \ref{fnet_all}. To reduce noise in these figures, the profiles are cut off once the total area at a given $T$ is less than half of the maximum value in the vertical (where this maximum value is taken throughout most of that bin's tropospheric $T$ range, as expected).
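The per-column bookkeeping described above can be sketched as follows. The function names, data layout, and the synthetic column in the test are ours, and the monotone-range selection ($\ensuremath{k_\mathrm{min}}$..$\ensuremath{k_\mathrm{max}}$) is assumed to have been done already:

```python
import numpy as np

T_grid = np.arange(150.0, 351.0, 2.0)          # uniform 2 K grid, 150..350 K

def accumulate_column(T, F, area, sums, areas):
    """Add one column's flux profile F(T) to its T_s bin.

    T, F are level arrays ordered surface -> top over the monotone range,
    so T is strictly decreasing here.  sums/areas are dicts keyed by
    T_s-bin index; binning uses the same uniform grid as the T levels.
    """
    ts_bin = int((T[0] - 150.0) // 2.0)        # bin by lowest-level temperature
    mask = (T_grid >= T.min()) & (T_grid <= T.max())
    Fi = np.interp(T_grid[mask], T[::-1], F[::-1])   # interp needs increasing x
    s = sums.setdefault(ts_bin, np.zeros_like(T_grid))
    a = areas.setdefault(ts_bin, np.zeros_like(T_grid))
    s[mask] += area * Fi                       # area-weighted flux sum
    a[mask] += area                            # accumulated area per T level

def bin_average(sums, areas, b):
    """Area-weighted average flux profile for bin b (NaN where no data)."""
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums[b] / areas[b]
```

Differentiating the resulting bin-averaged profiles in $T$, and cutting them off where the accumulated area is small, then yields $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{net}}$ profiles of the kind shown above.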
The decomposition of these net flux divergence profiles into their LW and SW components is given in Fig. \ref{fswlw_all}. These panels show that mid and upper tropospheric \ensuremath{T_\mathrm{s}}-invariance holds for the LW and SW separately in the GCMs, just as for the CRM.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{fswlw_all.pdf}
\caption{ Profiles of $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{SW}}$ and $-\ensuremath{\partial_T} \ensuremath{F^\mathrm{LW}}$ for the \ensuremath{T_\mathrm{s}}=290 K (AMIP) and 294 K (AMIP4K) bins for all six CFMIP models. These profiles show that \ensuremath{T_\mathrm{s}}-invariance in the mid and upper troposphere across models holds for both the LW and SW separately.
\label{fswlw_all}
}
\end{center}
\end{figure}
\bibliographystyle{apa}
\section{Introduction}
This paper is motivated by the study of the nonhomogeneous linear recursion
\begin{equation} \label{eq:IntroLinear}
R \stackrel{\mathcal{D}}{=} \sum_{i=1}^N C_i R_i + Q,
\end{equation}
where $(Q, N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$, $\mathbb{N} = \{0, 1, 2, 3, \dots\}$, $P(Q >0) > 0$, and $\{R_i\}_{i \in \mathbb{N}}$ is a sequence of iid random variables, independent of $(Q, N, C_1, C_2, \dots)$, having the same distribution as $R$. This recursion appeared recently in the stochastic analysis of Google's PageRank algorithm, see \cite{Volk_Litv_08, Jel_Olv_09} and the references therein for the latest work in the area. These types of weighted recursions, also studied in the literature on weighted branching processes \cite{Rosler_93} and branching random walks \cite{Biggins_77}, are found in the probabilistic analysis of other algorithms as well \cite{Ros_Rus_01, Nei_Rus_04}, e.g., Quicksort algorithm \cite{Fill_Jan_01}.
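To make the distributional fixed-point equation concrete, the following Monte Carlo sketch iterates the map $R \mapsto \sum_{i=1}^N C_i R_i + Q$ on an empirical sample. The distributions for $(Q, N, C_i)$ are illustrative choices of ours satisfying $E[\sum_i C_i] < 1$, not assumptions made in this paper:

```python
import numpy as np

# Illustrative laws: N ~ Poisson(2), C_i ~ Uniform(0, 0.8), Q ~ Exp(1).
# Then E[sum_i C_i] = 2 * 0.4 = 0.8 < 1, so E[R] = E[Q] / (1 - 0.8) = 5.
rng = np.random.default_rng(0)

def iterate_pool(pool, n_iter=30):
    """Apply the distributional map to an empirical sample ('pool') of R."""
    for _ in range(n_iter):
        new = np.empty(pool.size)
        for k in range(pool.size):
            n = rng.poisson(2)
            c = rng.uniform(0.0, 0.8, size=n)
            r = rng.choice(pool, size=n)       # iid copies of the current R
            new[k] = c @ r + rng.exponential(1.0)   # sum_i C_i R_i + Q
        pool = new
    return pool

pool = iterate_pool(np.zeros(5000))
print(pool.mean())                             # ~5, up to Monte Carlo noise
```

The sample mean settling near $E[Q]/(1 - E[\sum_i C_i])$ is a quick sanity check that the iteration converges to the fixed point in this contractive regime.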
In order to study the preceding recursion in its full generality we extend the implicit renewal theory of Goldie \cite{Goldie_91} to cover recursions on trees. The extension of Goldie's theorem is presented in Theorem~\ref{T.NewGoldie} of Section~\ref{S.Renewal}. One of the observations that allows this extension is that an appropriately constructed measure on a weighted branching tree is a renewal measure, see Lemma \ref{L.RenewalMeasure} and equation \eqref{eq:RenewalMeasure}. In the remainder of the paper we apply the newly developed framework to analyze a number of linear and non-linear stochastic recursions on trees, starting with \eqref{eq:IntroLinear}. Note that the majority of the work in the rest of the paper goes into the application of the main theorem to specific problems.
In this regard, in Section~\ref{S.LinearRec}, we first construct an explicit solution \eqref{eq:ExplicitConstr} to \eqref{eq:IntroLinear} on a weighted branching tree and then provide sufficient conditions for the finiteness of moments and the uniqueness of this solution in Lemmas \ref{L.Moments_R} and \ref{L.Convergence}, respectively. Furthermore, it is worth noting that our moment estimates are explicit, see Lemma~\ref{L.GeneralMoment}, which may be of independent interest. Then, the main result, which characterizes the power-tail behavior of $R$, is presented in Theorem \ref{T.LinearRecursion}. In addition, for integer power exponent ($\alpha \in \{ 1, 2, 3, \dots\}$) the asymptotic tail behavior can be explicitly computed, as stated in Corollary \ref{C.explicit}. Furthermore, for non-integer $\alpha$, Lemma \ref{L.Alpha_Moments} yields an explicit bound on the tail behavior of $R$. Related work in the literature of weighted branching processes (WBPs) for the case when $N = \infty$ and $Q, \{C_i\}$ are nonnegative deterministic constants can be found in \cite{Rosler_93} (see Theorem 5), and more recently, for real-valued constants, in \cite{Alsm_Rosl_05}. However, these deterministic assumptions fall outside of the scope of this paper; for more details see the remarks after Theorem~\ref{T.LinearRecursion} in Section~\ref{SS.MainLinear}.
Next, we show how our technique can be applied to study the tail asymptotics of the solution to the critical, $E\left[ \sum_{i=1}^N C_i \right] = 1$, homogeneous linear equation
\begin{equation} \label{eq:IntroLinearHomog}
R \stackrel{\mathcal{D}}{=} \sum_{i=1}^N C_i R_i,
\end{equation}
where $(N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$ and $\{ R_i\}_{i\in \mathbb{N}}$ is a sequence of iid random variables independent of $(N, C_1, C_2, \dots)$ having the same distribution as $R$. This type of recursion has been studied to a great extent under a variety of names, including branching random walks and multiplicative cascades. Our work is more closely related to the results of \cite{Liu_00} and \cite{Iksanov_04}, where the conditions for power-tail asymptotics of the distribution of $R$ with power exponent $\alpha>1$ were derived. In Theorem \ref{T.LinearHomog} of Section \ref{SS.MainLinear} we provide an alternative derivation of Theorem 2.2 in \cite{Liu_00} and Proposition 7 in \cite{Iksanov_04}. Furthermore, we note that our method yields a more explicit characterization of the power-tail proportionality constant, see Corollary~\ref{C.explicitHom}. For the full description of the set of solutions to \eqref{eq:IntroLinearHomog} see the very recent work in \cite{Als_Big_Mei_10}. For additional references on weighted branching processes and multiplicative cascades see \cite{Alsm_Kuhl_07, Liu_00, Liu_98, Way_Will_95, Nei_Rus_04} and the references therein. For earlier historical references see \cite{Kah_Pey_76, Holl_Ligg_81, Durr_Ligg_83}.
As an additional illustration of the newly developed framework, in Section \ref{S.MaxRec} we study the recursion
\begin{equation} \label{eq:IntroMaximum}
R \stackrel{\mathcal{D}}{=} \left(\bigvee_{i=1}^N C_i R_i \right) \vee Q,
\end{equation}
where $(Q, N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$, \linebreak $P\left(Q >0 \right)>0$ and $\{R_i\}_{i\in \mathbb{N}}$ is a sequence of iid random variables independent of $(Q, N, C_1, C_2, \dots)$ having the same distribution as $R$. We characterize the tail behavior of $P(R > x)$ in Theorem \ref{T.MaximumRecursion}. Similarly to the homogeneous linear case, this recursion was previously studied in \cite{Alsm_Rosl_08} under the assumption that $Q \equiv 0$, $N = \infty$, and the $\{C_i\}$ are real valued deterministic constants.
The more closely related case of $Q \equiv 0$ and $\{C_i \} \geq 0$ being random was studied earlier in \cite{Jag_Ros_04}.
Furthermore, these max-type stochastic recursions appear in a wide variety of applications, ranging from the average case analysis of algorithms to statistical physics; see \cite{Aldo_Band_05} for a recent survey.
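The max-type fixed point can be explored with the same empirical-iteration device used for the linear case. The distributions below are again illustrative placeholders of ours; with $C_i \leq 0.9$ and $Q \leq 1$ the fixed point is supported on $[0,1]$:

```python
import numpy as np

# Illustrative laws: N ~ (Geometric(1/2) - 1) on {0,1,2,...},
# C_i ~ Uniform(0, 0.9), Q ~ Uniform(0, 1).
rng = np.random.default_rng(1)

def iterate_max(pool, n_iter=30):
    """Empirical fixed-point iteration for R =d (max_{i<=N} C_i R_i) v Q."""
    for _ in range(n_iter):
        new = np.empty(pool.size)
        for k in range(pool.size):
            n = rng.geometric(0.5) - 1
            q = rng.uniform(0.0, 1.0)
            if n > 0:
                c = rng.uniform(0.0, 0.9, size=n)
                r = rng.choice(pool, size=n)   # iid copies of the current R
                new[k] = max((c * r).max(), q)
            else:
                new[k] = q                     # empty maximum: R = Q
        pool = new
    return pool

pool = iterate_max(np.zeros(5000))
print(pool.mean())
```

Since $R \geq Q$ pathwise, the empirical mean must exceed $E[Q] = 1/2$ up to sampling noise, which serves as a basic consistency check.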
We conclude the paper with a brief discussion of other non-linear recursions that could be studied using the developed techniques, including the solution to
$$R \stackrel{\mathcal{D}}{=} \left(\bigvee_{i=1}^N C_i R_i \right) + Q.$$
The majority of the proofs are postponed to Section~\ref{S.Proofs}.
\section{Model description} \label{S.ModelDescription}
First we construct a random tree $\mathcal{T}$. We use the notation $\emptyset$ to denote the root node of $\mathcal{T}$, and $A_n$, $n \geq 0$, to denote the set of all individuals in the $n$th generation of $\mathcal{T}$, $A_0 = \{\emptyset\}$. Let $Z_n$ be the number of individuals in the $n$th generation, that is, $Z_n = |A_n|$, where $| \cdot |$ denotes the cardinality of a set; in particular, $Z_0 = 1$.
Next, let $\mathbb{N}_+ = \{1, 2, 3, \dots\}$ be the set of positive integers and let $U = \bigcup_{k=0}^\infty (\mathbb{N}_+)^k$ be the set of all finite sequences ${\bf i} = (i_1, i_2, \dots, i_n)$, where by convention $\mathbb{N}_+^0 = \{ \emptyset\}$ contains the null sequence $\emptyset$. To ease the exposition, for a sequence ${\bf i} = (i_1, i_2, \dots, i_k) \in U$ we write ${\bf i}|n = (i_1, i_2, \dots, i_n)$, provided $k \geq n$, and ${\bf i}|0 = \emptyset$ to denote the index truncation at level $n$, $n \geq 0$. Also, for ${\bf i} \in A_1$ we simply use the notation ${\bf i} = i_1$, that is, without the parenthesis. Similarly, for ${\bf i} = (i_1, \dots, i_n)$ we will use $({\bf i}, j) = (i_1,\dots, i_n, j)$ to denote the index concatenation operation, if ${\bf i} = \emptyset$, then $({\bf i}, j) = j$.
We iteratively construct the tree as follows. Let $N$ be the number of individuals born to the root node $\emptyset$, $N_\emptyset = N$, and let $\{N_{\bf i} \}_{{\bf i} \in U}$ be iid copies of $N$. Define now
\begin{align}
A_1 &= \{ i \in \mathbb{N}_+: 1 \leq i \leq N \}, \notag \\
A_n &= \{ (i_1, i_2, \dots, i_n) \in U: (i_1, \dots, i_{n-1}) \in A_{n-1}, 1 \leq i_n \leq N_{(i_1, \dots, i_{n-1})} \}. \label{eq:AnDef}
\end{align}
It follows that the number of individuals $Z_n = |A_n|$ in the $n$th generation, $n \geq 1$, satisfies the branching recursion
$$Z_{n} = \sum_{{\bf i} \in A_{n-1}} N_{\bf i}.$$
Now, we construct the weighted branching tree $\mathcal{T}_{Q,C}$ as follows. The root node $\emptyset$ is assigned a vector $(Q_\emptyset, N_\emptyset, C_{(\emptyset, 1)}, C_{(\emptyset, 2)}, \dots) = (Q, N, C_1, C_2, \dots)$ with $N \in \mathbb{N} \cup \{\infty\}$ and $P(Q > 0) > 0$; $N$ determines the number of nodes in the first generation of $\mathcal{T}$ according to \eqref{eq:AnDef}. Each node in the first generation is then assigned an iid copy $(Q_i, N_i, C_{(i,1)}, C_{(i, 2)}, \dots)$ of the root vector and the $\{N_i\}$ are used to define the second generation in $\mathcal{T}$ according to \eqref{eq:AnDef}.
In general, for $n\ge 2$, to each node ${\bf i} \in A_{n-1}$, we assign an iid copy
$(Q_{\bf i}, N_{\bf i}, C_{({\bf i},1)}, C_{({\bf i}, 2)}, \dots)$ of the root vector and construct
$A_{n} = \{({\bf i}, i_{n}): {\bf i} \in A_{n-1}, 1 \leq i_{n} \leq N_{\bf i}\}$; the vectors $(Q_{\bf i}, N_{\bf i}, C_{({\bf i},1)}, C_{({\bf i}, 2)}, \dots)$, ${\bf i} \in A_{n-1}$ are chosen independently of all the
previously assigned vectors $(Q_{\bf j}, N_{\bf j}, C_{({\bf j},1)}, C_{({\bf j}, 2)}, \dots)$, ${\bf j} \in A_{k}, 0\le k\le n-2$.
For each node in $\mathcal{T}_{Q,C}$ we also define the weight $\Pi_{(i_1,\dots,i_n)}$ via the recursion
$$ \Pi_{i_1} =C_{i_1}, \qquad \Pi_{(i_1,\dots,i_n)} = C_{(i_1,\dots, i_n)} \Pi_{(i_1,\dots,i_{n-1})}, \quad n \geq 2,$$
where $\Pi =1$ is the weight of the root node. Note that the weight $\Pi_{(i_1,\dots, i_n)}$ is equal to the product of all the weights $C_{(\cdot)}$ along the branch leading to node $(i_1, \dots, i_n)$, as depicted in Figure \ref{F.Tree}.
In some places, e.g. in the following section, the value of $Q$ may be of no importance, and thus we will consider a
weighted branching tree defined by the smaller vector $(N, C_1, C_2, \dots)$.
This tree can be obtained from $\mathcal{T}_{Q,C}$ by simply disregarding the values of $Q_{(\cdot)}$, and is denoted by $\mathcal{T}_C$.
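The construction of $\mathcal{T}_C$ can be sketched in a few lines: each node receives an iid offspring count, children are indexed by concatenation, and $\Pi_{\bf i}$ multiplies the weights along the branch. The offspring and weight laws below are placeholders of ours:

```python
import random

random.seed(1)

def grow(depth, sample_N, sample_C):
    """Generations 0..depth of a weighted branching tree.

    Each generation is a list of (index_tuple, Pi) pairs; sample_N() draws
    an offspring count N, sample_C() draws a weight C.
    """
    gens = [[((), 1.0)]]                      # generation 0: root, Pi = 1
    for _ in range(depth):
        nxt = []
        for idx, pi in gens[-1]:
            for j in range(1, sample_N() + 1):
                nxt.append((idx + (j,), pi * sample_C()))  # index concat, weight product
        gens.append(nxt)
    return gens

gens = grow(3, lambda: random.randint(0, 3), lambda: random.uniform(0.2, 1.0))
Z = [len(g) for g in gens]                    # generation sizes Z_0, Z_1, ...
print(Z)
```

Note that since each sketched $C \leq 1$, every branch weight here satisfies $\Pi_{\bf i} \leq 1$; in general the weights of $\mathcal{T}_C$ need not be bounded.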
\begin{center}
\begin{figure}[ht]
\begin{picture}(430,160)(0,0)
\put(0,0){\includegraphics[scale = 0.8, bb = 0 510 500 700, clip]{Tree}}
\put(125,150){\small $\Pi = 1$}
\put(69,83){\small $\Pi_{1}$}
\put(131,83){\small $\Pi_{2}$}
\put(219,83){\small $\Pi_{3}$}
\put(22,17){\small $\Pi_{(1,1)}$}
\put(78,17){\small $\Pi_{(1,2)}$}
\put(126,17){\small $\Pi_{(2,1)}$}
\put(162,17){\small $\Pi_{(3,1)}$}
\put(213,17){\small $\Pi_{(3,2)}$}
\put(268,17){\small $\Pi_{(3,3)}$}
\put(350,150){\small $Z_0 = 1$}
\put(350,83){\small $Z_1 = 3$}
\put(350,17){\small $Z_2 = 6$}
\end{picture}
\caption{Weighted branching tree}\label{F.Tree}
\end{figure}
\end{center}
Studying the tail behavior of the solutions to recursions and fixed point equations embedded in this weighted branching tree is the objective of this paper.
\section{Implicit renewal theorem on trees} \label{S.Renewal}
In this section we present an extension of Goldie's Implicit Renewal Theorem \cite{Goldie_91} to weighted branching trees. The observation that facilitates this generalization is the following lemma which shows that a certain measure on a tree is actually a product measure; a similar measure was used in a different context in \cite{Biggins_Kyprianou_77}. Its proof is given in Section~\ref{SS.ImplicitProofs} for completeness. Throughout the paper we use the standard convention $0^\alpha \log 0 = 0$ for all $\alpha > 0$.
\begin{lem} \label{L.RenewalMeasure}
Let $\mathcal{T}_{C}$ be the weighted branching tree defined by the nonnegative vector $(N, C_1, C_2, \dots)$, where $N \in \mathbb{N} \cup \{\infty\}$. For any $n \in \mathbb{N}$ and ${\bf i} \in A_n$,
let $V_{{\bf i}} = \log \Pi_{\bf i}$. For $\alpha > 0$ define the measure
$$\mu_n(dt) = e^{\alpha t} E\left[ \sum_{{\bf i} \in A_n} \mathop{\hskip0pt{1}}\nolimits(V_{{\bf i}} \in dt ) \right], \quad n = 1, 2, \dots,$$
and let $\eta(dt) = \mu_1(dt)$.
Suppose that there exists $j \geq 1$ with $P(N\ge j,C_j>0)~>~0$ such that the measure $P(\log C_j\in du, C_j > 0, N\ge j)$ is nonarithmetic, \linebreak $0 < E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] ~< ~\infty$ and $E\left[ \sum_{i=1}^N C_i^\alpha \right] = 1$. Then, $\eta(\cdot)$ is a nonarithmetic probability measure on $\mathbb{R}$ that places no mass at $-\infty$ and has mean
$$\int_{-\infty}^\infty u\, \eta(du) = E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] .$$
Furthermore, $\mu_n(dt) = \eta^{*n}(dt)$, where $\eta^{*n}$ denotes the $n$th convolution of $\eta$ with itself.
\end{lem}
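A quick numerical illustration of the lemma, with an offspring/weight law of our choosing rather than one from the paper: take $N = 2$ fixed and $C_i = e^{X_i}$ with $X_i \sim \mathrm{Normal}(m, s^2)$ iid, so that $\phi(a) := E[\sum_{i \leq N} C_i^a] = 2 e^{ma + s^2 a^2/2}$ in closed form. The larger root $\alpha$ of $\phi(\alpha) = 1$ gives $\eta$ total mass one and positive mean $E[\sum_i C_i^\alpha \log C_i] = \phi'(\alpha) = m + s^2\alpha$:

```python
import numpy as np

# Illustrative lognormal weights: m, s chosen so the larger root alpha of
# phi(alpha) = 2*exp(m*alpha + s^2*alpha^2/2) = 1 exists and has phi'(alpha) > 0.
m, s = -1.5, 1.0
alpha = (3.0 + np.sqrt(9.0 - 8.0 * np.log(2.0))) / 2.0   # solves phi(alpha) = 1

total_mass = 2.0 * np.exp(m * alpha + s**2 * alpha**2 / 2.0)   # eta(R) = 1
mean_eta = total_mass * (m + s**2 * alpha)   # = E[sum_i C_i^alpha log C_i] > 0
print(total_mass, mean_eta)

# Monte Carlo cross-check of eta's total mass E[sum_i C_i^alpha]
rng = np.random.default_rng(0)
X = rng.normal(m, s, size=(2, 2_000_000))
mc_mass = np.exp(alpha * X).sum(axis=0).mean()
print(mc_mass)                               # ~1 up to sampling noise
```

Choosing the larger of the two roots of $\phi(a)=1$ is what makes the mean of $\eta$ positive, matching the condition $0 < E[\sum_j C_j^\alpha \log C_j] < \infty$ in the lemma.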
We now present a generalization of Goldie's Implicit Renewal Theorem \cite{Goldie_91} that will enable the analysis of recursions on weighted branching trees. Note that except for the independence assumption, the random variable $R$ and the vector $(N, C_1, C_2, \dots)$ are arbitrary, and therefore the applicability of this theorem goes beyond the recursions that we study here. Throughout the paper we use $g(x) \sim f(x)$ as $x \to \infty$ to denote $\lim_{x \to \infty} g(x)/f(x) = 1$.
\begin{thm} \label{T.NewGoldie}
Let $(N, C_1, C_2, \dots)$ be a nonnegative random vector, where $N \in \mathbb{N} \cup \{\infty\}$.
Suppose that there exists $j \geq 1$ with $P(N\ge j,C_j>0)>0$ such that the measure $P(\log C_j\in du, C_j > 0, N\ge j)$ is nonarithmetic.
Assume further that \linebreak
$0 < E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] < \infty$, $E\left[ \sum_{j=1}^N C_j^\alpha \right] = 1$, $E\left[ \sum_{j=1}^N C_j^\gamma \right] < \infty$ for some $0 \leq \gamma < \alpha$, and that $R$ is independent of $(N, C_1, C_2, \dots)$ with $E[R^\beta] < \infty$ for any $0< \beta < \alpha$. If
\begin{equation} \label{eq:Goldie_condition}
\int_0^\infty \left| P(R > t) - E\left[ \sum_{j=1}^N \mathop{\hskip0pt{1}}\nolimits (C_j R > t ) \right] \right| t^{\alpha-1} dt < \infty,
\end{equation}
then
$$P(R > t) \sim H t^{-\alpha}, \qquad t \to \infty,$$
where $0 \leq H < \infty$ is given by
\begin{align*}
H &= \frac{1}{E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] } \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > v ) \right] \right) dv .
\end{align*}
\end{thm}
{\sc Remarks:} (i) As pointed out in \cite{Goldie_91}, the statement of the theorem only has content when $R$ has infinite moment of order $\alpha$, since otherwise the constant $H$ is zero. (ii) As in \cite{Goldie_91}, this theorem can be generalized to incorporate negative weights $\{C_i\}$ at the expense of additional technical complications. However, when the $\{C_i\} \geq 0$ and $R$ is real-valued, one can use exactly the same proof to derive the asymptotics of $P(-R > t)$; we omit the statement here since our applications do not require it. (iii) When the $\{\log C_i\}$ are lattice valued, a similar version of the theorem can be derived by using the corresponding Renewal Theorem for lattice random walks. (iv) It appears, as noted in \cite{Goldie_91}, that some of the early ideas of applying renewal theory to study the power-tail asymptotics of autoregressive processes (perpetuities) are due to \cite{Kesten_73} and \cite{Grincevicius_75}. The proof given below follows the corresponding proof in \cite{Goldie_91}.
\begin{proof}[Proof of Theorem \ref{T.NewGoldie}]
Let $\mathcal{T}_{C}$ be the weighted branching tree defined by the nonnegative vector $(N, C_1, C_2, \dots)$. For each ${\bf i} \in A_n$ and all $k \leq n$ define $V_{{\bf i}|k} = \log \Pi_{{\bf i}|k}$; note that $\Pi_{{\bf i}|k}$ is independent of $N_{{\bf i}|k}$ but not of $N_{{\bf i}|s}$ for any $0\leq s \leq k-1$. Also note that ${\bf i}|n = {\bf i}$ since ${\bf i} \in A_n$. Let $\mathcal{F}_k$, $k \geq 1$, denote the $\sigma$-algebra generated by $\left\{ (N_{\bf i}, C_{({\bf i}, 1)}, C_{({\bf i}, 2)}, \dots) : {\bf i} \in A_j, 0 \leq j \leq k-1 \right\}$, and let $\mathcal{F}_0 = \sigma(\emptyset, \Omega)$, $\Pi_{{\bf i}|0} \equiv 1$. Assume also that $R$ is independent of the entire weighted tree, $\mathcal{T}_{C}$. Then, for any $t \in \mathbb{R}$, we can write $P(R > e^t)$ via a telescoping sum as follows (note that all the expectations in \eqref{eq:telescoping} are finite by Markov's inequality and \eqref{eq:PiMoments})
\begin{align}
&P(R > e^t) \notag \\
&= \sum_{k=0}^{n-1} \left( E\left[ \sum_{({\bf i}|k) \in A_{k}} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|k} R > e^t ) \right] - E\left[ \sum_{({\bf i}|k+1) \in A_{k+1}} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|k+1} R > e^t) \right] \right) \label{eq:telescoping} \\
&\hspace{5mm} + E\left[ \sum_{({\bf i}|n) \in A_n} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|n} R > e^t ) \right] \notag \\
&= \sum_{k=0}^{n-1} E\left[ \sum_{({\bf i}|k) \in A_{k}} \left( \mathop{\hskip0pt{1}}\nolimits (\Pi_{{\bf i}|k} R > e^t) - \sum_{j=1}^{N_{{\bf i}|k}} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|k} C_{({\bf i}|k,j)} R > e^t) \right) \right] \notag \\
&\hspace{5mm} + E\left[ \sum_{({\bf i}|n) \in A_n} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|n} R > e^t ) \right] \notag \\
&= \sum_{k=0}^{n-1} E\left[ \sum_{({\bf i}|k) \in A_{k}} E\left[ \left. \mathop{\hskip0pt{1}}\nolimits( R > e^{t-V_{{\bf i}|k}} ) - \sum_{j=1}^{N_{{\bf i}|k}} \mathop{\hskip0pt{1}}\nolimits( C_{({\bf i}|k,j)} R > e^{t-V_{{\bf i}|k}} ) \right| \mathcal{F}_k \right] \right] \notag \\
&\hspace{5mm} + E\left[ \sum_{({\bf i}|n) \in A_n} \mathop{\hskip0pt{1}}\nolimits(\Pi_{{\bf i}|n} R > e^t ) \right] .
\end{align}
Now, define the measures $\mu_n$ according to Lemma \ref{L.RenewalMeasure} and let
$$\nu_n(dt) = \sum_{k=0}^n \mu_k(dt), \qquad g(t) = e^{\alpha t} \left( P(R > e^t) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > e^{t} ) \right] \right),$$
$$r(t) = e^{\alpha t} P(R > e^t) \qquad \text{and} \qquad \delta_n(t) = e^{\alpha t} E\left[ \sum_{({\bf i}|n) \in A_n} \mathop{\hskip0pt{1}}\nolimits( \Pi_{{\bf i}|n} R > e^t ) \right].$$
Recall that $R$ and $(N_{{\bf i}|k}, C_{({\bf i}|k,1)}, C_{({\bf i}|k,2)}, \dots)$ are independent of $\mathcal{F}_k$, from where it follows that
$$E\left[ \left. \mathop{\hskip0pt{1}}\nolimits( R > e^{t-V_{{\bf i}|k}} ) - \sum_{j=1}^{N_{{\bf i}|k}} \mathop{\hskip0pt{1}}\nolimits( C_{({\bf i}|k,j)} R > e^{t-V_{{\bf i}|k}} ) \right| \mathcal{F}_k \right] = e^{\alpha(V_{{\bf i}|k}-t)} g\left( t-V_{{\bf i}|k} \right). $$
Then, for any $t \in \mathbb{R}$ and $n \in \mathbb{N}$,
\begin{align*}
r(t) &= \sum_{k=0}^{n-1} E\left[ \sum_{({\bf i}|k) \in A_k} e^{\alpha V_{{\bf i}|k}} g(t-V_{{\bf i}|k}) \right] + \delta_n(t) = (g*\nu_{n-1})(t) + \delta_n(t).
\end{align*}
Next, define the operator $\breve{f}(t) = \int_{-\infty}^t e^{-(t-u)} f(u) \, du$ and note that
\begin{equation} \label{eq:SmoothOperator}
\breve{r}(t) = (\breve{g}* \nu_{n-1})(t) + \breve{\delta}_n(t).
\end{equation}
Now, we will show that one can let $n \to \infty$ in the preceding identity. To this end, let $\eta(du) = \mu_1(du)$, and note that by Lemma \ref{L.RenewalMeasure} $\eta(\cdot)$ is a nonarithmetic probability measure on $\mathbb{R}$ that places no mass at $-\infty$ and has mean,
$$\mu \triangleq \int_{-\infty}^\infty u\, \eta(du) = E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] > 0 .$$
Moreover, by Lemma \ref{L.RenewalMeasure},
\begin{equation} \label{eq:RenewalMeasure}
\nu(dt) \triangleq \sum_{k=0}^\infty e^{\alpha t} E\left[ \sum_{({\bf i}|k) \in A_{k}} \mathop{\hskip0pt{1}}\nolimits(V_{{\bf i}|k} \in dt ) \right] = \sum_{k=0}^\infty \eta^{*k}(dt)
\end{equation}
is its renewal measure. Since $\mu \neq 0$, then $(|f|*\nu)(t) < \infty$ for all $t$ whenever $f$ is directly Riemann integrable. By \eqref{eq:Goldie_condition} we know that $g \in L_1$, so by Lemma 9.1 from \cite{Goldie_91}, $\breve{g}$ is directly Riemann integrable, resulting in $(|\breve{g}|*\nu)(t) < \infty$ for all $t$. Thus, $(|\breve{g}|*\nu)(t) = E\left[ \sum_{k=0}^\infty \sum_{({\bf i}|k) \in A_{k}} e^{\alpha V_{{\bf i}|k}} |\breve{g}(t-V_{{\bf i}|k})| \right] < \infty$, which implies that $E\left[ \sum_{k=0}^\infty \sum_{({\bf i}|k) \in A_{k}} e^{\alpha V_{{\bf i}|k}} \breve{g}(t-V_{{\bf i}|k}) \right]$ exists and, by Fubini's theorem,
\begin{align*}
(\breve{g}*\nu)(t) &= E\left[ \sum_{k=0}^\infty \sum_{({\bf i}|k) \in A_{k}} e^{\alpha V_{{\bf i}|k}} \breve{g}(t-V_{{\bf i}|k}) \right] \\
&= \sum_{k=0}^\infty E\left[ \sum_{({\bf i}|k) \in A_{k}} e^{\alpha V_{{\bf i}|k}} \breve{g}(t-V_{{\bf i}|k}) \right] = \lim_{n\to \infty} (\breve{g}*\nu_n)(t).
\end{align*}
To see that $\breve{\delta}_n(t) \to 0$ as $n \to \infty$ for all fixed $t$, note that from the assumptions $0 < E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] < \infty$, $E\left[ \sum_{j=1}^N C_j^\alpha \right] = 1$, and $E\left[ \sum_{j=1}^N C_j^\gamma \right] < \infty$ for some $0 \leq \gamma < \alpha$, there exists $0 < \beta <\alpha$ such that $E\left[ \sum_{j=1}^N C_j^{\beta} \right] < 1$ (by convexity). Then, for such $\beta$,
\begin{align}
\breve{\delta}_n(t) &= \int_{-\infty}^t e^{-(t-u)} e^{\alpha u} E\left[ \sum_{({\bf i}|n) \in A_n} \mathop{\hskip0pt{1}}\nolimits\left( \Pi_{{\bf i}|n} R > e^{u} \right) \right] du \notag \\
&\leq e^{(\alpha-\beta)t} E\left[ \sum_{({\bf i}|n) \in A_n} \int_{-\infty}^t e^{ \beta u} \mathop{\hskip0pt{1}}\nolimits\left(\Pi_{{\bf i}|n} R > e^u \right) du \, \right] \notag \\
&= e^{(\alpha-\beta) t} E\left[ \sum_{({\bf i}|n) \in A_n} \int_{-\infty}^{\min\{t, \log(\Pi_{{\bf i}|n} R)\}} e^{\beta u} du \, \right] \notag \\
&\leq \frac{e^{(\alpha-\beta) t}}{\beta} E\left[ \sum_{({\bf i}|n) \in A_n} (\Pi_{{\bf i}|n} R)^{\beta} \right] . \label{eq:delta_error}
\end{align}
It remains to show that the expectation in \eqref{eq:delta_error} converges to zero as $n \to \infty$. First note that from the independence of $R$ and $\mathcal{T}_C$,
$$ E\left[ \sum_{({\bf i}|n) \in A_n} (\Pi_{{\bf i}|n} R)^{\beta} \right] = E[R^\beta] E\left[ \sum_{({\bf i}|n) \in A_n} (\Pi_{{\bf i}|n})^{\beta} \right],$$
where $E[R^\beta] < \infty$, for $0 < \beta < \alpha$. For the expectation involving $\Pi_{{\bf i}|n}$ condition on $\mathcal{F}_{n-1}$ and use the independence of $(N_{{\bf i}|n-1}, C_{({\bf i}|n-1, 1)}, C_{({\bf i}|n-1, 2)}, \dots)$ from $\mathcal{F}_{n-1}$ as follows
\begin{align}
E\left[ \sum_{({\bf i}|n) \in A_n} (\Pi_{{\bf i}|n})^{\beta} \right] &= E\left[ \sum_{({\bf i}|n-1) \in A_{n-1}} E\left[ \left. \sum_{j=1}^{N_{{\bf i}|n-1}} (\Pi_{{\bf i}|n-1})^{\beta} C_{({\bf i}|n-1,j)}^{\beta} \right| \mathcal{F}_{n-1} \right] \right] \notag \\
&= E\left[ \sum_{({\bf i}|n-1) \in A_{n-1}} (\Pi_{{\bf i}|n-1})^{\beta} E\left[ \left. \sum_{j=1}^{N_{{\bf i}|n-1}} C_{({\bf i}|n-1,j)}^{\beta} \right| \mathcal{F}_{n-1} \right] \right] \notag \\
&= E\left[ \sum_{j=1}^N C_j^{\beta} \right] E\left[ \sum_{({\bf i}|n-1) \in A_{n-1}} (\Pi_{{\bf i}|n-1})^{\beta} \right] \notag \\
&= \left( E\left[ \sum_{j=1}^{N} C_{j}^{\beta} \right] \right)^n \qquad \text{(iterating $n-1$ times)}. \label{eq:PiMoments}
\end{align}
Since $E\left[ \sum_{j=1}^{N} C_{j}^{\beta} \right] < 1$, the above converges to zero as $n \to \infty$. Hence, the preceding arguments allow us to pass $n \to \infty$ in \eqref{eq:SmoothOperator}, and obtain
$$\breve{r}(t) = (\breve{g}*\nu)(t).$$
Now, by the key renewal theorem for two-sided random walks, see Theorem 4.2 in \cite{Ath_McD_Ney_78},
$$e^{- t} \int_{0}^{e^t} v^{\alpha} P(R > v) \, dv = \breve{r}(t) \to \frac{1}{\mu} \int_{-\infty}^\infty \breve{g}(u) \, du \triangleq H, \qquad t \to \infty.$$
Clearly, $H \geq 0$ since the left-hand side of the preceding equation is nonnegative, and thus, by
Lemma 9.3 in \cite{Goldie_91},
$$P(R > t) \sim H t^{-\alpha}, \qquad t \to \infty.$$
Finally,
\begin{align*}
H &= \frac{1}{\mu} \int_{-\infty}^\infty \int_{-\infty}^u e^{-(u-t)} g(t) \, dt \, du \\
&= \frac{1}{\mu} \int_{-\infty}^\infty e^{ t} g(t) \int_{t}^\infty e^{- u} \, du \, dt \\
&= \frac{1}{ \mu} \int_{-\infty}^\infty g(t) \, dt \\
&= \frac{1}{ \mu} \int_{-\infty}^\infty e^{\alpha t} \left( P(R > e^t) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > e^{t} ) \right] \right) dt \\
&= \frac{1}{ \mu} \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > v ) \right] \right) dv.
\end{align*}
\end{proof}
\vspace{-15pt}
\section{The linear recursion: $R = \sum_{i=1}^N C_i R_i + Q$} \label{S.LinearRec}
Motivated by the information ranking problem on the Internet, e.g. Google's PageRank algorithm \cite{Jel_Olv_09, Volk_Litv_08}, in this section we apply the implicit renewal theory for trees developed in the previous section to the following linear recursion:
\begin{equation} \label{eq:Linear}
R \stackrel{\mathcal{D}}{=} \sum_{i=1}^N C_i R_i + Q,
\end{equation}
where $(Q,N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$, \linebreak $P(Q > 0 ) > 0$, and $\{R_i\}_{i\in \mathbb{N}}$ is a sequence of iid random variables independent of $(Q, N, C_1, C_2, \dots)$ having the same distribution as $R$. Note that the power tail of $R$ in the critical homogeneous case $(Q \equiv 0)$ was previously studied in \cite{Liu_00} and \cite{Iksanov_04}. In Section~\ref{SS.Homogeneous} we will give an alternative derivation of those results using our method and will provide pointers to the appropriate literature.
As for the nonhomogeneous case, the first result we need to establish is the existence and finiteness of a solution to \eqref{eq:Linear}. For the purpose of existence we will provide an explicit construction of the solution $R$ to \eqref{eq:Linear} on a tree. Note that such constructed $R$ will be the main object of study of this section.
Recall that throughout the paper the convention is to denote the random vector associated to the root node $\emptyset$ by $(Q, N, C_1, C_2, \dots) \equiv (Q_\emptyset, N_\emptyset, C_{(\emptyset, 1)}, C_{(\emptyset, 2)}, \dots)$.
We now define the process
\begin{equation} \label{eq:W_k}
W_0 = Q, \quad W_n = \sum_{{\bf i} \in A_n} Q_{{\bf i}} \Pi_{{\bf i}}, \qquad n \geq 1,
\end{equation}
on the weighted branching tree $\mathcal{T}_{Q, C}$, as constructed in Section \ref{S.ModelDescription}. Define the process $\{R^{(n)}\}_{n \geq 0}$ according to
\begin{equation} \label{eq:R_nDef}
R^{(n)} = \sum_{k=0}^n W_k , \qquad n \geq 0,
\end{equation}
that is, $R^{(n)}$ is the sum of the weights of all the nodes on the tree up to the $n$th generation. It is not hard to see that $R^{(n)}$ satisfies the recursion
\begin{equation} \label{eq:LinearRecSamplePath}
R^{(n)} = \sum_{j=1}^{N_\emptyset} C_{(\emptyset,j)} R^{(n-1)}_{j} + Q_{\emptyset} = \sum_{j=1}^{N} C_{j} R^{(n-1)}_{j} + Q, \qquad n \geq 1,
\end{equation}
where $\{R_{j}^{(n-1)}\}$ are independent copies of $R^{(n-1)}$ corresponding to the tree starting with individual $j$ in the first generation and ending on the $n$th generation; note that $R_j^{(0)} = Q_j$. Similarly, since the tree structure repeats itself after the first generation, $W_n$ satisfies
\begin{align}
W_n &= \sum_{{\bf i} \in A_n} Q_{{\bf i}} \Pi_{{\bf i}} \notag\\
&= \sum_{k = 1}^{N_\emptyset} C_{(\emptyset,k)}
\sum_{(k,\dots,i_n) \in A_n} Q_{(k,\dots,i_n)} \prod_{j=2}^n C_{(k,\dots,i_j)} \notag\\
&\stackrel{\mathcal{D}}{=} \sum_{k=1}^N C_k W_{(n-1),k},\label{eq:WnRec}
\end{align}
where $\{W_{(n-1),k}\}$ is a sequence of iid random variables independent of $(N, C_1, C_2, \dots)$ and having the same distribution as $W_{n-1}$.
Next, define the random variable $R$ according to
\begin{equation} \label{eq:ExplicitConstr}
R \triangleq \lim_{n\to \infty} R^{(n)} = \sum_{k=0}^\infty W_k ,
\end{equation}
where the limit is properly defined by \eqref{eq:R_nDef} and monotonicity. Hence, it is easy to verify, by applying monotone convergence in \eqref{eq:LinearRecSamplePath}, that $R$ must solve
$$R = \sum_{j=1}^{N_\emptyset} C_{(\emptyset,j)} R_{j}^{(\infty)} + Q_\emptyset = \sum_{j=1}^N C_{j} R_j^{(\infty)} + Q ,$$
where $\{R_j^{(\infty)}\}_{j \in \mathbb{N}}$ are iid, have the same distribution as $R$, and are independent of $(Q, N, C_1, C_2, \dots)$.
The derivation provided above implies in particular the existence of a solution in distribution to \eqref{eq:Linear}. Moreover, under additional technical conditions, $R$ is the unique solution under iterations as we will define and show in the following section. The constructed $R$, as defined in \eqref{eq:ExplicitConstr}, is the main object of study in the remainder of this section.
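As a quick numerical sanity check of the construction \eqref{eq:R_nDef}--\eqref{eq:ExplicitConstr} (not part of the formal development), one can simulate $R^{(n)}$ directly through the recursion \eqref{eq:LinearRecSamplePath}. The sketch below assumes an illustrative toy specialization, $N \equiv 2$, $C_i$ iid Uniform$(0,0.8)$ and $Q \equiv 1$, chosen only so that $\rho = E[C_1 + C_2] = 0.8 < 1$; in that case $E[R^{(n)}] = \sum_{k=0}^n \rho^k$ exactly, which the Monte Carlo mean should reproduce.

```python
import random

random.seed(1)

# Toy specialization (illustrative only, not from the paper):
# N = 2 children per node, C_i ~ Uniform(0, 0.8) iid, Q = 1.
def sample_R_n(n):
    """Sample R^(n) via R^(n) = sum_j C_j R_j^(n-1) + Q, with R^(0) = Q."""
    if n == 0:
        return 1.0
    return sum(random.uniform(0.0, 0.8) * sample_R_n(n - 1) for _ in range(2)) + 1.0

n, reps = 5, 20000
mean_est = sum(sample_R_n(n) for _ in range(reps)) / reps
rho = 0.8
exact_mean = (1 - rho ** (n + 1)) / (1 - rho)   # sum_{k=0}^n rho^k
print(mean_est, exact_mean)
```

Since $R^{(n)}$ is monotone in $n$ and $E[W_n] = \rho^n$ decays geometrically, the estimate stabilizes quickly; this is only a sketch of the construction, not a check of the tail behavior.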
\subsection{Moments of $W_n$ and $R$} \label{SS.MomentsLinear}
In this section we derive estimates for the moments of $W_n$ and $R$. We start by stating a lemma about the moments of a sum of random variables. The proofs of Lemmas \ref{L.Alpha_Moments}, \ref{L.MomentSmaller_1} and \ref{L.GeneralMoment} are given in Section~\ref{SS.MomentsProofs}.
\begin{lem} \label{L.Alpha_Moments}
For any $k \in \mathbb{N} \cup \{\infty\}$ let $\{C_i \}^k_{i=1}$ be a sequence of nonnegative random variables and let $\{Y_i\}_{i = 1}^k$ be a sequence of nonnegative iid random variables, independent of the $\{C_i\}$, having the same distribution as $Y$. For $\beta > 1$ set $p = \lceil \beta \rceil \in \{2, 3, 4, \dots\}$, and if $k = \infty$ assume that $\sum_{i=1}^\infty C_i Y_i < \infty$ a.s. Then,
$$E\left[ \left( \sum_{i=1}^k C_i Y_i \right)^\beta - \sum_{i=1}^k (C_iY_i)^\beta \right] \leq \left( E\left[ Y^{p-1} \right] \right)^{\beta/(p-1)} E\left[ \left(\sum_{i=1}^k C_i \right)^\beta \right].$$
\end{lem}
{\sc Remark:} Note that the preceding lemma does not exclude the case when \linebreak $E \left[ \left( \sum_{i=1}^k C_i Y_i \right)^\beta \right] = \infty$ but $E\left[ \left( \sum_{i=1}^k C_i Y_i \right)^\beta - \sum_{i=1}^k (C_iY_i)^\beta \right] < \infty$.
\bigskip
We now give estimates for the $\beta$-moments of $W_n$ for $\beta \in (0, 1]$ and $\beta > 1$ in Lemmas~\ref{L.MomentSmaller_1} and \ref{L.GeneralMoment}, respectively. Throughout the rest of the paper define \linebreak $\rho_\beta = E\left[ \sum_{i=1}^N C_i^\beta \right]$ for any $\beta > 0$, and $\rho \equiv \rho_1$.
\begin{lem} \label{L.MomentSmaller_1}
For $0 < \beta \leq 1$ and all $n \geq 0$,
$$E[ W_n^\beta ] \leq E[Q^\beta] \rho_\beta^{n}.$$
\end{lem}
\begin{lem} \label{L.GeneralMoment}
For $\beta > 1$ suppose $E[Q^\beta]< \infty$, $E\left[ \left( \sum_{i=1}^N C_i \right)^\beta \right] < \infty$, and $\rho \vee \rho_\beta < 1$. Then, there exists a constant $K_\beta > 0$ such that for all $n \geq 0$,
\begin{equation*}
E[ W_n^\beta ] \leq K_\beta ( \rho \vee \rho_\beta )^{n}.
\end{equation*}
\end{lem}
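To illustrate Lemma~\ref{L.MomentSmaller_1} numerically (a hedged sketch with hypothetical parameter choices, not part of the proof), take again $N \equiv 2$, $C_i$ iid Uniform$(0,0.8)$, $Q \equiv 1$, and $\beta = 1/2$; then $E[Q^\beta] = 1$, $\rho_\beta = 2E[C^\beta] = 2 \cdot 0.8^{1/2}/1.5$, and the bound $E[W_n^\beta] \le E[Q^\beta]\rho_\beta^n$ can be checked by simulation:

```python
import random

random.seed(2)

# Illustrative toy model: N = 2, C_i ~ Uniform(0, 0.8) iid, Q = 1.
def sample_W(n):
    """Sample W_n via the distributional recursion W_n = sum_k C_k W_{n-1,k}."""
    if n == 0:
        return 1.0
    return sum(random.uniform(0.0, 0.8) * sample_W(n - 1) for _ in range(2))

beta, n, reps = 0.5, 4, 20000
moment_est = sum(sample_W(n) ** beta for _ in range(reps)) / reps
# Lemma bound: E[W_n^beta] <= E[Q^beta] * rho_beta^n, with E[Q^beta] = 1 and
# rho_beta = 2 * E[C^beta] = 2 * 0.8**beta / (beta + 1) for C ~ Uniform(0, 0.8).
rho_beta = 2 * 0.8 ** beta / (beta + 1)
bound = rho_beta ** n
print(moment_est, bound)
```

In this toy case $\rho_\beta > 1$, so the bound grows with $n$ while the actual moment decays (by Jensen, $E[W_n^{1/2}] \le \rho^{n/2}$); the lemma asserts only an upper bound, which the simulation respects with a wide margin.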
Now we are ready to establish the finiteness of moments of the solution $R$ given by \eqref{eq:ExplicitConstr} in Section \ref{S.LinearRec}. The proof of this lemma uses well known contraction arguments, but for completeness we provide the details below.
\begin{lem} \label{L.Moments_R}
Assume that $E[Q^\beta] < \infty$ for some $\beta > 0$. In addition, suppose that either (i) $\rho_\beta < 1$ if $0 < \beta < 1$, or (ii) $(\rho \vee \rho_\beta) < 1$ and $E\left[\left( \sum_{i=1}^N C_i \right)^\beta\right] < \infty$ if $\beta \geq 1$. Then, $E[R^\gamma] < \infty$ for all $0 < \gamma \leq \beta$, and in particular, $R < \infty$ a.s. Moreover, if $\beta \geq 1$, $R^{(n)} \stackrel{L_\beta}{\to} R$, where $L_\beta$ stands for convergence in $(E|\cdot|^\beta)^{1/\beta}$ norm.
\end{lem}
{\sc Remark:} It is interesting to observe that for $\beta > 1$ the conditions $\rho_\beta < 1$ and $E\left[ \left( \sum_{i=1}^N C_i \right)^\beta \right] < \infty$ are consistent with Theorem 3.1 in \cite{Alsm_Kuhl_07}, Proposition 4 in \cite{Iksanov_04} and Theorem 2.1 in \cite{Liu_00},
which give the conditions for the finiteness of the $\beta$-moment of the solution to the related critical ($\rho_1 = 1$) homogeneous ($Q \equiv 0$) equation.
\begin{proof}
Let
$$\eta = \begin{cases} \rho_\beta, & \text{ if } \beta < 1, \\ \rho \vee \rho_\beta, & \text{ if } \beta \geq 1. \end{cases}$$
Then by Lemmas \ref{L.MomentSmaller_1} and \ref{L.GeneralMoment},
\begin{equation} \label{eq:EW_n}
E[W_n^\beta] \leq K \eta^n
\end{equation}
for some $K > 0$. Suppose $\beta \geq 1$; then, by monotone convergence and Minkowski's inequality,
\begin{align*}
E[R^\beta] &= E\left[ \lim_{n\to\infty} \left(\sum_{k=0}^n W_k \right)^\beta \right] = \lim_{n\to \infty} E\left[ \left(\sum_{k=0}^n W_k\right)^\beta \right] \\
&\leq \lim_{n\to\infty} \left( \sum_{k=0}^n \left( E[W_k^\beta] \right)^{1/\beta} \right)^\beta \leq K \left( \sum_{k=0}^\infty \eta^{k/\beta} \right)^\beta < \infty.
\end{align*}
This implies that $R < \infty$ a.s. When $0 < \beta < 1$, use the inequality $\left( \sum_{k=0}^n y_k \right)^\beta \leq \sum_{k=0}^n y_k^\beta$ for any $y_k \geq 0$ instead of Minkowski's inequality. Furthermore, for any \linebreak $0 < \gamma \leq \beta$,
$$E[R^\gamma] = E\left[ (R^\beta)^{\gamma/\beta}\right] \leq \left(E[R^\beta] \right)^{\gamma/\beta} < \infty.$$
That $R^{(n)} \stackrel{L_\beta}{\to} R$ whenever $\beta \geq 1$ follows from noting that $E[|R^{(n)} - R|^\beta]$ \linebreak $= E\left[ \left( \sum_{k = n+1}^\infty W_k \right)^\beta \right]$ and applying the same arguments used above to obtain the bound $E[|R^{(n)} - R|^\beta] \leq K \eta^{n+1}/(1 - \eta^{1/\beta})^\beta$.
\end{proof}
Next, we show that under some technical conditions, the iteration of recursion \eqref{eq:Linear} results in a process that converges in distribution to $R$ for any initial condition $R_0^*$. To this end, consider a weighted branching tree $\mathcal{T}_{Q,C}$, as defined in Section \ref{S.ModelDescription}. Now, define
$$R_n^* \triangleq R^{(n-1)} + W_n(R_0^*), \qquad n \geq 1,$$
where $R^{(n-1)}$ is given by \eqref{eq:R_nDef},
\begin{equation} \label{eq:LastWeights}
W_n(R_0^*) = \sum_{{\bf i} \in A_n} R_{0,{\bf i}}^* \Pi_{\bf i},
\end{equation}
and $\{ R_{0,{\bf i}}^*\}_{{\bf i} \in U}$ are iid copies of an initial value $R_0^*$, independent of the entire weighted tree $\mathcal{T}_{Q,C}$. It follows from \eqref{eq:LinearRecSamplePath} and \eqref{eq:LastWeights} that, for $n \geq 1$,
\begin{equation} \label{eq:StarRecursion}
R_{n+1}^* = \sum_{j=1}^{N} C_j R_{j}^{(n-1)} + Q + W_{n+1}(R_0^*) = \sum_{j=1}^{N} C_j \left( R_{j}^{(n-1)} + \sum_{{\bf i} \in A_{n,j}} R_{0,{\bf i}}^* \prod_{k=2}^{n+1} C_{(j,\dots,i_k)} \right) + Q,
\end{equation}
where $\{ R_{j}^{(n-1)} \}$ are independent copies of $R^{(n-1)}$ corresponding to the tree starting with individual $j$ in the first generation and ending on the $n$th generation, and $A_{n,j}$ is the set of all nodes in the $(n+1)$th generation that are descendants of individual $j$ in the first generation. It follows that
$$R_{n+1}^*= \sum_{j=1}^N C_j R_{n,j}^* + Q,$$
where $\{R_{n,j}^*\}$ are the expressions inside the parentheses in \eqref{eq:StarRecursion}. Clearly, $\{R_{n,j}^*\}$ are iid copies of $R_n^*$, which shows that $R_n^*$ is equal in distribution to the process obtained by iterating \eqref{eq:Linear} with initial condition $R_0^*$. The following lemma shows that $R_n^* \Rightarrow R$ for any initial condition $R_0^*$ satisfying a moment assumption, where $\Rightarrow$ denotes convergence in distribution.
\begin{lem} \label{L.Convergence}
For any initial condition $R_0^* \geq 0$, if $E[Q^\beta], E[(R_0^*)^\beta] < \infty$ and $\rho_\beta = E\left[ \sum_{i=1}^N C_i^\beta \right] < 1$ for some $0 < \beta \leq 1$, then
$$R_n^* \Rightarrow R,$$
with $E[R^\beta] < \infty$. Furthermore, under these assumptions, the distribution of $R$ is the unique solution with finite $\beta$-moment to recursion \eqref{eq:Linear}.
\end{lem}
\begin{proof}
Since $R^{(n)}\to R$ a.s., the result will follow from Slutsky's Theorem (see Theorem 25.4, p. 332 in \cite{Billingsley_1995}) once we show that $W_n(R_0^*) \Rightarrow 0$. To this end, note that $W_n(R_0^*)$, as defined by \eqref{eq:LastWeights}, is the same as $W_n$ if we replace the $Q_{{\bf i}}$ by the $R_{0,{\bf i}}^*$. Then, for every $\epsilon > 0$ we have that
\begin{align*}
P( W_n(R_0^*) > \epsilon) &\leq \epsilon^{-\beta} E[ W_n(R_0^*)^\beta] \\
&\leq \epsilon^{-\beta} \rho_\beta^n E[(R_0^*)^\beta] \qquad \text{(by Lemma \ref{L.MomentSmaller_1})} .
\end{align*}
Since by assumption the right-hand side converges to zero as $n \to \infty$, we obtain $W_n(R_0^*) \Rightarrow 0$, and thus $R_n^* \Rightarrow R$. Furthermore, $E[R^\beta] < \infty$ by Lemma \ref{L.Moments_R}. Clearly, under the assumptions, the distribution of $R$ represents the unique solution to \eqref{eq:Linear}, since any other possible solution with finite $\beta$-moment would have to converge to the same limit.
\end{proof}
{\sc Remarks:} (i) Note that when $E[N] < 1$ the branching tree is a.s. finite and no conditions on the $\{C_i\}$ are necessary for $R < \infty$ a.s. This corresponds to the second condition in Theorem 1 of \cite{Brandt_86}. (ii) In view of the same theorem from \cite{Brandt_86}, one could possibly establish the convergence of $R_n^* \Rightarrow R < \infty$ under milder conditions. However, since in this paper we only study the power tails of $R$, the assumptions of Lemma \ref{L.Convergence} are not restrictive. (iii) Note that if $E\left[ \sum_{i=1}^N C_i^\alpha \right] = 1$ with $\alpha \in (0, 1]$, then there might not be a $0 < \beta < \alpha$ for which $E\left[ \sum_{i=1}^N C_i^\beta \right] < 1$, e.g., the case of deterministic $C_i$'s that was studied in \cite{Rosler_93}.
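The geometric rate $\rho_\beta^n$ in the proof of Lemma~\ref{L.Convergence} also quantifies how quickly the initial condition is forgotten. The following sketch (hypothetical toy parameters again: $N \equiv 2$, $C_i$ iid Uniform$(0,0.8)$, $Q \equiv 1$, $\beta = 1$) iterates $R_{n+1}^* = \sum_j C_j R_{n,j}^* + Q$ from two constant initial conditions and compares the resulting means, whose exact gap is $\rho^n |r_0 - r_0'|$:

```python
import random

random.seed(3)

# Toy model (illustrative): N = 2, C_i ~ Uniform(0, 0.8) iid, Q = 1, rho = 0.8.
def sample_R_star(n, r0):
    """Sample R_n^* from R_{m+1}^* = sum_j C_j R_{m,j}^* + Q, with R_0^* = r0."""
    if n == 0:
        return r0
    return sum(random.uniform(0.0, 0.8) * sample_R_star(n - 1, r0)
               for _ in range(2)) + 1.0

def mc_mean(n, r0, reps=2000):
    return sum(sample_R_star(n, r0) for _ in range(reps)) / reps

# The exact means differ by rho^n * |r0 - r0'|, so the gap shrinks geometrically.
gap2 = abs(mc_mean(2, 10.0) - mc_mean(2, 0.0))   # ~ 10 * 0.8**2 = 6.4
gap8 = abs(mc_mean(8, 10.0) - mc_mean(8, 0.0))   # ~ 10 * 0.8**8 ~ 1.68
print(gap2, gap8)
```

This mirrors the proof: $W_n(R_0^*)$ carries the entire influence of $R_0^*$ and vanishes in probability at rate $\rho_\beta^n$.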
\subsection{Main result} \label{SS.MainLinear}
We now characterize the tail behavior of the distribution of the solution $R$ to the nonhomogeneous equation \eqref{eq:Linear},
as defined by \eqref{eq:ExplicitConstr}.
\begin{thm} \label{T.LinearRecursion}
Let $(Q, N, C_1, C_2, \dots)$ be a nonnegative random vector, with $N \in \mathbb{N} \cup \{\infty\}$, $P(Q > 0) > 0$
and $R$ be the solution to \eqref{eq:Linear} given by \eqref{eq:ExplicitConstr}.
Suppose that there exists $j \geq 1$ with $P(N\ge j,C_j>0)>0$ such that the measure $P\left(\log C_j\in du, C_j > 0, N\ge j\right)$ is nonarithmetic, and
that for some $\alpha > 0$, $E[Q^\alpha] < \infty$, $0 < E \left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] < \infty$ and $ E \left[ \sum_{i=1}^N C_i^\alpha \right] = 1$. In addition, assume
\begin{enumerate}
\item $E\left[ \sum_{i=1}^N C_i \right] < 1$ and $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$, if $\alpha > 1$; or,
\item $E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)}\right)^{1+\epsilon} \right] < \infty$ for some $0 < \epsilon< 1$, if $0 < \alpha \leq 1$.
\end{enumerate}
Then,
$$P(R > t) \sim H t^{-\alpha}, \qquad t \to \infty,$$
where $0 \leq H < \infty$ is given by
\begin{align*}
H &= \frac{1}{E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] } \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{i=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{i} R > v ) \right] \right) dv \\
&= \frac{E\left[ \left( \sum_{i=1}^N C_i R_i +Q \right)^\alpha - \sum_{i=1}^N (C_i R_i )^\alpha \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }.
\end{align*}
\end{thm}
{\sc Remarks:} (i) The nonhomogeneous equation has been previously studied for the special case when $Q$ and the $\{C_i\}$ are deterministic constants. In particular, Theorem 5 of \cite{Rosler_93} analyzes the solutions to \eqref{eq:Linear} when $Q$ and the $\{C_i\}$ are nonnegative deterministic constants,
which, when $\sum_{i=1}^N C_i^\alpha =1$, $\alpha>0$, implies that $C_i \leq 1$ for all $i$ and $\sum_{i} C_i^\alpha \log C_i \leq 0$, falling outside of the scope of this paper. The solutions to \eqref{eq:Linear} for the case when $Q$ and the $C_i$'s are real valued deterministic constants were analyzed in \cite{Alsm_Rosl_05}. For the very recent work (published on arXiv after the first draft of this paper) that characterizes all the solutions to \eqref{eq:Linear} for $Q$ and $\{C_i\}$ random see \cite{Alsm_Mein_10}. (ii) When $\alpha > 1$, the condition $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$ is needed to ensure that the tail of $R$ is not dominated by $N$. In particular, if the $\{C_i\}$ are iid and independent of $N$, the condition reduces to $E[N^\alpha] < \infty$ since $E[C^\alpha] < \infty$ is implied by the other conditions; see Theorems 4.2 and 5.4 in \cite{Jel_Olv_09}. Furthermore, when $0 < \alpha \leq 1$ the condition $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$ is redundant since $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] \leq E\left[ \sum_{i=1}^N C_i^\alpha \right] = 1$, and the additional condition $E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)} \right)^{1+\epsilon} \right] < \infty$ is needed. When the $\{C_i \}$ are iid and independent of $N$, the latter condition reduces to $E[N^{1+\epsilon}] < \infty$ (given the other assumptions), which is consistent with Theorem 4.2 in \cite{Jel_Olv_09}.
(iii) Note that the second expression for $H$ is more suitable for actually computing it, especially in the case of $\alpha$ being an integer, as will be stated in the forthcoming Corollary~\ref{C.explicit}. When $\alpha>1$ is not an integer, we can derive an explicit upper bound on $H$ by using Lemma~\ref{L.Max_Approx}.
Regarding the lower bound, the elementary inequality
$\left( \sum_{i=1}^k x_i \right)^\alpha \ge \sum_{i=1}^k x_i^\alpha$ for $\alpha\ge1$ and $x_i \geq 0$, yields
$$
H\ge \frac{E\left[ Q^\alpha \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }>0.
$$
Similarly, for $0<\alpha<1$, using the corresponding inequality
$\left( \sum_{i=1}^k x_i \right)^\alpha \le \sum_{i=1}^k x_i^\alpha$ for $0<\alpha\le1$, $x_i \geq 0$, we obtain
$H\le {E\left[ Q^\alpha \right]}/{\left(\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] \right)}.$
(iv) Let us also observe that the solution $R$, given by \eqref{eq:ExplicitConstr}, to equation \eqref{eq:Linear}
may be a constant (non power law) $R=r>0$ when
$P(r=Q+r \sum_{i=1}^N C_i)=1$. However, similarly as in remark (i), such a solution is excluded from the theorem since
$P(r=Q+r\sum_{i=1}^N C_i)=1$ implies $E[\sum_i C_i^\alpha \log C_i]\le 0, \alpha>0$.
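In concrete cases the tail index $\alpha$ of Theorem \ref{T.LinearRecursion} is found by solving $E\left[\sum_{i=1}^N C_i^\alpha\right] = 1$. As a hedged numerical sketch (the distribution below is a made-up example, not from the paper), take $N \equiv 2$ and $C_i$ iid with $P(C = 1.2) = 0.3$, $P(C = 0.3) = 0.7$; then $\varphi(a) = E[\sum_i C_i^a]$ is convex, and the relevant root, the one at which $E[\sum_i C_i^\alpha \log C_i] > 0$, can be located by bisection:

```python
import math

# phi(a) = E[sum_{i=1}^N C_i^a] for the toy two-point distribution above.
def phi(a):
    return 2.0 * (0.3 * 1.2 ** a + 0.7 * 0.3 ** a)

# phi is convex with phi(2) < 1 < phi(4) and increasing on [2, 4], so
# bisection finds the larger root of phi(a) = 1, i.e. the tail index alpha.
lo, hi = 2.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi(mid) < 1.0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)

# Derivative condition of the theorem: E[sum C_i^alpha log C_i] > 0.
deriv = 2.0 * (0.3 * 1.2 ** alpha * math.log(1.2)
               + 0.7 * 0.3 ** alpha * math.log(0.3))
print(alpha, deriv)
```

The smaller root of $\varphi(a) = 1$ sits on the branch where $\varphi' < 0$ and is not the tail index; restricting the bisection to the increasing branch selects the correct $\alpha$.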
Before proceeding with the proof of Theorem \ref{T.LinearRecursion}, we need the following two technical results; their proofs are given in Section \ref{SS.LinearProofs}. Lemma \ref{L.Max_Approx} below will also be used in subsequent sections for other recursions. With some abuse of notation, we will use throughout the paper $\max_{1 \leq i \leq N} x_i$ to denote $\sup_{1 \leq i < N+1} x_i$ in case $N = \infty$.
\begin{lem} \label{L.Max_Approx}
Suppose $(N, C_1, C_2, \dots)$ is a nonnegative random vector, with $N \in \mathbb{N} \cup \{\infty\}$ and let $\{R_i\}_{i \in \mathbb{N}}$ be a sequence of iid nonnegative random variables independent of $(N, C_1, C_2, \dots)$ having the same distribution as $R$. For $\alpha >0$, suppose that $\sum_{i=1}^N (C_i R_i)^\alpha < \infty$ a.s. and $E[R^\beta]< \infty$ for any $0 < \beta < \alpha$. Furthermore, assume that $E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)}\right)^{1+\epsilon} \right] < \infty$ for some $0 < \epsilon< 1$. Then,
\begin{align*}
0 &\leq \int_{0}^\infty \left( E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t ) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha -1} \, dt \\
&= \frac{1}{\alpha} E \left[ \sum_{i=1}^N \left(C_i R_i \right)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right] < \infty.
\end{align*}
\end{lem}
\begin{lem} \label{L.ExtraQ}
Let $(Q, N, C_1, C_2, \dots)$ be a nonnegative vector with $N \in \mathbb{N} \cup \{\infty\}$ and let $\{R_i\}$ be a sequence of iid random variables, independent of $(Q, N, C_1, C_2, \dots)$. Suppose that for some $\alpha > 1$ we have $E[Q^\alpha] < \infty$, $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$, $E[R^\beta] < \infty$ for any $0 < \beta < \alpha$, and $\sum_{i=1}^N C_i R_i < \infty$ a.s. Then
$$E \left[ \left( \sum_{i=1}^N C_i R_i + Q \right)^\alpha - \sum_{i=1}^N \left( C_i R_i \right)^\alpha \right] < \infty.$$
\end{lem}
\bigskip
\begin{proof}[Proof of Theorem \ref{T.LinearRecursion}]
By Lemma \ref{L.Moments_R}, we know that $E[R^\beta] < \infty$ for any $0 < \beta < \alpha$. To verify that $E\left[ \sum_{i=1}^N C_i^\gamma \right] < \infty$ for some $0 \leq \gamma < \alpha$ note that if $\alpha > 1$ we have, by the assumptions of the theorem and Jensen's inequality,
$$E\left[ \sum_{i=1}^N C_i^\gamma \right] \leq E\left[ \left( \sum_{i=1}^N C_i \right)^\gamma \right] \leq \left( E\left[ \left(\sum_{i=1}^N C_i \right)^\alpha \right] \right)^{\gamma/\alpha} < \infty$$
for any $1 \leq \gamma < \alpha$. If $0 < \alpha \leq 1$, then for $\gamma = \alpha (1+\epsilon/2)/(1+\epsilon) < \alpha$ we have
$$E\left[ \sum_{i=1}^N C_i^\gamma \right] \leq E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)} \right)^{1+\epsilon/2} \right] \leq \left( E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)} \right)^{1+\epsilon} \right] \right)^\frac{1+\epsilon/2}{1+\epsilon} < \infty.$$
The statement of the theorem with the first expression for $H$ will follow from Theorem \ref{T.NewGoldie} once we prove that condition \eqref{eq:Goldie_condition} holds. To this end, define
$$R^* = \sum_{i=1}^N C_i R_i + Q.$$
Then,
\begin{align*}
\left| P(R>t) - E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] \right| &\leq \left| P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right| \\
&\hspace{5mm} + \left| P\left( \max_{1\leq i \leq N} C_i R_i > t \right) - E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] \right|.
\end{align*}
Since $R \stackrel{\mathcal{D}}{=} R^* \geq \max_{1\leq i\leq N} C_i R_i$, the expression inside the first absolute value is nonnegative, so the absolute value can be dropped. For the second one, note that
\begin{align*}
&E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \\
&= E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] - E\left[ \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right] \geq 0.
\end{align*}
Now it follows that
\begin{align}
&\left| P(R > t) - E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] \right| \notag \\
&\leq P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \notag \\
&\hspace{5mm} + E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) . \label{eq:Term2}
\end{align}
Note that the integral corresponding to \eqref{eq:Term2} is finite by Lemma \ref{L.Max_Approx} once we show that the assumptions of Lemma~\ref{L.Max_Approx} are satisfied when $\alpha > 1$. Note that in this case we can choose $\epsilon > 0$ such that $\alpha/(1+\epsilon) \geq 1$ and use the inequality
\begin{equation} \label{eq:concaveSum}
\sum_{i=1}^k x_i^\beta \le \left( \sum_{i=1}^k x_i \right)^\beta
\end{equation}
for $\beta \ge 1$, $x_i \geq 0$, $k \leq \infty$ to obtain
$$E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)} \right)^{1+\epsilon} \right] \leq E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty.$$
Therefore, it only remains to show that
\begin{equation} \label{eq:RecVsMax}
\int_0^\infty \left( P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha-1} \, dt < \infty.
\end{equation}
To see this, note that $R \stackrel{\mathcal{D}}{=} R^*$ and $\mathop{\hskip0pt{1}}\nolimits(R^* > t) - \mathop{\hskip0pt{1}}\nolimits(\max_{1\leq i\leq N} C_i R_i > t) \geq 0$, and thus, by Fubini's theorem, we have
\begin{equation*}
\int_0^\infty \left( P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha-1} \, dt = \frac{1}{\alpha} E \left[ (R^*)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right].
\end{equation*}
If $0 < \alpha \leq 1$, we apply \eqref{eq:concaveSum} to obtain
$$E \left[ (R^*)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right] \leq E \left[ Q^\alpha + \sum_{i=1}^N (C_iR_i)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right],$$
which is finite by Lemma \ref{L.Max_Approx} and the assumption $E[Q^\alpha] < \infty$.
If $\alpha > 1$, we have $\left(\sum_{i=1}^k x_i \right)^\alpha \geq \sum_{i=1}^k x_i^\alpha$, $x_i \geq 0$, $k \leq \infty$, implying that we can split the expectation as follows
\begin{align*}
E \left[ (R^*)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right] &= E \left[ (R^*)^\alpha - \sum_{i=1}^N \left(C_i R_i \right)^\alpha \right] \\
&\hspace{5mm} + E \left[ \sum_{i=1}^N \left(C_i R_i \right)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right],
\end{align*}
which can be done since both expressions inside the expectations on the right-hand side are nonnegative. The first expectation is finite by Lemma \ref{L.ExtraQ} and the second expectation is again finite by Lemma \ref{L.Max_Approx}.
Finally, applying Theorem \ref{T.NewGoldie} gives
$$P(R > t) \sim H t^{-\alpha},$$
where $H = \left( E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] \right)^{-1} \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > v ) \right] \right) dv$.
To obtain the second expression for $H$ note that
\begin{align}
&\int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > v) \right] \right) dv \notag \\
&= \int_0^\infty v^{\alpha-1} E\left[\mathop{\hskip0pt{1}}\nolimits\left(\sum_{i=1}^N C_i R_i + Q > v \right) - \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > v) \right] \, dv \notag \\
&= E \left[ \int_0^\infty v^{\alpha-1} \left( \mathop{\hskip0pt{1}}\nolimits\left(\sum_{i=1}^N C_i R_i + Q > v \right) - \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > v) \right) dv \right] \label{eq:Fubini} \\
&= E \left[ \int_0^{\sum_{i=1}^N C_i R_i + Q} v^{\alpha-1} dv - \sum_{i=1}^N \int_0^{C_i R_i} v^{\alpha-1} dv \right] \label{eq:DiffIntegrals} \\
&= \frac{1}{\alpha} E\left[ \left( \sum_{i=1}^N C_i R_i + Q \right)^\alpha - \sum_{i=1}^N (C_i R_i)^\alpha \right] , \notag
\end{align}
where \eqref{eq:Fubini} is justified by Fubini's Theorem and the integrability of
\begin{align*}
&v^{\alpha-1} \left| \mathop{\hskip0pt{1}}\nolimits\left(\sum_{i=1}^N C_i R_i + Q > v \right) - \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > v) \right| \\
&\leq v^{\alpha-1} \left( \mathop{\hskip0pt{1}}\nolimits\left(\sum_{i=1}^N C_i R_i + Q > v \right) - \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i > v \right) \right) \\
&\hspace{5mm} + v^{\alpha-1} \left( \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > v) - \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i > v \right) \right),
\end{align*}
which is a consequence of \eqref{eq:RecVsMax} and Lemma \ref{L.Max_Approx}; and \eqref{eq:DiffIntegrals} follows from the observation that
$$v^{\alpha-1} \mathop{\hskip0pt{1}}\nolimits\left(\sum_{i=1}^N C_i R_i + Q > v\right) \qquad \text{and} \qquad v^{\alpha-1} \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > v)$$
are each almost surely absolutely integrable with respect to $v$ as well.
This completes the proof.
\end{proof}
As indicated earlier, when $\alpha\ge 1$ is an integer, we can obtain the following explicit expression for $H$.
\begin{cor} \label{C.explicit}
For integer $\alpha \geq 1$, and under the same assumptions of Theorem \ref{T.LinearRecursion}, the constant $H$ can be explicitly computed as a function of $E[R^k], E[C^k], E[Q^k]$, $0 \leq k \leq \alpha-1$. In particular, for $\alpha = 1$,
$$H = \frac{E[Q]}{E\left[ \sum_{i=1}^N C_i \log C_i \right] },$$
and for $\alpha = 2$,
\begin{align*}
H &= \frac{E[Q^2] + 2 E[R] E\left[ Q \sum_{i=1}^N C_i \right] + 2 (E[R])^2 E\left[ \sum_{i=1}^N \sum_{j=i+1}^N C_i C_j \right] }{2 E\left[ \sum_{i=1}^N C_i^2 \log C_i \right] },\\
E[R] &= \frac{E[Q]}{1-E\left[ \sum_{i=1}^N C_i \right]}.
\end{align*}
\end{cor}
\begin{proof}
The proof follows directly from multinomial expansions of the second expression for $H$ in Theorem~\ref{T.LinearRecursion}.
\end{proof}
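For instance (a hypothetical specialization, chosen only to make the $\alpha = 1$ formula concrete), take $N \equiv 2$, $Q \equiv 1$, and $C_i$ iid with $P(C = 1.5) = 0.3$ and $P(C = 1/14) = 0.7$; then $E[\sum_i C_i] = 1$ and $E[\sum_i C_i \log C_i] > 0$, and the constant of Corollary~\ref{C.explicit} evaluates to $H = E[Q]/E[\sum_i C_i \log C_i] \approx 9.9$ (we gloss over verifying the nonarithmetic condition for this toy law):

```python
import math

# Toy two-point law (illustrative): P(C = 1.5) = 0.3, P(C = 1/14) = 0.7,
# with N = 2 and Q = 1.
E_sum_C = 2.0 * (0.3 * 1.5 + 0.7 * (1 / 14))                 # = 1 (critical case)
E_sum_ClogC = 2.0 * (0.3 * 1.5 * math.log(1.5)
                     + 0.7 * (1 / 14) * math.log(1 / 14))    # > 0
EQ = 1.0
H = EQ / E_sum_ClogC       # Corollary C.explicit with alpha = 1
print(E_sum_C, E_sum_ClogC, H)
```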
\subsection{The homogeneous recursion} \label{SS.Homogeneous}
In this section we briefly describe how the methodology developed in the previous sections can
be applied to study the critical, $E\left[ \sum_{i=1}^N C_i \right] = 1$, homogeneous linear recursion
\begin{equation} \label{eq:LinearHomogeneous}
R \stackrel{\mathcal{D}}{=} \sum_{i=1}^N C_i R_i,
\end{equation}
where $(N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$ and $\{ R_i\}_{i \in \mathbb{N}}$ is a sequence of iid random variables independent of $(N, C_1, C_2, \dots)$ having the same distribution as $R$. This equation has been studied extensively in the literature under various different assumptions; for recent results see
\cite{Liu_00, Iksanov_04, Alsm_Kuhl_07} and the references therein.
Based on the model from Section~\ref{S.LinearRec} we can construct a solution to \eqref{eq:LinearHomogeneous} as follows.
Consider the process $\{W_n\}_{n\ge 0}$ defined by \eqref{eq:W_k} with $Q_{\bf i}\equiv 1$. Then, the $\{W_n\}$ satisfy in distribution
the homogeneous recursion in \eqref{eq:WnRec} and, given that $E\left[ \sum_{i=1}^N C_i \right] = 1$,
we have $E[W_n]=1$. Hence, $\{W_n\}_{n\ge 0}$ is a nonnegative martingale and by the martingale convergence theorem $W_n\to R$ a.s. with $E[R]\leq 1$.
Next, provided that
$$
E\left[ \sum_{i=1}^N C_i \log C_i \right] < 0 \quad\text{ and }\quad
E\left[ \left( \sum_{i=1}^N C_i \right) \log^+ \left( \sum_{i=1}^N C_i \right) \right] < \infty$$
it can be shown that $E[R]=1$, where $\log^+x=\max(\log x,0)$; see Theorem 1.1(d) in \cite{Alsm_Kuhl_07} (see also Theorem 2 in \cite{Liu_00}).
Furthermore, as argued in equation (1.9) of \cite{Alsm_Kuhl_07}, it can easily be shown that this $R$ is a solution to \eqref{eq:LinearHomogeneous}.
Note that the same construction of the solution $R$ on a branching tree was given in \cite{Alsm_Kuhl_07} and \cite{Liu_00}.
Since the solutions to \eqref{eq:LinearHomogeneous} are scale invariant, this construction also shows that for any $m>0$ there is a solution $R$
with mean $m$; or equivalently, it is enough to study the solutions with mean $1$.
Moreover, under additional assumptions it can be shown that this constructed $R$ is the only solution with mean $1$, e.g. see \cite{Liu_98,Liu_00,Iksanov_04}.
However, it is not the objective of this section to study the uniqueness of this solution; rather, we focus on studying the tail behavior of any such possible solution
(since our Theorem~\ref{T.NewGoldie} does not require the uniqueness of $R$).
As a side note, we point out that \eqref{eq:LinearHomogeneous} can have solutions if $E\left[ \sum_{i=1}^N C_i^\beta \right]=1$
for some $0<\beta<1$, as studied in \cite{Liu_98,Iksanov_04}.
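As an informal illustration of the martingale construction above, the iteration $W_n \stackrel{\mathcal{D}}{=} \sum_{i=1}^N C_i W_{(n-1),i}$ with $Q\equiv 1$ can be simulated by a Monte Carlo pool iteration. The concrete model below (fixed $N = 2$ and $C_1, C_2$ iid Uniform$(0,1)$, so that $E[C_1+C_2]=1$ and $E[\sum C_i \log C_i]=-1/2<0$) is an illustrative assumption, not taken from the text; for it, $E[W_n]=1$ for every $n$.

```python
import random

# Pool-based Monte Carlo iteration of W_n = sum_{i=1}^N C_i W_{n-1,i}
# with Q == 1.  Assumed example (not from the text): N = 2 fixed and
# C_1, C_2 iid Uniform(0,1), so E[C_1 + C_2] = 1 (the critical case)
# and E[sum_i C_i log C_i] = -1/2 < 0; hence E[W_n] = 1 for every n.
random.seed(0)
pool = [1.0] * 20000                      # W_0 = Q = 1
for _ in range(12):
    pool = [random.random() * random.choice(pool)
            + random.random() * random.choice(pool)
            for _ in range(len(pool))]
mean = sum(pool) / len(pool)
print(abs(mean - 1.0) < 0.1)  # -> True
```

The empirical mean stays near $1$, consistent with $\{W_n\}$ being a nonnegative mean-one martingale in this critical case.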
A version of the following theorem, with a possibly less explicit constant, was previously proved in Theorem~2.2 in \cite{Liu_00}
and Proposition~7 in \cite{Iksanov_04}; they also study the lattice case. Regarding the lattice case, as pointed out earlier in the remark after
Theorem~\ref{T.NewGoldie}, all the results in this paper can be developed for this case as well by using the corresponding renewal theorem.
\begin{thm} \label{T.LinearHomog}
Suppose that there exists $j \geq 1$ with $P(N\ge j,C_j>0)>0$ such that the measure $P(\log C_j\in du, C_j > 0, N\ge j)$ is nonarithmetic.
Suppose further that for some $\alpha > 1$, $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$, $E \left[ \sum_{i=1}^N C_i^\alpha \log^+ C_i \right] < \infty$ and $E\left[ \sum_{i=1}^N C_i \right] = E \left[ \sum_{i=1}^N C_i^\alpha \right] = 1$. Then, equation \eqref{eq:LinearHomogeneous} has a solution $R$ with $0<E[R] <\infty$ such that
$$P(R > t) \sim H t^{-\alpha}, \qquad t \to \infty,$$
where $0 \leq H < \infty$ is given by
\begin{align*}
H &= \frac{1}{E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] } \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{i=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{i} R > v ) \right] \right) dv \\
&= \frac{E\left[ \left( \sum_{i=1}^N C_i R_i \right)^\alpha - \sum_{i=1}^N (C_i R_i )^\alpha \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }.
\end{align*}
Furthermore, if $P(\tilde{N} \geq 2) >0$, where $\tilde{N}= \sum_{i=1}^N 1(C_i>0)$, then $H>0$.
\end{thm}
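Before turning to the proof, it may help to see how the exponent $\alpha$ arises concretely: once a model for $(N, C_1, C_2, \dots)$ is fixed, $\alpha$ is the root in $(1,\infty)$ of $\varphi(\theta) = E\left[ \sum_{i=1}^N C_i^\theta \right] = 1$ and can be located by bisection, using the convexity of $\varphi$. The lognormal model below ($N=2$, $\log C_i \sim \mathrm{Normal}(\mu, \sigma^2)$ with $\sigma^2 = \log 2$ and $\mu = -1.5\log 2$) is purely an illustrative assumption; it satisfies $\varphi(1)=1$ and has second root $\alpha = 2$.

```python
import math

# Assumed model for illustration: N = 2 and C_i iid lognormal,
# log C_i ~ Normal(mu, sigma2), so phi(theta) = E[sum_i C_i^theta]
# = 2*exp(mu*theta + sigma2*theta**2/2).  With sigma2 = log 2 and
# mu = -1.5*log 2 we get phi(1) = 1, and the second root of phi = 1
# is the tail exponent alpha of the theorem (here alpha = 2 exactly).
sigma2 = math.log(2.0)
mu = -1.5 * math.log(2.0)

def phi(theta):
    return 2.0 * math.exp(mu * theta + sigma2 * theta**2 / 2.0)

# phi is strictly convex with phi(1) = 1 and phi'(1) < 0, so bisect
# on (1 + eps, big]: phi < 1 to the left of the root, phi > 1 to the right.
lo, hi = 1.000001, 10.0
for _ in range(80):
    mid = (lo + hi) / 2.0
    if phi(mid) < 1.0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2.0
print(round(alpha, 6))  # -> 2.0
```

For this model $\varphi(\theta) = 2\exp(\mu\theta + \sigma^2\theta^2/2)$ in closed form, so the bisection recovers $\alpha = 2$ to machine precision.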
\begin{proof}
By the assumptions, the function $\varphi(\theta) \triangleq E\left[ \sum_{i=1}^N C_i^\theta \right]$ is convex, finite, and continuous on $[1, \alpha]$, since $\varphi(1) = \varphi(\alpha) = 1$.
Furthermore, by standard arguments, it can be shown that both $\varphi'(\theta)$ and $\varphi''(\theta)$ exist on the open interval $(1, \alpha)$ and, in particular,
$$\varphi''(\theta) = E\left[ \sum_{i=1}^N C_i^\theta (\log C_i)^2 \right].$$
Clearly, $\varphi''(\theta) > 0$ provided that $P( C_i \in \{0,1\},1 \leq i \leq N) < 1$. To see that this is indeed the case, note that $E\left[ \sum_{i=1}^N C_i \right] = 1$ implies that $P(C_i \equiv 0, 1 \leq i \leq N) < 1$, which combined with the nonarithmetic assumption yields $P( C_i \in \{0,1\},1 \leq i \leq N) < 1$. Hence, there exist $1 < \theta_1 < \theta_2 < \alpha$ such that $\varphi'(\theta_1) < 0$ and $\varphi'(\theta_2) > 0$,
implying by the monotonicity of $\varphi'(\cdot)$ and monotone convergence that
\begin{equation} \label{eq:Derivative_alpha}
0 < \varphi'(\alpha-) = E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] \leq E\left[ \sum_{i=1}^N C_i^{\alpha} \log^+ C_i \right] < \infty \qquad \text{and}
\end{equation}
\begin{equation*}
\varphi'(1+) = E\left[ \sum_{i=1}^N C_i \log C_i \right] < 0 .
\end{equation*}
The last expression and the observation $E\left[ \left( \sum_{i=1}^N C_i \right) \log^+ \left( \sum_{i=1}^N C_i \right) \right] < \infty$ (implied by $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$) yield, as argued at the beginning of this section, that recursion \eqref{eq:LinearHomogeneous} has a solution with finite positive mean, see Theorem 1.1(d) and equation (1.9) in \cite{Alsm_Kuhl_07} (see also Theorem 2 in \cite{Liu_00}).
Next, in order to apply Theorem \ref{T.NewGoldie}, we use \eqref{eq:Derivative_alpha} and $E[ R^\beta] < \infty$ for any $0 < \beta < \alpha$; the latter follows from Theorem 3.1 in \cite{Alsm_Kuhl_07} and the strict convexity of $\varphi(\cdot)$ argued above (see also, Proposition~4 in \cite{Iksanov_04} and Theorem~2.1 in \cite{Liu_00}). The rest of the proof, except for the $H>0$ part, proceeds exactly as that of Theorem~\ref{T.LinearRecursion} by setting $Q \equiv 0$.
Regarding the $H>0$ statement, the assumption $P(\tilde{N} \geq 2) >0$ implies that there exist $1 \leq n \leq \infty$ and $1\le i_1< i_2 < n+1$ such that $P(N = n, C_{i_1}>0,C_{i_2}>0)>0$, which further yields, for some $\delta>0$,
\begin{equation}
\label{eq:N2}
P(N\ge i_2, C_{i_1}>\delta,C_{i_2}>\delta)>0.
\end{equation}
Next, by using the inequality $\left( x_1 + x_2 \right)^\alpha \geq x_1^\alpha + x_2^\alpha$ for $x_1, x_2 \geq 0$ and $\alpha > 1$,
the second expression for $H$ in the theorem can be bounded from below by
\begin{equation}
\label{eq:Hlb1}
H\ge \frac{E\left[ 1(N\ge i_2) \left(\left(C_{i_1} R_{i_1} + C_{i_2} R_{i_2}\right)^\alpha - (C_{i_1} R_{i_1})^\alpha - (C_{i_2} R_{i_2})^\alpha\right) \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }.
\end{equation}
To further bound the numerator in \eqref{eq:Hlb1} we define the function
$$
f(x)=(1+x)^\alpha-1-x^\alpha- c x^{\alpha-\epsilon},
$$
where $0<\epsilon<\alpha-1$, $0<c<2^\gamma-1$ and $\gamma=\alpha-1-\epsilon$. It can be shown that $f(x) \geq 0$ for $x \in [0,1]$, since $f(0)=0$ and $f'(x)\ge \alpha x^\gamma ((1+1/x)^\gamma-1-c)\ge 0$ on $[0,1]$.
Hence, by using the inequality $f(x) \geq 0$, we derive for $x_1\ge 0, x_2\ge 0$, $\max\{x_1,x_2\}>0$
and $x={\min\{x_1,x_2\}}/{\max\{x_1,x_2\}}$
\begin{align*}
(x_1+x_2)^\alpha-x_1^\alpha-x_2^\alpha &= (\max\{x_1,x_2\})^\alpha \left((1+x)^\alpha - 1 - x^\alpha \right)
\\
&\geq c (\max\{x_1,x_2\})^\alpha x^{\alpha-\epsilon}\ge c (\min\{x_1,x_2\})^{\alpha};
\end{align*}
the inequality clearly holds even if $\max\{x_1,x_2\}=0$ since both of its sides are zero.
Thus, by applying this last inequality to \eqref{eq:Hlb1} and using \eqref{eq:N2}, we obtain
\begin{align*}
H &\geq \frac{c E\left[ 1(N\ge i_2) \left(\min \left\{C_{i_1} R_{i_1} , C_{i_2} R_{i_2}\right\}\right)^\alpha \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] } \\
&\geq \frac{c \delta^\alpha P(N\ge i_2, C_{i_1}>\delta, C_{i_2}>\delta) E[\left(\min \{R_{i_1},R_{i_2}\}\right)^\alpha] }{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }>0.
\end{align*}
This completes the proof.
\end{proof}
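The elementary two-variable inequality used at the end of the preceding proof, $(x_1+x_2)^\alpha - x_1^\alpha - x_2^\alpha \geq c \left(\min\{x_1,x_2\}\right)^\alpha$, can be spot-checked numerically (this is of course not a proof). The particular values $\alpha = 2.5$ and $\epsilon = 0.5$ below are illustrative choices.

```python
import random

# Numerical spot-check (not a proof) of the lower bound from the proof:
#   (x1 + x2)**a - x1**a - x2**a >= c * min(x1, x2)**a
# for a > 1, gamma = a - 1 - eps with 0 < eps < a - 1, and any
# 0 < c < 2**gamma - 1.  Values a = 2.5, eps = 0.5 are illustrative.
random.seed(1)
a, eps = 2.5, 0.5
gamma = a - 1.0 - eps          # = 1.0 here
c = 0.9 * (2.0**gamma - 1.0)   # any c strictly below 2**gamma - 1 works
ok = True
for _ in range(100000):
    x1, x2 = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    lhs = (x1 + x2)**a - x1**a - x2**a
    if lhs < c * min(x1, x2)**a - 1e-9:
        ok = False
print(ok)  # -> True
```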
{\sc Remarks:} (i) Note that the assumptions of Theorem \ref{T.LinearHomog} differ slightly from those of Theorem \ref{T.LinearRecursion} in the condition $0 < E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] <~\infty$, which is implied by $E\left[ \sum_{i=1}^N C_i^\alpha \log^+ C_i \right] < \infty$, the strict convexity of $\varphi(\theta) = E\left[ \sum_{i=1}^N C_i^\theta \right]$ and the hypothesis that $\varphi(1) = \varphi(\alpha) = 1$, as argued in the preceding proof.
(ii) The assumption $P(\tilde{N} \geq 2) >0$ is the minimal one to ensure the existence of a nontrivial solution,
see conditions (H0) in \cite{Liu_98} and (C4) in \cite{Alsm_Kuhl_07}. Otherwise, when $P(\tilde{N} \le 1) =1$, $W_n$ is a simple
multiplicative random walk with no branching; clearly, in this case our expression for $H$ reduces to zero.
Also, if $P(\sum_{i=1}^N C_i=1)=1$, $R$ can only be a constant; see the remark above
Theorem~2.1 in \cite{Liu_00}.
However, this last case is excluded from the theorem since $P(\sum_{i=1}^N C_i=1)=1$ implies $C_i\le 1$ a.s.,
which, in conjunction with $\varphi(\alpha) = 1, \alpha>1$, yields $P( C_i \in \{0,1\},1 \leq i \leq N) = 1$,
but this cannot happen due to the nonarithmetic assumption.
(iii) Note also that condition (C3) in \cite{Alsm_Kuhl_07} (equivalent to $P( C_i \in \{0,1\},1 \leq i \leq N) < 1$ in our notation)
is implied by the nonarithmetic assumption of our theorem.
Interestingly enough, if this last condition fails, Lemma~1.1 of \cite{Liu_98} shows that equation \eqref{eq:LinearHomogeneous} has no nontrivial solutions.
(iv) As stated earlier, a version of this theorem was proved in Theorem 2.2 of \cite{Liu_00}, by transforming recursion \eqref{eq:LinearHomogeneous} into a first order difference (autoregressive/perpetuity) equation on a different probability space, see Lemma 4.1 in \cite{Liu_00}. However, it appears that the method from \cite{Liu_00} does not extend to the nonhomogeneous and non-linear problems that we cover here, since the proof of Lemma~4.1 in \cite{Liu_00} critically depends on having both $E[R] = 1$ and $E\left[ \sum_{i=1}^N C_i \right] = 1$.
Similarly as in Corollary \ref{C.explicit}, the constant $H$ can be computed explicitly for integer $\alpha \geq 2$.
\begin{cor}
\label{C.explicitHom}
For integer $\alpha \geq 2$, and under the same assumptions as in Theorem \ref{T.LinearHomog}, the constant $H$ can be explicitly computed as a function of $E[R^k], E[C^k]$, $1 \leq k \leq \alpha-1$. In particular, for $\alpha = 2$,
$$H = \frac{ E\left[ \sum_{i=1}^N \sum_{j=i+1}^N C_i C_j \right] }{E\left[ \sum_{i=1}^N C_i^2 \log C_i \right] }.$$
\end{cor}
\begin{proof}
The proof follows directly from multinomial expansions of the second expression for $H$ in Theorem~\ref{T.LinearHomog}.
\end{proof}
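The $\alpha = 2$ formula above involves only moments of the weights, so it is straightforward to evaluate by Monte Carlo. The model below is an illustrative assumption, not from the text: $N = 2$ and $C_1, C_2$ iid lognormal with $\log C_i \sim \mathrm{Normal}(\mu, \sigma^2)$, $\sigma^2 = \log 2$, $\mu = -1.5\log 2$, which satisfies $E[C_1+C_2] = E[C_1^2+C_2^2] = 1$; for this model a direct computation gives the exact value $H = 1/(2\log 2) \approx 0.7213$.

```python
import math, random

# Monte Carlo evaluation of H for alpha = 2 under an assumed model:
# N = 2, C_1, C_2 iid lognormal with log C_i ~ Normal(mu, sigma2),
# sigma2 = log 2, mu = -1.5*log 2 (so E[C_1+C_2] = E[C_1^2+C_2^2] = 1).
# For this model the exact value is H = 1/(2*log 2) ~ 0.7213.
random.seed(2)
mu, sd = -1.5 * math.log(2.0), math.sqrt(math.log(2.0))
num = den = 0.0
n = 200000
for _ in range(n):
    c1 = math.exp(random.gauss(mu, sd))
    c2 = math.exp(random.gauss(mu, sd))
    num += c1 * c2                                     # sum_{i < j} C_i C_j
    den += c1**2 * math.log(c1) + c2**2 * math.log(c2)  # sum_i C_i^2 log C_i
H = num / den
print(abs(H - 1.0 / (2.0 * math.log(2.0))) < 0.1)  # -> True
```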
We also want to point out that for non-integer general $\alpha > 1$ we can use Lemma \ref{L.Alpha_Moments} to obtain the following bound for $H$,
$$H \leq \frac{ \left( E\left[ R^{p-1} \right] \right)^{\alpha/(p-1)} E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha\right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] },$$
where $p = \lceil \alpha \rceil$.
\section{The maximum recursion: $R = \left( \bigvee_{i=1}^N C_i R_i \right) \vee Q$} \label{S.MaxRec}
In order to show the general applicability of the implicit renewal theorem, we study in this section the following non-linear recursion:
\begin{equation} \label{eq:Maximum}
R \stackrel{\mathcal{D}}{=} \left(\bigvee_{i=1}^N C_i R_i \right) \vee Q,
\end{equation}
where $(Q, N, C_1, C_2, \dots)$ is a nonnegative random vector with $N \in \mathbb{N} \cup \{\infty\}$, \linebreak $P(Q > 0) > 0$ and $\{R_i\}_{i\in \mathbb{N}}$ is a sequence of iid random variables independent of $(Q, N, C_1, C_2, \dots)$ having the same distribution as $R$. Note that in the case of page ranking applications, where the $\{R_i\}$ represent the ranks of the neighboring pages, the potential ranking algorithm defined by the preceding recursion determines the rank of a page as a weighted version of the most highly ranked neighboring page. In other words, the highest ranked reference has the dominant impact. Similarly to the homogeneous linear case, this recursion was previously studied in \cite{Alsm_Rosl_08} under the assumption that $Q \equiv 0$, $N = \infty$, and the $\{C_i\}$ are real valued deterministic constants. The more closely related case of $Q \equiv 0$ and $\{C_i \} \geq 0$ being random was studied earlier in \cite{Jag_Ros_04}. Furthermore, these max-type stochastic recursions appear in a wide variety of applications, ranging from the average case analysis of algorithms to statistical physics; see \cite{Aldo_Band_05} for a recent survey.
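As an informal illustration, the fixed-point iteration of this max recursion can be simulated by a Monte Carlo pool iteration. The inputs below are assumptions made for the example only: $Q = 1$, $N = 2$ and $\log C_i \sim \mathrm{Normal}(-2.5\log 2,\, 2\log 2)$, for which $E[C_1^2+C_2^2]=1$ and $E[C_1^2\log C_1 + C_2^2\log C_2]>0$ (so the tail exponent is $\alpha = 2$), while $E[C_1+C_2] = 2^{-1/2} < 1$ keeps the iteration fast to converge.

```python
import math, random

# Monte Carlo fixed-point iteration of R <- max(C_1*R_1, C_2*R_2, Q),
# under assumed inputs (not from the text): Q = 1, N = 2 and
# log C_i ~ Normal(-2.5*log 2, 2*log 2).  Then E[C_1^2 + C_2^2] = 1,
# so alpha = 2, while E[C_1 + C_2] = 2**-0.5 < 1.
random.seed(3)
mu, sd = -2.5 * math.log(2.0), math.sqrt(2.0 * math.log(2.0))

def iterate(pool):
    out = []
    for _ in range(len(pool)):
        c1 = math.exp(random.gauss(mu, sd))
        c2 = math.exp(random.gauss(mu, sd))
        out.append(max(c1 * random.choice(pool), c2 * random.choice(pool), 1.0))
    return out

pool = [0.0] * 20000          # start the iteration from R_0* = 0
means = []
for _ in range(30):
    pool = iterate(pool)
    means.append(sum(pool) / len(pool))
# every sample is at least Q = 1, and the empirical means stabilize
print(means[-1] >= 1.0, abs(means[-1] - means[-5]) < 0.2)  # -> True True
```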
Using standard arguments, we start by constructing a solution to \eqref{eq:Maximum} on a tree and then we show that this solution is finite a.s. and unique under iterations and some moment conditions, similar to what was done for the nonhomogeneous linear recursion in Section \ref{S.LinearRec}. Our main result of this section is stated in Theorem \ref{T.MaximumRecursion}.
Following the same notation as in Section \ref{S.LinearRec}, define the process
\begin{equation} \label{eq:V_k}
V_n = \bigvee_{{\bf i} \in A_n} Q_{{\bf i}} \Pi_{{\bf i}}, \qquad n \geq 0,
\end{equation}
on the weighted branching tree $\mathcal{T}_{Q, C}$, as constructed in Section \ref{S.ModelDescription}.
Recall that the convention is that $(Q, N, C_1, C_2, \dots) = (Q_\emptyset, N_\emptyset, C_{(\emptyset, 1)}, C_{(\emptyset, 2)}, \dots)$ denotes the random vector corresponding to the root node.
With a possible abuse of notation relative to Section \ref{S.LinearRec}, define the process $\{R^{(n)}\}_{n \geq 0}$ according to
$$R^{(n)} = \bigvee _{k=0}^n V_k, \qquad n \geq 0.$$
Just as with the linear recursion from Section \ref{S.LinearRec}, it is not hard to see that $R^{(n)}$ satisfies the recursion
\begin{equation} \label{eq:MaxRecSamplePath}
R^{(n)} = \left( \bigvee_{j=1}^{N_\emptyset} C_{(\emptyset, j)} R_j^{(n-1)} \right) \vee Q_\emptyset = \left( \bigvee_{j=1}^{N} C_{j} R_j^{(n-1)} \right) \vee Q ,
\end{equation}
where $\{R_j^{(n-1)} \}$ are independent copies of $R^{(n-1)}$ corresponding to the tree starting with individual $j$ in the first generation and ending on the $n$th generation. One can also verify that
$$V_n = \bigvee_{k=1}^{N_{\emptyset}} C_{(\emptyset,k)} \bigvee_{(k,\dots, i_n) \in A_n}
Q_{(k,\dots, i_n)} \prod_{j=2}^n C_{(k,\dots,i_j)} \stackrel{\mathcal{D}}{=} \bigvee_{k=1}^N C_k V_{(n-1),k},$$
where $\{V_{(n-1),k}\}$ is a sequence of iid random variables independent of $(N, C_1, C_2, \dots)$ and having the same distribution as $V_{n-1}$.
We now define the random variable $R$ according to
\begin{equation}
\label{eq:maxR}
R \triangleq \lim_{n\to \infty} R^{(n)} = \bigvee_{k=0}^\infty V_k.
\end{equation}
Note that $R^{(n)}$ is monotone increasing sample-pathwise, so $R$ is well defined. Also, by monotonicity of $R^{(n)}$, \eqref{eq:MaxRecSamplePath} and monotone convergence, we obtain that $R$ solves
$$R = \left( \bigvee_{j=1}^{N_{\emptyset}} C_{(\emptyset, j)} R_j^{(\infty)} \right) \vee Q_{\emptyset} = \left( \bigvee_{j=1}^{N} C_{j} R_j^{(\infty)} \right) \vee Q,$$
where $\{R_j^{(\infty)} \}_{j \in \mathbb{N}}$ are iid copies of $R$, independent of $(Q, N, C_1, C_2, \dots)$.
Clearly this implies that $R$, as defined by \eqref{eq:maxR}, is a solution in distribution to \eqref{eq:Maximum}. However, this solution might be $\infty$.
Now, we establish the finiteness of the moments of $R$, and in particular that $R < \infty$ a.s., in the following lemma; its proof uses standard contraction arguments but is included for completeness.
\begin{lem} \label{L.Moments_R_Max}
Assume that $\rho_\beta = E\left[ \sum_{i=1}^N C_i^\beta \right]<1$ and
$E[Q^\beta] < \infty$ for some $\beta>0$. Then, $E[R^\gamma] < \infty$ for all $0 < \gamma \leq \beta$, and in particular, $R < \infty$ a.s.
Moreover, if $\beta \geq 1$, $R^{(n)} \stackrel{L_\beta}{\to} R$, where $L_\beta$ stands for convergence in $(E|\cdot|^\beta)^{1/\beta}$ norm.
\end{lem}
\begin{proof}
By following the same steps leading to \eqref{eq:PiMoments}, we obtain that for any $k\ge 0$,
\begin{equation}\label{eq:Vmoment}
E[V_k^\beta] = E\left[ \bigvee_{{\bf i} \in A_k} Q_{\bf i}^\beta \Pi_{\bf i}^\beta \right] \leq E\left[ \sum_{{\bf i} \in A_k} Q_{\bf i}^\beta \Pi_{\bf i}^\beta \right] = E[Q^\beta] \rho_\beta^k.\end{equation}
Hence,
$$E[R^\beta] = E\left[ \bigvee_{k=0}^\infty V_k^\beta \right] \leq E\left[ \sum_{k=0}^\infty V_k^\beta \right] \leq \frac{E[Q^\beta]}{1-\rho_\beta} < \infty,$$
implying that $E[R^\gamma] < \infty$ for all $0 < \gamma \leq \beta$.
That $R^{(n)} \stackrel{L_\beta}{\to} R$ whenever $\beta\geq 1$ follows from noting that
$E[|R^{(n)} - R|^\beta] \le E\left[ \left( \bigvee_{k = n+1}^\infty V_k \right)^\beta \right] \leq E\left[ \sum_{k = n+1}^\infty V_k^\beta \right]$
and applying the preceding geometric bound for $E[V_k^\beta]$.
\end{proof}
Just as with the linear recursion from Section \ref{S.LinearRec}, we can define the process $\{R_n^*\}$ as
\begin{equation*}
R_n^* \triangleq R^{(n-1)} \vee V_n(R_0^*), \qquad n \geq 1,
\end{equation*}
where
\begin{equation} \label{eq:MaxLastWeights}
V_n(R_0^*) = \bigvee_{{\bf i} \in A_n} R^*_{0,{\bf i}} \Pi_{{\bf i}},
\end{equation}
and $\{ R_{0,{\bf i}}^*\}_{{\bf i} \in U}$ are iid copies of an initial value $R_0^*$, independent of the entire weighted tree $\mathcal{T}_{Q,C}$. It follows from \eqref{eq:MaxRecSamplePath} and \eqref{eq:MaxLastWeights} that
\begin{equation*}
R_{n+1}^* = \bigvee_{j=1}^{N} C_j \left( R_{j}^{(n-1)} \vee \bigvee_{{\bf i} \in A_{n,j}} R_{0,{\bf i}}^* \prod_{k=2}^{n+1} C_{(j,\dots,i_k)} \right) \vee Q = \bigvee_{j=1}^N C_j R_{n,j}^* \vee Q,
\end{equation*}
where $\{ R_{j}^{(n-1)} \}$ are independent copies of $R^{(n-1)}$ corresponding to the tree starting with individual $j$ in the first generation and ending on the $n$th generation, and $A_{n,j}$ is the set of all nodes in the $(n+1)$th generation that are descendants of individual $j$ in the first generation. Moreover, $\{R_{n,j}^*\}$ are iid copies of $R_n^*$, and thus, $R_n^*$ is equal in distribution to the process obtained by iterating \eqref{eq:Maximum} with an initial condition $R_0^*$. This process can be shown to converge in distribution to $R$ for any initial condition $R_0^*$ satisfying the following moment condition.
\begin{lem} \label{L.ConvergenceMax}
For any $R_0^* \geq 0$, if $E[Q^\beta], E[(R_0^*)^\beta] < \infty$ and $\rho_\beta < 1$ for some $\beta >0$, then
$$R_n^* \Rightarrow R,$$
with $E[R^\beta] < \infty$. Furthermore, under these assumptions, the distribution of $R$ is the unique solution with finite $\beta$-moment to recursion \eqref{eq:Maximum}.
\end{lem}
\begin{proof}
The result will follow from Slutsky's Theorem (see Theorem 25.4, p. 332 in \cite{Billingsley_1995}) once we show that $V_n(R_0^*) \Rightarrow 0$. To this end, recall that $V_n(R_0^*)$ is the same as $V_n$ if we substitute the $Q_{{\bf i}}$ by the $R_{0,{\bf i}}^*$. Then, for every $\epsilon > 0$ we have that
\begin{align*}
P( V_n(R_0^*) > \epsilon) &\leq \epsilon^{-\beta} E[ V_n(R_0^*)^\beta] \\
&\leq \epsilon^{-\beta} \rho_\beta^n E[(R_0^*)^\beta] \qquad \text{(by \eqref{eq:Vmoment})} .
\end{align*}
Since by assumption the right-hand side converges to zero as $n \to \infty$, we conclude that $V_n(R_0^*) \Rightarrow 0$, and thus $R_n^* \Rightarrow R$. Furthermore, $E[R^\beta] < \infty$ by Lemma \ref{L.Moments_R_Max}. Clearly, under the assumptions, the distribution of $R$ represents the unique solution to \eqref{eq:Maximum} with finite $\beta$-moment, since the iteration started from any other such solution would have to converge to the same limit.
\end{proof}
Now we are ready to state the main result of this section.
\begin{thm} \label{T.MaximumRecursion}
Let $(Q, N, C_1, C_2, \dots)$ be a nonnegative random vector, with $N \in \mathbb{N} \cup \{\infty\}$, $P(Q > 0) > 0$
and $R$ be the solution to \eqref{eq:Maximum} given by \eqref{eq:maxR}.
Suppose that there exists $j \geq 1$ with $P(N\ge j,C_j>0)>0$ such that the measure $P(\log C_j\in du, C_j > 0, N\ge j)$ is nonarithmetic, and
that for some $\alpha > 0$, $E[Q^\alpha] < \infty$, $0 < E \left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] < \infty$ and $ E \left[ \sum_{i=1}^N C_i^\alpha \right] = 1$. In addition, assume
\begin{enumerate}
\item $E\left[ \left( \sum_{i=1}^N C_i \right)^\alpha \right] < \infty$, if $\alpha > 1$; or,
\item $E\left[ \left( \sum_{i=1}^N C_i^{\alpha/(1+\epsilon)}\right)^{1+\epsilon} \right] < \infty$ for some $0 < \epsilon< 1$, if $0 < \alpha \leq 1$.
\end{enumerate}
Then,
$$P(R > t) \sim H t^{-\alpha}, \qquad t \to \infty,$$
where $0 \leq H < \infty$ is given by
\begin{align*}
H &= \frac{1}{E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] } \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{i=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{i} R > v ) \right] \right) dv \\
&= \frac{E\left[ \left( \bigvee_{i=1}^N C_i R_i \right)^\alpha \vee Q^\alpha - \sum_{i=1}^N (C_i R_i )^\alpha \right]}{\alpha E\left[ \sum_{i=1}^N C_i^\alpha \log C_i \right] }.
\end{align*}
\end{thm}
\begin{proof}
By Lemma \ref{L.Moments_R_Max} we know that $E[R^\beta] < \infty$ for any $0 < \beta < \alpha$. The same arguments used in the proof of Theorem \ref{T.LinearRecursion} give that $E\left[ \sum_{i=1}^N C_i^\gamma \right] < \infty$ for some $0 \leq \gamma < \alpha$. The statement of the theorem with the first expression for $H$ will follow from Theorem \ref{T.NewGoldie} once we prove that condition \eqref{eq:Goldie_condition} holds. Define
$$R^* = \left( \bigvee_{i=1}^N C_i R_i \right) \vee Q.$$
Then,
\begin{align*}
\left| P(R>t) - E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] \right| &\leq \left| P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right| \\
&\hspace{5mm} + \left| P\left( \max_{1\leq i \leq N} C_i R_i > t \right) - E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_iR_i > t ) \right] \right|.
\end{align*}
Since $R \stackrel{\mathcal{D}}{=} R^* \geq \max_{1\leq i\leq N} C_i R_i$, the term inside the first absolute value is nonnegative, so the absolute value can be dropped. The integral corresponding to the second term is finite by Lemma \ref{L.Max_Approx}, just as in the proof of Theorem~\ref{T.LinearRecursion}. To see that the integral corresponding to the first term,
$$\int_0^\infty \left( P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha-1} \, dt, $$
is finite we proceed as in the proof of Theorem \ref{T.LinearRecursion}. First we use Fubini's Theorem to obtain that
\begin{align*}
&\int_0^\infty \left( P(R > t) - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha-1} \, dt \\
&= \frac{1}{\alpha} E \left[ (R^*)^\alpha - \left( \max_{1\leq i \leq N} C_i R_i \right)^\alpha \right] \\
&= \frac{1}{\alpha} E \left[ \left( \bigvee_{i=1}^N C_i R_i \right)^\alpha \vee Q^\alpha - \left( \bigvee_{i=1}^N C_i R_i \right)^\alpha \right] \\
&\leq \frac{E[Q^\alpha]}{\alpha}.
\end{align*}
Now, applying Theorem \ref{T.NewGoldie} gives
$$P(R > t) \sim H t^{-\alpha},$$
where $H = \left( E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] \right)^{-1} \int_{0}^\infty v^{\alpha-1} \left( P(R > v) - E\left[ \sum_{j=1}^{N} \mathop{\hskip0pt{1}}\nolimits(C_{j} R > v ) \right] \right) dv$.
The same steps used in the proof of Theorem \ref{T.LinearRecursion} give the second expression for $H$.
\end{proof}
\section{Other recursions and concluding remarks}
As an illustration of the generality of the methodology presented in this paper, we mention in this section other recursions that fall within its scope. One example that is closely related to the recursions from Sections~\ref{S.LinearRec} and \ref{S.MaxRec} is the following
\begin{equation} \label{eq:Max_Sum_Rec}
R \stackrel{\mathcal{D}}{=} \left( \bigvee_{i=1}^N C_i R_i \right) + Q,
\end{equation}
where $(Q, N, C_1, C_2, \dots)$ is a nonnegative vector with $N \in \mathbb{N} \cup \{\infty\}$, $P(Q > 0) > 0$, and $\{R_i \}_{i \in \mathbb{N}}$ is a sequence of iid random variables independent of $(Q, N, C_1, C_2, \dots)$ having the same distribution as $R$. Recursion \eqref{eq:Max_Sum_Rec} was termed ``discounted tree sums" in \cite{Aldo_Band_05}; for additional details on the existence and uniqueness of its solution see Section 4.4 in \cite{Aldo_Band_05}.
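The discounted tree sums recursion also lends itself to the same kind of Monte Carlo fixed-point iteration used above; the sketch below is illustrative only, with assumed inputs $Q = 1$, $N = 2$ and $\log C_i \sim \mathrm{Normal}(-2.5\log 2,\, 2\log 2)$, so that $E[C_1+C_2] = 2^{-1/2} < 1$ and the iteration contracts in mean.

```python
import math, random

# Monte Carlo fixed-point iteration for the "discounted tree sums"
# recursion R <- max(C_1*R_1, C_2*R_2) + Q, under assumed inputs
# (not from the text): Q = 1, N = 2 and log C_i ~ Normal(-2.5*log 2,
# 2*log 2), so rho_1 = E[C_1 + C_2] = 2**-0.5 < 1 and
# E[R] <= E[Q]/(1 - rho_1) ~ 3.41 by the crude bound max <= sum.
random.seed(5)
mu, sd = -2.5 * math.log(2.0), math.sqrt(2.0 * math.log(2.0))

def iterate(pool):
    out = []
    for _ in range(len(pool)):
        c1 = math.exp(random.gauss(mu, sd))
        c2 = math.exp(random.gauss(mu, sd))
        out.append(max(c1 * random.choice(pool), c2 * random.choice(pool)) + 1.0)
    return out

pool = [0.0] * 20000
for _ in range(30):
    pool = iterate(pool)
m = sum(pool) / len(pool)
# every sample is at least Q = 1, and the mean respects the crude bound
print(1.0 < m < 5.0)  # -> True
```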
As in \cite{Goldie_91}, it appears that one could study other non-linear recursions on trees using implicit renewal theory. For example, one could analyze the solution to the equation
$$R \stackrel{\mathcal{D}}{=} \sum_{i=1}^N \left( C_i R_i + B_i \sqrt{R_i} \right) + Q,$$
where $(Q, N, C_1, C_2, \dots)$ is a nonnegative vector with $N \in \mathbb{N} \cup \{\infty\}$, $P(Q > 0) > 0$, and $\{R, R_i \}_{i \geq 1}$ is a sequence of iid random variables independent of $(Q, N, C_1, C_2, \dots)$. Here, the primary difficulty would be in establishing the existence and uniqueness of the solution as well as the finiteness of moments.
\section{Proofs} \label{S.Proofs}
\subsection{Implicit renewal theorem on trees} \label{SS.ImplicitProofs}
We give in this section the proof of Lemma \ref{L.RenewalMeasure}.
\begin{proof}[Proof of Lemma \ref{L.RenewalMeasure}]
Observe that the measure $E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(\log C_i \in du, C_i>0) \right]$ is nonarithmetic (nonlattice) by our assumption,
since if, to the contrary, it were concentrated on a lattice set $L$, then on the complement $L^c$ of this set we would have
$$0=E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(\log C_i \in L^c, C_i>0 ) \right]\ge P(\log C_j \in L^c, C_j>0,N \ge j)>0,$$
which is a contradiction. Hence, $\eta$ is nonarithmetic as well, and it places no mass at $-\infty$ due to the exponential term $e^{\alpha u}$.
To see that it is a probability measure note that
\begin{align*}
\int_{-\infty}^{\infty} \eta(du) &= \int_{-\infty}^\infty e^{\alpha u}E\left[ \sum_{j=1}^N \mathop{\hskip0pt{1}}\nolimits(\log C_j \in du ) \right] \\
&= E\left[ \sum_{j=1}^N \int_{-\infty}^\infty e^{\alpha u} \mathop{\hskip0pt{1}}\nolimits(\log C_j \in du ) \right] \qquad \text{(by Fubini's Theorem)} \\
&= E\left[ \sum_{j=1}^N C_j^\alpha \right] = 1.
\end{align*}
Similarly, its mean is given by
$$\int_{-\infty}^\infty u \eta(du) = E\left[ \sum_{j=1}^N C_j^\alpha \log C_j \right] .$$
To show that $\mu_n = \eta^{*n}$ we proceed by induction. Let $\mathcal{F}_n$ denote the $\sigma$-algebra generated by $\left\{ (N_{\bf i}, C_{({\bf i}, 1)}, C_{({\bf i}, 2)}, \dots) : {\bf i} \in A_j, 0 \leq j \leq n-1 \right\}$, $\mathcal{F}_0 = \sigma(\emptyset, \Omega)$, and for each ${\bf i} \in A_n$ set $V_{{\bf i}} = \log \Pi_{{\bf i}}$. Hence, using this notation we derive
\begin{align*}
\mu_{n+1}((-\infty,t]) &= \int_{-\infty}^t e^{\alpha u} E\left[ \sum_{{\bf i} \in A_{n}} \sum_{j=1}^{N_{{\bf i}}} \mathop{\hskip0pt{1}}\nolimits(V_{{\bf i}} + \log C_{({\bf i},j)} \in du ) \right] \\
&= \int_{-\infty}^t e^{\alpha u} E\left[ \sum_{{\bf i} \in A_{n}} E\left[ \left. \sum_{j=1}^{N_{{\bf i}}} \mathop{\hskip0pt{1}}\nolimits(V_{{\bf i}} + \log C_{({\bf i},j)} \in du ) \right| \mathcal{F}_n \right] \right] \\
&= E\left[ \sum_{{\bf i} \in A_{n}} e^{\alpha V_{\bf i}} \int_{-\infty}^t e^{\alpha (u- V_{\bf i})} E\left[ \left. \sum_{j=1}^{N_{{\bf i}}} \mathop{\hskip0pt{1}}\nolimits( \log C_{({\bf i},j)} \in du - V_{\bf i} ) \right| \mathcal{F}_n \right] \right] \\
&=E\left[ \sum_{{\bf i} \in A_{n}} e^{\alpha V_{\bf i}} \eta((-\infty, t - V_{{\bf i}}]) \right] \\
&= \int_{-\infty}^\infty \eta((-\infty,t-v]) \mu_n(dv),
\end{align*}
where in the fourth equality we used the independence of $(N_{{\bf i}}, C_{({\bf i}, 1)}, C_{({\bf i}, 2)}, \dots)$ from $\mathcal{F}_n$. Therefore $\mu_{n+1}(dt) = (\eta*\mu_n)(dt)$ and the induction hypothesis gives the result.
\end{proof}
\subsection{Moments of $W_n$} \label{SS.MomentsProofs}
In this section we prove Lemmas \ref{L.Alpha_Moments}, \ref{L.MomentSmaller_1} and \ref{L.GeneralMoment}. We also include a result that provides bounds for $E[W_n^p]$ for integer $p$, which will be used in the proof of Lemma \ref{L.GeneralMoment}.
\begin{proof}[Proof of Lemma \ref{L.Alpha_Moments}]
Let $p = \lceil \beta \rceil \in \{2,3,\dots\}$ and $\gamma = \beta/p \in (\beta/(\beta+1), 1]$. Suppose first that $k \in \mathbb{N}$ and define $A_p(k) = \{ (j_1, \dots, j_k) \in \mathbb{N}^k: j_1 + \dots + j_k = p, 0 \leq j_i < p\}$. Then, for any sequence of nonnegative numbers $\{ y_i \}_{i \geq 1}$ we have
\begin{align}
\left( \sum_{i=1}^k y_i \right)^\beta &= \left( \sum_{i=1}^k y_i \right)^{p \gamma} \notag \\
&= \left( \sum_{i=1}^k y_i^p + \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} y_1^{j_1} \cdots y_k^{j_k} \right)^\gamma \notag \\
&\leq \sum_{i=1}^k y_i^{p\gamma} + \left( \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} y_1^{j_1} \cdots y_k^{j_k} \right)^\gamma, \label{eq:scalarBound}
\end{align}
where for the last step we used the well known inequality $\left( \sum_{i=1}^k x_i \right)^\gamma \leq \sum_{i=1}^k x_i^\gamma$ for $0 < \gamma \leq 1$ and $x_i \geq 0$. We now use the conditional Jensen's inequality to obtain
\begin{align*}
&E\left[ \left( \sum_{i=1}^k C_i Y_i \right)^\beta - \sum_{i=1}^k (C_iY_i)^{\beta} \right] \\
&\leq E\left[ \left( \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} (C_1Y_1)^{j_1} \cdots (C_k Y_k)^{j_k} \right)^\gamma \right] \qquad \text{(by \eqref{eq:scalarBound})} \\
&\leq E\left[ \left( E\left[ \left. \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} (C_1Y_1)^{j_1} \cdots (C_k Y_k)^{j_k} \right| C_1,\dots, C_k \right] \right)^\gamma \right] \\
&= E \left[ \left( \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} C_1^{j_1} \cdots C_k^{j_k} E\left[ \left. Y_1^{j_1} \cdots Y_k^{j_k} \right| C_1,\dots, C_k \right] \right)^\gamma \right].
\end{align*}
Since $\{Y_i\}$ is a sequence of iid random variables having the same distribution as $Y$, independent of the $C_i$'s we have that
$$E\left[ \left. Y_1^{j_1} \cdots Y_k^{j_k} \right| C_1,\dots, C_k \right] = E\left[ Y_1^{j_1} \cdots Y_k^{j_k} \right] = || Y ||_{j_1}^{j_1} \cdots || Y ||_{j_k}^{j_k},$$
where $|| Y ||_\kappa = \left( E[|Y|^\kappa] \right)^{1/\kappa}$ for $\kappa \geq 1$ and $|| Y ||_0 \triangleq 1$. Since $|| Y ||_\kappa$ is increasing for $\kappa \geq 1$ it follows that $|| Y ||_{j_i}^{j_i} \leq || Y ||_{p-1}^{j_i}$. Hence
$$|| Y ||_{j_1}^{j_1} \cdots || Y ||_{j_k}^{j_k} \leq || Y ||_{p-1}^p,$$
which in turn implies that
\begin{align*}
E\left[ \left( \sum_{i=1}^k C_iY_i \right)^\beta - \sum_{i=1}^k (C_i Y_i)^{\beta} \right] &\leq E \left[ \left( \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} C_1^{j_1} \cdots C_k^{j_k} || Y ||_{p-1}^p \right)^\gamma \right] \\
&= || Y ||_{p-1}^\beta E\left[ \left( \left(\sum_{i=1}^k C_i \right)^p - \sum_{i=1}^k C_i^p \right)^\gamma \right] \\
&\leq || Y ||_{p-1}^\beta E\left[ \left(\sum_{i=1}^k C_i \right)^\beta \right].
\end{align*}
This completes the proof for $k$ finite.
When $k = \infty$, first note that from the well known inequality $\left( x_1 + x_2 \right)^\beta \geq x_1^\beta + x_2^\beta$ for $x_1, x_2 \geq 0$ and $\beta > 1$ we obtain the monotonicity in $k$ of the following difference
$$\left( \sum_{i=1}^{k+1} C_i Y_i \right)^\beta - \sum_{i=1}^{k+1} (C_i Y_i)^\beta \geq \left( \sum_{i=1}^{k} C_i Y_i \right)^\beta - \sum_{i=1}^k (C_i Y_i)^\beta \geq 0.$$
Hence,
\begin{align}
&E\left[ \left( \sum_{i=1}^\infty C_iY_i \right)^\beta - \sum_{i=1}^\infty (C_i Y_i)^{\beta} \right] \notag \\
&= \lim_{k\to \infty} E\left[ \left( \left( \sum_{i=1}^k C_iY_i \right)^\beta - \sum_{i=1}^k (C_i Y_i)^{\beta} \right) \right] \label{eq:Exchange1} \\
&\leq \lim_{k\to \infty} E\left[ \left( \sum_{(j_1,\dots,j_k) \in A_p(k)} \binom{p}{j_1,\dots,j_k} (C_1Y_1)^{j_1} \cdots (C_k Y_k)^{j_k} \right)^\gamma \right] \notag \\
&\leq \lim_{k \to \infty} || Y ||_{p-1}^\beta E\left[ \left(\sum_{i=1}^k C_i \right)^\beta \right] \notag \\
&= || Y ||_{p-1}^\beta E\left[ \left(\sum_{i=1}^\infty C_i \right)^\beta \right] , \label{eq:Exchange2}
\end{align}
where \eqref{eq:Exchange1} and \eqref{eq:Exchange2} are justified by monotone convergence.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L.MomentSmaller_1}]
We use the well known inequality $\left( \sum_{i=1}^k y_i \right)^\beta \leq \sum_{i=1}^k y_i^\beta$ for $0 < \beta \leq 1$, $y_i \geq 0$, $k \leq \infty$, to obtain
\begin{align}
E[W_n^\beta] &= E\left[ \left( \sum_{i=1}^N C_i W_{(n-1),i} \right)^\beta \right] \notag \\
&\leq E\left[ \sum_{i=1}^N C_i^\beta W_{(n-1),i}^\beta \right] \notag \\
&= E[W_{n-1}^\beta] \rho_\beta \qquad \text{(by conditioning on $N, C_i$ and Fubini's theorem)} \notag \\
&\leq \rho_\beta^{n} E[W_0^\beta] \qquad \text{(iterating $n$ times)} \notag \\
&= \rho_\beta^{n} E[Q^\beta] .
\end{align}
\end{proof}
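The geometric bound $E[W_n^\beta] \leq \rho_\beta^{n} E[Q^\beta]$ just derived is easy to check numerically. The model below is an illustrative assumption: $Q = 1$, $N = 2$ and $C_1, C_2$ iid Uniform$(0, 1/2)$ with $\beta = 1$, so $\rho_1 = 2E[C_1] = 1/2$ and in fact $E[W_n] = (1/2)^n$ exactly.

```python
import random

# Spot-check of E[W_n^beta] <= rho_beta^n * E[Q^beta] for beta = 1,
# in an assumed example: Q = 1, N = 2 and C_1, C_2 iid Uniform(0, 1/2),
# so rho_1 = 2*E[C_1] = 1/2 and E[W_n] = (1/2)^n exactly.
random.seed(4)
pool = [1.0] * 50000                       # W_0 = Q = 1
ok = True
for n in range(1, 6):
    pool = [0.5 * random.random() * random.choice(pool)
            + 0.5 * random.random() * random.choice(pool)
            for _ in range(len(pool))]
    mean = sum(pool) / len(pool)
    if mean > 0.5**n * 1.05:               # allow 5% Monte Carlo slack
        ok = False
print(ok)  # -> True
```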
Before proving the moment inequality for general $\beta > 1$, we will show first the corresponding result for integer moments.
\begin{lem} \label{L.IntegerMoment}
Let $p \in \{2, 3, \dots\}$ and recall that $\rho_p = E\left[ \sum_{i=1}^N C_i^p \right]$, $\rho \equiv \rho_1$. Suppose $E[Q^p]< \infty$, $E\left[ \left( \sum_{i=1}^N C_i \right)^p \right] < \infty$, and $\rho \vee \rho_p < 1$. Then, there exists a constant $K_p > 0$ such that
$$E[ W_n^p ] \leq K_p \left( \rho \vee \rho_p \right)^n $$
for all $n \geq 0$.
\end{lem}
\begin{proof}
We will give an induction proof in $p$. For $p = 2$ we have
\begin{align*}
E[W_n^2] &= E\left[ \left( \sum_{i=1}^N C_i W_{(n-1),i} \right)^2 \right] \\
&= E\left[ \sum_{i=1}^N C_i^2 W_{(n-1),i}^2 + \sum_{i \neq j} C_i W_{(n-1),i} C_j W_{(n-1),j} \right] \\
&= E[W_{n-1}^2] E\left[ \sum_{i=1}^N C_i^2 \right] + \left( E[W_{n-1}] \right)^2 E\left[ \sum_{i \neq j} C_i C_j \right] \\
&\hspace{4.5cm} \text{(by conditioning on $N, C_i$ and Fubini's theorem)} \\
&\leq \rho_2 E[W_{n-1}^2] + E\left[ \left( \sum_{i=1}^N C_i \right)^2 \right] \left( E[W_{n-1}] \right)^2 .
\end{align*}
Using the preceding recursion and noting that,
$$E[W_{n-1}] = \rho^{n-1} E[Q],$$
we obtain
\begin{equation} \label{eq:2_recur}
E[W_n^2] \leq \rho_2 E[W_{n-1}^2] + K \rho^{2(n-1)},
\end{equation}
where $K =E\left[ \left( \sum_{i=1}^N C_i \right)^2 \right] \left( E[Q] \right)^2$. Now, iterating \eqref{eq:2_recur} gives
\begin{align*}
E[W_n^2] &\leq \rho_2 \left( \rho_2 E[W_{n-2}^2] + K \rho^{2(n-2)} \right) + K \rho^{2(n-1)} \\
&\leq \rho_2^{n-1} \left( \rho_2 E[W_{0}^2] + K \right) + K \sum_{i=0}^{n-2} \rho_2^i \, \rho^{2(n-1-i)} \\
&= \rho_2^n E[Q^2] + K \sum_{i=0}^{n-1} \rho_2^i \, \rho^{2(n-1-i)} \\
&\leq (\rho_2 \vee \rho)^n E[Q^2] + K (\rho_2 \vee \rho)^n \sum_{i=0}^{n-1} (\rho_2 \vee \rho)^{n-2 - i } \\
&\leq \left( E[Q^2] + \frac{K}{\rho_2 \vee \rho} \sum_{j=0}^{\infty} (\rho_2 \vee \rho)^{j} \right) (\rho_2 \vee \rho)^n \\
&= K_2 (\rho_2 \vee \rho)^n ,
\end{align*}
which completes the case $p = 2$.
Suppose now that there exists a constant $K_{p-1} > 0$ such that
\begin{equation} \label{eq:Induction_p}
E[W_n^{p-1}] \leq K_{p-1} \left( \rho_{p-1} \vee \rho \right)^n
\end{equation}
for all $n \geq 0$. Then, by Lemmas \ref{L.Alpha_Moments} and \ref{L.MomentSmaller_1}, we have
\begin{align*}
E[W_n^p] &= E\left[ \left( \sum_{i=1}^N C_i W_{(n-1),i} \right)^p - \sum_{i=1}^N C_i^p W_{(n-1),i}^p \right] + E\left[ \sum_{i=1}^N C_i^p W_{(n-1),i}^p \right] \\
&\leq \left( E\left[ W_{n-1}^{p-1} \right] \right)^{p/(p-1)} E\left[ \left( \sum_{i=1}^N C_i \right)^p \right] + E\left[ \sum_{i=1}^N C_i^p W_{(n-1),i}^p \right] \\
&= \left( E\left[ W_{n-1}^{p-1} \right] \right)^{p/(p-1)} E\left[ \left( \sum_{i=1}^N C_i \right)^p \right] + \rho_p E\left[ W_{n-1}^{p} \right] \\
&\leq E\left[ \left( \sum_{i=1}^N C_i \right)^p \right] (K_{p-1})^{p/(p-1)} (\rho_{p-1} \vee \rho)^{(n-1)p/(p-1)} + \rho_p E[W_{n-1}^p],
\end{align*}
where in the second equality we conditioned on $N, C_i$ and used Fubini's theorem, and the last inequality corresponds to the induction hypothesis. We then obtain the recursion
\begin{equation} \label{eq:p_recur}
E[W_n^p] \leq \rho_p E[W_{n-1}^p] + K (\rho_{p-1} \vee \rho)^{\frac{(n-1)p}{p-1}},
\end{equation}
where $K = E\left[ \left( \sum_{i=1}^N C_i \right)^p \right] (K_{p-1})^{p/(p-1)}$. Iterating \eqref{eq:p_recur} as for the case $p=2$ gives
\begin{align}
E[W_n^p] &\leq \rho_p^n E[Q^p] + K \sum_{i=0}^{n-1} \rho_p^i \, (\rho_{p-1} \vee \rho)^{(n-1-i)p/(p-1)} \notag \\
&\leq (\rho_p \vee \rho)^n E[Q^p] + K \sum_{i=0}^{n-1} (\rho_p \vee \rho)^{((n-1)p -i)/(p-1) } \label{eq:MGFconvexity} \\
&= (\rho_p \vee \rho)^n E[Q^p] + K (\rho_p \vee \rho)^{n-1} \sum_{i=0}^{n-1} (\rho_p \vee \rho)^{(n- i - 1)/(p-1)} \notag \\
&\leq \left( E[Q^p] + K (\rho_p \vee \rho)^{-1} \sum_{j=0}^{\infty} (\rho_p \vee \rho)^{\frac{j}{p-1}} \right) (\rho_p \vee \rho)^n \notag \\
&= K_p (\rho_p \vee \rho)^n, \notag
\end{align}
where in \eqref{eq:MGFconvexity} we used the convexity of $\varphi(\beta) = \rho_\beta$, i.e. $\rho_{p-1} = \varphi(p-1) \leq \varphi(1) \vee \varphi(p) = \rho \vee \rho_p$.
\end{proof}
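The convexity step used at \eqref{eq:MGFconvexity}, $\rho_{p-1} \leq \rho \vee \rho_p$, can be spot-checked for arbitrary nonnegative weights (random illustrative samples, our choice); since $b \mapsto \sum_i C_i^b$ is convex samplewise, the inequality survives averaging:

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    C = rng.exponential(size=(5000, int(rng.integers(1, 6))))  # rows ~ i.i.d. copies of (C_1,...,C_N)
    p = int(rng.integers(2, 6))
    rho = lambda b: np.mean((C ** b).sum(axis=1))   # Monte Carlo estimate of E[sum_i C_i^b]
    # convexity of b -> sum_i C_i^b holds samplewise, so it survives averaging
    assert rho(p - 1) <= max(rho(1), rho(p)) + 1e-9
print("rho_{p-1} <= rho v rho_p on all samples")
```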
The proof for the general $\beta$-moment, $\beta > 1$, is given below.
\begin{proof}[Proof of Lemma \ref{L.GeneralMoment}]
Set $p = \lceil \beta \rceil \geq \beta > 1$. Then, by Lemmas \ref{L.Alpha_Moments} and \ref{L.MomentSmaller_1},
\begin{align*}
E[W_n^\beta] &= E\left[ \left( \sum_{i=1}^N C_i W_{(n-1),i} \right)^\beta - \sum_{i=1}^N C_i^\beta W_{(n-1),i}^\beta \right] + E\left[ \sum_{i=1}^N C_i^\beta W_{(n-1),i}^\beta \right] \\
&\leq \left( E\left[ W_{n-1}^{p-1} \right] \right)^{\beta/(p-1)} E\left[ \left( \sum_{i=1}^N C_i \right)^\beta \right] + E\left[ \sum_{i=1}^N C_i^\beta W_{(n-1),i}^\beta \right] \\
&= \left( E\left[ W_{n-1}^{p-1} \right] \right)^{\beta/(p-1)} E\left[ \left( \sum_{i=1}^N C_i \right)^\beta \right] + \rho_\beta E\left[ W_{n-1}^{\beta} \right] .
\end{align*}
By Lemma \ref{L.IntegerMoment},
\begin{align*}
E[W_n^\beta] &\leq \rho_\beta E[W_{n-1}^\beta] + E\left[ \left( \sum_{i=1}^N C_i \right)^\beta \right] (K_{p-1})^{\beta/(p-1)} (\rho_{p-1} \vee \rho)^{(n-1)\beta/(p-1)} \\
&= \rho_\beta E[ W_{n-1}^\beta] + K (\rho_{p-1} \vee \rho)^{(n-1)\gamma},
\end{align*}
where $\gamma = \beta/(p-1) > 1$. Finally, iterating the preceding bound $n-1$ times gives
\begin{align*}
E[W_n^\beta] &\leq \rho_\beta^n E[W_0^\beta] + K \sum_{i=0}^{n-1} \rho_\beta^i (\rho \vee \rho_{p-1})^{\gamma(n-1-i)} \\
&\leq E[W_0^\beta] (\rho \vee \rho_\beta)^n + K \sum_{i=0}^{n-1} (\rho \vee \rho_\beta)^{\gamma(n-1-i) + i} \qquad \text{(by convexity of $\varphi(\beta) = \rho_\beta$)} \\
&= E[Q^\beta] (\rho \vee \rho_\beta)^n + K (\rho \vee \rho_\beta)^{n-1} \sum_{i=0}^{n-1} (\rho \vee \rho_\beta)^{(\gamma-1) i} \\
&\leq K_\beta (\rho \vee \rho_\beta)^n .
\end{align*}
This completes the proof.
\end{proof}
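As an illustration of the geometric decay just proven, one can simulate the recursion for a concrete (hypothetical) specification: deterministic $N = 2$, $C_i \equiv 0.4$, and $Q$ exponential with mean one, so that $\rho = 0.8$ and $\rho_\beta = 2 \cdot 0.4^\beta$. A Monte Carlo sketch for $\beta = 1.5$, with a generous stand-in for $K_\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, c, N = 1.5, 0.4, 2            # illustrative: rho = N*c = 0.8, rho_beta = N*c**beta ~ 0.506
m = 4000                            # Monte Carlo sample size
for n in range(6):
    # with deterministic weights, W_n = c^n * (sum of N^n i.i.d. copies of Q), Q ~ Exp(1)
    leaves = rng.exponential(size=(m, N ** n))
    est = np.mean((c ** n * leaves.sum(axis=1)) ** beta)
    bound = max(N * c, N * c ** beta) ** n      # (rho v rho_beta)^n
    print(n, round(est, 4), round(bound, 4))
    assert est <= 3.0 * bound                   # a generous stand-in for K_beta
```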
\subsection{Linear nonhomogeneous recursion} \label{SS.LinearProofs}
In this section we give the proofs of the technical Lemmas \ref{L.Max_Approx} and \ref{L.ExtraQ} for the linear recursion.
\begin{proof}[Proof of Lemma \ref{L.Max_Approx}]
Note that the integral is positive since
\begin{align*}
P\left( \max_{1\leq i \leq N} C_i R_i > t \right) = E\left[ \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right] &\leq E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits\left( C_i R_i > t \right) \right] .
\end{align*}
To see that the integral is equal to the expectation involving the $\alpha$-moments note that
\begin{align*}
&\int_{0}^\infty \left( E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha -1} \, dt \\
&= \int_0^\infty E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left(\max_{1\leq i \leq N} C_i R_i > t \right) \right] t^{\alpha -1} \, dt \\
&= E\left[ \int_0^\infty \left( \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha -1} \, dt \right] \qquad \text{(by Fubini's theorem)} \\
&= E\left[ \sum_{i=1}^N \frac{1}{\alpha} (C_i R_i)^{\alpha} - \frac{1}{\alpha} \left(\max_{1\leq i \leq N} C_i R_i \right)^{\alpha} \right] ,
\end{align*}
where the last equality is justified by the assumption that $\sum_{i=1}^N (C_i R_i)^\alpha < \infty$ a.s.
It now remains to show that the integral (expectation) is finite. To do this let ${\bf X} = (N, C_1, C_2, \dots)$. Similar arguments to those used above give
\begin{align*}
&\int_{0}^\infty \left( E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t ) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha -1} \, dt \\
&= \int_0^\infty E\left[ E\left[ \left. \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left(\max_{1\leq i \leq N} C_i R_i > t \right) \right| {\bf X} \right] \right] t^{\alpha -1} \, dt \\
&= E\left[ \int_0^\infty E\left[ \left. \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left(\max_{1\leq i \leq N} C_i R_i > t \right) \right| {\bf X} \right] t^{\alpha -1} \, dt \right],
\end{align*}
where in the last step we used Fubini's theorem. Furthermore,
\begin{align*}
&E\left[ \left. \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left(\max_{1\leq i \leq N} C_i R_i > t \right) \right| {\bf X} \right] \\
&= E\left[ \left. \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i \leq t \right) \right| {\bf X} \right] - 1 + \sum_{i=1}^N E\left[ \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) | {\bf X}\right].
\end{align*}
Note that given ${\bf X}$, the random variables $C_i R_i$ are independent (since the $R$'s are), so if we let $\overline{F}(t) = P(R > t)$, then
$$E\left[ \left. \mathop{\hskip0pt{1}}\nolimits\left( \max_{1\leq i \leq N} C_i R_i \leq t \right) \right| {\bf X} \right] = \prod_{i=1}^N E\left[ \left. \mathop{\hskip0pt{1}}\nolimits\left( C_i R_i \leq t \right) \right| {\bf X} \right] = \prod_{i=1}^N \left(1 - \overline{F}(t/C_i) \right).$$
We now use the inequality $1 - x \leq e^{-x}$ for $x \geq 0$ to obtain
$$\prod_{i=1}^N \left(1 - \overline{F}(t/C_i) \right) \leq e^{-\sum_{i=1}^N \overline{F}(t/C_i)}.$$
Next, let $\delta = \alpha\epsilon/(1+\epsilon)$ and set $\beta = \alpha-\delta$. By Markov's inequality,
$$\sum_{i=1}^N \overline{F}(t/C_i) \leq \sum_{i=1}^N E[(C_i R)^\beta | C_i] t^{-\beta} = t^{-\beta} E[R^\beta] \sum_{i=1}^N C_i^\beta.$$
Now, define the function $g(x) = e^{-x} - 1+ x$ and note that $g(x)$ is increasing for $x \geq 0$. Therefore,
$$g\left( \sum_{i=1}^N \overline{F}(t/C_i) \right) \leq g\left(t^{-\beta} E[R^\beta] \sum_{i=1}^N C_i^\beta \right).$$
This observation combined with the previous derivations gives
\begin{align*}
&\int_0^\infty E\left[ \left. \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) - \mathop{\hskip0pt{1}}\nolimits\left(\max_{1\leq i \leq N} C_i R_i > t \right) \right| {\bf X} \right] t^{\alpha -1} \, dt \\
&\leq \int_0^\infty \left( e^{-r S_\beta t^{-\beta}} - 1 + r S_\beta t^{-\beta} \right) t^{\alpha-1} dt,
\end{align*}
where $S_\beta = \sum_{i=1}^N C_i^\beta$ and $r = E[R^\beta] < \infty$. Hence, using the change of variables $u =r S_\beta t^{-\beta}$ gives
\begin{align*}
\int_{0}^\infty \left( e^{-rS_\beta t^{-\beta}} - 1 + r S_\beta t^{-\beta} \right) t^{\alpha -1} \, dt &= \beta^{-1} (r S_\beta)^{\alpha/\beta} \int_0^\infty \left( e^{-u} - 1 + u \right) u^{-\alpha/\beta -1} \, du.
\end{align*}
Our choice of $\beta = \alpha-\delta$ guarantees that $1 < \alpha/\beta = 1+\epsilon < 2$. To see that the (non-random) integral is finite note that
$e^{-x} -1 + x \leq x^2/2$ and $e^{-x} - 1 \leq 0$ for any $x \geq 0$, implying
\begin{align*}
\int_0^\infty \left( e^{-u} - 1 + u \right) u^{-\alpha/\beta -1} \, du &\leq \frac{1}{2} \int_0^1 u^{1-\alpha/\beta} \, du + \int_1^\infty u^{-\alpha/\beta } \, du \\
&= \frac{1}{2(2-\alpha/\beta)} + \frac{1}{\alpha/\beta-1} \triangleq K < \infty.
\end{align*}
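The finiteness of this integral, and the stated bound, can also be confirmed numerically; integration by parts gives the closed form $\Gamma(2-s)/(s(s-1))$ with $s = \alpha/\beta \in (1,2)$, which the sketch below reproduces for $s = 1.5$:

```python
import math
import numpy as np

s = 1.5                                   # plays the role of alpha/beta, with 1 < s < 2
u = np.logspace(-8, 8, 400_000)           # log-spaced grid; integrand is integrable at both ends
f = (np.expm1(-u) + u) * u ** (-s - 1)    # expm1 avoids cancellation in e^{-u} - 1 + u
integral = ((f[1:] + f[:-1]) / 2 * np.diff(u)).sum()   # trapezoidal rule

closed_form = math.gamma(2 - s) / (s * (s - 1))        # from integration by parts
bound = 1 / (2 * (2 - s)) + 1 / (s - 1)                # the bound from the proof
print(integral, closed_form, bound)
assert abs(integral - closed_form) < 1e-2
assert integral <= bound
```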
Now, it follows that
\begin{align*}
&\int_{0}^\infty \left( E\left[ \sum_{i=1}^N \mathop{\hskip0pt{1}}\nolimits(C_i R_i > t) \right] - P\left( \max_{1\leq i \leq N} C_i R_i > t \right) \right) t^{\alpha -1} \, dt \\
&\leq E\left[ K (r S_\beta)^{\alpha/\beta} \right] = K r^{\alpha/\beta} E\left[ \left( \sum_{i=1}^N C_i^\beta \right)^{\alpha/\beta} \right] .
\end{align*}
The last expectation is finite by assumption ($\alpha/\beta = 1 + \epsilon$), which completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L.ExtraQ}]
Let $S = \sum_{i=1}^N C_i R_i < \infty$ a.s., $p = \lceil \alpha \rceil$ and note that $1 \leq p-1 < \alpha$. Then, since $(S+Q)^\alpha - S^\alpha \geq 0$ and $S^\alpha - \sum_{i=1}^N (C_i R_i)^\alpha \geq 0$, we can break the expectation as follows
\begin{align*}
E \left[ (S+Q)^\alpha - \sum_{i=1}^N \left(C_i R_i \right)^\alpha \right] &= E\left[ (S+Q)^\alpha - S^\alpha \right] + E\left[ \left( \sum_{i=1}^N C_i R_i \right)^\alpha - \sum_{i=1}^N (C_iR_i)^\alpha \right] \\
&\leq E\left[ (S+Q)^\alpha - S^\alpha \right] + \left( E[ R^{p-1} ] \right)^{\alpha/(p-1)} E\left[ \left(\sum_{i=1}^N C_i \right)^\alpha \right] ,
\end{align*}
where the inequality is justified by Lemma \ref{L.Alpha_Moments}. The second expectation is finite since by assumption $E[R^\beta] < \infty$ for any $0 <\beta < \alpha$. For the first expectation we use the inequality
$$(x+t)^\kappa \leq \begin{cases}
x^\kappa + t^\kappa, & 0 < \kappa \leq 1, \\
x^\kappa + \kappa (x+t)^{\kappa-1} t, & \kappa > 1,
\end{cases}$$
for any $x,t \geq 0$. We apply the second inequality $p-1$ times and then the first one to obtain
\begin{align*}
(x+t)^\alpha \leq x^\alpha + \alpha (x+t)^{\alpha-1} t \leq \dots &\leq x^\alpha + \sum_{i=1}^{p-2} \alpha^i x^{\alpha-i} t^i + \alpha^{p-1} (x+t)^{\alpha-p+1} t^{p-1} \\
&\leq x^\alpha + \alpha^p t^\alpha + \alpha^p\sum_{i=1}^{p-1} x^{\alpha-i} t^i.
\end{align*}
Hence, it follows that
\begin{equation} \label{eq:Alpha_diff}
E\left[(S+Q)^\alpha - S^\alpha\right] \leq \alpha^p E[Q^\alpha] + \alpha^p\sum_{i=1}^{p-1} E[S^{\alpha-i} Q^i].
\end{equation}
To see that each of the expectations involving a product of $S$ and $Q$ is finite let ${\bf X} = (Q, N, C_1, C_2, \dots)$ and note that for $i = p-1$,
\begin{align}
&E[S^{\alpha-p+1} Q^{p-1}] \notag \\
&= E\left[ E\left[ \left. \left( Q^{(p-1)/(\alpha-p+1)} \sum_{j=1}^N C_j R_j \right)^{\alpha-p+1} \right| {\bf X} \right] \right] \notag \\
&\leq E\left[ \left( E\left[ \left. Q^{(p-1)/(\alpha-p+1)} \sum_{j=1}^N C_j R_j \right| {\bf X} \right] \right)^{\alpha-p+1} \right] \quad \text{(by Jensen's inequality)} \notag \\
&= (E[ R ])^{\alpha-p+1} E\left[ Q^{p-1} \left( \sum_{j=1}^N C_j \right)^{\alpha-p+1} \right] , \label{eq:ProdMoments_1}
\end{align}
where the last equality was obtained by using the independence of $\{ R_j\}$ and ${\bf X}$.
For $1 \leq i \leq p-2$ let $q_i = \lceil \alpha-i \rceil$ and condition on $Q$ and ${\bf X}$, respectively, to obtain
\begin{align}
E[S^{\alpha-i} Q^i] &= E\left[ \left( S^{\alpha-i} - \sum_{j=1}^N (C_j R_j)^{\alpha-i} \right) Q^i \right] + E\left[ Q^i \sum_{j=1}^N (C_j R_j)^{\alpha-i} \right] \notag \\
&= E\left[ Q^i E\left[ \left. S^{\alpha-i} - \sum_{j=1}^N (C_j R_j)^{\alpha-i} \right| Q \right] \right] + E[ R^{\alpha-i}] E\left[ Q^i \sum_{j=1}^N C_j^{\alpha-i} \right] \notag \\
&\leq E\left[ Q^i \left( E[R^{q_i - 1} | Q] \right)^{\frac{\alpha-i}{q_i - 1}} E\left[ \left. \left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right| Q \right] \right] \notag \\
&\hspace{5mm} + E[ R^{\alpha-i}] E\left[ Q^i \left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right] \notag \\
&= \left( \left( E[R^{q_i - 1}] \right)^{\frac{\alpha-i}{q_i - 1}} + E[R^{\alpha-i}] \right) E\left[ Q^i \left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right] , \label{eq:ProdMoments_2}
\end{align}
where for the inequality we used Lemma \ref{L.Alpha_Moments} ($\alpha-i > 1$) and the inequality $\sum_{i=1}^k y_i^\beta \leq \left( \sum_{i=1}^k y_i \right)^\beta$ for any $\beta \geq 1$ and $y_i \geq 0$. Note that all the expectations involving $R$ in \eqref{eq:ProdMoments_1} and \eqref{eq:ProdMoments_2} are finite since $E[R^\beta] < \infty$ for all $0 < \beta < \alpha$ by assumption. Next, observe that all the other expectations are of the form $E\left[ Q^i \left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right]$ for $1 \leq i \leq p-1$. To see that these are finite use H\"older's inequality with $q = \alpha/(\alpha-i)$ and $r = \alpha/i$ to obtain
\begin{align*}
E\left[ Q^i \left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right] &\leq \left|\left|\left( \sum_{j=1}^N C_j \right)^{\alpha-i} \right|\right|_q ||Q^i||_r \\
&= \left( E\left[ \left( \sum_{j=1}^N C_j \right)^{\alpha} \right] \right)^{1/q} \left( E \left[ Q^\alpha \right] \right)^{1/r} < \infty.
\end{align*}
\end{proof}
\section*{Acknowledgements}
We would like to thank an anonymous referee and Matthias Meiners for their helpful comments.
\section{Introduction}
One of the key features of a physical system for quantum information
processing (QIP) is quantum entanglement. The problem of entanglement
of multipartite systems is far from being completely understood, and
it has numerous interesting aspects.
One of the possible approaches to multipartite entanglement is to
search for quantum states with prescribed bipartite entanglement
properties~\cite{KoashiBI00,PleschB03,PleschB02}. This is a nontrivial
task as there exist limitations on the bipartite entanglement in
multipartite systems, which were quantified by Coffman, Kundu and
Wootters~\cite{CoffmanKW00}. In a pioneering work, O'Connor and
Wootters~\cite{OConnorW01} have considered a system of quantum bits,
and have searched for an entangled state of these with maximal
bipartite entanglement. This state appears to be the ground state of
the antiferromagnetic Ising model, the spins representing the qubits.
This illustrates the relation between states of maximal bipartite
entanglement and the spin couplings known from statistical physics. We
will refer to this approach as the question of \emph{direct bipartite
entanglement}, as the relevant quantity is the bipartite
entanglement present in the system as it is.
Another approach to the problem of multipartite entanglement is
related to cluster~\cite{BriegelR01} and graph~\cite{HeinEB04} states.
These are genuine multipartite entangled states, which can be
projected onto a maximally entangled state of any chosen two spins by
a von Neumann measurement on the others. Such states arise dynamically
in a system of spins with pairwise Ising couplings. They constitute
the fundamental entangled resource for one-way quantum
computers~\cite{RaussendorfB01,RaussendorfBB03}. It is an interesting
property of the Ising dynamics in this case, that it transforms a
whole basis of product states into a basis which consists of cluster
or graph states. In this way a basis transformation from a product
state basis to a special -- in a sense maximally -- entangled basis is
realized.
These states are the starting points for the second approach, the
bipartite entanglement in multipartite systems available via assistive
measurements on all but two subsystems. The two key concepts in its
quantitative description are entanglement of
assistance~\cite{DiVincensoFMSTU} (or concurrence of
assistance~\cite{LaustsenVV03}), quantifying the entanglement available
via assistive measurements, and localizable
entanglement~\cite{VerstraetePC04b,quantph0411123}. The computational
tractability of concurrence of assistance for a pair of qubits makes
the quantitative study of a part of this question feasible.
One of our aims is to relate the above two approaches. We will show
that the optimizations of direct and measurement assisted bipartite
entanglement are indeed related. Our other task is to study these
generic features in actual spin systems, as such systems do appear
quite naturally in this context.
Coupled spin systems have attracted a vast amount of research interest
in the quantum information community recently. The couplings studied
in statistical physics allow for performing certain tasks in QIP such
as e.g. quantum state
transfer~\cite{Bose03,ChristandlDEL04,OsborneN04}, realization of
quantum gates~\cite{SchuchS03,YungLB04}, and quantum
cloning~\cite{ChiaraFMMM04}. As the systems of coupled spins are
appropriate models for solid state systems, and also for quantum
states in optical lattices in certain cases~\cite{Garcia-RipollC03},
they bear actual practical relevance.
In the second part of this paper we focus on dynamical generation of
entanglement. We consider a system initially in a pure product state,
and investigate the entanglement of the states of the system
throughout the evolution. The ``prototype'' of such entanglement
generation is that of cluster and graph states. The various aspects of
the dynamical behavior of entanglement in spin systems have been
considered by several authors
recently~\cite{AmicoOPRP04,Subrahmanyam04a,PlastinaAOF04,quantph0409039,quantph0409048,VidalPA04}.
In addition to interpolating between the two approaches to bipartite
entanglement in multipartite systems, we consider the possibility of
controlling the process through the initial state of the system. We
address the following question. Is it possible to dynamically generate
states with optimal direct bipartite entanglement? We find a positive
answer, and also that the same couplings are capable of producing
states with high bipartite entanglement available via measurements, if
a different initial state is chosen. Our main tool of describing
measurement assisted bipartite entanglement will be concurrence of
assistance. We will examine the possibility of controlling the
behavior of this entanglement generation by the initial state of the
system. This is analogous to the control of quantum operations in
programmable quantum
circuits~\cite{quantph0102037,prl79_321,pra65_022301,pra66_042302}.
Finally we show that a suitably chosen magnetic field can enable
couplings different from Ising to create whole entangled bases
resembling those of cluster states regarding concurrence of
assistance. (Note that the generation of cluster states with non-Ising
couplings was considered very recently in Ref.~\cite{quantph0410145}.)
In addition, the application of magnetic field in the case of Ising
couplings can temporally enhance the presence of high pairwise
concurrence of assistance.
As we are mainly interested in illustrating generic features and
certain examples of entanglement behavior, a part of our results
concerning actual spin systems is simply computed by numerical
diagonalization of the appropriate Hamiltonians, even though we
present some analytical considerations where we find them appropriate.
Thus some of our considerations are limited to an order of 10 spins,
even though according to the numerical experience, they seem to be
scalable. This number coincides with that of the quantum bits expected
to be available in quantum computers in the near future. As the
realization of the discussed couplings is not necessarily restricted
to spins, our results may become directly applicable in such systems.
We consider two topologies of the pairwise interactions: a \emph{ring}
where each spin interacts with its two neighbors, and also the
\emph{star} topology where the interaction is mediated by a central
spin interacting with all the others. This was found interesting from
the point of view of entanglement distribution~\cite{HuttonB04} and
also from other aspects of its dynamics~\cite{BreuerBP04} recently.
The paper is organized as follows: in the introductory
Section~\ref{sect:entangmeas} we briefly review the entanglement
measures we use in the following. Section~\ref{sect:graphstates} is
devoted to the review of the dynamical generation of cluster and graph
states in spin systems, which is the background of the second part of
the paper. In Section~\ref{sect:upb} we present two interesting
properties of concurrence of assistance, which relates the two above
mentioned approaches to bipartite entanglement in multipartite systems,
and will be useful in the following. In Section~\ref{sec:control}, the
controlled generation of specific entangled states is addressed.
Section~\ref{sect:bases} is devoted to the enhanced generation of
certain entangled bases with the help of magnetic field.
Section~\ref{sect:concl} summarizes our results.
\section{Entanglement measures}
\label{sect:entangmeas}
In this Section we give a brief overview of the entanglement
measures and related quantities that will be used throughout this
paper.
\paragraph{One-tangle.}
For a bipartite system $A\bar{A}$ (A being a qubit, $\bar{A}$ being
the rest of the system) in the pure state $\Ket{\Psi}_{A\bar{A}}$, the
one-tangle~\cite{HillW97} of either of the subsystems
\begin{equation}
T\left(\Ket{\Psi}_{A{\bar{A}}}\right)=
4\det(\varrho_{A})
\label{eq:entanglement}
\end{equation}
(where $\varrho_{A}=\mathop{\mbox{tr}}\nolimits_{\bar{A}}\Ket{\Psi}_{A\bar{A}}\Bra{\Psi}$), is a
measure of entanglement. It quantifies the entanglement between the
qubit $A$ and the rest of the system, including all multipartite
entanglement between qubit $A$ and all the subsystems of
$\bar{A}$.
Although there is an extension of one-tangle to mixed states, it is
not computationally feasible except for the case of 2 qubits, in which
case one-tangle is equal to the square of concurrence. This justifies
the following interpretation: the square root of one-tangle is the
concurrence of such a two-qubit system in a pure state, for which the
density matrix of one of the qubits is equal to that of qubit $A$. That
is, it would be the concurrence itself if the subsystem $\bar{A}$
were also a qubit.
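As a concrete illustration (our example, not part of the original text), the one-tangle of one qubit of the three-qubit W state $(|100\rangle + |010\rangle + |001\rangle)/\sqrt{3}$ can be computed directly from Eq.~\eqref{eq:entanglement}:

```python
import numpy as np

# three-qubit W state |W> = (|100> + |010> + |001>)/sqrt(3)
psi = np.zeros(8)
psi[[4, 2, 1]] = 1 / np.sqrt(3)        # binary indices 100, 010, 001

# reduced density matrix of qubit A (the first qubit): trace out the rest
rho = np.outer(psi, psi).reshape(2, 4, 2, 4)
rho_A = np.einsum('ijkj->ik', rho)

one_tangle = 4 * np.linalg.det(rho_A)
print(one_tangle)                       # 8/9 for the W state
assert abs(one_tangle - 8 / 9) < 1e-12
```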
\paragraph{Concurrence.}
For a bipartite system in a mixed state, a way of defining its
entanglement is to consider the minimal average entanglement over all
pure-state decompositions of the state. This quantity is termed the
\emph{entanglement of formation}:
\begin{equation}
E(\varrho)=\min\sum_{i}p_{i}E(\Ket{\Psi_{i}}),\quad\text{so
that}\,\,\sum_{i}p_{i}\Ket{\Psi_{i}}\Bra{\Psi_{i}}=\varrho.
\label{eq:entform}
\end{equation}
This is a kind of generalization of the entanglement defined in
Eq.~\eqref{eq:entanglement}. Its additivity is one of the most
interesting open questions of quantum information theory.
The definition of entanglement of formation supports the following
interpretation: imagine that the bipartite system as a whole is a
subsystem of a large system. Entanglement of formation measures the
bipartite entanglement available on average if everything but the
bipartite subsystem is simply dropped.
If the system in question consists of two qubits, there is a closed
form for entanglement of formation found by
Wootters~\cite{Wootters98}. This consideration includes another
entanglement measure.
Given the two-qubit density matrix $\varrho$, one calculates the
matrix \begin{equation}
\tilde{\varrho}=(\sigma^{(y)}\otimes\sigma^{(y)})\varrho^{*}(\sigma^{(y)}\otimes\sigma^{(y)}),
\label{eq:wootterstilde}
\end{equation}
where $*$ stands for complex conjugation in the product-state basis.
For an entangled state, $\tilde{\varrho}$ does not describe a physical
state, while for product states it is a valid density matrix.
In the next step one calculates the eigenvalues $\lambda_{i}$
($i=1\ldots4$) of the Hermitian matrix
\begin{equation}
\label{eq:rhomatrix}
\hat{R}=\sqrt{\sqrt{\varrho}\tilde{\varrho}\sqrt{\varrho}},
\end{equation}
which are in fact square roots of the eigenvalues of the non-Hermitian
matrix
\begin{equation}
\hat{R}_{2}=\varrho\tilde{\varrho}.
\label{eq:R2}
\end{equation}
Concurrence is then defined as
\begin{equation}
C(\varrho)=\max(0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}),
\label{eq:concurrence}
\end{equation}
where the eigenvalues are put into a decreasing order.
Entanglement of formation is a monotonically increasing function of
concurrence:
\begin{eqnarray}
E(\varrho)=h\left(\frac{1+\sqrt{1-C(\varrho)^{2}}}{2}\right),\nonumber \\
h(x):=-x\log_{2}(x)-(1-x)\log_{2}(1-x).
\end{eqnarray}
Thus concurrence can be used as an entanglement measure on its own
right.
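The recipe of Eqs.~\eqref{eq:wootterstilde}--\eqref{eq:concurrence} translates directly into code; the sketch below (our illustration) takes the computationally cheaper route through the non-Hermitian matrix of Eq.~\eqref{eq:R2}:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy               # conjugation in the product-state basis
    # the lambda_i are square roots of the eigenvalues of rho * rho_tilde
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]                       # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5    # |Phi+><Phi+|
print(concurrence(bell))               # 1.0: maximally entangled
print(concurrence(np.eye(4) / 4))      # 0.0: maximally mixed, separable
```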
In multipartite systems the one-tangle and concurrence are linked by
the Coffman-Kundu-Wootters inequalities
\begin{equation}
\label{eq:CKW}
T_k \geq \sum\limits_{l\neq k} C_{kl}^2
\end{equation}
which were initially proven for three qubits in a pure state and for
certain classes of multi-qubit states. For a long time they were
conjectured to be true in general. This conjecture was very recently
proven~\cite{quantph0502176}. These inequalities set limitations to
the bipartite entanglement that can be present in a multipartite
system.
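The inequality \eqref{eq:CKW} can be checked numerically on small systems; for the three-qubit W state it is known to hold with equality. A self-contained sketch (partial traces plus the Wootters formula, our illustration):

```python
import numpy as np

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |W> = (|100> + |010> + |001>)/sqrt(3); CKW is known to hold with equality here
psi = np.zeros(8)
psi[[4, 2, 1]] = 1 / np.sqrt(3)
r6 = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)    # indices (q1,q2,q3, q1',q2',q3')

rho_1  = np.einsum('abcdbc->ad', r6)                 # trace out qubits 2 and 3
rho_12 = np.einsum('abcdec->abde', r6).reshape(4, 4) # trace out qubit 3
rho_13 = np.einsum('abcdbf->acdf', r6).reshape(4, 4) # trace out qubit 2

T1 = 4 * np.linalg.det(rho_1).real
C12, C13 = concurrence(rho_12), concurrence(rho_13)
print(T1, C12**2 + C13**2)            # both equal 8/9: equality in CKW
assert T1 + 1e-10 >= C12**2 + C13**2
```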
\paragraph{Concurrence of assistance.}
Consider again a bipartite system described by the density operator
$\varrho$. One can follow a route complementary to that in case of
entanglement of formation and ask what is the \emph{maximum} average
entanglement available amongst the pure state realizations, termed as
the \emph{entanglement of assistance}~\cite{Wootters98}:
\begin{eqnarray}
E_{\text{assist}}(\varrho)=\max\sum_{i}p_{i}E(\Ket{\Psi_{i}}),
\nonumber \\
\text{so that}\,\,\sum_{i}p_{i}\Ket{\Psi_{i}}\Bra{\Psi_{i}}=\varrho,&
\label{eq:entass}
\end{eqnarray}
cf.\ Eq.~\eqref{eq:entform}.
Interpreting again the bipartite system as a subsystem of a larger
system, one can consider that the whole system is in a pure state,
that is, we have a purification of $\varrho$ at hand. In this case
entanglement of assistance describes the maximum entanglement
available on average in the bipartite system, when a collaborating
third party, instead of omitting the rest of the system as in the case
of entanglement of formation, makes optimal von Neumann measurements
on it. Although entanglement of assistance is not an entanglement
measure according to some definitions, it is a very informative
quantity regarding entanglement.
Having a system of two qubits, one can also use concurrence instead of
entanglement in Eq.~\eqref{eq:entass}, yielding the definition of
\emph{concurrence of assistance}:
\begin{eqnarray}
C_{\text{assist}}(\varrho)=\max\sum_{i}p_{i}C(\Ket{\Psi_{i}}\Bra{\Psi_{i}}),
\nonumber \\
\text{so that}\,\,\sum_{i}p_{i}\Ket{\Psi_{i}}\Bra{\Psi_{i}}=\varrho.&
\label{eq:concass}
\end{eqnarray}
The advantage of this quantity is that it can be easily calculated
for two qubits. As it is shown in~\cite{LaustsenVV03}, it is simply
\begin{equation}
C_{\text{assist}}(\varrho)=
\mathop{\mbox{tr}}\nolimits\sqrt{\sqrt{\varrho}\tilde{\varrho}\sqrt{\varrho}}=
\sum_{i=1}^{4}\lambda_{i},
\label{eq:cassist}
\end{equation}
cf.\ Eq.~\eqref{eq:concurrence}. Note that this quantity is
essentially a fidelity between the physical density matrix $\varrho$
and the matrix $\tilde{\varrho}$, which is physical for separable
states only.
Thanks to the formula in Eq.~\eqref{eq:cassist}, concurrence of
assistance is not only an informative quantity, but also as easy to
compute as concurrence itself in the case of qubit pairs.
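Equation~\eqref{eq:cassist} is equally direct to implement. A standard illustrative example (ours) is the two-qubit marginal of the three-qubit GHZ state, for which the ordinary concurrence vanishes while the concurrence of assistance is maximal:

```python
import numpy as np

def concurrence_of_assistance(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # sum of the lambda_i: square roots of the eigenvalues of rho * rho_tilde
    return np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)).sum()

# two-qubit marginal of the three-qubit GHZ state: (|00><00| + |11><11|)/2
rho = np.zeros((4, 4))
rho[0, 0] = rho[3, 3] = 0.5
print(concurrence_of_assistance(rho))   # 1.0, although the plain concurrence is 0
assert abs(concurrence_of_assistance(rho) - 1.0) < 1e-10
```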
\section{Graph states revisited}
\label{sect:graphstates}
In this Section we briefly review the properties of the Ising dynamics
for spin-1/2 particles without magnetic field, which are known from
Refs.~\cite{BriegelR01,HeinEB04}. We will talk about spins in this
context, and the $\hat \sigma^{(z)}$ eigenstates will represent the computational
basis: $\ket{0}=\ket{\uparrow}$, $\ket{1}=\ket{\downarrow}$. Consider a
set of spins, with pairwise interactions between them:
\begin{equation}
\label{eq:IsingnoB}
\hat H = -\sum\limits_{\langle k,l \rangle}
\hat \sigma^{(x)}_k \otimes \hat \sigma^{(x)}_l
\end{equation}
where the summation ${\langle k,l \rangle}$ goes over those spins
which interact with each other. (Hence the name graph states for the
states to be considered here: the geometry can be envisaged as a
graph, where the vertices are the spins, and the edges represent
pairwise Ising interactions.) As the summands in
Eq.~\eqref{eq:IsingnoB} commute, the time evolution can be written as
a product of two-spin unitaries
\begin{equation}
\label{eq:U}
\hat U(\tau) =e^{-i\hat H \tau}=
\prod\limits_{\langle k,l \rangle}
\hat U_{k,l}(\tau),
\end{equation}
where
\begin{equation}
\label{eq:Ukltau}
\hat U_{k,l}(\tau)=e^{i \hat \sigma^{(x)}_k \otimes \hat \sigma^{(x)}_l \tau}.
\end{equation}
Here $\tau$ stands for the scaled time measured in arbitrary units.
First we study the time instant $\tau=\frac{\pi}{4}$: one may directly
verify that
\begin{equation}
\label{eq:Ukl}
\hat U_{k,l}=
\frac{1}{\sqrt{2}}
\left(
\hat 1 +i \hat \sigma^{(x)} _k \otimes \hat \sigma^{(x)} _l
\right).
\end{equation}
The evolution operators without a time argument will denote those for
$\tau=\frac{\pi}{4}$ in what follows. These describe conditional
phase gates in a suitably chosen basis. Let us assume that the system
is initially in a state $\ket{e_m}$ of the computational basis, a
common eigenvector of all the $\hat \sigma^{(z)}$-s:
\begin{equation}
\label{eq:szeig}
\hat \sigma^{(z)}_n \ket{e_m} = e_{n,m} \ket{e_m}, \qquad e_{n,m}=\pm 1.
\end{equation}
The state $\hat U \ket{e_m}$ will be an eigenvector of the following
complete set of commuting observables:
\begin{equation}
\label{eq:K}
\hat K_n=\hat U \hat \sigma^{(z)} _n \hat U^\dag,
\end{equation}
with the same eigenvalues as the $e_{n,m}$-s in Eq.~\eqref{eq:szeig}.
The operators $\hat K_n$ in Eq.~\eqref{eq:K} depend on the geometry of
the graph. They can be evaluated simply by utilizing the following
relations:
\begin{eqnarray}
\label{eq:Ucomm}
\hat U_{k,l} \hat \sigma^{(x)} _k \hat U_{k,l}^\dag &=& \hat \sigma^{(x)} _k \nonumber \\
\hat U_{k,l} \hat \sigma^{(y)} _k \hat U_{k,l}^\dag &=& -\hat \sigma^{(z)} _k \otimes \hat \sigma^{(x)} _l
\nonumber \\
\hat U_{k,l} \hat \sigma^{(z)} _k \hat U_{k,l}^\dag &=& \hat \sigma^{(y)} _k \otimes \hat \sigma^{(x)} _l
\nonumber \\
\hat U_{k,l} \hat \sigma^{(x,y,z)} _m \hat U_{k,l}^\dag &=&
\sigma_m^{(x,y,z)}
\quad (m\neq k,l).
\end{eqnarray}
These relations can be verified directly by substituting
Eq.~\eqref{eq:Ukl}. The joint eigenstates of the operators in
Eq.~\eqref{eq:K} are termed \emph{graph states}~\cite{HeinEB04}. It
can be shown that many of the states arising this way for different
graphs are equivalent under local unitaries.
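These conjugation rules also lend themselves to a direct numerical check. The sketch below (an illustration, with spin $k$ as the first and spin $l$ as the second tensor factor) verifies the first three relations of Eq.~\eqref{eq:Ucomm}:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# U_{k,l} at tau = pi/4, Eq. (Ukl)
U = (np.eye(4) + 1j * np.kron(sx, sx)) / np.sqrt(2)

def conj(A):
    """conjugation A -> U A U^dagger"""
    return U @ A @ U.conj().T

assert np.allclose(conj(np.kron(sx, I2)), np.kron(sx, I2))
assert np.allclose(conj(np.kron(sy, I2)), -np.kron(sz, sx))
assert np.allclose(conj(np.kron(sz, I2)), np.kron(sy, sx))
```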
As an example, consider a ring of $N$ spins with pairwise Ising
interaction. In this case
\begin{equation}
\label{eq:Kring}
\hat K^{\text{(ring)}}_l=
-\hat \sigma^{(x)}_{l-1}\otimes \hat \sigma^{(z)}_{l} \otimes \hat \sigma^{(x)}_{l+1},
\end{equation}
where the arithmetic in the indices is understood modulo $N$. The
common eigenstates of these commuting observables are termed
\emph{cluster states}; they were introduced in
Ref.~\cite{BriegelR01}, although in a different basis. They are
suitable as an entangled resource for one-way quantum
computers~\cite{RaussendorfB01}.
Note that $\hat U(\pi)$ is proportional to the identity, as each edge
contributes a factor $-\hat 1$. Specifically, for a ring topology
$\hat U(\pi/2)\propto\hat 1$ holds too. This means that the evolution
is periodic: at such time instants the initial state, a computational
basis state, reappears up to a global phase. Thus the Ising dynamics
without magnetic field produces oscillations between a computational
basis state and a graph (or, in some cases, cluster) state. The
achieved graph state is selected by the initial basis state.
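For a small ring this recurrence can be confirmed by exact matrix exponentiation. In the following sketch (a numerical illustration for five spins, not part of the original text), $\hat U(\pi/2)$ indeed turns out to be a global phase times the identity:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def op(single, site, n):
    """embed a single-site operator at position `site` of an n-spin system"""
    out = np.eye(1, dtype=complex)
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

N = 5
# ring Hamiltonian of Eq. (IsingnoB)
H = sum(-op(sx, k, N) @ op(sx, (k + 1) % N, N) for k in range(N))

U = expm(-1j * H * (np.pi / 2))
phase = U[0, 0]
# U(pi/2) acts as a global phase on every state, so the recurrence is exact
assert np.allclose(U, phase * np.eye(2 ** N))
assert np.isclose(abs(phase), 1)
```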
To obtain a more complete picture of the whole process of the
entanglement oscillations, we plot the temporal behavior of the
entanglement quantities in Fig.~\ref{fig:entangosc} for the ring
topology.
\begin{figure}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig1.eps}
\caption{(Color online.)
Overlap with the initial state and entanglement measures
for the first two qubits, during the entanglement oscillations for
five spins in a ring, generated by the Ising Hamiltonian without
magnetic field in~\eqref{eq:IsingnoB}. In the initial state all
spins are up, thus in state $\ket{0}$ if we consider qubits. The
plotted quantities are dimensionless.}
\label{fig:entangosc}
\end{figure}
In the figure we observe that the concurrence of assistance of the
qubit pair is almost equal to the square root of the one-tangle of one
of the constituent spins. We will show later in this paper that the
square root of the one-tangle is an upper bound for the concurrence of
assistance. Thus for the states in question, the entanglement of a
subsystem with the rest of the system can indeed be ``focused'' onto a
pair of qubits via a suitably chosen measurement on the rest of the
system. This is obvious for the cluster states, but it appears to hold
for most of the time evolution.
The dynamical entanglement behavior of the systems under consideration
can be controlled by an appropriate choice of the initial
state. Consider for instance the following polarized initial state:
\begin{equation}
\label{eq:instate_Ising}
\Ket{\Psi_{\text{A}}(t=0)} =
\mathop{\otimes}\limits_{k=1}^N
\left(
\cos \left(\frac{\theta}{2}\right) \ket{0}_k +
\sin \left(\frac{\theta}{2}\right) \ket{1}_k
\right).
\end{equation}
The ``A'' index reflects that \emph{all} the spins are rotated from the
$z$ direction in the same way. This state can be prepared by a
simultaneous one-qubit rotation, which is available even in optical
lattice systems. If $\theta=l\pi$ ($l$ being an integer), we obtain the
graph state periodically, while for odd multiples of $\pi/2$ the state
is stationary, thus no entanglement is generated. Between these
values, the entanglement measured by the one-tangle or the concurrence
of assistance is a monotonic and continuous function of $\theta$ for
all values of time. Thus by varying this parameter of the initial
state, one can control the amount of the generated entanglement.
From the above discussion we find that Ising dynamics without magnetic
field has the following properties from the point of view of
entanglement generation:
\begin{enumerate}
\item The generated bipartite entanglement is always small.
\item In the case of the cluster states one can project the state with
  certainty to a maximally entangled pair of two spins by a
  measurement on the others. Moreover, the required measurement is a
  local one.
\item \emph{All} the states of the computational basis are
  periodically transformed into states which have properties 1 and 2.
\item One can control the amount of the dynamically generated
entanglement by a parameter of the initial state, which can be
altered by the same local rotation applied on all the spins.
\end{enumerate}
During our investigations we will check which of these properties may
arise under different couplings, initial states and topologies.
\section{Two properties of concurrence of assistance}
\label{sect:upb}
In this Section we present two properties of concurrence of
assistance for multi-qubit systems.
Our first proposition formulates an upper bound on the concurrence of
assistance.
\begin{theorem}
\label{thm:upb}
For an arbitrary state of two qubits $A$ and $B$, the square root of
the one-tangle of either qubit serves as an upper bound for the
concurrence of assistance, i.e.:
\begin{equation}
\label{eq:lemmst}
\sqrt{T_A}\geq C^{\text{assist}}_{AB}.
\end{equation}
\end{theorem}
Proof: Consider the ensemble realization of the state $\varrho_{AB}$
of the qubits $A$ and $B$,
\begin{equation}
\label{eq:bndpr1}
\varrho_{AB}=\sum_k p_k \ket{\xi_k} \bra{\xi_k}
\end{equation}
which provides the maximum in
Eq.~\eqref{eq:concass}, and use the notation
\begin{equation}
\label{eq:rhok}
\varrho_k=\mathop{\mbox{tr}}\nolimits_B \ket{\xi_k} \bra{\xi_k},
\end{equation}
thus
\begin{equation}
\label{eq:rhoa}
\varrho_{A}=\mathop{\mbox{tr}}\nolimits_B \varrho_{AB}=\sum_k p_k\varrho_k,
\end{equation}
due to the linearity of the partial trace. Substituting
Eq.~\eqref{eq:rhoa} into the definition in Eq.~\eqref{eq:entanglement}
we obtain
\begin{equation}
\label{eq:sqt}
\sqrt{T_A}=2\sqrt{\det\left( \sum_k p_k \varrho_k\right)},
\end{equation}
while according to the definition in Eq.~\eqref{eq:concass},
\begin{equation}
\label{eq:cassp}
C^{\text{assist}}_{AB}=2\sum_k \sqrt{\det(p_k\varrho_k)},
\end{equation}
where we have exploited the fact that for pure states
\begin{equation}
C( \ket{\xi_k})=2\sqrt{\det \varrho_k}.
\end{equation}
Substituting Eqs.~\eqref{eq:sqt} and~\eqref{eq:cassp} into the
statement of the Proposition, inequality~\eqref{eq:lemmst}, what we
have to show is that
\begin{equation}
\sum_k \sqrt{\det(p_k\varrho_k)} \leq
\sqrt{\det\left( \sum_k p_k \varrho_k\right)}.
\end{equation}
This is a consequence of the recursive application of the
inequality~\eqref{eq:mainineq}, which is proven in
Appendix~\ref{app:ineqproof}. \hfill QED.
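The statement can also be spot-checked numerically. The sketch below is an illustration: it assumes the known two-qubit closed form of the concurrence of assistance, namely the sum of the four Wootters $\lambda_i$ (the square roots of the eigenvalues of $\varrho\tilde\varrho$), and compares it with $\sqrt{T_A}=2\sqrt{\det\varrho_A}$ for random pure three-qubit states:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)  # spin-flip operator sigma^y (x) sigma^y

def c_assist(rho):
    # assumed two-qubit closed form: sum of the Wootters lambdas of rho * rho~
    ev = np.linalg.eigvals(rho @ (YY @ rho.conj() @ YY))
    return np.sqrt(np.clip(ev.real, 0, None)).sum()

rng = np.random.default_rng(0)
for _ in range(200):
    psi = rng.normal(size=8) + 1j * rng.normal(size=8)  # random 3-qubit pure state
    psi /= np.linalg.norm(psi)
    m = psi.reshape(4, 2)                    # qubits A,B vs. the ancilla qubit C
    rho_ab = m @ m.conj().T
    rho_a = rho_ab.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out B
    sqrt_tangle = 2 * np.sqrt(max(np.linalg.det(rho_a).real, 0.0))
    assert c_assist(rho_ab) <= sqrt_tangle + 1e-9
```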
Intuitively, in the spirit of the considerations concerning the lower
bound of localizable entanglement in Ref.~\cite{quantph0411123}, we can
claim that a local measurement on the ancillary systems of a
purification of $\varrho_{AB}$ cannot create additional entanglement
between the spin $A$ and the rest of the system $\bar{A}$, as such a
measurement is an operation on the complementary system. Thus, by
choosing the optimal measurement we can, at best, concentrate all of
the originally available entanglement ($\sqrt{T_{A}}$) into the
entanglement between the qubits $A$ and $B$.
The appearance of the one-tangle in the context of concurrence of
assistance suggests that there might be some relation with the CKW
inequalities, and this is indeed the case. It is simple to prove the
following:
\begin{theorem}
\label{thm:ckw}
For a system of three qubits $A$,$B$,$C$ in a pure state,
\begin{equation}
C_{AB}=C^{\text{assist}}_{AB}\ {\mathrm{and}}\
C_{AC}=C^{\text{assist}}_{AC}
\end{equation}
implies that the Coffman-Kundu-Wootters inequalities in
Eq.~\eqref{eq:CKW} are saturated, thus
\begin{equation}
C_{AB}^2+C_{AC}^2=T_A
\end{equation}
holds.
\end{theorem}
This immediately follows from the same derivation as in
Ref.~\cite{CoffmanKW00} by exploiting the fact that the matrices $R_2$
of Eq.~\eqref{eq:R2} for subsystems $AB$ and $AC$ have rank one due to
the conditions of the proposition (cf. Eqs.~\eqref{eq:concurrence}
and~\eqref{eq:cassist}).
Proposition~\ref{thm:ckw} relates the direct and the
measurement-assisted approach to bipartite entanglement in
multipartite systems. Whether it also holds for more parties remains,
of course, an open question.
As already pointed out in Section~\ref{sect:graphstates}, for the
graph states themselves $\sqrt{T_A} = C^{\text{assist}}_{AB}=1$, and
besides $\sqrt{T_A} \approx C^{\text{assist}}_{AB}$ holds throughout
the whole time evolution generated by Ising couplings. According to
Proposition~\ref{thm:upb} it is therefore justified to call such
states maximal in concurrence of assistance. Meanwhile $C_{AB}\ll
C^{\text{assist}}_{AB}$, suggesting that the CKW inequalities are far
from being saturated, which is indeed the case. The generated
entanglement is essentially multipartite, but it can be converted to
bipartite entanglement via a measurement. On the other hand, if the
CKW inequalities are saturated, we can expect the concurrence of
assistance to lie below the square root of the one-tangle. The
question thus naturally arises whether it is possible to dynamically
create entanglement oscillations in spin systems which saturate the
CKW inequalities instead.
\section{Controlled generation of concurrence and concurrence of assistance}
\label{sec:control}
Now we turn our attention to spin-1/2 systems, as those naturally
realize multi-qubit systems. As before, the $\hat \sigma^{(z)}$ eigenstates serve
as the computational basis states, $\ket{0}=\ket{\uparrow}$,
$\ket{1}=\ket{\downarrow}$, and we will use the qubit notation for
simplicity.
We have seen in Section~\ref{sect:graphstates} that certain states
with maximal concurrence of assistance can be generated in dynamical
oscillations, and the control over the available entanglement is
realized by altering the initial state. This control requires a
simultaneous operation on all the spins, and as for bipartite
entanglement, it affects only the entanglement available via assistive
measurements, as the concurrence itself takes low values throughout
the evolution. First we consider whether it is possible to control the
concurrence itself too, and whether the evolution can be controlled by
varying a single spin only.
Consider first a system of $N+1$ spins with XY couplings:
\begin{equation}
\hat H_{XY}=-\sum\limits_{\langle i,j \rangle} \left( \hat \sigma^{(x)}_i \hat \sigma^{(x)}_j + \hat \sigma^{(y)}_i \hat \sigma^{(y)}_j \right),
\label{eq:XYnoB}
\end{equation}
in a star topology: spin $0$ is the middle one, while spins $1$ to $N$
are the outer ones, each coupled to the central one. Even though the
summands of the Hamiltonian do not commute, the eigenvalues and
eigenvectors can be calculated. One would expect that the state of the
middle spin can control the entanglement behavior, as the interaction
of the outer spins is mediated by this one. Indeed, if one considers
the initial state where only the middle spin is rotated while the
others point upwards, i.e.\ they are in the state $\ket{0}$:
\begin{widetext}
\begin{equation}
\label{eq:inXY}
\Ket{\Psi_{\text{M}}(t=0)}=
\left(\cos \left(\frac{\theta}{2}\right) \ket{0}_0 +
\sin \left(\frac{\theta}{2}\right) \ket{1}_0 \right)
\otimes
\mathop{\otimes}\limits_{k=1}^N \ket{0}_k,
\end{equation}
the time evolution, as shown in
Appendix~\ref{app:andyn}, reads
\begin{eqnarray}
\label{eq:XYtime}
\Ket{\Psi_{\text{M}}(t)}=
&\cos \left(\frac{\theta}{2}\right)&
\left(
\ket{0}_0 \otimes \mathop{\otimes}\limits_{k=1}^N \ket{0}_k
\right)
\nonumber \\
+
&\sin \left(\frac{\theta}{2}\right)&
\left(
\cos(2\sqrt{N}t)
\ket{1}_0 \otimes \mathop{\otimes}\limits_{k=1}^N \ket{0}_k
-i\sin(2\sqrt{N}t)
\ket{0}_0 \otimes
\frac{1}{\sqrt{N}}\sum\limits_{l=1}^N
\ket{0,\ldots 0,1_l,0\ldots}
\right).
\end{eqnarray}
\end{widetext}
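This analytic solution can be compared with exact numerics for a small star. The sketch below is an independent illustration: spin $0$ is taken as the leftmost tensor factor, $\theta=\pi$ so that only the $\ket{1}_0$ branch is present, and overlap magnitudes are checked, as these are insensitive to phase conventions:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def op(single, site, n):
    """embed a single-site operator at position `site` of an n-spin system"""
    out = np.eye(1, dtype=complex)
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

N = 4          # outer spins; spin 0 is the central one
n = N + 1
# star-topology XY Hamiltonian, Eq. (XYnoB)
H = sum(-(op(sx, 0, n) @ op(sx, k, n) + op(sy, 0, n) @ op(sy, k, n))
        for k in range(1, n))

e = np.zeros(2 ** n, dtype=complex)
e[1 << (n - 1)] = 1.0           # |1>_0 |0...0>

W = np.zeros(2 ** n, dtype=complex)
for k in range(1, n):
    W[1 << (n - 1 - k)] = 1 / np.sqrt(N)   # equal-weight one-excitation state

for t in np.linspace(0.0, 1.0, 7):
    psi = expm(-1j * H * t) @ e
    assert np.isclose(abs(e.conj() @ psi), abs(np.cos(2 * np.sqrt(N) * t)))
    assert np.isclose(abs(W.conj() @ psi), abs(np.sin(2 * np.sqrt(N) * t)))
```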
The rotation of the central spin indeed controls the entanglement
behavior of the system: for $\theta=0$ no entanglement is created,
while for $\theta=\pi$ the maximal entanglement oscillation will
appear. The state is a superposition of a product and an entangled
state depending on $\theta$, thus this parameter controls the
available entanglement continuously.
These entanglement oscillations are different from those in the case
of Ising couplings. As shown in Appendix~\ref{app:rankone}, concurrence
is equal to concurrence of assistance in the case of any superposition
of the computational basis states with all spins up and one down. This
means that in the states arising throughout this evolution
measurements do not facilitate ``focusing'' entanglement onto two
spins. Besides, it has been proven in Ref.~\cite{CoffmanKW00} that
these states saturate CKW inequalities in Eq.~\eqref{eq:CKW}, thus the
bipartite entanglement present in the states is maximal. This scheme
provides a dynamical way of preparing multipartite states with maximal
bipartite entanglement, which is controlled by the initial state of
one spin. In addition, it illustrates that the statement of
Proposition~\ref{thm:ckw} extends beyond three subsystems, as shown
exactly in this specific case. Note that at certain times the central
spin gets
disentangled from the outer ring, which is meanwhile in a state with
highest pairwise concurrence possible. Such a maximally entangled
state is reached for the whole system, too, at different times, see
also in Fig.~\ref{fig:XYfig}/a).
In Fig.~\ref{fig:XYfig} we present, as an illustration, the behavior
of concurrence and of the square root of the one-tangle for a ring
topology, and for an outer spin in a state different from the others.
Here we consider the initial state producing the maximal entanglement,
that is, one spin is considered to point downwards, while all the
others point upwards. An analytical solution similar to that in
Appendix~\ref{app:andyn} would be feasible too, but more energy
eigenstates have nonzero weights in the initial state. Of course the
functions are not equal for all the spins in this case, but their
behavior is similar to that of the star topology. According to
Appendix~\ref{app:rankone}, concurrence is equal to concurrence of
assistance, and of course the CKW inequalities are saturated.
\begin{figure*}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig2.eps}
\caption{(Color online.)
Concurrence and one-tangle for spins coupled by XY
interactions in the absence of magnetic field. In Figs. a)-d)
6+1 spins are ordered into a star topology, while in e)-f) a
ring of 6 spins is considered. In the initial state all spins
are up, except for one, which is down. In a)-b) the central spin,
while in c)-d) an outer spin, is the one flipped to point downwards.
Figures on the left display concurrences of qubit pairs, those
on the right display square roots of one-tangles as a function
of time. Legend: c: the central spin, f: an outer spin
which is flipped initially, o$_k$: an outer spin which is the
$k$-th neighbor of the initially flipped one. Time is measured
in arbitrary units, the other quantities are dimensionless. The
figure is obtained from exact numerical diagonalization and
direct calculations.}
\label{fig:XYfig}
\end{figure*}
From the above discussion one might conclude that the XY couplings
``prefer'' to generate pure bipartite entanglement. This is however
not the case. In order to examine this issue, we have plotted the
behavior of entanglement quantities for an XY-coupled star
configuration with the initial state in
Eq.~\eqref{eq:instate_Ising}, that is, the polarized state arising
as a product of all the spins in the same state which is a
superposition of $\ket{0}$ and $\ket{1}$. It appears that in this
case concurrence between two outer spins is heavily suppressed, but
concurrence of assistance takes rather high values for certain
initial states. Moreover, concurrence of assistance is very close to
the square-root of one-tangle, just as in the case of the Ising
couplings. Thus XY couplings can, if the initial state is suitably
chosen, produce states with a high amount of bipartite entanglement
available via assistive measurements. Notice, however, that the
square root of the one-tangle is higher than the concurrence of
assistance, thus there is also some multipartite entanglement present
in the system which cannot be accessed by assistive measurements.
\begin{figure*}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig3.eps}
\caption{(Color online.)
Comparison of rotating all spins or the central spin in the
initial state of a 6+1 spin star with XY couplings. Fig. a)
displays the temporal behavior of concurrence if the central spin
is rotated, i.e. the initial state in Eq.~\eqref{eq:inXY} is used,
while the other three figures display the evolution of
concurrence, concurrence of assistance and square-root of
one-tangle with an initial state in Eq.~\eqref{eq:instate_Ising},
that is, all spins in the same superposition of $\ket{0}$ and
$\ket{1}$. All the bipartite quantities correspond to two outer
spins, square-root of one-tangle is that of one of these. $\theta$
stands for the dimensionless parameter of the input state.}
\label{fig:xyallcontrol}
\end{figure*}
Consider now Ising interactions, and ask whether it is sufficient to
rotate just one spin in order to control the amount of available
entanglement, e.g. disable entanglement oscillations. For the
rotation of an outer spin in the star configuration or the ring
topology we have found that entanglement cannot be completely
suppressed. However, if we rotate the central spin in a star topology,
it is possible to control entanglement behavior. This is illustrated
in Fig.~\ref{fig:Isingcontrol}. Similarly to the case of the initial
state in Eq.~\eqref{eq:instate_Ising}, the concurrence of assistance
is almost equal to the square root of the one-tangle, while the
concurrence itself is close to zero.
\begin{figure*}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig4.eps}
\caption{(Color online.)
Control of entanglement generation in a system of 6+1
Ising-coupled spins in a star configuration. The central spin is
rotated, i.e. initial state is that in Eq.~\eqref{eq:inXY}, the
others are in the state $\ket{0}$. Figures a) and c) display
temporal behavior of concurrence as a function of parameter
$\theta$ of the initial state, for a) two outer spins and
c) an outer and the central spin. Figure b) shows the difference
between square root of one tangle and concurrence of assistance
for two outer spins. Figure d) shows concurrence for the central
and an outer spin. This quantity is zero for the outer spins.}
\label{fig:Isingcontrol}
\end{figure*}
It is important to note that the possible high value of concurrence of
assistance appears to have nothing to do with the bipartite nature of
the couplings. In order to see this, consider a ring of spins with
the ``weird'' three-spin couplings
\begin{equation}
\label{eq:weird}
\hat H_{\text{weird}}= -\sum_k \hat \sigma^{(x)}_{k-1} \hat \sigma^{(y)}_{k} \hat \sigma^{(x)}_{k+1}.
\end{equation}
The temporal behavior of concurrence of assistance and square-root of
one-tangle for neighbors is shown in
Fig.~\ref{fig:weird}. Concurrence of assistance apparently reaches
its upper limit, showing that a three-spin interaction can also
generate maximal focusable bipartite entanglement.
\begin{figure}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig5.eps}
\caption{(Color online.)
Time evolution of concurrence of assistance and one-tangle for the
``weird'' Hamiltonian in Eq.~\eqref{eq:weird}, for 6 spins. In the
initial product state all spins point upwards.}
\label{fig:weird}
\end{figure}
In this Section we have shown that it is possible to generate
entanglement oscillations not only between product and graph (or
cluster) states, but also between product states and states with the
maximal possible bipartite entanglement, and to control this
entanglement behavior by the initial state.
\section{Entangled bases in the presence of a magnetic field}
\label{sect:bases}
In Section~\ref{sect:graphstates} we have seen that in the absence of
magnetic field the Ising couplings induce such dynamics that
\emph{all} the states of the computational basis evolve into graph
states periodically. In the Heisenberg picture we may interpret this
so that the products of the $\hat \sigma^{(z)}$ operators evolve into such joint
observables, which have an eigenbasis formed entirely by graph
states. One of the key features of such states is that they can be
projected onto a maximally entangled state of any pair of selected
spins by a von Neumann measurement on the rest of the spins. We show
here that this property is preserved, and even enhanced, if a
magnetic field is present.
First we consider the Ising Hamiltonian with a magnetic field pointing
towards a direction characterized by the angle $\phi$:
\begin{equation}
\label{eq:Ising}
\hat H _\text{Ising}= -\sum\limits_{\langle k,l \rangle}
\hat \sigma^{(x)}_k \otimes \hat \sigma^{(x)}_l -
B\sum_k e^{i\frac{\phi}{2}\hat \sigma^{(x)}_k} \hat \sigma^{(z)}_k e^{-i\frac{\phi}{2}\hat \sigma^{(x)}_k}.
\end{equation}
Thus we have two free parameters characterizing the magnetic field,
its magnitude $B$ and direction $\phi$. Note that the rotation of the
magnetic field is equivalent to a rotation of the initial state in
this case.
In particular, we are interested in the temporal behavior of the
concurrence of assistance $C_{\text{assist}}$ for certain pairs of
spins. Therefore we calculate the time evolution of all the states
$\ket{e_i}$ of the computational basis:
\begin{equation}
\label{eq:isingtrstates}
\Ket{e_i'(B,t)}=
\exp\left(-i\hat H _{\text{Ising}}t\right)\Ket{e_i},
\quad i=1\ldots 2^N.
\end{equation}
Then we can evaluate the average
\begin{equation}
\label{eq:ensavg}
{\overline{C_{\text{assist}}}}(B,t)= \frac{1}{2^N}
\sum_i C_{\text{assist}}\left( \Ket{e_i'(B,t)} \right),
\end{equation}
and also the standard deviation
\begin{equation}
\label{eq:ensdev}
\sigma_{C_{\text{assist}}}(B,t)= \sqrt{\overline{C_{\text{assist}}^2}-\overline{C_{\text{assist}}}^2}
\end{equation}
of concurrence of assistance over the computational basis states as
initial states. The standard deviation indicates how strongly the
quantity varies with the initial state.
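For small systems these ensemble statistics can be obtained directly by exact diagonalization. The following sketch is an illustration only; it assumes the two-qubit closed form of the concurrence of assistance (the sum of the Wootters $\lambda_i$) and evaluates the average and deviation over all basis states of a three-spin ring at a single time instant with $\phi=0$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
YY = np.kron(sy, sy)

def op(single, site, n):
    out = np.eye(1, dtype=complex)
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

def c_assist(rho):
    # assumed two-qubit closed form: sum of the Wootters lambdas
    ev = np.linalg.eigvals(rho @ (YY @ rho.conj() @ YY))
    return np.sqrt(np.clip(ev.real, 0, None)).sum()

def reduced_pair(psi, n, a, b):
    """reduced density matrix of qubits a and b of an n-qubit pure state"""
    rest = tuple(k for k in range(n) if k not in (a, b))
    arr = np.transpose(psi.reshape([2] * n), (a, b) + rest)
    m = arr.reshape(4, -1)
    return m @ m.conj().T

n, B, t = 3, 1.0, 0.7          # ring of three spins, field along z (phi = 0)
H = sum(-op(sx, k, n) @ op(sx, (k + 1) % n, n) - B * op(sz, k, n)
        for k in range(n))

U = expm(-1j * H * t)
vals = [c_assist(reduced_pair(U[:, i], n, 0, 1)) for i in range(2 ** n)]
avg, dev = np.mean(vals), np.std(vals)   # Eqs. (ensavg) and (ensdev)
```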
A typical result of the calculation is plotted in
Fig.~\ref{fig:Isingbasis}.
\begin{figure*}[htbp]
\centering
\includegraphics{Koniorczyk_spins_fig6.eps}
\caption{(Color online.)
Average (a,c) and standard deviation (b,d)
of concurrence of assistance for a pair of outer spins of a star
topology, taken over all the possible computational basis states
as initial states. Ising Hamiltonian with a magnetic field as in
Eq.~\eqref{eq:Ising}, 4+1 spins in a star topology. In Figs. a)
and b) $\phi=0$ and the $B$-dependence is plotted; in Figs. c) and
d) $B=1$ and the $\phi$-dependence is plotted. Similar figures are
obtained for different choices of the spin pair, and for ring
topologies too.}
\label{fig:Isingbasis}
\end{figure*}
For $B=0$ the expected entanglement oscillations are present. If the
magnetic field is nonzero, the system does not tend to return to the
initial product states. The magnetic field resolves many of the high
degeneracies of the Ising Hamiltonian, and the eigenvalues become
incommensurable. Therefore, even though the evolution of the system
is almost periodic according to the quantum recurrence
theorem~\cite{BocchieriL57}, reasonable approximate recurrences
occur only after an extremely long time.
For $B\neq 0$, the ensemble average of concurrence of assistance
appears to stay rather close to one for quite long time intervals,
while its standard deviation is low. The deviation can be further
suppressed by a suitable choice of the magnetic field. This behavior
of the concurrence of assistance is very similar to that in
Fig.~\ref{fig:Isingbasis} also for different choices of the qubit
pair, for qubit pairs of a ring topology, and for other
computationally feasible numbers of qubits. From this we can conclude
that the elements of the computational basis are transformed into
states which can be projected into nearly maximally entangled states
of chosen two spins via a von Neumann measurement on the rest of the
spins. In other words, the Ising couplings take the products of the
$\hat \sigma^{(z)}$ matrices to such a complete set of commuting operators whose
eigenstates have the above mentioned property. The time span over
which this property is present is significantly enhanced by the
magnetic field.
The entanglement arising this way is essentially multipartite: the
magnetic field does not enhance the concurrence of the qubit pairs,
as can be verified by performing the same calculation with the
concurrence. Note that the characteristic behavior of the
entanglement, as reflected by the Meyer-Wallach measure for the
kicked Ising model, also in the presence of a magnetic field pointing
in an arbitrary direction, was reported in \cite{quantph0409039}.
Another relevant question might be whether the required measurements
are local, i.e. how much localizable entanglement is present. To
illustrate this issue in our numerical framework, we have evaluated a
lower bound for the localizable entanglement by considering merely a
measurement in the computational basis. According to our experience,
the behavior of the bipartite entanglement available this way
resembles that of the concurrence of assistance, but it takes lower
values. However, quite remarkable bipartite entanglement is still
available, in most cases still higher than the limit that the CKW
inequalities would allow for without measurements.
Next we investigate the properties of the $XY$-model from the same
point of view: into Eq.~\eqref{eq:isingtrstates} we substitute the
Hamiltonian
\begin{eqnarray}
\label{eq:XY}
\hat H_{\text{XY}} = -\sum\limits_{\langle k,l \rangle}
\left( \hat \sigma^{(x)}_k \otimes \hat \sigma^{(x)}_l +\hat \sigma^{(y)}_k \otimes \hat \sigma^{(y)}_l\right) \nonumber \\
- \sum_k e^{i\frac{\phi}{2}\hat \sigma^{(x)}_k} \hat \sigma^{(z)}_k e^{-i\frac{\phi}{2}\hat \sigma^{(x)}_k}.
\end{eqnarray}
A homogeneous magnetic field parallel to the $z$ axis does not have
any effect on the entanglement behavior of the system, as
\begin{equation}
\label{eq:commut}
\left[ \sum_l \hat \sigma^{(z)}_l,\sum\limits_{\langle k,l \rangle}
\left( \hat \sigma^{(x)}_k \otimes \hat \sigma^{(x)}_l +\hat \sigma^{(y)}_k \otimes \hat \sigma^{(y)}_l\right)\right]=0
\end{equation}
thus the local rotations generated by $\sum_l \hat \sigma^{(z)}_l$ can be taken into
account after calculating the effect of the couplings. Therefore we
pick $B=1$, and investigate the dependence of the concurrence and the
concurrence of assistance on the direction $\phi$ of the field.
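The vanishing commutator in Eq.~\eqref{eq:commut} is straightforward to confirm numerically; a minimal sketch for a ring of four spins (illustration only):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    out = np.eye(1, dtype=complex)
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

n = 4  # ring of four spins
Hxy = sum(op(sx, k, n) @ op(sx, (k + 1) % n, n) +
          op(sy, k, n) @ op(sy, (k + 1) % n, n) for k in range(n))
Sz = sum(op(sz, k, n) for k in range(n))

# Eq. (commut): the total z-magnetization commutes with the XY coupling
assert np.allclose(Sz @ Hxy - Hxy @ Sz, 0)
```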
The quantities evaluated are again those in Eqs.~\eqref{eq:ensavg}
and~\eqref{eq:ensdev}, both for concurrence and concurrence of
assistance. A typical result is displayed in Fig.~\ref{fig:XYbasis}.
\begin{figure*}
\centering
\includegraphics{Koniorczyk_spins_fig7.eps}
\caption{(Color online.)
Time evolution of averages (a,c) and deviations (b,d)
of concurrence (a,b) and concurrence of assistance (c,d) for two
outer spins of a star configuration of 4+1 spins coupled by the XY
Hamiltonian with magnetic field in \eqref{eq:XY}. Parameter $\phi$
describes the direction of the magnetic field. Similar behavior
was observed for ring topologies and different choice of the qubit
pair too.}
\label{fig:XYbasis}
\end{figure*}
It appears that for $\phi=0$ we obtain oscillations in the average
concurrence, too, while the concurrence of assistance is not
significantly higher than the concurrence itself. An appropriate
choice of the direction of the magnetic field can suppress the
concurrence, significantly enhance the concurrence of assistance, and
decrease its deviation. Thus even though the couplings are not of
Ising type, at least that feature of the Ising couplings that they
produce bases with high concurrence of assistance can be retained.
\section{Conclusions}
\label{sect:concl}
In this paper we have related the problems of maximizing pairwise
concurrence and pairwise concurrence of assistance in a system of
multiple qubits. We have shown that the square root of the one-tangle
of a qubit is an upper bound for the concurrence of assistance of a
qubit pair containing that particular qubit. We have also shown that for a
certain set of states for which the CKW inequality is known to be
saturated, the concurrence is equal to the concurrence of assistance.
This means that the bipartite subsystem under consideration is not
correlated with the rest of the system via intrinsic multipartite
entanglement.
We have also studied, from this perspective, the entanglement
behavior of spin-1/2 systems modeling qubits. We have shown that in a
star configuration of XY-coupled spins, entanglement oscillations
between product states and states with maximal bipartite entanglement
according to the CKW inequalities can be dynamically generated. The
oscillations can be controlled by rotating the spin which mediates the
interaction, and at some points it gets disentangled from the rest of
the outer ring, which is meanwhile maximally entangled in the CKW
sense. This maximal entanglement is reached for the whole system, too.
We have shown numerically that the star topology facilitates a similar
control of entanglement oscillations between product and graph states.
The rotation of all the qubits of the initial state, on the other
hand, leads to a different behavior of the concurrence of assistance,
as the bipartite entanglement available via measurement is enhanced.
We have found similar behavior for different topologies numerically.
According to our numerical results, a magnetic field can lead to the
temporal enhancement of the concurrence of assistance in the
entanglement oscillations starting from the states of the
computational basis, in the case of spins coupled by Ising
interactions arranged into ring or star topologies. Thereby a special
entangled basis can be accessed. We have found similar behavior for
the case of XY couplings: a magnetic field applied along a properly
chosen direction suppresses the concurrence and enhances the
concurrence of assistance.
According to the presented results, pairwise couplings between spins
and qubits can be used effectively for different tasks of distributing
bipartite entanglement between multiple parties. It is also possible
to control the dynamical behavior of entanglement by local quantum
operations such as the rotation of control qubits. Besides, a magnetic
field can be utilized to temporally enhance certain entanglement
features, or to choose between qualitatively different kinds of
entanglement behavior. It would also be interesting to investigate
whether the entangled bases available by the described means are
useful for quantum information processing tasks.
\begin{acknowledgments}
This work was supported by the European Union projects QGATES and
CONQUEST, and by the Slovak Academy of Sciences via the project CE-PI.
M.~K. acknowledges the support of National Scientific Research Fund of
Hungary (OTKA) under contracts Nos. T043287 and T034484. The authors
thank G\'eza T\'oth for useful discussions.
\end{acknowledgments}
\section{Introduction}
\IEEEPARstart{N}{owadays}, there are a number of technologies for identifying a person, and one of them is face recognition, which has been applied in a wide range of applications \cite{Luan2017Disentangled,Jingna2020An}. Additionally, fingerprint recognition, palmprint recognition and action recognition have attracted considerable attention from the research community \cite{Cao2017Automated,Jia2017Palmprint,LiuDWDYL20}. Gait recognition is a biometric technology that recognizes people at a longer distance by acquiring walking data \cite{Bregler1997Learning,Minxiang2020Distinct}; it aims to analyze the overall human activity rather than a part of the human body, and gait is not easily imitated and hence more reliable for the recognition task.
Human beings have a unique visual system that can directly identify people based on their gait beyond a certain distance. The input data of a gait recognition model is generally a sequence of walking videos or images, and its data collection is similar to that of face recognition, which is noninvasive. However, due to the large amount of data in image sequences, the computational complexity of gait recognition is relatively high, and the processing is not straightforward. A gait recognition system extracts key features from images of the joint movements of a walking person. So far, there is no commercialized gait-based identity authentication system.
In previous research, many types of sensors have been used for gait data collection and recognition: image, depth, and inertial sensors, which acquire RGB images/videos, motion distance, and acceleration/orientation. Among them, captured images have the advantages of convenience, non-contact acquisition, and easy interpretation. Besides, skeleton data preserve the integrity of gait movement well while minimizing the interference of the color and shape of the human body in the image. Some existing work can directly extract 3D skeleton joints from images of the human body, and this characteristic compensates for the shortcomings of 2D RGB images \cite{Liu2017Skeleton,Li2019Attentive}.
On the one hand, traditional gait recognition algorithms have been well studied, including the Hough transform, contour extraction, and three-dimensional wavelet moments. Model-based algorithms have been investigated over multiple data sources. Besides, deep learning methods, such as convolutional neural networks (CNN) and the restricted Boltzmann machine (RBM), have been employed and have achieved promising results in gait recognition and related fields \cite{Wu2016A,Hossain2013Multimodal,Qi2016Hierarchically}. Moreover, there are several studies on image segmentation and image classification that can be applied to gait recognition to solve the problem of multimodal gait data classification \cite{2018Image,2018An}. However, these approaches are not designed to deal with sequential data.
On the other hand, although some feature extraction algorithms take temporal changes into account, they lack the analysis of spatial information. These methods use the relationship between time steps to represent the dynamics in the gait cycle, but do not analyze each frame in detail. Specifically, there are several state-of-the-art methods for sequential information acquisition \cite{hochreiter1997long,10.5555/3305381.3305543,Shu,8397466,Zhao2018Dual,Liu2017Skeleton}, such as the original, bi-directional, and attention-based LSTM. However, these methods are derived from the standard LSTM to enhance its performance, and the spatial mapping of the input data has not been fully addressed.
Apart from temporal and spatial characteristics, we also consider the structural relationship between the various parts of the human body when analyzing gait, such as the relationship between feet and legs, head and body, and limbs and trunk. Unlike CNNs, the capsule network (CapsNet) \cite{Sabour2017Dynamic} does not discard the position of an entity (body part) within a region (human body). For low-level capsules (body parts), location information is ``place-coded" while the capsule is active. Using an iterative routing process, each active capsule chooses a capsule in the layer above to be its parent in the tree. For the task of gait recognition, this iterative process solves the problem of locating the body parts.
In order to extract temporal, spatial, and structural features from multimodal data, we design a capsule network to synchronously analyze gait patterns and changes. It mainly involves two tasks, feature extraction and classification: we extract and classify features, and evaluate the effectiveness of the feature extraction through classification. Inspired by the learning processes of CapsNet \cite{Sabour2017Dynamic} and the gated recurrent unit (GRU) \cite{hochreiter1997long}, an associated spatio-temporal capsule network (ASTCapsNet) is proposed for gait recognition, comprising three modules: a low-level feature extractor, a high-level feature extractor, and a decision-making layer.
The low-level feature extractor extracts the spatio-temporal features of the input data and includes a novel expandable memory module and a convolution layer; the high-level feature extractor mainly performs matrix operations on the feature maps in the form of capsules and includes the primary capsule layer, the relationship layer, and the digit capsule layer with the dynamic routing algorithm.
In the decision-making layer, we design four softmax classifiers to analyze the outputs of all the layers, and the parameters of each layer of the network are optimized until the redefined joint loss converges to the minimum. Moreover, a standard Bayesian model is employed to fuse the results of the classifiers, whose predictions are regarded as new feature inputs. Furthermore, we also propose a relationship layer between the capsules to calculate the relative relationship between the local features of gait.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item An associated spatio-temporal capsule network (ASTCapsNet) is proposed for spatio-temporal and structural feature extraction and classification using multimodal gait data.
\item Two feature extractors are designed for low- and high-level gait feature analysis and extraction. The low-level feature extractor contains a novel memory module and a convolution layer to extract spatio-temporal features. In the high-level feature extractor, we propose a relationship layer to calculate the relationship matrix between two capsules to ensure the structural integrity of the input.
\item We develop a recurrent memory unit to enable the model to converge faster than the original GRU unit.
\item The decision-making layer is used to construct the evaluation mechanism of ASTCapsNet, through four softmax classifiers and a Bayesian model to select optimal features.
\item Compared against the other state-of-the-art technologies, the proposed method achieves superior results on five challenging datasets, including both normal and abnormal gait data, for multimodal gait recognition.
\end{itemize}
This paper is organized as follows. Section II describes existing studies on gait recognition. Section III introduces the proposed ASTCapsNet model, and Section IV shows experimental results. We discuss the performance of ASTCapsNet, and point out its weaknesses in Section V. We conclude the whole paper in Section VI.
\section{Related Work}
Significant efforts have been made to advance the state-of-the-art of gait recognition. From handcrafted feature recognition to model-based recognition, gait recognition methods have undergone significant progress. It should also be noted that the gait data collection process has become diverse.
\subsection{Multi-Sensor Based Methods}
There are several ways to obtain gait data collected by sensors. The types and outputs of the sensors are different and can be roughly classified into the following types: cameras, infrared sensors, depth sensors, force sensors, accelerometers, and gyroscopes.
Camera acquisition is convenient and non-invasive, which can capture people's gait form at a certain distance. Chattopadhyay \textit{et al.} combined front and back view features from RGB-D images. The data acquisition was conducted at the airport security checkpoint with two depth cameras installed on top of the metal detector door outside the yellow line \cite{Chattopadhyay2014Frontal}. Xue \textit{et al.} used an infrared thermal imaging camera to collect gait images to establish an infrared thermal gait database, which can detect the human body and remove noise from complex background \cite{Xue2010Infrared}.
There are also studies that use gyroscopes and accelerometers to develop wearable measurement modules for measuring the inertial signals generated while walking and assessing gait, as a basis for new hemiplegia diagnostic techniques using wearable devices \cite{Park2017Design}.
Besides, some studies were based on foot force datasets. Force sensors were installed on the sole to capture the foot movement data of a patient in order to monitor the progress of neurodegenerative diseases \cite{Zhao2018Dual}.
\subsection{Hand-crafted Features Based Approaches}
Based on the above-mentioned data sources, early gait recognition techniques often used traditional computer vision methods for image understanding and produced hand-crafted features from raw data. Gianaria \textit{et al.}\cite{gianaria2014human} defined a dataset of physical and behavioral features to identify relevant parameters for gait description. Andersson \textit{et al.} \cite{andersson2015person} acquired gait attributes in each gait cycle, such as leg angle, step length, and stride length, and anthropometric attributes such as mean and standard deviation.
However, these features are in a low-dimensional space and are mainly based on human experience. Therefore, it is difficult to obtain descriptive features in a high-dimensional space.
\subsection{Model-Based Approaches}
With the remarkable advantages that deep learning brings to various fields, gait recognition technologies have also employed deep learning algorithms to model the dynamic changes of gait. When observing human walking, people can describe the structure of the human body and detect the motion patterns of human limbs, interpreting the structure of the body based on their prior knowledge. Model-based gait recognition methods have played an active role in recent developments.
Feng \textit{et al.} proposed a new feature learning method to learn temporal information in a gait sequence for cross-view gait recognition. Heat maps extracted by a CNN-based pose estimation method were used to describe the gait information in every frame, and the standard LSTM was adopted to model a gait sequence \cite{Feng2017Learning}.
Wu \textit{et al.} utilized deep CNNs for human identification via similarity learning, which can recognize the most discriminative changes in gait patterns \cite{Wu2016A}. In order to extract gait spatio-temporal features, CNN and LSTM have also been applied simultaneously to establish models for gait recognition \cite{Alotaibi2017Improved}. Although these methods model gait patterns well, the characteristics of gait are not merely a stack of features; they also include structural and positional information, such as the positional relationship between the arms and legs.
The activities of the neurons within CapsNet represent the various properties of a particular entity that is present in the data. These properties can include many different types of instantiation parameters such as pose, deformation, velocity, and texture, which can improve gait recognition. Even though the capsule network has been applied in other areas \cite{Zhao_2019_CVPR,NIPS2018_7823}, gait recognition is seldom involved.
In this paper, we propose an associated spatio-temporal capsule network (ASTCapsNet) to process the input gait data and learn more powerful multi-level structured representations for each class of the streaming gait data. The newly proposed low- and high-level feature extractors enhance the capability of our framework for gait recognition over different gait streams. Moreover, we introduce softmax classifiers and a Bayesian model to evaluate the importance of the features generated by each module, which increases the classification accuracy. Furthermore, we extensively evaluate the proposed gait recognition framework on five datasets, including the KinectUNITO, SDUgait, and CASIA-A datasets for people identification, and the NDDs and PD datasets for disease diagnosis.
\section{Associated Spatio-Temporal Capsule Network}
\begin{figure*}[htp]
\centering
\includegraphics[width=13cm]{./image/astcapsnet.png}
\caption{The structure of ASTCapsNet. It is divided into three modules: A low-level feature extractor for spatio-temporal feature modeling, including a memory module and a convolution layer; a high-level feature extractor for learning structural and relationship features, including a primary capsule layer, a relationship layer, and a digit capsule layer; and a decision-making layer for classification, including four softmax classifiers and a Bayesian model. Four softmax classifiers behind the first two modules are utilized to evaluate the system performance. The Bayesian model utilizes the outputs of the previous two feature extractors to determine the final class labels. }
\label{capsnet_process}
\end{figure*}
We introduce the proposed architecture, the associated spatio-temporal capsule network (ASTCapsNet), for gait recognition in this section. The overall schema of this method is illustrated in Fig. \ref{capsnet_process}. Here, we take input gait images as an example; data from other sensors follow the same steps. Firstly, the image is fed into the memory module and the convolution layer at the same time, and the generated spatio-temporal features are used as the input of softmax 1 and 2. Then, after the high-level feature extractor receives the low-level spatio-temporal features, the capsule layers and the relationship layer extract the high-level structure and relationship features as the input of softmax 3 and 4, and the decision-making module utilizes the fused features for the final decision.
In the proposed network, the inputs of ASTCapsNet are continuous sensor output data at each time step, including images, force-sensitive data, 3D skeleton joint data, and time series. The gated recurrent unit (GRU) structure operates in the time domain to model the gait dynamics over data frames. To effectively extract high-level features from low-level features and determine the relationship between them, a dynamic routing algorithm is used for the weighting operations. To better deal with the different levels of features extracted by the network, a Bayesian modeling mechanism is also introduced to process the output of each layer of our network.
On the one hand, for extracting spatial information from gait, the convolutional neural network (CNN) is one of the most popular methods, implementing feature extraction through layer-by-layer adjustment. However, in this process, important information is lost: the relative position of and relationship between features \cite{Sabour2017Dynamic}. This is precisely because the pooling operation in a CNN can only provide rough position information; it allows the model to ignore small spatial changes and cannot accurately learn the positional correlation of different objects. To this end, the capsule network (CapsNet) emerged.
On the other hand, for capturing temporal information, the gated recurrent unit (GRU) is a recurrent neural network that can be unrolled into several cells (a chain-like model), extracting information at each time step, with iterative connections between hidden units.
\subsection{Low-Level Feature Extractor}
The capsule network (CapsNet) \cite{Sabour2017Dynamic} has demonstrated its strength in extracting spatial features from different data types \cite{Zhao_2019_CVPR}. Inspired by the success of these approaches on different data, we use the convolutional mechanism and capsule structure of CapsNet to extract spatial features. Besides, exploiting the strength of the GRU in processing temporal sequential data, we design a temporal mask added to each layer to represent the unique spatial and temporal features. Specifically, a hierarchical architecture with a spatio-temporal feature extraction layer is utilized in our model to learn a comprehensive representation.
To introduce the ASTCapsNet recurrent neural network, we begin with the definition of several notations. Suppose we are given an input sequence $D = {\{x_i \in \mathbb{R}^N, i=1,2,...,M\}}$, where $M$ is the number of samples and $N$ is the feature dimension of each sample, with the corresponding class label sequence $L={\{y_i \in \mathbb{R}^1, i=1,2,...,M\}}$. For the task of gait recognition, our aim is to address three problems: (1) obtaining spatio-temporal features; (2) acquiring high-level features; and (3) determining the final class label.
\subsubsection{Convolution Layer}
The first building block of our network model is the convolution layer. The purpose of this layer is to use several sliding convolution kernels with a step size of 1 to filter the input data and extract multiple feature maps.
Firstly, we utilize $L_2$ normalization to scale the original input $D=\{x_i\in\mathbb{R}^N, i=1,2,...,M\}$ to [0,1], giving the output $D'$. Then, we feed $D'=\{x'_i\in\mathbb{R}^N, i=1,2,...,M\}$ into the memory and convolution modules, respectively. For the pre-processing of $D'$, we reshape each training sample into a format of $K \times T = N$, where $K$ and $T$ are the numbers of rows and columns of each sample, so that the convolution operation can be carried out in matrix form regardless of the data type. The sizes of the convolutional kernels must be smaller than $K$ and $T$, respectively. The feature map is computed as follows:
\begin{equation}
{h^{l}_{W,b}(x')} =f(W^Tx'+b)= f\left(\sum_{i=1}^M{W_i}^{l-1}*{x'_i}^{(l-1)}+b^{l-1}\right).
\label{conv_1}
\end{equation}
The $*$ operation above denotes convolution: the convolution kernel (the shared weight) $W_i$ convolves all the related feature maps at the ${(l-1)}^{th}$ layer; the results are summed and a bias parameter $b$ is added, and the final excitation value ${h^{l}_{W,b}(x')}$ at layer $l$ is obtained by applying the sigmoid function $f$.
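As a concrete illustration of Eq. (\ref{conv_1}), the sketch below applies a stride-1 ``valid'' convolution and a sigmoid activation to a toy $K \times T$ sample; the sample values, kernel, and bias are illustrative assumptions rather than trained parameters of ASTCapsNet.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_valid(x, w):
    # Stride-1 "valid" 2D cross-correlation of a K x T map with a kernel.
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def feature_map(x, w, b):
    # Eq. (1): h = f(sum_i W_i * x'_i + b), with a sigmoid activation f.
    return sigmoid(conv2d_valid(x, w) + b)

x = np.arange(16, dtype=float).reshape(4, 4) / 16.0  # toy K x T = 4 x 4 sample
w = np.ones((2, 2)) / 4.0                            # toy 2 x 2 averaging kernel
h = feature_map(x, w, 0.0)
```

For a $4\times4$ sample and a $2\times2$ kernel, the resulting feature map is $3\times3$, and the sigmoid keeps every excitation value in $(0,1)$.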
\subsubsection{The Proposed Recurrent Memory Unit}
We feed the original data into a two-layer memory module to extract temporal information and obtain the corresponding feature matrix. A variant of long short-term memory (LSTM), called the gated recurrent unit (GRU), learns long-term dependency information from the input dynamic data by merging the input and forget gates of the original LSTM into an update gate and fusing the node and hidden states in each node; it was first proposed in \cite{Chung2014Empirical}. We improve the performance of the original GRU cell by adding a temporary path and modifying the state settings. Fig. \ref{grucell} shows the internal structure of the proposed recurrent memory unit and illustrates the cooperation of all the gates.
\begin{figure}[!htb]
\centering
\includegraphics[width=7cm]{./image/gru.png}
\centering
\caption{The proposed recurrent memory unit. The internal structure and operations are shown in this figure, and $ctemp$ is the newly added path in the original GRU cell.}
\label{grucell}
\end{figure}
Because of the slow convergence of the original GRU in gait data classification, we improve the gating structure in its internal deployment nodes by adding a new path and a temporary state to extract representative features. The improved GRU cell is shown in Fig. \ref{grucell} and described by Eqs. (\ref{z_t})-(\ref{impv_h_t}):
\begin{equation}
{z_t} = \sigma({W_z}\cdot[{O_{t-1}},{x'_t}]),
\label{z_t}
\end{equation}
\begin{equation}
{r_t} = \sigma({W_r}\cdot[{O_{t-1}},{x'_t}]),
\label{r_t}
\end{equation}
\begin{equation}
{{\tilde h}_t} = \tanh (W\cdot[{r_t}\odot{O_{t-1}},{x'_t}]),
\label{h'_t}
\end{equation}
where $x'_{t}$ and $O_{t-1}$ are the input and the previous hidden state, respectively, in each memory unit, and $\sigma$ and $\tanh$ denote the logistic sigmoid and hyperbolic tangent functions. $z_t$ indicates the output of the update gate at time step $t\in\{1,2,3...,T\}$, and $r_t$ denotes the reset gate. The update gate $z_t$ determines whether or not the hidden state is to be updated with a new hidden state $\tilde h$. The reset gate $r_t$ decides whether or not the previous hidden state is ignored.
\begin{equation}
{ctemp} = \tanh({W_{ctemp}}\cdot[{O_{t-1}},{x'_{t}}]),
\label{impv_ctemp}
\end{equation}
\begin{equation}
{c_t} = (1-{z_t})\odot{{\tilde h}_t}+{z_t} \odot {O_{t-1}},
\label{impv_c_t}
\end{equation}
\begin{equation}
{O_t} = {c_t}\odot \sigma({ctemp}),
\label{impv_h_t}
\end{equation}
where $ctemp$ represents the temporary state that selects the information of $x'_{t}$ and $O_{t-1}$, while $c_t$ denotes the final state of the original GRU. This means that we take advantage of the information selector to extract features from $c_t$. $O_{t}$ indicates the actual activation of the proposed node at time step $t\in\{1,2,3...,T\}$. Experiments show that it converges faster than the original GRU.
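A minimal NumPy sketch of the proposed recurrent memory unit, stepping Eqs. (\ref{z_t})-(\ref{impv_h_t}) over a short sequence. The hidden size, input size, and randomly initialized weights are illustrative assumptions; a trained model would use learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
H, X = 4, 3  # hidden size and input size (toy values)

# Random weights over the concatenated vector [O_{t-1}, x'_t].
Wz, Wr, W, Wc = (rng.standard_normal((H, H + X)) * 0.1 for _ in range(4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cell(O_prev, x_t):
    cat = np.concatenate([O_prev, x_t])
    z = sigmoid(Wz @ cat)                                      # Eq. (2) update gate
    r = sigmoid(Wr @ cat)                                      # Eq. (3) reset gate
    h_tilde = np.tanh(W @ np.concatenate([r * O_prev, x_t]))   # Eq. (4) candidate
    ctemp = np.tanh(Wc @ cat)                                  # Eq. (5) temporary state
    c = (1.0 - z) * h_tilde + z * O_prev                       # Eq. (6) GRU state
    return c * sigmoid(ctemp)                                  # Eq. (7) output O_t

O = np.zeros(H)
for t in range(3):
    O = cell(O, rng.standard_normal(X))
```

Because $c_t$ is a convex combination of a $\tanh$ output and the previous state, and the new path only rescales it by $\sigma(ctemp)\in(0,1)$, the activation stays bounded in $(-1,1)$.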
Once the temporal information has been obtained from the improved GRU, we use a stacking method to mask the output of each convolution layer with the temporal features, forming a three-dimensional cube containing both temporal and spatial features. The representation is as follows:
\begin{equation}
{g_{st}(x)} = tile({O_t})+{h^{l}_{W,b}(x')},
\label{g(x)}
\end{equation}
where $g_{st}(x)$ is the function that produces the spatio-temporal features, and $tile({O_t})$ copies the input into multi-dimensional data consistent with the output of ${h^{l}_{W,b}(x')}$, which is then added to ${h^{l}_{W,b}(x')}$ to form a tensor $g$. We use 3D convolutional kernels/capsules with a step size of 1 to get the output of the primary capsule layer, $g'=W_{conv}g$.
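The masking step of Eq. (\ref{g(x)}) can be sketched as follows: the GRU output is tiled over the spatial dimensions of the convolutional feature maps and added element-wise. The toy channel count and map size are assumptions for illustration.

```python
import numpy as np

def fuse(O_t, h_maps):
    # Eq. (8): broadcast (tile) the temporal feature O_t over every spatial
    # position of the convolutional feature maps and add it element-wise.
    C, Kh, Kw = h_maps.shape           # one temporal value per channel (assumption)
    mask = np.tile(O_t[:, None, None], (1, Kh, Kw))
    return mask + h_maps

O_t = np.array([0.2, -0.1])            # toy temporal feature, one per channel
h_maps = np.zeros((2, 3, 3))           # toy spatial feature maps
g = fuse(O_t, h_maps)
```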
\subsection{High-Level Feature Extractor}
After extracting the temporal and spatial features (low-level features), we use capsules to process them to obtain structural and relationship information (high-level features) of gaits, which addresses the problem of information loss in CNNs. Different capsules represent different target positions, revealing the relative relationship between different spatial and temporal feature points. Then, according to the relationship between capsules, more important features are selected, and higher-level features are extracted by a dynamic routing algorithm.
In CapsNet, detailed posture information (such as the exact location, rotation, thickness, tilt, and size of objects) is saved in the network so that it can be restored later. Slight changes in the input bring small changes to the output, and information is preserved. This allows CapsNet to use a simple and unified architecture for different visual tasks. Therefore, we design a capsule layer as the first layer of the high-level feature extractor.
\subsubsection{Capsule Layer}
The primary capsule layer is a stack of 8 parallel convolutional layers, each of which has a 3D convolutional kernel/capsule with a step size of 1.
The operation of this layer is similar to convolution, divided into different capsules. Since the output of a capsule is a vector, a powerful dynamic routing mechanism can be used to ensure that the output is sent to the appropriate parent in the layer above. Therefore, the structure and direction of the features are better reflected.
The input vector of the primary capsule layer is the counterpart of the scalar input of a traditional neural network neuron, and the vector calculation defines the mode of propagation and connection between the primary capsule and digit capsule layers.
The calculation of input vectors can be divided into two stages, i.e. linear combination and dynamic routing. The linear combination process can be expressed by the following formula:
\begin{equation}
{\hat{g'}_{j|i}} = W_{ij}g'_i,
\label{g'_hat}
\end{equation}
where ${\hat{g'}_{j|i}}$ is a linear combination of $g'_i$, which can be seen as the output of a neuron in the first layer of a fully-connected network to a certain neuron in the next layer with different connection strengths. ${\hat{g'}_{j|i}}$ is the product of the output vector of capsule $i$ in the previous layer and the corresponding weight matrix $W_{ij}$ ($W_{ij}$ represents a matrix rather than an element). ${\hat{g'}_{j|i}}$ can also be understood as the prediction of capsule $i$ in the previous layer for capsule $j$ in the next layer.
After determining ${\hat{g'}_{j|i}}$, the dynamic routing algorithm is used in the second stage to calculate the output $s_j$:
\begin{equation}
{s_{j}} = \sum_ic_{ij}{\hat{g'}_{j|i}},
\label{s_j}
\end{equation}
where the parameter $c_{ij}$ is updated iteratively. The input $s_j$ of the next capsule is obtained by routing, and the squashing non-linear function then activates $s_j$.
By adding the routing mechanism to the capsule, a set of coupling coefficients $c_{ij}$ can be found, which are updated and determined iteratively by dynamic routing process:
\begin{equation}
{c_{ij}} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})},
\label{routing}
\end{equation}
which makes the prediction vector ${\hat{g'}_{j|i}}$ consistent with the output vector. $b_{ij}$ depends on the location and type of the two capsules, and $c_{ij}$ is iteratively updated according to the measured agreement.
\begin{figure*}[!htb]
\centering
\includegraphics[width=11cm]{./image/2caps.png}
\caption{The propagation process between the two capsule layers. There are three steps in this process: 3D convolution ($\hat{g'}_{j|i}$), dynamic routing ($s_j$), and squashing ($v_j$).}
\label{2capsules}
\end{figure*}
The dissemination and distribution of the whole hierarchy can be divided into two parts. The first part is the linear combination between $g'_i$ and ${\hat{g'}_{j|i}}$, and the second part is the routing process between ${\hat{g'}_{j|i}}$ and $s_j$. The information transmission and calculation between the two capsule layers are shown in Fig. \ref{2capsules}.
Assume there are two capsule units $g'_i$ in the primary capsule layer and four capsule units $v_j$ in the next layer. $g'_1$ and $g'_2$ are the tensors obtained from the spatio-temporal feature extraction layer, i.e., capsule units containing a group of neurons, which are multiplied by different weights $W_{ij}$ to form $\hat{g'}_{j|i}$, respectively. Each prediction vector is then multiplied by the corresponding coupling coefficient $c_{ij}$ and passed to a specific digit capsule unit. The input $s_j$ of each digit capsule unit is the weighted sum of all the incoming prediction tensors. By feeding the input tensors $s_j$ into the squashing non-linear function, we obtain the output tensors $v_j$ of the latter capsule units. We use the product of the output vector $v_j$ and the corresponding prediction vector ${\hat{g'}_{j|i}}$ to update the coupling coefficient $c_{ij}$. This iterative update does not require back-propagation.
The activation of neurons in a capsule indicates the properties of specific entities in data. These properties include different parameters, such as gait data position, size, direction, deformation, speed, reflectivity, color, and texture. The length of the input and output tensors represents the probability of the occurrence of a feature, so its value must be between 0 and 1.
In order to achieve this compression and implement the activation function at the capsule level, a non-linear function called ``squashing" ensures that the length of short tensors shrinks to almost zero, while the length of long tensors is compressed to a value close to, but no more than, 1. The non-linear function is expressed as follows:
\begin{equation}
{v_{j}} = \frac{{||s_j||}^2}{1+{||s_j||}^2}\frac{s_j}{||s_j||},
\label{squashing}
\end{equation}
where ${v_j}$ is the output vector of capsule ${j}$ and ${s_j}$, the weighted sum of all the tensors output from the previous layer to capsule ${j}$ of the current layer, is its input vector. The non-linear function can be divided into two parts: the first part $\frac{{||s_j||}^2}{1+{||s_j||}^2}$ is a scalar scaling of the input vector $s_j$, and the second part $\frac{s_j}{||s_j||}$ is the unit vector of $s_j$. The non-linear function not only retains the direction of the input vector but also compresses its length into the interval [0,1]. It can be regarded as compression and reallocation of the length of the vector, and thus also as a way of activating the output vector from the input vector.
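Eqs. (\ref{s_j})-(\ref{squashing}) together form the dynamic routing loop between the two capsule layers. The following NumPy sketch runs a few routing iterations on randomly generated prediction vectors; the capsule counts, vector dimensions, and the number of iterations are illustrative assumptions.

```python
import numpy as np

def squash(s):
    # Eq. (13): compress ||s|| into [0, 1) while keeping the direction of s.
    n2 = np.sum(s * s, axis=-1, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + 1e-9)

def route(g_hat, iters=3):
    # g_hat[i, j]: prediction vector from primary capsule i to digit capsule j.
    I, J, _ = g_hat.shape
    b = np.zeros((I, J))                                       # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # Eq. (11) coupling
        s = np.einsum('ij,ijd->jd', c, g_hat)                  # Eq. (10) weighted sum
        v = squash(s)                                          # Eq. (13) activation
        b = b + np.einsum('ijd,jd->ij', g_hat, v)              # agreement update
    return v

rng = np.random.default_rng(1)
g_hat = rng.standard_normal((2, 4, 8))  # 2 primary capsules -> 4 digit capsules
v = route(g_hat)
```

Every output capsule length $||v_j||$ stays strictly below 1, so it can be read directly as the probability that the corresponding entity is present.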
Having obtained the high-level features output by the digit capsule layer, we use the $softmax$ function for classification. To comprehensively assess the effectiveness of feature extraction in the whole network, we classify the output features of each level and construct a Bayesian model to determine the final classification results from those predictions.
\subsubsection{Relationship Layer}
In the primary capsule layer, all the capsules are independent of each other. The extracted features can not only represent the local features (arms, legs, or head) of gait, but also interpret the relationship between local and global features (the whole body). However, the relationship between capsules has been neglected, so we design a capsule relationship layer to preserve the relationships between capsules, that is, between local features.
To preserve the relative position, direction, and state between local features, we use the transfer matrix between the former capsule and the latter capsule to represent their relationship and send it to the $softmax$ classifier. The specific calculation is as follows:
\begin{equation}
{R_{i}} =g'_i*{(g'_{i+1})}^{-1},~~i<32.
\label{Relationship}
\end{equation}
Here, $R_{i}$ is the relationship matrix of the capsules in the primary capsule layer, and $g'_i$ and $g'_{i+1}$ are adjacent capsules. We use the inverse of ${(g'_{i+1})}$ to find the transfer matrix $R_{i}$.
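A small NumPy sketch of Eq. (\ref{Relationship}), treating each capsule as a square matrix; using the Moore-Penrose pseudo-inverse in place of the plain inverse for numerical safety is an assumption of this sketch, as are the toy capsule sizes.

```python
import numpy as np

def relationship(capsules):
    # Eq. (14): R_i = g'_i (g'_{i+1})^{-1} between adjacent capsule matrices.
    # pinv is used instead of inv so a near-singular capsule does not fail.
    return [g @ np.linalg.pinv(g_next)
            for g, g_next in zip(capsules[:-1], capsules[1:])]

rng = np.random.default_rng(2)
caps = [rng.standard_normal((4, 4)) for _ in range(3)]  # toy capsule matrices
R = relationship(caps)
```

By construction, $R_i \, g'_{i+1} \approx g'_i$, so each $R_i$ encodes the transfer between adjacent local features.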
\subsection{Decision-Making Layer}
According to the outputs of the different feature extraction layers, we add four classifiers to evaluate the features of the four layers and output specific class labels. On top of these classifications, the Bayesian model is built over the outputs of the four classifiers to select more accurate classification results.
\subsubsection{Classification Layers}
Firstly, we embed a softmax multi-classifier on the GRU feature extractor to evaluate whether or not the temporal features are beneficial to classification. Furthermore, to make the spatial and temporal features more distinct and easier to classify, we add two fully-connected layers after the convolutional layer, followed by a softmax function to classify the result, and integrate the feature cubes to obtain high-level information.
\subsubsection{Bayesian Modeling Process}
After classification, we have four label vectors. We merge these four results into a label matrix, regard the four predicted label values as features, and use the Bayesian model to maximize the posterior probability of the class estimate. Assume there are $n$ classes, denoted $c_1, c_2, \ldots, c_n$, and let $X=\{x_1, x_2, \ldots, x_n\}$ represent a sample in a training batch with a certain label $c_i, i\in[1,n]$.
\begin{equation}
{\max P(c_i|X)} = \max \frac{P(X|c_i)P(c_i)}{P(X)}.
\label{bayes}
\end{equation}
Because $P(X)$ is constant for all the classes, maximizing the posterior probability $P(c_i|X)$ is equivalent to maximizing $P(X|c_i)P(c_i)$. It is usually assumed that the attributes are independent of each other, so the class-conditional probabilities $P(x_1|c_i), P(x_2|c_i), \ldots, P(x_n|c_i)$ can be estimated from the training dataset. Accordingly, for a sample $X$ of an unknown category, the probability $P(X|c_i)P(c_i)$ that $X$ belongs to each category $c_i$ can be calculated, and the category with the greatest probability is selected as its class label.
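The fusion step of Eq. (\ref{bayes}) can be sketched as a naive Bayes classifier over the four predicted labels. The Laplace smoothing and the toy label matrix below are assumptions of this sketch, added so that unseen label combinations do not zero out the posterior.

```python
import numpy as np

def fit_nb(X, y, n_classes, n_labels):
    # X[m]: the four classifier-predicted labels for sample m; y[m]: true class.
    prior = np.zeros(n_classes)
    cond = np.ones((n_classes, X.shape[1], n_labels))  # +1 Laplace smoothing
    for xi, yi in zip(X, y):
        prior[yi] += 1
        for f, v in enumerate(xi):
            cond[yi, f, v] += 1
    prior /= prior.sum()                               # P(c_i)
    cond /= cond.sum(axis=2, keepdims=True)            # P(x_f | c_i)
    return prior, cond

def predict_nb(x, prior, cond):
    # Eq. (15): argmax_i P(c_i) prod_f P(x_f | c_i), in log space.
    log_post = np.log(prior) + sum(np.log(cond[:, f, v]) for f, v in enumerate(x))
    return int(np.argmax(log_post))

# Toy label matrix: rows are samples, columns are the four classifier outputs.
X = np.array([[0, 0, 0, 1], [1, 1, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]])
y = np.array([0, 1, 0, 1])
prior, cond = fit_nb(X, y, n_classes=2, n_labels=2)
pred = predict_nb(np.array([0, 0, 0, 0]), prior, cond)
```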
\subsection{Loss Function}
Combining the standard categorical cross-entropy losses of the first three outputs of ASTCapsNet and the margin loss $l_{dc}$ of the last output of the digit capsule layer, the final joint loss function is formed as:
\begin{equation}\small
\begin{aligned}
l_{dc}=T_c \max(0,m^+-||v_c||)^2 + \lambda(1-T_c) \max(0,||v_c||-m^-)^2,\\
\end{aligned}
\label{marginloss}
\end{equation}
where $c$ is the class label of a sample, $T_c$ is the indicator function (1 if class $c$ is present, else 0), $m^+$ is the upper margin on $||v_c||$ for present classes, and $m^-$ is the lower margin on $||v_c||$ for absent classes. The joint loss is then:
\begin{equation}\small
\begin{aligned}
{\mathcal{L}(y,y')} & = l_{tp}+l_{st}+l_{pc}+l_{dc}\\
& = -\sum_j{y_{{tp}_j}'}\log(\frac{e^{{y_{tp}}_j}}{\sum_{i=1}^n e^{{y_{tp}}_i}}) -\sum_j{y_{{st}_j}'}\log(\frac{e^{{y_{st}}_j}}{\sum_{i=1}^n e^{{y_{st}}_i}})\\
&-\sum_j{y_{{pc}_j}'}\log(\frac{e^{{y_{pc}}_j}}{\sum_{i=1}^n e^{{y_{pc}}_i}})+\sum_cl_{dc},
\end{aligned}
\label{loss}
\end{equation}
where $l_{tp}$ and $l_{st}$ represent the classification losses of the temporal and spatio-temporal data, $l_{pc}$ denotes the classification loss in the primary capsule layer, and $l_{dc}$ is the margin loss in the digit capsule layer. $\mathcal{L}(y,y')$ is the joint loss over the four classification results. In the optimization process, we use the Adam optimizer, which considers the first and second moments of the gradients, to reduce the loss quickly. $y_{{tp}_j}',y_{{st}_j}',y_{{pc}_j}'$ denote the $j^{th}$ true labels of a training batch, while $y_{{tp}_j},y_{{st}_j},y_{{pc}_j}$ denote the predicted labels.
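A numpy sketch of this joint loss follows. Note that the margin values $m^+=0.9$, $m^-=0.1$ and the down-weight $\lambda=0.5$ are the defaults of the original capsule network paper and are an assumption here, since the text does not fix them.

```python
import numpy as np

def margin_loss(v_norm, t_onehot, m_pos=0.9, m_neg=0.1, lam=0.5):
    """l_dc: T_c*max(0, m+ - ||v_c||)^2 + lam*(1-T_c)*max(0, ||v_c|| - m-)^2,
    summed over capsules and averaged over the batch."""
    per_capsule = (t_onehot * np.maximum(0.0, m_pos - v_norm) ** 2
                   + lam * (1.0 - t_onehot) * np.maximum(0.0, v_norm - m_neg) ** 2)
    return per_capsule.sum(axis=1).mean()

def cross_entropy(logits, t_onehot):
    """Softmax cross-entropy, used for l_tp, l_st and l_pc."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the exponent
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(t_onehot * log_p).sum(axis=1).mean()

def joint_loss(logits_tp, logits_st, logits_pc, v_norm, t_onehot):
    """L(y, y') = l_tp + l_st + l_pc + l_dc."""
    return (cross_entropy(logits_tp, t_onehot) + cross_entropy(logits_st, t_onehot)
            + cross_entropy(logits_pc, t_onehot) + margin_loss(v_norm, t_onehot))
```

A correctly classified sample with $||v_c||$ above $m^+$ for its class and below $m^-$ elsewhere incurs zero margin loss, so the capsule term only penalizes ambiguous activations.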
\section{Experiment}
Experiments were conducted on five different datasets, and the results were evaluated against several state-of-the-art methods. All the comparison methods are listed below:
\textbf{Classifiers:}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item DT (decision tree) refers to a decision support tool using a tree-like model of decisions and their possible consequences.
\item GBDT (gradient boosting decision tree) is an iterative decision tree algorithm, which is composed of multiple decision trees. The results of all the trees are accumulated to determine the final label.
\item LR (logistic regression) is a generalized linear model and a machine learning method for probability estimation and classification.
\item RF (random forest) is an ensemble classifier composed of multiple decision trees.
\item KNN (k-nearest neighbor) is a classifier that finds the $k$ nearest instances and votes to determine the class of a new instance.
\end{itemize}
\textbf{Deep Models:}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item GRU (gated recurrent unit) is a gating mechanism in recurrent neural networks, which has fewer parameters than LSTM.
\item LSTM (long short-term memory) is a model with expandable nodes that is suitable for temporal data.
\item BiLSTM (bidirectional long short-term memory) is composed of a forward LSTM and a backward LSTM.
\item CapsNet (capsule network) is a deep neural network for dynamic analysis of data structure and spatial information.
\item Attention LSTM is an LSTM model with an attention mechanism.
\item CNN (convolutional neural network) is a network for learning spatial features.
\end{itemize}
Additionally, we include several gait recognition methods from the literature that were evaluated on the datasets used in this paper.
\textbf{Other Studies:}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item TL-LSTM\cite{8397466} is a two-layer LSTM for gait recognition.
\item HCF+SVM\cite{Gianaria2019} is a method using hand-crafted features and SVM for classification.
\item GCP\cite{Hong2017A} is a gait cycle partitioning method.
\item WT+GA\cite{Shi2015A} is the combination of wavelet transform and a genetic algorithm.
\item DT+RF\cite{8713735} is the combination of distance transform and random forests.
\item SD\cite{7532940} (static and dynamic feature extraction method) is a human walking model including the static and dynamic gait features.
\item FLM\cite{8653351} (frame-level matching) is a frame-level matching method for gait recognition.
\item RBF\cite{ZENG2015246} (radial basis function) is a neural network for gait analysis and recognition.
\item DCLSTM\cite{Zhao2018Dual} (dual-channel LSTM) is a temporal LSTM model for multimodal gait recognition.
\item Q-BTDNN\cite{NANCYJANE2016169} (Q-backpropagated time-delay neural network) is presented to identify the gait disturbances in Parkinson's disease.
\item 2D-CNN+LSTM\cite{8781511} is the combination of 2D-CNN and LSTM for gait spatio-temporal information extraction.
\item SFM \cite{10.5555/3305381.3305543} (state-frequency memory) is a recurrent network that allows separating dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of the sequences.
\item CapProNet\cite{NIPS2018_7823} (capsule projection network) can learn an orthogonal projection matrix for each capsule subspace to classify different objects.
\end{itemize}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.6cm,height=3.3cm]{./image/unito_acc.png}
\caption{UNITO}
\label{train_acc:unito}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.6cm,height=3.5cm]{./image/casia_acc.png}
\caption{CASIA-A}
\label{train_acc:casia}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.6cm,height=3.3cm]{./image/sdu_acc.png}
\caption{SDUgait}
\label{train_acc:sdu}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.6cm,height=3.5cm]{./image/ndds_acc.png}
\caption{NDDs}
\label{train_acc:ndds}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.6cm,height=3.5cm]{./image/ga_acc.png}
\caption{PD}
\label{train_acc:ga}
\end{subfigure}
\caption{The training accuracy of the original CapsNet and ASTCapsNet on the five datasets.}
\label{train_acc}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/roc_unito.png}
\caption{UNITO}
\label{roc:unito}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/roc_casia.png}
\caption{CASIA-A}
\label{roc:casia}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/roc_sdu.png}
\caption{SDUgait}
\label{roc:sdu}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/roc_ndds.png}
\caption{NDDs}
\label{roc:ndds}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/roc_ga.png}
\caption{PD}
\label{roc:ga}
\end{subfigure}
\caption{ROC curves of ASTCapsNet on the five datasets. The legend shows the AUC value of each class in the dataset; the UNITO, CASIA-A, and SDUgait datasets have too many classes to list in full, so we only give their average AUC values. Each curve represents one class. ASTCapsNet attains a high recognition rate for each class, and the curves are concentrated in the upper left corner. Besides, the micro-average AUC values are compared, which shows the strong learning ability of ASTCapsNet.}
\label{roc}
\end{figure*}
\subsection{Experimental Setup}
First, we introduce the datasets used in our experiments and then describe the training details.
\subsubsection{Datasets}
We perform experiments on three gait datasets of normal subjects, namely the CASIA Gait dataset \cite{CASIA}, the UNITO dataset \cite{gianaria2014human}, and the SDUgait dataset \cite{SDUgait}, and also verify the model's extensibility on two gait datasets of neurodegenerative patients, the NDDs dataset \cite{physionet-ndd} and the PD dataset \cite{physionet-pd}. These datasets were collected with different sensors, such as cameras, infrared sensors, and force sensors, and the data types include human skeletons, gait contour maps, and so on. We introduce these datasets in detail below.
\textbf{UNITO Dataset:} This includes skeleton data of 20 subjects, acquired with Kinect For Windows SDK 1.x. The subjects were asked to walk naturally along a corridor, towards the camera (FRONT view) and away from the camera (REAR view), for 10 times, with a total of 20 gait sequences for each subject (10 FRONT and 10 REAR). The order of the 20 joints is the same as the Kinect Skeleton Map. For each joint, 3D coordinates of subjects are recorded.
\textbf{CASIA Gait Dataset:} From this collection, we choose Dataset A (former NLPR Gait Dataset), which includes 20 persons. Each person has 12 preprocessed gait contour sequences, 4 for each of three directions, i.e., parallel, 45 degrees, and 90 degrees to the image plane. The length of the sequences varies with the walker's speed, ranging from 37 to 127 frames.
\textbf{SDUgait Dataset:} The dataset includes 52 subjects, 28 males and 24 females with an average age of 22. Each subject has 20 sequences covering at least 6 fixed walking directions and 2 arbitrary directions, for a total of 1040 sequences. The dataset was recorded with the second-generation Kinect (Kinect V2), which not only has a wider viewing angle but also produces higher-resolution depth images and tracks 25 body joints.
\textbf{NDDs Dataset:} The dataset contains the gait signals of 48 patients with NDDs (ALS, HD, and PD) and 16 healthy controls (CO). Participants walked at their usual pace along a 77-m-long hallway for 5 minutes; the recorded signals include the stride, swing, and stance times for each leg and the double-support signals for both legs. An expert physician labeled patients' states from 0 to 13 (0 corresponding to the most severe state and 13 to a healthy one). To measure time intervals, a 12-bit onboard analog-to-digital converter samples the output of foot switches at 300 Hz \cite{Hausdorff1998Gait}.
\textbf{PD Dataset:} In this study, we have utilized gait signals from PhysioNet. The dataset consists of three PD gait sub-datasets, which are contributed by three researchers (Ga \cite{yogev2005dual}, Ju \cite{hausdorff2007rhythmic} and Si \cite{Frenkel2005Treadmill}). We only select the subset of Ga \cite{yogev2005dual} as an example of PD severity classification.
The dataset includes gait information from 93 patients with the idiopathic PD and 73 healthy controls (average age 66.3, 55\% male). Every participant was asked to walk at their usual, self-selected pace for about two minutes while wearing a pair of shoes with 8 force sensors located under each foot. The sensors measure vertical ground reaction force (VGRF, in Newton) as a function of time at 100 samples per second. The dataset also contains the specific situation of each participant including gender, age, height, weight, and severity level of PD. The PD severity level is graded according to two scales (H\&Y, UPDRS).
\subsubsection{Training Details}
We employ different training settings on these datasets. For the UNITO dataset, we use 5*5 and 3*3 convolutional kernels in the convolutional layer and the primary capsule layer, and 7 time steps in the LSTM with a batch size of 128 and an input dimension of 9 (each 63-D skeleton frame, 21 joints with 3D coordinates, is reshaped into 7 steps of 9 dimensions); the dropout in the LSTM is set to 0.5 to avoid over-fitting.
For the CASIA Gait dataset, we use all three directions of the gait contour images and convert each image into text sequences. After resizing the original images to 50 by 50 pixels, the gait data are fed into ASTCapsNet with 9*9 convolutional kernels in the convolutional layer and the primary capsule layer. The time step and the input dimension of the LSTM are both set to 50 to match the image size. The learning rate is 0.001 for the LSTM and the other components.
For the SDUgait dataset, we only use the skeleton data of the gait, which include the 3D positions of 21 joints. Here, the time step of the LSTM is set to 7 and the input is 9-D, i.e., the 63-D skeleton vector (3 coordinates * 21 joints) is reshaped into 7 steps of 9 dimensions. 5*5 and 3*3 convolutional kernels are used in the convolutional layer and the primary capsule layer, respectively.
For the NDDs dataset, the input matrix of a training sample is 12*10, i.e., the input is 12-D and the time step is 10. The size of the convolutional kernels is set to 5*5 based on the scale of the input data. The hidden output of the LSTM is 128-D to represent the temporal features of the gait.
For the PD dataset, the size of a training sample is 19*100, i.e., the input is 19-D and the time step is 100. The size of the convolutional kernels is set to 9*9 based on the scale of the input data. The hidden output of the LSTM is 128-D to represent the temporal features of the gait.
The parameters of the other layers are set according to the different datasets and fine-tuned to achieve good performance. The model is implemented with TensorFlow.
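The per-dataset shapes above all amount to reshaping each flattened sample into a (time steps, input dimension) matrix before it enters the LSTM. A minimal sketch of that preprocessing step follows; the helper name is hypothetical.

```python
import numpy as np

def to_lstm_input(batch, time_steps, input_dim):
    """Reshape flat feature vectors into (batch, time_steps, input_dim)
    sequences, e.g. a 63-D skeleton frame (21 joints x 3D coordinates)
    becomes 7 steps of 9 dimensions; a 50x50 CASIA-A image becomes
    50 steps of 50 dimensions."""
    batch = np.asarray(batch)
    assert batch.shape[1] == time_steps * input_dim, "feature length mismatch"
    # Row-major reshape: consecutive input_dim-sized chunks become time steps.
    return batch.reshape(-1, time_steps, input_dim)
```

Because the reshape is row-major, consecutive coordinates of the flattened vector are grouped into consecutive time steps, matching the 7*9 and 50*50 layouts described above.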
\subsection{Experiments on Normal Gait Datasets}
In this subsection, we evaluate the ASTCapsNet model on normal gait datasets for identity recognition: UNITO, CASIA and SDUgait datasets.
\subsubsection{Experiments on the UNITO Skeleton Dataset}
\begin{figure}[htp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=9cm]{./image/unito_confusion_matrix.png}
\caption{The confusion matrix of classification results on the UNITO Skeleton Dataset.}
\label{unito_confusion_matrix}
\end{figure}
\begin{table}[!htb]
\centering
\caption{Results on the UNITO skeleton dataset.}
\scalebox{0.8}{
\begin{tabular}{lc|lc|lc}
\hline
\hline
Classifiers&Acc&Deep Models&Acc&Others&Acc\\
\hline
DT&87.29\%& GRU&97.95\%&TL-LSTM\cite{8397466}&97.33\%\\
GBDT&90.82\%&LSTM&95.44\%&HCF+SVM\cite{Gianaria2019}&97.00\%\\
LR&67.51\%&BiLSTM&96.45\%&SFM\cite{10.5555/3305381.3305543}&96.97\%\\
RF&96.47\%&CapsNet&97.33\%&CapProNet\cite{NIPS2018_7823}&98.69\%\\
KNN&90.00\%&Attention LSTM&97.92\%&&\\
&&CNN&95.84\%&&\\
\hline
ASTCapsNet&\textbf{\textit{99.65\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_UNITO}
\end{table}
This experiment aims to verify that our proposed model performs well on a three-dimensional skeletal dataset and outperforms the other methods evaluated on it.
Firstly, in this dataset we have 20 subjects for classification, and we use 80\% of the data for training and 20\% for testing. The performance of the proposed model is shown in Fig. \ref{unito_confusion_matrix}. The gait recognition rate of the model is higher than 99.0\% for each of the 20 subjects, and we achieve a final test accuracy of 99.65\%.
Furthermore, we compare our model with several advanced algorithms applied to this dataset in recent years, as demonstrated in Table \ref{tb_UNITO}. The compared methods comprise five traditional classifiers, i.e., decision tree (DT), gradient boosting decision tree (GBDT), logistic regression (LR), random forest (RF), and k-nearest neighbor (KNN), and several temporal models, i.e., gated recurrent unit (GRU), long short-term memory (LSTM), attention LSTM, and bidirectional long short-term memory (BiLSTM). The average result of the temporal models is higher than that of the traditional classifiers, and RF and attention LSTM perform best within their respective groups.
For other studies, Gianaria \textit{et al.} \cite{Gianaria2019} use handcrafted features as input to an SVM classifier and achieve a result of 97\%; this method requires computing the relative distance between joints (FoRD) and the sway of joints (FoS). Li \textit{et al.} \cite{8397466} apply a two-layer LSTM to this dataset and obtain a result of 97.33\%, taking the temporal changes of gait into account. The performance of SFM \cite{10.5555/3305381.3305543} exceeds that of LSTM and BiLSTM, indicating the advantage of using state-frequency components. Although the original CapProNet \cite{NIPS2018_7823} outperforms these methods by applying capsule subspaces, our model achieves the highest accuracy among them.
Additionally, we show the training accuracy of the original CapsNet and ASTCapsNet in Fig. \ref{train_acc:unito}. ASTCapsNet converges faster than the original CapsNet, and its final training accuracy is closer to 100\%. Fig. \ref{roc:unito} shows the receiver operating characteristic (ROC) curves used to evaluate the proposed model. The curves are close to the upper left corner, and the average area under the curve (AUC) approaches 1, indicating that ASTCapsNet achieves satisfactory performance.
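The micro-average AUC quoted in Fig. \ref{roc} pools the one-vs-rest scores of every class into a single binary problem. Below is a numpy-only sketch via the rank (Mann-Whitney) formulation; the function names are our own, and tied scores are not rank-averaged in this simplified version.

```python
import numpy as np

def binary_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the probability that a
    randomly chosen positive sample outscores a randomly chosen
    negative one (no tie handling)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def micro_average_auc(probs, y_true):
    """Flatten the (samples x classes) score matrix and its one-hot
    labels into one pooled binary problem, then score it."""
    onehot = np.eye(probs.shape[1])[y_true]
    return binary_auc(probs.ravel(), onehot.ravel())
```

The micro average weights every (sample, class) decision equally, so classes with many samples dominate, in contrast to a macro average over per-class AUCs.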
\subsubsection{Experiments on the CASIA Gait Dataset}
\begin{table}[!htb]
\centering
\caption{Results on the CASIA gait dataset.}
\scalebox{0.8}{
\begin{tabular}{lc|lc|lc}
\hline
\hline
Classifiers&Acc&Deep Models&Acc&Others&Acc\\
\hline
DT&67.78\%&GRU&96.97\%&GCP\cite{Hong2017A}&92.50\%\\
GBDT&70.65\%&LSTM&96.73\%&WT+GA\cite{Shi2015A}&97.32\%\\
LR&86.49\%&BiLSTM&97.03\%&DT+RF\cite{8713735}&99.60\%\\
RF&73.32\%&CapsNet&93.91\%&SFM\cite{10.5555/3305381.3305543}&97.51\%\\
KNN&70.71\%&Attention LSTM&96.79\%&CapProNet\cite{NIPS2018_7823}&95.49\%\\
&&CNN&90.83\%&&\\
\hline
ASTCapsNet&\textbf{\textit{99.69\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_casia}
\end{table}
This experiment aims to verify that our proposed model performs well on a 2D silhouette image dataset and outperforms the other methods applied to it.
In this dataset, which includes 20 subjects for identification, we use 80\% of the data for training and 20\% for testing. The performance of the proposed model is shown in Fig. \ref{casia_confusion_matrix}. The number of correct samples and the accuracy along the diagonal show that the model can accurately determine identity via gait.
Beyond that, we also compare against results reported over the past five years, illustrated in Table \ref{tb_casia}, where the model presented in this paper again stands out. Because gait data are temporal, temporal models are better suited to learning them and achieve better results. Hong \textit{et al.} \cite{Hong2017A} use a gait cycle partitioning method for gait representation. The combination of distance transform and random forests has also been applied to this task \cite{8713735}. The authors of \cite{Shi2015A} propose wavelet transform with a genetic algorithm for identification; their result is also shown in Table \ref{tb_casia}. In general, the temporal models outperform the others, and the capsule networks, i.e., CapsNet and CapProNet, effectively analyze the structural and spatial information of gait.
\begin{figure}[htp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=9cm]{./image/casia_confusion_matrix.png}
\caption{The confusion matrix of classification results on the CASIA-A Gait Dataset.}
\label{casia_confusion_matrix}
\end{figure}
Fig. \ref{train_acc:casia} shows the training accuracy of the original CapsNet and ASTCapsNet on the CASIA-A dataset. After the $5^{th}$ training iteration, ASTCapsNet achieves higher accuracy than the original CapsNet and quickly converges to nearly 100\%. The average AUC value on the CASIA-A dataset is 98.4\%, and the true positive rate (TPR) approaches 1 when the false positive rate (FPR) reaches 0.3, which verifies the effectiveness of ASTCapsNet (Fig. \ref{roc:casia}).
\subsubsection{Experiments on the SDUGait Dataset}
\begin{figure}[htp]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering
\includegraphics[width=7cm]{./image/sdu_fm.png}
\caption{The confusion matrix of the classification results on the SDUGait Dataset. Since the recognition results of all 52 subjects cannot be shown due to the page size, we show the subjects labeled 1-10; the overall accuracy is 99.13\%.}
\label{sdu_confusion_matrix}
\end{figure}
\begin{table}[!htb]
\centering
\caption{Results on the SDUGait gait dataset.}
\scalebox{0.8}{
\begin{tabular}{lc|lc|lc}
\hline
\hline
Classifiers&Acc&Deep Models&Acc&Others&Acc\\
\hline
DT&80.33\%&GRU&95.30\%&SD\cite{7532940}&94.23\%\\
GBDT&85.02\%&LSTM&94.80\%&FLM\cite{8653351}&98.08\%\\
LR&63.08\%&BiLSTM&96.38\%&SFM\cite{10.5555/3305381.3305543}&95.02\%\\
RF&94.26\%&CapsNet&97.00\%&CapProNet\cite{NIPS2018_7823}&97.64\%\\
KNN&85.28\%&Attention LSTM&96.33\%&&\\
&&CNN&92.85\%&&\\
\hline
ASTCapsNet&\textbf{\textit{99.13\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_sdu}
\end{table}
This experiment aims to verify that our proposed model performs well on another 3D skeletal dataset and outperforms the other methods applied to it.
Firstly, in this dataset, which contains 52 subjects for classification, we use 80\% of the data for training and 20\% for testing. The performance of the proposed model is shown in Fig. \ref{sdu_confusion_matrix}. Because there are too many categories to show the recognition result of each one, only the outline of the color confusion matrix is displayed. We can see that the recognition error rate of the model for the different gaits is very small.
Moreover, we compare this result with some advanced algorithms for this dataset, as shown in Table \ref{tb_sdu}. The average result of the temporal models is much higher than that of the traditional classifiers, and RF and BiLSTM perform best within their groups. Wang \textit{et al.} \cite{7532940} obtain a classification result of 94.23\% on this dataset by using static and dynamic feature extraction, which is view-invariant for gait recognition. Although we only use the 3D coordinates of the skeletal joints in the SDUGait dataset, the proposed model can still learn the gait patterns well. Besides, Choi \textit{et al.} \cite{8653351} propose a robust frame-level matching method for gait recognition that minimizes the influence of noisy patterns while securing frame-level discriminative power, achieving an accuracy of 98.08\%, slightly lower than ASTCapsNet. SFM \cite{10.5555/3305381.3305543} does not show an outstanding advantage here, owing to the sparsity of the gait data and the lack of obvious frequency variation. The result of CapProNet \cite{NIPS2018_7823} is still slightly higher than that of CapsNet.
Furthermore, we also show the training accuracy of the original CapsNet and ASTCapsNet in Fig. \ref{train_acc:sdu}. The training curves on this dataset are similar; from the $20^{th}$ training iteration onwards, ASTCapsNet remains slightly above CapsNet until the end of training, indicating the stability of the model. Fig. \ref{roc:sdu} shows the ROC curves of the proposed ASTCapsNet. Except for one subject with a high detection error rate, the AUC of every other subject exceeds 97\%, reflecting the high performance of the model.
\subsection{Experiments on Abnormal Gait Datasets}
In this subsection, we evaluate the ASTCapsNet model on the abnormal gait datasets for disease classification and diagnosis: NDDs and PD datasets.
\subsubsection{Experiments on the NDDs Dataset}
\begin{figure}[htp]
\centering
\includegraphics[width=6cm]{./image/ndds_confusion_matrix.png}
\caption{The confusion matrix of classification results on the NDDs Dataset.}
\label{ndds_confusion_matrix}
\end{figure}
\begin{table}[!htb]
\centering
\caption{Results on the NDDs dataset.}
\scalebox{0.8}{
\begin{tabular}{lc|lc|lc}
\hline
\hline
Classifiers&Acc&Deep Models&Acc&Others&Acc\\
\hline
DT&49.34\%&GRU&92.76\%&RBF\cite{ZENG2015246}&93.75\%\\
GBDT&67.11\%&LSTM&92.11\%&DCLSTM\cite{Zhao2018Dual}&95.67\%\\
LR&51.32\%&BiLSTM&93.82\%&SFM\cite{10.5555/3305381.3305543}&93.59\%\\
RF&65.13\%&CapsNet&88.86\%&CapProNet\cite{NIPS2018_7823}&92.11\%\\
KNN&73.03\%&Attention LSTM&93.41\%&&\\
&&CNN&85.58\%&&\\
\hline
ASTCapsNet&\textbf{\textit{95.78\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_ndds}
\end{table}
On this dataset, our model is used to distinguish 3 diseases and 1 healthy control group (ALS, HD, PD, and CO). The performance is shown in Fig. \ref{ndds_confusion_matrix}. CO and ALS are recognized with 100\% accuracy, while two samples of PD are misclassified as ALS.
Moreover, several comparative experiments were implemented on this dataset, as illustrated in Table \ref{tb_ndds}. The five traditional classifiers cannot provide a good reference because of the small amount of data, whereas the time-series models capture the dynamic changes of the gait data and exceed 90\%. The accuracy of CapsNet on this dataset is under 90\%, showing slightly weaker classification ability than CapProNet \cite{NIPS2018_7823}. The dual-channel LSTM, which uses data from two different sensors, is second only to ASTCapsNet.
Apart from this observation, we also show the training accuracy of the original CapsNet and ASTCapsNet in Fig. \ref{train_acc:ndds}. The proposed model is approximately 10\% higher than the original CapsNet because the temporal LSTM and the Bayesian model compensate for the feature loss incurred by CapsNet; moreover, the small amount of data does not allow the original CapsNet to learn well. As shown in Fig. \ref{roc:ndds}, the ROC curve of CO tends to 1, while the curves of ALS, PD, and HD deviate slightly, indicating that the model is more sensitive to the classification of CO due to the large difference between healthy people and patients.
\subsubsection{Experiments on the PD Dataset}
\begin{figure}[htp]
\centering
\includegraphics[width=6.8cm]{./image/ga_confusion_matrix.png}
\caption{The confusion matrix of classification results on the PD Dataset.}
\label{ga_confusion_matrix}
\end{figure}
\begin{table}[!htb]
\centering
\caption{Results on the PD dataset.}
\scalebox{0.8}{
\begin{tabular}{lc|lc|lc}
\hline
\hline
Classifiers&Acc&Deep Models&Acc&Others&Acc\\
\hline
DT&77.90\%&GRU&94.89\%&Q-BTDNN\cite{NANCYJANE2016169}&93.10\%\\
GBDT&84.48\%&LSTM&93.95\%&2D-CNN+LSTM\cite{8781511}&96.00\%\\
LR&57.23\%&BiLSTM&94.78\%&SFM\cite{10.5555/3305381.3305543}&93.63\%\\
RF&91.84\%&CapsNet&95.30\%&CapProNet\cite{NIPS2018_7823}&96.58\%\\
KNN&92.02\%&Attention LSTM&93.53\%&&\\
&&CNN&89.76\%&&\\
\hline
ASTCapsNet&\textbf{\textit{97.31\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_pd}
\end{table}
In the PD dataset, we only use the Ga \cite{yogev2005dual} subset as an example to classify the severity of Parkinson's disease based on gait performance. There are three severity categories according to the H\&Y scale (2 = bilateral involvement without impairment of balance; 2.5 = mild bilateral disease with recovery on the pull test; 3 = mild to moderate bilateral disease, some postural instability, physically independent), and these three levels together with the healthy control group form a four-class problem.
After training ASTCapsNet, the performance is shown in Fig. \ref{ga_confusion_matrix}. The model achieves 100\% accuracy for healthy people and 82\% for patients with severity 3. We observe that the model does not distinctly extract patient features for severity 3, which leads to misclassification.
Compared with other experiments on this dataset, our proposed model achieves higher classification accuracy; the details are illustrated in Table \ref{tb_pd}. Among the traditional classifiers, RF and KNN perform better, and GRU shows an advantage among the temporal models. Q-BTDNN (Q-backpropagated time-delay neural network) is designed for severity classification and can diagnose gait disturbances in Parkinson's disease. Another study \cite{8781511} uses both a 2D-CNN and an LSTM to obtain a promising result of 96.00\%, while CapProNet \cite{NIPS2018_7823} performs better. ASTCapsNet still outperforms both the original CapsNet and CapProNet \cite{NIPS2018_7823}.
Additionally, we also show the training accuracy of the original CapsNet and ASTCapsNet in Fig. \ref{train_acc:ga}. After the $125^{th}$ training iteration, ASTCapsNet achieves slightly higher training accuracy than CapsNet and maintains this lead until the end of training. The AUC value of the four-class classification for Parkinson's disease is 97.5\%, a high level of recognition (Fig. \ref{roc:ga}). The AUC value of severity 3 is low, which means that the boundary between severity 3 and the other severity degrees is not clear and the differences are minor.
\subsection{Performance of the Whole Model}
In this section, we evaluate and compare the performance of the whole model and its internal components to verify the model's ability to extract and classify features.
\subsubsection{Performance of Split Components}
First, we provide the recognition results of each module in Table \ref{tb_components}. The performance of six individual components is demonstrated on the five datasets. As shown in Fig. \ref{capsnet_process}, the low-level feature extractor includes the memory and convolution modules for spatio-temporal feature extraction, while the high-level feature extractor includes the capsule and capsule relationship layers for relationship and structure feature extraction. ``No relationship layer'' is the output of ASTCapsNet after removing the relationship layer, and ``no memory module'' is the output after removing the memory module. ``Bayesian model (raw data)'' is the original Bayesian model applied directly to the raw data, and ``Bayesian model (last layer)'' is the total output of ASTCapsNet.
It can be seen that the average output of the high-level feature extractor is better than that of the low-level feature extractor on all five datasets, which reflects the superiority of the module integration scheme. The performance of ``no relationship layer'' and ``no memory module'' is similar, depending on how well the model fits the data. However, the raw data do not conform to a Gaussian distribution, leading to poor results when the Bayesian model is applied directly, whereas the output of the softmax layer does conform to a Gaussian distribution, so placing the Bayesian model in the last layer retains higher accuracy. Briefly, among all the individual modules, the high-level feature extractor contributes the most, followed by the memory module and then the relationship layer.
\begin{table*}[!htb]
\centering
\caption{Performance of individual components in ASTCapsNet.}
\scalebox{1}{
\begin{tabular}{lccccccc}
\hline
\hline
Dataset&Low-Level&High-Level&No Relationship&No Memory&Bayesian Model&Bayesian Model\\
&Feature Extractor&Feature Extractor&Layer&Module&(Raw Data)&(Last Layer)\\
\hline
\hline
UNITO&95.87\%&99.56\%&97.41\%&97.10\%&77.54\%&\textbf{\textit{99.65\%}}\\
CASIA-A&97.32\%&99.23\%&97.62\%&95.73\%&73.29\%&\textbf{\textit{99.69\%}}\\
SDUgait&95.33\%&98.56\%&96.50\%&97.94\%&78.56\%&\textbf{\textit{99.13\%}}\\
NDDs&93.64\%&95.33\%&94.14\%&90.39\%&60.85\%&\textbf{\textit{95.78\%}}\\
PD&94.37\%&96.38\%&95.72\%&95.87\%&69.78\%&\textbf{\textit{97.31\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_components}
\end{table*}
\subsubsection{Separability of the Extracted Features}
We discuss the separability of the features extracted by ASTCapsNet, taking the outputs of $softmax~2$ and $softmax~4$ in the low- and high-level feature extractors as examples. All the features are processed by t-SNE (t-distributed stochastic neighbor embedding) for dimensionality reduction. As shown in Fig. \ref{tsne}, the first row shows the low-level features of the five datasets after dimensionality reduction, and the second row shows the corresponding high-level features. We can clearly observe that the high-level features are more discriminative than the low-level features and represent the critical information of each class. The overlap among the low-level features is slightly larger than among the high-level features. After dimensionality reduction, the high-level features have clear boundaries between classes and a high degree of aggregation within each class, demonstrating the effectiveness of our proposed method.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/low_unito.png}
\caption{Low-Level Features \\(UNITO)}
\label{tsne:UNITO1}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/low_casia.png}
\caption{Low-Level Features \\(CASIA-A)}
\label{tsne:CASIA-A1}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/low_sdugait.png}
\caption{Low-Level Features \\(SDUgait)}
\label{tsne:SDUgait1}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/low_ndds.png}
\caption{Low-Level Features \\(NDDs)}
\label{tsne:NDDs1}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/low_pd.png}
\caption{Low-Level Features \\(PD)}
\label{tsne:PD1}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/high_unito.png}
\caption{High-Level Features \\(UNITO)}
\label{tsne:UNITO2}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/high_casia.png}
\caption{High-Level Features \\(CASIA-A)}
\label{tsne:CASIA2}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/high_sdugait.png}
\caption{High-Level Features \\(SDUgait)}
\label{tsne:SDUgait2}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/high_ndds.png}
\caption{High-Level Features \\(NDDs)}
\label{tsne:NDDs2}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[width=3.5cm,height=3.5cm]{./image/high_pd.png}
\caption{High-Level Features \\(PD)}
\label{tsne:PD2}
\end{subfigure}
\caption{Dimensionality reduction comparison between the extracted features and original data using t-SNE.}
\label{tsne}
\end{figure*}
\section{Discussion}
\begin{table*}[!htb]
\centering
\caption{Performance of different layers in ASTCapsNet.}
\scalebox{1}{
\begin{tabular}{lccccc}
\hline
\hline
Dataset&Softmax 1&Softmax 2&Softmax 3&Softmax 4&Total\\
\hline
UNITO&95.99\%&95.87\%&99.12\%&99.56\%&\textbf{\textit{99.65\%}}\\
CASIA Gait&96.65\%&97.32\%&98.22\%&99.23\%&\textbf{\textit{99.69\%}}\\
SDUgait&95.30\%&95.33\%&98.77\%&98.56\%&\textbf{\textit{99.13\%}}\\
NDDs&93.86\%&93.64\%&94.98\%&95.33\%&\textbf{\textit{95.78\%}}\\
PD&94.02\%&94.37\%&96.43\%&96.38\%&\textbf{\textit{97.31\%}}\\
\hline
\hline
\end{tabular}}
\label{tb_split}
\end{table*}
As shown in the experiments, ASTCapsNet can effectively learn the gait patterns of both healthy people and NDD patients. Across the five gait datasets, the highest accuracies of our model are 99.65\%, 99.69\%, 99.13\%, 95.78\%, and 97.31\%, respectively, superior to the other gait recognition methods on these datasets. To evaluate the contribution of each component in ASTCapsNet, we compare the classification results of each softmax layer with the output of the Bayesian model in Table \ref{tb_split}. The performance of softmax 4 surpasses the other layers because it operates on the rich features computed by the preceding layers. Softmax 3 also performs well, reflecting the validity of the relationship layer. Furthermore, the model parameters affect classification performance; the optimal settings are reported in Section IV.
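The paper's Bayesian model is described earlier; purely as an illustration of how per-layer softmax outputs can be combined into one decision, a naive-Bayes-style product fusion can be sketched as follows (all probability values below are made up):

```python
import numpy as np

# Hypothetical per-layer softmax outputs for 4 samples and 3 classes.
softmax_a = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.3, 0.3, 0.4],
                      [0.6, 0.3, 0.1]])
softmax_b = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.5, 0.4, 0.1]])

def fuse(*layer_probs):
    """Naive-Bayes-style fusion: multiply per-class probabilities
    from each layer, then renormalise each row."""
    joint = np.ones_like(layer_probs[0])
    for p in layer_probs:
        joint *= p
    return joint / joint.sum(axis=1, keepdims=True)

fused = fuse(softmax_a, softmax_b)
pred = fused.argmax(axis=1)
print(pred.tolist())  # [0, 1, 2, 0]
```

A layer that disagrees confidently pulls the fused posterior away from the majority, which is why weighing layers by their reliability (as a learned Bayesian model can) matters in practice.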
In the field of gait recognition, there is still a lack of public datasets that offer both large view variations and diverse data. This makes it difficult to train a unified and reliable model for practical scenarios in medical diagnosis. For example, if the data of one modality is missing during training, the final classification system may not be able to handle that modality effectively at test time. Moreover, labeled gait sequences are expensive to obtain. In this section, we summarize several possible ways to improve the practicality and effectiveness of our proposed model as follows.
\textbf{A. Cross-view gait recognition.} ASTCapsNet recognizes gait from different sources separately, without considering the relationship between multi-modal gait data. \cite{8374898} and \cite{8466612} have studied cross-view and multi-modal gait recognition. We will use canonical correlation analysis (CCA), kernel CCA (KCCA), and other data fusion methods to analyze the correlation between different sensors or views.
\textbf{B. Unsupervised or Semi-Supervised Learning.} The model discussed in this paper comprises only an encoder, so it can neither generate new samples nor exploit unlabeled gait data. How to utilize unlabeled samples for unsupervised or semi-supervised learning is still an open problem in this study. \cite{8374898} reports multi-task generative adversarial networks (MGANs) for learning view-specific feature representations, which suggests a possible way to utilize unlabeled data in our model. We will explore enhancing our method with unlabeled information in future work.
\section{Conclusion}
In this paper, we have proposed an end-to-end learning model, ASTCapsNet, for gait recognition on multimodal gait sequences. A convolutional layer and a novel recurrent memory unit were introduced to model the dynamics and dependencies in the spatio-temporal dimension for low-level feature extraction. Furthermore, a relationship layer, computed between different capsules, was designed for high-level feature extraction. Finally, a Bayesian model was proposed for ASTCapsNet, with which our network assesses the quality of the features extracted from each layer using softmax. Our proposed method yields superior performance on the benchmark datasets for the gait recognition problem. This network will be extended to multi-view gait detection and recognition for intelligently streaming scalable video sequences, which requires analyzing gait changes in video sequences and predicting the class of each person in real time.
\footnotesize
\bibliographystyle{IEEEtran}
\section{Algebra for $\ell=1$}
Combining elements of the recurrence for $\ell=1$ when $n$ is even, we have
\begin{align*}
F_{n}(x,a) & =(p+q\,\delta_{x,0})\,F_{n-1}(x,a)+q(1-\delta_{x,a})F_{n-1}(x+1,a)\\
& =(p+q\,\delta_{x,0})\left[ p\,\delta_{x,a}\,F_{n-2}(x-1,a-1)+p\,F_{n-2}(x-1,a)+q\,F_{n-2}(x,a)\right] +\\
& q(1-\delta_{x,a})\left[ p\,\delta_{x+1,a}\,F_{n-2}(x,a-1)+p\,F_{n-2}(x,a)+q\,F_{n-2}(x+1,a)\right] \\
& =p\,\delta_{x,a}(p+q\,\delta_{x,0})F_{n-2}(x-1,a-1)+p(p+q\,\delta_{x,0})F_{n-2}(x-1,a)+\\
& p\,q\,\delta_{x+1,a}(1-\delta_{x,a})F_{n-2}(x,a-1)+\\
& q\left[ (p+q\,\delta_{x,0})+p(1-\delta_{x,a})\right] F_{n-2}(x,a)+q^{2}(1-\delta_{x,a})F_{n-2}(x+1,a)
\end{align*}
for all $n\geq1$, $0\leq x\leq a$ and $a\geq1$. \ Our goal is to solve for
$G_{n}(x,a)=F_{2n}(x,a)$. \ Introduce the generating function
\[
G(\lambda,x,a)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}G_{n}(x,a)=\lambda\,G_{1}(x,a)+
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n+1}G_{n+1}(x,a)
\]
from which
\begin{align*}
G(\lambda,x,a) & =\lambda\,G_{1}(x,a)+\lambda\,p\,\delta_{x,a}(p+q\,\delta
_{x,0})G(\lambda,x-1,a-1)+\lambda\,p(p+q\,\delta_{x,0})G(\lambda,x-1,a)+\\
& \lambda\,p\,q\,\delta_{x+1,a}(1-\delta_{x,a})G(\lambda,x,a-1)+\lambda
\,q\left[ (p+q\,\delta_{x,0})+p(1-\delta_{x,a})\right] G(\lambda,x,a)+\\
& \lambda\,q^{2}(1-\delta_{x,a})G(\lambda,x+1,a)
\end{align*}
follows. \ Introduce the double generating function
\[
\tilde{G}(\lambda,\mu,a)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
{\displaystyle\sum\limits_{x=0}^{a}}
\lambda^{n}\mu^{x}G_{n}(x,a)=
{\displaystyle\sum\limits_{x=0}^{a}}
G(\lambda,x,a)\mu^{x}
\]
and note that because
\[
G_{1}(x,a)=p^{2}\delta_{x,1}\delta_{a,1}+p\,q\,\delta_{x,0}\delta
_{a,1}+q\,\delta_{x,0}\delta_{a,0}
\]
we have
\[
\lambda
{\displaystyle\sum\limits_{x=0}^{a}}
G_{1}(x,a)\mu^{x}=\lambda\,q\,\delta_{a,0}+\lambda\,p\,q\,\delta_{a,1}
+\lambda\,p^{2}\mu\,\delta_{a,1}.
\]
Also,
\[
\lambda\,p^{2}
{\displaystyle\sum\limits_{x=0}^{a}}
\delta_{x,a}G(\lambda,x-1,a-1)\mu^{x}+\lambda\,p\,q
{\displaystyle\sum\limits_{x=0}^{a}}
\delta_{x,0}\delta_{x,a}G(\lambda,x-1,a-1)\mu^{x}=\lambda\,p^{2}
G(\lambda,a-1,a-1)\mu^{a}+0,
\]
\[
\lambda\,p^{2}
{\displaystyle\sum\limits_{x=0}^{a}}
G(\lambda,x-1,a)\mu^{x}+\lambda\,p\,q
{\displaystyle\sum\limits_{x=0}^{a}}
\delta_{x,0}G(\lambda,x-1,a)\mu^{x}=\lambda\,p^{2}\mu\left( \tilde{G}
(\lambda,\mu,a)-G(\lambda,a,a)\mu^{a}\right) +0,
\]
\[
\lambda\,p\,q
{\displaystyle\sum\limits_{x=0}^{a}}
\delta_{x+1,a}(1-\delta_{x,a})G(\lambda,x,a-1)\mu^{x}=\lambda\,p\,q\,G(\lambda
,a-1,a-1)\mu^{a-1},
\]
\[
\lambda\,p\,q
{\displaystyle\sum\limits_{x=0}^{a}}
G(\lambda,x,a)\mu^{x}+\lambda\,q^{2}
{\displaystyle\sum\limits_{x=0}^{a}}
\delta_{x,0}G(\lambda,x,a)\mu^{x}=\lambda\,p\,q\,\tilde{G}(\lambda
,\mu,a)+\lambda\,q^{2}G(\lambda,0,a)\mu^{0},
\]
\[
\lambda\,p\,q
{\displaystyle\sum\limits_{x=0}^{a}}
(1-\delta_{x,a})G(\lambda,x,a)\mu^{x}=\lambda\,p\,q\left( \tilde{G}
(\lambda,\mu,a)-G(\lambda,a,a)\mu^{a}\right) ,
\]
\[
\lambda\,q^{2}
{\displaystyle\sum\limits_{x=0}^{a}}
(1-\delta_{x,a})G(\lambda,x+1,a)\mu^{x}=\frac{\lambda\,q^{2}}{\mu}\left(
\tilde{G}(\lambda,\mu,a)-G(\lambda,0,a)\mu^{0}\right) .
\]
We obtain
\begin{align*}
\tilde{G} & =\left( \lambda\,q\,\delta_{a,0}+\lambda\,p\,q\,\delta
_{a,1}+\lambda\,p^{2}\mu\,\delta_{a,1}\right) +\lambda\,p^{2}G(\lambda
,a-1,a-1)\mu^{a}\\
& +\lambda\,p^{2}\mu\left( \tilde{G}(\lambda,\mu,a)-G(\lambda,a,a)\mu
^{a}\right) +\lambda\,p\,q\,G(\lambda,a-1,a-1)\mu^{a-1}\\
& +\lambda\,p\,q\,\tilde{G}(\lambda,\mu,a)+\lambda\,q^{2}G(\lambda
,0,a)+\lambda\,p\,q\left( \tilde{G}(\lambda,\mu,a)-G(\lambda,a,a)\mu
^{a}\right) \\
& +\frac{\lambda\,q^{2}}{\mu}\left( \tilde{G}(\lambda,\mu,a)-G(\lambda
,0,a)\right)
\end{align*}
that is,
\begin{align*}
\left( 1-\lambda\,p^{2}\mu-2\lambda\,p\,q-\frac{\lambda\,q^{2}}{\mu}\right)
\tilde{G} & =\left( \lambda\,q\,\delta_{a,0}+\lambda\,p\,q\,\delta
_{a,1}+\lambda\,p^{2}\mu\,\delta_{a,1}\right) \\
& +\left( \lambda\,q^{2}-\frac{\lambda\,q^{2}}{\mu}\right) G(\lambda,0,a)\\
& -\left( \lambda\,p^{2}\mu^{a+1}+\lambda\,p\,q\,\mu^{a}\right)
G(\lambda,a,a)\\
& +\left( \lambda\,p^{2}\mu^{a}+\lambda\,p\,q\,\mu^{a-1}\right)
G(\lambda,a-1,a-1)
\end{align*}
that is
\begin{align}
\left( p^{2}\mu^{2}+2\,p\,q\,\mu-\frac{\mu}{\lambda}+q^{2}\right) \tilde{G}
& =-q\,\mu\,\delta_{a,0}-p\,q\,\mu\,\delta_{a,1}-p^{2}\mu^{2}\delta_{a,1}\\
& -q^{2}\left( \mu-1\right) G(\lambda,0,a)\nonumber\\
& +p\left( p\,\mu^{a+2}+q\,\mu^{a+1}\right) G(\lambda,a,a)\nonumber\\
& -p\left( \,p\,\mu^{a+1}+q\,\mu^{a}\right) G(\lambda,a-1,a-1)\nonumber
\end{align}
after multiplying both sides by $-\mu/\lambda$. \ Examine the special case
$a=1$
\begin{align*}
p^{2}\left( \mu-\frac{\theta}{2p^{2}}\right) \left( \mu-\frac{2q^{2}
}{\theta}\right) \tilde{G} & =-p\,q\,\mu-p^{2}\mu^{2}-q^{2}\left(
\mu-1\right) G(\lambda,0,1)\\
& +p\left( p\,\mu^{3}+q\,\mu^{2}\right) G(\lambda,1,1)-p\left( \,p\,\mu
^{2}+q\,\mu\right) G(\lambda,0,0)
\end{align*}
where
\[
\begin{array}
[c]{ccccc}
\dfrac{\theta}{2p^{2}}=\dfrac{1-2\,p\,q\,\lambda-\sqrt{1-4\,p\,q\,\lambda}
}{2p^{2}\lambda}, & & \text{equivalently,} & & \dfrac{2q^{2}}{\theta}
=\dfrac{1-2\,p\,q\,\lambda+\sqrt{1-4\,p\,q\,\lambda}}{2p^{2}\lambda}.
\end{array}
\]
For future reference, the sum of the two zeroes is $1/(p^{2}\lambda)-2q/p$,
which implies that
\[
\lambda=\frac{1}{p^{2}}\frac{1}{\frac{2q}{p}+\tfrac{\theta}{2p^{2}}
+\tfrac{2q^{2}}{\theta}}=\frac{2\,\theta}{\left( \theta+2\,p\,q\right) ^{2}
},
\]
\[
\frac{p\,\lambda}{1-q\,\lambda}=\frac{\frac{2\,p\,\theta}{\left(
\theta+2\,p\,q\right) ^{2}}}{1-\frac{2\,q\,\theta}{\left( \theta
+2\,p\,q\right) ^{2}}}=\frac{2\,p\,\theta}{\theta^{2}-2(1-2\,p)q\,\theta
+4p^{2}q^{2}}.
\]
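These identities are easy to check numerically. Below is a small sanity check in Python (not part of the original derivation; the values $p=0.4$, $q=0.6$, $\lambda=0.5$ are arbitrary test inputs), confirming both closed forms to machine precision.

```python
import math

# Arbitrary test values with q = 1 - p and 4*p*q*lam < 1.
p, q, lam = 0.4, 0.6, 0.5

# theta from its definition: theta/(2p^2) = (1 - 2pq*lam - sqrt(1 - 4pq*lam))/(2p^2*lam).
theta = (1 - 2*p*q*lam - math.sqrt(1 - 4*p*q*lam)) / lam

# Identity 1: lam = 2*theta / (theta + 2pq)^2.
lam_back = 2 * theta / (theta + 2*p*q)**2

# Identity 2: p*lam/(1 - q*lam) = 2p*theta / (theta^2 - 2(1-2p)q*theta + 4p^2 q^2).
lhs = p * lam / (1 - q * lam)
rhs = 2*p*theta / (theta**2 - 2*(1 - 2*p)*q*theta + 4*p**2*q**2)

print(lam_back, lhs - rhs)
```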
Taking $\mu=\theta/(2p^{2})$ and then $\mu=2q^{2}/\theta$, we have
\[
\left\{
\begin{array}
[c]{l}
0=-\dfrac{q\,\theta}{2p}-\dfrac{\theta^{2}}{4p^{2}}-q^{2}\left( \dfrac
{\theta}{2p^{2}}-1\right) G(\lambda,0,1)+p\left( \dfrac{\theta^{3}}{8p^{5}
}+\dfrac{q\,\theta^{2}}{4p^{4}}\right) G(\lambda,1,1)\\
\;\;\;\;\;\;\,-p\left( \dfrac{\theta^{2}}{4p^{3}}+\dfrac{q\,\theta}{2p^{2}
}\right) G(\lambda,0,0)\\
0=-\dfrac{2\,p\,q^{3}}{\theta}-\dfrac{4p^{2}q^{4}}{\theta^{2}}-q^{2}\left(
\dfrac{2q^{2}}{\theta}-1\right) G(\lambda,0,1)+p\left( \dfrac{8\,p\,q^{6}
}{\theta^{3}}+\dfrac{4q^{5}}{\theta^{2}}\right) G(\lambda,1,1)\\
\;\;\;\;\;\;\,-p\left( \dfrac{4\,p\,q^{4}}{\theta^{2}}+\dfrac{2q^{3}}{\theta
}\right) G(\lambda,0,0)
\end{array}
\right.
\]
and
\[
G(\lambda,0,0)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}G_{n}(0,0)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}q^{n}=\frac{q\,\lambda\,}{1-q\,\lambda}=\frac{2\,q\,\theta
}{4\,p\,q(\theta+p\,q)+\theta(\theta-2q)};
\]
thus, on eliminating $G(\lambda,0,1)$,
\[
G(\lambda,1,1)=\frac{1}{1-q\,\lambda}\,\frac{2p^{2}\left[ \theta
^{2}-2(1-2\,p)q\,\theta+4p^{2}q^{2}\right] \left( \theta-2\,p\,q\right)
\theta}{2^{4}p^{3}q^{5}\left( \theta-2p^{2}\right) +\theta^{4}\left(
\theta-2q^{2}\right) }.
\]
Now examine the general case $a>1$
\[
\left\{
\begin{array}
[c]{l}
0=-q^{2}\left( \dfrac{\theta}{2p^{2}}-1\right) G(\lambda,0,a)+p\left(
\dfrac{\theta^{a+2}}{2^{a+2}p^{2a+3}}+\dfrac{q\,\theta^{a+1}}{2^{a+1}p^{2a+2}
}\right) G(\lambda,a,a)\\
\;\;\;\;\;\;\;-p\left( \dfrac{\theta^{a+1}}{2^{a+1}p^{2a+1}}+\dfrac
{q\,\theta^{a}}{2^{a}p^{2a}}\right) G(\lambda,a-1,a-1)\\
0=-q^{2}\left( \dfrac{2q^{2}}{\theta}-1\right) G(\lambda,0,a)+p\left(
\dfrac{2^{a+2}p\,q^{2a+4}}{\theta^{a+2}}+\dfrac{2^{a+1}q^{2a+3}}{\theta^{a+1}
}\right) G(\lambda,a,a)\\
\;\;\;\;\;\;\;-p\left( \dfrac{2^{a+1}p\,q^{2a+2}}{\theta^{a+1}}+\dfrac
{2^{a}q^{2a+1}}{\theta^{a}}\right) G(\lambda,a-1,a-1)
\end{array}
\right.
\]
and, on eliminating $G(\lambda,0,a)$,
\begin{align*}
G(\lambda,a,a) & =2p^{2}\frac{2^{2a}p^{2a-1}q^{2a+1}\left( \theta
-2p^{2}\right) +\theta^{2a}\left( \theta-2q^{2}\right) }{2^{2a+2}
p^{2a+1}q^{2a+3}\left( \theta-2p^{2}\right) +\theta^{2a+2}\left(
\theta-2q^{2}\right) }\theta\,G(\lambda,a-1,a-1)\\
& =\frac{1}{1-q\,\lambda}\,\frac{2^{a}p^{2a}\left[ \theta^{2}
-2(1-2\,p)q\,\theta+4p^{2}q^{2}\right] \left( \theta-2\,p\,q\right)
\theta^{a}}{2^{2a+2}p^{2a+1}q^{2a+3}\left( \theta-2p^{2}\right)
+\theta^{2a+2}\left( \theta-2q^{2}\right) }
\end{align*}
after iteration. \ Finally, given $a>1$ and taking the limit in formula (1) as
$\mu\rightarrow1$, we have
\begin{align*}
-\frac{1-\lambda}{\lambda}\tilde{G} & =\left( 1-\frac{1}{\lambda}\right)
\tilde{G}=\left( (p+q)^{2}-\frac{1}{\lambda}\right) \tilde{G}=\left(
p^{2}+2\,p\,q-\frac{1}{\lambda}+q^{2}\right) \tilde{G}=\\
& =p(p+q)G(\lambda,a,a)-p(p+q)G(\lambda,a-1,a-1)=p\,G(\lambda
,a,a)-p\,G(\lambda,a-1,a-1)
\end{align*}
therefore
\[
\tilde{G}=\frac{p\,\lambda}{1-\lambda}\left[ G(\lambda,a-1,a-1)-G(\lambda
,a,a)\right] ,
\]
as was to be shown. \ The case $a=1$ must be treated separately:
\begin{align*}
\tilde{G} & =G(\lambda,0,1)+G(\lambda,1,1)\\
& =\frac{2p^{2}}{2p^{2}-\theta}\frac{1}{q^{2}}\left[ \dfrac{q\,\theta}
{2p}+\dfrac{\theta^{2}}{4p^{2}}-p\left( \dfrac{\theta^{3}}{8p^{5}}
+\dfrac{q\,\theta^{2}}{4p^{4}}\right) G(\lambda,1,1)+p\left( \dfrac
{\theta^{2}}{4p^{3}}+\dfrac{q\,\theta}{2p^{2}}\right) G(\lambda,0,0)\right]
+G(\lambda,1,1)\\
& =\frac{p\,\lambda\left( 1-p\,q\,\lambda\right) }{(1-q\,\lambda)\left[
p\,q^{2}\lambda^{2}-(1+2p)q\,\lambda+1\right] }
\end{align*}
consistent with the series expansion in the introduction.
\section{Calculus for $\ell=1$}
Setting $p=q=1/2$, we obtain
\begin{align*}
\tilde{G}(\lambda,1,a) & =\frac{1}{1-\lambda}\left[ \frac{\left( \frac
{1}{2}\right) ^{a-1}\theta^{a}}{\left( \frac{1}{2}\right) ^{2a}+\theta
^{2a}}-\frac{\left( \frac{1}{2}\right) ^{a}\theta^{a+1}}{\left( \frac{1}
{2}\right) ^{2a+2}+\theta^{2a+2}}\right] \\
& =\frac{2}{1-\lambda}\left[ \frac{(2\theta)^{a}}{1+(2\theta)^{2a}}
-\frac{(2\theta)^{a+1}}{1+(2\theta)^{2a+2}}\right] \\
& =\frac{2}{1-\lambda}\left[ \frac{1}{(2\theta)^{a}+(2\theta)^{-a}}-\frac
{1}{(2\theta)^{a+1}+(2\theta)^{-a-1}}\right]
\end{align*}
and thus have the double generating function
\begin{align*}
\Psi(\lambda,\nu) & =
{\displaystyle\sum\limits_{n=1}^{\infty}}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\lambda^{n}\nu^{a}\mathbb{P}\left\{ M_{2n}=a\right\} =
{\displaystyle\sum\limits_{a=1}^{\infty}}
\nu^{a}\tilde{G}(\lambda,1,a)\\
& =\frac{1}{1-\lambda}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\nu^{a}\left[ \frac{2}{(2\theta)^{a}+(2\theta)^{-a}}-\frac{2}{(2\theta
)^{a+1}+(2\theta)^{-a-1}}\right] \\
& =\frac{1}{1-\lambda}\left[ \nu^{0}\frac{2}{(2\theta)^{1}+(2\theta)^{-1}}+
{\displaystyle\sum\limits_{a=1}^{\infty}}
\left( \nu^{a}-\nu^{a-1}\right) \frac{2}{(2\theta)^{a}+(2\theta)^{-a}
}\right] \\
& =\frac{\lambda}{(1-\lambda)(2-\lambda)}-\frac{1-\nu}{1-\lambda}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\nu^{a-1}\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}.
\end{align*}
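The summation-by-parts step above can be checked numerically. The following sketch (plain Python, for $p=q=1/2$; the test values $\lambda=0.5$, $\nu=0.7$ and the truncation at 200 terms are ad hoc) evaluates both sides of the last equality.

```python
import math

lam, nu = 0.5, 0.7                                # arbitrary test values in (0, 1)
theta = (1 - lam/2 - math.sqrt(1 - lam)) / lam    # theta for p = q = 1/2

def f(a):
    return 2.0 / ((2*theta)**a + (2*theta)**(-a))

# Left side: sum of nu^a * Gtilde(lam,1,a), with Gtilde = (f(a) - f(a+1))/(1-lam).
lhs = sum(nu**a * (f(a) - f(a + 1)) for a in range(1, 200)) / (1 - lam)

# Right side: the summed-by-parts expression.
rhs = lam / ((1 - lam) * (2 - lam)) \
    - (1 - nu) / (1 - lam) * sum(nu**(a - 1) * f(a) for a in range(1, 200))

print(lhs, rhs)
```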
Let us focus on $\mathbb{E}\left( M_{2n}\right) $
\begin{align*}
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}\mathbb{E}\left( M_{2n}\right) & =\left. \frac{\partial\Psi
}{\partial\nu}\right\vert _{\nu=1}\\
& =\left. \frac{1}{1-\lambda}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\nu^{a-1}\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}-\frac{1-\nu}{1-\lambda}
{\displaystyle\sum\limits_{a=1}^{\infty}}
(a-1)\nu^{a-2}\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}\right\vert _{\nu=1}\\
& =\frac{1}{1-\lambda}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}
\end{align*}
which shows that (in an extended sense) the following sequence is Abel
convergent \cite{PP-trffc, PS-trffc}:
\begin{align*}
\lim_{n\rightarrow\infty}^{\;\;\;\;\;\;\;\;\ast}\frac{\mathbb{E}\left(
M_{2n}\right) }{\sqrt{n}} & =\lim_{\lambda\rightarrow1^{-}}(1-\lambda)^{3/2}
{\displaystyle\sum\limits_{n=1}^{\infty}}
\frac{1}{(1/2)!}\lambda^{n-1/2}\mathbb{E}\left( M_{2n}\right) \\
& =\lim_{\lambda\rightarrow1^{-}}\left( \frac{1-\lambda}{\lambda}\right)
^{1/2}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}\frac{1}{(1/2)!}.
\end{align*}
Let $2\theta=\exp(-t)$, then
\[
\frac{2}{(2\theta)^{a}+(2\theta)^{-a}}=\frac{2}{e^{at}+e^{-at}}
=\operatorname{sech}(at)
\]
and, because $\lambda=2\theta/\left( \theta+\frac{1}{2}\right) ^{2}
=2\theta/\left( \theta^{2}+\theta+\frac{1}{4}\right) $,
\[
\dfrac{1-\lambda}{\lambda}=\frac{\theta^{2}+\frac{1}{4}+\theta}{2\theta
}-1=\frac{(2\theta)+(2\theta)^{-1}}{4}-\frac{1}{2}=\frac{\cosh(t)-1}{2}.
\]
We hav
\begin{align*}
\lim_{n\rightarrow\infty}^{\;\;\;\;\;\;\;\;\ast}\frac{\mathbb{E}\left(
M_{2n}\right) }{\sqrt{n}} & =\lim_{t\rightarrow0^{+}}\sqrt{\frac{\cosh
(t)-1}{2}}\frac{1}{(1/2)!}
{\displaystyle\sum\limits_{a=1}^{\infty}}
\operatorname{sech}(at)\\
& =\lim_{t\rightarrow0^{+}}\sqrt{\frac{1}{\pi}}\,t
{\displaystyle\sum\limits_{a=1}^{\infty}}
\operatorname{sech}(at)
\end{align*}
since $(\cosh(t)-1)/2\sim t^{2}/4$ and $(1/2)!=\sqrt{\pi}/2$. \ By a Riemann
sum-based argument,
\[
t
{\displaystyle\sum\limits_{a=\left\lceil \alpha/t\right\rceil }^{\left\lfloor
\beta/t\right\rfloor }}
\operatorname{sech}(at)\rightarrow
{\displaystyle\int\limits_{\alpha}^{\beta}}
\operatorname{sech}(b)db
\]
as $t\rightarrow0^{+}$, which in turn gives
\[
\lim_{n\rightarrow\infty}^{\;\;\;\;\;\;\;\;\ast}\frac{\mathbb{E}\left(
M_{2n}\right) }{\sqrt{2n}}=\sqrt{\frac{1}{2\pi}}
{\displaystyle\int\limits_{0}^{\infty}}
\operatorname{sech}(b)db=\sqrt{\frac{\pi}{8}}
\]
\]
as $\alpha\rightarrow0^{+}$ and $\beta\rightarrow\infty$. \ A similar argument
gives
\[
\lim_{n\rightarrow\infty}^{\;\;\;\;\;\;\;\;\ast}\frac{\mathbb{E}\left(
M_{2n}^{2}\right) }{2n}=\frac{1}{4}
{\displaystyle\int\limits_{0}^{\infty}}
b\operatorname{sech}(b)db=\frac{G}{2}
\]
where $G$ is Catalan's constant \cite{F2-trffc}.
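Both limiting constants can be reproduced numerically straight from the sech sums, without invoking the integrals. A small Python check (the step $t=10^{-3}$ and the cutoff of 30000 terms are ad hoc choices):

```python
import math

t, N = 1e-3, 30000
sech = lambda x: 2.0 / (math.exp(x) + math.exp(-x))

s1 = sum(sech(a * t) for a in range(1, N + 1))
s2 = sum((a * t) * sech(a * t) for a in range(1, N + 1))

first = math.sqrt(1.0 / (2 * math.pi)) * t * s1   # approximates sqrt(pi/8)
second = 0.25 * t * s2                            # approximates G/2 (Catalan)

print(first, second)
```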
\section{Algebra for $\ell=2$}
Combining elements of the recurrence for $\ell=2$ when $n\equiv
4\operatorname{mod}4$, we have
\begin{align*}
F_{n}(x,a) & =(p+q\,\delta_{x,0})\,F_{n-1}(x,a)+q(1-\delta_{x,a}
)F_{n-1}(x+1,a)\\
& =(p+q\,\delta_{x,0})\left[ (p+q\,\delta_{x,0})\,F_{n-2}(x,a)+q(1-\delta
_{x,a})F_{n-2}(x+1,a)\right] +\\
& q(1-\delta_{x,a})\left[ (p+q\,\delta_{x+1,0})\,F_{n-2}(x+1,a)+q(1-\delta
_{x+1,a})F_{n-2}(x+2,a)\right] \\
& =(p+q\,\delta_{x,0})^{2}F_{n-2}(x,a)+q(2p+q\,\delta_{x,0})(1-\delta
_{x,a})F_{n-2}(x+1,a)+\\
& q^{2}(1-\delta_{x,a})(1-\delta_{x+1,a})F_{n-2}(x+2,a)
\end{align*}
owing to $\delta_{x+1,0}=0$ for all $x\geq0$, and
\begin{align*}
F_{n-2}(x,a) & =p\,\delta_{x,a}\,F_{n-3}(x-1,a-1)+p\,F_{n-3}
(x-1,a)+q\,F_{n-3}(x,a)\\
& =p\,\delta_{x,a}\left[ p\,\delta_{x,a}\,F_{n-4}(x-2,a-2)+p\,F_{n-4}
(x-2,a-1)+q\,F_{n-4}(x-1,a-1)\right] +\\
& p\left[ p\,\delta_{x-1,a}\,F_{n-4}(x-2,a-1)+p\,F_{n-4}(x-2,a)+q\,F_{n-4}
(x-1,a)\right] +\\
& q\left[ p\,\delta_{x,a}\,F_{n-4}(x-1,a-1)+p\,F_{n-4}(x-1,a)+q\,F_{n-4}
(x,a)\right] \\
& =p^{2}\delta_{x,a}F_{n-4}(x-2,a-2)+p^{2}\left( \delta_{x,a}+\delta
_{x-1,a}\right) F_{n-4}(x-2,a-1)+p^{2}F_{n-4}(x-2,a)+\\
& 2\,p\,q\,\delta_{x,a}F_{n-4}(x-1,a-1)+2\,p\,q\,F_{n-4}(x-1,a)+q^{2}
F_{n-4}(x,a)
\end{align*}
owing to $\delta_{x-1,a-1}=\delta_{x,a}$ always. \ Let\ $G_{n}(x,a)=F_{4n}
(x,a)$. \ A recursion for $G_{n}(x,a)$ in terms of
\begin{align*}
& G_{n-1}(x-2,a-2),\;G_{n-1}(x-1,a-2),\;G_{n-1}(x,a-2),\\
& G_{n-1}(x-2,a-1),\;G_{n-1}(x-1,a-1),\;G_{n-1}(x,a-1),\;G_{n-1}(x+1,a-1),\\
& G_{n-1}(x-2,a),\;G_{n-1}(x-1,a),\;G_{n-1}(x,a),\;G_{n-1}(x+1,a),\;G_{n-1}
(x+2,a)
\end{align*}
arises (too complicated to reproduce here). \ Note, for example,
\[
G_{1}(x,a)=p^{4}\delta_{x,2}\delta_{a,2}+2\,p^{3}q\,\delta_{x,1}\delta
_{a,2}+p^{2}q^{2}\delta_{x,0}\delta_{a,2}+2\,p^{3}q\,\delta_{x,1}\delta
_{a,1}+2\,p(1+p)q^{2}\delta_{x,0}\delta_{a,1}+q^{2}\delta_{x,0}\delta_{a,0}
\]
and hence
\begin{align*}
\lambda
{\displaystyle\sum\limits_{x=0}^{a}}
G_{1}(x,a)\mu^{x} & =\lambda\,q^{2}\delta_{a,0}+2\,\lambda\,p(1+p)q^{2}
\delta_{a,1}+2\,\lambda\,p^{3}q\,\mu\,\delta_{a,1}+\\
& \lambda\,p^{2}q^{2}\delta_{a,2}+2\,\lambda\,p^{3}q\,\mu\,\delta
_{a,2}+\lambda\,p^{4}\mu^{2}\,\delta_{a,2}.
\end{align*}
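As a consistency check, the coefficients of $G_{1}$ listed above must sum to $1$ whenever $p+q=1$, since they exhaust one full $RRGG$ cycle of transition probabilities: the $a=2$ terms sum to $p^{2}$, the $a=1$ terms to $2pq$, and the $a=0$ term is $q^{2}$. A short numerical check in Python, with $p=0.3$ chosen arbitrarily:

```python
p = 0.3
q = 1 - p

# Nonzero entries {(x, a): coefficient} of G_1(x, a) as listed above.
G1 = {
    (2, 2): p**4,
    (1, 2): 2 * p**3 * q,
    (0, 2): p**2 * q**2,
    (1, 1): 2 * p**3 * q,
    (0, 1): 2 * p * (1 + p) * q**2,
    (0, 0): q**2,
}
total = sum(G1.values())
print(total)  # 1.0 up to rounding
```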
Recall the quadratic coefficient for $\tilde{G}$ we found in formula (1);
here, this becomes a quartic
\begin{equation}
p^{4}\mu^{4}+4\,p^{3}q\,\mu^{3}+6\,p^{2}q^{2}\mu^{2}-\frac{\mu^{2}}{\lambda
}+4\,p\,q^{3}\mu+q^{4}
\end{equation}
which factors as
\[
p^{4}\left( \mu-\frac{\theta}{2p^{2}}\right) \left( \mu-\frac{2q^{2}
}{\theta}\right) \left( \mu-\frac{\omega}{2p^{2}}\right) \left( \mu
-\frac{2q^{2}}{\omega}\right)
\]
where
\[
\begin{array}
[c]{ccccc}
\dfrac{\theta}{2p^{2}}=\dfrac{1-2\,p\,q\sqrt{\lambda}-\sqrt{1-4\,p\,q\sqrt
{\lambda}}}{2p^{2}\sqrt{\lambda}}, & & \text{equivalently,} & &
\dfrac{2q^{2}}{\theta}=\dfrac{1-2\,p\,q\sqrt{\lambda}+\sqrt{1-4\,p\,q\sqrt
{\lambda}}}{2p^{2}\sqrt{\lambda}};
\end{array}
\]
\[
\begin{array}
[c]{ccccc}
\dfrac{\omega}{2p^{2}}=\dfrac{-1-2\,p\,q\sqrt{\lambda}-\sqrt{1+4\,p\,q\sqrt
{\lambda}}}{2p^{2}\sqrt{\lambda}}, & & \text{equivalently,} & &
\dfrac{2q^{2}}{\omega}=\dfrac{-1-2\,p\,q\sqrt{\lambda}+\sqrt{1+4\,p\,q\sqrt
{\lambda}}}{2p^{2}\sqrt{\lambda}}.
\end{array}
\]
Define
\begin{align*}
\Delta_{1} & =256\,p^{8}q^{8}+1024\,p^{7}q^{7}\theta+64\,p^{4}q^{6}
(22\,p^{2}-q-3\,p\,q)\theta^{2}+\\
& 128\,p^{3}q^{5}(8\,p^{2}-q-3\,p\,q)\theta^{3}+16\,p^{2}q^{4}(35\,p^{2}
-5\,q-17\,p\,q)\theta^{4}+\\
& 32\,p\,q^{3}(8\,p^{2}-q-3\,p\,q)\theta^{5}+4\,q^{2}(22\,p^{2}
-q-3\,p\,q)\theta^{6}+16\,p\,q\,\theta^{7}+\theta^{8},
\end{align*}
then it can be shown that
\[
G(\lambda,0,1)=\frac{8\,p(1+p)q^{2}\left( \theta+2\,p\,q\right) ^{4}
\theta^{2}}{(1-q^{2}\lambda)\Delta_{1}},
\]
\[
G(\lambda,1,1)=\frac{8\,p^{3}q\left[ \theta^{2}+2(2\,p-1)q\,\theta
+4\,p^{2}q^{2}\right] \left[ \theta^{2}+2(2\,p+1)q\,\theta+4\,p^{2}
q^{2}\right] \theta^{2}}{(1-q^{2}\lambda)\Delta_{1}}.
\]
Unfortunately the expressions become cumbersome beyond this point; no pattern
is evident. \ Define
\begin{align*}
\Gamma & =\left[ \theta^{2}+2(2p-1)q\,\theta+4p^{2}q^{2}\right] \left[
\theta^{2}+2(2p+1)q\,\theta+4p^{2}q^{2}\right] \cdot\\
& \left[ \theta^{4}+8pq\theta^{3}+36p^{2}q^{2}\theta^{2}+32p^{3}q^{3}
\theta+16p^{4}q^{4}\right] \cdot\\
& \left\{ 4p^{2}q^{2}(\theta+2pq)^{4}+\left[ 4pq(\theta+pq)(\theta
+4pq)\left( \theta^{2}-2q^{2}\theta+4p^{2}q^{2}\right) \right]
\omega+\right. \\
& \left[ \theta^{4}+2(3p-1)q\,\theta^{3}-8p(1+q)q^{2}\theta^{2}
+8p^{2}(3p-1)q^{3}\theta+16p^{4}q^{4}\right] \omega^{2}-\\
& \left. 2(1+p)q\,\theta^{2}\omega^{3}\right\} ,
\end{align*}
\begin{align*}
\Delta_{2} & =4096p^{12}q^{12}+8192p^{11}q^{11}\theta-2048p^{10}q^{12}
\theta+7168p^{10}q^{10}\theta^{2}-4096p^{9}q^{11}\theta^{2}+5120p^{9}
q^{9}\theta^{3}-\\
& 3072p^{8}q^{10}\theta^{3}+3072p^{8}q^{8}\theta^{4}-1536p^{7}q^{9}\theta
^{4}+1280p^{7}q^{7}\theta^{5}-768p^{6}q^{8}\theta^{5}+448p^{6}q^{6}\theta
^{6}-\\
& 256p^{5}q^{7}\theta^{6}+128p^{5}q^{5}\theta^{7}-32p^{4}q^{6}\theta
^{7}+16p^{4}q^{4}\theta^{8}-2048p^{10}q^{12}\omega+2048p^{10}q^{10}
\theta\omega+\\
& 1024p^{8}q^{11}\theta\omega-5120p^{9}q^{11}\theta\omega+6144p^{9}q^{9}
\theta^{2}\omega+2048p^{7}q^{10}\theta^{2}\omega-5632p^{8}q^{10}\theta
^{2}\omega+\\
& 7424p^{8}q^{8}\theta^{3}\omega+1536p^{6}q^{9}\theta^{3}\omega-4096p^{7}
q^{9}\theta^{3}\omega+4864p^{7}q^{7}\theta^{4}\omega+768p^{5}q^{8}\theta
^{4}\omega-\\
& 2304p^{6}q^{8}\theta^{4}\omega+1856p^{6}q^{6}\theta^{5}\omega+384p^{4}
q^{7}\theta^{5}\omega-1024p^{5}q^{7}\theta^{5}\omega+384p^{5}q^{5}\theta
^{6}\omega+\\
& 128p^{3}q^{6}\theta^{6}\omega-352p^{4}q^{6}\theta^{6}\omega+32p^{4}
q^{4}\theta^{7}\omega+16p^{2}q^{5}\theta^{7}\omega-80p^{3}q^{5}\theta
^{7}\omega-8p^{2}q^{4}\theta^{8}\omega+\\
& 1024p^{10}q^{10}\omega^{2}+4096p^{9}q^{9}\theta\omega^{2}-512p^{8}
q^{10}\theta\omega^{2}+6144p^{8}q^{8}\theta^{2}\omega^{2}-2048p^{7}q^{9}
\theta^{2}\omega^{2}+\\
& 5120p^{7}q^{7}\theta^{3}\omega^{2}-2944p^{6}q^{8}\theta^{3}\omega
^{2}+2944p^{6}q^{6}\theta^{4}\omega^{2}-2048p^{5}q^{7}\theta^{4}\omega
^{2}+1280p^{5}q^{5}\theta^{5}\omega^{2}-\\
& 736p^{4}q^{6}\theta^{5}\omega^{2}+384p^{4}q^{4}\theta^{6}\omega^{2}
-128p^{3}q^{5}\theta^{6}\omega^{2}+64p^{3}q^{3}\theta^{7}\omega^{2}
-8p^{2}q^{4}\theta^{7}\omega^{2}+4p^{2}q^{2}\theta^{8}\omega^{2}+\\
& 1024p^{9}q^{9}\omega^{3}+3328p^{8}q^{8}\theta\omega^{3}+3328p^{7}q^{7}
\theta^{2}\omega^{3}-256p^{5}q^{8}\theta^{2}\omega^{3}-896p^{6}q^{8}\theta
^{2}\omega^{3}+\\
& 1984p^{6}q^{6}\theta^{3}\omega^{3}-256p^{4}q^{7}\theta^{3}\omega
^{3}-1280p^{5}q^{7}\theta^{3}\omega^{3}+1088p^{5}q^{5}\theta^{4}\omega
^{3}-64p^{3}q^{6}\theta^{4}\omega^{3}-\\
& 768p^{4}q^{6}\theta^{4}\omega^{3}+496p^{4}q^{4}\theta^{5}\omega^{3}
-64p^{2}q^{5}\theta^{5}\omega^{3}-320p^{3}q^{5}\theta^{5}\omega^{3}
+208p^{3}q^{3}\theta^{6}\omega^{3}-\\
& 16pq^{4}\theta^{6}\omega^{3}-56p^{2}q^{4}\theta^{6}\omega^{3}+52p^{2}
q^{2}\theta^{7}\omega^{3}+4pq\theta^{8}\omega^{3}+256p^{8}q^{8}\omega
^{4}+768p^{7}q^{7}\theta\omega^{4}+\\
& 640p^{6}q^{6}\theta^{2}\omega^{4}-64p^{4}q^{7}\theta^{2}\omega^{4}
-192p^{5}q^{7}\theta^{2}\omega^{4}+320p^{5}q^{5}\theta^{3}\omega^{4}
-64p^{3}q^{6}\theta^{3}\omega^{4}-\\
& 224p^{4}q^{6}\theta^{3}\omega^{4}+176p^{4}q^{4}\theta^{4}\omega^{4}
-16p^{2}q^{5}\theta^{4}\omega^{4}-112p^{3}q^{5}\theta^{4}\omega^{4}
+80p^{3}q^{3}\theta^{5}\omega^{4}-\\
& 16pq^{4}\theta^{5}\omega^{4}-56p^{2}q^{4}\theta^{5}\omega^{4}+40p^{2}
q^{2}\theta^{6}\omega^{4}-4q^{3}\theta^{6}\omega^{4}-12pq^{3}\theta^{6}
\omega^{4}+12pq\theta^{7}\omega^{4}+\theta^{8}\omega^{4},
\end{align*}
then it can be shown that
\[
G(\lambda,0,2)=\frac{4p^{2}q^{2}(\theta+2\,p\,q)^{2}\theta\,\omega\,\Gamma
}{(1-q^{2}\lambda)\Delta_{1}\Delta_{2}}.
\]
We omit analogous expressions for $G(\lambda,1,2)$ and $G(\lambda,2,2)$.
\ Computer simulations suggest that first/second moments of $M_{n}/\sqrt{n}$
for $p=q=1/2$ and $\ell\geq2$ are numerically equal to those for $\ell=1$,
given suitably large $n$. \ It is reasonable to conjecture that the maximum
queue length distribution $\digamma$ enjoys a kind of universality, depending
only on the values $p\leq q$. \ The function $\digamma$ is apparently
independent of the choice of light cycles $RG$, $RRGG$, $RRRGGG$, \ldots\ but
a rigorous proof may be difficult.
\section{Asymptotic Distribution of $S_{n}$}
No universality applies to the asymptotic distribution of $S_{n}$. \ The light
cycle $RG$ is distinct from $RRGG$ in this regard. \ For $\ell=1$, we have
\[
\begin{array}
[c]{ccc}
\lim\limits_{n\rightarrow\infty}\mathbb{P}\left\{ S_{n}=x\right\}
=\dfrac{(q-p)p^{2x}}{q^{2x+2}}, & & x=0,1,2,\ldots
\end{array}
\]
which is found via techniques in \cite{T8-trffc}. \ For $\ell=2$, the
corresponding probability for $x=0$ is
\[
\frac{4(q-p)}{q^{2}\left( 1+2\,q+\sqrt{1+4\,p\,q}\right) }
\]
and for $x=1$ is
\[
\frac{4(q-p)\left[ 1+2\,p\,q(q-p)-(q-p+2\,p\,q)\sqrt{1+4\,p\,q}\right]
}{q^{4}\left[ -(q-p)+\sqrt{1+4\,p\,q}\right] \left( 1+2\,q+\sqrt
{1+4\,p\,q}\right) }.
\]
Let us elaborate on how these formulas are derived. \ From the equation
\[
z^{\ell}=(q+p\,z)^{2\,\ell}
\]
we obtain $\ell$ roots
\[
z_{0}=1,\;z_{1},\;z_{2},\;\ldots,\;z_{\ell-1}
\]
satisfying $\left\vert z_{k}\right\vert \leq1$ for all $k$, and determine
$w_{0},\,w_{1},\,w_{2},\,\ldots,\,w_{\ell-1}$ via the linear system
\[
\left(
\begin{array}
[c]{ccccc}
1 & 1 & 1 & \ldots & 1\\
1 & \dfrac{z_{1}}{q+p\,z_{1}} & \left( \dfrac{z_{1}}{q+p\,z_{1}}\right) ^{2}
& \ldots & \left( \dfrac{z_{1}}{q+p\,z_{1}}\right) ^{\ell-1}\\
1 & \dfrac{z_{2}}{q+p\,z_{2}} & \left( \dfrac{z_{2}}{q+p\,z_{2}}\right) ^{2}
& \ldots & \left( \dfrac{z_{2}}{q+p\,z_{2}}\right) ^{\ell-1}\\
\vdots & \vdots & \vdots & & \vdots\\
1 & \dfrac{z_{\ell-1}}{q+p\,z_{\ell-1}} & \left( \dfrac{z_{\ell-1}
}{q+p\,z_{\ell-1}}\right) ^{2} & \ldots & \left( \dfrac{z_{\ell-1}
}{q+p\,z_{\ell-1}}\right) ^{\ell-1}
\end{array}
\right) \left(
\begin{array}
[c]{c}
w_{0}\\
w_{1}\\
w_{2}\\
\vdots\\
w_{\ell-1}
\end{array}
\right) =\left(
\begin{array}
[c]{c}
\dfrac{q-p}{q}\ell\\
0\\
0\\
\vdots\\
0
\end{array}
\right) .
\]
Then
\[
H(z)=\frac{(q+p\,z)^{\,\ell}}{z^{\ell}-(q+p\,z)^{2\,\ell}}\left( \frac
{z}{q+p\,z}-1\right)
{\displaystyle\sum\limits_{k=0}^{\ell-1}}
w_{k}\left( \dfrac{z}{q+p\,z}\right) ^{k}
\]
is the probability generating function for $S_{n}$ as $n\rightarrow\infty$.
\ In particular, for $\ell=1$
\[
\begin{array}
[c]{ccc}
w_{0}=\dfrac{q-p}{q}, & & H(z)=\dfrac{(q-p)(z-1)}{z-(q+p\,z)^{2}}=\dfrac
{q-p}{q^{2}-p^{2}z},
\end{array}
\]
\[
\lim_{n\rightarrow\infty}\mathbb{E}\left( S_{n}\right) =H^{\prime
}(1)=\left. \frac{p^{2}\left( q-p\right) }{\left( q^{2}-p^{2}z\right)
^{2}}\right\vert _{z=1}=\frac{p^{2}}{q-p},
\]
\[
\lim_{n\rightarrow\infty}\mathbb{E}\left( S_{n}(S_{n}-1)\right)
=H^{^{\prime\prime}}(1)=\left. \frac{2p^{4}\left( q-p\right) }{\left(
q^{2}-p^{2}z\right) ^{3}}\right\vert _{z=1}=\frac{2p^{4}}{(q-p)^{2}}
\]
and, for $\ell=2$
\[
\begin{array}
[c]{ccc}
w_{0}=\dfrac{4(q-p)}{2+(q-p)+\sqrt{1+4\,p\,q}}, & & w_{1}=\dfrac
{4\,p(q-p)}{q\left[ -(q-p)+\sqrt{1+4\,p\,q}\right] },
\end{array}
\]
\begin{align*}
H(z) & =\frac{4\,q(q-p)(q+p\,z)}{\left( q^{2}-p^{2}z\right) \left[
q^{2}+(1+2\,p\,q)z+p^{2}z^{2}\right] }\cdot\\
& \left\{ \frac{1}{2+(q-p)+\sqrt{1+4\,p\,q}}+\dfrac{p\,z}{q(q+p\,z)\left[
-(q-p)+\sqrt{1+4\,p\,q}\right] }\right\} ,
\end{align*}
\[
\lim_{n\rightarrow\infty}\mathbb{E}\left( S_{n}\right) =H^{\prime}
(1)=\frac{1}{4}\left[ -4+2(q-p)+\frac{1}{q-p}+\sqrt{1+4\,p\,q}\right] .
\]
We obtained $H(z)$ for $\ell=2$ by noting that
\[
(q+p\,z)^{4}-z^{2}=(1-z)(q^{2}-p^{2}z)\left[ q^{2}+(1+2\,p\,q)z+p^{2}
z^{2}\right] ,
\]
simplifying the first factor. \ In words, $S_{n}$ asymptotically follows a
geometric distribution for $\ell=1$; \ its mean for $\ell=2$ is noticeably
smaller than that for $\ell=1$.
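The $\ell=2$ weights can be double-checked by solving the linear system numerically. The sketch below (plain Python; the subcritical choice $p=0.3$, $q=0.7$ is arbitrary) finds the root $z_{1}$ of $z^{2}=(q+p\,z)^{4}$ inside the unit disk, solves the $2\times2$ system, and compares with the closed forms quoted above.

```python
import math

p, q = 0.3, 0.7   # arbitrary subcritical choice, p < q, p + q = 1

# z^2 = (q+pz)^4 factors as (q+pz)^2 = z (roots 1 and q^2/p^2 > 1) and
# (q+pz)^2 = -z; the latter has one root inside the unit disk:
z1 = (-(1 + 2*p*q) + math.sqrt(1 + 4*p*q)) / (2 * p**2)
assert abs(z1**2 - (q + p*z1)**4) < 1e-12 and abs(z1) < 1

# 2x2 system: w0 + w1 = 2(q-p)/q  and  w0 + w1 * z1/(q + p z1) = 0.
r = z1 / (q + p*z1)
b = 2 * (q - p) / q
w1 = b / (1 - r)
w0 = b - w1

# Closed forms quoted in the text.
w0_cf = 4*(q - p) / (2 + (q - p) + math.sqrt(1 + 4*p*q))
w1_cf = 4*p*(q - p) / (q * (-(q - p) + math.sqrt(1 + 4*p*q)))
print(w0, w0_cf, w1, w1_cf)
```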
\section{Appendix}
Here is more information pertinent to Section 3. \ For $a=1$, we solve a
$2\times2$ system:
\begin{align*}
0 & =-\left[ 2p^{3}q\mu+2p(1+p)q^{2}\right] \mu^{2}-\left[ 2p^{3}
q\mu+4p^{2}q^{2}+2pq^{3}\right] \mu^{2}G(\lambda,0,0)+\\
& \left[ p^{4}\mu^{4}+2p^{3}q\mu^{3}+p^{2}q^{2}\mu^{2}-(1+3p)q^{3}\mu
^{2}+4pq^{3}\mu+q^{4}\right] G(\lambda,0,1)+\\
& \left[ p^{4}\mu^{4}+2p^{3}q\mu^{3}+2p^{3}q\mu^{3}+5p^{2}q^{2}\mu
^{2}+2pq^{3}\mu-q^{4}\mu+q^{4}\right] \mu\,G(\lambda,1,1)
\end{align*}
for $G(\lambda,0,1)$ and $G(\lambda,1,1)$, taking $\mu=\theta/(2p^{2})$ and
$\mu=2q^{2}/\theta$, utilizing
\[
G(\lambda,0,0)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}G_{n}(0,0)=
{\displaystyle\sum\limits_{n=1}^{\infty}}
\lambda^{n}q^{2n}=\frac{q^{2}\lambda\,}{1-q^{2}\lambda}.
\]
For $a=2$, we solve a $3\times3$ system:
\begin{align*}
0 & =-\left[ p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{2}-\left[
p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{2}G(\lambda,0,0)-\\
& \left[ p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{2}G(\lambda,0,1)-\\
& \left[ 2p^{3}q\mu^{2}+4p^{2}q^{2}\mu+2pq^{3}\right] \mu^{2}G(\lambda
,1,1)+\\
& \left[ p^{4}\mu^{4}+2p^{3}q\mu^{3}+p^{2}q^{2}\mu^{2}-q^{4}\mu+q^{4}\right]
\mu\,G(\lambda,1,2)+\\
& \left[ p^{4}\mu^{3}+4p^{3}q\mu^{2}+5p^{2}q^{2}\mu+2pq^{3}\right] \mu
^{3}G(\lambda,2,2)-\\
& \left[ (1+3p)q^{3}\mu^{2}-4pq^{3}\mu-q^{4}\right] G(\lambda,0,2)
\end{align*}
for $G(\lambda,0,2)$, $G(\lambda,1,2)$ and $G(\lambda,2,2)$; taking
$\mu=\theta/(2p^{2})$, $\mu=2q^{2}/\theta$ and $\mu=\omega/(2p^{2})$. \ For
$a\geq3$, we solve a $4\times4$ system:
\begin{align*}
0 & =-\left[ p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{a}
G(\lambda,a-2,a-2)-\\
& \left[ p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{a}G(\lambda
,a-2,a-1)-\\
& \left[ 2p^{3}q\mu^{2}+4p^{2}q^{2}\mu+2pq^{3}\right] \mu^{a}G(\lambda
,a-1,a-1)+\\
& \left[ p^{4}\mu^{2}+2p^{3}q\mu+p^{2}q^{2}\right] \mu^{a+1}G(\lambda
,a-1,a)+\\
& \left[ p^{4}\mu^{3}+4p^{3}q\mu^{2}+5p^{2}q^{2}\mu+2pq^{3}\right] \mu
^{a+1}G(\lambda,a,a)-\\
& \left[ (1+3p)q^{3}\mu^{2}-4pq^{3}\mu-q^{4}\right] G(\lambda,0,a)-q^{4}
\left( \mu-1\right) \mu\,G(\lambda,1,a)
\end{align*}
for $G(\lambda,0,a)$, $G(\lambda,1,a)$, $G(\lambda,a-1,a)$ and $G(\lambda
,a,a)$; taking $\mu=\theta/(2p^{2})$, $\mu=2q^{2}/\theta$, $\mu=\omega
/(2p^{2})$ and $\mu=2q^{2}/\omega$. \ Note that, via this procedure, the terms
$G(\lambda,2,4)$, $G(\lambda,2,5)$, $G(\lambda,3,5)$, $G(\lambda,2,6)$,
$G(\lambda,3,6)$, $G(\lambda,4,6)$, \ldots\ remain open. \ Formula (2)
experiences only limited success in calculating $\tilde{G}(\lambda,\mu,a)$,
unlike formula (1).
\section{Acknowledgements}
I am thankful to Marko Boon \cite{T9-trffc} for helpful discussions that led
to Section 4.
The spin-$S$ antiferromagnet, with isotropic coupling $J_1$ between nearest-neighbor spins located on the sites of a square lattice, represents
one of the most paradigmatic models of quantum magnetism. At zero temperature, the system develops long-range antiferromagnetic (N\'eel) order
for any value of $S$: while for $S \ge 1$ there are analytical arguments~\cite{dyson1978,neves1986}, for the extreme quantum case with $S=1/2$,
this has been numerically proven thanks to quantum Monte Carlo simulations on large systems~\cite{reger1988,sandvik1997,calandra1998}. Instead,
any finite temperature will restore spin rotation symmetry, in agreement with the Mermin-Wagner theorem~\cite{mermin1966}. A magnetically
disordered ground state may be also achieved by including further super-exchange couplings, most notably a next-nearest-neighbor interaction
$J_2$, which destabilizes the N\'eel order driving towards a quantum phase transition. In this respect, much effort has been spent to understand
the ground-state properties of the $J_1-J_2$ model defined by:
\begin{equation}
\label{eq:j1j2ham}
{\cal H} = J_1 \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j +
J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j\, .
\end{equation}
Here, $\langle \dots \rangle$ and $\langle\langle \dots \rangle\rangle$ stand for nearest-neighbor and next-nearest-neighbor sites on the square
lattice, respectively; $\mathbf{S}_i=(S^x_i,S^y_i,S^z_i)$ represents the spin-1/2 operator on the site $i$. Both the spin-spin interactions are
taken positive.
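For orientation, the Hamiltonian above can be diagonalized exactly on very small clusters. The Python sketch below is only meant to fix conventions (a $2\times2$ open cluster with arbitrary couplings $J_1=1$, $J_2=0.5$ is far too small to say anything about the phase diagram); for $J_2=0$ the four nearest-neighbor bonds form a four-site ring, whose exact ground-state energy is $-2J_1$.

```python
import numpy as np

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, n):
    """Embed a single-site operator `op` at site i of an n-site system."""
    m = np.array([[1.0 + 0j]])
    for k in range(n):
        m = np.kron(m, op if k == i else np.eye(2))
    return m

def heisenberg(bonds, j, n):
    """Sum of j * S_i . S_k over the given bonds."""
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i, k in bonds:
        for op in (sx, sy, sz):
            h += j * site_op(op, i, n) @ site_op(op, k, n)
    return h

# 2x2 open cluster, sites laid out as  0 1 / 2 3.
nn = [(0, 1), (2, 3), (0, 2), (1, 3)]   # J1 bonds (a 4-site ring)
nnn = [(0, 3), (1, 2)]                  # J2 diagonal bonds
H = heisenberg(nn, 1.0, 4) + heisenberg(nnn, 0.5, 4)
e0 = np.linalg.eigvalsh(H)[0]
e0_heis = np.linalg.eigvalsh(heisenberg(nn, 1.0, 4))[0]  # J2 = 0 reference
print(e0, e0_heis)  # e0_heis = -2 exactly for the 4-site ring
```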
In the presence of a finite $J_2$, a severe sign problem is present (especially in the local basis with the $z$-component defined on each site), which
prevents quantum Monte Carlo algorithms from reaching large system sizes~\cite{sorella1998,choo2019,szabo2020}. Over the last three decades, several alternative methods have been
introduced and steadily improved, such as exact diagonalizations, density-matrix renormalization group (DMRG), functional-renormalization group (fRG),
and variational Monte Carlo (VMC) approaches. The ground-state properties of the $J_1-J_2$ model have been intensively investigated, with
contradicting results, supporting the existence of a valence-bond solid (with either columnar or plaquette order)~\cite{ed1,ed4,mambrini2006}
or a spin liquid (either gapped or gapless)~\cite{jiang2012,mezzacapo2012,hu2013,hering2019}, or even
both~\cite{gong2014,morita2015,wang2018,ferrari2020,nomura2020}. One important aspect emerging in the latest calculations is the existence of
a {\it continuous} quantum phase transition between the antiferromagnetic and the paramagnetic phases for $J_2/J_1 \approx 0.5$, where the
staggered magnetization (hereafter named simply ``magnetization'') goes to zero.
Recently, borrowing concepts from quantum information, tensor-network methods have been introduced~\cite{verstraete2004,verstraete2008,murg2009}.
In one dimension, the so-called matrix-product states (MPS) offer a convenient and elegant rephrasing of previous DMRG ideas. MPS evolved into
the method of choice and provide very accurate approximations of the exact ground-state properties. Generalizations in two dimensions are more
problematic. The prominent example, projected entangled-pair states (PEPS), provide the correct entanglement structure of most quantum ground
states of local spin Hamiltonians~\cite{verstraete2006}, however, they suffer from a steep scaling of computational effort when enlarging the
system size. For this reason, their application has been limited to ladder geometries with small number of legs~\cite{poilblanc2015,poilblanc2017a}
and finite 2D clusters with open boundary and up to $\approx 200-300$ sites~\cite{liu2018}. In order to overcome this computational barrier and
avoid boundary effects, algorithms that work directly in the thermodynamic limit (dubbed iPEPS) have been introduced and
developed~\cite{jordan2008,orus2009}: here, only a small number of tensors is explicitly considered and embedded into an environment that is
self-consistently obtained (e.g., within the so-called corner-transfer matrix approaches~\cite{nishino1998} or channels~\cite{Vanderstraeten2016}).
The size of these tensors, and in turn the number of variational parameters of the wave function, is characterized by the so-called bond dimension
$D$. The iPEPS are systematically improved by enlarging the bond dimension, accounting for increasingly entangled states.
In recent years, iPEPS have been applied to assess the nature of the ground state of the $J_1-J_2$ model, mainly focusing on the highly-frustrated
regime $J_2/J_1 \approx 0.5$~\cite{wang2013,poilblanc2017b,haghshenas2018}. However, these attempts were not completely satisfactory, since they
either used a simplified tensor structure, limited to the description of paramagnets, or suffered from optimization problems, arising in methods
that are not fully controlled (e.g., the so-called simple and full update~\cite{jiang2008,jordan2008}). In this respect, a
breakthrough in the field has been achieved by performing the tensor optimization using the ideas of algorithmic differentiation, or better the
{\it adjoint algorithmic differentiation} (AAD) technique, which allow a very efficient optimization even in the presence of large number of
parameters~\cite{liao2019}. Here, Liao and collaborators limited their application to the unfrustrated Heisenberg model (with $J_2=0$), showing
that extremely accurate and completely stable results may be obtained for both the ground-state energy and magnetization.
Even though PEPS (and iPEPS) {\it Ans\"atze} are designed to describe both gapped and gapless states (following the entanglement entropy's area
law, up to additive corrections), it remains an open question whether generic optimization can reliably reproduce highly-entangled ground states,
as the ones that are possibly emerging in the frustrated regime $J_2/J_1 \approx 0.5$~\cite{hu2013,morita2015,ferrari2020,nomura2020}. Therefore,
in this work, we do not directly address the question of the nature of the magnetically disordered phase; instead, we focus our attention to the
magnetically ordered phase with $J_2/J_1 \le 0.45$ and perform an accurate determination of the magnetization curve as a function of the frustrating
ratio. In addition to its conceptual importance, the problem of the disappearance of antiferromagnetic order under increasing frustration offers a
stringent test to most numerical methods, in general, and to tensor network methods, in particular. To this end, we apply the same ideas of AAD
to optimize the iPEPS {\it Ansatz} for the $J_1-J_2$ model of Eq.~(\ref{eq:j1j2ham}). Importantly, unlike the previously proposed gradient-based
optimizations~\cite{Vanderstraeten2016,corboz2016}, the AAD can be effortlessly extended beyond nearest-neighbor Hamiltonians. The energy and
magnetization are obtained for different values of the bond dimension $D$, from $2$ up to $7$. Then, the estimates for $D \to \infty$ are obtained
for each frustration ratio $J_2/J_1$. Note, however, that this is {\it not} realized by a crude extrapolation in $1/D$ (for which the results for
different values of $D$ are considerably scattered) but, instead, by performing a correlation-length extrapolation, which is motivated by the
finite-size scaling analysis that is well established in the N\'eel phase, as recently proposed in Refs.~\cite{rader2018,corboz2018}. Despite the
fact that this mode of extrapolation requires the calculation of the correlation length $\xi$, which may not be as accurate as other thermodynamic
quantities (e.g., energy and magnetization), it has been shown to give remarkably good results for the unfrustrated Heisenberg model. In fact,
as we have mentioned earlier, even though iPEPS can describe certain gapless phases, their generic optimization instead leads to states with
finite correlation lengths, e.g., in the N\'eel phase, and the bond dimension $D$ turns out not to be the correct object to quantify this aspect.
As we will show, also in the presence of frustration, the analysis based on the correlation length gives reliable thermodynamic estimates, even
though no exact results are available. Our calculations are compatible with a vanishing magnetization for $J_2/J_1 \approx 0.45$, which is in
close agreement with recent calculations~\cite{hu2013,morita2015,wang2018,ferrari2020,nomura2020} and give a reference for future investigations.
The paper is organized as follows: in section~\ref{sec:method}, we will describe the iPEPS method; in section~\ref{sec:results}, we present the
results; in section~\ref{sec:concl}, we finally draw our conclusions and discuss the perspectives.
\section{iPEPS {\it Ansatz} and its optimization}\label{sec:method}
\subsection{General aspects}
We parametrize the state by a single real tensor
\begin{equation}
a^s_{uldr} = \raisebox{-10pt}{\includegraphics[width=0.06\textwidth]{ipeps-1site-c4v-onsite-tensor.pdf}}
\end{equation}
with a physical index $s=\uparrow,\downarrow$ labeling the standard $S^z$ basis of the local physical Hilbert space and auxiliary (or virtual)
indices $u,l,d,r$ of bond dimension $D$ (by convention running from $0$ to $D-1$ here). The physical wave function is then obtained by tiling
the infinite square lattice with tensor $a$ and tracing over all auxiliary indices
\begin{gather}
\psi(a)= \sum_{\{s\}} c(a)_{\{s\}}|\{s\}\rangle \nonumber \\
c(a)_{\{s\}} := \text{Tr}_{aux}(a^{s_0}a^{s_1}a^{s_2}\ldots)
=
\raisebox{-18pt}{\includegraphics[width=0.15\textwidth]{ipeps-1site-c4v.pdf}}
\end{gather}
The tensor $a$ is chosen (and constructed so as) to be invariant under a number of symmetries. First of all, it belongs to the $A_1$ irreducible
representation of the $C_{4v}$ point group, thus enforcing all the spatial symmetries of the square lattice on the iPEPS. The antiferromagnetic
correlations are incorporated in the {\it Ansatz} by unitaries $-i\sigma^y$, which rotate the physical $S^z$ basis at every site of one sublattice.
We absorb these unitaries into the observables, leaving the definition of the wave function untouched [see Eq.~(\ref{eq:epersite})].
Secondly, the tensor $a$ also possesses a further structure by requiring certain transformation properties under the action of $U(1)$ group (see
below). Such a choice is motivated by the remaining $U(1)$ symmetry in the ordered phase, which manifests itself as the equivalence between different
magnetizations connected by transverse (Goldstone) modes. As detailed below, $U(1)$ tensor classes are defined by assigning specific ``charges''
to the virtual and physical degrees of freedom.
When considering $A_1$- and $U(1)$-symmetric states, the tensor $a=a(\vec{\lambda})$ is taken to be a linear combination of (fixed) elementary
tensors $\{t_0, t_1, \ldots \}$ (named a tensor ``class'') such that
\begin{equation}
a(\vec{\lambda}) = \sum_i \lambda_i t_i \, ,
\end{equation}
with coefficients $\vec{\lambda}$ being the variational parameters. The elementary tensors $\{t_0, t_1, \ldots \}$ are different representatives
of the $A_1$ irreducible representation for some choice of the $U(1)$ charges.
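To make the parametrization concrete, here is a minimal sketch of building $a(\vec{\lambda})$ from a stack of elementary tensors; the tensors below are random placeholders, not the actual $U(1)$-symmetric class:

```python
import numpy as np

# hypothetical stack of two elementary tensors t_i for D = 2
# (random placeholders; the actual t_i are fixed U(1)-symmetric tensors)
rng = np.random.default_rng(0)
t = rng.standard_normal((2, 2, 2, 2, 2, 2))   # axes: (i, s, u, l, d, r)
lam = np.array([0.8, -0.3])                   # variational parameters lambda_i

# a(lambda) = sum_i lambda_i t_i
a = np.tensordot(lam, t, axes=1)
print(a.shape)   # (2, 2, 2, 2, 2)
```

During the optimization, only the coefficients $\lambda_i$ change; the elementary tensors stay fixed.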
Given an iPEPS defined by tensor $a$, the evaluation of any observable $\mathcal{O}$ amounts to the contraction of an infinite double-layer network
composed of tensors $a$ together with the tensor representation of $\mathcal{O}$. Such a tensor network is the diagrammatic equivalent of the usual
expression $\langle \mathcal{O} \rangle = \langle\psi(a)| \mathcal{O} |\psi(a)\rangle$. A central aspect of the iPEPS method is the approximate
contraction of such networks. In this work, we realize it by finding the so-called environment tensors $C$ and $T$ of dimension $\chi$, dubbed the
environment dimension, by means of the corner-transfer matrix (CTM) procedure~\cite{nishino1998}. These tensors compress parts of the original
infinite network into approximate but finite-dimensional objects. Afterwards, the desired reduced density matrices can be constructed from $C$ and
$T$, together with the on-site tensor $a$. Ultimately, the exact value of any observable is recovered by taking $\chi \to \infty$, which
we extrapolate from the data for increasingly large $\chi$.
The optimization of tensor $a$ (or, equivalently, of $\vec{\lambda}$ in the $U(1)$-symmetric approach) is carried out using the standard
gradient-based L-BFGS method, supplemented with a backtracking linesearch. The gradients are evaluated by AAD, which back-propagates the gradient
through the whole process of the energy evaluation at fixed $\chi$~\cite{liao2019}: starting with a given CTM, followed by assembling the
reduced-density matrices from the converged $\{C,T\}$ tensors and finally evaluating the spin-spin interaction between nearest and next-nearest
neighbors.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{\label{fig:rdms}
Definition of reduced-density matrices necessary for evaluating the energy per site of the $J_1-J_2$ model over single-site iPEPS with
$C_{4v}$ symmetry. (a) Double-layer tensors with contracted and uncontracted physical indices. (b) Infinite tensor network corresponding to the
next-nearest-neighbour $\rho^{(NNN)}$ as approximated by $\rho^{(NNN)}_\chi$ in the finite network with $C$ and $T$ tensors resulting from CTM.
(c) Finite-network approximation of nearest-neighbour $\rho^{(NN)}_\chi$ within the same $2\times2$ cluster.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{\label{fig:ctm}
Key steps of the CTM algorithm for single-site iPEPS with $C_{4v}$ symmetry. (i) Initial tensors at iteration $i$: $\{ C^{(i)}, T^{(i)} \}$.
(ii) Construction of enlarged corner and its reshaping into matrix of dimensions $D^2\chi \times D^2\chi$. (iii) Symmetric eigenvalue decomposition
of enlarged corner and truncation down to leading $\chi$ eigenpairs by magnitude of the eigenvalues. Truncation is always done at the boundaries
between degenerate eigenvalues (see text). (iv) Absorption and truncation with isometry $P$ from step (ii) for half-row/-column tensor $T$.}
\end{figure}
\subsection{Extracting the relevant $U(1)$ charges}\label{subsec:smith}
For small enough frustration, in the N\'eel phase, an unconstrained optimization of tensor $a$ leading to a correct $U(1)$-symmetric iPEPS would be
a desirable outcome. Under favourable circumstances, the AAD optimization can indeed arrive at an almost $U(1)$-symmetric tensor $\tilde{a}$. In
such a case, direct and robust evidence can be seen in the nearly degenerate pairs of leading eigenvalues of the transfer matrix. Importantly, such
iPEPS states provide unbiased information about the energetically favourable $U(1)$-charge structure of tensor $a$. We are concerned with inferring
these charges from the elements of tensor $\tilde{a}$. Obtaining the correct charge assignment of the smallest-$D$ tensors allows one (i) to perform
an efficient variational optimization over a greatly reduced number of parameters $\vec{\lambda}$, (ii) to obtain truly $U(1)$-symmetric environments
via the CTM and, finally, (iii) to predict the correct charge content of higher-$D$ $a$ tensors and, hence, to perform (i) and (ii) for larger $D$.
Before describing how to achieve the goal of obtaining the charges from the almost symmetric $\tilde{a}$ tensor, let us first briefly review the
expected properties of the resulting $U(1)$-symmetric $a$ tensor. In practice, one has to assign $U(1)$ charges $\vec{u}=(u^\uparrow, u^\downarrow)$
and $\vec{v}=(v_0,\ldots,v_{D-1})$ to the two physical spin-1/2 components and the $D$ virtual degrees of freedom on each of the four auxiliary indices. Without loss of generality we take them to be integers. In this language, $U(1)$ invariance is realized by simply enforcing a selection rule for the non-zero tensor
elements $a^{s}_{uldr}$ which should exhibit a local charge conservation
\begin{equation}
\label{eq:u1symcond}
u^{s} + v_{u} + v_{l} + v_{d} + v_{r} = N\, ,
\end{equation}
where $N$ is some fixed integer. Notice that in order to preserve $C_{4v}$ symmetry the same $\vec{v}$, associated to all the virtual indices,
is taken on the four legs of the tensor. Note also that there is some freedom in the definition of the charges since shifts like
$u^s \to u^s+\alpha$, $v_\sigma \to v_\sigma+\beta$, and $N \to N+\alpha+4\beta$, with $\alpha$ and $\beta \in \mathds{Z}$, leave
Eq.~(\ref{eq:u1symcond}) invariant. It is easy to connect this charge conservation to the $U(1)$ invariance of the $a$ tensor. Indeed, the
action of any element $g \in U(1)$ on $a$ is given by:
\begin{equation}
a^s_{uldr} \to (ga)^s_{uldr} = a^{s'}_{u'l'd'r'} U^{ss'} V_{uu'} V_{ll'} V_{dd'} V_{rr'},
\end{equation}
where $U$ and $V$ are diagonal matrices depending on $g$, and all auxiliary indices are transformed by the same $V$:
\begin{eqnarray}
U^{ss'} &=& e^{i\theta_g u^s}\delta^{ss'}, \\
V_{\gamma \gamma'} &=& e^{i\theta_g v_{\gamma}}\delta_{\gamma \gamma'},
\end{eqnarray}
with the phase $\theta_g \in \mathbb{R}$ and $\gamma=0,\ldots,D-1$. Therefore, the non-zero elements of the tensor $a$ transform according to
\begin{equation}\label{eq:u1trans}
(ga)^s_{uldr} = a^{s}_{uldr} e^{i\theta_g(u^{s} + v_{u} + v_{l} + v_{d} + v_{r})} \, .
\end{equation}
Hence, Eq.~(\ref{eq:u1symcond}) implies that $a$ is indeed invariant up to a global phase under the action of $U(1)$. Once the relevant $U(1)$ charges $\vec{u}$ and
$\vec{v}$ are known (see below), Eq.~(\ref{eq:u1symcond}) is used, in practice, in the construction of the elementary tensors $\{t_0, t_1, \dots \}$
by filtering out their non-zero elements.
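As an illustration, the selection rule of Eq.~(\ref{eq:u1symcond}) can be applied directly with the $D=2$ charges of Table~\ref{tab:table1}; grouping the allowed elements into orbits of the $C_{4v}$ point group then reproduces the two elementary tensors listed there. The following is a sketch under these assumptions:

```python
import itertools

# U(1) charges for D = 2, as in Table I (gauge fixed by N = 1)
u = [1, -1]        # physical charges: u^up, u^down
v = [0, 2]         # virtual charges: v_0, v_1
N, D = 1, 2

# enumerate tensor elements a^s_{uldr} allowed by the selection rule
allowed = [(s, idx) for s in range(2)
           for idx in itertools.product(range(D), repeat=4)
           if u[s] + sum(v[k] for k in idx) == N]

# group allowed elements into orbits of the C4v point group,
# which acts on the auxiliary indices (u, l, d, r)
def orbit(idx):
    images, cur = set(), idx
    for _ in range(4):
        cur = (cur[1], cur[2], cur[3], cur[0])        # 90-degree rotation
        images.add(cur)
        images.add((cur[0], cur[3], cur[2], cur[1]))  # reflection l <-> r
    return frozenset(images)

orbits = {(s, orbit(idx)) for s, idx in allowed}
print(len(allowed), len(orbits))   # 5 allowed elements, 2 independent tensors
```

The five allowed elements collapse into two $C_{4v}$ orbits, matching the ``number of tensors'' column of the table.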
\begin{table}[b]
\caption{\label{tab:table1}
$U(1)$ charges as inferred from unrestricted simulations with bond dimensions $D=2,\dots,7$. Predictions of the charges for $D=8$ and $9$ are
also shown. Note that the ordering of the $v_{\alpha}$ charges is arbitrary and the gauge freedom has been fixed by taking $N=1$. The last column
shows the number of elementary tensors $t_i$. }
\begin{ruledtabular}
\begin{tabular}{lll}
$D$ & $[ u^\uparrow, u^\downarrow, v_0,v_1,\cdots, v_{D-1}] $ & number of tensors\\
\colrule
2 & $[1,-1,0,2]$ & 2 \\
3 & $[1,-1,0,2,0]$& 12 \\
4 & $[1,-1,0,2,-2,0]$& 25 \\
5 & $[1,-1,0,2,-2,0,2]$& 52 \\
6 & $[1,-1,0,2,-2,0,2,-2]$& 93 \\
7 & $[1,-1,0,2,-2,0,2,-2,2]$& 165 \\
8 & $[1,-1,0,2,-2,0,2,-2,0,2]$& 294 \\
9 & $[1,-1,0,2,-2,0,2,-2,0,2,-2]$& 426 \\
\end{tabular}
\end{ruledtabular}
\end{table}
Let us now describe how to infer the charges from an unrestricted tensor optimization that has produced an almost symmetric on-site tensor
$\tilde{a}$ with bond dimension $D$. To identify the dominant (at least for small $D$) $U(1)$-symmetric component of $\tilde{a}$, and then
ultimately derive the hidden $U(1)$ charges, we first perform a higher-order singular value decomposition of $\tilde{a}$:
\begin{equation}
{\tilde a}^s_{uldr} = Z^{ss'}Y_{uu'}Y_{ll'}Y_{dd'}Y_{rr'} c^{s'}_{u'l'd'r'},
\end{equation}
with unitary matrices $Z$, $Y$, and the so-called core tensor $c$. The same unitary $Y$ is associated to different auxiliary legs due to the
enforced $C_{4v}$ symmetry. The core tensor $c$ plays an analogous role to singular values in standard singular value decomposition of a matrix.
The untruncated core tensor $c$ by itself defines an iPEPS physically equivalent to the one given by $\tilde{a}$. A good lower-rank approximation
of $\tilde{a}$ can be obtained by truncating the smallest elements of the core tensor $c$. The basic premise, supported by the nearly degenerate
transfer-matrix spectrum for small $D$, is that the relative magnitude of the symmetry-breaking elements of tensor $c$ is small. Therefore, we assume
that the largest elements of tensor $c$ respect the $U(1)$-symmetry constraint associated to an unknown set of charges $\vec{u}$ and $\vec{v}$.
For the last step in identifying the charges, we re-formulate the problem in terms of linear algebra. First, taking a set of $n$ largest tensor
elements (modulo $C_{4v}$ symmetry), and writing down Eq.~(\ref{eq:u1symcond}) for each of them will result in a set of $n$ coupled linear
equations (with integer coefficients) for the $D+2$ unknown charges. Whenever $n>D+2$, the linear system becomes overdetermined and increasing $n$
still allows the same solution for the charges, unless $n$ is taken so large that (small) non-zero tensor elements breaking the $U(1)$ symmetry
are included. To solve this linear problem it is convenient to recast the constraints into an $n \times (D+2)$ matrix. The matrix, containing
integer matrix elements, is obtained by simply counting the total number of charges of each type $\gamma$ and $s$ on the virtual and physical
legs, respectively. More precisely, we define vectors $\vec{n}(c^s_{uldr})$ of integer coordinates that count the number of times a specific
{\it index value} appears among the indices of a given tensor element. Expressing each individual element constraint~(\ref{eq:u1symcond}) as
$\vec{n}(c^s_{uldr})\cdot(\vec{u},\vec{v}) = N$ and recasting them into matrix form, the linear system can be written in a compact fashion as
$M\cdot(\vec{u},\vec{v}) = \vec{N}$.
To be explicit, let us consider the case $D=3$ for which all charges can be obtained using only the $n=D+2=5$ largest tensor elements of tensor $c$:
\begin{equation}
\begin{matrix}
\vec{n}(c^\uparrow_{0000}) & \to \\
\vec{n}(c^\downarrow_{0001}) & \to \\
\vec{n}(c^\uparrow_{0002}) & \to \\
\vec{n}(c^\uparrow_{2222}) & \to \\
\vec{n}(c^\uparrow_{0222}) & \to
\end{matrix}
\begin{bmatrix}
1 & 0 & 4 & 0 & 0 \\
0 & 1 & 3 & 1 & 0 \\
1 & 0 & 3 & 0 & 1 \\
1 & 0 & 0 & 0 & 4 \\
1 & 0 & 1 & 0 & 3
\end{bmatrix}
\cdot
\begin{bmatrix}
u^\uparrow \\
u^\downarrow \\
v_0 \\
v_1 \\
v_2
\end{bmatrix}
=
\begin{bmatrix}
N \\
N \\
N \\
N \\
N
\end{bmatrix}.
\label{eq:matrixM}
\end{equation}
If the tensor $c$ possesses an (approximate) $U(1)$-symmetric structure (as in the example above), then the linear system has a non-trivial solution
in terms of the charges $\vec{u}$ and $\vec{v}$. To solve it, one needs to bring the matrix $M$ into its Smith normal form (see
Appendix~\ref{appA}). Note here that the integer $N$ can, in fact, be changed arbitrarily. Although the explicit values of the charges will depend
on $N$, the $U(1)$ class of $a$ tensors will not. In other words, there is some ``gauge'' freedom in the determination of each $U(1)$ class. For the
example with $D=3$ considered here, we get the integer charges $u^\uparrow=+1$, $u^\downarrow=-1$, $v_0=0$, $v_1=2$ and $v_2=0$, as can be checked
by direct substitution in Eq.~(\ref{eq:matrixM}) choosing $N=1$. A complete list of the relevant charges is shown in Table~\ref{tab:table1} for bond
dimension up to $D=9$.
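The counting construction can be checked mechanically. The sketch below rebuilds the matrix $M$ of Eq.~(\ref{eq:matrixM}) from the five index tuples and verifies, by direct substitution, that the charges quoted above solve the system with $N=1$:

```python
import numpy as np

D = 3
# (s, (u, l, d, r)) index tuples of the five largest core-tensor elements
elements = [(0, (0, 0, 0, 0)),   # c^up_{0000}
            (1, (0, 0, 0, 1)),   # c^down_{0001}
            (0, (0, 0, 0, 2)),   # c^up_{0002}
            (0, (2, 2, 2, 2)),   # c^up_{2222}
            (0, (0, 2, 2, 2))]   # c^up_{0222}

# one row per element: count the occurrences of each charge type
M = np.zeros((len(elements), 2 + D), dtype=int)
for row, (s, virt) in enumerate(elements):
    M[row, s] += 1                 # physical charge u^s
    for k in virt:
        M[row, 2 + k] += 1         # virtual charges v_k

charges = np.array([1, -1, 0, 2, 0])   # (u^up, u^down, v_0, v_1, v_2), N = 1
print(M @ charges)   # [1 1 1 1 1]
```

Every row of $M\cdot(\vec{u},\vec{v})$ indeed equals $N=1$, confirming the quoted solution.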
\subsection{Reduced density matrices, CTM algorithm, and implementation details}
The evaluation of the energy is realized through two distinct reduced-density matrices (RDM), $\rho^{(NN)}$ and $\rho^{(NNN)}$, for nearest- and
next-nearest-neighbour sites, respectively. Their diagrammatic definition is shown in Fig.~\ref{fig:rdms}. The energy per site is then given by:
\begin{equation}
\label{eq:epersite}
e = 2 J_1 \text{Tr} \left [ \rho^{(NN)} \mathbf{S} \cdot \mathbf{\tilde{S}} \right ] +
2 J_2 \text{Tr} \left [ \rho^{(NNN)} \mathbf{S} \cdot\mathbf{S} \right ],
\end{equation}
with $\tilde{S}^\alpha = -\sigma^y S^\alpha (\sigma^y)^T$, as these are the only non-equivalent terms of Hamiltonian~(\ref{eq:j1j2ham}) acting
on the single-site iPEPS with $C_{4v}$ symmetry.
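As a quick sanity check, the rotated operators can be verified numerically: the definition above yields $\mathbf{\tilde{S}}=(-S^x, S^y, -S^z)$, i.e., the $\pi$ rotation about the $y$-spin axis appropriate for the two-sublattice N\'eel pattern:

```python
import numpy as np

# spin-1/2 operators and the Pauli matrix sigma^y
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

# tilde(S)^alpha = -sigma^y S^alpha (sigma^y)^T
tilde = lambda S: -sy @ S @ sy.T

assert np.allclose(tilde(Sx), -Sx)   # S^x flips sign
assert np.allclose(tilde(Sy), Sy)    # S^y is invariant
assert np.allclose(tilde(Sz), -Sz)   # S^z flips sign
print("sublattice rotation verified")
```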
The two RDMs are obtained by substituting the environment of a $2\times2$ cluster within the infinite tensor network with the CTM approximation
and tracing out all but two (nearest-neighbor or next-nearest-neighbor) sites. The leading computational cost in the contraction of these networks is
$O[(\chi D^2)^3 p^2]$, with $p=2$ being the dimension of the physical index $s$. A more complete alternative is to consider an RDM of {\it all} four
spins contained within the cluster. However, contracting such a network with eight open physical indices is more expensive in terms of computational
complexity and memory requirements, as both are amplified by a factor of $p^2$.
The most demanding part of the calculations is the CTM algorithm. Given the highly constrained nature of our iPEPS, in particular the $C_{4v}$
symmetry imposed on tensor $a$, we can utilize the efficient formulation of the algorithm of Ref.~\cite{nishino1998}. The $C_{4v}$ symmetry
of the on-site tensor $a$ is reflected in the corner matrix $C$, which is taken to be diagonal, and in the half-row/-column tensor $T$, which is
symmetric with respect to the permutation of its environment indices. We show the diagrammatic description of the main steps within a single CTM
iteration in Fig.~\ref{fig:ctm}.
A few more remarks are in order regarding the implementation of the CTM algorithm. The initial $C$ and $T$ tensors are given by the partially
contracted double-layer tensor, e.g., $C_{(dd')(rr')}=\sum_{sul} a^s_{uldr}a^s_{uld'r'}$. In addition, after each step of the CTM the tensors $C$
and $T$ are symmetrized accordingly and normalized by their largest element. To establish the convergence of the CTM, we use the RDM of nearest
neighbors $\rho^{(NN)}_{2\times1}$, computed just from the $2\times1$ cluster at each CTM step. Once the difference (in the sense of the Frobenius
norm) between $\rho^{(NN)}_{2\times1}$ from two consecutive iterations becomes smaller than $\epsilon_{CTM}$, we consider the CTM converged. During
the optimization we set $\epsilon_{CTM} = 10^{-8}$, which typically requires at most $O(70)$ iterations to converge for the largest
$(D,\chi_{opt})=(7,147)$ simulations considered. For the scaling of observables of optimized states, we instead iterate the CTM until
$\epsilon_{CTM} = 10^{-12}$. Remarkably, the $U(1)$ symmetry is preserved along the CTM procedure, whenever we adjust the truncation so as never to
break the multiplet structure of the enlarged corner.
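For orientation, the CTM iteration of Fig.~\ref{fig:ctm} can be condensed into a few lines of dense tensor algebra. The sketch below (plain NumPy, no $U(1)$ block structure, and with illustrative index conventions rather than the actual ones of our implementation) performs steps (ii)-(iv) for a $C_{4v}$-symmetric single-site iPEPS:

```python
import numpy as np

def c4v_symmetrize(a):
    """Project a^s_{uldr} (axes s,u,l,d,r) onto the C4v-invariant subspace."""
    perms, p = [], (1, 2, 3, 4)
    for _ in range(4):
        p = (p[1], p[2], p[3], p[0])               # rotate (u,l,d,r)
        perms.append((0,) + p)
        perms.append((0, p[0], p[3], p[2], p[1]))  # reflect l <-> r
    return sum(a.transpose(q) for q in perms) / len(perms)

def ctm_step(a, C, T, chi):
    """One CTM iteration: enlarge, diagonalize, truncate, absorb."""
    D = a.shape[1]
    # double-layer tensor A with composite indices (uu')(ll')(dd')(rr')
    A = np.einsum('suldr,szwxy->uzlwdxry', a, a).reshape(D*D, D*D, D*D, D*D)
    # (ii) enlarged corner: C_{ij} with a T on each leg, capped by A
    Cp = np.einsum('ij,iak,jbl,badr->kdlr', C, T, T, A)
    M = Cp.reshape(C.shape[0] * D * D, -1)
    M = 0.5 * (M + M.T)                            # symmetrize away noise
    # (iii) symmetric eigendecomposition, keep the chi leading eigenpairs
    w, U = np.linalg.eigh(M)
    keep = np.argsort(np.abs(w))[::-1][:chi]
    P = U[:, keep]                                 # isometry (chi*D^2) x chi
    C_new = np.diag(w[keep])
    # (iv) absorb A into T and truncate with the same isometry P
    Tp = np.einsum('ilj,uldr->iurjd', T, A).reshape(P.shape[0], D*D, P.shape[0])
    T_new = np.einsum('mx,man,ny->xay', P, Tp, P)
    T_new = 0.5 * (T_new + T_new.transpose(2, 1, 0))
    return C_new / np.abs(C_new).max(), T_new / np.abs(T_new).max()

# initialization from the partially contracted double-layer tensor
rng = np.random.default_rng(1)
D, chi = 2, 4
a = c4v_symmetrize(rng.standard_normal((2, D, D, D, D)))
C = np.einsum('suldr,sulxy->dxry', a, a).reshape(D*D, D*D)
T = np.einsum('suldr,suwxy->lwdxry', a, a).reshape(D*D, D*D, D*D)
C, T = ctm_step(a, C, T, chi)
print(C.shape, T.shape)   # (4, 4) (4, 4, 4)
```

A production version additionally exploits the $U(1)$ block structure and truncates only at multiplet boundaries, as described above.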
Finally, a peculiar complication is present in the process of computing gradients by AAD, with two distinct aspects. First, the standard definition
of the adjoint function of the eigenvalue (or singular value) decomposition relies on computing the full decomposition~\cite{giles2008}. Hence, in
this context one cannot resort to significantly faster partial decompositions such as Lanczos (at least during the gradient computation). This sets
the leading complexity of a CTM iteration to $O[(\chi D^2)^3]$. A recently developed differentiable dominant eigensolver tries to address this
shortcoming by an alternative adjoint formula~\cite{Xie2020}. The second, more fundamental aspect is the ill-defined adjoint in the case of
degenerate eigenvalues, stemming from the terms proportional to the inverse of the spectral gaps. We use a smooth cutoff function~\cite{liao2019}
to tame these problematic terms. Even so, accidental crossings of eigenvalues in the course of the CTM sometimes result in erroneous gradients. In
general, we found this occurrence, manifested by the failure of the linesearch, to be rare. The formulation of AAD applied to gauge-invariant
scalars (such as the energy), whose computation however involves an eigendecomposition with a degenerate spectrum, still remains an open problem.
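To illustrate the problematic term, here is a sketch of the reverse-mode adjoint of a symmetric eigendecomposition with a Lorentzian-style regularization of the $1/(\lambda_j-\lambda_i)$ factors (one possible choice of smooth cutoff, for illustration; the actual function used in Ref.~\cite{liao2019} may differ):

```python
import numpy as np

def eigh_adjoint(w, V, wbar, Vbar, eps=1e-12):
    """Adjoint of A = V diag(w) V^T for symmetric A (reverse-mode AAD).

    The exact formula contains F_ij = 1/(w_j - w_i), which diverges for
    (nearly) degenerate eigenvalues; a smooth cutoff tames the divergence.
    """
    dif = w[None, :] - w[:, None]
    F = dif / (dif**2 + eps)          # regularized 1/(w_j - w_i)
    np.fill_diagonal(F, 0.0)
    Abar = V @ (np.diag(wbar) + F * (V.T @ Vbar)) @ V.T
    return 0.5 * (Abar + Abar.T)      # project back onto symmetric matrices

# check: the gradient of the largest eigenvalue of A is v_max v_max^T
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = 0.5 * (B + B.T)
w, V = np.linalg.eigh(A)
wbar = np.zeros_like(w); wbar[-1] = 1.0          # df/dw_i for f = w_max
Abar = eigh_adjoint(w, V, wbar, np.zeros_like(V))
print(np.allclose(Abar, np.outer(V[:, -1], V[:, -1])))   # True
```

When two eigenvalues cross, the regularized $F$ stays finite but no longer matches the exact (ill-defined) adjoint, which is precisely the failure mode described above.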
The complete algorithm is available as a part of the open-source library {\it peps-torch}~\cite{pepstorch} focused on AAD optimization of iPEPS.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{\label{fig:TM}
Top: Definition of the transfer matrix $E$ and its finite-$\chi$ approximation $E_\chi$ given by the converged $T$ tensor. Due to $C_{4v}$ symmetry
imposed on the ansatz, the transfer matrix is symmetric and can be diagonalized. Eigenvalues are ordered with descending magnitude with the leading
eigenvalue $\lambda_0$ normalized to unity. Bottom: RDM for two-point correlation functions, defined for $r \geq 1,$ and its connection to transfer
matrix $E$.}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig4.pdf}
\caption{\label{fig:energy}
Finite correlation-length scaling of the energy per site for the $C_{4v}$-symmetric $U(1)$ iPEPS {\it Ansatz} with bond dimensions $D=3,\dots,7$
(denoted by triangles, hexagons, pluses, diamonds, and crosses in the same order).
Continuous lines are linear fits in $1/\xi^3$ which is the expected scaling in the magnetically ordered phase~\cite{rader2018}.}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig5.pdf}
\caption{\label{fig:magnet}
Finite correlation-length scaling of the magnetization for the $C_{4v}$-symmetric $U(1)$ iPEPS {\it Ansatz} with bond dimensions $D=2,\dots,7$
(denoted by circles, triangles, hexagons, pluses, diamonds, and crosses in the same order). The magnetization is plotted as a function of $1/\xi$,
expected in the magnetically ordered phase~\cite{rader2018}. Linear (quadratic) extrapolations of magnetization, excluding $D=2$ data, are reported
in the left (right) panel, except for $J_2/J_1=0.5$.}
\end{figure*}
\section{Results}\label{sec:results}
Our analysis is based upon an extensive set of calculations for various bond dimensions, ranging from $D=2$ to $7$, and different values of the
frustrating ratio $J_2/J_1$ up to $0.5$. For the large bond dimensions considered, the optimizations have been performed with environment dimensions
up to $\chi_{opt}=4D^2$ in the case of $D=5,6$ and up to $\chi_{opt}=3D^2$ for $D=7$. Here, we want to highlight a few important aspects of iPEPS
that are crucial for the investigation of the magnetically ordered phase. First of all, within optimizations with no imposed symmetries, there is
a generic tendency to break the physical $U(1)$ symmetry of the N\'eel state (corresponding to global rotations around the axis of the spontaneous
magnetization), leading to a slight (spin) nematic order, e.g., different values of the nearest-neighbour $S^xS^x$ and $S^yS^y$ correlations. This
effect becomes more severe with increased frustration. For example, for most of the states with $D>3$ and $J_2 \gtrsim 0.3$, there is a sizable
(e.g., $5-10\%$ or even larger) difference in the correlation lengths corresponding to the transverse directions. Connected to this issue, we
observe that it is possible to stabilize distinct ``families'' of local minima for various bond dimensions $D$, in particular $D=3$ and $4$, with
substantial differences in their magnetization, correlation length, and the degree of nematic order. Every family corresponds to a specific way
the quantum fluctuations are built on top of the classical N\'eel state, e.g., by converging towards one of the possible choices of $U(1)$ charges
or breaking the symmetry completely. Given the limited number of bond dimensions that are available within our AAD optimization, it is then of
utmost importance to identify the family of minima that are connected and lead to a smooth and physically sound extrapolation in the $D \to \infty$
limit. Therefore, using the scheme introduced in Sec.~\ref{subsec:smith}, we take the optimized and almost $U(1)$-symmetric states from unrestricted
simulations (typically for $J_2 \approx 0$) and infer their charge structure. The charges revealed by this analysis are listed in
Table~\ref{tab:table1} and define the correct classes of $C_{4v}$-symmetric $U(1)$ iPEPS for $D$ ranging from $2$ to $7$, which best describe
the N\'eel phase.
In order to obtain the thermodynamic estimates of the ground-state energy and magnetization (within the magnetically ordered phase), we compute
these quantities for increasing values of the bond dimension $D$. A brute-force extrapolation in $1/D$ provides poor estimates, given the fact
that the data are usually scattered; see, for example, the case of the magnetization reported in Appendix~\ref{appB}. Instead, we follow the recent
proposal that has been put forward in Refs.~\cite{rader2018,corboz2018}. In this respect, for every value of $D$ used, we compute the dominant
correlation length $\xi$ which is defined by the so-called transfer matrix $E$ of iPEPS, see Fig.~\ref{fig:TM}:
\begin{equation}
\xi = - \frac{1}{\log|\lambda_1|},
\end{equation}
where $\lambda_1$ is the second largest eigenvalue of the transfer matrix (without loss of generality, we assume that the largest one is
normalized to 1). We remark that the value of $\xi$ obtained in this way coincides with the correlation length of the usual spin-spin correlation
function (or, more precisely, the transverse correlations):
\begin{equation}\label{eq:spinspin}
\langle\mathbf{S}_0\cdot\mathbf{S}_r\rangle =
\begin{cases}
\text{Tr}[\rho^{(2)}(r)\mathbf{S}\cdot\mathbf{S}] & r \;\; {\rm even} \\
\text{Tr}[\rho^{(2)}(r)\mathbf{S}\cdot\mathbf{\tilde{S}}] & r \;\; {\rm odd}
\end{cases},
\end{equation}
where $\rho^{(2)}(r)$, defined in Fig.~\ref{fig:TM}, is the two-point RDM. To obtain the $\chi \to \infty$ limit of the correlation length, we
use the scaling formula~\cite{rams2018,rader2018}:
\begin{equation}
\frac{1}{\xi(\chi)} = \frac{1}{\xi(\infty)} + \alpha \left(\log \left| \frac{\lambda_3(\chi)}{\lambda_1(\chi)} \right|\right)^\beta,
\end{equation}
which allows for a more precise extrapolation of $\xi$ than the usual $1/\chi$ scaling across all ratios of $J_2/J_1$~\footnote{In general, one uses
the ratio of the second and third largest eigenvalues, $\lambda_1$ and $\lambda_2$; however, due to the $U(1)$ symmetry, they are always degenerate,
forcing us to consider the next largest eigenvalue $\lambda_3$.}.
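In practice, extracting $\xi$ from the finite-$\chi$ transfer matrix boils down to a few lines; the sketch below uses a random symmetric matrix as a stand-in for $E_\chi$:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((16, 16))
E = 0.5 * (B + B.T)                     # stand-in for the symmetric E_chi

lam = np.sort(np.abs(np.linalg.eigvalsh(E)))[::-1]
lam = lam / lam[0]                      # normalize leading eigenvalue to 1
xi = -1.0 / np.log(lam[1])              # dominant correlation length
gap = np.log(np.abs(lam[3] / lam[1]))   # gap ratio entering the chi-extrapolation
print(xi > 0)   # True
```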
Finally, the thermodynamic estimates of the energy and magnetization (squared) are obtained by a suitable fit in powers of $1/\xi$:
\begin{eqnarray}
e(\xi) &=& e(\infty) + \frac{A}{\xi^3} + O \left(\frac{1}{\xi^4}\right), \\
m^2(\xi) &=& m^2(\infty) + \frac{B}{\xi} + O \left(\frac{1}{\xi^2}\right)\, ,
\end{eqnarray}
where $m=|\text{Tr}[\rho^{(1)}\mathbf{S}]|$ and $\rho^{(1)}$ is the single-site RDM.
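The extrapolations themselves are ordinary linear fits in the appropriate power of $1/\xi$; a sketch with synthetic data (illustrative numbers, not the measured ones):

```python
import numpy as np

# synthetic data obeying the leading-order scaling forms exactly
xi = np.array([1.9, 2.6, 3.4, 4.3, 5.1])        # e.g., for D = 3,...,7
e_inf, A = -0.6694, 0.05
m2_inf, B = 0.095, 0.04
e  = e_inf + A / xi**3
m2 = m2_inf + B / xi

# e is linear in 1/xi^3, m^2 in 1/xi; polyfit returns [slope, intercept]
_, e_extrap  = np.polyfit(1.0 / xi**3, e, 1)
_, m2_extrap = np.polyfit(1.0 / xi, m2, 1)
print(round(e_extrap, 6), round(m2_extrap, 6))   # -0.6694 0.095
```

On exact synthetic data the intercepts recover $e(\infty)$ and $m^2(\infty)$ to machine precision; on real data, the scatter around the fit is what limits the accuracy.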
\begin{table}
\caption{\label{tab:enemag}
Ground-state energies (in units of $J_1$) $e(D,\chi)$ and squared magnetization $m^2(D,\chi)$ for $D=7$, which can be considered as upper bounds
of the exact $D \to \infty$ values. The tensor was optimized up to an environment dimension $\chi_{opt}=3D^2=147$. The $\chi \to \infty$
extrapolations are done from environment bond dimensions $\chi \in [D^2,13D^2]$.}
\begin{ruledtabular}
\begin{tabular}{lcccc}
$J_2/J_1$ & $e(7,147)$ & $e(7,\chi \to \infty )$ & $m^2(7,147)$ & $m^2(7,\chi \to \infty )$ \\
\hline
0.0 & -0.669428 & -0.669432 & 0.0994 & 0.0994 \\
0.05 & -0.649273 & -0.649277 & 0.0926 & 0.0926 \\
0.1 & -0.629497 & -0.629501 & 0.0852 & 0.0852 \\
0.15 & -0.610154 & -0.610159 & 0.0771 & 0.0771 \\
0.2 & -0.591314 & -0.591320 & 0.0685 & 0.0685 \\
0.25 & -0.573067 & -0.573076 & 0.0591 & 0.0591 \\
0.3 & -0.555520 & -0.555533 & 0.0491 & 0.0491 \\
0.35 & -0.538850 & -0.538867 & 0.0383 & 0.0382 \\
0.4 & -0.523054 & -0.523259 & 0.0270 & 0.0268 \\
0.45 & -0.508895 & -0.508976 & 0.0173 & 0.0173 \\
0.5 & -0.496152 & -0.496289 & 0.0086 & 0.0086 \\
\end{tabular}
\end{ruledtabular}
\end{table}
Let us start by discussing the ground-state energy, shown in Fig.~\ref{fig:energy}. For the unfrustrated case $J_2=0$, our results are fully
compatible with what has been previously obtained in Refs.~\cite{rader2018,corboz2018}. The data points align perfectly according to the theoretical
expectations and the extrapolated values are in very good agreement with quantum Monte Carlo results~\cite{sandvik1997,calandra1998}. For example,
for $D=7$ (after extrapolation in the environment dimension $\chi$) we get $e(D=7)=-0.669432$, which is identical to the linear extrapolation in
$1/\xi^3$ from $D=3$ to $7$. Including the subleading term $1/\xi^4$, the extrapolation gives $e(\infty)= -0.669437(2)$ (to be compared with the
exact value $e_{\rm QMC}=-0.669437(5)$~\cite{sandvik1997}). For future comparisons, the energies for $D=7$ and different $J_2/J_1$ ratios are
reported in Table~\ref{tab:enemag}. Upon increasing the frustrating ratio, a remarkably smooth behavior persists up to $J_2/J_1 \approx 0.3$;
then, for larger values, small fluctuations in the fourth digit of the energy are visible, possibly indicating that the scaling regime moves
to larger values of $\xi$ (or $D$), not reachable within our current possibilities. Still, the quality of the results is sufficient to obtain
reliable extrapolations for $\xi \to \infty$. Our calculations show that the expected scaling is not limited to the unfrustrated case, but
persists in the whole antiferromagnetic region, thus corroborating the ideas put forward in Refs.~\cite{rader2018,corboz2018}. One remarkable
feature is that, while for small values of $D$ (i.e., for $D=2$ and $3$), the correlation length $\xi$ clearly {\it increases} upon increasing
$J_2/J_1$, for larger values of $D$ (i.e., for $D=4$, $5$, $6$, and $7$), it is essentially constant, or even slightly decreasing, with $J_2/J_1$.
This aspect will be discussed in connection to the magnetization curve that is presented below.
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{\label{fig:final}
Magnetization (square) as a function of the frustrating ratio $J_2/J_1$ as obtained from Fig.~\ref{fig:magnet}. The exact result for $J_2=0$
is shown~\cite{sandvik1997}. For comparison, the variational Monte Carlo calculations of Ref.~\cite{ferrari2018} are also included.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig7.pdf}
\caption{\label{fig:xixi}
Longitudinal correlation length $\xi_L$, as extracted from the spin-spin correlations, as a function of the transverse one $\xi$ for different
values of $J_2/J_1$ at $D=3,\ldots,7$ (denoted by triangles, hexagons, pluses, diamonds, and crosses in the same order).}
\end{figure}
Then, we move to the central part of the present work, which deals with the magnetization, see Fig.~\ref{fig:magnet}. Here, we report $m^2(\xi)$
for different values of $J_2/J_1$ (including $0.5$) for $D$ ranging from $2$ to $7$. Furthermore, the raw data for $D=7$ are also shown in
Table~\ref{tab:enemag}. In the unfrustrated case, we get $m^2(D=7)=0.0994$ and $m^2(\infty)=0.0948(2)$, to be compared with the exact value
$m^2_{\rm QMC}=0.0942(2)$~\cite{sandvik1997}. In Fig.~\ref{fig:magnet}, we attempt both linear and quadratic fits. As in the case of the energy
extrapolations, we exclude the results with $D=2$ from the fitting procedure, since they are clearly off, especially for intermediate and large
values of $J_2/J_1$. According to our fits, the linear one looks more trustworthy than the quadratic one, which serves to give an upper bound on
the value of the magnetization. Within the linear fit, we observe a vanishing magnetization for $J_2/J_1 \approx 0.46(1)$, giving rise to a
continuous transition to a magnetically disordered phase, whose nature is beyond the scope of the present work. We would like to emphasize that
the results for $J_2/J_1=0.5$ are clearly incompatible with a smooth behavior in $1/\xi$, strongly suggesting that at this point the ground state
is already outside the magnetically-ordered phase. The final magnetization curve is shown in Fig.~\ref{fig:final}. For comparison, the variational
Monte Carlo calculations, which have been obtained by using Gutzwiller-projected fermionic states, are also shown~\cite{ferrari2018}. In the
latter case, a quantum critical point for $J_2/J_1 \approx 0.48$, separating the antiferromagnetic phase and a gapless spin liquid, has been
reported. The present results are expected to improve the accuracy of the magnetization (e.g., the accuracy of $m^2$ for the unfrustated case
is smaller than $1\%$). Still, these two independent calculations give very similar behavior, with almost compatible values for the location
of the quantum critical point. We would like to mention that, recent numerical calculations, including DMRG~\cite{wang2018}, neural-network
approaches (based upon restricted Boltzmann machines on top of fermionic states)~\cite{nomura2020}, and finite size PEPS calculations~\cite{liu2020}
also pointed out that the N\'eel phase survives up to $J_2/J_1$ in the range $0.45 \div 0.47$, a value that is considerably larger than the one
predicted by linear spin-wave theory~\cite{chandra1988}.
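The extrapolation procedure just described can be sketched in a few lines. The snippet below is illustrative only: the data are synthetic (generated to be exactly linear in $1/\xi$, with the unfrustrated value $m^2(\infty)=0.0948$ as intercept), not actual iPEPS results.

```python
import numpy as np

def extrapolate_m2(xi, m2, order=1):
    """Fit m^2 against 1/xi and evaluate the fit at 1/xi -> 0
    (the thermodynamic limit). order=1 gives the linear fit,
    order=2 the quadratic one used as an upper bound."""
    x = 1.0 / np.asarray(xi, dtype=float)
    coeffs = np.polyfit(x, np.asarray(m2, dtype=float), order)
    return np.polyval(coeffs, 0.0)

# Synthetic data, linear in 1/xi by construction (illustrative only):
xi = np.array([2.0, 4.0, 8.0, 16.0])
m2 = 0.0948 + 0.02 / xi
print(extrapolate_m2(xi, m2, order=1))  # recovers the intercept 0.0948
```

In practice, results at the smallest bond dimension ($D=2$) would be excluded from the fit, as done in the text.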
Finally, we would like to comment on the $J_2$-dependence of the correlation length, which is clearly different at small (i.e. $D=2,3$) and larger
(i.e. $D=4,\cdots,7$) bond dimensions. A possible explanation of the rapid increase of $\xi$, for $D=2$ and $3$, when approaching the critical
point, may be attributed to the fact that, for these very small bond dimensions, the antiferromagnetic state is poorly approximated as a ``dressed''
product state, having a finite magnetization but lacking the correct transverse (Goldstone) fluctuations. When approaching the phase transition,
the magnetization decreases and the state starts to build up long-range entanglement (for $D=3$ a short-range resonating-valence bond state can be
constructed~\cite{poilblanc2012}). Therefore, a larger correlation length can be attained. Once the basic (low-$D$) structure of the tensor is
established, optimizing at increasingly higher $D$ further improves the description of the antiferromagnetic state and allows the correlation length
to grow, becoming large even in the presence of significant frustration. Then, no appreciable change of $\xi$ is detected when approaching the
quantum critical point. In this respect, we expect that $\xi \to \infty$ in the whole N\'eel phase, including the critical point. Remarkably,
despite optimized iPEPS being finitely correlated, the correct exponent of the power-law decay of transverse spin-spin correlations, i.e.,
$\langle S^x_0 S^x_r \rangle \simeq 1/r$ (assuming magnetization along $z$-spin axis), can already be obtained, see Appendix~\ref{appC} for
the case with $J_2=0$.
As mentioned above, $\xi$ corresponds to the correlation length of transverse spin-spin correlations. In addition to that, it is possible to
evaluate, by a direct fitting procedure of the correlation function itself, the correlation length $\xi_L$ of the longitudinal correlations.
We find also this quantity to be relatively large, i.e., $\xi_L \approx \xi/2$, see Fig.~\ref{fig:xixi}. Moreover, as for transverse spin-spin
correlations, the short-range behavior of the longitudinal correlations reveals their power-law decay (see Appendix~\ref{appC}), which then
becomes rapidly cut off above the finite-$D$ induced length scale $\xi_L$. These findings show that our optimized iPEPS are even able to
approximately capture the power-law behavior of transverse and longitudinal spin-spin correlations of the N\'eel phase.
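As a sketch of such a direct fitting procedure, one can assume the short-range form $C(r) \simeq A\,r^{-1}{\rm e}^{-r/\xi_L}$ (a power law cut off at the finite-$D$ length scale), so that $\log C(r)+\log r$ is linear in $r$ with slope $-1/\xi_L$. The data below are synthetic, for illustration only.

```python
import numpy as np

def fit_corr_length(r, C):
    """Estimate xi from C(r) ~ A * r^{-1} * exp(-r/xi): taking logs,
    log C(r) + log r = log A - r/xi is linear in r with slope -1/xi."""
    r = np.asarray(r, dtype=float)
    y = np.log(np.asarray(C, dtype=float)) + np.log(r)
    slope, _ = np.polyfit(r, y, 1)
    return -1.0 / slope

# Synthetic correlations with xi = 5 (illustrative numbers only):
r = np.arange(1, 20)
C = 0.3 / r * np.exp(-r / 5.0)
print(fit_corr_length(r, C))  # ~5.0
```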
\section{Conclusions}\label{sec:concl}
In this work, we have investigated the antiferromagnetic phase of the spin-1/2 $J_1-J_2$ model on the square lattice, evaluating with unprecedented
accuracy the energies and magnetizations for $J_2/J_1 \le 0.45$. The results point towards the existence of a quantum critical point
at $J_2/J_1 \approx 0.46(1)$, which separates the N\'eel antiferromagnet and a quantum paramagnet, whose nature is beyond the scope of the present
study. The importance of our findings is twofold. From the methodological side, we combined state-of-the-art optimization techniques (based upon
the AAD scheme~\cite{liao2019}), clever parametrizations of the tensor network (based upon the underlying residual $U(1)$ symmetry that exists in
the N\'eel phase), and recently developed extrapolation analyses (based upon the correlation-length scaling~\cite{rader2018,corboz2018}).
In particular, the construction of $U(1)$-symmetric tensors is pivotal for a straightforward optimization procedure, and correlation-length scaling for
solid extrapolations to the thermodynamic limit. With these tools in hand, it is possible to obtain reliable estimates for the ground-state energy but,
most importantly, also for the magnetization within the frustrated regime, for which no exact methods can be applied. Therefore, the main outcome
of the present work is to provide the magnetization curve for the spin-1/2 $J_1-J_2$ model on the square lattice up to relatively large values of
the frustrating ratios. In particular, the magnetization curve shows a smooth behavior, which strongly suggests the existence of a continuous phase
transition towards a quantum paramagnet.
Here, our calculations have been limited to the magnetically ordered phase, where relatively entangled states have been achieved. Indeed, rather
long correlation lengths are obtained, indicating that the tensor network may approximately describe the existence of gapless excitations (i.e.,
Goldstone modes). The magnetically disordered phase still remains elusive, presumably because of its highly-entangled nature due to fractional
excitations (spinons and visons). In this respect, the recently-developed method to impose $SU(2)$ symmetry~\cite{mambrini2016,poilblanc2017b}
would be beneficial to the final understanding of the full phase diagram of the spin-1/2 $J_1-J_2$ model.
\begin{acknowledgments}
We thank Fabien Alet, Zhengcheng Gu, Andreas L{\"a}uchli, Wenyuan Liu, Pierre Pujol, Anders Sandvik, and Sandro Sorella for helpful discussions.
This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2020-P1231.
This project is supported by the TNSTRONG ANR-16-CE30-0025 and the TNTOP ANR-18-CE30-0026-01 grants awarded by the French Research Council.
\end{acknowledgments}
\section{Introduction}
One of the most remarkable discoveries of our time is related to the late time
acceleration of our universe which is supported by observations of high
redshift type Ia supernovae treated as standardized candles and, more
indirectly,
by observations of the cosmic microwave background and galaxy clustering.
The criticality of the universe supported by CMB observations fixes its total
energy budget. The study of large scale structure reveals that
nearly $30$ percent of the total cosmic budget is
contributed by dark matter. Then there is a deficit of
almost $70$ percent; in the standard paradigm, the missing component is an
exotic form
of energy with large negative pressure dubbed
{\it dark energy} \cite{d1,d2,d,CST,review,reviewI,reviewIV}.
The recent observations of baryon
oscillations provide yet another independent support to the dark energy
hypothesis~\cite{Eis}.
The dynamics of our universe is described by the Einstein equations in which
the contribution of the energy content of the universe is
represented by the energy-momentum tensor appearing on the RHS of these equations. The
LHS represents pure geometry
given by the curvature of space-time. The gravitational equations in their original
form
with the energy-momentum
tensor of normal matter cannot lead to acceleration. There are then two ways
to obtain accelerated expansion, either by supplementing energy-momentum tensor
by dark energy component or by modifying the geometry
itself.
The dark energy problem is one of the most important problems of modern
cosmology and, despite a number of efforts (for a recent review,
see \cite{CST,d}), there is no consistent theory which may
successfully describe the late-time acceleration of the universe.
General Relativity with cosmological constant does not solve the
problem because such theory is in conflict with radiation/matter
domination eras. An alternative approach to dark energy is related
to modified theory of gravity (for a review, see \cite{rev3}) in
which dark energy emerges from the modification of geometry of our
universe. In this approach, there appears quite an interesting
possibility to mimic dark energy cosmology by string theory. It was
suggested in refs.\cite{Nojiri:2005vv,Sami:2005zc,Mota} that dark energy
cosmology may result from string-inspired gravity. In fact,
scalar-Gauss-Bonnet gravity from bosonic or Type II strings was
studied in the late universe \cite{Nojiri:2005vv,Sami:2005zc} (for
a review of the applications of such theory in the early universe,
see \cite{Calcagni:2005im}). It is also interesting that such theories
may solve the initial singularity problem of the standard big-bang
model (see \cite{ART} and refs. therein). Moreover, the easy account
of next order (third order, Lovelock term) is also possible in this
approach (for recent discussion of such gravity, see \cite{DM}).
In this paper we examine string-inspired gravity with third order
curvature corrections (scalar-Gauss-Bonnet term and scalar-Euler term) and
explore the cosmological dynamics of the
system paying special attention to dark energy (non-phantom/phantom)
solutions. We confront our results with the recent observations.
We also outline the general program
of reconstruction of scalar-Gauss-Bonnet gravity
for any {\it a priori} given cosmology following the method \cite{e}
developed in the scalar-tensor theory.
The paper is organized as follows. In section two, we consider the
cosmological dynamics in the presence of string curvature
corrections to Einstein-Hilbert action. We analyze cosmological
solutions in the FRW background;
special attention is paid to dark energy which naturally arises in the model
thanks to higher order curvature terms induced by string corrections. Brief
discussion on the comparison of theoretical results with recent
observations is included. The stability of dark energy solution is
investigated in detail.
Section three is devoted to the study of late-time cosmology for
scalar-Gauss-Bonnet gravity motivated by string theory but with the
arbitrary scalar potentials. It is explicitly shown how such theory
(actually, its potentials) may be reconstructed for any given
cosmology. Several explicit examples of dark energy cosmology with
transition from deceleration to acceleration and (or) cosmic
speed-up (quintessence, phantom or de Sitter) phase or with
oscillating (currently accelerating) behavior of scale factor are
given. The corresponding scalar potentials are reconstructed. It is
shown how such theory may be transformed to modified Gauss-Bonnet
gravity which turns out to be just specific parametrization of
scalar-Gauss-Bonnet gravity on classical level. Finally, it is shown
how to include third order curvature terms in the above
construction. Summary and outlook are given in the last section.
\section{Dark energy from higher order string curvature corrections}
In this section we shall consider higher order curvature corrections to
Einstein-Hilbert action.
To avoid technical complications we restrict the discussion to third order
Riemann invariants
coupled to a dynamical field $\phi$. The cosmological dynamics of the system
will be developed in detail and general features of the solutions will be
discussed. It is really
interesting that the model can account for recent observations on dark energy.
\subsection{General action}
We begin from the following action
\begin{equation}
\label{act}
{\cal S}=\int d^4x \sqrt{-g}\left[
\frac{R}{2\kappa^2}-\frac12\omega(\phi)g^{\mu \nu} \partial_\mu \phi
\partial_\nu \phi-V(\phi)+{\cal L}_c+{\cal L}_m\right]\ ,
\end{equation}
where $\phi$ is
a scalar field which, in particular case, could be a dilaton.
${\cal L}_m$ is the Lagrangian of
perfect fluid with energy density $\rho_m$ and pressure $p_m$.
Note that a scalar potential coupled to curvature (non-minimal coupling)
\cite{faraoni} does not appear in string-inspired gravity in the frame under
consideration.
The quantum corrections are encoded in
the term
\begin{equation}
{\cal L}_c=\xi_1(\phi){\cal L}_c^{(1)}+\xi_2(\phi){\cal L}_c^{(2)}
\end{equation}
where $\xi_1(\phi)$ and $\xi_2(\phi)$ are the couplings of
the field $\phi$ with higher curvature invariants. ${\cal L}_c^{(1)}$ and
${\cal L}_c^{(2)}$ are given by
\begin{eqnarray}
\label{Lc1}
&& {\cal L}_c^{(1)} =R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}
-4R_{\mu\nu}R^{\mu\nu}
+R^2 \,, \\
&& {\cal L}_c^{(2)}=E_3+R^{\mu\nu}_{\alpha \beta}
R^{\alpha\beta}_{\lambda\rho}
R^{\lambda\rho}_{\mu\nu}
\label{Lc2}
\end{eqnarray}
The third order Euler density $E_3$ is proportional to
\begin{equation}
\label{E1}
E_3\propto \epsilon^{\mu\nu\rho\sigma\tau\eta}
\epsilon_{\mu'\nu'\rho'\sigma'\tau'\eta'}
R_{\mu\nu}^{\ \ \mu'\nu'} R_{\rho\sigma}^{\ \ \rho'\sigma'}
R_{\tau\eta}^{\ \ \tau'\eta'}\ .
\end{equation}
Since $\epsilon^{\mu\nu\rho\sigma\tau\eta}$ does not exist if
the space-time dimension $D$ is less than $6$, $E_3$ should
vanish when $D<6$, in particular in four dimensions.
By using
\begin{equation}
\label{E2}
\epsilon^{\mu\nu\rho\sigma\tau\eta}
\epsilon_{\mu'\nu'\rho'\sigma'\tau'\eta'}
= \delta_\mu^{\ \mu'}\delta_\nu^{\ \nu'}\delta_\rho^{\ \rho'}
\delta_\sigma^{\ \sigma'}\delta_\tau^{\ \tau'}\delta_\eta^{\ \eta'}
\pm \left(\mbox{permutations}\right)\ ,
\end{equation}
we can rewrite the expression (\ref{E1}) as
\begin{eqnarray}
E_3 &\propto& 8\left(R^3 - 12 R R_{\mu\nu} R^{\mu\nu}
+ 3 R R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} + 16 R_\mu^{\ \nu}R_\nu^{\ \rho}
R_\rho^{\ \mu}
+ 24 R_\mu^{\ \nu} R_\rho^{\ \sigma} R_{\nu\sigma}^{\ \ \mu\rho} \right. \nonumber \\
&& \left. - 24 R_\mu^{\ \nu}R_{\nu\rho}^{\ \ \sigma\tau} R_{\sigma\tau}^{\ \
\mu\rho}
+ 2 R_{\mu\nu}^{\ \ \rho\sigma} R_{\rho\sigma}^{\ \ \tau\eta}
R_{\tau\eta}^{\ \ \mu\nu} - 8 R_{\mu\nu}^{\ \ \rho\tau} R_{\rho\sigma}^{\ \
\mu\eta}
R_{\tau\eta}^{\ \ \nu\sigma} \right)\ .
\label{E3}
\end{eqnarray}
We should note that in the r.h.s. of (\ref{E2}) there appear $6!=720$ terms,
a number which matches the sum of the absolute values of the coefficients of each
term
in the r.h.s. of (\ref{E3})
\begin{equation}
\label{E4}
8\left(1+12+3+16+24+24+2+8\right)=720\ .
\end{equation}
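The counting in Eq.~(\ref{E4}) can be verified with a one-line check:

```python
import math

# Absolute values of the coefficients appearing in Eq. (E3),
# with the overall factor of 8 in front:
coefficients = [1, 12, 3, 16, 24, 24, 2, 8]
total = 8 * sum(coefficients)
print(total, math.factorial(6))  # both equal 720
```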
In what follows we shall be interested in the cosmological applications of
modified equations
of motion and thus assume a flat Friedmann-Robertson-Walker (FRW) metric
\begin{equation}\label{FRW}
ds^2=-N^2(t) dt^2+ a^2(t)\sum_{i=1}^d (dx^i)^2,
\end{equation}
where $N(t)$ is the lapse function.
With the metric (\ref{FRW}), the Riemann invariants read
\begin{equation}
\label{Lc1frw}
{\cal L}_c^{(1)}=24 H^2\left(\frac{\dot{H}+H^2}{N^4}-\frac{\dot{N}}{N^5} H
\right)\,, \quad
{\cal L}_c^{(2)}=\frac{24}{N^6}(H^6+I^3)-\frac{72\dot{N}}{N^7}HI^2
\label{Lc2frw}
\end{equation}
where $I=\dot{H}+H^2$ and $H=\dot{a}/a$. It is straightforward though
cumbersome to verify explicitly that the third order
Euler density $E_3$ is identically zero in the FRW background.
The non-vanishing contribution in Eq.(\ref{Lc2frw}) comes from the second term
in (\ref{Lc2}).
To perform the check in a particular case, we consider $D$-dimensional
de-Sitter space, where
Riemann curvature is given by
\begin{equation}
\label{E5}
R_{\mu\nu}^{\ \ \rho\sigma}=H_0\left(\delta_\mu^{\ \rho} \delta_\nu^{\ \sigma}
- \delta_\mu^{\ \sigma} \delta_\nu^{\ \rho}\right)\ .
\end{equation}
Here $H_0$ is a constant corresponding to the Hubble rate. In the de-Sitter
background we have
\begin{equation}
\label{E6}
E_3 \propto D(D-1)(D-2)(D-3)(D-4)(D-5)\ ,
\end{equation}
which is obviously zero in case of $D<6$.
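A minimal numerical check of the prefactor in Eq.~(\ref{E6}):

```python
def e3_prefactor(D):
    """D-dependent factor D(D-1)(D-2)(D-3)(D-4)(D-5) of E_3 on a
    de-Sitter background, up to an overall constant."""
    out = 1
    for k in range(6):
        out *= D - k
    return out

print([e3_prefactor(D) for D in range(2, 8)])
# vanishes for every D < 6, and is nonzero from D = 6 on
```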
For simplicity we shall limit the discussion to a
homogeneous scalar field $\phi(t)$.
Then the spatial volume can be integrated out from
the measure in equation (\ref{act}), which we rewrite as
\begin{equation}
{\cal S}=\int{ dt Na^3
\left[\frac{R}{2 \kappa^2}+{\cal L}_c+{\cal L}_{\phi}+{\cal L}_m\right]}.
\label{Naction}
\end{equation}
where ${\cal L_{\phi}}=-\frac12\omega(\phi)(\nabla\phi)^2-V(\phi)$.
Varying the action (\ref{Naction}) with respect to the lapse function $N$
we obtain \cite{Sami:2005zc}
\begin{equation}
\frac{3H^2}{\kappa^2}=\rho_m+\rho_{\phi}+\rho_c
\label{qhubble}
\end{equation}
where
\begin{eqnarray}
\rho_{\phi}=\frac{1}{2}\omega \dot{\phi}^2+V(\phi)
\end{eqnarray}
In Eq.(\ref{qhubble}), the energy density $\rho_c$ is induced by quantum
corrections and is given by
the following expression
\begin{equation}
\rho_c=\left.\left(3H\frac{\partial {\cal L}_c}{\partial
\dot{N}}+\frac{d}{dt}\frac{\partial {\cal L}_c}{\partial
\dot{N}}-\frac{\partial {\cal L}_c}{\partial
N}-{\cal L}_c\right)\right|_{N=1}
\end{equation}
It would be convenient to rewrite $\rho_c$ as
\begin{equation}
\rho_c=\xi_1(\phi) \rho_c^{(1)}+\xi_2(\phi)\rho_c^{(2)}
\end{equation}
Using Eqs.(\ref{Lc1frw}) $\&$ (\ref{Lc2frw}) we obtain the expressions
of $\rho_c^{(1)}$ and $\rho_c^{(2)}$
\begin{eqnarray}
&&\rho_c^{(1)}=-24 H^3\Xi_1 \, \\
&& \rho_c^{(2)}=-72 H I^2 \Xi_2-72\left(\dot{H}I^2+2 IH\dot{I}\right)
-216H^2 I^2+120\left(H^6+I^3 \right)
\end{eqnarray}
where $\Xi_1=\dot{\xi}_1/\xi_1$ and $\Xi_2=\dot{\xi}_2/\xi_2$.
It is interesting to note that the contribution of the Gauss-Bonnet term (described
by Eq.(\ref{Lc1frw})) cancels in the equations of motion
for fixed $\phi$, as it should; it contributes for a dynamically
evolving scalar field only. In case of the third order curvature corrections,
the Euler density is identically zero and hence it does not contribute
to the equation of motion in general.
Secondly, ${\cal L}_c^{(2)}$ contributes for fixed field as well as for
dynamically evolving
$\phi$. It contains corrections of third order in curvature beyond the Euler
density.
We should note that such higher-derivative terms in string-inspired
gravity may lead to ghosts and related instabilities (for recent
discussion in scalar-Gauss-Bonnet gravity, see \cite{ghost}).
However, the ghost spectrum of such (quantum) gravity (for a
review, see \cite{book}) is more relevant in the early universe where
curvature is strong, but less relevant in the late universe. Moreover,
in accordance with the modified gravity approach, the emerging theory is
a purely classical, effective theory which comes from some unknown
gravity which has different faces at different epochs. (Actually, it
could be that our universe is currently entering an unstable phase.) For
instance, in the near future the currently sub-leading terms may
dominate in the modified gravity action,
which
then takes a totally different form! Hence, it is that (unknown)
gravity, and not its classical limit given by Eq.(\ref{act})
relevant during a specific epoch, whose spectrum should be studied.
The point is best illustrated by the example of Fermi theory of weak
interactions whose quantization runs into well known problems.
Finally, on the phenomenological grounds, it is really interesting
to include higher order terms. At present the situation is
remarkably tolerant in cosmology, many exotic constructions attract
attention provided they can lead to a viable model of dark energy.
The equation of motion for the field $\phi$ reads from (\ref{Naction})
\begin{equation}
\omega(\ddot{\phi}+3H\dot{\phi})+V'-\xi_1'{\cal L}_c^{(1)} - \xi_2'{\cal
L}_c^{(2)}
+\dot{\omega}\dot{\phi}-\omega'\frac{\dot{\phi}^2}{2}=0
\label{phieq}
\end{equation}
In addition we have standard continuity equation for the barotropic background
fluid with
energy density $\rho_m$ and pressure $p_m$
\begin{equation}
\dot{\rho_m}+3H(\rho_m+p_m)=0
\label{conteq}
\end{equation}
Equations (\ref{qhubble}), (\ref{phieq}), and (\ref{conteq}) are the basic
equations for our system under consideration.
Let us note that in the string theory context with the dilaton field $\phi$ we
have
\begin{equation}
V(\phi)=0,~~\xi_1=c_1\alpha^\prime {\rm e}^{ 2 \phi/\phi_0},~
\xi_2=c_2 \alpha^{\prime 2} {\rm e}^{4 \phi/\phi_0}
\end{equation}
where $(c_{1}, c_{2})=(0, 1/8),\ (1/8, 0),\ (1/4, 1/8)$
for Type II, Heterotic, and Bosonic strings, respectively.
\subsection{Fixed field case: general features of solutions.}
We now look for de-Sitter solutions in the case of $\phi={\rm constant}$ and $\rho_m=0$.
In this case the
modified Hubble Eqs.(\ref{qhubble}) gives rise to de-Sitter solution
\begin{equation}
3 =24 \xi_2 H^4~~or~~H=\left(\frac{1}{8\xi_2}\right)^{1/4}
\end{equation}
where $\xi_2=\frac{1}{8}{\rm e}^{-4\phi/\phi_0}$ for type II and Bosonic
strings. Normalizing $\xi_2$ to one, we find that $H= 0.6$ (we have
set $\kappa^2=1$ for convenience). Below we shall discuss the
stability of the solution. There exists no de-Sitter solution for
Heterotic case. Actually, de-Sitter solutions were investigated in
the similar background in Ref.\cite{Sami:2005zc} where higher order
curvature corrections up to order four were included. Since, here we
confine ourselves up to the third order and the fourth order terms
are excluded from the expression of $\rho_c$; these terms come with
different signs. Thus it becomes necessary to check whether or not
the stability property of de-Sitter solutions is preserved order by
order.
We further note that the modified Hubble Eqs.(\ref{qhubble}) admits the
following solution in
the high curvature regime in presence of the barotropic fluid with equation of
state parameter $w$
\begin{equation}
a(t) =a_0 t^{h_0} \,, \quad \mbox{or} \quad a(t) =a_0(t_s-t)^{h_0}
\end{equation}
where
\begin{eqnarray}
&& h_0=\frac{2}{1+w} \,, \\
&& a_0=\left[\frac{\xi_2}{\rho_0}\left(72(-h_0I_0^2+2I_0h_0 \dot{I}_0)+
216h_0^2I_0^2-120(h_0^6+I_0^3)\right)\right]^{-\frac {1}{3(1+w)}}
\end{eqnarray}
We have used $\rho_m=\rho_0 a^{-3(1+w)}$ for the background matter density and
$ I_0=h_0(h_0-1),~~\dot{I_0}=-2h_0(h_0-1)$.
For the effective equation of state dictated by the modified Hubble
Eqs.(\ref{qhubble}) we have
\begin{equation}
w_{\rm eff}=-1-\frac {2}{3} \frac{\dot{H}}{H^2}=-1+\frac{1+w}{3}
\label{weff}
\end{equation}
It is interesting to note that the effective EoS parameter (\ref{weff})
may correspond to an inflationary solution
in the presence of a
background fluid (radiation/matter). In the
low curvature regime
or at late times $w_{\rm eff}=w$. In the presence of
phantom matter, the effective EoS being less than $-1$
is typical of a Big Rip singularity.
It is really not surprising that we
have an inflationary solution
at early epochs in the presence of higher order curvature corrections to the
Einstein-Hilbert action; an early example
of this phenomenon is provided by $R^2$ gravity.
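The effective equation of state (\ref{weff}) is easy to tabulate; the following snippet evaluates it for a few background fluids (acceleration requires $w_{\rm eff}<-1/3$):

```python
def w_eff(w):
    """Effective EoS of Eq. (weff): w_eff = -1 + (1 + w)/3
    for a background fluid with EoS parameter w."""
    return -1.0 + (1.0 + w) / 3.0

for name, w in [("matter", 0.0), ("radiation", 1.0 / 3.0), ("phantom", -1.2)]:
    accel = w_eff(w) < -1.0 / 3.0
    print(name, round(w_eff(w), 3), "accelerating" if accel else "decelerating")
```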
\subsection{Autonomous form of equations of motion}
Let us now cast the equations of motion in the autonomous form. Introducing
the following notation ($\kappa^2=1$)
\begin{equation}
x= H,~~~y= \dot{H},~~~u= \phi,~~v= \dot{\phi},~~~z= \rho_m
\end{equation}
Assuming $\omega(\phi)=\nu={\rm const}$,
we obtain the system of equations
\begin{eqnarray}
&&\dot{x}=y \,, \nonumber \\
&&\dot{y}= \frac{\frac{1}{2}\nu v^2-24\xi_1 \Xi_1 x^3+\xi_2
\left[-72xI^2\Xi_2-72(yI^2+4Iyx^2)-216x^2I^2+120(x^6+I^3)\right]-3x^2}{144
I(x,y) \xi_2 x} +\frac{z}{144I \xi_2 x} \,, \nonumber \\
&&\dot{u}=v \,, \quad
\dot{v}=\frac{-3 \nu xv+\xi_1{\cal L}_c^{(1)}+\xi_2 {\cal L}_c^{(2)} }{\nu} \,,
\quad
\dot{z}=-3x(1+w)z
\label{Hddot}
\end{eqnarray}
We shall be first interested in the case of fixed field
for which we have (assuming $\nu=1$)
\begin{eqnarray}
&&\dot{x}=y \,, \nonumber \\
&&\dot{y}=\frac{\left[-72(yI(x,y)^2+4I(x,y)yx^2)-216x^2I^2(x,y)+120(x^6+I^3(x,y))-3x^2+z\right]}{144
I(x,y)x}\,, \nonumber\\
&&\dot{z}=-3x(1+w)z
\end{eqnarray}
where
\begin{equation}
I(x,y)=x^2+y,~~~{\cal L}_c^{(1)}=24 x^2(y+x^2),~~~{\cal
L}_c^{(2)}=24(x^6+I^3(x,y))
\end{equation}
In the presence of perfect fluid, the de-Sitter fixed point is characterized by
\begin{eqnarray}
x_c= 0.71,~~~y_c=0,~~~z_c=0
\label{fixedp}
\end{eqnarray}
Perturbing the system around the critical point and keeping the linear terms we
obtain
\begin{eqnarray}
&&\dot{\delta x}=\delta y \,, \nonumber \\
&&\dot{\delta y}=\left(\frac{21}{3}x_c^2+\frac{10}{3 x_c}+\frac{1}{48
x_c^2}\right)\delta x
+\left(\frac{2}{3}x_c+\frac{5}{6 x_c^2}+\frac{1}{48 x_c^3} \right)\delta
y+\frac{1}{144 x_c^3}\delta z \,, \nonumber\\
&&\dot{\delta z}=-3x_c(1+w)\delta z
\end{eqnarray}
Stability of the fixed points depends upon the nature of eigenvalues of
perturbation matrix
\begin{equation}
\lambda_{1,2}=\frac{1}{2}\left(a_{22}\pm\sqrt{4a_{21}+a_{22}^2}\right) \,,
\quad
\lambda_3=a_{33}=-3x_c(1+w)
\end{equation}
For the fixed point given by (\ref{fixedp}), $\lambda_{1}$ is positive
whereas $\lambda_{2}$ is
negative, making the de-Sitter solution
an unstable node. In fact, $\lambda_{1}$ remains positive for any $x_c>0$
thereby making the conclusion
independent of the choice of $\xi_2^{(0)}$ (see FIG. \ref{eigen}).
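The sign pattern of the eigenvalues can be checked numerically; the entries $a_{21}$ and $a_{22}$ below are transcribed from the linearized system above, and since $a_{21}>0$ for any $x_c>0$ one always finds $\lambda_1>0$:

```python
import math

def eigenvalues(xc):
    """lambda_{1,2} = (a22 +/- sqrt(4 a21 + a22^2)) / 2, with the matrix
    entries a21, a22 transcribed from the linearized system."""
    a21 = (21.0 / 3.0) * xc**2 + 10.0 / (3.0 * xc) + 1.0 / (48.0 * xc**2)
    a22 = (2.0 / 3.0) * xc + 5.0 / (6.0 * xc**2) + 1.0 / (48.0 * xc**3)
    disc = math.sqrt(4.0 * a21 + a22**2)
    return (a22 + disc) / 2.0, (a22 - disc) / 2.0

for xc in (0.2, 0.71, 2.0):
    l1, l2 = eigenvalues(xc)
    print(xc, l1 > 0.0, l2 < 0.0)  # lambda_1 > 0 for any xc > 0
```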
\begin{figure}
\resizebox{3.0in}{!}{\includegraphics{eigen.eps}}
\caption{Plot of the first eigenvalue $\lambda_1$ versus the critical point
$x_c$. The
eigenvalue remains positive if the critical point is varied from zero to larger
values.}
\label{eigen}
\end{figure}
\begin{figure}
\resizebox{3.0in}{!}{\includegraphics{O.eps}}
\caption{Plot of ${ Y} \equiv (\frac{48 \xi_0^{(1)}}{t_1^2})\times 10^5$ (black
line)
and ${ Y} \equiv (\frac{96 \xi_0^{(2)}}{t_1^4})$ (gray line) versus ${ X} \equiv
\phi_0^2$,
corresponding to $h_0=40$ or $w_{DE}=-0.98$ ($\nu$ is assumed to be one). The common
region corresponding to positive values of the couplings
gives possible models of dark energy induced by higher order curvature
corrections.}
\label{O}
\end{figure}
\begin{figure}
\resizebox{3.0in}{!}{\includegraphics{O1.eps}}
\caption{Plot of ${ Y} \equiv (\frac{48 \xi_0^{(1)}}{t_1^2})\times 10^5$ (gray
line)
and ${ Y} \equiv (\frac{96 \xi_0^{(2)}}{t_1^4})$ (black line) versus ${ X} \equiv
\phi_0^2$,
corresponding to $h_0=-33.33$ or $w_{DE}=-1.02$. The region bounded by $6<{ X}<31.5$
corresponds to
possible phantom dark energy models.}
\label{O1}
\end{figure}
\subsection{Dynamically evolving field $\phi$ and dark energy solutions}
In what follows we shall be interested in looking for an
exact solution of the equations of motion (\ref{qhubble}) and (\ref{phieq}) which
is of
interest to us from the point of view of dark energy in the absence of the
background fluid.
In this case let us look for the following solution
\begin{equation}
H=\frac{h_0}{t},\ \phi=\phi_0\ln\frac{t}{t_1}\ \left(\mbox{when}\
h_0>0\right)\,, \quad
H=\frac{h_0}{t_s-t},\ \phi=\phi_0\ln\frac{t_s-t}{t_1}\ \left(\mbox{when}\
h_0<0\right)\ .
\label{solution}
\end{equation}
Substituting (\ref{solution}) in evolution Eqs. (\ref{qhubble}) and
(\ref{phieq}) yields (we again set $\kappa^2=1$)
\begin{eqnarray}
\label{x1eq1}
&&\nu(1-3h_0)\phi_0^2+\frac{48 \xi_1^{(0)} }{t_1^2} h_0^3(h_0-1)+\frac{96
\xi_2^{(0)}}{t_1^4} (h_0^6+I_0^3)=0 \,, \nonumber \\
&&-3h_0^2+\frac{\nu}{2}\phi_0^2-\frac{48
\xi_1^{(0)}}{t_1^2}h_0^3+\frac{\xi_2^{(0)}}{t_1^4}J(h_0)=0
\label{x2eq1}
\end{eqnarray}
where
\begin{eqnarray}
&&J=\frac{1}{96}\left(-288h_0I_0^2-72(-h_0I_0^2+2I_0h_0\dot{I_0})-216h_0I_0^2+120(h_0^6+I_0^3)\right)
\,, \nonumber \\
&& I_0=h_0(h_0-1),~~\dot{I_0}=-2h_0(h_0-1)
\end{eqnarray}
Using Eqs.(\ref{x1eq1}), we express the couplings through
$h_0$ and $\phi_0$
\begin{eqnarray}
\label{coupling1}
&&\frac{48 \xi_1^{(0)}}{t_1^2}=\left[\frac {3h_0^2
-\frac{\nu\phi_0^2}{2}+\nu(3h_0-1)\phi_0^2
}{J(h_0)(h_0-1)+h_0^3(h_0^3+(h_0-1)^3)}\right]\,, \\
&&\frac{96
\xi_2^{(0)}}{t_1^4}=\frac{1}{h_0^3}\left[-3h_0^2+\frac{\nu\phi_0^2}{2}
+\left[\frac
{\left(3h_0^2-\frac{\nu\phi_0^2}{2}+\nu(3h_0-1)\phi_0^2\right)J(h_0)}{J(h_0)(h_0-1)+h_0^3(h_0^3+(h_0-1)^3)}\right]
\right]
\label{coupling2}
\end{eqnarray}
Let us note that the string couplings ($\xi_1(\phi)=\xi_1^{(0)} {\rm e}^{n\frac
{\phi}{\phi_0}}, \xi_2(\phi)=\xi_2^{(0)}
{\rm e}^{m\frac {\phi}{\phi_0}}$ with $m=n^2=4$) are generic to the solution described
by (\ref{solution}); for other
couplings such a solution does not exist. We also note that
Eqs.(\ref{coupling1})
$\&$ (\ref{coupling2}) reduce
to the earlier obtained results in Ref.\cite{Nojiri:2005vv}
(see Refs.\cite{Calcagni:2005im, Neupane:2006dp,Carter:2005fu} on the related
theme)
where similar investigations were carried out confining to
only second order curvature invariants in the action (\ref{act}).
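Since Eqs.~(\ref{x1eq1}) and (\ref{x2eq1}) are linear in the combinations $48\xi_1^{(0)}/t_1^2$ and $\xi_2^{(0)}/t_1^4$, a simple numerical route, sketched below and independent of the closed-form expressions (\ref{coupling1})--(\ref{coupling2}), is to solve the $2\times 2$ system directly and then scan $\phi_0^2$ for positivity of both couplings; the sample point is illustrative.

```python
import numpy as np

def couplings(h0, phi0_sq, nu=1.0):
    """Solve Eqs. (x1eq1)-(x2eq1) numerically for the combinations
    u = 48*xi_1^(0)/t_1^2 and s = xi_2^(0)/t_1^4, instead of transcribing
    the closed-form expressions (coupling1)-(coupling2)."""
    I0 = h0 * (h0 - 1.0)
    I0dot = -2.0 * h0 * (h0 - 1.0)
    J = (-288.0 * h0 * I0**2
         - 72.0 * (-h0 * I0**2 + 2.0 * I0 * h0 * I0dot)
         - 216.0 * h0 * I0**2
         + 120.0 * (h0**6 + I0**3)) / 96.0
    A = np.array([[h0**3 * (h0 - 1.0), 96.0 * (h0**6 + I0**3)],
                  [-(h0**3),           J]])
    b = np.array([-nu * (1.0 - 3.0 * h0) * phi0_sq,
                  3.0 * h0**2 - 0.5 * nu * phi0_sq])
    u, s = np.linalg.solve(A, b)
    return u, s

# Example point in the non-phantom branch discussed in the text:
print(couplings(h0=40.0, phi0_sq=45.0))
```

Scanning $\phi_0^2$ at fixed $h_0$ and keeping only points with both couplings positive reproduces the kind of allowed regions shown in Figs.~\ref{O} and \ref{O1}.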
There are several free parameters in the problem. In order to extract important
information from Eqs. (\ref{coupling1})
and (\ref{coupling2}), we proceed in the following manner. We fix $h_0$
corresponding to the observed value of
dark energy equation of state parameter $w_{DE}$ and impose the positivity
condition on the couplings $\xi_1^{(0)}$ and $\xi_2^{(0)}$
leading to allowed values of the parameter $\phi_0^2$.
In the absence of
coupling $\xi_2(\phi)$, it was shown in Ref.\cite{Nojiri:2005vv} that for a
given value of $h_0$ from the allowed interval, the parameter
$\phi_0$ takes a fixed value. Our model incorporates higher order curvature
corrections allowing a one
parameter flexibility in the values of $\phi_0$. This gives rise to
comfortable choice of the equation of state
consistent with observations.
The three-year WMAP data is analyzed in Ref.\cite{Spergel}, which
shows that the combined analysis of WMAP with supernova Legacy
survey (SNLS) constrains the dark energy equation of state $w_{DE}$
pushing it towards the cosmological constant. The marginalized best
fit values of the equation of state parameter at 68$\%$ confidence
level are given by $-1.14\leq w_{DE} \leq -0.93$. In case of a prior
that universe is flat, the combined data gives $-1.06 \leq w_{DE}
\leq -0.90 $. Our model can easily accommodate these values of
$w_{DE}$. For instance, in case of non-phantom (standard dark
energy) we find a one parameter family of dark energy models with
$h_0 \simeq 40$ ($w_{DE}=-0.98$) corresponding to
$\phi_0^2 >41$. Likewise, in case of phantom dark energy, we find that for
$h_0 \simeq -33$ ($w_{DE}=-1.02$),
the viable range of the parameter $\phi_0^2$ is given by $6<\phi_0^2<31.5$.
These values are consistent
with recent WMAP release and SNLS findings.
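For reference, the map between $h_0$ and the quoted equation of state follows from Eq.~(\ref{weff}) with no background fluid, $w_{DE}=-1+2/(3h_0)$; a two-line check reproduces the numbers above:

```python
def w_de(h0):
    """Dark energy EoS for the power-law solutions (solution):
    w_DE = -1 + 2/(3*h0), from w_eff = -1 - (2/3) * Hdot / H^2."""
    return -1.0 + 2.0 / (3.0 * h0)

print(round(w_de(40.0), 3))    # -0.983, quoted as w_DE ~ -0.98
print(round(w_de(-33.33), 3))  # -1.02, the phantom example
```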
We should mention that the observations
quoted above do not incorporate the dark energy perturbations which might severely
constrain the phantom dark energy cosmologies. The combined data (CMB+LSS+SNLS) then forces
the dark energy equation of state to vary as, $-1.001< w_{DE}< -0.875 $\cite{Spergel}. Our model
can easily incorporate these numerical values of $w_{DE}$ by constraining $h_0$ and $\phi_0^2$
similar to the case of non-clustering dark energy. A word of caution: the
evolution of dark energy perturbations across the phantom divide needs additional assumptions;
a complete analysis should take into account the non-adiabatic perturbations
which makes dark energy gravitationally stable\cite{CST}.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Dark energy & $h_0$ & $\phi_0^2$ & $w_{DE}$ & Observational constraint on
$w_{DE}$ & Constraint on $ w_{DE}$ with flatness prior \\
\hline
\hline
Non-phantom&40 &$\phi_0^2>41$ & $-0.98$ & & \\
& & & &$-1.06^{+0.13}_{-0.08}$ & $-0.97^{+0.07}_{-0.09}$ \\
Phantom& $-33.33$ &$6<{\phi}_0^2<31.5 $ & $-1.02$ & & \\
\hline
\end{tabular}
\end{center}
\caption[crit]{\label{crit} Observational constraints on (non-clustering) dark energy
equation of state $w_{DE}$
dictated by the combined analysis of WMAP+SNLS data\cite{Spergel} and the numerical values
of model parameters consistent with the observations.}
\end{table*}
\subsection{Stability of dark energy solution}
In what follows we shall examine the stability of the dark energy
solution (\ref{solution}) induced by purely stringy corrections. In
general the analytical treatment becomes intractable;
simplification, however, occurs in the limit of large $h_0$
corresponding to $w_{\rm eff} \simeq -1$.
Let us consider the following situation of interest to us
\begin{equation}
\rho_m=0,~ \omega=\nu=1,~
\xi_1(\phi)=\xi_1^{(0)}{\rm e}^{2\phi/\phi_0}\ ,\quad
\xi_2(\phi)=\xi_2^{(0)}{\rm e}^{4\phi/\phi_0}\ .
\label{P0}
\end{equation}
In order to investigate the stability around the dark energy solution
defined by (\ref{solution}), we need a convenient set of variables
to cast the evolution equations into autonomous form. We now define
the variables which are suited to our problem.
\begin{equation}
\label{P1}
{\cal X}\equiv \frac{\dot \phi}{H}\ ,\quad
{\cal Y} \equiv \left(\dot H + H^2\right)^2 \xi_2(\phi)\ ,\quad
{\cal Z} \equiv H^2 \xi_1(\phi)\ ,\quad \frac{d}{dN}\equiv \frac{1}{H}\frac{d}{dt}\ .
\end{equation}
With this choice, the evolution equations acquire the autonomous form
\begin{eqnarray}
\label{P2}
\frac{d{\cal X}}{dN}&=& - 2{\cal X} + \xi_1^{(0)}\left( - \frac{{\cal X}}{{\cal Z}} + \frac{48}{\phi_0} \right)
\left(\frac{{\cal Y}}{\xi_2^{(0)}}\right)^{1/2}
+ \frac{96}{\phi_0}\frac{\xi_2^{(0)}}{\left(\xi_1^{(0)}\right)^2} {\cal Z}^2
+ \frac{96\xi_1^{(0)}{\cal Y}}{\phi_0 {\cal Z}}\left(\frac{{\cal Y}}{\xi_2^{(0)}}\right)^{1/2} \ ,\nonumber \\
\frac{d{\cal Y}}{dN}&=& - \frac{1}{24\kappa^2} \frac{{\cal X}^2}{144} - \frac{2}{3\phi_0}{\cal Z}{\cal X} - 2{\cal Y}
+ \frac{2\xi_1^{(0)}{\cal Y}}{3{\cal Z}}\left(\frac{{\cal Y}}{\xi_2^{(0)}}\right)^{1/2}
+ \frac{5\xi_2^{(0)}}{3\left(\xi_1^{(0)}\right)^2} {\cal Z}^2\ ,\nonumber \\
\frac{d{\cal Z}}{dN} &=& \left( -2 + \frac{2}{\phi_0}{\cal X}\right){\cal Z}
+ 2 \xi_1^{(0)}\left(\frac{{\cal Y}}{\xi_2^{(0)}}\right)^{1/2} \ .
\end{eqnarray}
We have used the field equation (\ref{phieq}) and Eq.(\ref{Hddot})
for $\ddot{H}$ in deriving the above autonomous form of equations.
For our solution given by (\ref{solution}), we have
\begin{equation}
\label{P3}
{\cal X}={\cal X}_0 \equiv \frac{\phi_0}{h_0}\ ,\quad
{\cal Z}={\cal Z}_0 \equiv \frac{h_0^2 \xi_1^{(0)}}{t_1^2}\ ,\quad
{\cal Y}={\cal Y}_0 = \frac{ \left( - h_0 + h_0^2 \right)^2 \xi_2^{(0)}}{t_1^4}\ .
\end{equation}
It can be checked that $({\cal X}_0,{\cal Y}_0,{\cal Z}_0)$ is a fixed point of (\ref{P2}).
We then consider small perturbations around (\ref{P3}) or equivalently
around the original solution (\ref{solution})
\begin{equation}
\label{P4}
{\cal X}={\cal X}_0 + \delta {\cal X}\ ,\quad
{\cal Y}={\cal Y}_0 + \delta {\cal Y}\ ,\quad
{\cal Z}={\cal Z}_0 + \delta {\cal Z}\ .
\label{sp}
\end{equation}
Substituting (\ref{sp}) in (\ref{P2}) and retaining the linear terms
in perturbations, we find
\begin{eqnarray}
\label{P5}
\frac{d}{dN}\left(\begin{array}{c} \delta {\cal X} \\ \delta {\cal Y} \\ \delta {\cal Z}
\end{array}\right)
= M \left(\begin{array}{c} \delta {\cal X} \\ \delta {\cal Y} \\ \delta {\cal Z}
\end{array}\right) \ .
\end{eqnarray}
Here $M$ is the $3\times 3$ perturbation matrix whose components
are given by
\begin{eqnarray}
\label{P6}
M_{11} &=& -2 + \frac{- 1 + h_0}{h_0} \ ,\nonumber \\
M_{12} &=& - \frac{\phi_0 t_1^2}{2h_0^3} +
\frac{24\xi_1^{(0)}t_1^2}{\phi_0\xi_2^{(0)}\left( - h_0 + h_0^2 \right)}
+ \frac{144\left(-1 + h_0\right)}{\phi_0 h_0 } \ ,\nonumber \\
M_{13} &=& \frac{\phi_0 t_1^2 \left( -1 + h_0 \right) }{h_0^4 \xi_1^{(0)}}
+ \frac{192\xi_2^{(0)}h_0^2}{\xi_1^{(0)}\phi_0 t_1^2}
- \frac{96 \left( -1 + h_0 \right)^3 \xi_2^{(0)}}{\phi_0 h_0 \xi_1^{(0)} t_1^2} \ ,\nonumber \\
M_{21} &=& \frac{1}{72} - \frac{2h_0^2 \xi_1^{(0)}}{3\phi_0 t_1^2} \ ,\nonumber \\
M_{22} &=& - 2 - \frac{\left( - 1 + h_0 \right)}{h_0} \ ,\nonumber \\
M_{23} &=& - \frac{2}{3h_0} - \frac{2 \left( - 1 + h_0 \right)^3 \xi_2^{(0)}}{3h_0 \xi_1^{(0)} t_1^2}
+ \frac{10 h_0^2 \xi_2^{(0)}}{3 \xi_1^{(0)}t_1^2} \ ,\nonumber \\
M_{31} &=& \frac{2h_0^2 \xi_1^{(0)}}{\phi_0 t_1^2} \ ,\nonumber \\
M_{32} &=& \frac{\xi_1^{(0)} t_1^2}{\xi_2^{(0)}\left(- h_0 + h_0^2 \right)} \ ,\nonumber \\
M_{33} &=& -2 + \frac{2}{h_0}\ .
\end{eqnarray}
Stability of the fixed point(s) depends upon the nature of the eigenvalues
of the perturbation matrix $M$. If there is an eigenvalue whose real
part is positive, the system becomes unstable. Here, for simplicity,
we only consider the case of $h_0\to \pm \infty$, which corresponds
to the limit of $w_{\rm eff}\sim -1$. In this case, we find
\begin{equation}
\label{P6b}
\frac{\xi_1^{(0)}}{t_1^2}\to \frac{1}{40 h_0^5}\ ,\quad
\frac{\xi_2^{(0)}}{t_1^4}\to - \frac{1}{32 h_0}\ ,
\end{equation}
and the eigenvalue equation is given by
\begin{equation}
\label{P7}
0=F(\lambda)
\equiv -\lambda^3 - 6\lambda^2 - \frac{h_0^3}{40\phi_0} \lambda - \frac{7 h_0^3}{40\phi_0} \ .
\end{equation}
The values of $\lambda$ satisfying $F(\lambda)=0$ give eigenvalues of $M$.
The solutions of (\ref{P7}) are given by
\begin{equation}
\label{P8}
\lambda=\lambda_\pm \equiv
\pm \frac{\left| h_0 \right| \sqrt{- h_0}}{\phi_0 \sqrt{40}}
+ {\cal O}\left(\left|h_0\right|\right)\ , \quad
\lambda = \lambda_0 \equiv -7 + {\cal O}\left(\left|h_0\right|^{-1/2}\right)\ .
\end{equation}
When $h_0<0$, the mode corresponding to $\lambda_+$
$\left(\lambda_-\right)$ becomes stable (unstable). Since
$\lambda_\pm$ are pure imaginary when $h_0>0$, the corresponding
modes become stable in this case. On the other hand, the mode corresponding to
$\lambda_0$ is always stable. Thus, the non-phantom dark energy
solution (\ref{solution}) induced by string corrections to
Einstein gravity is stable. Such a solution exists in the presence
of a dynamically evolving field $\phi$ with $V(\phi)=0$ coupled to Riemann invariants
with couplings dictated by string theory. Dark energy can be
realized in a variety of scalar field models by appropriately
choosing the field potential. It is remarkable that we can obtain a dark energy
solution in a string model without recourse to a scalar field potential.
Let us compare these results with
those obtained in \cite{Nojiri:2005vv}, where $\xi_2=0$ but $V(\phi)\neq 0$. The dark energy
solution studied
in Ref.\cite{Nojiri:2005vv} was shown to be stable when $h_0>0$ but unstable
for $h_0<0$. The present investigation includes $\xi_2$ and sets $V=0$, which makes our model different
from Ref.\cite{Nojiri:2005vv}; it is therefore not surprising that
our results differ from Ref.\cite{Nojiri:2005vv}. Since $h_0>0$
corresponds to the quintessence phase and $h_0<0$ to the phantom one, the
solution in the model (with $\xi_2$ and $V=0$) is
stable in the quintessence phase but unstable in the phantom phase.
We should notice that the approximation we used to check the stability
works fine for any generic
value of $h_0$. For instance, $5 < \left|h_0\right| < 667$, which corresponds to
the variation of $w_{DE}$ in case the dark energy perturbations are taken into account.
We also carried out numerical verification of our results.
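Such a numerical verification can be sketched as follows. The check below refines the roots of the eigenvalue equation (\ref{P7}) by Newton iteration, starting from the asymptotic expressions $\lambda_\pm$ and $\lambda_0$ of (\ref{P8}); the values $h_0=-100$, $\phi_0=1$ are illustrative (a large $|h_0|$ phantom-side example), not fitted to data. In this case the pair $\lambda_\pm$ is real and one root is positive, signalling the unstable mode.

```python
# Numerical sketch (illustrative parameters, not from the paper's fit):
# refine the roots of F(lambda) in Eq. (P7) by Newton iteration and compare
# them with the asymptotic values lambda_pm, lambda_0 of Eq. (P8).
import math

h0, phi0 = -100.0, 1.0           # phantom-side example with |h0| large
a = h0**3 / (40.0 * phi0)        # shorthand for h0^3/(40 phi0)

def F(lam):
    # F(lambda) = -lambda^3 - 6 lambda^2 - a lambda - 7 a
    return -lam**3 - 6.0 * lam**2 - a * lam - 7.0 * a

def dF(lam):
    return -3.0 * lam**2 - 12.0 * lam - a

def newton(lam):
    # plain Newton iteration; the asymptotic guesses are already close
    for _ in range(60):
        lam -= F(lam) / dF(lam)
    return lam

# asymptotic guesses: lambda_pm ~ +/- |h0| sqrt(-h0)/(phi0 sqrt(40)), lambda_0 ~ -7
lam_plus = abs(h0) * math.sqrt(-h0) / (phi0 * math.sqrt(40.0))
roots = [newton(guess) for guess in (lam_plus, -lam_plus, -7.0)]
print(roots)
```

For $h_0>0$ the pair $\lambda_\pm$ becomes purely imaginary at leading order, so the real Newton search above is used only on the $h_0<0$ branch.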
\section{The late-time cosmology in scalar-Gauss-Bonnet gravity}
A number of scalar field models have recently been investigated
in connection with dark energy (see Ref.\cite{CST} for details). The cosmological viability
of these constructs depends upon how well the Hubble parameter
predicted by them compares with observations. One could also follow the reverse
route and construct the Lagrangian using observational input; such a scheme
might help in the search for best-fit models of dark energy\cite{CST}. In what follows we shall
describe how the reconstruction program is implemented in presence of higher order
string curvature corrections.
\subsection{The reconstruction of scalar-Gauss-Bonnet gravity}
In this section it will be shown how scalar-Gauss-Bonnet gravity may be
reconstructed for any requested cosmology, using the method \cite{e}
developed for scalar-tensor theory. For technical reasons we restrict ourselves
to the Gauss-Bonnet order, but there is no obstacle in principle to including
the higher order terms studied in the previous section.
Remarkably, it turns out to be possible to reconstruct
scalar-Gauss-Bonnet gravity for any (quintessence, cosmological constant or
phantom) dark energy universe. The last possibility is quite
attractive because the phantom universe can be realized in
scalar-Gauss-Bonnet gravity without introducing a ghost scalar field
\cite{Nojiri:2005vv}.
In this section, we show that in scalar-Gauss-Bonnet gravity any cosmology,
including
phantom cosmology, can be realized by properly choosing the potential and the
coupling of the canonical scalar to the Gauss-Bonnet invariant.
The starting action is
\begin{equation}
\label{GBany1}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - \frac{1}{2}\partial_\mu \phi
\partial^\mu \phi - V(\phi) - \xi_1(\phi) G \right]\ .
\end{equation}
Here $G$ is the Gauss-Bonnet invariant $G\equiv {\cal L}_c^{(1)}$ (\ref{Lc1})
and the scalar field $\phi$ is canonical in (\ref{GBany1}).
As in the previous section, it is natural to assume the FRW universe (\ref{FRW})
with $N(t)=1$ and the scalar field $\phi$ depending only on $t$.
The FRW equations then read:
\begin{eqnarray}
\label{GBany4}
0&=& - \frac{3}{\kappa^2}H^2 + \frac{1}{2}{\dot\phi}^2 + V(\phi) + 24 H^3
\frac{d \xi_1(\phi(t))}{dt}\ ,\\
\label{GBany5}
0&=& \frac{1}{\kappa^2}\left(2\dot H + 3 H^2 \right) + \frac{1}{2}{\dot\phi}^2
- V(\phi)
- 8H^2 \frac{d^2 \xi_1(\phi(t))}{dt^2} - 16H \dot H
\frac{d\xi_1(\phi(t))}{dt}
- 16 H^3 \frac{d \xi_1(\phi(t))}{dt}\ .
\end{eqnarray}
and the scalar field equation:
\begin{equation}
\label{GBany6}
0=\ddot \phi + 3H\dot \phi + V'(\phi) + \xi_1'(\phi) G\ .
\end{equation}
Combining (\ref{GBany4}) and (\ref{GBany5}), one gets
\begin{equation}
\label{GBany7}
0=\frac{2}{\kappa^2}\dot H + {\dot\phi}^2 - 8H^2 \frac{d^2
\xi_1(\phi(t))}{dt^2}
- 16 H\dot H \frac{d\xi_1(\phi(t))}{dt} + 8H^3 \frac{d\xi_1(\phi(t))}{dt}
=\frac{2}{\kappa^2}\dot H + {\dot\phi}^2 -
8a\frac{d}{dt}\left(\frac{H^2}{a}\frac{d\xi_1(\phi(t))}{dt}\right)\ .
\end{equation}
Eq.(\ref{GBany7}) can be solved with respect to $\xi_1(\phi(t))$ as
\begin{equation}
\label{GBany8}
\xi_1(\phi(t))=\frac{1}{8}\int^t dt_1 \frac{a(t_1)}{H(t_1)^2} \int^{t_1}
\frac{dt_2}{a(t_2)}
\left(\frac{2}{\kappa^2}\dot H (t_2) + {\dot\phi(t_2)}^2 \right)\ .
\end{equation}
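As a quick consistency check of (\ref{GBany8}), one may take the illustrative choice $H=h_0/t$, $\phi=\phi_0\ln t$ with $h_0=2$ and $\phi_0=\kappa=1$ (these values are ours, chosen only for the sketch); the double integral then evaluates to $\xi_1(t)=t^2/64$, and a short numerical check confirms that this indeed solves (\ref{GBany7}):

```python
# Consistency sketch (illustrative choice h0 = 2, phi0 = kappa = 1):
# for H = h0/t and phi = phi0 ln t, Eq. (GBany8) yields xi_1(t) = t^2/64.
# Verify that it solves Eq. (GBany7):
#   0 = 2 Hdot/kappa^2 + phidot^2 - 8 H^2 xi_1'' - 16 H Hdot xi_1' + 8 H^3 xi_1'
h0, phi0 = 2.0, 1.0

def residual(t):
    H, Hdot = h0 / t, -h0 / t**2
    phidot = phi0 / t
    xip, xipp = t / 32.0, 1.0 / 32.0   # xi_1'(t) and xi_1''(t) for xi_1 = t^2/64
    return (2.0 * Hdot + phidot**2 - 8.0 * H**2 * xipp
            - 16.0 * H * Hdot * xip + 8.0 * H**3 * xip)

residuals = [abs(residual(t)) for t in (0.5, 1.0, 3.0, 10.0)]
print(residuals)
```

The residual vanishes identically for this choice, which also matches the coefficient of the exponential form of $\xi_1$ found below in (\ref{GBany40}).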
Combining (\ref{GBany4}) and (\ref{GBany8}), the scalar potential $V(\phi(t))$
is:
\begin{equation}
\label{GBany9}
V(\phi(t)) = \frac{3}{\kappa^2}H(t)^2 - \frac{1}{2}{\dot\phi (t)}^2 - 3a(t)
H(t) \int^t \frac{dt_1}{a(t_1)}
\left(\frac{2}{\kappa^2}\dot H (t_1) + {\dot\phi(t_1)}^2 \right)\ .
\end{equation}
We now identify $t$ with $f(\phi)$ and $H$ with $g'(t)$, where $f$ and $g$
are some unknown functions, in analogy with Ref.\cite{e}, since we know this
leads to a solution of the FRW equations provided such
functions exist.
Then we consider the model where $V(\phi)$ and $\xi_1(\phi)$ may be expressed
in terms of two functions $f$ and $g$ as
\begin{eqnarray}
\label{GBany10b}
V(\phi) &=& \frac{3}{\kappa^2}g'\left(f(\phi)\right)^2 - \frac{1}{2f'(\phi)^2}
- 3g'\left(f(\phi)\right) {\rm e}^{g\left(f(\phi)\right)} \int^\phi d\phi_1
f'(\phi_1 ) {\rm e}^{-g\left(f(\phi_1)\right)} \times
\left(\frac{2}{\kappa^2}g''\left(f(\phi_1)\right) + \frac{1}{f'(\phi_1 )^2}
\right)\ ,\nonumber \\
\xi_1(\phi) &=& \frac{1}{8}\int^\phi d\phi_1 \frac{f'(\phi_1)
{\rm e}^{g\left(f(\phi_1)\right)} }{g'(\phi_1)^2}
\int^{\phi_1} d\phi_2 f'(\phi_2) {\rm e}^{-g\left(f(\phi_2)\right)}
\left(\frac{2}{\kappa^2}g''\left(f(\phi_2)\right) + \frac{1}{f'(\phi_2)^2}
\right)\ .
\end{eqnarray}
By choosing $V(\phi)$ and $\xi_1(\phi)$ as in (\ref{GBany10b}), we can easily find
the following solution of Eqs.(\ref{GBany4}) and (\ref{GBany5}):
\begin{equation}
\label{GBany11b}
\phi=f^{-1}(t)\quad \left(t=f(\phi)\right)\ ,\quad
a=a_0{\rm e}^{g(t)}\ \left(H= g'(t)\right)\ .
\end{equation}
We can straightforwardly check that the solution (\ref{GBany11b}) satisfies the
field equation (\ref{GBany6}).
Hence, any cosmology expressed as $H=g'(t)$ in the model (\ref{GBany1}) with
(\ref{GBany10b}) can be realized, including
models exhibiting the transition from the non-phantom phase to the phantom phase
without introducing
a scalar field with a wrong-sign kinetic term.
In the Einstein gravity, the FRW equations are given by
\begin{equation}
\label{GBany12}
0= - \frac{3}{\kappa^2}H^2 + \rho\ ,\quad
0= \frac{1}{\kappa^2}\left(2\dot H + 3 H^2 \right) + p\ .
\end{equation}
Here $\rho$ and $p$ are the total energy density and pressure in the universe.
By comparing (\ref{GBany12}) with (\ref{GBany11b}), we find that the effective
energy density $\tilde\rho$
and the pressure $\tilde p$ are given by
\begin{equation}
\label{GBany13b}
\tilde\rho= \frac{3}{\kappa^2}g'(t)^2\ ,\quad
\tilde p= -\frac{3}{\kappa^2}g'(t)^2 - \frac{2}{\kappa^2}g''(t)\ .
\end{equation}
Since $t={g'}^{-1}\left(\kappa\sqrt{\rho/3}\right)$, we obtain
the following effective equation of state (EoS):
\begin{equation}
\label{GBany14}
\tilde p=-\tilde \rho -
\frac{2}{\kappa^2}g''\left({g'}^{-1}\left(\kappa\sqrt{\frac{\rho}{3}}\right)\right)\
,
\end{equation}
which contains all the cases where the EoS is given by $p=w(\rho)\rho$.
Furthermore, since ${g'}^{-1}$ need not be a single-valued function,
Eq.(\ref{GBany14}) contains more general EoS given by
\begin{equation}
\label{GBany15}
0=F\left(\tilde\rho,\tilde p\right)\ .
\end{equation}
This shows the equivalence between scalar-tensor and ideal fluid
descriptions.
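The dictionary (\ref{GBany13b}) is easy to test numerically. For a logarithmic choice $g(t)=h_0\ln(t/t_0)$ (treated in detail below), the effective EoS parameter should come out as the constant $-1+2/(3h_0)$. The sketch uses finite differences for $g'$ and $g''$, with illustrative values $h_0=5$, $t_0=\kappa=1$:

```python
# Sketch of the fluid dictionary (GBany13b): for g(t) = h0 ln(t/t0) the
# effective EoS parameter w = p~/rho~ should equal the constant -1 + 2/(3 h0).
# Parameter values are illustrative only.
import math

h0, t0, kappa = 5.0, 1.0, 1.0

def g(t):
    return h0 * math.log(t / t0)

def w_eff(t, h=1e-5):
    gp  = (g(t + h) - g(t - h)) / (2.0 * h)           # g'(t)  = H
    gpp = (g(t + h) - 2.0 * g(t) + g(t - h)) / h**2   # g''(t) = Hdot
    rho = 3.0 * gp**2 / kappa**2                       # rho~ of (GBany13b)
    p   = -3.0 * gp**2 / kappa**2 - 2.0 * gpp / kappa**2   # p~ of (GBany13b)
    return p / rho

w_values = [w_eff(t) for t in (1.0, 2.0, 7.0)]
print(w_values, -1.0 + 2.0 / (3.0 * h0))
```

The constancy of $w$ for this $g$ anticipates the result (\ref{GBany39}) obtained below.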
Let us come back now to scalar-Gauss-Bonnet gravity.
It is not difficult to extend the above formulation to include matter
with constant EoS parameter $w_m\equiv p_m / \rho_m$.
Here $\rho_m$ and $p_m$ are energy density and pressure of the matter.
Then, instead of (\ref{GBany4}) and (\ref{GBany5}) the FRW equations are
\begin{eqnarray}
\label{GBany16}
0&=& - \frac{3}{\kappa^2}H^2 + \frac{1}{2}{\dot\phi}^2 + V(\phi) + \rho_m
+ 24 H^3 \frac{d \xi_1(\phi(t))}{dt}\ ,\\
\label{GBany17}
0&=& \frac{1}{\kappa^2}\left(2\dot H + 3 H^2 \right) + \frac{1}{2}{\dot\phi}^2
- V(\phi) + p_m
- 8H^2 \frac{d^2 \xi_1(\phi(t))}{dt^2} - 16H \dot H
\frac{d\xi_1(\phi(t))}{dt}
- 16 H^3 \frac{d \xi_1(\phi(t))}{dt}\ .
\end{eqnarray}
The energy conservation law
\begin{equation}
\label{GBany18}
\dot\rho_m + 3H\left(\rho_m + p_m\right)=0\ ,
\end{equation}
gives \begin{equation}
\label{GBany19}
\rho_m=\rho_{m0} a^{-3(1+w_m)}\ ,
\end{equation}
with a constant $\rho_{m0}$.
Instead of (\ref{GBany10b}), if we consider the model with
\begin{eqnarray}
\label{GBany20b}
V(\phi) &=& \frac{3}{\kappa^2}g'\left(f(\phi)\right)^2 - \frac{1}{2f'(\phi)^2}
- 3g'\left(f(\phi)\right) {\rm e}^{g\left(f(\phi)\right)} \int^\phi d\phi_1
f'(\phi_1){\rm e}^{-g\left(f(\phi_1)\right)}
\left(\frac{2}{\kappa^2}g''\left(f(\phi_1)\right) + \frac{1}{2f'(\phi_1 )^2}
\right. \nonumber \\
&& \left. + (1+w_m)g_0{\rm e}^{-3(1+w_m)g\left(f(\phi_1)\right)}\right) \ ,\nonumber \\
\xi_1(\phi) &=& \frac{1}{8}\int^\phi d\phi_1 \frac{f'(\phi_1)
{\rm e}^{g\left(f(\phi_1)\right)} }{g'(\phi_1)^2}
\int^{\phi_1} d\phi_2 f'(\phi_2) {\rm e}^{-g\left(f(\phi_2)\right)}
\left(\frac{2}{\kappa^2}g''\left(f(\phi_2)\right) \right. \nonumber \\
&& \left. + \frac{1}{2f'(\phi_2)^2} +
(1+w_m)g_0{\rm e}^{-3(1+w_m)g\left(f(\phi_2)\right)} \right)\ ,
\end{eqnarray}
we re-obtain the solution (\ref{GBany11b}) even if the matter is
included.
However, the constant $a_0$ is now given by
\begin{equation}
\label{GBany21}
a_0=\frac{g_0}{\rho_0}\ .
\end{equation}
One can consider some explicit examples \cite{e}:
\begin{equation}
\label{GBany22}
t=f(\phi)=\frac{\phi}{\phi_0}\ ,\quad g(t)=h_0\ln \frac{t}{t_s - t}\ ,
\end{equation}
which gives
\begin{equation}
\label{GBany23}
H=h_0\left(\frac{1}{t} + \frac{1}{t_s - t}\right)\ ,\quad
\dot H = \frac{h_0 t_s \left(2t - t_s\right)}{t^2\left(t_s - t\right)^2}\ .
\end{equation}
Then the universe is in non-phantom phase when $t<t_s/2$ and in phantom phase
when $t>t_s/2$.
There is also a Big Rip singularity at $t=t_s$.
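The phase structure of this example can be confirmed numerically. The sketch below (illustrative values $h_0=1$, $t_s=10$) compares the closed form of $\dot H$ in (\ref{GBany23}) with a finite-difference derivative of $H$ and checks the sign flip at $t=t_s/2$:

```python
# Numerical sketch for the example (GBany22)-(GBany23), illustrative values:
# check that Hdot = h0 ts (2t - ts)/(t^2 (ts - t)^2) matches dH/dt and
# changes sign at t = ts/2 (non-phantom -> phantom transition).
h0, ts = 1.0, 10.0

def H(t):
    return h0 * (1.0 / t + 1.0 / (ts - t))

def Hdot_exact(t):
    return h0 * ts * (2.0 * t - ts) / (t**2 * (ts - t)**2)

def Hdot_fd(t, h=1e-6):
    # central finite difference of H(t)
    return (H(t + h) - H(t - h)) / (2.0 * h)

checks = [(Hdot_exact(t), Hdot_fd(t)) for t in (2.0, 5.0 - 1e-3, 5.0 + 1e-3, 8.0)]
print(checks)
```

For $h_0>0$ one indeed finds $\dot H<0$ (non-phantom) at $t<t_s/2$ and $\dot H>0$ (phantom) at $t>t_s/2$.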
In particular, in the case $w_m=0$ (that is, dust matter) and $h_0=2$, we
reconstruct the scalar-Gauss-Bonnet gravity with the following potentials:
\begin{eqnarray}
\label{GBany24}
V(\phi)&=& \frac{6\phi_0 \phi_s}{\kappa^2\phi\left(\phi_s - \phi \right)} -
\frac{1}{2}\phi_0^2
- \frac{4\phi_0^2\phi_s \phi}{\left(\phi_s - \phi\right)^3}\left[
\frac{4}{\kappa^2}\left(\frac{\phi_s^2}{3\phi^3} - \frac{\phi_s}{\phi^2}\right)
- \frac{\phi_s^2}{\phi} - 2\phi_s \ln \frac{\phi}{\phi_s} + \phi \right. \nonumber \\
&& \left. + \frac{g_0}{\phi_0^2}\left(- \frac{\phi_s^8}{7\phi^7} +
\frac{4\phi_s^7}{3\phi^6}
- \frac{28\phi_s^6}{5\phi^5}
+ \frac{14\phi_s^5}{\phi^4} - \frac{70\phi_s^4}{3\phi^3} +
\frac{28\phi_s^3}{\phi^2}
- \frac{28\phi_s^2}{\phi} - 8\phi_s \ln \frac{\phi}{\phi_s} + \phi\right) +
c_1 \right]\ ,\nonumber \\
\xi_1(\phi)&=& \frac{1}{32\phi_0^2\phi_s^2}\left[
\frac{4}{\kappa^2}\left(\frac{\phi_s^2\phi^2}{6}
- \frac{\phi_s\phi^3}{3}\right)
- \frac{\phi_s^2 \phi^4}{4} - 2\phi_s \phi^5\left(\frac{1}{5}\ln
\frac{\phi}{\phi_s} - \frac{1}{25}\right)
+ \frac{\phi^6}{6}
+\frac{g_0}{\phi_0^2}\left(\frac{\phi_s^8}{14\phi^2} - \frac{4\phi_s^7}{3\phi}
- \frac{28\phi_s^6}{5}\ln \frac{\phi}{\phi_s} \right.\right. \nonumber \\
&& \left.\left. + 14\phi_s^5 \phi - \frac{35\phi_s^4 \phi^2}{3} +
\frac{28\phi_s^3\phi^3}{3}
- 7\phi_s^2\phi^4 - 8 \phi_s \phi^5\left(\frac{1}{5}\ln\frac{\phi}{\phi_s} -
\frac{1}{25}\right)
+ \frac{\phi^6}{6}\right) + \frac{c_1 \phi^5}{5} + c_2 \right]\ .
\end{eqnarray}
Here $\phi_s\equiv \phi_0 t_s$ and $c_1$, $c_2$ are constants of the
integration.
Another example, without matter ($g_0=0$), is \cite{osc}
\begin{equation}
\label{GBany25}
g(t) = h_0\left( t + \frac{\cos \theta_0}{\omega}\sin \omega t \right)\ ,\quad
f^{-1}(t)= \phi_0 \sin\frac{\omega t}{2}\ .
\end{equation}
Here $h_0$, $\theta_0$, $\omega$, and $\phi_0$ are constants.
This leads to reconstruction of scalar-Gauss-Bonnet gravity with \begin{eqnarray}
\label{GBany26}
V(\phi)&=&\frac{3h_0}{\kappa^2}\left(1 + \cos\theta_0 -
\frac{2\cos\theta_0}{\phi_0^2}\phi^2\right)
- \frac{\phi_0^2 \omega^2}{8}\left( 1 -
\frac{\phi^2}{\phi_0^2}\right)^{1/2}\
,\nonumber \\
\xi_1(\phi)&=& - \frac{\omega\phi_0}{32h_0^3}\int^\phi d\phi_1 \left( 1 -
\frac{\phi_1^2}{\phi_0^2}\right)^{-1/2}
\left( 1 + \cos\theta_0 - \frac{2\cos\theta_0}{\phi_0^2}\phi_1^2 \right)^{-2}\
,
\end{eqnarray}
Then from Eq.(\ref{GBany25}) we find
\begin{equation}
\label{GBany27}
H=h_0\left( 1 + \cos \theta_0 \cos \omega t \right)\geq 0 \ ,\quad
\dot H = - h_0 \omega \cos \theta_0 \sin \omega t\ ,
\end{equation}
Thus the Hubble rate $H$ oscillates, but since $H$ is positive
the universe keeps expanding; if $h_0 \omega \cos \theta_0>0$, the
universe is in the non-phantom (phantom)
phase when $2n \pi < \omega t < \left(2n + 1\right) \pi$ $\left(\left(2n -
1\right) \pi < \omega t < 2n \pi\right)$
with integer $n$. Thus, an oscillating late-time cosmology in string-inspired
gravity may be easily constructed.
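These statements are straightforward to verify numerically. With illustrative values $h_0=1$, $\theta_0=0.3$, $\omega=2$, the derivative of $g(t)$ in (\ref{GBany25}) should reproduce $H$ of (\ref{GBany27}) and remain non-negative:

```python
# Sketch for the oscillating example (GBany25), illustrative parameters:
# H = g'(t) should equal h0 (1 + cos(theta0) cos(omega t)) and stay >= 0.
import math

h0, theta0, omega = 1.0, 0.3, 2.0

def g(t):
    return h0 * (t + math.cos(theta0) / omega * math.sin(omega * t))

def H_exact(t):
    return h0 * (1.0 + math.cos(theta0) * math.cos(omega * t))

samples = [0.0, 0.7, 1.9, 3.14, 5.5]
# central finite difference of g(t) at each sample
H_fd = [(g(t + 1e-6) - g(t - 1e-6)) / 2e-6 for t in samples]
print(H_fd)
```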
One more example is \cite{osc}
\begin{equation}
\label{GBany28}
g(t)=H_0 t - \frac{H_1}{H_0}\ln \cosh H_0 t\ .
\end{equation}
Here we assume $H_1>H_0>0$. Since
\begin{equation}
\label{GBany29}
H=g'(t)=H_0 - H_1\tanh H_0 t\ ,\quad
\dot H=g''(t)=- \frac{H_0H_1}{\cosh^2 H_0 t}<0\ ,
\end{equation}
when $t\to \pm \infty$, the universe becomes asymptotically de Sitter,
where $H$ approaches a constant,
$H\to H_0 \mp H_1$, and therefore the universe is accelerating.
When $t=0$, we find
\begin{equation}
\label{GBany30}
\frac{\ddot a}{a}=\dot H + H^2 = -H_1 H_0 + H_0^2 < 0\ ,
\end{equation}
therefore the universe is decelerating there.
Thus the universe accelerates at first, turns to deceleration, and
afterwards becomes
accelerating again. As $\dot H$ is always negative, the universe is always in the
non-phantom phase.
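The sequence acceleration--deceleration--acceleration can be illustrated numerically. Note that an intermediate decelerating stage requires $H_1>H_0$ at $t=0$, cf.\ (\ref{GBany30}); the values below, $H_0=1$, $H_1=2$, are purely illustrative:

```python
# Sketch for the example (GBany28) with illustrative values H1 > H0:
# a''/a = Hdot + H^2 should be positive at early and late times and
# negative near t = 0 (the intermediate decelerating stage).
import math

H0, H1 = 1.0, 2.0   # illustrative choice with H1 > H0

def acc(t):
    # \ddot a / a = Hdot + H^2 with H and Hdot from Eq. (GBany29)
    H = H0 - H1 * math.tanh(H0 * t)
    Hdot = -H0 * H1 / math.cosh(H0 * t)**2
    return Hdot + H**2

signs = [acc(-10.0), acc(0.0), acc(10.0)]
print(signs)
```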
Furthermore with the choice
\begin{equation}
\label{GBany31}
w_m=0\ ,\quad t=f(\phi)= \frac{1}{H_0}\tan \left(\frac{\kappa H_0}{2\sqrt{2}
H_1}\phi\right) \ ,
\end{equation}
we find the corresponding scalar-Gauss-Bonnet gravity
\begin{eqnarray}
\label{GBany32}
V(\phi)&=&\frac{3}{\kappa^2}\left(H_0 - H_1 \tanh \varphi \right)^2 -
\frac{H_1}{\sqrt{2} \kappa^2 \cosh^2 \varphi} \nonumber \\
&& - \frac{12g_0}{H_0}\left(H_0 - H_1 \tanh \varphi
\right)\left(1+{\rm e}^{2\varphi}\right)
\left[ 2\varphi - \ln \left(1+{\rm e}^{2\varphi}\right) +
\frac{5}{6\left(1+{\rm e}^{2\varphi}\right)}
+ \frac{5}{6\left(1+{\rm e}^{2\varphi}\right)^2} +
\frac{2}{6\left(1+{\rm e}^{2\varphi}\right)^3}\right]\ ,\nonumber \\
\xi_1(\phi)&=& \frac{g_0}{2H_0}\int^\varphi d\varphi' \frac{1+{\rm e}^{2\varphi'}}{
\left(H_0 - H_1 \tanh \varphi' \right)^2}
\left[ 2\varphi' - \ln \left(1+{\rm e}^{2\varphi'}\right) +
\frac{5}{6\left(1+{\rm e}^{2\varphi'}\right)}
+ \frac{5}{6\left(1+{\rm e}^{2\varphi'}\right)^2} +
\frac{2}{6\left(1+{\rm e}^{2\varphi'}\right)^3}\right] .
\end{eqnarray}
Here
\begin{equation}
\label{GBany33}
\varphi\equiv \tan \left(\frac{\kappa H_0}{2\sqrt{2} H_1}\phi\right) \ .
\end{equation}
Although it is difficult to give the explicit forms of $V(\phi)$ and
$\xi_1(\phi)$, we may also consider the following example \cite{e}:
\begin{equation}
\label{GBA1}
g(t)=h_0 \left(\frac{t^4}{12} - \frac{t_1+t_2}{6}t^3 + \frac{t_1
t_2}{2}t^2\right)\ ,\quad
\left(3t_1>t_2>t_1>0\ , \quad h_0>0\right)\ .
\end{equation}
Here $h_0$, $t_1$, $t_2$ are constants.
Hence, the Hubble rate is
\begin{equation}
\label{GBA2}
H(t) = h_0 \left(\frac{t^3}{3} - \frac{t_1 + t_2}{2}t^2 + t_1 t_2 t \right)\
,\quad
\dot H (t) = h_0 \left(t-t_1\right)\left(t - t_2\right)\ .
\end{equation}
Since $H>0$ when $t>0$ and $H<0$ when $t<0$, the radius of the universe
$a=a_0{\rm e}^{g(t)}$ has a minimum when $t=0$.
From the expression of $\dot H$ in (\ref{GBA2}), the universe is in phantom
phase $\left(\dot H>0\right)$
when $t<t_1$ or $t>t_2$, and in non-phantom phase $\left(\dot H<0\right)$ when
$t_1<t<t_2$ (for other string-inspired models with similar cosmology, see
for instance \cite{ar}).
The period $0<t<t_1$ could then correspond to inflation and
the period $t>t_2$ to the present acceleration of the universe (this is
similar in spirit to the unification of inflation with late-time acceleration
suggested in another class of modified gravities in Ref.\cite{prd}).
If we define effective EoS parameter $w_{\rm eff}$ as
\begin{equation}
\label{FRW3k}
w_{\rm eff}=\frac{p}{\rho}= -1 - \frac{2\dot H}{3H^2}\ ,
\end{equation}
we find $w_{\rm eff}\to -1$ in the limit $t\to + \infty$.
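The phase intervals and the limit $w_{\rm eff}\to-1$ for this example can be checked directly from (\ref{GBA2}) and (\ref{FRW3k}); the values $h_0=1$, $t_1=1$, $t_2=2$ below are illustrative and satisfy $3t_1>t_2>t_1>0$:

```python
# Sketch for the example (GBA1)-(GBA2), illustrative parameters:
# Hdot = h0 (t - t1)(t - t2) changes sign at t1 and t2, and
# w_eff = -1 - 2 Hdot/(3 H^2) approaches -1 at late times.
h0, t1, t2 = 1.0, 1.0, 2.0

def H(t):
    return h0 * (t**3 / 3.0 - (t1 + t2) / 2.0 * t**2 + t1 * t2 * t)

def Hdot(t):
    return h0 * (t - t1) * (t - t2)

def w_eff(t):
    # effective EoS parameter of Eq. (FRW3k)
    return -1.0 - 2.0 * Hdot(t) / (3.0 * H(t)**2)

print(Hdot(0.5), Hdot(1.5), Hdot(3.0), w_eff(100.0))
```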
Although it is difficult to find the explicit forms of $V(\phi)$ and
$\xi_1(\phi)$, their rough forms may be obtained
numerically. From the expressions for $V(\phi)$ and
$\xi_1(\phi)$ in (\ref{GBany10b}),
if $f(\phi)$ is properly given, say as $t=f(\phi)=\phi/\phi_0$ with constant
$\phi_0$, no
singularity occurs in $V(\phi)$ and $\xi_1(\phi)$ even at $t=t_1$ or $t=t_2$, which
correspond to the transitions
between the phantom and non-phantom phases. Hence the model (\ref{GBany1}) can
exhibit a smooth transition
between the phantom and non-phantom phases.
The next example is
\begin{equation}
\label{GBany37}
g(t)=h_0\ln \frac{t}{t_0}\ ,\quad
t=f(\phi)=t_0{\rm e}^{\frac{\phi}{\phi_0}}\ .
\end{equation}
Since
\begin{equation}
\label{GBany38}
H=\frac{h_0}{t}\ ,
\end{equation}
we have a constant effective EoS parameter:
\begin{equation}
\label{GBany39}
w_{\rm eff}=-1 + \frac{2}{3h_0}\ .
\end{equation}
Eqs.(\ref{GBany37}) give
\begin{eqnarray}
\label{GBany40}
V(\phi)&=& -\frac{1}{\left(h_0 +
1\right)t_0^2}\left(\frac{3h_0^2\left(1-h_0\right)}{\kappa^2}
+ \frac{\phi_0^2}{2}\left(1-5h_0\right)\right) {\rm e}^{-\frac{2\phi}{\phi_0}}
+ \frac{3h_0\left(1+w_m\right)g_0}{\left( 4+3w_m\right)h_0 - 1}
{\rm e}^{-\frac{3\left(1+w_m\right)h_0\phi}{\phi_0}}\ , \nonumber \\
\xi_1(\phi)&=& -\frac{t_0^2}{16h_0^2\left(h_0 + 1\right)}\left( -
\frac{2h_0}{\kappa^2} + \phi_0^2
\right){\rm e}^{\frac{2\phi}{\phi_0}} \nonumber \\
&& + \frac{1}{8}\left\{3\left(1+w_m\right)h_0 - 4\right\}^{-1}
\left\{\left(4+3w_m\right)h_0 - 1\right\}^{-1}
\left(1+w_m\right)g_0 t_0^4 {\rm e}^{-\frac{\left\{3\left(1+w_m\right)h_0 -
4\right\}\phi}{\phi_0}}\ .
\end{eqnarray}
Thus, there appear exponential functions, which are typical in string-inspired
gravity.
As is clear from (\ref{GBany39}), if $h_0>1$, the universe is in the quintessence
phase, which corresponds to
$-1<w_{\rm eff}<-1/3$. If $h_0<0$, the universe is in the
phantom phase with $w_{\rm eff}<-1$.
In the phantom phase, we choose $t_0$ to be negative, and our universe
corresponds to negative $t$; alternatively, if we
shift the time coordinate as $t\to t - t_s$, with a constant $t_s$, then $t$
should be less than $t_s$.
The model \cite{Nojiri:2005vv} corresponds to $g_0=0$ in (\ref{GBany40}).
In the notations of ref.\cite{Nojiri:2005vv}, $t_0=t_1$,
$V(\phi)=V_0{\rm e}^{-\frac{2\phi}{\phi_0}}$, and
$f(\phi)=f_0{\rm e}^{\frac{2\phi}{\phi_0}}=-\xi_1(\phi)$. Then from the expression
(\ref{GBany40}), one gets
\begin{equation}
\label{GBany40a}
V_0= -\frac{1}{\left(h_0 +
1\right)t_0^2}\left(\frac{3h_0^2\left(1-h_0\right)}{\kappa^2}
+ \frac{\phi_0^2}{2}\left(1-5h_0\right)\right)\ , \quad
f_0= \frac{t_0^2}{16h_0^2\left(h_0 + 1\right)}\left( - \frac{2h_0}{\kappa^2} +
\phi_0^2 \right) \ ,
\end{equation}
which coincides (after replacing $t_0$ with $t_1$) with Eq.(16) of
\cite{Nojiri:2005vv}.
Thus, we have demonstrated that an arbitrary late-time cosmology (from specific
quintessence or phantom to oscillating cosmology) may be produced by
scalar-Gauss-Bonnet gravity with scalar potentials defined by that cosmology.
The reconstruction of string-inspired gravity may always be carried out. Moreover, one
can extend this formulation to include the higher order terms in the low-energy
string effective action.
\subsection{The relation with modified Gauss-Bonnet gravity}
In this section we show that scalar-Gauss-Bonnet gravity may be transformed
into another form of modified Gauss-Bonnet gravity in which no scalar is present.
In addition, the formulation may be extended to include higher order terms
too.
Starting from (\ref{GBany1}), one may redefine the scalar field $\phi$ by
$\phi=\epsilon
\varphi$.
The action takes the following form
\begin{equation}
\label{GBany41}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} -
\frac{\epsilon^2}{2}\partial_\mu \varphi \partial^\mu \varphi
- \tilde V(\varphi) - \tilde\xi_1(\varphi) G \right]\ .
\end{equation}
Here
\begin{equation}
\label{GBany42}
\tilde V(\varphi) \equiv V(\epsilon\varphi)\ ,\quad
\tilde\xi_1(\varphi) \equiv \xi_1(\epsilon\varphi)\ .
\end{equation}
If a proper limit of $\epsilon\to 0$ exists, the action (\ref{GBany41}) reduces
to
\begin{equation}
\label{GBany43}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - \tilde V(\varphi) -
\tilde\xi_1(\varphi) G \right]\ .
\end{equation}
Then $\varphi$ is an auxiliary field. By the variation of
$\varphi$, we find
\begin{equation}
\label{GBany44}
0={\tilde V}'(\varphi) - {\tilde\xi_1}'(\varphi)G\ ,
\end{equation}
which may be solved with respect to $\varphi$ as
\begin{equation}
\label{GBany45}
\varphi=\Phi(G)\ .
\end{equation}
Substituting (\ref{GBany45}) into the action (\ref{GBany43}), the
$F(G)$-gravity follows \cite{GB}:
\begin{equation}
\label{GBany46}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - F(G)\right]\ ,\quad
F(G)\equiv \tilde V\left(\Phi(G)\right) - \tilde\xi_1\left(\Phi(G)\right) G\ .
\end{equation}
For example, in the case of (\ref{GBany37}), in the $\epsilon\to 0$ limit
after redefining
$\phi=\epsilon\varphi$ and $\phi_0=\epsilon\varphi_0$, $V(\phi)$ and
$\xi_1(\phi)$ reduce to
\begin{eqnarray}
\label{GBany47}
\tilde V(\varphi)&=& \frac{3h_0^2\left(h_0 - 1\right)}{\left(h_0 +
1\right)t_0^2\kappa^2}{\rm e}^{-\frac{2\varphi}{\varphi_0}}
+ \frac{3h_0\left(1+w_m\right)g_0}{\left( 4+3w_m\right)h_0 - 1}
{\rm e}^{-\frac{3\left(1+w_m\right)h_0\varphi}{\varphi_0}}\ , \nonumber \\
\tilde\xi_1(\varphi)&=& \frac{t_0^2}{8h_0\left(h_0 +
1\right)\kappa^2}{\rm e}^{\frac{2\varphi}{\varphi_0}}
+ \frac{1}{8}\left\{3\left(1+w_m\right)h_0 - 4\right\}^{-1}
\left\{\left(4+3w_m\right)h_0 - 1\right\}^{-1}
\left(1+w_m\right)g_0 t_0^4 {\rm e}^{-\frac{\left\{3\left(1+w_m\right)h_0 -
4\right\}\varphi}{\varphi_0}}\ .
\end{eqnarray}
The solution corresponding to (\ref{GBany37}) is:
\begin{equation}
\label{GBany48}
g(t)=h_0\ln \frac{t}{t_0}\ ,\quad \varphi=\varphi_0\ln \frac{t}{t_0} .
\end{equation}
If we further consider the case $g_0=0$, Eq.(\ref{GBany44}) gives
\begin{equation}
\label{GBany49}
{\rm e}^{-\frac{4\varphi}{\varphi_0}}=\frac{t_0^4}{24 h_0^3 \left(h_0-1\right)}G\ .
\end{equation}
Eq.(\ref{GBany49}) is meaningful only when $h_0>1$ or $h_0<0$, provided
$G$ is positive.
In this situation
\begin{equation}
\label{GBany50}
F(G)=A_0 G^{1/2}\ ,\quad
A_0\equiv \frac{1}{2\left(1+h_0\right)\kappa^2}\sqrt{\frac{3\left(h_0 -
1\right)h_0}{2}}\ .
\end{equation}
The above model has been discussed in \cite{GB}, where
the following type of action was considered:
\begin{equation}
\label{GB1}
S=\int d^4 x\sqrt{-g}\left(\frac{1}{2\kappa^2}R + F(G)\right)\ .
\end{equation}
When $F(G)$ is given by (\ref{GBany50}), one has, in the notations of \cite{GB},
$A_0=f_0$.
Hence, $A_0$ in (\ref{GBany50}) coincides with
Eq.(26) of \cite{GB}.
As a further generalization, we may also consider the
string-inspired theory of the second section, where the next order term is
coupled with the scalar field:
\begin{equation}
\label{Eb1}
S=\int d^4 x \sqrt{-g}\Bigl[ \frac{R}{2\kappa^2} - \frac{1}{2}\partial_\mu \phi
\partial^\mu \phi
- V(\phi) - \xi_1(\phi) G + \xi_2(\phi) {\cal L}_c^{(2)}\Bigr]\ .
\end{equation}
As in (\ref{GBany41}), we may redefine the scalar field $\phi$ by
$\phi=\epsilon \varphi$.
If a proper limit of $\epsilon\to 0$ exists, the action (\ref{Eb1}) reduces to
\begin{equation}
\label{Eb2}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - \tilde V(\varphi)
- \tilde\xi_1(\varphi) G + \tilde\xi_2(\varphi){\cal L}_c^{(2)} \right]\ .
\end{equation}
Here
\begin{equation}
\label{Eb3}
\tilde\xi_2 = \lim_{\epsilon\to 0}\xi_2(\epsilon\varphi)\ .
\end{equation}
Then $\varphi$ could be regarded as an auxiliary field and one gets
\begin{equation}
\label{Eb4}
0={\tilde V}'(\varphi) - {\tilde\xi_1}'(\varphi)G + {\tilde\xi_2}'{\cal
L}_c^{(2)} \ ,
\end{equation}
which may be solved with respect to $\varphi$ as
\begin{equation}
\label{Eb5}
\varphi=\Psi\left(G,{\cal L}_c^{(2)}\right)\ .
\end{equation}
Substituting (\ref{Eb5}) into the action (\ref{Eb2}), we obtain
$F(G,{\cal L}_c^{(2)})$-gravity theory:
\begin{equation}
\label{Eb6}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - F(G,{\cal
L}_c^{(2)})\right]\ ,\quad
F(G,{\cal L}_c^{(2)})\equiv \tilde V\left(\Psi(G,{\cal L}_c^{(2)})\right) -
\tilde\xi_1\left(\Psi(G,{\cal L}_c^{(2)})\right)
G
+ \tilde\xi_2\left(\Psi(G,{\cal L}_c^{(2)})\right){\cal L}_c^{(2)}\ .
\end{equation}
In the case of string-inspired gravity, one has:
\begin{equation}
\label{E7}
V=V_0{\rm e}^{-\frac{2\phi}{\phi_0}}\ ,\quad
\xi_1=\xi_0{\rm e}^{\frac{2\phi}{\phi_0}}\ ,\quad
\xi_2=\eta_0{\rm e}^{\frac{4\phi}{\phi_0}}\ .
\end{equation}
Here $\phi_0$, $V_0$, $\xi_0$, and $\eta_0$ are constants.
We may consider the limit of $\epsilon\to 0$ after redefining
$\phi=\epsilon\varphi$ and
$\phi_0=\epsilon\varphi_0$. Thus, Eq.(\ref{Eb4}) gives
\begin{equation}
\label{E8}
{\rm e}^{\frac{2\varphi}{\varphi_0}} = \Theta (G, {\cal L}_c^{(2)})
\equiv \frac{\xi_0 G}{2\eta_0 {\cal L}_c^{(2)}} + Y(G, {\cal L}_c^{(2)})\ .
\end{equation}
Here
\begin{equation}
\label{E8b}
Y(G,{\cal L}_c^{(2)}) = y_+ + y_- \ ,\quad
y_+ {\rm e}^{\frac{2}{3}\pi i} + y_- {\rm e}^{\frac{4}{3}\pi i} \ ,\quad
{\rm or} \quad y_+ {\rm e}^{\frac{4}{3}\pi i} + y_- {\rm e}^{\frac{2}{3}\pi i}\ ,
\end{equation}
and
\begin{equation}
\label{E8c}
y_\pm \equiv \left\{\frac{V_0}{4\eta_0 {\cal L}_c^{(2)}} \pm
\sqrt{\left(\frac{V_0}{4\eta_0 {\cal L}_c^{(2)}}\right)^2 - \left(\frac{\xi_0
G}{6\eta_0
{\cal L}_c^{(2)}}\right)^6}
\right\}^{1/3}\ .
\end{equation}
Hence, the action of the corresponding $F(G,{\cal L}_c^{(2)})$-theory is
\begin{eqnarray}
\label{E9}
S&=&\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - F(G,{\cal
L}_c^{(2)})\right]\ ,\nonumber \\
F(G,{\cal L}_c^{(2)})&= & \frac{V_0}{\Theta\left(G,{\cal L}_c^{(2)}\right)} -
\xi_0\Theta\left(G,{\cal L}_c^{(2)}\right) G
+ \eta_0\Theta\left(G,{\cal L}_c^{(2)}\right)^2{\cal L}_c^{(2)}\ .
\end{eqnarray}
Instead of (\ref{GBany1}), one may consider the model with one more scalar
field
$\chi$
coupled with the Gauss-Bonnet
invariant:
\begin{equation}
\label{GBany51}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - \frac{1}{2}\partial_\mu \phi
\partial^\mu \phi
- \frac{\epsilon}{2}\partial_\mu \chi \partial^\mu \chi
- V(\phi) - U(\chi) - \left(\xi_1(\phi) + \theta(\chi)\right)G\right]\ .
\end{equation}
This kind of action often appears in models inspired by string
theory \cite{ART}.
In such models, one scalar $\phi$ may correspond to the dilaton and another
scalar $\chi$ to modulus.
We now consider the case where the derivative of $\chi$, $\partial_\mu \chi$, is
small, or $\epsilon$ is very small.
Then we may neglect the kinetic term of $\chi$, and $\chi$ can be regarded as
an auxiliary field. Repeating
the process (\ref{GBany44}-\ref{GBany46}), we obtain the $F(G)$-gravity coupled
with the scalar field $\phi$:
\begin{equation}
\label{GBany52}
S=\int d^4 x \sqrt{-g}\left[ \frac{R}{2\kappa^2} - \frac{1}{2}\partial_\mu \phi
\partial^\mu \phi
- V(\phi) - \xi_1(\phi) G + F(G)\right]\ .
\end{equation}
The relation between scalar-Gauss-Bonnet gravity and modified Gauss-Bonnet
gravity (or two parameterizations of the same theory) is discussed in this
section. It is shown that cosmological solutions
obtained in one of such theories may be used (with a different physical
interpretation; compare with \cite{salvatore}) in another theory.
It often turns out that it is easier to work with a specific
parametrization of the same theory. Of course, only comparison with
observational data may select the truly physical theory in the correct
parametrization.
\section{Conclusion}
In this paper we have studied several aspects of (dilaton) gravity in the
presence of string corrections up to
third order in curvature. The second-order term is the Euler density of order
two, called the Gauss-Bonnet term.
The next-to-leading term contains the higher-order Euler density ($E_3$)
plus a term of order three
in curvature. The expression of $E_3$ is identically zero in space-times of
dimension less than six; the term beyond the Euler
density contributes
to the equation of motion even for a fixed field $\phi$. We have verified that
the de Sitter solution which
exists in the case of Type II and Bosonic strings is an unstable node. It
is shown that in the presence
of a barotropic fluid (radiation/matter), an inflationary solution exists in the
high-curvature regime for a constant field.
For a dynamically evolving field $\phi$ canonical in nature, there exists an
interesting
dark energy solution (\ref{solution}) characterized by $H={h_0}/{t}$, $\phi=\phi_0\ln(t/t_1)$
for $h_0>0$
($H={h_0}/(t_s-t)$, $\phi=\phi_0\ln((t_s-t)/t_1)$ when $h_0<0$). The
three-year
WMAP data taken together with the SNLS survey \cite{Spergel} suggest that
$w_{DE}=-1.06^{+0.13}_{-0.08}$. We have shown that
by choosing a range of the parameter $\phi_0^2$ (which is amplified thanks to
the third order curvature term contribution) we can easily obtain the observed
values of $w_{DE}$
for phantom as well as for non-phantom dark energy.
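As a quick consistency check (ours, not part of the reconstruction above): for a power-law expansion with $H=h_0/t$ one has $a\propto t^{h_0}$, so the effective equation-of-state parameter of the background is $w=-1+2/(3h_0)$, with $w<-1$ (phantom) corresponding to negative $h_0$ in this formula. A minimal sketch of the relation and its inverse:

```python
import math

def w_eff(h0):
    """Effective EoS parameter for a power-law background H = h0/t."""
    return -1.0 + 2.0 / (3.0 * h0)

def h0_for_w(w):
    """Inverse relation, valid for w != -1."""
    return 2.0 / (3.0 * (w + 1.0))
```

For the central WMAP+SNLS value $w_{DE}=-1.06$ the inverse relation gives a negative $h_0$, i.e. the phantom branch.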
We have demonstrated, in detail, the stability of the dark energy solution. For non-phantom dark energy,
in the large-$h_0$
limit, we presented an analytical solution which shows that one of the eigenvalues of the $3 \times 3$
perturbation matrix is real and negative whereas the other two are purely imaginary, thereby
establishing the stability of solution (\ref{solution}). We have verified numerically that stability holds
for all smaller and generic values of $h_0$ in this case. The phantom dark energy solution
corresponding to $h_0<0$ turns out to be unstable.
It is remarkable that
string curvature corrections
can account for late time acceleration and dark energy can be realized
without the introduction of a field potential.
It is shown how scalar-Gauss-Bonnet gravity may be reconstructed for any
given cosmology. The corresponding scalar potentials for several
dark energy cosmologies including quintessence, phantom, cosmological
constant or oscillatory regimes are explicitly found.
This shows that, given a realistic scale factor evolution, it is in principle
possible to construct a string-inspired gravity where such
evolution is realized. It is explained how to transform
scalar-Gauss-Bonnet gravity (even with account of third order curvature term)
to modified Gauss-Bonnet gravity \cite{GB} which seems to pass
the Solar System tests.
Different forms of modified gravity have been attempted recently
(for a review, see \cite{rev3}) to describe the dark energy universe;
these models provide a qualitatively simple resolution of dark
energy/coincidence problems and deserve further consideration.
It is quite likely that the time has come to reconsider the basics of General
Relativity in the late universe in the search for a realistic modified
gravity/dark energy theory.
We should also mention that in the present study we have tested the
background model against observations. The study of perturbations
in the scenario discussed here is quite complicated and challenging
and in our opinion it deserves attention; we defer this investigation to our future work.
\section*{Acknowledgments}
The research of SDO is supported in part by the project
FIS2005-01181 (MEC, Spain), by LRSS project N4489.2006.02 and by
RFBR grant 06-01-00609 (Russia). MS thanks S. Panda, I.~Neupane and S. Tsujikawa and SDO
thanks M. Sasaki for
useful discussions.
\section{Introduction}
The puzzling question of the origin of compact stars (CS) with masses exceeding $2~{\rm M}_\odot$ can at present be successfully addressed only within the supernova (SN) explosion mechanism based on quark deconfinement
in the stellar matter \cite{Fischer:2017lag}. This serves as an indirect argument in favor of the existence of quark matter in cores of heavy CS. Binary CS mergers could produce a distinct postmerger gravitational wave signal \cite{Bauswein:2018bma}.
These interesting applications are summarised in \cite{Bauswein:2022vtq}.
They are based on a hybrid equation of state (EoS) that has been constructed from hadronic and quark matter EoS developed within relativistic density functional (RDF) approaches \cite{Kaltenborn:2017hus,Ivanytskyi:2022oxv}. In this contribution, we summarize recent developments of the RDF approach to quark matter which, beyond confinement, also address the aspects of chiral symmetry breaking and color superconductivity. In particular, the occurrence of a large diquark pairing gap modifies the phase structure and EoS of QCD at low temperatures and is thus of central interest for the discussion of the existence and location of one or more critical endpoints (CEPs). The developed constructive scheme generates thermodynamically consistent EoS with multiple or absent CEPs and provides a solid basis for discussing their effects in simulations of astrophysical phenomena and heavy-ion collisions (HIC).
\section{Relativistic density functional for quark matter}
The RDF approach from Ref. \cite{Ivanytskyi:2022oxv} is represented by the Lagrangian
\begin{eqnarray}
\label{I}
\mathcal{L}=\overline{q}(i\slashed\partial- m)q-
G_V(\overline{q}\gamma_\mu q)^2+
G_D(\overline{q}i\gamma_5\tau_2\lambda_A q^c)(\overline{q}^ci\gamma_5\tau_2\lambda_A q)-\mathcal{U}
\end{eqnarray}
with two-flavor quark field $q^T=(u~d)$, current quark mass $m$ and $G_V$, $G_D$ being the coupling constants in the vector repulsion and diquark pairing channels, respectively. A chirally symmetric generalization of the potential energy density functional inspired by the string-flip model (SFM) \cite{Kaltenborn:2017hus} reads
\begin{eqnarray}
\mathcal{U}&=&D_0\left[(1+\alpha)\langle \overline{q}q\rangle_0^2
-(\overline{q}q)^2-(\overline{q}i\gamma_5\vec\tau q)^2\right]^{\frac{1}{3}}\nonumber\\
\label{II}
&\simeq&\mathcal{U}_{MF}+
(\overline{q}q-\langle\overline{q}q\rangle)\Sigma_{MF}-
G_{S}(\overline{q}q-\langle\overline{q}q\rangle)^2-
G_{PS}(\overline{q}i\gamma_5\vec\tau q)^2.
\end{eqnarray}
Here $\alpha$ and $D_0$ are constants and $\langle \overline{q}q\rangle_0$ is the chiral condensate in the vacuum. The last line in Eq. (\ref{II}) corresponds to the second order expansion of $\mathcal{U}$ around the mean-field solutions $\langle \overline{q}q\rangle$ and $\langle \overline{q}i\gamma_5\vec\tau q\rangle=0$ labeled with the subscript index ``$MF$''. This expansion brings the present model to the form of the NJL model with the mean-field scalar self-energy of quarks $\Sigma_{MF}=\partial\mathcal{U}_{MF}/\partial\langle\overline{q}q\rangle$ and effective couplings in the scalar $G_S=-\partial^2\mathcal{U}_{MF}/\partial\langle\overline{q}q\rangle^2/2$ and pseudoscalar $G_{PS}=-\partial^2\mathcal{U}_{MF}/\partial\langle\overline{q}i\gamma_5\vec\tau q\rangle^2/6$ channels. In Ref. \cite{Ivanytskyi:2022oxv} the model parameters $m=4.2$ MeV, $\Lambda=573$ MeV, $\alpha=1.43$ and $D_0\Lambda^{-2}=1.39$ were fixed in order to reproduce the pion mass $M_\pi=140$ MeV and decay constant $F_\pi=92$ MeV, with the scalar meson mass $M_\sigma=980$ MeV and the vacuum value of the chiral condensate per flavor $\langle\overline{l}l\rangle_0=-(267~{\rm MeV})^3$.
We note that $\Lambda$ is a three-momentum scale which occurs in the smooth momentum cut-off by a Gaussian formfactor which regularizes divergent zero-point terms. The behavior of $G_S$ and $G_{PS}$ as well as of the effective quark mass $m^*=m+\Sigma_{MF}$ is shown in Fig. \ref{fig1}. The dynamical breaking of chiral symmetry leads to $G_S\neq G_{PS}$ in the vacuum, while its dynamical restoration at high temperatures and/or densities is manifested by the asymptotic coincidence of the scalar and pseudoscalar couplings. This is reflected in the melt-down of $m^*$. Its vacuum value $m_0^*$ is controlled by the parameter $\alpha$ so that $m_0^*\rightarrow\infty$ at $\alpha\rightarrow0$. For the mentioned set of parameters, $m_0^*=718$ MeV and the pseudocritical temperature at $\mu_B=0$, defined by the peak of the chiral susceptibility, is 163 MeV.
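The relation between the mean-field functional and its expansion coefficients can be checked numerically. The sketch below (our illustration, in dimensionless units with placeholder values for $D_0$, $\alpha$ and the vacuum condensate, not the fitted MeV-scale parameters) compares the analytic self-energy $\Sigma_{MF}=\partial\mathcal{U}_{MF}/\partial\langle\overline{q}q\rangle$ of $\mathcal{U}_{MF}=D_0[(1+\alpha)\langle\overline{q}q\rangle_0^2-\langle\overline{q}q\rangle^2]^{1/3}$ with a finite-difference estimate:

```python
import math

# Illustrative (dimensionless) parameters -- NOT the fitted MeV-scale values.
D0, alpha, cond0 = 1.0, 1.43, 1.0

def U_mf(c):
    """Mean-field potential U_MF = D0 [ (1+alpha) cond0^2 - c^2 ]^(1/3)."""
    return D0 * ((1.0 + alpha) * cond0**2 - c**2) ** (1.0 / 3.0)

def Sigma_mf(c):
    """Analytic scalar self-energy dU_MF/dc."""
    return (-2.0 * D0 * c / 3.0) * ((1.0 + alpha) * cond0**2 - c**2) ** (-2.0 / 3.0)
```

The parameter $\alpha>0$ keeps the bracket positive for condensates up to $\sqrt{1+\alpha}$ times the vacuum value, so the cubic root stays real in the region of interest.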
The quark matter EoS is obtained by treating the present model within the mean-field approximation.
It is remarkable that the BCS relation between the mass gap in the vacuum and the critical temperature for its restoration, which holds for the (P)NJL model in the chiral limit, is violated for this class of quark matter models.
\begin{figure}[t]
\includegraphics[width=0.32\columnwidth]{T_G}
\includegraphics[width=0.32\columnwidth]{mu_G}
\includegraphics[width=0.32\columnwidth]{n_m}
\caption{Scaled effective scalar $G_S\Lambda^2$ and pseudoscalar $G_{PS}\Lambda^2$ couplings as functions of temperature $T$ at $\mu_B=0$ (left panel), baryonic chemical potential $\mu_B$ at $T=0$ (middle panel) and effective quark mass $m^*$ as function of baryon density $n_B$ (right panel) from Ref. \cite{Ivanytskyi:2022oxv}. Dashed lines on the left and middle panels represent the NJL value $G\Lambda^2=2.14$ \cite{Ratti:2005jh}. Dotted curves on the middle panel indicate the unstable parts that are removed by applying the Maxwell construction. The blue dotted line on the right panel is obtained within the SFM with $\alpha_{SFM}=0.39~{\rm fm}^{-3}$ \cite{Kaltenborn:2017hus}. Calculations are performed for symmetric quark matter, $G_V=G_D=0$, $\alpha$ specified in the legend and the rest of the model parameters with the values mentioned above.}
\label{fig1}
\end{figure}
\section{Phase diagram of strongly interacting matter}
High values of the effective quark mass at low $T$ and $\mu_B$ represent phenomenological confinement in the RDF approach. This makes a description of strongly interacting matter in terms of quark degrees of freedom inadequate in the confinement region and requires matching the quark matter EoS to the hadronic one, yielding a hybrid quark-hadron EoS. Within the Maxwell construction of the quark-hadron transition, the matching point is defined by the baryon chemical potential $\mu_B^{\rm Max}$ at which the pressures of the two phases coincide, while the baryon density discontinuously jumps from $n_B^h|_{\rm Max}$ on the hadron side to $n_B^q|_{\rm Max}$ on the quark one. This picture ignores inhomogeneous structures in the quark-hadron interface known as pasta phases \cite{Maslov:2018ghi} and corresponds to a sharp interface between the two phases. Accounting for those pasta phases would wash out the sharp quark-hadron interface, allowing for the existence of a mixed phase, which is restricted by the baryon chemical potentials $\mu_B^h$ and $\mu_B^q$ (corresponding to $n_B^h$ and $n_B^q$) from the hadron and quark sides, respectively.
In Ref. \cite{Ayriyan:2021prr}, the EoS of the mixed phase was parameterized by two pieces of parabolic functions. In Ref. \cite{Ivanytskyi:2022wln} such a two-zone interpolation scheme (TZIS) was further developed to the case of arbitrary fractions of electric charge and applied at finite temperatures.
The parameters of these two parabolic functions were defined so that both the pressure $p$ and the baryon density $n_B$ remain continuous at the mixed phase boundaries. Continuity of $p$ is also required at the matching point of two parabolas $\mu_B^c=(\mu_B^h+\mu_B^q)/2$, while $n_B$ experiences a discontinuous jump of $\Delta n_B$. The TZIS is given a closed form with the parameterization
\begin{eqnarray}
\label{III}
&&\mu_B^h=\mu_B^{\rm Max}|_{T=0}(1-x)\sqrt{1-T^2/T_0^2},\quad
\mu_B^{q}=\mu_B^{\rm Max}(1+x),\\
\label{IV}
&&\Delta n_B=n^*(T_{cep1}-T)^\beta(T-T_{cep2})^\beta\theta(T_{cep1}-T)\theta(T-T_{cep2}),
\end{eqnarray}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\columnwidth]{e_p_Maxwell}
\includegraphics[width=0.32\columnwidth]{e_p_TZIS_15}
\includegraphics[width=0.32\columnwidth]{e_p_TZIS_0}
\caption{Pressure $p$ of electrically neutral $\beta$-equilibrated quark-hadron matter vs. energy density $\varepsilon$ along the isentropes $s/n_B=const$ found within the Maxwell construction (left panel) and the TZIS with $n^*=0.15~{\rm fm}^{-3}$ (middle panel) and $n^*=0$ (right panel). The shaded areas represent the cold nuclear matter constraints.}
\label{fig2}
\end{figure}
where $x=0.01$, $n^*=0$ or 0.15 fm$^{-3}$, $T_{cep1}=90$ MeV and $T_{cep2}=15$ MeV correspond to the high- and low-temperature CEPs and $\beta=0.3265$ is the critical exponent of the 3D Ising universality class \cite{Campostrini:2002cf}. The TZIS allows us to construct a hybrid quark-hadron EoS at arbitrary entropy per baryon $s/n_B$. Fig. \ref{fig2} compares such an EoS to the one obtained within the Maxwell construction. Furthermore, having the edges of the mixed quark-hadron phase defined, we can construct the phase diagram of strongly interacting matter that is shown in Fig. \ref{fig3}. It is remarkable that the transition from quark to hadron matter leads to a growth of $T$ along the adiabates $s/n_B=const$, being a direct consequence of the reduction of the number of accessible microstates due to the transition to the color superconducting phase of quark matter \cite{Ivanytskyi:2022oxv}.
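The density jump of Eq. (\ref{IV}) is straightforward to evaluate; a minimal sketch with the parameter values quoted above (the function is nonzero only between the two CEP temperatures and vanishes at both with the critical exponent $\beta$):

```python
def delta_nB(T, n_star=0.15, T_cep1=90.0, T_cep2=15.0, beta=0.3265):
    """Baryon density jump of Eq. (IV): T in MeV, result in fm^-3.
    Nonzero only for T_cep2 < T < T_cep1; n_star = 0 switches the jump off."""
    if T_cep2 < T < T_cep1:
        return n_star * (T_cep1 - T) ** beta * (T - T_cep2) ** beta
    return 0.0
```

Setting $n^*=0$ removes the jump everywhere, corresponding to the crossover-like variant of the TZIS discussed in the text.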
\begin{figure}[t]
\centering
\includegraphics[width=0.32\columnwidth]{mu_T_neutral}
\includegraphics[width=0.32\columnwidth]{n_T_neutral_15}
\includegraphics[width=0.32\columnwidth]{n_T_neutral_00}
\caption{Phase diagram of $\beta$-equilibrated electrically neutral quark-hadron matter in the $\mu_B-T$ (left panel) and $n_B-T$ (central and right panels) planes.
The black dotted, dashed and solid curves correspond to the phase boundaries and the matching chemical potential discussed in the text.
The color mapping of phases corresponds to the TZIS. The filled black circles show the CEPs, which on the right panel are to guide the eye.
The colored solid, dashed and dotted curves show adiabates $s/n_{B}=const$ calculated within the Maxwell construction, the TZIS with $n^*=0.15~{\rm fm}^{-3}$ and $n^*=0$, respectively.
The green shaded area shows the region where $n_B$ discontinuously jumps within the TZIS with $n^*=0.15~{\rm fm}^{-3}$.}
\label{fig3}
\end{figure}
\section{Compact stars at vanishing and finite entropy}
The entropy of the quark-hadron matter in the interiors of proto-NS remains approximately constant during SN explosions \cite{Fischer:2017lag}. Therefore, an isentropic EoS of quark-hadron matter is phenomenologically interesting. We applied such EoSs, shown in Fig. \ref{fig2}, to solving the problem of relativistic hydrostatic equilibrium \cite{Ivanytskyi:2022wln}. The corresponding mass-radius relations of cold NS ($s/n_B=0$) and warm proto-NS ($s/n_B\neq0$) are shown in Fig. \ref{fig4}. In the case of cold NS, our approach provides agreement with the constraints from Refs. \cite{Riley:2021pdl,Miller:2021qha,Riley:2019yda,LIGOScientific:2018cki,Bauswein:2017vtn,Annala:2017llu} and gives the tidal polarizability of $1.4~{\rm M}_\odot$ mass stars $\Lambda_{1.4}=540-550$, agreeing with Ref. \cite{LIGOScientific:2018cki}. A finite $s/n_B$ increases the radius of NS but leaves their maximal mass almost unchanged.
\section{Conclusions}
We developed a confining RDF for color-superconducting quark matter and produced a family of hybrid quark-hadron EoS with or without (multiple) CEP(s). Due to large values of the diquark pairing gap, our approach favors early quark deconfinement, provides good agreement with the present astrophysical constraints and drives the trajectories of the evolution of stellar matter during SN explosions toward the temperature range of HIC.
\begin{figure}[t]
\centerline{%
\includegraphics[width=0.32\columnwidth]{r_m_max}
\includegraphics[width=0.32\columnwidth]{r_m_tzis_15}
\includegraphics[width=0.32\columnwidth]{r_m_tzis_00}}
\caption{Mass-radius relation of hybrid NS with the isentropic quark-hadron EoS presented in Fig. \ref{fig2}. Black solid curves obtained with the DD2 EoS of cold hadron matter are given for the sake of comparison. The astrophysical constraints depicted by the colored bands and shaded areas correspond to the case of cold neutron stars.}
\vspace*{-.2cm}
\label{fig4}
\end{figure}
\subsection*{Acknowledgements}
This work was supported by
NCN under grants 2019/33/B/ST9/03059 (O.I., D.B.) and 2020/37/B/ST9/00691 (T.F.).
A.B. acknowledges support by the European Research Council
under the European Union's Horizon 2020 research and innovation program, grant No. 759253, by
DFG Project-ID 279384907 - SFB 1245, by DFG - Project-ID 138713538 - SFB 881
and by the State of Hesse within the Cluster Project ELEMENTS.
The work was performed within a project that has received funding from the Horizon 2020 program under grant agreement STRONG-2020 - No. 824093.
\section{Introduction}
\label{sec:intro}
For manufacturers who run customized production, designing and setting manipulator trajectories in the programming system is a tedious and time-consuming task, because trajectories need to be redesigned and reset frequently to adapt to different applications in a flexible production line. To simplify the process, a six-degree-of-freedom (6-DOF) teaching manipulator is designed in this paper. The device is designed for recording trajectories and teaching a real robot to accomplish the task, as shown in Fig. \ref{work}. With its help, operators can conveniently set a proper trajectory by guiding the teaching manipulator to finish the task for the first time. Then a real robot or a batch of robots will follow the speed and the trajectory of the teaching robot to accomplish the task for many other products. A teaching manipulator with lighter weight and better operating performance is always required. However, a human expert cannot ensure the best performance in his or her design. Robot design automation is a systematic process of design optimization that can help to achieve a better design of the teaching manipulator.
Robot design automation is an emerging technology involving systematic modeling and optimization efforts. Kinematics, dynamics and stiffness are usually considered in modeling robots or other mechanical systems. A lot of progress has been made in this direction. For example, Pettersson et al \cite{pettersson2009drive} built up a model of the drive chain of a lightweight robotic arm for optimizing its design. Citalan-Lara et al \cite{citalan2014multidisciplinary} proposed analytical modeling of the mechanism, the controller and the servo drive subsystem of a kind of manipulator and optimized the manipulator with six objectives simultaneously. The research on parallel robot modeling and optimization has also made progress. Qin et al \cite{qin2013modelling} proposed a two-stage model for a parallel mechanism with a rigid and a compliant platform. Laski et al \cite{laski2015design} designed and analyzed a kind of 3-DOF tripod parallel manipulator. Yao et al \cite{yao2017dynamic} established the dynamic driving force model of a parallel manipulator with redundant actuation. To fully integrate the advantages of both the serial and parallel robot structures, hybrid robots or robotic machines were developed. Gao et al \cite{gao2010design, gao2015performance} made a detailed analysis of a hybrid robotic machine tool and optimized its dimensional parameters.
Design optimization is an essential sub-process in design automation. Engineering solutions are expected to achieve good performance in a number of aspects, while satisfying various constraints at the same time. Therefore, multi-objective optimization algorithms are widely used to search for a group of non-dominated solutions, namely, the Pareto-optimal front, considering the tradeoff among multiple objectives at the same time. Different algorithms are designed for solving robot optimization problems. Coello et al \cite{coello1998using} used a new genetic algorithm (GA)-based multi-objective optimization technique to optimize the counterweight balancing for a serial robot. Gao et al \cite{gao2010design} conducted an optimization study of system stiffness and dexterity for the parallel mechanism using multi-objective optimization. Zhang et al \cite{zhang2012forward} used a multi-objective optimization algorithm for optimizing a bio-inspired parallel manipulator design. Li and Xu \cite{li2006ga} proposed a GA-based multi-objective optimization approach to optimize a kind of cable-driven parallel manipulator. Jamwal et al \cite{jamwal2015three} used NSGA-II to optimize a kind of rehabilitation robot considering six different performances, and the obtained solutions are better than the results obtained from single-objective optimization and preference-based optimization.
\begin{figure*}
\centering
\includegraphics [width=15cm]{work.pdf}
\caption{The operator controls the teaching manipulator to record the trajectory. The motion data is saved in the computer and used to command the real manipulator for manufacturing.}
\label{work}
\end{figure*}
The multi-objective optimization is not only for providing design candidates for the problem, but also for providing design principles among optimal trade-off solutions. Innovization study was first proposed by Deb and Srinivasan \cite{deb2006innovization}, with the purpose of establishing meaningful knowledge relating objective functions and design variables. The knowledge can provide a deeper understanding of how the variables of the optimal solutions interact, which can help the designer acquire design insights. A robot gripper optimization problem \cite{osyczka2002evolutionary} is used as an example to conduct research on innovization study by Datta et al \cite{datta2011multi}. A further study performs the innovization study on the modified gripper model considering different actuator models \cite{datta2016piezoelectric, datta2016analysis}. Besides the gripper model, Deb et al \cite{deb2014integrated} also studied innovization with three other engineering problems.
As a case study, we perform modeling, structure analysis and optimization for a teaching manipulator with six degrees of freedom. Three performance measures, namely the total mass of the device, the maximal value of the operating force and the difference between the maximum and minimum of the operating force, are treated as the objectives, constrained by the condition of gravity balancing.
The rest of the paper is organized as follows. In section \ref{sec: ConDesign}, the conceptual design of the teaching manipulator is described. Section \ref{sec:ConfigD} proposes the configuration design. In section \ref{sec:StrucAnly}, balancing and operating force analysis and modeling are conducted, and the conditions on balancing and operating force are derived. In section \ref{sec:ProbFormu}, the objectives and constraints are formulated. Section \ref{sec:MOO} explains the multi-objective optimization algorithm, and compares the optimal solutions with the original human expert design. An innovization study of the solutions is also conducted. Finally, we summarize the contributions and discuss some future work in section \ref{sec:conc}.
\section{Conceptual design}
\label{sec: ConDesign}
The teaching manipulator considered in this paper has six degrees of freedom (DOF), two DOF at the shoulder, one at the elbow and three at the wrist, as depicted in Fig. \ref{JointPic}. The link lengths of the manipulator are fixed. Six encoders are mounted for recording the angle position of each joint. It should be noted that there are no actuators in the manipulator, which means that operators need to control it manually in the teaching procedure. To reduce the load on the operators, some structures are specially designed, including two counterweights at Joints 3 and 5, one pneumatic balancer at Joint 2 and three friction disks inside Joints 1-3, in order to keep gravity balance in as many positions as possible in the workspace.
\begin{figure}
\centering
\includegraphics [width=6cm]{Joint.pdf}
\caption{A 6-DOF teaching manipulator}
\label{JointPic}
\end{figure}
To avoid requiring a large space for rotation, a pneumatic balancer is equipped instead of a counterweight at Joint 2. The balancer is a pneumatic cylinder with a spring mounted inside, as depicted in Fig. \ref{BalancerPic}. When the device works, the pneumatic cylinder needs to be connected to an air pump with stable pressure. Then the balancer can provide a constant force to press the spring. The combination of the cylinder force and the spring force is the drafting force against the loads. The drafting force can be considered as a linear force, proportional to the extension length of the balancer.
\begin{figure}
\centering
\includegraphics [width=8cm]{Balancer.pdf}
\caption{The structure of balancer}
\label{BalancerPic}
\end{figure}
\begin{figure}
\flushleft
\includegraphics [width=8cm]{FicD02.pdf}
\caption{The inner structure of Joint 1}
\label{FicDPic}
\end{figure}
The counterweights and balancer alone may still fail to provide a satisfactory balancing effect in some positions. That is why the friction disks are set in Joints 1-3, providing extra resisting moments to address the imbalance problem adaptively. The physical realization of Joint 1 is illustrated in Fig. \ref{FicDPic}. The springs press the iron disks and the friction disk set in the middle. The friction between the disks provides the resisting moment against the imbalance. The friction moment can be adjusted via changing the pressing force of the springs.
\section{Configuration Design}
\label{sec:ConfigD}
The paper discusses optimizing the design of a kind of teaching manipulator using a multi-objective evolutionary algorithm. According to the industrial requirement, the structure should ideally keep gravity balance in any position in its workspace, which is treated as a constraint. The multi-objective optimization considers the following three conflicting objectives to minimize: 1)\ \ the total mass of the device, 2)\ \ the maximal magnitude of the operating force and 3)\ \ the maximal difference between the maximal and the minimal operating force in a representative trajectory.
\subsection{Design variables}
Nine design variables are considered in the optimization problem, collected in the vector
\begin{equation}
X = (m_A, m_B, L_A, L_B, k, H_b, T_1, T_2, T_3)^T
\nonumber
\end{equation}
where $m_A$, $m_B$ denote the masses of the two counterweights, and $L_A$, $L_B$ denote the distances between the centres of mass of the counterweights and the rotation axes of the joints, namely, the lengths of the connecting rods of the counterweights. $k$ is the stiffness coefficient of the spring inside the balancer, and $H_b$ is the length of a vertical virtual link between the lower attachment point and the rotation axis of Joint 2. $T_i$ ($i = 1, 2, 3$) is the torque needed to overcome the moment of the disk friction and rotate Joint i. Under the gravity-balancing condition, $T_i$ is approximately equal to the moment of the disk friction; therefore we treat $T_i$ as the moment of the disk friction. A schematic of the configuration of the variables is shown in Fig. \ref{ParamPic}.
Other related parameters are also illustrated in Fig. \ref{ParamPic}. $m_i$ is the mass of Joint i, and $m_{Li}$ and $L_i$ are the mass and the length of Link i. It is assumed that the density of each link is uniform; therefore the centres of mass are in the middle of the links. The masses per unit length of the connecting rods are $\rho_A$ and $\rho_B$, respectively.
\begin{figure}
\centering
\includegraphics [width=8.2cm]{Param02.pdf}
\caption{Design variables and related parameters}
\label{ParamPic}
\end{figure}
\subsection{D-H parameters}
The forward kinematics of the teaching manipulator is formulated based on the Denavit-Hartenberg (D-H) convention \cite{denavit1955kinematic}. The coordinate frames $o_ix_iy_iz_i$ $(i = 1,2,\cdots,6)$ are assigned based on the sketch of the manipulator, which is shown in Fig. \ref{DHPic}. The D-H parameters are defined and listed in Table \ref{Table1}. We assume that the end effector and Joint 6 share the same coordinate frame.
\begin{figure}
\centering
\includegraphics [width=8.2cm]{D-H.pdf}
\caption{Manipulator coordinate system}
\label{DHPic}
\end{figure}
\begin{table}[ht]
\small\sf\centering
\caption{D-H Parameters of the robotic arm.\label{Table1}}
\begin{tabular}{lllll}
\toprule
$Joint_i$&$\alpha_i$&$a_i$&$d_i$&$\theta_i$\\
\midrule
1 & $\pi/2$ &0.160 &0 &$q_1$\\
2 & $0$ &0.790 &0 &$q_2$\\
3 & $\pi/2$ &0.155 &0 &$q_3$\\
4 & $-\pi/2$ &0 &0.995 &$q_4$\\
5 & $\pi/2$ &0 &0 &$q_5$\\
6 & $0$ &0 &0 &$q_6$\\
\bottomrule
\end{tabular}\\[10pt]
\end{table}
\section{Structure analysis}
\label{sec:StrucAnly}
\subsection{ Balancing }
In the design, two counterweights, one balancer and three friction disks are used for balancing. The balancing structures are designed for the three joints whose rotation axes are parallel to the horizontal plane, namely, Joints 2, 3 and 5. There is an extra friction disk set in Joint 1 for the adjustment of the operating force performance.
Let $G_j$ be the gravitational moment acting at the axis of Joint j, and $P_j$ the moment provided by the counterweight and its connecting rod, or by the balancer, acting at the axis of Joint j.
At Joint 5, a counterweight is designed for balancing the gravity of Link 5 and Joint 6. The masses of the counterweight and of the connecting rod, with homogeneous density, are considered. Here we have the balancing equation $G_5 = P_5$. However, it is difficult to satisfy an equality constraint. In industrial reality, it is not necessary to make them exactly equal, because the operator can handle a small imbalance at the wrist through the manually operable handgrip. Therefore, an imbalance of up to $5\%$ is allowed. We can specify this using an inequality constraint, which is
\begin{equation}
| G_5 - P_5 | \leq 5\% G_5
\label{balance5}
\end{equation}
where
\begin{equation}
G_5 = ( m_6 + m_{L5})gL_5\cos{q_5}
\label{G5}
\end{equation}
\begin{equation}
P_5 = (m_A gL_A + \frac{L_A^2 \rho_A g}{2})\cos{q_5}
\label{P5}
\end{equation}
where $m_6$, $m_{L5}$ are the masses of Joint 6 and Link 5, $L_5$ is the length of Link 5, $\rho_A$ is the linear density of the connecting rod, which connects counterweight A and the shelter of Joint 5, and $g$ is the gravitational acceleration, taken as $g = 9.8~{\rm m/s^2}$.
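The $5\%$ tolerance of Eq. (\ref{balance5}) is easy to check numerically: since $\cos q_5$ cancels between Eqs. (\ref{G5}) and (\ref{P5}), the constraint is independent of the joint angle. The parameter values in the sketch below are purely illustrative, not the actual design values:

```python
G = 9.8  # gravitational acceleration, m/s^2

def joint5_balanced(mA, LA, rhoA, m6_plus_mL5, L5, tol=0.05):
    """Check Eq. (balance5): |G5 - P5| <= tol * G5.
    cos(q5) is common to both moments, so only the coefficients matter."""
    G5 = m6_plus_mL5 * G * L5                        # gravitational moment coefficient
    P5 = mA * G * LA + LA**2 * rhoA * G / 2.0        # counterweight moment coefficient
    return abs(G5 - P5) <= tol * G5
```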
At Joint 3, a counterweight and a friction disk are used for balancing the gravity. The friction disk can provide a static frictional moment which correspondingly increases as the imbalance moment increases, until the disk starts to rotate. If the disk rotates, the moment becomes a kinetic frictional moment, which can be treated as a constant. Because the disk moves with a low velocity, the kinetic frictional moment is approximately equal to the static frictional moment. Therefore, we have the following balance condition:
\begin{equation}
\max_{q_3}{| G_3 - P_3 | \leq T_3}
\label{balance3}
\end{equation}
where
\begin{equation}
\begin{split}
G_3 =& \{(m_A + m_6 + m_{L5} + m_5 + L_A \rho_A)(L_3 + L_4)g \\
&+ m_4 g L_3 + \frac{m_{L3} g L_3}{2}+ \frac{(2L_3+L_4) m_{L4} g}{2}\}\cos{q_3}
\end{split}
\end{equation}
\begin{equation}
P_3 = (m_B g L_B + \frac{L_B^2 \rho_B g}{2})\cos{q_3}
\end{equation}
where $m_{L3}$ is the mass of Link 3.
\begin{figure*}
\centering
\includegraphics [width=12cm]{ForceAnalysis02.pdf}
\caption{Force analysis of Link 2}
\label{ForceAnalysis}
\end{figure*}
At Joint 2, a balancer and a friction disk are mounted for balancing. As Link 2 rotates about Joint 2, the balancer, Link 2 and a virtual link EF make up a triangle $\bigtriangleup EFG$, which is shown in Fig. \ref{ForceAnalysis}. $L_k$ is the total length of the balancer. $F_k$ is the drafting force of the balancer in a given position, which can balance the equivalent mass $M_e$ acting at point G. We can figure out the length of the balancer from the triangle relation, which is
\begin{equation}
L_k=\sqrt {H_b^2+L_2^2-2H_bL_2\cos{(\pi/2-q_2)}}
\label{lk}
\end{equation}
where $q_2$ is the rotation angle of Joint 2. The drafting force of the balancer is linearly proportional to the variation of the balancer length. The balancer is shortest when Link 2 is vertical; in this situation, $\bigtriangleup EFG$ degenerates into a line and the length of the balancer is
\begin{equation}
L_{k0} = L_2 - H_b
\label{lk0}
\end{equation}
Thus,
\begin{equation}
F_k=k(L_k-L_{k0})-b
\label{fk}
\end{equation}
where $b$ is the constant force provided by the pneumatic cylinder.
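As a quick numerical check of Eqs.(\ref{lk})-(\ref{fk}), the balancer force can be evaluated directly. The sketch below is illustrative only; the values of $L_2$, $H_b$ and $k$ are taken from the current-design columns of Tables \ref{Table2} and \ref{Table3}, and the cylinder force $b$ is left as a free parameter.

```python
import math

def balancer_force(q2, L2=0.79, Hb=0.15, k=3730.0, b=0.0):
    """Spring force of the balancer at joint angle q2 (rad).

    Implements L_k = sqrt(Hb^2 + L2^2 - 2*Hb*L2*cos(pi/2 - q2)),
    L_k0 = L2 - Hb, and F_k = k*(L_k - L_k0) - b.
    """
    Lk = math.sqrt(Hb**2 + L2**2 - 2.0 * Hb * L2 * math.cos(math.pi / 2 - q2))
    Lk0 = L2 - Hb
    return k * (Lk - Lk0) - b

# When Link 2 is vertical (q2 = pi/2), L_k collapses to L_k0 and the
# spring contribution vanishes, leaving only the cylinder force -b.
```

This confirms the degenerate case discussed above: at $q_2 = \pi/2$ the triangle becomes a line and the spring force term is zero.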
In Fig. \ref{ForceAnalysis}, three forces act on Link 2. $F_E$ points along Link 2. Link 2 can be in equilibrium only when the resultant of $F_k$ and the equivalent gravity $M_eg$ is aligned with Link 2. From the law of sines, we have the relation
\begin{equation}
\frac{\sin{(\pi/2-q_2)}}{L_k} = \frac{\sin{\alpha}}{H_b}
\label{Lawofsine}
\end{equation}
where $\alpha$ is the angle of $\angle FGE$, shown in Fig. \ref{ForceAnalysis}.
Thus, the moment provided by the balancer is
\begin{equation}
P_2=L_2F_k\sin{\alpha}
\label{P2base}
\end{equation}
With Eqs.(\ref{lk})-(\ref{Lawofsine}), we can simplify Eq.(\ref{P2base}) to
\begin{equation}
\begin{split}
P_2=k H_b L_2 \cos{q_2}
+\frac{(k(H_b-L_{2})-b ) H_b L_2 \cos{q_2}}{\sqrt {H_b^2+L_2^2-2H_bL_2\sin{q_2}}}
\label{P2sim}
\end{split}
\end{equation}
Because the mass of Link 2 is homogeneous, the gravitational moment of Link 2 is $\frac{1}{2}m_{L2}gL_2$, which is equivalent to the moment produced by half of $m_{L2}$ placed at point G. The mass of the balancer is not considered because it is much smaller than the equivalent payload acting at point G. Thus, we can formulate $G_2$, which is
\begin{equation}
\begin{split}
&G_2 = M_e g L_2 \cos{q_2}
\end{split}
\end{equation}
where $M_e$ is the equivalent mass:
\begin{equation}
\begin{split}
M_e = &\sum_{i=3}^6 m_i + m_A + m_B + L_A \rho_A + L_B \rho_B \\
&+ m_{L3} + m_{L4}+ m_{L5} + \frac{1}{2}m_{L2}
\label{me}
\end{split}
\end{equation}
Therefore, the value of the imbalanced moment can be formulated as follows.
\begin{equation}
\begin{split}
|G_2-P_2|=&|(M_eg -kH_b)L_2 \cos{q_2} \\
&+\frac{(k(H_b-L_{2})-b) H_b L_2 \cos{q_2}}{\sqrt {H_b^2+L_2^2-2H_bL_2\sin{q_2}}}|
\label{balancerForce}
\end{split}
\end{equation}
We expect to minimize the operating force under the balancing constraints, so the balancing condition without the friction disk should be considered first, because the operating force is zero in the ideal situation. Then we introduce the disk friction to keep balance in non-ideal cases.
The ideal gravity balance conditions of the teaching manipulator should be independent of the joint positions. Thus, whether the equality
\begin{equation}
|G_2-P_2|=0
\label{J2}
\end{equation}
is satisfied should be independent of $q_2$.
Observing Eq.(\ref{balancerForce}), we see that only when
\begin{equation}
k(L_2 - H_b)-b=0
\label{condition1}
\end{equation}
\begin{equation}
M_eg - kH_b=0
\label{condition2}
\end{equation}
are both satisfied, $\cos{q_2}$ is eliminated and Eq.(\ref{J2}) is satisfied at any rotation angle.
Now we consider the real situation with errors. Because of manufacturing and assembly errors, the device usually fails to keep balance; therefore the friction disk is mounted to maintain gravity balance despite the errors. Thus, we have
\begin{equation}
|G_2 - P_2| \leq T_2
\label{balance2}
\end{equation}
With the errors, Eq.(\ref{condition1}) can still be satisfied by adjusting the force of the pneumatic cylinder $b$; however, Eq.(\ref{condition2}) is difficult to satisfy because its parameters are all affected by manufacturing and assembly. Thus, we have
\begin{equation}
\begin{split}
|G_2 - P_2| =&|(M_eg -kH_b)L_2 \cos{q_2} |\\
=&|M_eg -kH_b|L_2 |\cos{q_2}|
\end{split}
\end{equation}
The maximal value of $|G_2 - P_2|$ can be obtained when $|\cos{q_2}| = 1$. Thus, we have
\begin{equation}
\begin{split}
\max_{q_2}{|G_2 - P_2|} =|M_eg - kH_b|L_2 \leq T_2
\label{balance2sim}
\end{split}
\end{equation}
\subsection{Total mass of the device}
Minimizing the total mass of the device is desirable, because lighter weight usually means lower cost and lower power consumption. The masses of the six joints, five links, two counterweights and their connecting rods are all considered. The total mass is
\begin{equation}
M = \sum_{i=1}^6 m_i + \sum_{p=1}^5 m_{Lp} + m_A + m_B+ L_A \rho_A + L_B \rho_B
\label{totomass}
\end{equation}
\subsection{Operating force analysis}
It is expected that the manipulator can be manually controlled with minimal operating force, and with minimal variation of the operating force along a trajectory. We need to analyze the relation between the operating force and the joint moments.
Let $[F] = [f_x, f_y, f_z, m_x, m_y, m_z]^T$ be the spatial force vector of the end effector in the end-effector frame, where $f_r$ $(r = x,y,z)$ is the force along r axis and $m_r$ is the moment about r axis. Let $[\tau] = [\tau_1, \tau_2, \tau_3, \tau_4, \tau_5, \tau_6]^T$ be the torque vector of the six joints, where $\tau_i$ $(i = 1,2,3,\cdots,6)$ is the torque at Joint i. Here we have the equation,
\begin{equation}
[\tau] = [J]^T[F]
\end{equation}
where $[J]$ is the Jacobian matrix in the base coordinate system. To generate a moment that overcomes the friction moments $[\tau] = [T_1, T_2, T_3,0,0,0]^T$, an operating force and moment $[F]$ is needed, which is
\begin{equation}
[F] =([J]^T)^{-1}[\tau]
\label{ctrlforceV}
\end{equation}
When the end effector moves along a trajectory, we equally divide the trajectory into N segments. We can compute $[J]$ for each segment, and then the spatial force vector $[F]$. Because the operating moment acts at the wrist with a small load, only the magnitude of the operating force $F_c(X,t)$ is considered in this case. Thus,
\begin{equation}
F_c(X,t) = \sqrt {f_x^2(X,t) + f_y^2(X,t) + f_z^2(X,t)}
\label{ctrlforce}
\end{equation}
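A minimal sketch of Eqs.(\ref{ctrlforceV}) and (\ref{ctrlforce}): solve $[J]^T[F]=[\tau]$ for the spatial force vector and keep the magnitude of its force part. The Jacobian below is a random non-singular placeholder; in practice it comes from the manipulator kinematics at each trajectory segment.

```python
import numpy as np

def operating_force(J, T1, T2, T3):
    """Magnitude of the hand force needed to overcome the friction
    moments [T1, T2, T3, 0, 0, 0]^T at the joints, via [F] = (J^T)^{-1} [tau]."""
    tau = np.array([T1, T2, T3, 0.0, 0.0, 0.0])
    F = np.linalg.solve(J.T, tau)          # F = [fx, fy, fz, mx, my, mz]
    return float(np.linalg.norm(F[:3]))    # force part only, as in F_c

# Example with a placeholder (well-conditioned) Jacobian:
rng = np.random.default_rng(0)
J = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
print(operating_force(J, 75.7, 75.7, 75.7))
```

Repeating this at each of the $N$ trajectory segments yields the sampled force profile $F_c(X,t)$ used by the objectives.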
\section{Problem formulation}
\label{sec:ProbFormu}
The goal of the optimization problem is to optimize objective functions simultaneously by satisfying the gravity balance constraints. The vector of nine design variables is $X = (m_A, m_B, L_A, L_B, k, H_b, T_1, T_2, T_3)^T$, where $m_A$, $m_B$, $L_A$, $L_B$ are the variables about the two counterweights, $k$, $H_b$ are about the balancer and the rest are about the three friction disks. Details of objective functions and constraints will be given in the following subsections.
\subsection{Objective functions}
Three objectives are considered, which are to minimize the total mass of the device, the maximal operating force and the maximal difference of the maximal and minimal operating force in a trajectory.
\subsubsection{Total mass}
Based on Eq.(\ref{totomass}), the objective function can be written as
\begin{equation}
f_1(x)=M
\end{equation}
\subsubsection{Maximal operating force}
This is a bilevel optimization problem, which contains two levels of optimization tasks \cite{sinha2017evolutionary}. Bilevel optimization problems are difficult to handle, because only the optimal solutions of the lower-level optimization problem are considered feasible candidates for the upper-level optimization problem.
We prefer to minimize the maximal operating force in a trajectory to ensure that an operator can easily maneuver the device. Based on Eq.(\ref{ctrlforceV}) and Eq.(\ref{ctrlforce}), the objective function can be written as
\begin{equation}
f_2(x)=\max_tF_c(X,t)
\end{equation}
\subsubsection{Variation of operating force}
Large variation of operating force can disrupt a normal operation. The third objective is to minimize the difference between the maximum and minimum of operating force, which can be formulated as
\begin{equation}
f_3(x)=|\max_tF_c(X,t)-\min_tF_c(X,t)|
\end{equation}
Again, the third objective indicates a bilevel optimization problem. Because of the bilevel formulation, the optimal solutions are not easily discovered by a traditional step-by-step design procedure. As a result, a multi-objective optimization algorithm is used in this work.
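Given the $N$ sampled operating-force values $F_c$ along the trajectory, the three objectives reduce to simple reductions over those samples, as the sketch below illustrates:

```python
def objectives(Fc_samples, total_mass):
    """f1: total mass; f2: maximal operating force along the trajectory;
    f3: spread between the largest and smallest operating force."""
    f1 = total_mass
    f2 = max(Fc_samples)
    f3 = abs(max(Fc_samples) - min(Fc_samples))
    return f1, f2, f3

# e.g. objectives([1.0, 3.0, 2.0], total_mass=57.0) evaluates all three
# objectives for one candidate design from its sampled force profile.
```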
\subsection{Constraints}
The constraints are mainly about balance conditions and the bounds of the design variables. The constraints can be derived as follows:
\begin{equation}
| G_5 - P_5 | \leq 5\% G_5
\nonumber
\end{equation}
\begin{equation}
\max_{q_3}{| G_3 - P_3 | }\leq T_3
\nonumber
\end{equation}
\begin{equation}
\max_{q_2}{|G_2 - P_2|} \leq T_2
\nonumber
\end{equation}
which are Eq.(\ref{balance5}), Eq.(\ref{balance3}) and Eq.(\ref{balance2sim}), respectively.
The bounds of the design variables are listed in Table \ref{Table2}.
\begin{table}[ht]
\small\sf\centering
\caption{The bounds of design variables.\label{Table2}}
\begin{tabular}{llll}
\toprule
Variables&Range&Units&Current value\\
\midrule
$m_A$ & [0.3, 20] &kg & 1.6\\
$m_B$ & [19, 50] &kg & 25\\
$L_A$ & [0.11, 0.5]&m & 0.185\\
$L_B$ & [0.2, 0.8] &m & 0.462\\
$k $ & [0, 15000] &$N/m$ &3730\\
$H_b$ & [0.11, 0.18]&m &0.15\\
$T_1$ & [0, 90] &$N\centerdot m$ &75.7\\
$T_2$ & [0, 90] &$N\centerdot m$ &75.7\\
$T_3$ & [0, 90] &$N\centerdot m$ &75.7\\
\bottomrule
\end{tabular}\\[10pt]
\end{table}
Other constant parameters are listed in Table \ref{Table3}.
\begin{table}[ht]
\small\sf\centering
\caption{The value of relative parameters.\label{Table3}}
\begin{tabular}{lll}
\toprule
Parameter&value&Units\\
\midrule
$m_1$ &4.136 &kg\\
$m_2$ &8.225 &kg\\
$m_3$ &9.665 &kg\\
$m_4$ &1.249 &kg\\
$m_5$ &4.185 &kg\\
$m_6$ &2.013 &kg\\
$m_{L1}$ &0.631 &kg\\
$m_{L2}$ &2.071 &kg\\
$m_{L3}$ &1.816 &kg\\
$m_{L4}$ &0.340 &kg\\
$m_{L5}$ &0.358 &kg\\
$L_2$ &0.79 &m\\
$\rho_A$ &1.8 &kg/m\\
$\rho_B$ &3.7 &kg/m\\
\bottomrule
\end{tabular}\\[10pt]
\end{table}
\subsection{Trajectory design}
To evaluate the objectives, the optimization problem needs a specific trajectory. We define a representative trajectory of the end-effector in the base coordinate system:
\begin{equation}
\begin{split}
&X_{ef} =1.5 - 0.25(1 - \cos(t))\\
&Y_{ef}= - 0.25+0.5\times(1 - \cos(t/2))\\
&Z_{ef}= 0.5+0.5\times(\cos(t/2) - 1)
\nonumber
\end{split}
\end{equation}
all in units of m. The Euler angles of the end-effector are given as $[0, \pi, \pi]$, which implies that the end-effector remains vertical and points at the ground during the prescribed motion. To compute the Jacobian matrix $[J]$, we equally divide the trajectory into $N=500$ segments.
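The sampling of this trajectory can be sketched as follows. The range of the parameter $t$ is not stated above, so $t \in [0, 2\pi]$ is an assumption here, and the segments are equal in $t$ rather than in arc length.

```python
import numpy as np

N = 500                                      # number of segments
t = np.linspace(0.0, 2.0 * np.pi, N + 1)     # assumed parameter range

# End-effector positions in the base frame (metres):
X_ef = 1.5 - 0.25 * (1.0 - np.cos(t))
Y_ef = -0.25 + 0.5 * (1.0 - np.cos(t / 2.0))
Z_ef = 0.5 + 0.5 * (np.cos(t / 2.0) - 1.0)

waypoints = np.stack([X_ef, Y_ef, Z_ef], axis=1)   # shape (N+1, 3)
```

Each waypoint, together with the fixed Euler angles $[0, \pi, \pi]$, defines one pose at which the Jacobian $[J]$ is evaluated.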
\subsection{Multi-objective design optimization}
\label{sec:MOO}
After formulating the optimization problem, the next step is to use an appropriate algorithm to search for an optimal design set. The optimization model is a multi-objective problem, and all of the constraints must be taken into account. Due to the complexity and size of the problem, NSGA-II-CDP is chosen to solve it. This algorithm produces a Pareto front consisting of a set of optimal solutions that are not dominated by any other solution.
\subsection{Multi-objective optimization algorithm}
Fig. \ref{NSGA} illustrates the NSGA-II-CDP procedure. In this procedure, an offspring population $Q_{t}$ is produced by the crossover, mutation and selection operators from the working population $P_t$. Then, a new population $R_t$ is constructed by combining $P_t$ and $Q_t$. $R_t$ is sorted based on the CDP principle to divide the population into different fronts. The CDP principle is defined as follows:
\begin{enumerate}
\item[(a)] In the case that solutions $i$ and $j$ are both feasible, the one dominating the other is better.
\item[(b)] In the case that solution $i$ is feasible and solution $j$ is infeasible, solution $i$ is better than solution $j$.
\item[(c)] In the case that solution $i$ and $j$ are both infeasible, the one with smaller constraint violation is better than the other.
\end{enumerate}
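The three rules above can be sketched as a pairwise comparison function. The solution encoding used here (an objective tuple `f` to be minimized and a scalar total constraint violation `cv`, zero if feasible) is an assumption for illustration.

```python
def cdp_better(sol_i, sol_j):
    """Constrained-domination principle: return True if sol_i is
    preferred over sol_j under rules (a)-(c)."""
    feas_i = sol_i["cv"] == 0
    feas_j = sol_j["cv"] == 0
    if feas_i and not feas_j:            # (b) feasible beats infeasible
        return True
    if not feas_i and feas_j:
        return False
    if not feas_i and not feas_j:        # (c) smaller violation wins
        return sol_i["cv"] < sol_j["cv"]
    # (a) both feasible: standard Pareto domination on the objectives
    fi, fj = sol_i["f"], sol_j["f"]
    return all(a <= b for a, b in zip(fi, fj)) and any(a < b for a, b in zip(fi, fj))
```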
\begin{figure}
\centering
\includegraphics [width=8.5cm]{NSGA.pdf}
\caption{NSGA-II-CDP procedure, modified from NSGA-II \cite{deb2002fast}}
\label{NSGA}
\end{figure}
Each solution is assigned a non-dominated rank based on the CDP principle. Moreover, the crowding distance is calculated to sort the solutions within the last front that cannot be fully accommodated. Then, the first $N$ solutions are selected to construct the population $P_{t+1}$.
The parameters of NSGA-II-CDP are listed as follows:
\begin{enumerate}
\item population size $=300$
\item number of generations $=5000$
\item crossover probability $=0.9$
\item mutation probability $=1/n$, where $n=9$ is the dimension of variables
\item the distribution parameter in the polynomial mutation is 20
\item the distribution parameter in the simulated binary operator is 20
\end{enumerate}
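As an illustration of the variation operator, below is a minimal sketch of Deb's polynomial mutation with distribution index $\eta=20$ and per-variable probability $1/n$ for $n=9$ variables. This is the simple bounded variant and is not necessarily the exact implementation used in our experiments.

```python
import random

def polynomial_mutation(x, lo, hi, eta=20.0, pm=1.0 / 9.0, rng=random):
    """Mutate each variable of x with probability pm, perturbing it by
    a polynomially distributed delta scaled to the bounds [lo, hi]."""
    y = list(x)
    for i in range(len(y)):
        if rng.random() > pm:
            continue
        u = rng.random()
        if u < 0.5:
            delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
        # apply the perturbation and clip back into the variable bounds
        y[i] = min(max(y[i] + delta * (hi[i] - lo[i]), lo[i]), hi[i])
    return y
```

A larger $\eta$ concentrates the offspring closer to the parent, which is why a moderate value such as 20 is a common default.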
\subsection{Result analysis}
The non-dominated solutions, namely the Pareto front, are shown in Fig. \ref{VSorg}. It is easy to notice that $f_1$, $f_2$ and $f_3$ are conflicting. The Pareto front consists of two clusters: Cluster R with total mass below 57.33 kg and Cluster S with total mass of at least 57.33 kg. Each cluster lies nearly on a straight line, fitted by Eq.(\ref{result1}) and Eq.(\ref{result2}).
\begin{figure}
\centering
\includegraphics [width=8cm]{VSorg.pdf}
\caption{The Pareto-front and the original design of a human expert is shown in the objective space, with three cases circled out. The obtained solutions are all better than the expert design in terms of the three objectives.}
\label{VSorg}
\end{figure}
\begin{equation}
f_2=\left\{\begin{array}{ll}
-59.84f_1+3438.5& f_1\in[56.22,57.33], \\
-5.40f_1+317.27 & f_1\in[57.33,58.72],
\end{array}\right.
\label{result1}
\end{equation}
\begin{equation}
f_3=\left\{\begin{array}{ll}
-5.80f_1+333.43&f_1\in[56.22, 57.33],\\
-0.52f_1+30.631&f_1\in[57.33, 58.72],
\end{array}\right.
\label{result2}
\end{equation}
The obtained solutions are all better than the expert design in terms of the three objectives. Two extreme points and one turning point are chosen from the Pareto front as samples, as shown in Fig. \ref{VSorg}. Their designs are compared with the original design of a human expert, with the design variables and the three objectives listed in Fig. \ref{Model}.
\begin{figure*}
\begin{tabular}{cccc}
\begin{minipage}[t]{0.23\linewidth}
\includegraphics[width = 3.6cm]{org.pdf} \\
\centering{\scriptsize{(a) Expert design}}
\end{minipage}
\begin{minipage}[t]{0.23\linewidth}
\includegraphics[width = 3.6cm]{case1.pdf}\\
\centering{\scriptsize{(b) Optimal solution 1}}
\end{minipage}
\begin{minipage}[t]{0.23\linewidth}
\includegraphics[width =3.6cm]{case2.pdf}\\
\centering{\scriptsize{(c) Optimal solution 2}}
\end{minipage}
\begin{minipage}[t]{0.23\linewidth}
\includegraphics[width =3.6cm]{case3.pdf}\\
\centering{\scriptsize{(d) Optimal solution 3}}
\end{minipage}
\end{tabular}
\caption{3D model of original design and obtained solution}
\label{Model}
\end{figure*}
Based on their preferences among the three objectives, users can choose a proper solution on the Pareto-optimal front. Meanwhile, it is worth noticing that the appearance of the three solutions differs, and the friction moments of the disks inside the three joints also differ. The differences and the interaction among the variables make the design better. There might be some principles or meaningful relationships behind the solution data. Therefore, an innovization study is conducted.
\subsection{Innovization study}
In this section, we perform an innovization study to discover meaningful hidden relationships between objectives and design variables. There should be some common principles among all or part of the optimal solutions. These common principles can help the designer in future design; e.g., the knowledge discovered from the teaching manipulator design can be reused when the designer performs a similar teaching manipulator design again.
Figs. \ref{mArep}-\ref{T1re3p} illustrate the results of the innovization study. As examples, we discuss the three gravity sensitive joints, namely Joints 2, 3 and 5.
\begin{figure*}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{mArep.pdf}
\caption{Variation of the mass of Counterweight A $m_A$ with the total mass is shown. $m_A$ is mostly fixed at about 1.44 kg.}
\label{mArep}
\end{minipage}
\hspace{1.2cm}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{LArep.pdf}
\caption{Variation of the length of Counterweight A connecting rod $L_A$ with the total mass is shown. $L_A$ is fixed at about 0.2, which is the upper bound.}
\label{LArep}
\end{minipage}
\end{tabular}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{mBrep.pdf}
\caption{Variation of the mass of Counterweight B $m_B$ with the total mass is shown. $m_B$ stays at its lower bound first, then rises along a straight line.}
\label{mBrep}
\end{minipage}
\hspace{1.2cm}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{LBrep.pdf}
\caption{Variation of the length of the Counterweight B connecting rod $L_B$ with the total mass is shown. $L_B$ increases with a slope of 0.27 until reaching its upper bound, then stays at 0.5.}
\label{LBrep}
\end{minipage}
\end{tabular}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{T3re2p.pdf}
\caption{Variation of the friction moment of Friction Disk 3 $T_3$ with the maximal needed operating force $f_2$ is shown, which is a straight line with a slope of 0.91.}
\label{T3re2p}
\end{minipage}
\hspace{1.2cm}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{T3rep.pdf}
\caption{Variation of the friction moment of Friction Disk 3 $T_3$ with the total mass $f_1$ is shown. $T_3$ decreases quickly in Cluster R and slowly in Cluster S, with a turning point at 57.33.}
\label{T3rep}
\end{minipage}
\end{tabular}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{T2re2p.pdf}
\caption{Variation of the friction moment of Friction Disk 2 $T_2$ with the maximal needed operating force $f_2$ is shown, which is a straight line with a slope of 0.80.}
\label{T2re2p}
\end{minipage}
\hspace{1.2cm}
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics [width=7.5cm]{T2rep.pdf}
\caption{Variation of the friction moment of Friction Disk 2 $T_2$ with the total mass $f_1$ is shown. $T_2$ decreases quickly in Cluster R and slowly in Cluster S, with a turning point at 57.33.}
\label{T2rep}
\end{minipage}
\end{tabular}
\begin{tabular}{cc}
\begin{minipage}[c]{0.45\linewidth}
\centering
\includegraphics [width=8cm]{KHbrep.pdf}
\caption{Variation of the product $kH_b$, which represents the tendency of the drafting force of the balancer, with the total mass $f_1$ is shown. The mean of $kH_b$ lies approximately on a line. However, a specific $f_1$ can match different values of $kH_b$ in Cluster R. The range of $kH_b$ shrinks as the total mass increases, converging at about $f_1=57.33$, and then keeps increasing along a line.}
\label{KHbrep}
\end{minipage}
\hspace{1.2cm}
\begin{minipage}[c]{0.45\linewidth}
\centering
\includegraphics [width=8cm]{T1re3p.pdf}
\caption{Variation of the friction moment of Friction Disk 1 $T_1$ with the operating force variation $f_3$ is shown. $T_1$ ranges from 0 to about 5.}
\label{T1re3p}
\end{minipage}
\end{tabular}
\end{figure*}
At Joint 5, it is easy to notice that $m_A$ is fixed at 1.44 kg, while $L_A$ is fixed at 0.2 m, which is the upper bound of $L_A$. This pair of variables keeps Joint 5 balanced with minimal mass and decreases the load on the other two joints. The relationships with $f_1$ are shown in Figs. \ref{mArep} and \ref{LArep}.
At Joint 3, Counterweight B, its connecting rod and a friction disk contribute to balancing. Fig. \ref{mBrep} shows the relation of $m_B$ vs $f_1$, and Fig. \ref{LBrep} shows the relation of $L_B$ vs $f_1$. Both curves can be divided into two clusters (Clusters R and S), representing different situations. The two clusters follow different linear relations, given by Eq.(\ref{mBre}) and Eq.(\ref{LBre}), respectively.
\begin{equation}
m_B=\left\{\begin{array}{ll}
19&f_1\in[56.22, 57.33],\\
1.00f_1-38.41&f_1\in[57.33, 58.72],
\end{array}\right.
\label{mBre}
\end{equation}
\begin{equation}
L_B=\left\{\begin{array}{ll}
0.27f_1-14.98&f_1\in[56.22, 57.33],\\
0.5&f_1\in[57.33, 58.72],
\end{array}\right.
\label{LBre}
\end{equation}
Total mass $f_1=57.33$ is an important turning point of the curves. In Cluster R, $m_B$ stays at 19 kg, which is its lower bound, while $L_B$ increases linearly with $f_1$ until reaching its upper bound, with a slope of 0.27, the reciprocal of the mass per unit length of connecting rod B ($1/0.27 \approx 3.7 = \rho_B$). In Cluster S, $m_B$ increases steadily with $f_1$ with a slope of 1.00, while $L_B$ stays at 0.5 m. This means that an increase of unit length of the connecting rod contributes less total mass than an increase of unit weight of Counterweight B. To keep the total weight minimal, the optimal solutions tend to increase $L_B$ first and to increase $m_B$ only when $L_B$ reaches its upper bound.
In Fig. \ref{T3re2p}, it is illustrated that there is a linear relation between $T_3$ and $f_2$, with the equation
\begin{equation}
T_3=0.91f_2
\label{T3re}
\end{equation}
Comparing Figs. \ref{T3rep}, \ref{mBrep} and \ref{LBrep} shows a trade-off between the mass of Counterweight B, the length of its connecting rod and the friction moment of Disk 3. The moment needed to rotate Friction Disk 3, namely $T_3$, decreases quickly in Cluster R as the total mass increases, because the increase of $L_B$ provides a larger balancing moment. $T_3$ decreases more slowly in Cluster S, along with the increase of $m_B$.
At Joint 2, gravity balance is maintained by the trade-off between the balancer and Friction Disk 2. The situation of $T_2$ is similar to that of $T_3$. In Fig. \ref{T2rep}, the plot shows the relation of $T_2$ vs $f_1$, where the two clusters lie on different decreasing lines with a turning point at $f_1=57.33$. In Fig. \ref{T2re2p}, the plot shows a linear relation between $T_2$ and $f_2$, given as follows.
\begin{equation}
T_2=0.80f_2
\label{T2re}
\end{equation}
Based on Eq.(\ref{balancerForce}), as $g$ and $L_2$ are constant, when $|\cos{q_2}|=1$, $kH_b$ represents the tendency of the drafting force of the balancer. $kH_b$ vs $f_1$ is shown in Fig. \ref{KHbrep}. The spread of $kH_b$ becomes smaller and its mean increases slightly as the total mass grows, because a larger mass requires a larger balancing moment. To simplify the relation, we can treat it as linear based on the mean value and ignore the spread of $kH_b$:
\begin{equation}
kH_b=8.84f_1-81.27, f_1\in[56.22, 58.72],
\label{kHbre}
\end{equation}
It is noticed that the dots of Cluster R in Fig. \ref{KHbrep} are distributed in a triangular region. From Eq.(\ref{me}) and Eq.(\ref{totomass}), $M_e$ is linearly proportional to $f_1$. In Cluster R, $T_2$ decreases rapidly as $f_1$ increases. In Eq.(\ref{balance2sim}), with a fixed $M_e$, the smaller $T_2$ is, the smaller the range of $kH_b$ values that satisfy the inequality, namely a smaller range of the drafting force of the balancer. The spread of $kH_b$ represents the range of adjustment available through the friction moment of Friction Disk 2 in Cluster R. As the total mass increases, this range of adjustment becomes smaller and collapses onto a line.
Fig. \ref{T1re3p} is the plot of $T_1$ vs $f_3$. The variation of $T_1$ is in a range from 0 to 5 $N\centerdot m$. When $f_3$ is small, $T_1$ is nearly 0. From Fig. \ref{VSorg}, $f_1$ is small when $f_3$ stays large, because those solutions tend to have lighter counterweights but mount friction disks with larger friction moments for balance, which probably leads to a large variation of the operating force during motion. Therefore, the friction disk in Joint 1 is used for smoothing the resultant operating force along a trajectory.
Through the innovization study, we establish specific relations between the objectives and design variables. Meanwhile, we gain a deeper understanding of the interaction of the design variables in the optimal solutions. Such knowledge is difficult to discover during problem formulation or a normal design procedure (e.g. the linear relation of $kH_b$ and $f_1$). With this knowledge, the designer can design a new teaching manipulator for other applications without solving the optimization problem again.
\section{Conclusion}
\label{sec:conc}
This paper focuses on modeling and optimizing a teaching manipulator. In the modeling stage, we formulate the balancing conditions of the three gravity sensitive joints and model the operating force performance. The optimization stage shows the procedure to formulate and solve a three-objective constrained design optimization problem. An innovization study is conducted to acquire a deeper understanding of the implicit design principles among multiple solutions.
The three objective functions are the minimization of the total mass of the device, of the maximal operating force needed, and of the difference between the maximum and minimum of the operating force. An evolutionary multi-objective optimization algorithm, NSGA-II-CDP, is used to solve the multi-objective optimization problem. Compared with the original design of a human expert, the obtained solutions on the Pareto front are better in all three objectives.
A comprehensive innovization study is conducted. The optimal solutions are used for mining the implicit knowledge in the optimization problem. Relation equations between the objectives and design variables are established. Meanwhile, the interactions among the objectives and the variables are discussed. The obtained knowledge can help the designer make decisions more efficiently and effectively in a future design procedure.
As the next step, we will extend the innovization study. In this paper, we summarize the principles with visualization methods through human observation. However, many data mining methods have been developed to automate the process of innovization \cite{deb2014integrated, bandaru2015generalized}. It will be of great interest to apply these methods as more powerful tools to extract useful information from the design automation process.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
Convolutional neural networks (CNNs) are increasingly being used in critical applications, such as self-driving cars and face authentication. Recent works have shown that gradient based attacks can reduce accuracy of visual recognition networks to less than 1\%, while minimally perturbing an image. The adversary uses gradient descent through the network to maximize the output at an incorrect label, while minimizing the perturbation to the image. Various attack methods have been produced using this common framework, including Fast Gradient Sign Method (FGSM) \cite{goodfellow2014explaining} and Projected Gradient Descent (PGD) \cite{madry2017towards}. Works have also been proposed to detect such adversarial samples, but none have been published which can estimate the adversarial setup from image samples. Knowing such parameters would allow for more accurate adversarial retraining against such attacks as well as aid in recovering the tools and processes used in these attacks~\cite{darpa-red}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample_attack_crop.png}
\caption{A sample PGD attack against ResNet. Small perturbations against a network with known weights can lead to significant differences in prediction outputs. Scores indicated here are confidence scores from 0-1, where the sum of all scores is equal to 1.}
\label{fig:sample_attack}
\end{figure}
Gradient-descent based adversarial attacks use the gradients of deep neural networks (DNNs) to imperceptibly alter their inputs so as to change the output dramatically. Within this family, there are various strains of algorithms, each with several parameters. In this work, we propose to detect such adversarial attack toolchains and their parameters. Our objectives are two-fold:
\begin{enumerate}
\item To attribute an adversarially attacked image to a particular attack toolchain/family,
\item Once an attack has been identified, determine the parameters of the attack so as to facilitate the reverse engineering of these adversarial deceptions.
\end{enumerate}
We will now briefly describe some of the attacks considered for detection and attribution. A deep neural network (DNN) is represented as a function $f: X \rightarrow Y$, where $X$ denotes the input space of data and $Y$ denotes the output space of the classification categories. The training set comprises known pairs $(x_t,y_t)$, where $x_t \in X$ and $y_t \in Y$, and $f()$ is obtained by minimizing a loss function $J(f(x_t),y_t)$. We will consider the following attacks:
\begin{enumerate}
\item Fast Gradient Sign Method (FGSM): This attack perturbs a clean image x by taking a fixed step in the direction of the gradient of $J(f(x_t),y_t)$ with respect to $x_t$.
\item Projected Gradient Descent (PGD): This attack is an improvement over FGSM, where the adversarial samples $x'$ are generated over multiple iterations and the intermediate results are clipped so as to keep them within the $\epsilon$-neighborhood of $x$: ${x'}_i = \mathrm{clip}_{x,\epsilon}\left({x'}_{i-1} + \alpha \cdot \mathrm{sign}({\nabla}_x J(f({x'}_{i-1}),y))\right)$.
\end{enumerate}
These two attacks are examples of $l_{\infty}$ attacks, where $\epsilon$ represents the maximum allowable perturbation to any pixel in x. The software repositories of these attacks can be obtained from the following: Advertorch~\cite{ding2019advertorch}, Adversarial Robustness Toolbox~\cite{nicolae2018adversarial}, Foolbox~\cite{rauber2017foolbox}, CleverHans~\cite{papernot2016technical}. A PGD example from the Advertorch toolbox is given in Figure \ref{fig:sample_attack}.
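A minimal numpy sketch of this $l_{\infty}$ PGD loop is given below. Here `grad_fn` stands in for the gradient $\nabla_x J$ supplied by the attacked network, images are assumed to take values in $[0,1]$, and the parameter values are illustrative only.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.03, alpha=0.01, steps=10):
    """l_inf PGD: step along the sign of the loss gradient and project
    back into the eps-ball around the clean input x after every step.

    grad_fn(x_adv) must return dJ/dx evaluated at x_adv for the true label
    (a placeholder here; in practice it comes from backprop through f).
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # stay a valid image
    return x_adv

# FGSM corresponds to the single-step special case:
# pgd_attack(x, grad_fn, eps=eps, alpha=eps, steps=1)
```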
\section{Related Works}
Many works have taken the approach of creating more robust networks, for which small changes in input will not significantly change the output classification \cite{bastani2016measuring,gu2014towards,huang2015learning,jin2015robust,papernot2016distillation,rozsa2016adversarial,shaham2015understanding,zheng2016improving}. Generally, these methods cause a significant decrease in accuracy, for both tampered and untampered images \cite{carlini2017adversarial}. While these networks are necessary when class estimation is required for all samples, other methods may be more favorable when this requirement is relaxed.
Detection has become another popular approach to circumventing these attacks \cite{bhagoji2017dimensionality,feinman2017detecting,gong2017adversarial,grosse2017statistical,metzen2017detecting,hendrycks2016early}. Such methods allow for the classification networks to remain as is, while filtering out adversarial examples before they reach the target network. The methods presented in this paper move a step beyond simple detection, with the addition of attack classification and parameter estimation.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{model_high_level.png}
\caption{High level model diagram for detection. All models fit into this framework, with different preprocessing methods.}
\label{fig:model}
\end{figure}
\section{Method}
\subsection{Model}
To enhance the artifacts created by adversarial attacks, we consider two preprocessing methods common in image forensics before training a neural network.
A visual summary of our detector is given in Figure \ref{fig:model}.
As a baseline, we compare these two methods against a method with no preprocessing. The first is a Laplacian high-pass filter. Similar filters have been used for both image resampling detection \cite{kirchner2008fast} and general image manipulation detection \cite{bayar2016deep}. In our tests, the following $3\times3$ filter was applied to each of the RGB channels:
\begin{equation}
h(x,y) =
\begin{bmatrix}
1 & 1 & 1 \\
1 & -8 & 1 \\
1 & 1 & 1
\end{bmatrix}
\end{equation}
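As an illustration, this filtering step can be sketched in NumPy. The function below is a hypothetical minimal implementation; the text does not fix a padding choice, so zero padding is assumed here:

```python
import numpy as np

def laplacian_highpass(img):
    """Apply the 3x3 high-pass filter above to every channel of an
    H x W x C image; zero padding keeps the spatial size unchanged."""
    k = np.array([[1.0,  1.0, 1.0],
                  [1.0, -8.0, 1.0],
                  [1.0,  1.0, 1.0]])
    h, w = img.shape[:2]
    p = np.pad(img.astype(np.float64), ((1, 1), (1, 1), (0, 0)))
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            # symmetric kernel, so correlation and convolution coincide
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w, :]
    return out
```

Since the kernel entries sum to zero, flat image regions map to zero and only high-frequency content, such as adversarial noise, survives.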
The second preprocessing method investigated is the co-occurrence matrix. Such matrices have been used extensively in detection of steganography \cite{sullivan2005steganalysis,sullivan2006steganalysis} as well as in detection of GAN images \cite{nataraj2019detecting}. For this method, two dimensional histograms of adjacent pixel pairs are constructed for each of the color channels. Below we show the equation for horizontal pairs, where $X$ is a 2D array representing a single color channel. A sample image passed through each mode of processing is shown in Figure \ref{fig:example_inputs}.
\begin{equation}
C_{i,j} = \sum_{m,n} [X_{m,n} = i][X_{m,n+1} = j]
\end{equation}
This can be applied to $X^T$ for vertical pairs as well, and on all three channels. These 6 co-occurrence matrices are then stacked into a final input tensor of size $256 \times 256 \times 6$. This tensor is passed to a CNN classifier as a multi-channel image.
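A minimal sketch of this construction for 8-bit inputs (function names are illustrative):

```python
import numpy as np

def co_occurrence(channel):
    """256x256 histogram of horizontally adjacent pixel pairs in a
    single uint8 channel: C[i, j] counts (left, right) = (i, j)."""
    left = channel[:, :-1].ravel().astype(np.intp)
    right = channel[:, 1:].ravel().astype(np.intp)
    C = np.zeros((256, 256), dtype=np.int64)
    np.add.at(C, (left, right), 1)   # unbuffered scatter-add of pair counts
    return C

def co_occurrence_tensor(img):
    """Stack horizontal and vertical matrices for the three color
    channels into the 256 x 256 x 6 input tensor."""
    mats = []
    for c in range(3):
        ch = img[:, :, c]
        mats.append(co_occurrence(ch))    # horizontal pairs on X
        mats.append(co_occurrence(ch.T))  # vertical pairs via X^T
    return np.stack(mats, axis=-1)
```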
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{processing_examples_crop.png}
\caption{An untampered image, and corresponding PGD attacked image, with a large step size and number of steps to amplify the difference. The adversarial noise added appears across the whole image. The difference in the co-occurrence matrices is notable in the significant increase in spread about the diagonal.}
\label{fig:example_inputs}
\end{figure}
\subsection{Detection, Attribution, and Estimation}
For our final output, we would like to tell a user whether or not a query is tampered, what attack method was used, and the parameters for that method. This high-level idea is described visually in Figure \ref{fig:flowchart}.
To accomplish this, we train a multiclass network, with each attack and parameter combination as a different label. To form the aggregated sets, such as real vs tampered, we sum the model outputs associated with each set. The set with the largest output is selected as the estimated class.
If the image is predicted to be tampered, we then compute our parameter estimates using the model outputs for the predicted meta-class. A weighted sum is used, with the model outputs as the weights, and the associated class parameters as the values.
\begin{equation}
P_{est} = \frac{\sum_{i \in S} P_i \times y_i}{\sum_{i \in S} y_i}
\label{eq:weighted_sum}
\end{equation}
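Both the meta-class aggregation and this weighted estimate are simple reductions over the model output vector $y$; a sketch with illustrative class indexing:

```python
import numpy as np

def meta_class(y, class_sets):
    """Sum the model outputs over each set of class indices and return
    the index of the winning meta-class."""
    sums = [y[list(s)].sum() for s in class_sets]
    return int(np.argmax(sums))

def estimate_parameter(y, class_indices, class_params):
    """Weighted average of per-class parameter values, with the model
    outputs over the predicted meta-class as weights."""
    w = y[list(class_indices)]
    p = np.asarray(class_params, dtype=np.float64)
    return float((p * w).sum() / w.sum())
```

For instance, with classes trained only at step sizes 1 and 3, outputs concentrated on the step-size-3 classes pull the estimate toward 3, which is what allows interpolation to the unseen step size 2.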
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{RED_flowchart.png}
\caption{Levels of information provided to the user by our method. Using a single network, we demonstrate results for detection, attribution, and parameter estimation.}
\label{fig:flowchart}
\end{figure*}
\section{Experiments}
\subsection{Dataset}
A full list of the attacks investigated is given in Table \ref{tab:attacks_summary}. These attacks are repeated on VGG16 and ResNet50, and each is classified separately. Only the ends of the parameter spectrum are used for training; parameters between these extremes are seen only at test time. A total of 12 different tampered classes are used at training time, with one additional class for untampered images, for a total of 13.
The dataset is constructed from a random selection of ImageNet samples, all resized to $256 \times 256$. Attacks are run as targeted, with the new label randomly selected from the 999 labels different from the associated ground truth. The attacks are then run to maximize the network output for the target class.
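The target selection can be sketched as follows (seed and helper name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_target(true_label, num_classes=1000):
    """Draw a target label uniformly from the num_classes - 1 labels
    that differ from the ground truth."""
    offset = int(rng.integers(1, num_classes))  # offset in [1, num_classes - 1]
    return (true_label + offset) % num_classes
```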
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Attack & Parameters & Training & Testing \\
\hline
FGSM & ss = 1 & X & X \\
FGSM & ss = 2 & & X \\
FGSM & ss = 3 & X & X \\
PGD & ss = 1, ns = 8 & X & X \\
PGD & ss = 1, ns = 12 & & X \\
PGD & ss = 1, ns = 16 & X & X \\
PGD & ss = 2, ns = 8 & & X \\
PGD & ss = 2, ns = 12 & & X \\
PGD & ss = 2, ns = 16 & & X \\
PGD & ss = 3, ns = 8 & X & X \\
PGD & ss = 3, ns = 12 & & X \\
PGD & ss = 3, ns = 16 & X & X \\
\hline
\end{tabular}
\vspace{6pt}
\caption{Breakdown of the attacks used for training and testing. All attacks are repeated against pretrained VGG16 and ResNet50. "ss" denotes "step size", assuming an image is in the range [0,255], and "ns" denotes "number of steps".}
\label{tab:attacks_summary}
\end{table}
\subsection{Model Training}
A ResNet50 pretrained on ImageNet was used as our initial network, with the input and output layers modified to accommodate the different input and output sizes for this task. The model was trained over 20 epochs, using a batch size of 32, Adam optimizer, and cross-entropy loss. After each of the epochs, the model was evaluated on the validation set. The weights corresponding to the lowest validation loss were saved, and used for the remainder of the tests.
\subsection{Results}
Table \ref{tab:classification} shows our results for several different separations of the meta-classes. The co-occurrence and direct methods performed better on average than the Laplace method across the different tasks. For every task, the best of the three detectors scored above 90\%. Figure \ref{fig:tsne} shows a t-SNE clustering of the deep features taken from one of these classification networks.
Table \ref{tab:estimation} shows the results of each method on different estimation tasks. Notably, the Laplace and co-occurrence methods outperform the baseline direct method on several of the estimation tasks.
\begin{table*}[]
\centering
\scriptsize
\input{conf_dir}
\vspace{6pt}
\caption{Confusion matrix for direct method on test dataset. Column labels are in the same order as row labels. Rows indicate ground truth, columns indicate predicted.}
\label{tab:conf_dir}
\end{table*}
\begin{table*}[]
\centering
\scriptsize
\input{conf_lap}
\vspace{6pt}
\caption{Confusion matrix for Laplace method on test dataset. Column labels are in the same order as row labels. Rows indicate ground truth, columns indicate predicted.}
\label{tab:conf_lap}
\end{table*}
\begin{table*}[]
\centering
\scriptsize
\input{conf_co_occur}
\vspace{6pt}
\caption{Confusion matrix for co-occurrence method on test dataset. Column labels are in the same order as row labels. Rows indicate ground truth, columns indicate predicted.}
\label{tab:conf_co_occur}
\end{table*}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\linewidth]{tsne.png}
\caption{t-SNE results from the co-occurrence model for one of the classification tasks. Features are taken from the penultimate layer of the detection network, and run though t-SNE dimensionality reduction. Clear clusters are seen dividing each class.}
\label{fig:tsne}
\end{figure}
\subsection{Discussion}
Across all tasks, the co-occurrence preprocessing tended to perform the best. Especially noteworthy is the difference in performance of the direct method between the classification tasks and the parameter estimation tasks. Of the three, the direct method provides the most information to the neural network. While this led to good results on the training classes, the information bottleneck imposed by the co-occurrence and Laplace transformations may help reduce overfitting.
\section{Conclusion and Future Directions}
In this work, we presented several methods for attribution and parameter estimation of select adversarial attacks. This combination of detection, attribution, and parameter estimation was accomplished using a single pass through a multi-class neural network, trained on a sampling of several common adversarial attacks.
While our model was demonstrated effective against several attacks not seen in the training sets, there are a variety of additional attacks to be considered. Furthermore, our method of parameter interpolation is inherently limited to estimating values only within the range of values in the training set. Creation of a more robust model for real-world deployment would require a broader sampling of attack methods, target networks, and attack parameters.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Meta Classes & Direct & Laplace & Co-occur \\
\hline
Binary Detection & 0.921 & 0.955 & \textbf{0.970} \\
Full attribution & \textbf{0.907} & 0.834 & 0.808 \\
Original, ResNet, VGG & \textbf{0.925} & 0.834 & 0.865 \\
Original, FGSM, PGD & 0.919 & 0.961 & \textbf{0.978} \\
Full Classification & \textbf{0.928} & 0.766 & 0.835 \\
\hline
\end{tabular}
\vspace{6pt}
\caption{Mean average precision on different meta classification tasks. Full Attribution denotes classification between the original, FGSM ResNet, FGSM VGG, PGD ResNet, and PGD VGG classes. Full Classification refers to accuracy across all 13 classes in the training set.}
\label{tab:classification}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& Direct & Laplace & Co-Occur \\
\hline
FGSM step size & 0.491 & \textbf{0.469} & 0.509 \\
PGD step size & 0.567 & 0.680 & \textbf{0.535} \\
PGD number of steps & 4.17 & 3.66 & \textbf{3.42} \\
\hline
\end{tabular}
\vspace{6pt}
\caption{Root Mean Squared Error (RMSE) for parameter estimation. Step sizes are sampled from \{1,2,3\}, and number of steps sampled from \{8,12,16\}.}
\label{tab:estimation}
\end{table}
{\small
\bibliographystyle{ieee}
\section*{Appendix}\label{appendices}\setcounter{subsection}{0}
\addcontentsline{toc}{section}{Appendix}
\setcounter{equation}{0}
\makeatletter
\renewcommand{\theequation}{\Alph{subsection}.\arabic{equation}}
\renewcommand{\thesubsection}{\Alph{subsection}}
\renewcommand{\thethm}{\Alph{subsection}.\arabic{thm}}
\@addtoreset{equation}{subsection}
\@addtoreset{thm}{subsection}
\makeatother
}
\usepackage[all,knot]{xy}
\xyoption{arc}
\def\slasha#1{\setbox0=\hbox{$#1$}#1\hskip-\wd0\hbox to\wd0{\hss\sl/\/\hss}}
\def\period#1{\setbox0=\hbox{$#1$}#1\hskip-\wd0{}\hskip-\wd0{~-~}}
\def\periodb#1{\setbox0=\hbox{$#1$}#1\hskip-\wd0\hbox to\wd0{-}}
\newcommand{\slasha{\nabla}}{\slasha{\nabla}}
\newcommand{\bar{\slasha{\nabla}}}{\bar{\slasha{\nabla}}}
\newcommand{\para}[1]{\noindent{\bf #1.}}
\newcommand{\binomr}[2]{\binom{\,#1\,}{\,#2\,}}
\newcommand{\bfa}{\mathbf{a}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{b}}{\mathbf{b}}
\newcommand{\mathbf{c}}{\mathbf{c}}
\newcommand{\mathbf{e}}{\mathbf{e}}
\newcommand{\boldsymbol{\eta}}{\boldsymbol{\eta}}
\newcommand{\mathbf{f}}{\mathbf{f}}
\newcommand{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}
\newcommand{\delder}[1]{\frac{\delta}{\delta #1}}
\newcommand{\boldsymbol{\phi}}{\boldsymbol{\phi}}
\newcommand{\lsc}{\{\hspace{-0.1cm}[}
\newcommand{]\hspace{-0.1cm}\}}{]\hspace{-0.1cm}\}}
\newcommand{(\hspace{-0.1cm}(}{(\hspace{-0.1cm}(}
\newcommand{)\hspace{-0.1cm})}{)\hspace{-0.1cm})}
\newcommand{[\hspace{-0.05cm}[}{[\hspace{-0.05cm}[}
\newcommand{]\hspace{-0.05cm}]}{]\hspace{-0.05cm}]}
\newcommand{\mathbf{G}}{\mathbf{G}}
\newcommand{\mathbf{g}}{\mathbf{g}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{L}}{\mathbf{L}}
\newcommand{\boldsymbol{\lambda}}{\boldsymbol{\lambda}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{N}}{\mathbf{N}}
\newcommand{\mathbf{T}}{\mathbf{T}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{w}}{\mathbf{w}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\unit}{\mathbbm{1}}
\newcommand{\zero}{\mathbbm{0}}
\newcommand{\re}{\mathrm{re}}
\newcommand{\im}{\mathrm{im}}
\newcommand{\id}{\mathrm{id}}
\newcommand{\eff}{{\mathrm{eff}}}
\newcommand{\bfone}{\mathbf{1}}
\newcommand{\mathbf{3}}{\mathbf{3}}
\newcommand{\mathbf{4}}{\mathbf{4}}
\newcommand{\CA}{\mathcal{A}}
\newcommand{\tilde{\mathcal{A}}}{\tilde{\mathcal{A}}}
\newcommand{\bar{x}}{\bar{x}}
\newcommand{\hat{x}}{\hat{x}}
\newcommand{\bar{\mathcal{A}}}{\bar{\mathcal{A}}}
\newcommand{\hat{\mathcal{A}}}{\hat{\mathcal{A}}}
\newcommand{\mathscr{A}}{\mathscr{A}}
\newcommand{\dot{p}}{\dot{p}}
\newcommand{\dot{x}}{\dot{x}}
\newcommand{\ddot{x}}{\ddot{x}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathscr{B}}{\mathscr{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathscr{C}}{\mathscr{C}}
\newcommand{\mathscr{M}}{\mathscr{M}}
\newcommand{\mathscr{L}}{\mathscr{L}}
\newcommand{\mathscr{I}}{\mathscr{I}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathscr{D}}{\mathscr{D}}
\newcommand{\bar{\mathscr{D}}}{\bar{\mathscr{D}}}
\newcommand{\mathscr{E}}{\mathscr{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathscr{F}}{\mathscr{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathscr{G}}{\mathscr{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathscr{H}}{\mathscr{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathscr{K}}{\mathscr{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathscr{U}}{\mathscr{U}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{O}}{\mathcal{O}}
\newcommand{\bar{\mathcal{O}}}{\bar{\mathcal{O}}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\hat{\mathcal{P}}}{\hat{\mathcal{P}}}
\newcommand{\mathscr{P}}{\mathscr{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\hat{\mathcal{Q}}}{\hat{\mathcal{Q}}}
\newcommand{\check{C}}{\check{C}}
\newcommand{\mathscr{R}}{\mathscr{R}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathscr{T}}{\mathscr{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathscr{V}}{\mathscr{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathscr{X}}{\mathscr{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathscr{Y}}{\mathscr{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathscr{Z}}{\mathscr{Z}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\frc}{\mathfrak{c}}
\newcommand{\frder}{\mathfrak{der}}
\newcommand{\frg}{\mathfrak{g}}
\newcommand{\frh}{\mathfrak{h}}
\newcommand{\mathfrak{A}}{\mathfrak{A}}
\newcommand{\mathfrak{F}}{\mathfrak{F}}
\newcommand{\mathfrak{G}}{\mathfrak{G}}
\newcommand{\mathfrak{H}}{\mathfrak{H}}
\newcommand{\mathfrak{N}}{\mathfrak{N}}
\newcommand{\mathfrak{S}}{\mathfrak{S}}
\newcommand{\mathfrak{U}}{\mathfrak{U}}
\newcommand{\mathfrak{u}}{\mathfrak{u}}
\newcommand{\mathfrak{X}}{\mathfrak{X}}
\newcommand{\mathfrak{v}}{\mathfrak{v}}
\newcommand{\frl}{\mathfrak{l}}
\newcommand{\mbf}[1]{{\boldsymbol {#1} }}
\newcommand{{\mbf T}}{{\mbf T}}
\newcommand{{{\Large\blacktriangledown}}}{{{\Large\blacktriangledown}}}
\newcommand{\XF}{\mathcal{X}}
\newcommand{\FK}{\mathbbm{K}}
\newcommand{\FT}{\mathbbm{T}}
\newcommand{\FR}{\mathbbm{R}}
\newcommand{\FC}{\mathbbm{C}}
\newcommand{\FH}{\mathbbm{H}}
\newcommand{\FO}{\mathbbm{O}}
\newcommand{\NN}{\mathbbm{N}}
\newcommand{\DD}{\mathbbm{D}}
\newcommand{\FF}{\mathbbm{F}}
\newcommand{\VV}{\mathbbm{V}}
\newcommand{\RZ}{\mathbbm{Z}}
\newcommand{\CPP}{{\mathbbm{C}P}}
\newcommand{\PP}{{\mathbbm{P}}}
\newcommand{\HS}{\mathbbm{F}}
\newcommand{\hat{a}}{\hat{a}}
\newcommand{\hat{b}}{\hat{b}}
\newcommand{\hat{\lambda}}{\hat{\lambda}}
\newcommand{\bar{\lambda}}{\bar{\lambda}}
\newcommand{\hat{A}}{\hat{A}}
\newcommand{\hat{C}}{\hat{C}}
\newcommand{\hat{L}}{\hat{L}}
\newcommand{\hat{f}}{\hat{f}}
\newcommand{\hat{I}}{\hat{I}}
\newcommand{\AlA}{\mathcal{A}}
\newcommand{\AlC}{\mathcal{C}}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\dpar}{\partial}
\newcommand{\dparb}{{\bar{\partial}}}
\newcommand{\delb}{{\bar{\delta}}}
\newcommand{\nablab}{{\bar{\nabla}}}
\newcommand{\chib}{{\bar{\chi}}}
\newcommand{\embd}{{\hookrightarrow}}
\newcommand{\diag}{{\mathrm{diag}}}
\newcommand{\dL}{\mathcal{L}}
\newcommand{\dD}{\mathcal{D}}
\newcommand{\de}{\mathrm{e}}
\newcommand{\di}{\mathrm{i}}
\newcommand{\eps}{{\varepsilon}}
\newcommand{\epsb}{{\bar{\varepsilon}}}
\renewcommand{\Re}{\mathrm{Re}}
\renewcommand{\Im}{\mathrm{Im}}
\newcommand{\bi}{{\bar{\imath}}}
\newcommand{{\bar{\jmath}}}{{\bar{\jmath}}}
\newcommand{{\bar{1}}}{{\bar{1}}}
\newcommand{{\bar{w}}}{{\bar{w}}}
\newcommand{{\bar{z}}}{{\bar{z}}}
\newcommand{{\bar{\psi}}}{{\bar{\psi}}}
\newcommand{{\bar{\theta}}}{{\bar{\theta}}}
\newcommand{{\bar{\phi}}}{{\bar{\phi}}}
\newcommand{{\bar{\lambda}}}{{\bar{\lambda}}}
\newcommand{{\bar{\zeta}}}{{\bar{\zeta}}}
\newcommand{{\bar{E}}}{{\bar{E}}}
\newcommand{\mathsf{B}}{\mathsf{B}}
\newcommand{{\bar{V}}}{{\bar{V}}}
\newcommand{{\bar{D}}}{{\bar{D}}}
\newcommand{{\bar{W}}}{{\bar{W}}}
\newcommand{{\bar{y}}}{{\bar{y}}}
\newcommand{{\bar{\mu}}}{{\bar{\mu}}}
\newcommand{{\bar{\eta}}}{{\bar{\eta}}}
\newcommand{{\bar{\sigma}}}{{\bar{\sigma}}}
\newcommand{{\bar{\Theta}}}{{\bar{\Theta}}}
\newcommand{\hl}{{\hat{\lambda}}}
\newcommand{\ald}{{\dot{\alpha}}}
\newcommand{{\dot{\beta}}}{{\dot{\beta}}}
\newcommand{{\dot{\gamma}}}{{\dot{\gamma}}}
\newcommand{{\dot{\delta}}}{{\dot{\delta}}}
\newcommand{{\dot{\rho}}}{{\dot{\rho}}}
\newcommand{{\dot{\sigma}}}{{\dot{\sigma}}}
\newcommand{{\dot{1}}}{{\dot{1}}}
\newcommand{{\dot{2}}}{{\dot{2}}}
\newcommand{{\dot{\theta}}}{{\dot{\theta}}}
\newcommand{\tphi}{{\tilde{\phi}}}
\newcommand{{\tilde{\eta}}}{{\tilde{\eta}}}
\newcommand{\eand}{{\qquad\mbox{and}\qquad}}
\newcommand{{\qquad\mbox{with}\qquad}}{{\qquad\mbox{with}\qquad}}
\newcommand{{\qquad\mbox{for}\qquad}}{{\qquad\mbox{for}\qquad}}
\newcommand{{\qquad\mbox{on}\qquad}}{{\qquad\mbox{on}\qquad}}
\newcommand{{\mathrm{ker}}}{{\mathrm{ker}}}
\newcommand{{\,\cdot\,}}{{\,\cdot\,}}
\newcommand{\G}[3]{\Gamma^{#1}_{#2#3}}
\newcommand{\der}[1]{\frac{\dpar}{\dpar #1}}
\newcommand{\dder}[1]{\frac{\dd}{\dd #1}}
\newcommand{\derr}[2]{\frac{\dpar #1}{\dpar #2}}
\newcommand{\ddpart}[1]{\dd #1 \der{#1}}
\newcommand{\dderr}[2]{\frac{\dd #1}{\dd #2}}
\newcommand{\Der}[1]{\frac{\delta}{\delta #1}}
\newcommand{\ci}[1]{\overset{\circ}{#1}{}}
\newcommand{\tr}{\,\mathrm{tr}\,}
\newcommand{\pr}{\mathsf{pr}}
\newcommand{\tra}[1]{\,\mathrm{tr}_{#1}\,}
\newcommand{\str}{{\,\mathrm{str}\,}}
\newcommand{\ad}{\mathrm{ad}}
\newcommand{\Ad}{\mathrm{Ad}}
\newcommand{^{\mathrm{T}}}{^{\mathrm{T}}}
\newcommand{\dual}{^\vee}
\newcommand{\agl}{\mathfrak{gl}}
\newcommand{\mathfrak{s}}{\mathfrak{s}}
\newcommand{\mathfrak{sl}}{\mathfrak{sl}}
\newcommand{\mathfrak{u}}{\mathfrak{u}}
\newcommand{\mathfrak{su}}{\mathfrak{su}}
\newcommand{\mathfrak{so}}{\mathfrak{so}}
\newcommand{\mathfrak{spin}}{\mathfrak{spin}}
\newcommand{\mathfrak{u}}{\mathfrak{u}}
\newcommand{\sU}{\mathsf{U}}
\newcommand{\mathsf{V}}{\mathsf{V}}
\newcommand{\mathsf{G}}{\mathsf{G}}
\newcommand{\mathsf{N}}{\mathsf{N}}
\newcommand{\mathsf{L}}{\mathsf{L}}
\newcommand{\mathsf{Hom}}{\mathsf{Hom}}
\newcommand{\mathsf{Lie}}{\mathsf{Lie}}
\newcommand{\mathsf{CE}}{\mathsf{CE}}
\newcommand{\mathsf{S}}{\mathsf{S}}
\newcommand{\mathsf{Gr}}{\mathsf{Gr}}
\newcommand{\mathsf{H}}{\mathsf{H}}
\newcommand{\mathsf{SU}}{\mathsf{SU}}
\newcommand{\mathsf{SL}}{\mathsf{SL}}
\newcommand{\mathsf{GL}}{\mathsf{GL}}
\newcommand{\mathsf{Mat}}{\mathsf{Mat}}
\newcommand{\mathsf{O}}{\mathsf{O}}
\newcommand{\mathsf{M}}{\mathsf{M}}
\newcommand{\boldsymbol{F}}{\boldsymbol{F}}
\newcommand{\mathsf{Diff}}{\mathsf{Diff}}
\newcommand{\mathsf{Aut}}{\mathsf{Aut}}
\newcommand{\mathsf{SO}}{\mathsf{SO}}
\newcommand{\mathsf{Spin}}{\mathsf{Spin}}
\newcommand{\mathsf{End}}{\mathsf{End}}
\newcommand{\mathsf{SpecM}\,}{\mathsf{SpecM}\,}
\newcommand{|0\rangle}{|0\rangle}
\newcommand{\langle 0|}{\langle 0|}
\newcommand{\spn}{\mathrm{span}}
\newcommand{\acton}{\vartriangleright}
\renewcommand{\remark}[1]{}
\newcommand{\cmpl}{\mbox{[[to be completed]]}} %
\newcommand{\z}[1]{{\stackrel{\circ}{#1}}{}}
\newcommand{{\diagup\hspace{-0.27cm}\diamond}}{{\diagup\hspace{-0.27cm}\diamond}}
\newcommand{{\diagup}}{{\diagup}}
\newcommand{\inner}{\mathrm{int}}
\def\tyng(#1){\hbox{\tiny$\yng(#1)$}}
\def\tyoung(#1){\hbox{\tiny$\young(#1)$}}
\def\cpv{\setbox0=\hbox{$\int$}\int\hskip-\wd0{}\hskip-\wd0{~-~}}
\newcommand{\todo}[1]{{\textcolor{red}{#1}}}
\newcommand{\myxymatrix}[1]{\vcenter{\vbox{\xymatrix{#1}}}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\mathsf{Diff}}{\mathsf{Diff}}
\newcommand{{\sf p}}{{\sf p}}
\newcommand{{\sf s}}{{\sf s}}
\newcommand{{\sf t}}{{\sf t}}
\newcommand{{\sf m}}{{\sf m}}
\newcommand{{\sf g}}{{\sf g}}
\newcommand{{\sf h}}{{\sf h}}
\newcommand{{\sf d}}{{\sf d}}
\newcommand{{\sf b}}{{\sf b}}
\newcommand{{\sf f}}{{\sf f}}
\newcommand{{\sf k}}{{\sf k}}
\newenvironment{myitemize}{
\vspace{-2mm}\begin{itemize}
\setlength{\itemsep}{-1mm}
}{\vspace{-2mm}\end{itemize}}
\newcommand{\Omega_{{\rm cl},\RZ}}{\Omega_{{\rm cl},\RZ}}
\newcommand{{\sf Pair}}{{\sf Pair}}
\newcommand{{\sf hol}}{{\sf hol}}
\newcommand{\;\cdot\;}{\;\cdot\;}
\newcommand{\mathrm{cr}}{\mathrm{cr}}
\newcommand{\gamma_\mathrm{str}}{\gamma_\mathrm{str}}
\newenvironment{conditions}{
\vspace{-2mm}\begin{itemize}
\setlength{\itemsep}{-1mm}
}{\vspace{-2mm}\end{itemize}}
\begin{document}
\begin{titlepage}
\begin{flushright}
EMPG--15--11
\end{flushright}
\vskip 2.0cm
\begin{center}
{\LARGE \bf The Phase Diagram of \\[0.4cm] Scalar Field Theory on the Fuzzy Disc}
\vskip 1.5cm
{\Large Simone Rea and Christian S\"amann}
\setcounter{footnote}{0}
\renewcommand{\thefootnote}{\arabic{thefootnote}}
\vskip 1cm
{\em Maxwell Institute for Mathematical Sciences\\
Department of Mathematics, Heriot-Watt University\\
Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, U.K.}\\[0.5cm]
{Email: {\ttfamily [email protected] , [email protected]}}
\end{center}
\vskip 1.0cm
\begin{center}
{\bf Abstract}
\end{center}
\begin{quote}
Using a recently developed bootstrapping method, we compute the phase diagram of scalar field theory on the fuzzy disc with quartic even potential. We find three distinct phases with second and third order phase transitions between them. In particular, we find that the second order phase transition happens approximately at a fixed ratio of the two coupling constants defining the potential. We compute this ratio analytically in the limit of large coupling constants. Our results qualitatively agree with previously obtained numerical results.
\end{quote}
\end{titlepage}
\section{Introduction}
By a fuzzy space, one usually means a geometric quantization of a compact K\"ahler manifold. The compactness of the space implies that the arising Hilbert space and therefore also the algebra of observables given by the endomorphisms of this Hilbert space are finite dimensional. Correspondingly, there is a minimal resolution with which the space is perceived, which renders it ``fuzzy.'' For this reason, fuzzy spaces are good candidates for regularizing quantum field theories \cite{Grosse:1995ar}, because the path integral over all observables turns into a finite number of ordinary integrals. To assess to what extent quantum field theories on fuzzy spaces approximate the corresponding continuum quantum field theories, it is particularly useful to study the phase diagrams of the theories in the thermodynamic limit.
To this end, we need to evaluate the free energy of the fuzzy quantum field theory. As usual in geometric quantization, real functions on compact K\"ahler manifolds are mapped to hermitian operators, representing the quantization on the corresponding fuzzy manifold. Therefore, scalar field theories on fuzzy spaces are simply hermitian matrix models. Contrary to the hermitian matrix models most common in the literature, however, these matrix models come with a kinetic term containing fixed external matrices. This kinetic term presents an obstacle for applying the usual techniques for solving matrix models directly. In particular, the action is no longer invariant under similarity transformations and diagonalizing the matrix is no longer straightforward. This problem can be overcome by rewriting the kinetic term as a multitrace expression, which can then be solved at least in the limit of large matrix size, e.g.\ by the saddle point approximation.
This approach was developed in \cite{O'Connor:2007ea} and used there, in \cite{Saemann:2010bw} and in \cite{Ihl:2010bu} to compute the phase diagram of scalar quantum field theory on fuzzy complex projective spaces as well as on a three-dimensional space consisting of the Cartesian product of $\FR$ and the fuzzy sphere. Further applications of this technique are found in \cite{Ydri:2014uaa}, see also \cite{Nair:2011ux,Polychronakos:2013nca,Tekel:2014bta}. The rewriting of the kinetic term was done by applying techniques from group theory, making the calculations somewhat cumbersome. An alternative bootstrapping method for turning the kinetic term into multitraces was then found in \cite{Saemann:2014pca}. Here, enough conditions on the coefficients in the multitrace expressions are derived to fix them uniquely.
The purpose of this letter is to use the bootstrapping approach to compute the phase diagram of scalar field theory on the fuzzy disc \cite{Lizzi:2003ru} and to compare the result to the numerical findings of \cite{Lizzi:2012xy}. The fuzzy disc is particularly appealing as the kinetic term is somewhat simpler than in the case of the fuzzy sphere. One may therefore hope that on the fuzzy disc, quantum scalar field theory is better behaved than on the fuzzy sphere or even that the resulting hermitian matrix model is fully solvable.
This letter is structured as follows. In section 2, we briefly review the construction of the fuzzy disc and the definition of quantum scalar field theory on it, fixing our conventions. Section 3 deals with rewriting the resulting action as a multitrace expression and taking the limit of large matrix size. We compute the phase diagram of the model using a saddle point approximation in section 4, where we also compare our results to the numerical literature. We conclude in section 5.
\section{The model}
We start our discussion with a concise review of scalar field theory on the fuzzy disc.
\subsection{Fuzzy disc}
The fuzzy disc \cite{Lizzi:2003ru} provides a quantization of the algebra of functions on the unit disc in $\FC\cong\FR^2$. It is obtained by truncating and rescaling the matrix algebra of the Moyal plane. Moreover, one can obtain the fuzzy sphere by gluing together two fuzzy discs. Below, we briefly recall its construction, following roughly \cite{Lizzi:2003ru}.
Recall that the Moyal plane $\FR^2_\theta$ is the geometric quantization of the complex plane with respect to the canonical symplectic structure, see e.g.\ \cite{IuliuLazaroiu:2008pk} for details. The result of this procedure is an infinite-dimensional Hilbert space $\mathcal{H}$, which agrees with the usual Hilbert space of the harmonic oscillator up to a normalization. That is, we have a vacuum state $|0\rangle$ together with annihilation and creation operators satisfying
\begin{equation}
\hat{a}|0\rangle=0~,~~~ [\hat a, \hat a^\dagger]=\theta~,~\theta\in \FR^{> 0}~.
\end{equation}
We denote the eigenstates of the number operator $\hat N=\hat a^\dagger\hat a$ with eigenvalue $n\theta$ by $|n\rangle$:
\begin{equation}
\hat N |n\rangle=n\theta|n\rangle~,~~~n\in \NN~.
\end{equation}
We also introduce a corresponding basis for $\mathcal{H}^*$: $\langle m|$, $m\in \NN$, normalized such that $\langle m|n\rangle=\delta_{mn}$. The endomorphisms on $\mathcal{H}$ are spanned by linear combinations of the operators $|m\rangle \langle n|$. The Berezin symbol map $\sigma$ arising in the quantization assigns to each element of $\mathsf{End}(\mathcal{H})$ a function on $\FC\cong \FR^2$, providing a dequantization map. We have
\begin{equation}
\hat{f}=\sum_{m,n=0}^\infty f_{mn}|m\rangle \langle n|~,~~~\sigma(\hat f)(z,\bar z)=\de^{-\frac{|z|^2}{\theta}}\sum_{m,n=0}^\infty f_{mn} \frac{\bar z^m z^n}{\sqrt{m!n!\theta^{m+n}}}~,
\end{equation}
where $z\in \FC$.\footnote{We follow physics conventions and write $f(z,\bar z)$ for a non-holomorphic function $f$.} In particular, as the usual quantization axioms demand, $\sigma(\unit)=1$. Note that real functions correspond to the hermitian endomorphisms $\mathsf{End}_H(\mathcal{H})$.
Let us now introduce the following projector on a sub-Hilbert space $\mathcal{H}_\circ$:
\begin{equation}
P_N:=\sum_{n=0}^{N-1}|n\rangle \langle n|~,~~~\mathcal{H}_\circ:= P_N\,\mathcal{H}\eand \mathsf{End}(\mathcal{H}_\circ):=P_N\,\mathsf{End}(\mathcal{H})\,P_N~.
\end{equation}
The Berezin symbol of this projector reads as
\begin{equation}
\sigma(P_N)(z,\bar z)=\sum_{n=0}^{N-1}\frac{r^{2n}}{n!\theta^n} \de^{-\frac{r^2}{\theta}}=\frac{\Gamma(N,\frac{r^2}{\theta})}{\Gamma(N)}~,~~~r=\sqrt{\bar z z}~.
\end{equation}
Here, $\Gamma(n,x)$ is the incomplete gamma function. Note that
\begin{equation}
\lim_{N\rightarrow \infty}(\sigma(P_N)(z,\bar z))=\left\{\begin{array}{ll}
1 & \mbox{for}~~\frac{r^2}{\theta}<N~,\\
0 & \mbox{for}~~\frac{r^2}{\theta}>N~,
\end{array}\right.
\end{equation}
which shows that the projector $P_N$ corresponds to a step function with support on the disc $D_R\subset \FC$ of radius $R=\sqrt{N \theta}$. This justifies the identification of the algebra $\mathsf{End}_H(\mathcal{H}_\circ)$ with a quantization of the algebra of real functions on $D_R$, and we call the corresponding noncommutative space the {\em fuzzy disc}. We will always work with a disc of radius $R=1$, fixing $\theta=\frac{1}{N}$.
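This limiting behavior is easily checked numerically. The following sketch (purely illustrative, with $\theta=\frac{1}{N}$) evaluates the truncated sum in log space to avoid overflow:

```python
import math

def projector_symbol(N, r):
    """Berezin symbol of P_N at radius r for theta = 1/N:
    exp(-N r^2) * sum_{n=0}^{N-1} (N r^2)^n / n!, summed in log space."""
    x = N * r * r
    return sum(math.exp(n * math.log(x) - math.lgamma(n + 1) - x)
               for n in range(N))
```

Already at $N=200$ the symbol is close to the step function: it exceeds $0.99$ at $r=0.8$ and drops below $0.01$ at $r=1.2$.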
\subsection{Scalar field theory on the fuzzy disc}
To study scalar field theory on the fuzzy disc, we have to introduce a Laplace operator, i.e.\ an operator on $\mathsf{End}_H(\mathcal{H}_\circ)$, which approximates the usual Laplace operator $\Delta=4\der{z}\der{\bar z}$. Recall that geometric quantization and the Berezin symbol map lead to the identification
\begin{equation}
N[\hat a,-]\sim \der{\bar z}\eand -N[\hat a^\dagger,-]\sim \der{z}
\end{equation}
on the Moyal plane. On the fuzzy disc, we can combine these operators with the projectors $P_N$ to obtain a Laplace operator. There are two obvious candidates:
\begin{equation}
\Delta_N \hat f:=-4N^2P_N[\hat a,[\hat a^\dagger,\hat f]]P_N\eand \Delta_N \hat f:=-4N^2P_N[\hat a,P_N[\hat a^\dagger,\hat f]P_N]P_N
\end{equation}
for $\hat f\in \mathsf{End}_H(\mathcal{H}_\circ)$. The first one was used in \cite{Lizzi:2003ru} and \cite{Lizzi:2012xy}, but the second one has the advantage that $\Delta_N \unit=0$, as one might expect for the constant function $\sigma(\unit)(z,\bar z)=1$. The latter expectation is somewhat debatable as constant functions on the disc are in fact step functions, and one might argue that the fuzzy boundary should lead to deviations from $\Delta_N \unit=0$. In the following, we will nevertheless work with the second Laplace operator for two reasons. First, this choice simplifies our computations dramatically and second, we will be mostly interested in the large $N$ limit, in which both choices agree anyway.
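The difference between the two candidates is easily seen in a small numerical experiment: embedding the truncated operators into a larger ambient space, the first Laplacian picks up a boundary contribution on the unit matrix, while the second annihilates it. A NumPy sketch (sizes are illustrative):

```python
import numpy as np

N, M = 6, 10          # truncation N inside an ambient space of dimension M
theta = 1.0 / N
# truncated annihilation operator: a|n> = sqrt(n theta)|n-1>,
# so [a, a^dagger] = theta away from the cutoff
a = np.diag(np.sqrt(theta * np.arange(1, M)), k=1)
adag = a.conj().T
P = np.diag([1.0] * N + [0.0] * (M - N))     # projector P_N

def comm(x, y):
    return x @ y - y @ x

one = P                                      # unit of End(H_circ)
lap1 = -4 * N**2 * (P @ comm(a, comm(adag, one)) @ P)
lap2 = -4 * N**2 * (P @ comm(a, P @ comm(adag, one) @ P) @ P)
```

One finds $\Delta_N\unit=0$ exactly for the second operator, while the first produces a single nonzero entry $-4N^2\theta N=-4N^2$ at position $(N-1,N-1)$, localized at the fuzzy boundary.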
The second ingredient we need is the notion of an integral. Geometric quantization and normalization of the integrals lead to the following identification:
\begin{equation}
\int \frac{\dd^2 z}{2}~ \sigma(\hat f)(z,\bar z)=\frac{\pi R^2}{N}\tr(\hat f)=\pi\theta\tr(\hat f)=\frac{\pi}{N}\tr(\hat f)~,
\end{equation}
where the last equality is again due to our choice $R=\sqrt{N\theta}=1$.
Introducing the shorthand $\check a:=P_N \hat a P_N$, we can now write down the action for scalar field theory on the fuzzy disc with quartic potential terms:
\begin{equation}\label{eq:action}
S[\hat \Phi] = \frac{\pi}{N} \tr\left(-4N^2\hat \Phi [ \check a, [ \check a^{\dagger}, \hat \Phi ] ] +r\hat \Phi^2 + g\hat \Phi^4 \right)~,
\end{equation}
where $r,g\in \FR$ such that $rx^2+gx^4$ is bounded from below for all $x\in\FR$. From now on, we will regard $\hat{\Phi}\in\mathsf{End}_H(\mathcal{H}_\circ)$ as a hermitian $N\times N$-matrix $\Phi$ and drop the hat to simplify our notation. The partition function for the action \eqref{eq:action} is then given by
\begin{equation}\label{eq:partition_function}
\mathcal{Z}:=\int \dd \mu_D(\Phi)~\de^{-\beta S[\Phi]}~,
\end{equation}
where $\dd \mu_D(\Phi)$ denotes the usual Dyson measure on the space of hermitian matrices.
\section{Computing the partition function}
As in the cases of the fuzzy sphere and fuzzy $\CPP^n$ in general \cite{O'Connor:2007ea,Saemann:2010bw,Saemann:2014pca}, one cannot apply the usual techniques of hermitian matrix models to the partition function \eqref{eq:partition_function} in a straightforward manner. This is due to the fact that the kinetic term presents an obstacle to a simple diagonalization of $\Phi$. To circumvent this problem, we will rewrite the kinetic term as multitrace expressions.
\subsection{Multitrace action}
As usual in dealing with hermitian matrix models, we wish to diagonalize the hermitian matrix $\Phi$ as $\Phi=\Omega\Lambda\Omega^\dagger$, where $\Lambda$ is a diagonal matrix containing the eigenvalues $\lambda_1,\ldots,\lambda_N$ of $\Phi$ and $\Omega\in \sU(N)$. Under this change of variables, the Dyson measure $\dd \mu_D(\Phi)$ on the space of hermitian $N\times N$-matrices, which appeared in the partition function \eqref{eq:partition_function}, decomposes according to
\begin{equation}
\int \dd \mu_D(\Phi) \ =\ \int \prod_{i=1}^N \dd \lambda_i~ \Delta^2(\Lambda) \int \dd \mu_H(\Omega)~.
\end{equation}
Here, $\Delta(\Lambda)$ is the Vandermonde determinant:
\begin{equation}
\Delta(\Lambda) \ := \ \det([\lambda_i^{j-1}]_{ij}) \ =\ \prod_{i>j}{(\lambda_i-\lambda_j)}
\end{equation}
and $\dd \mu_H(\Omega)$ is the Haar measure on $\sU(N)$. In a partition function, the Vandermonde determinant induces a repulsive interaction between eigenvalues:
\begin{equation}\label{eq:partion_VDM}
\begin{aligned}
\mathcal{Z} = \ &\int \prod_{i=1}^N \dd \lambda_i ~ \Delta^2(\Lambda) \int \dd \mu_H(\Omega)~ \de^{-\beta S[\Omega\Lambda\Omega^\dagger]}\\
= \ &\int \prod_{i=1}^N \dd \lambda_i ~ \int \dd \mu_H(\Omega)~\de^{-\beta S[\Omega\Lambda\Omega^\dagger] +2\sum_{i>j}{\log|\lambda_i-\lambda_j|}}~.
\end{aligned}
\end{equation}
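As a quick sanity check, the product formula for the Vandermonde determinant can be verified numerically; the following is a short illustrative sketch in Python with randomly sampled eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam = rng.standard_normal(6)          # sample eigenvalues lambda_1, ..., lambda_6

# Vandermonde matrix [lambda_i^(j-1)]_ij, i.e. rows (1, lambda_i, lambda_i^2, ...)
V = np.vander(lam, increasing=True)

# product over pairs i > j of (lambda_i - lambda_j)
prod = np.prod([lam[i] - lam[j] for i in range(len(lam)) for j in range(i)])

assert np.isclose(np.linalg.det(V), prod)
```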
In the case of ordinary hermitian matrix models, the action is invariant under $\Phi\rightarrow \Omega\Phi\Omega^\dagger$. Therefore, the integral over the Haar measure just gives a constant factor. In the case of our model \eqref{eq:action}, however, the kinetic term $S_{\rm kin}[\Phi] = -4\pi N \tr \! \left(\Phi[\check a ,[\check a^{\dagger},\Phi]]\right)$ is not invariant under this transformation and hence our first goal is to compute the following integral:
\begin{equation}
\mathscr{I}:=\int \dd \mu_H(\Omega)~\de^{\,\beta\,4\pi N \tr \! \left(\Omega\Lambda\Omega^\dagger[\check a ,[\check a^{\dagger},\Omega\Lambda\Omega^\dagger]]\right)}
\end{equation}
As shown in \cite{O'Connor:2007ea}, it is possible to rewrite the kinetic term as a sum of traces and multitraces of polynomials in $\Phi$ under the integral. This multitrace action is then invariant under $\Phi\rightarrow \Omega\Phi\Omega^\dagger$ and the integral over $\Omega$ becomes trivial.
Since our model is invariant under $\Phi \to -\Phi$, our multitrace action will only contain terms of even total power of $\Phi$. At each order $\alpha$, there are $p(\alpha)$ terms in $S_{\rm MT}$ of total power $\alpha$ in $\Phi$, where $p(\alpha)$ denotes the number of integer partitions of $\alpha$. We label the corresponding coefficients by $a_{\pi_1,\pi_2,\ldots,\pi_k}$, where $\pi_1+\pi_2+\ldots+\pi_k$ is a partition of $\alpha$:
\begin{equation}\label{eq:MT-action}
\begin{aligned}
S_{\rm MT}[\Phi]=&a_2\tr(\Phi^2)+a_{1,1}\tr(\Phi)^2+a_4\tr(\Phi^4)+a_{3,1}\tr(\Phi^3)\tr(\Phi)+\\
&+a_{2,2}\tr(\Phi^2)^2+a_{2,1,1}\tr(\Phi^2)\tr(\Phi)^2+a_{1,1,1,1}\tr(\Phi)^4+\ldots~.
\end{aligned}
\end{equation}
Two methods have been developed to compute the coefficients $a_{\pi_1,\ldots,\pi_k}$. The first one uses group theoretic techniques and was applied in \cite{O'Connor:2007ea} and \cite{Saemann:2010bw} to compute the partition function of scalar field theory on fuzzy $\CPP^n$. The second one is a bootstrapping method presented in \cite{Saemann:2014pca}, which is more robust and more easily implemented, and we will use this method in the following.
The basic idea behind the bootstrapping method consists in choosing suitable differential operators $D$ such that
\begin{equation}
\begin{aligned}
D \, \de^{-\beta S_{\rm kin}[\Phi]}\ =\ & D \, \de^{-\beta S_{\rm MT}[\Phi]}~,\\
D\,\de^{-\beta S_{\rm kin}[\Phi]} = \mathcal{O}_{\rm kin}[\Phi]\,\de^{-\beta S_{\rm kin}[\Phi]}~~~\mbox{and}&~~~D\,\de^{-\beta S_{\rm MT}[\Phi]} = \mathcal{O}_{\rm MT}[\Phi]\,\de^{-\beta S_{\rm MT}[\Phi]}~,
\end{aligned}
\end{equation}
and both $\mathcal{O}_{\rm kin}$ and $\mathcal{O}_{\rm MT}$ are invariant under $\Phi \rightarrow \Omega\Phi\Omega^\dagger$ for $\Omega \in \sU(N)$. Then the operators $\mathcal{O}_{\rm kin}$ and $\mathcal{O}_{\rm MT}$ can be pulled out of the integral and the equation
\begin{equation}\label{bootstr}
D \,\int \dd \mu_H(\Omega)~\de^{-\beta S_{\rm kin}[\Phi]} = D \,\int \dd \mu_H(\Omega)~\de^{-\beta S_{\rm MT}[\Phi]}
\end{equation}
implies $\mathcal{O}_{\rm kin}=\mathcal{O}_{\rm MT}$. The left hand side of \eqref{bootstr} only depends on $\beta$ and $N$ while the right hand side depends on $\beta$, $N$ and the coefficients $a_{\pi_1,\ldots,\pi_k}$. Hence, given a sufficient number of differential operators, equations \eqref{bootstr} will yield enough conditions to fix all the coefficients in $S_{\rm MT}[\Phi]$, thereby giving the desired rewriting of the action.
It was found in \cite{Saemann:2014pca} that the operator $\sum_a \der{\Phi_{aa}}$, which we abbreviate as $\der{\Phi_{aa}}$, yields conditions that fix more than half of the unknown coefficients in $S_{\rm MT}[\Phi]$. More precisely, we have:
\begin{equation}
\der{\Phi_{aa}} \, \de^{-\beta S_{\rm kin}[\Phi]} = 16 \beta \pi N \tr\left( [ \check a, [ \check a^{\dagger}, \Phi ] ]\right)~ \de^{-\beta S_{\rm kin}[\Phi]} = 0~,
\end{equation}
from which it follows that
\begin{equation}
\der{\Phi_{aa}} \, \de^{-\beta S_{\rm MT}[\Phi]} =0~.
\end{equation}
This equation holds, in fact, for any scalar field theory on any fuzzy space if the kinetic term of the continuum theory vanishes on constant functions and the quantization condition $\sigma(\unit)=1$ is fulfilled. This is the case for our choice of Laplace operator on the fuzzy disc.
Equation \eqref{bootstr} for $D=\der{\Phi_{aa}}$ yields $p(\alpha-1)$ conditions on the coefficients of $ S_{\rm MT}[\Phi]$, which we can use to express all coefficients of the form $a_{\pi_1,\ldots,\pi_{k-1},1}$ in terms of other coefficients. In particular, consider the terms in $\mathcal{O}_{\rm kin}[\Phi]$ and $\mathcal{O}_{\rm MT}[\Phi]$ corresponding to the partition $\pi_1+\ldots+\pi_{k-1}=\alpha-1$. In terms of coefficients appearing in $S_{\rm MT}$, $\mathcal{O}_{\rm kin}[\Phi]=\mathcal{O}_{\rm MT}[\Phi]$ amounts to \cite{Saemann:2014pca}
\begin{equation}
\begin{aligned}
a_{\pi_1,\pi_2,\ldots,\pi_{k-1},1}=-\frac{1}{qN}\sum_{\sigma}\Big(&(\sigma(\pi_1)+1)a_{\sigma(\pi_1)+1,\sigma(\pi_2),\ldots,\sigma(\pi_{k-1})}+\\
&+(\sigma(\pi_2)+1)a_{\sigma(\pi_1),\sigma(\pi_2)+1,\ldots,\sigma(\pi_{k-1})}+\ldots\\
&\hspace{1cm}+(\sigma(\pi_{k-1})+1)a_{\sigma(\pi_1),\sigma(\pi_2),\ldots,\sigma(\pi_{k-1})+1}\Big)~,
\end{aligned}
\end{equation}
where the sum runs over all distinct permutations $\sigma$ of $\pi_1,\ldots,\pi_{k-1}$ and $q-1$ is the number of parts $\pi_i$ which are 1. Moreover, we define $a_{\pi_1,\pi_2,\ldots,\pi_{k-1}}:=0$ unless $\pi_1\geq\pi_2\geq\ldots\geq\pi_{k-1}$. In particular, we have
\begin{equation}
a_{1,1} = -\frac{a_2}{N}
\end{equation}
at second order and
\begin{equation}
a_{3,1} = -\frac{4a_4}{N} \ ,\qquad a_{2,1,1} = \frac{6a_4}{N^2}-\frac{2a_{2,2}}{N} \ , \qquad a_{1,1,1,1} = -\frac{3a_4}{N^3} + \frac{a_{2,2}}{N^2}
\end{equation}
at fourth order. Hence, $S_{\rm MT}[\Phi]$ is determined up to fourth order in $\Phi$ by $a_2$, $a_4$ and $a_{2,2}$. To fix these, we need to turn to higher order differential operators.
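The relations above can be checked symbolically. The following sketch (in Python with sympy) implements the recursion, summing over distinct permutations of the parts, which is the reading required for the fourth order relations to come out as stated:

```python
from itertools import permutations
import sympy as sp

N, a2, a4, a22 = sp.symbols('N a2 a4 a22')

# coefficients fixed separately by higher-order differential operators
base = {(2,): a2, (4,): a4, (2, 2): a22}

def coeff(part):
    """Multitrace coefficient a_part; vanishes unless parts are non-increasing."""
    part = tuple(part)
    if any(part[i] < part[i + 1] for i in range(len(part) - 1)):
        return sp.Integer(0)
    if part in base:
        return base[part]
    assert part[-1] == 1                 # the recursion applies to partitions ending in 1
    rest = part[:-1]
    q = 1 + sum(1 for p in rest if p == 1)
    total = sp.Integer(0)
    for sigma in set(permutations(rest)):    # distinct permutations of the parts
        for j, pj in enumerate(sigma):
            raised = sigma[:j] + (pj + 1,) + sigma[j + 1:]
            total += (pj + 1) * coeff(raised)
    return -total / (q * N)

assert sp.simplify(coeff((1, 1)) + a2 / N) == 0
assert sp.simplify(coeff((3, 1)) + 4 * a4 / N) == 0
assert sp.simplify(coeff((2, 1, 1)) - 6 * a4 / N**2 + 2 * a22 / N) == 0
assert sp.simplify(coeff((1, 1, 1, 1)) + 3 * a4 / N**3 - a22 / N**2) == 0
```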
Unfortunately, none of the higher order differential operators yield functionals $\mathcal{O}_{\rm kin}[\Phi]$ which are $\sU(N)$-invariant in general. However, simply evaluating the result at $\Phi = 0$ gives the desired invariance. To fix $a_2$, we use $D := \der{\Phi_{ab}}\der{\Phi_{ba}}$ and we readily compute
\begin{equation}
\begin{aligned}
\left. \der{\Phi_{ab}}\der{\Phi_{ba}} \, \de^{-\beta S_{\rm kin}[\Phi]} \right|_{\Phi=0} &=8\pi\beta N^2(N-1)~,\\
\left. \der{\Phi_{ab}}\der{\Phi_{ba}} \, \de^{-\beta S_{\rm MT}[\Phi]} \right|_{\Phi=0} &= -2\beta (N^2-1) a_2~,
\end{aligned}
\end{equation}
which implies
\begin{equation}
a_2 = -\frac{4\pi N^2}{N+1}~.
\end{equation}
Determining $a_4$ and $a_{2,2}$ is slightly more involved. We define two differential operators $D_1$ and $D_2$,
\begin{equation}
D_1 := \der{\Phi_{ab}}\der{\Phi_{bc}} \der{\Phi_{cd}}\der{\Phi_{da}}\eand
D_2 := \der{\Phi_{ab}}\der{\Phi_{ba}} \der{\Phi_{cd}}\der{\Phi_{dc}}~,
\end{equation}
and we solve the pair of simultaneous equations
\begin{equation} \left.D_i \, \de^{-\beta S_{\rm kin}[\Phi]}\right|_{\Phi=0} \ =\ \left.D_i \, \de^{-\beta S_{\rm MT}[\Phi]}\right|_{\Phi=0} \ , \qquad i=1,2~,
\end{equation}
which yields
\begin{equation}
\begin{aligned}
a_4 &= \frac{8 \beta \pi^2 N (12 - 3 N^2 - N^3) }{ 3 (N+1)(N+2)(N+3)}~,\\
a_{2,2} &= \frac{ 8\beta \pi^2 (36 + 36 N - 3 N^2 - 11 N^3 - 2 N^4) }{ 3 (N+1)^2(N+2)(N+3)}~.
\end{aligned}
\end{equation}
The computations for the sixth order coefficients are straightforward but cumbersome and we simply quote the result of some computer algebra:
\begin{equation}
\begin{aligned}
a_6&=\scalebox{0.93}{$\frac{-64\beta^2\pi^3(N^6-15 N^5-5 N^4-123 N^3+388 N^2-30 N-120)}{3(N-5) (N-3) (N+1) (N+2) (N+3) (N+4) (N+5)}$}~,\\
a_{4,2}&=\scalebox{0.93}{$\frac{-64\beta^2\pi^3(2 N^{10}+4 N^{9}-103 N^{8}-81 N^7+1462 N^6+1610 N^5-8783 N^4-5865 N^3+10830 N^2+2700 N+3600)}{3(N-5) (N-3) (N-2)N (N+1)^2 (N+2) (N+3) (N+4) (N+5)}$}~,\\
a_{3,3}&=\scalebox{0.93}{$\frac{64 \beta^2\pi^3(2 N^{10}+2 N^{9}-98 N^{8}-129 N^7+1634 N^6+1384 N^5-9226 N^4-5905 N^3+13960 N^2+1800 N+2400)}{3 (N-5) (N-3) (N-2)N (N+1)^2 (N+2) (N+3) (N+4) (N+5)}$}~,\\
a_{2,2,2}&=\scalebox{0.93}{$\frac{64 \beta^2\pi^3(2 N^{9}+6 N^{8}-106 N^{7}-191 N^6+1653 N^5+3008 N^4-9364 N^3-15795 N^2+12135 N+19020)}{3 (N-5) (N-3) (N-2) (N+1)^3 (N+2) (N+3) (N+4) (N+5)}$}~.
\end{aligned}
\end{equation}
To keep our computations manageable, however, we limit ourselves to multitrace terms of quartic order in $\Phi$. It will turn out that this approximation is sufficient for all our purposes.
\subsection{Limit of large matrix size}
To compute the partition function of the multitrace action \eqref{eq:MT-action} together with the potential term, we turn to the large $N$ limit, in which we can later apply the saddle point approximation. Note that, as usual in quantum field theory, changing the number of degrees of freedom needs to be accompanied by a rescaling of the fields and the involved coupling constants. This leads to a multiscaling limit which we discuss now.
As $N$ goes to infinity, the discrete set of eigenvalues $\lambda_1,\ldots, \lambda_N$ of $\Phi$ goes over into a continuous function $\lambda(x)$ with $\lambda_i=\lambda(\frac{i}{N})$, $0< x\leq 1$. Sums over powers of eigenvalues, $\tr(\Phi^j)=\sum_i(\lambda_i)^j$, turn into integrals $N\int_0^1\dd x \, \lambda(x)^j$. Note that each trace yields a factor of $N$ when being recast as an integral.
The maximum total scaling of the terms in the action is $\sim N^2$, which is fixed by the exponentiated Vandermonde determinant, cf.\ \eqref{eq:partion_VDM}. The coefficients of the multitrace action scale as follows:
\begin{equation}
a_2 \sim N~,~~~a_4\sim \beta N~,~~~a_{2,2}\sim \beta~.
\end{equation}
We denote the scaling of the eigenvalues $\lambda$, and the couplings $\beta$, $r$, $g$ by $\rho_\lambda$, $\rho_\beta$, $\rho_r$ and $\rho_g$, respectively:
\begin{equation}
\beta =N^{\rho_\beta} \tilde{\beta}~, \qquad \lambda = N^{\rho_\lambda} \tilde{\lambda}~, \qquad r = N^{\rho_r} \tilde{r}~, \qquad g = N^{\rho_g} \tilde{g} ~.
\end{equation}
With this notation, we obtain the following inequalities:
\begin{equation}
\begin{aligned}
\rho_\beta+2+2\rho_\lambda&\leq 2~,~~~&2\rho_\beta+2+4\rho_\lambda&\leq 2~,~~~&2\rho_\beta+2+4\rho_\lambda&\leq 2~,\\
\rho_\beta+\rho_r+2\rho_\lambda&\leq 2~,~~~&\rho_\beta+\rho_g+4\rho_\lambda&\leq 2~.
\end{aligned}
\end{equation}
Each inequality corresponds to a summand in the action. For each summand, we have one power of $\beta$ outside of the action and further powers from the coefficients, powers of $N$ from the various traces and the coefficients as well as scalings of further couplings and the eigenvalues. We would like to choose a scaling that saturates as many inequalities as possible. The first three inequalities are saturated by $\rho_\beta=-2\rho_\lambda$, and saturating the last two then yields $\rho_r=2$ and $\rho_g=2-2\rho_\lambda$. A convenient choice is therefore
\begin{equation}
\rho_\lambda=\rho_\beta=0\eand \rho_r=\rho_g=2~,
\end{equation}
and in the following we write $g$ for the rescaled quartic coupling $\tilde g$.
We can now write our model in terms of rescaled quantities. Instead of integrating over $x$, we integrate over $\lambda$:
\begin{equation}
\int_0^1\dd x \rightarrow \int_\mathcal{I} \dd \lambda\, \rho(\lambda)~,
\end{equation}
where $\rho(\lambda):=\frac{\dd x}{\dd \lambda}$ is the eigenvalue density and $\mathcal{I}$ is its support. Introducing the moments
\begin{equation}
c_i:=\int_{\mathcal{I}}\,\dd \lambda\,\rho(\lambda)\,\lambda^i~,
\end{equation}
we arrive at the action
\begin{equation}\label{eq:large-N-action}
\begin{aligned}
{\beta} {S}={\beta}\Big(&\tilde{r} c_2+ g c_4-4\pi( c_2- c^2_1)-\tfrac{8}{3} \beta \pi^2\big( c_4-4 c_3 c_1+6 c_2 c^2_1-3 c^4_1+2( c_2- c^2_1)^2\big)\Big)+\\
&-\int_\mathcal{I}\,\dd \lambda\,\dd \mu\,\rho(\lambda)\log(|\lambda-\mu|)\rho(\mu)~.
\end{aligned}
\end{equation}
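As a cross-check, the large $N$ limits of the coefficients computed above reproduce the numerical factors in \eqref{eq:large-N-action}: $a_2/N\to-4\pi$, $a_4/N\to-\tfrac83\beta\pi^2$ and $a_{2,2}\to-\tfrac{16}3\beta\pi^2$. A short symbolic sketch:

```python
import sympy as sp

N, beta = sp.symbols('N beta', positive=True)
pi = sp.pi

a2 = -4*pi*N**2 / (N + 1)
a4 = 8*beta*pi**2*N*(12 - 3*N**2 - N**3) / (3*(N+1)*(N+2)*(N+3))
a22 = 8*beta*pi**2*(36 + 36*N - 3*N**2 - 11*N**3 - 2*N**4) / (3*(N+1)**2*(N+2)*(N+3))

# After normalizing the exponent by N^2, each trace contributes a factor of N,
# so the combinations surviving the large N limit are a2/N, a4/N and a22.
assert sp.simplify(sp.limit(a2 / N, N, sp.oo) + 4*pi) == 0
assert sp.simplify(sp.limit(a4 / N, N, sp.oo) + sp.Rational(8, 3)*beta*pi**2) == 0
assert sp.simplify(sp.limit(a22, N, sp.oo) + sp.Rational(16, 3)*beta*pi**2) == 0
```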
\section{The phase diagram}
We now come to the computation of the phase diagram of scalar field theory on the fuzzy unit disc in the large $N$ limit. We also compare our result to the numerical findings of \cite{Lizzi:2012xy}.
\subsection{Saddle point approximation}
To apply the saddle point approximation to the action \eqref{eq:large-N-action}, we rewrite it as
\begin{equation}\label{eq:large-N-action-V}
S[\rho(\lambda)] = \int_\mathcal{I}\,\dd \lambda\,\rho(\lambda) V(\lambda)-\int_\mathcal{I}\,\dd \lambda\,\dd \mu\,\rho(\lambda)\log(|\lambda-\mu|)\rho(\mu)+\xi \left( \int_\mathcal{I}\,\dd \lambda\,\rho(\lambda)-1\right)~,
\end{equation}
where the Lagrange multiplier $\xi$ was included to fix the normalization of the eigenvalue density. The potential reads as
\begin{equation}
V(\lambda)=\beta \left(\alpha_4 \lambda ^4+\alpha_{31} c_1 \lambda ^3+\lambda ^2 \left(\alpha_2+\alpha_{211} c_1^2+\alpha_{22} c_2\right)+\lambda \left(\alpha_{1111} c_1^3+\alpha_{11} c_1\right)\right)~.
\end{equation}
This potential is in fact the general potential for the rewritten action of a quantum scalar field theory on an arbitrary fuzzy space\footnote{assuming that the Laplace operator satisfies $\Delta \unit=0$}, truncated at fourth order in $\Phi$. We recover our action \eqref{eq:large-N-action} with the following choice of coefficients:
\begin{equation}
\begin{aligned}
\alpha_{11}&=4 \pi~,~~~&\alpha_{1111}&=\frac{8 \pi ^2 \beta }{3}~~~,~&\alpha_2&=\tilde{r}-4 \pi~,~~~&\alpha_{211}&=-\frac{16 \pi ^2 \beta }{3}~,\\
\alpha_{22}&=-\frac{16 \pi ^2 \beta }{3}~,~~~&\alpha_{31}&=\frac{32 \pi ^2 \beta }{3}~,~~~&\alpha_4&=g-\frac{8 \pi ^2 \beta }{3}~.
\end{aligned}
\end{equation}
The saddle point equation is obtained by varying the action \eqref{eq:large-N-action-V} with respect to $\rho(\lambda)$:
\begin{equation}\label{eq:eom}
\tilde V(\lambda)-2\int_\mathcal{I}\,\dd \mu\,\rho(\mu)\log(|\lambda-\mu|)+\xi = 0~,
\end{equation}
where
\begin{equation}
\begin{aligned}
\tilde V(\lambda)=\beta\Big(&\alpha_4\lambda^4+\alpha_{31}(c_1\lambda^3+c_3\lambda)+\alpha_2\lambda^2+\alpha_{211}c_1^2\lambda^2+\\
&+2\alpha_{211}c_2c_1\lambda+2\alpha_{22}c_2\lambda^2+4\alpha_{1111}c_1^3\lambda+2\alpha_{11}c_1\lambda\Big)~.
\end{aligned}
\end{equation}
The key object in finding the solution to \eqref{eq:eom} is the resolvent
\begin{equation}
W(\lambda):=\int \dd \mu~\frac{\rho(\mu)}{\lambda-\mu}~,
\end{equation}
which is an analytic function on $\FC\backslash \mathcal{I}$. A detailed review of the application of the resolvent is given e.g.\ in \cite{Brezin:1977sv,DiFrancesco:1993nw}, but for our purposes, the following observations are sufficient. First, note that we expect essentially two cases: $\mathcal{I}$ can either be given by a single interval or by the disjoint union of two intervals. We will refer to these cases as the single cut and the double cut solutions. The resolvent's singular part $\omega(\lambda)$ contains two roots $\delta_i$ in the former case and four roots in the latter case. It necessarily satisfies
\begin{equation}\label{eq:polynomial_matching}
\omega^2(\lambda)=M^2(\lambda)\prod_i(\lambda-\delta_i)=\tilde V'{}^2(\lambda)-R(\lambda)~,
\end{equation}
where $M(\lambda)$ is some polynomial in $\lambda$ and $R(\lambda)$ is a polynomial of one degree less than $\tilde V'(\lambda)$. The jump over the cut $\mathcal{I}$ yields the eigenvalue density according to
\begin{equation}
\rho(\lambda)=-\frac{1}{2\pi\di}\big(W(\lambda+\di \eps)-W(\lambda-\di \eps)\big)~,
\end{equation}
which implies that
\begin{equation}
\rho(\lambda)=\frac{1}{2\pi}\,|M(\lambda)|\,\sqrt{(\delta_2-\lambda)(\lambda-\delta_1)}~,~~~\mathcal{I}=[\delta_1,\delta_2]
\end{equation}
for the single cut solution or, for the double cut solution,
\begin{equation}
\rho(\lambda)=\frac{1}{2\pi}\,|M(\lambda)|\,\sqrt{(\delta_4-\lambda)(\lambda-\delta_3)(\delta_2-\lambda)(\delta_1-\lambda)}~,~~~\mathcal{I}=[\delta_1,\delta_2]\cup[\delta_3,\delta_4]~,
\end{equation}
where we assumed $\delta_4>\delta_3>0>\delta_2>\delta_1$.
\subsection{Solutions}
As explained later, we will be interested in two types of solutions. The eigenvalue densities of these solutions have support on a single interval and the disjoint union of two intervals, respectively. The results of related computations can be found in \cite{Shimamune:1981qf,Shishanin:2007zz} as well as \cite{O'Connor:2007ea,Saemann:2010bw}.
We will start with the former case and put $\mathcal{I}=[s-d,s+d]$. Equation \eqref{eq:polynomial_matching} fixes the polynomial
\begin{equation}
M(\lambda)=m_0+m_1\lambda+m_2\lambda^2
\end{equation}
as well as $d$. We then use the normalization of $\rho(\lambda)$, $c_0=1$, and the self-consistency conditions for $c_1$, $c_2$ and $c_3$ to fix the remaining unknowns. We obtain the eigenvalue density
\begin{equation}
\rho(\lambda)=\frac{1}{2\pi}\,|m_0+m_1\lambda+m_2\lambda^2|\,\sqrt{d^2-(s-\lambda )^2}
\end{equation}
with
\begin{equation}
\begin{aligned}
&m_0=\beta(2 \alpha_{211} c_1^2+3 s
\alpha_{31} c_1+2 \alpha_2+2(d^2+2 s^2)
\alpha_4+4 c_2 \alpha_{22})~,\\
&\hspace{2.2cm}m_1=\beta(4 s\alpha_4+3c_1\alpha_{31})~,~~~m_2=4\beta\alpha_4~.
\end{aligned}
\end{equation}
Additionally, we obtain from \eqref{eq:polynomial_matching} the equation
\begin{subequations}\label{eq:constraints_asym_single_cut}
\begin{equation}
\begin{aligned}
&8 \alpha_4 s^3+6 c_1 \alpha_{31} s^2+\left(12 \alpha_4 d^2+4 \alpha_2+8 c_2
\alpha_{22}+4 c_1^2 \alpha_{211}\right) s+4 c_1 \alpha_{11}+\\
&\hspace{4cm}+3 d^2 c_1 \alpha
_{31}+2 c_3 \alpha_{31}+4 c_1 c_2 \alpha_{211}+8 c_1^3 \alpha_{1111}=0~.
\end{aligned}
\end{equation}
The normalization condition $c_0=1$ implies
\begin{equation}
\tfrac{1}{4} \beta d^2 \left(2 \alpha_2+2 \alpha_{211} c_1^2+4 \alpha_{22} c_2+6 \alpha_{31}
c_1 s+3 \alpha_4 \left(d^2+4 s^2\right)\right)=1~,
\end{equation}
and self-consistency conditions for $c_1$, $c_2$ and $c_3$ read as
\begin{equation}
\begin{aligned}
8 \beta d^2 s \left(\alpha_2+2 \alpha_{22} c_2+3 \alpha_4 \left(d^2+2 s^2\right)\right)+\hspace{4cm}&\\
+c_1 \left(\beta d^2 \left(8 \alpha_{211} c_1 s+3 \alpha_{31} \left(d^2+8
s^2\right)\right)-16\right)&=0~,\\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
2 c_2 \left(\alpha_{22} \beta d^2 \left(d^2+4 s^2\right)-4\right)+\beta d^2 \Big(\alpha
_{211} c_1^2 \left(d^2+4 s^2\right)+\hspace{3cm}\\
+6 \alpha_{31} c_1 s \left(d^2+2 s^2\right)+\alpha_2\left(d^2+4 s^2\right)+2 \alpha_4 \left(d^4+12 d^2 s^2+12 s^4\right)\Big)&=0~,\\
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
32 c_3+\beta d^2 \Big(-4 \alpha_{211} c_1^2 \left(3 d^2 s+4 s^3\right)-4 \alpha_2 \left(3 d^2
s+4 s^3\right)-8 \alpha_{22} c_2 s \left(3
d^2+4 s^2\right)+\\
-3 \alpha_{31} c_1 \left(d^4+18 d^2 s^2+16 s^4\right)-12 \alpha_4 s \left(3 d^4+14 d^2 s^2+8 s^4\right)\Big)&=0~.
\end{aligned}
\end{equation}
\end{subequations}
The combined solution of these equations leads to rather involved expressions which do not provide any further insight. The special case $s=0$, however, does yield manageable expressions and it will be of interest to us later on. Here, the eigenvalue distribution is symmetric and correspondingly, $c_1=c_3=0$. This also implies that $\alpha_{211}$, $\alpha_{31}$ and $\alpha_{1111}$ can be put to zero. We obtain the eigenvalue density
\begin{equation}
\rho(\lambda)=\frac{\sqrt{d^2-\lambda ^2}~\left|4-d^2 \beta \left(d^2-4 \lambda ^2\right)
\alpha_4\right|}{2 d^2 \pi }~,
\end{equation}
where the boundary $d$ is determined by the equation
\begin{equation}
\beta ^2 \alpha_4 \alpha_{22} d^8+\left(12 \beta \alpha_4+4 \beta \alpha_{22}\right) d^4+8 \beta \alpha_2 d^2-16=0~.
\end{equation}
Note that the above equations can be used to reproduce the results of \cite{Saemann:2010bw} with the appropriate choices of $\alpha_2$, $\alpha_{22}$ and $\alpha_4$.
Let us now turn to the symmetric double cut solution with support on the interval $\mathcal{I}=[-\sqrt{s+d},-\sqrt{s-d}]\cup[\sqrt{s-d},\sqrt{s+d}]$. Again, this is a solution with a symmetric eigenvalue density, and we thus put $c_1=c_3=\alpha_{211}=\alpha_{31}=\alpha_{1111}=0$. Following the same steps as above, we obtain
\begin{equation}
\rho(\lambda) =\frac{2 \alpha_4 \beta |\lambda| \sqrt{d^2-\left(s-\lambda ^2\right)^2}}{\pi }~.
\end{equation}
The normalization condition of the eigenvalue density and the self consistency for $c_2$ lead to
\begin{equation}
d^2\beta \alpha_4 =1~,~~~d^2\beta \alpha_4 s = c_2~,
\end{equation}
which, together with \eqref{eq:polynomial_matching}, yield
\begin{equation}
c_2 = s~,~~~d=\frac{1}{\sqrt{\beta \alpha_4}}~,~~~s= -\frac{\alpha_2}{2(\alpha_4+\alpha_{22})}~.
\end{equation}
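The normalization and the self-consistency condition for $c_2$ can be confirmed by direct quadrature. The following is a numerical sketch with arbitrarily chosen sample couplings (picked such that $s-d>0$, so that the two cuts are disjoint):

```python
import numpy as np
from scipy.integrate import quad

beta, alpha4, alpha22, alpha2 = 1.0, 2.0, 0.5, -6.0   # sample couplings
s = -alpha2 / (2 * (alpha4 + alpha22))                # here s = 1.2
d = 1 / np.sqrt(beta * alpha4)                        # here d ~ 0.707, so s - d > 0

def rho(x):
    # eigenvalue density of the symmetric double cut solution
    return 2 * alpha4 * beta * abs(x) * np.sqrt(max(d**2 - (s - x**2)**2, 0.0)) / np.pi

lo, hi = np.sqrt(s - d), np.sqrt(s + d)
c0 = 2 * quad(rho, lo, hi)[0]                         # both cuts contribute equally
c2 = 2 * quad(lambda x: rho(x) * x**2, lo, hi)[0]

assert np.isclose(c0, 1.0, atol=1e-6)
assert np.isclose(c2, s, atol=1e-6)
```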
\subsection{Phase diagram}
Comparing with previous studies of scalar field theories on fuzzy spaces \cite{O'Connor:2007ea,Saemann:2010bw} as well as the numerical results of \cite{GarciaFlores:2005xc,Panero:2006bx,GarciaFlores:2009hf,Lizzi:2012xy}, we expect three phases: a symmetric single cut solution, in which the eigenvalue density has support on a single symmetric interval $\mathcal{I}=[-d,d]$, a symmetric double cut solution, in which the eigenvalue density has support on two symmetric intervals $\mathcal{I}=[-\sqrt{s+d},-\sqrt{s-d}]\cup[\sqrt{s-d},\sqrt{s+d}]$ and an asymmetric single cut solution, in which the eigenvalue density has support on the interval $\mathcal{I}=[s-d,s+d]$. These phases are also called {\em disorder phase}, {\em non-uniform order phase} and {\em uniform order phase}, respectively. There should be a third order phase transition between the first two phases and a second order transition between the last two phases. The former is actually the usual phase transition in a hermitian matrix model with quartic even potential, while the latter is the analogue of the usual phase transition in two-dimensional scalar quantum field theory, cf.\ \cite{Glimm:1975tw}.
We start by considering the existence boundaries of the various phases. We are interested in the parameter range in which a phase transition can occur, i.e.\ essentially positive $g$ and negative $r$. Note, however, that for the potential to be bounded from below, we need a positive coefficient of $\lambda^4$ in the potential \eqref{eq:large-N-action-V}. This restricts our parameter space to
\begin{equation}
\tilde r\leq 0 \eand g>\frac{8\pi^2\beta}{3}~.
\end{equation}
At the existence boundary of the symmetric single cut solution, the eigenvalue density develops a third root at $\lambda=0$, signaling the transition to the symmetric double cut regime. Putting $\rho(0)=0$, we can solve for $d^2$, which, together with the consistency condition $c_0=1$, yields an expression for $c_2$; this in turn gives the existence boundary
\begin{equation}\label{eq:eb_sym_single_cut}
\alpha_2=-\frac{2(\alpha_4+\alpha_{22})}{\sqrt{\beta \alpha_4}}~.
\end{equation}
In the case of our model \eqref{eq:large-N-action}, the existence boundary is thus at
\begin{equation}
\tilde r=\frac{2\sqrt{3} \left(8 \pi ^2 \beta-g \right)}{\sqrt{\beta
\left(3 g-8 \pi ^2 \beta \right)}}+4 \pi~.
\end{equation}
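Note that $\rho(0)=0$ in the symmetric single cut density amounts to $4-d^4\beta\alpha_4=0$, i.e.\ $d^2=2/\sqrt{\beta\alpha_4}$, and one can check numerically that precisely this value solves the equation determining $d$ once $\alpha_2$ lies on the boundary \eqref{eq:eb_sym_single_cut}. A sketch with arbitrarily chosen sample couplings:

```python
import numpy as np

beta, alpha4, alpha22 = 1.0, 2.0, -0.5                      # sample couplings
alpha2 = -2 * (alpha4 + alpha22) / np.sqrt(beta * alpha4)   # existence boundary

# the equation determining d, written as a polynomial in u = d^2
coeffs = [beta**2 * alpha4 * alpha22, 0.0,
          12 * beta * alpha4 + 4 * beta * alpha22,
          8 * beta * alpha2, -16.0]
roots = np.roots(coeffs)

# rho(0) = 0 requires 4 - d^4 beta alpha4 = 0, i.e. u = 2/sqrt(beta alpha4)
u_star = 2 / np.sqrt(beta * alpha4)
assert any(abs(rt.imag) < 1e-9 and np.isclose(rt.real, u_star) for rt in roots)
```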
Next, we turn to the existence boundary of the symmetric double cut solution supported on the interval $\mathcal{I}=[-\sqrt{s+d},-\sqrt{s-d}]\cup[\sqrt{s-d},\sqrt{s+d}]$. Here, one readily finds that the existence boundary agrees with that of the symmetric single cut, \eqref{eq:eb_sym_single_cut}.
Finally, the asymmetric single cut solution only makes sense as long as $s-d>0$ and $\rho(\lambda)\geq 0$. This leads to complicated algebraic relations, which cannot be brought into a nice analytical expression. Instead, we simply check the validity of our solutions manually, whenever required.
As shown using an approximation in \cite{Saemann:2010bw}, the asymmetric single cut solution as well as the symmetric double cut solution exist on overlapping regions of the parameter space. To determine the preferred phase, we have to compare the free energy of both solutions since a physical system will adopt the phase with the lowest possible free energy. The latter is defined as $F:=-\log(\mathcal{Z})$, where $\mathcal{Z}$ is the partition function of our model. In the saddle point approximation, we correspondingly have $F=\beta S[\rho(\lambda)]-\beta S_{\rm free}[\rho_{\rm free}(\lambda)]$, where $S_{\rm free}$ is the free action, truncated at quadratic order in $\Phi$ and $\rho_{\rm free}(\lambda)$ is the corresponding eigenvalue density. By subtracting the free part, we only let the connected diagrams contribute to the free energy. Note that
\begin{equation}
\beta S[\rho(\lambda)]=\int_\mathcal{I} \dd \lambda\,\rho(\lambda)\big(V(\lambda)-\tfrac12 \tilde V(\lambda)\big)-\tfrac12 \xi~,
\end{equation}
as follows from \eqref{eq:large-N-action-V} and \eqref{eq:eom}. The Lagrange multiplier $\xi$ is determined by solving \eqref{eq:eom} at a suitable point $\lambda\in \mathcal{I}$. For example, in the case of the symmetric single cut solution, we can choose $\lambda=0$ to obtain $\xi=2\int_\mathcal{I}\dd \mu\,\rho(\mu)\log|\mu|$ as well as the following expression for the free energy:
\begin{equation}
F=\tfrac{3}{128} \beta ^2 \alpha_4^2 d^8+\tfrac{1}{32} \beta ^2 \alpha_2 \alpha_4
d^6-\tfrac{1}{8} \beta \alpha_4 d^4+\tfrac{1}{8} \beta \alpha_2
d^2-\tfrac{1}{2}\log \left(-\tfrac{1}{2} d^2 \beta \alpha
_2\right)-\tfrac14
\end{equation}
or
\begin{equation}
\begin{aligned}
F=&\tfrac{1}{384} \beta ^2 \left(3 g-8 \pi ^2 \beta \right)^2 d^8+\tfrac{1}{96} (\tilde{r}-4
\pi ) \beta ^2 \left(3 g-8 \pi ^2 \beta \right) d^6-\tfrac{1}{24} \beta \left(3
g-8 \pi ^2 \beta \right) d^4+\\
&+\tfrac{1}{8} (\tilde{r}-4 \pi ) \beta d^2-\tfrac{1}{2}\log \left(-\tfrac{1}{2} d^2 (\tilde{r}-4 \pi ) \beta \right)-\tfrac14
\end{aligned}
\end{equation}
in the case of our model \eqref{eq:large-N-action}.
In the cases of the symmetric double cut and asymmetric single cut solutions, we follow the choices of \cite{Saemann:2010bw} and determine $\xi$ at $\lambda=\pm\sqrt{s}$ and $\lambda=s$, respectively. For the symmetric double cut, we have
\begin{equation}
\begin{aligned}
\beta S[\rho(\lambda)]=\int_\mathcal{I}\dd{\lambda}\,\rho(\lambda)&\left(V(\lambda)-\tfrac{1}{2}\tilde V(\lambda)-\tfrac12\log|\lambda-\sqrt{s}|-\tfrac12\log|\lambda+\sqrt{s}|\right)+\\
&\hspace{1cm}+\tfrac{1}{4}\tilde V(\sqrt{s})+\tfrac{1}{4}\tilde V(-\sqrt{s})~,
\end{aligned}
\end{equation}
which evaluates to
\begin{equation}
F=-\frac{\beta \alpha_2^2}{4 \left(\alpha_4+\alpha_{22}\right)}+\tfrac{1}{4} \log
\left(\frac{\alpha_4}{\beta \alpha_2^2}\right)-\tfrac{3}{8}~,
\end{equation}
and for our model \eqref{eq:large-N-action} reads as
\begin{equation}
F=-\frac{\beta (\tilde{r}-4 \pi )^2}{4 \left(g-8 \pi ^2 \beta \right)}+\tfrac{1}{4} \log
\left(\frac{3 g-8 \pi ^2 \beta }{3 (\tilde{r}-4 \pi )^2 \beta }\right)-\tfrac{3}{8}~.
\end{equation}
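That the second expression follows from the first under the coefficient identifications given earlier can be verified symbolically; a short sketch comparing the rational parts and the arguments of the logarithms separately:

```python
import sympy as sp

rt, g, beta = sp.symbols('rtilde g beta')
pi = sp.pi

alpha2 = rt - 4*pi
alpha4 = g - sp.Rational(8, 3)*pi**2*beta
alpha22 = -sp.Rational(16, 3)*pi**2*beta

# rational part and logarithm argument of the general expression
rat_general = -beta*alpha2**2 / (4*(alpha4 + alpha22))
arg_general = alpha4 / (beta*alpha2**2)

# the corresponding pieces of the expression for our model
rat_model = -beta*(rt - 4*pi)**2 / (4*(g - 8*pi**2*beta))
arg_model = (3*g - 8*pi**2*beta) / (3*(rt - 4*pi)**2*beta)

assert sp.simplify(rat_general - rat_model) == 0
assert sp.simplify(arg_general - arg_model) == 0
```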
For the asymmetric single cut, we have
\begin{equation}
\beta S[\rho(\lambda)]=\int_\mathcal{I}\dd{\lambda}\,\rho(\lambda)\left(V(\lambda)-\tfrac{1}{2}\tilde V(\lambda)-\log|\lambda-s|\right)+\tfrac{1}{2}\tilde V(s)~,
\end{equation}
which leads to the following lengthy expression:
\begin{equation}
\begin{aligned}
F=&\tfrac{3}{128} \beta ^2 \alpha_4^2 d^8+\tfrac{3}{16} s^2 \beta ^2 \alpha_4^2
d^6-\frac{3 s^3 \beta ^2 \alpha_4^2 d^6}{8 c_1}+\tfrac{1}{32} \beta ^2 \alpha_2 \alpha_4 d^6+\tfrac{1}{32} \beta ^2 c_1^2 \alpha_4 \alpha_{211}
d^6+\\
&-\tfrac{1}{8} \beta \alpha_4 d^4+\tfrac{1}{8} \beta \alpha_2
d^2-\tfrac{3}{2} s^2 \beta \alpha_4 d^2+s \beta c_1 \alpha_4 d^2-\frac{s^3
\beta \alpha_4 d^2}{c_1}+\tfrac{1}{8} \beta c_1^2 \alpha_{211} d^2+\\
&+\frac{3\beta c_1^2 \alpha_4 \alpha_{211} d^2}{4 \alpha_{22}}-\frac{3 s \beta c_1
\alpha_4 \alpha_{211} d^2}{4 \alpha_{22}}+\frac{\beta c_1^4 \alpha_{211}^2}{2 \alpha_{22}}-\frac{s \beta c_1^3 \alpha_{211}^2}{2 \alpha_{22}}-\tfrac{1}{2} \log \left(-\tfrac{1}{2} d \beta \alpha_2\right)+\\
&-\tfrac{1}{2} s^2 \beta \alpha_2+s \beta c_1 \alpha_2-6 s^4 \beta
\alpha_4+2 s^3 \beta c_1 \alpha_4+\frac{4 s^5 \beta \alpha_4}{c_1}+s \beta
c_1 \alpha_{11}+s \beta c_1^3 \alpha_{211}+\\
&-\tfrac{1}{2} s^2 \beta c_1^2
\alpha_{211}+\frac{\beta c_1^2 \alpha_2 \alpha_{211}}{2 \alpha_{22}}-\frac{s \beta c_1 \alpha_2 \alpha_{211}}{2 \alpha_{22}}-\frac{3 s^2
\beta c_1^2 \alpha_4 \alpha_{211}}{\alpha_{22}}+\frac{3 s^3 \beta c_1 \alpha_4 \alpha_{211}}{\alpha_{22}}+\\
&-\beta c_1^4 \alpha_{1111}+2 s \beta
c_1^3 \alpha_{1111}-\tfrac{1}{4}+\frac{s^2}{3 d^2}+\frac{4 s c_1}{3d^2}-\frac{c_1^2 \alpha_{211}}{\alpha_{22} d^2}+\frac{s c_1 \alpha_{211}}{\alpha_{22} d^2}-\frac{2 s^3}{3 c_1 d^2}+\\
&-\frac{8 s^4}{d^4}+\frac{8 s^3 c_1}{3 d^4}+\frac{8 s c_1^3 \alpha_{211}}{\alpha_{22} d^4}-\frac{16 s^2 c_1^2 \alpha_{211}}{\alpha_{22} d^4}+\frac{8 s^3 c_1 \alpha_{211}}{\alpha_{22}d^4}+\frac{16 s^5}{3 c_1 d^4}~,
\end{aligned}
\end{equation}
where the variables are subject to the constraints \eqref{eq:constraints_asym_single_cut}.
As consistency checks of our results, we readily verify that the free energies for the symmetric single cut and the symmetric double cut solutions agree at the common existence boundary, where we expect the usual third order phase transition of hermitian matrix models with quartic order potential. Furthermore, the free energy for the asymmetric single cut reduces to that of the symmetric single cut if $s=0$. Finally, the free energies of both the symmetric single cut and the symmetric double cut solutions reduce to those of \cite{Brezin:1977sv,Shimamune:1981qf,Shishanin:2007zz} with the correct choice of parameters.
The phase transition between the asymmetric single cut and the symmetric double cut is now found by equating the corresponding free energies. The resulting equations are again too involved to be reformulated in analytical form, and we find the corresponding solution using simple numerical methods.
Altogether, we obtain the phase diagram given in figure \ref{fig:1}. As discussed above, low values of $g$ are forbidden, which is an artifact of our approximation. Also, for very low values of both $g$ and $|r|$, our approximation of the kinetic term becomes unreliable. In the remaining parameter space, we have three distinct phases in which the symmetric single cut, the symmetric double cut and the asymmetric single cut solutions are appropriate. In figure \ref{fig:1}, these phases are labeled as I, II and III. The phase transition between I and II is the analogue of the usual third order phase transition in hermitian matrix models with quartic even potential. The phase transition between II and III is the second order phase transition also found in real scalar field theory with quartic even potential on the plane. Note that the phase transition from II to III is essentially given by a straight line $g\approx-4.84\,r$, and computing the phase boundary for high values of $-r$ confirms this feature of our plot in figure \ref{fig:1}.
\begin{figure}[h]
\hspace{2cm}
\begin{picture}(240,195)
\includegraphics[scale=1]{plot1.eps}
\put(0,190.0){\makebox(0,0)[l]{$1000$}}
\put(0,154.0){\makebox(0,0)[l]{$800$}}
\put(0,118.0){\makebox(0,0)[l]{$600$}}
\put(0,83.0){\makebox(0,0)[l]{$400$}}
\put(0,48.0){\makebox(0,0)[l]{$200$}}
\put(-308,5){\makebox(0,0)[l]{$-200$}}
\put(-236,5){\makebox(0,0)[l]{$-150$}}
\put(-164,5){\makebox(0,0)[l]{$-100$}}
\put(-90,5){\makebox(0,0)[l]{$-50$}}
\put(-12.0,190.0){\makebox(0,0)[c]{$g$}}
\put(-300.0,20.0){\makebox(0,0)[c]{$r$}}
\put(-17.0,150.0){\makebox(0,0)[c]{I}}
\put(-130.0,150.0){\makebox(0,0)[c]{II}}
\put(-210.0,70.0){\makebox(0,0)[c]{III}}
\put(-286.0,28.0){\vector(3,-1){36}}
\put(-300.0,35.0){\makebox(0,0)[c]{forbidden}}
\end{picture}
\vspace*{-5pt}
\caption{The phase diagram of scalar field theory on the fuzzy disc for $\beta=1$. The phases I, II and III correspond to the symmetric single cut, symmetric double cut and asymmetric single cut solutions. The approximately straight line has asymptotic slope $-\frac{8\pi}{3\sqrt{3}}$.}\label{fig:1}
\end{figure}
Using our formulas, we can now compute the slope of the straight line analytically. Our approximation of the kinetic term by multitrace expressions of quartic order in $\Phi$ becomes better for large values of $g$ and $r$, and we can restrict ourselves to these. As a second input, we find from numerically solving our equations that at the phase boundary where the free energies of phase II and III agree, $d$ tends to a fixed value $d\approx \tfrac12$. Both assumptions allow us to linearize our equations, which gives us the expressions
\begin{equation}\label{eq:analytical_slope}
d=\frac{3^{\tfrac14}}{\sqrt{2\pi}}\approx 0.5250\eand g=-\frac{8\pi}{3\sqrt{3}}\,r\approx -4.84\,r~.
\end{equation}
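As a quick numerical cross-check (our own sketch, not part of the derivation), the two closed-form constants in \eqref{eq:analytical_slope} can be evaluated directly:

```python
import math

# Evaluate the closed-form constants of the phase boundary:
# d = 3^(1/4)/sqrt(2*pi) and the slope g/r = -8*pi/(3*sqrt(3)).
d = 3 ** 0.25 / math.sqrt(2 * math.pi)
slope = -8 * math.pi / (3 * math.sqrt(3))
print(round(d, 4), round(slope, 2))  # 0.525 -4.84
```

Both values agree with the quoted approximations $d\approx0.525$ and $g\approx-4.84\,r$.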
\subsection{Comparison to numerical results}
Having computed the phase diagram, let us now compare our results to those of \cite{Lizzi:2012xy}. There it was found that neither the coupling $g$ nor the field are renormalized, while $r$ is renormalized as $r=\tilde{r}N^{\tfrac13}$. While we also find that only $r$ requires renormalization, our factor is different, $r=\tilde{r}N^2$.
The three phases we obtained are also found in the numerical analysis and as stated before, the symmetric single cut, the symmetric double cut and the asymmetric single cut solutions correspond to the disorder phase, the non-uniform order phase and the uniform order phase, respectively. We recover again the fact that the phase transition between the non-uniform order and the uniform order phases is given by a straight line. However, the best fit of \cite{Lizzi:2012xy} suggested a relation $g\approx -0.51\,r$, while we found $g\approx -4.84\, r$. We are not sure about the source of this discrepancy. It might be due to our different renormalization of $r$ together with errors introduced by our approximation of the kinetic term by expressions of fourth order in $\Phi$ or numerical errors for low values of $N$ in \cite{Lizzi:2012xy}. The fact that we recover a perfect straight line for larger values of $|r|$ suggests that the approximation of the kinetic term is indeed a good one in that parameter range. A last possible cause for the quantitative difference of our results to those of \cite{Lizzi:2012xy} might be the different choices of Laplace operators.
Recall that scalar field theory on $\FR^2$ with quartic even potential exhibits a phase transition with a phase boundary given by a straight line, cf.\ the numerical results of \cite{Loinaz:1997az}. Scalar field theories on two-dimensional fuzzy spaces necessarily inherit this phase transition, and the straight line was found in the numerical studies of \cite{GarciaFlores:2005xc,Panero:2006bx,GarciaFlores:2009hf,Lizzi:2012xy} on the fuzzy sphere and the fuzzy disc. We reproduced this line by analytical methods.
\section{Conclusions}
In this letter, we computed the phase diagram of quantum scalar field theory on the fuzzy disc using bootstrapping and matrix model techniques. These methods are analytical but employ a perturbative expansion of the kinetic term of the scalar field theory, analogous to a high-temperature expansion. The result is a hermitian matrix model with an action containing multitrace terms.
We had initially hoped that scalar field theory on the fuzzy disc, which is possibly the simplest fuzzy field theory, would exhibit some special features allowing a bootstrapping to all orders in the field. Unfortunately, this turned out not to be the case.
We proceeded to compute the partition function of the matrix model in the limit of large matrix size using a saddle point approximation. We also derived expressions for the free energy in various phases. All our results were given in a very general form, applying to arbitrary multitrace models with actions of quartic order in the matrix. In particular, the free energy for the various phases of quantum scalar field theories on any fuzzy space can readily be written down using our equations.
We computed the shape of the phase diagram for scalar field theory on the fuzzy disc with quartic even potential and derived the relevant phase boundaries analytically. We find three phases: In the first phase, the disorder phase, the potential has essentially the shape of a single well and the expectation value of the magnitude of the field vanishes. In the second phase, the non-uniform order phase, the potential is a double well and the expectation value of the field distributes symmetrically at the bottom of both wells. In the third phase, the uniform order phase, the potential is a deep double well and the vacuum expectation value of the field equals one of the minima of the well. Our results agree qualitatively with those of \cite{Lizzi:2012xy}, but our quantitative results differ. There may be various reasons for this discrepancy, amongst which are the different renormalization of the couplings as well as errors in the numerical approximations and the different choices of Laplace operators.
The most interesting result of our computations is that we analytically reproduced the linear phase boundary between the non-uniform and the uniform order phases. This is a feature of both ordinary two-dimensional scalar field theory as well as scalar field theory on two-dimensional noncommutative spaces. It was indeed the numerical finding of this linear phase boundary for scalar field theory on the fuzzy sphere in \cite{GarciaFlores:2005xc} which triggered the research of \cite{O'Connor:2007ea}. The original aim there was to find the slope of this phase boundary analytically. In this letter, we finally achieved this goal, and for the case of the fuzzy disc, the slope is given in \eqref{eq:analytical_slope}.
\section*{Acknowledgements}
This work was partially supported by the Consolidated Grant ST/L000334/1 from the UK Science and Technology Facilities Council.
\section*{Results}
\subsection*{Hamiltonian design}
To solve a specific problem, our GSCQC protocol starts with designing a problem Hamiltonian whose ground state corresponds to the solution, as in AQC. Different from the AQC, we then need to design an ancillary system such that we can realize ground state cooling with high probability.
Grover's search problem is modelled by considering a set $\{ \ket{0}, \ket{1}, \dots , \ket{N-1}\}$ of $N$ orthogonal states in a finite Hilbert space. One of these states, $\ket{w}$, is the solution to the search problem, and our task is to find it. For any given state $\ket{j}$, the system, acting as an Oracle, gives a yes/no answer to the question ``is it true that $\ket{j} = \ket{w}$?''. The Oracle Hamiltonian reads~\cite{Adiabatic_qc_2},
\begin{equation}\label{H_0}
H_0 = \varepsilon \left( \vphantom{1^1} \mathbb{I} - |w \rangle\langle w| \right),
\end{equation}
where $\varepsilon$ is the strength of the Hamiltonian, and $\mathbb{I}$ is the identity matrix. From~(\ref{H_0}) it follows that $H_0\ket{j}=\lambda \ket{j}$, where $\lambda = 0$ for $\ket{j}=\ket{w}$ and $\lambda = \varepsilon$ for the remaining states $\ket{j}\neq\ket{w}$.
Because $\ket{w}$ is the non-degenerate ground state, the system described by Hamiltonian~(\ref{H_0}) will reach this state after the ground state cooling process. Importantly, we have to make sure that the system is indeed in its ground state {\em before} reading it out. This requirement determines the strategy of our proposed GSCQC protocol, where the cooling process is divided into two steps. First, the system is cooled by conventional techniques to the lowest achievable temperature $T$; afterwards, we apply our shot cooling scheme~\cite{Li2011}, which has been verified experimentally in Ref. \cite{Xu2014}.
Assume that after the conventional cooling, our system is in a Gibbs thermal state described by a density matrix:
\begin{align}\label{initial_rho}
&\rho_{or}(0) = p_0\ket{w}\bra{w} + p_1\sum_{n\neq w}\ket{n}\bra{n} , \nonumber \\
&p_0 = \dfrac{1}{1 + (N-1)e^{-\varepsilon/kT}}, \, p_0 + (N-1)p_1 = 1,
\end{align}
where $k$ is the Boltzmann constant, $p_0$ is the probability of the system being in the ground state, and $p_1$~is the probability of finding the system in one of the excited states $|n \rangle$ $(n \neq w)$.
We show that by using our GSCQC protocol, the initial thermal state can be driven to a pure (or almost pure) solution state: $\rho_{or}(0)\xrightarrow{GSCQC} \ket{w}\bra{w}$.
It has been shown theoretically~\cite{Li2011} and experimentally~\cite{Xu2014} that the system ground state cooling can be achieved by a sequence of joint unitary evolutions and selective measurements on an ancilla coupled to the system. In our case we select a qubit as the ancilla. The total Hamiltonian of the system (or Oracle) and ancilla reads
\begin{align}\label{H_total}
H =& H_0 + \gamma \left(\ket{g}\bra{g}-\ket{e}\bra{e}\right) + \nonumber \\
&\delta\sum_{n=0}^{N}\left( \ket{n,e}\bra{n+1,g}+h.c. \right),
\end{align}
where $\ket{g}$ $(\ket{e})$ is the ground (excited) state of the ancilla qubit, $\gamma$ corresponds to the energy splitting of the ancilla qubit, and $\delta$ is the coupling strength between the ancilla and Oracle. We consider a periodic boundary condition, $\ket{N}\equiv \ket{0}$, and set $\varepsilon=1$ and~$\hbar=1$ throughout the paper. Importantly, the second and third terms in Hamiltonian~(\ref{H_total}) do not contain any information about the unknown answer~$\ket{w}$. The initial density matrix of the whole system is $\rho(0) = \rho_{or}(0)\otimes\ket{g}\bra{g}$. In {\bf{Methods}}, we show how the system is cooled down to the state $\rho_{or}(M)$ after the~$M$-th measurement on the ancilla~\cite{Li2011,Hiromichi}. The shot cooling is carried out in $M (\ge 1)$ iterations. Each iteration starts with the density matrix produced by the previous one, which evolves under the Hamiltonian~(\ref{H_total}) over a particular period of time~$t$. After the evolution, one makes a projective selective measurement~$\ket{g}\bra{g}$ on the ancilla, which updates the density matrix. If the outcome of the measurement is~$\ket{g}$, we proceed to the next iteration; otherwise the system is reset, as in Fig.~\ref{fig1}. Furthermore, we show that by properly choosing the time~$t$ and the parameters~$\gamma$ and $\delta$, one can achieve ground state cooling with a very high probability after a few ancilla measurements. In what follows, we introduce two different strategies for implementing our GSCQC protocol: the first is specifically designed for Grover's search problem, while the second targets a general problem, illustrated likewise by Grover's problem.
\subsection*{Algorithm of the ground state cooling}
In the standard basis, the Hamiltonian and its propagator can be represented as a direct sum of two-dimensional submatrices or blocks, which fall into only three types. One of these blocks (denoted as $0$) corresponds to the state~$\ket{w}$ and another (denoted as~$1$) to the state~$\ket{w+1}$. The third type corresponds to all other $N-2$ states (denoted as~$2$, see Eq.(\ref{h_blocks}) in {\bf Methods}). The first strategy works by engineering the Hamiltonian in such a way that the propagator of the second type block is a swap operator $\begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix}$. To achieve this we set~$\gamma = 0$ and $\delta_{1}t = \pi/2 + \pi j$, where $j=0,1,2,\dots$. After the evolution and a measurement on the ancilla, if the outcome is $\ket{g}$, the Oracle state becomes $\rho_{or}(1) = (p_0\ket{w}\bra{w}+p_1\ket{w+1}\bra{w+1})/(p_0+p_1)$. In the second step, we engineer a swap operator for the first block by setting~$\gamma = -1/2$ and again $\delta_{2} t = \pi/2 + \pi j$, $j=0,1,2,\dots$. Given that the outcome of the ancilla measurement is $\ket{g}$, we can conclude with certainty that the Oracle state is $\rho_{or}(2) = \ket{w}\bra{w}$. The probability~$p_s$ of successful ground state cooling is therefore
\begin{multline}\label{probabil_2_steps}
p_s = p_0\prod_{i=1,2}\left(\cos^2\left[\sqrt{\delta_i^2+1/4}\frac{\pi/2 + \pi j}{\delta_i}\right] + \right.\\
\left.\frac{1/4}{\delta_i^2+1/4}\sin^2\left[\sqrt{\delta_i^2+1/4}\frac{\pi/2 + \pi j}{\delta_i}\right] \right) \leq p_0
\end{multline}
where $i=1,2$ is the step number. The probability~$p_s$ can be made approximately equal to the initial probability $p_0$ by choosing~$\delta_i\rightarrow 0$ ($t_i \rightarrow \infty$). Equation~(\ref{probabil_2_steps}) implies that~$p_s\geq p_0(1-\delta_m^2/(\delta_m^2+1/4))$, where $\delta_m=\max\{\delta_1;\delta_2\}$. Assume that one has~$K$ copies of the same Oracle; the probability~$p(K)$ that the above strategy succeeds in at least one of the copies is~$p(K)=1-(1-p_s)^K\approx1-(1-p_{0})^K$. For instance, if~$p(K)=0.99$ is required, one has $K\geq7$ for $p_0=1/2$, and $K\geq44$ for~$p_0=1/10$.
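The quoted values of $K$ follow from inverting $p(K)=1-(1-p_0)^K$; the following short sketch (our own, not part of the paper's analysis) reproduces them:

```python
import math

# Number of Oracle copies K needed so that at least one copy is cooled:
# p(K) = 1 - (1 - p0)^K >= target  =>  K >= log(1 - target) / log(1 - p0).
def copies_needed(p0, target=0.99):
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p0))

print(copies_needed(0.5))  # 7
print(copies_needed(0.1))  # 44
```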
The second strategy assumes that there is no flexibility to adjust the parameters of the Hamiltonian during the cooling process. The goal here is that the 0-type $2\times2$ evolution matrix becomes diagonal while the 2-type matrices become as close to a swap operator as possible after $M$ measurements on the ancilla, given that all outcomes are $\ket{g}$.
When~$t=2\pi$, the optimal parameters for the search algorithm are $\gamma\approx0.059$ and $\delta\approx0.236$
(see {\bf Methods} for details).
\begin{figure}[t]
\begin{center}
\includegraphics{fig_1.eps}
\end{center}
\caption{\textbf{Sketch of our GSCQC protocol.} The initial product state of the system~$\rho_{or}(0)$ and the ancilla qubit~$\ket{g}\bra{g}$ becomes correlated due to the joint evolution~$U(t)$. After a projective measurement~$\ket{g}\bra{g}$ on the ancilla, the next step of the cooling process is applied if the outcome is~$\ket{g}$, and the state of the system changes to~$\rho_{or}(1)$. Otherwise, the process has to be run again from the beginning.}
\label{fig1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics{fig_2.eps}
\end{center}
\caption{\textbf{Probabilities as a function of $M$.} The probability of finding the ancilla in its ground state at different temperatures. Here the database size is $N=10^{23}$ and $T = T_0 + \delta T$, where $T_0 = \varepsilon (k\ln N)^{-1}$. Inset: survival probability as a function of $M$ at different temperatures.}
\label{fig2}
\end{figure}
\subsection*{Initial conventional cooling as a quantum computation tool}
The effectiveness of GSCQC depends dramatically on the initial temperature~$T$. The characteristic temperature~$T_0$, which we define by~$p_0 = 1/2$ in Eq.~(\ref{initial_rho}), is $T_0 = \dfrac{\varepsilon}{k\ln N}$. It has been shown~\cite{grover_number} that the minimum size $N$ of the Oracle database for which quantum Grover's search becomes more advantageous than its classical counterpart is $N\approx10^{22}$. The energy gap~$\varepsilon$ in the experimental NMR realization of the Grover search algorithm~\cite{nmr_adiabatic} is~$\varepsilon\approx 8.3\times10^{-19}$~erg, which corresponds to the resonance frequency~$125.76$~MHz of~$^{13}$C in a magnetic field of $11.2$~Tesla. Using these data we estimate~$T_0\approx1.2\times10^{-4}$~K. This temperature is very low, but it is achievable with modern cooling techniques~\cite{low-temp-book, cooling_BEC_500pk}.
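The estimate of $T_0$ can be reproduced by a short back-of-the-envelope computation in CGS units (our own sketch, using the resonance frequency quoted above):

```python
import math

# Characteristic temperature T0 = eps / (k ln N), with eps = h * nu
# for the 125.76 MHz 13C resonance quoted in the text (CGS units).
h = 6.626e-27        # Planck constant, erg s
k = 1.381e-16        # Boltzmann constant, erg/K
eps = h * 125.76e6   # ~8.3e-19 erg
T0 = eps / (k * math.log(1e23))
print(f"{T0:.1e} K")  # of order 1e-4 K
```

The result is of order $10^{-4}$~K, consistent with the value quoted in the text.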
A proposal for quantum computation based on electron spin resonance in solids~\cite{ESR_QC} allows one to reach $\exp(-\varepsilon/kT)\approx0.02$ by using a combination of strong pre-polarization fields and laser pulses at the cryogenic temperature $T = 4.2$~K. This corresponds to $T_0\approx 0.3$~K for the Hamiltonian~(\ref{H_0}) with $N=10^{23}$. These examples imply that we can expect experimental realizations of the ground state even with conventional cooling, reading out the correct {\em answer} with high probability. We thus claim that, for the problem Hamiltonian, conventional ground state cooling already acts as a non-deterministic quantum computer, a quantum effect induced by conventional cooling much like superconductivity. This seems to be the simplest version of quantum computing. After reading out, the state should finally be checked and confirmed by substituting it into the eigenequation of the problem Hamiltonian. An issue now arises with this final check and confirmation. Unlike explicitly written problem Hamiltonians, our problem Hamiltonian is a {\em black-box or unknown} Hamiltonian hidden in the Oracle, with which we are not able to check whether or not the state satisfies the eigenequation of the unknown Hamiltonian. In this sense, conventional ground state cooling cannot complete the full task of quantum computation, and we therefore need the second step.
\subsection*{Efficiency of ground state cooling}
Equation~(\ref{initial_rho}) shows that small temperature variations around $T_0$ can cause dramatic changes in the probability $p_0$. As discussed earlier, our second step uses shot cooling to avoid errors from temperature fluctuations, to obtain a probability close to~$100\%$ of achieving the right answer, and, more importantly, to provide a complete readout scheme for unknown Hamiltonians.
Let us consider~$N=10^{23}$ and denote the temperature of the system after the conventional cooling as $T = T_0 + \delta T$, where we consider the worst cases $\delta T\geq 0$. Figure~\ref{fig2} shows the probability of achieving the answer state~$\ket{w}$ after $M$ ancilla measurements, given that all outcomes are $\ket{g}$, for different values of~$\delta T/T_0$. After a few such ancilla measurements, the probability approaches~$1$. The inset of Fig.~\ref{fig2} shows the survival probability~$p_s$ versus $M$, which approaches a constant $p_0$. For instance, if $T = T_0$ and~$p_0 = 1/2$ initially, after $M=3$ ancilla measurements the probability to be in the ground state~$\ket{w}$ is about~$0.999$, while $p_s\approx 1/2$.
We now consider the effect of temperature fluctuations. For a given probability $P_{cooling}$, different temperature fluctuations require different numbers of ancilla measurements $M$. For our first strategy, $P_{cooling}=1$ after two ancilla measurements. Figure~\ref{fig2} illustrates the second strategy, for which $M$ increases with $\delta T/T_0$.
\begin{comment}
It can be shown by using Eq.~(\ref{Wi-from-W0}) that the number $M_p$ of sequential non-discarded measurements for desired probability $1/N<P_{cooling}\leq1/3$ and for any $N$ can be written as
\begin{equation}\label{log_law}
M_p < \log_{1/b_2}\frac{2P_{cooling}N}{1-P_{cooling}},
\end{equation}
assuming $b_1^M<N b_2^M$.
If the required probability $P_{cooling} = 1 - \Delta$, where $\Delta\ll1$, then we have:
\begin{equation}\label{log_law_2}
M_p < \log_{b_1}\Delta,
\end{equation}
within is valid for $N\ll \Delta / (1/b_2)^{\log_{1/b_1}\Delta}$. For example, if $P_{cooling}=0.99$ and $N<10^{90}$, then $M_p < 75$.
\end{comment}
For a given maximum value of $\delta T/T_0$, a minimal number $M_p$ of ancilla measurements is required, i.e.
\begin{align}\label{log_law_3}
&M_p < \log_{1/b_2}\dfrac{P_{cooling}N}{1-P_{cooling}-aP_{cooling}}, \nonumber\\
&a = \left( \frac{1}{N} \right)^{\dfrac{1}{1 + \delta T/T_0}},
\end{align}
with the probability $P_{cooling} < 1 - N^{-1/(1+\delta T/T_0)}$.
\begin{comment}
\begin{figure}[t]
\begin{center}
\includegraphics{fig_3.eps}
\end{center}
\caption{\textbf{How many non-interrupted cycles one needs to get a result.} Minimal number of non-discarded measurements~$M_p$ to achieve the ``answer state'' with probability~$P_{cooling}$ as a function on database size~$N$. a) $P_{cooling}=1/3$, triangles ( $\delta T/T_0 = 10$ ) and squares ( $\delta T/T_0 = 3$ ) are obtained from exact simulation (see Methods), solid line is calculated by using Eq.~(\ref{log_law}): $M_p = \lfloor \log_{16.43}(2P_{cooling}N(1-P_{cooling})^{-1}) \rfloor + 1$. b) $P_{cooling}=0.9$, triangles ( $\delta T/T_0 = 9$ ) and squares ( $\delta T/T_0 = 1$ ) are obtained from exact simulation, solid lines are calculated by using Eq.~(\ref{log_law_3}): $M_p = \lfloor \log_{16.43}(P_{cooling}N(1-P_{cooling}-aP_{cooling})^{-1}) \rfloor + 1$ .} \label{fig3}
\end{figure}
\end{comment}
\begin{figure}[t]
\begin{center}
\includegraphics{energy_variance.eps}
\end{center}
\caption{\textbf{Probabilities as a function of the gap $r$. } (a) Schematic illustration of the Oracle's energy levels. (b) Minimal probability of finding the ancilla in its initial state for $M=4$ as a function of the gap $2r$ between low and high excitation states at different temperatures. The database size is $N=10^{23}$ and $T = T_0 + \delta T$, where $T_0 = \varepsilon (k\ln N)^{-1}$.} \label{fig4}
\end{figure}
\begin{comment}
\begin{figure}[t]
\begin{center}
\includegraphics{fig_4.eps}
\end{center}
\caption{Minimal number of non-discarded measurements~$M_p$ to achieve the ``answer state'' with probability~$p=0.99$ as function on database size~$N$. Triangles ( $\delta T/T_0 = 3$ ) and squares ( $\delta T/T_0 = 1$ ) corresponds to simulation using expressions~(\ref{W_b_def}), solid line calculated by using expression~(\ref{log_law_3}): $M_p = \lfloor \log_{16.43}(pN(1-p)^{-1}) \rfloor + 1$.}
\label{fig4}
\end{figure}
\end{comment}
\section*{Discussion}
We have illustrated our nondeterministic GSCQC model with Grover's search problem. The model takes two steps to find the ground state of a problem Hamiltonian. We first notice that conventional cooling itself may act as quantum computation to find the ground state of a {known} problem Hamiltonian. For an {unknown} problem Hamiltonian, our second step finds the ground state by ancilla measurements, as conceptually proposed in Ref.~\cite{Li2011}. We design two strategies for the ancilla measurements: in the first strategy, designed specifically for Grover's search problem, $M=2$ independently of the database size $N$; in the second strategy, applicable to a general problem Hamiltonian, $M$ depends logarithmically on $N$, as exemplified by Fig.~\ref{fig4}.
It is interesting to note that if the ground state~$\ket{w}$ is known, the second step of the GSCQC model can be further simplified to one ancilla measurement, by using two Hadamard gates acting on ancilla qubit, one control-$U$ gate and one ancilla measurement, where $U$ is the propagator of the black-box Oracle Hamiltonian~(\ref{H_0}), $U = \exp(-i\pi H_0)$. However, the simplified scheme cannot be applied to an unknown~$\ket{w}$, because realizing control-$U$ gate with black-box~$H_0$ Hamiltonian is impossible as proven in~\cite{unknown-u}. Intuitively, the control-$U$ gate may be physically given by the Hamiltonian $H_{c-U} = H_0 - \sum_{n\neq w}\ket{n,g}\bra{n,g}$, which implies a contradiction that one has to have pre-knowledge about the known ground state~$\ket{w}$.
A direct generalization is the case when the excitation states are not fully degenerate. Figure~\ref{fig4}~(a) assumes that the energy gaps~$2r$ between low and high excitation states are smaller than the gap between the ground state and the first excitation state. In~Fig.~\ref{fig4}~(b) we show the cooling probability of the Oracle after $M=4$ ancilla measurements as a function of the gap~$r$, indicating that small energy gaps between low and high excitation states do not alter the cooling probability.
\section*{Methods}
After $M$ ancilla measurements, given that all outcomes are $\ket {g}$, the density matrix of the system is~\cite{Hiromichi}:
\begin{equation}\label{rho_transf_total}
\begin{aligned}
\rho_{or}(M) = \dfrac{V^M\rho_{or}(0)V^{\dag M}}{\mathrm{Tr}(V^M\rho(0)V^{\dag M})}, \\
\end{aligned}
\end{equation}
where $V = \braket{g|e^{-iHt}|g}$. Due to the block-diagonal structure of $H$, it is easier to treat each $2\times2$ submatrix of $H$ and $\rho$ separately.
It can be checked that the Hamiltonian~(\ref{H_total}) is block-diagonal with three types of $2\times2$ submatrices or blocks,
\begin{align}\label{h_blocks}
&h_0 = \begin{pmatrix}
1-\gamma & \delta \\
\delta & \gamma \\
\end{pmatrix}, \nonumber \\
&h_1 = \begin{pmatrix}
-\gamma & \delta \\
\delta & 1+\gamma \\
\end{pmatrix}, \nonumber\\
&h_2 = \begin{pmatrix}
1-\gamma & \delta \\
\delta & 1 + \gamma \\
\end{pmatrix}.
\end{align}
There are $N-2$ blocks of $h_2$ type and one block each of the $h_0$ and $h_1$ types in the Hamiltonian. The block $h_0$ corresponds to the solution $\ket{w}$. The density matrix of the system stays block-diagonal throughout the shot cooling process, and we denote the corresponding $2\times2$ blocks of the total density matrix as $\rho_0(M), \rho_1(M)$, and $\rho_2(M)$, respectively. The corresponding initial blocks are $\rho_0(0)=p_0\mathrm{diag}(0,1)$, and $\rho_1(0)=\rho_2(0)=p_1\mathrm{diag}(0,1)$.
The evolution of the density matrix blocks between two consecutive measurements is governed by the corresponding blocks of the Hamiltonian. After $M$ ancilla measurements, the blocks of the density matrix become
\begin{align}\label{rho_steps}
&\rho_0(M) = W_0(M)\mathrm{diag}(0,1), \nonumber\\
&\rho_1(M) = W_1(M)\mathrm{diag}(0,1), \nonumber\\
&\rho_2(M) = \frac{W_2(M)}{N-2}\mathrm{diag}(0,1)
\end{align}
with
\begin{align}\label{W_b_def}
&W_0(M)+W_1(M)+W_2(M)=1, \quad W_0(0)=p_0, \nonumber\\
&W_1(0)=(N-2)^{-1}W_2(0)=p_1, \nonumber\\
&W_i(M+1) = b_iA(M)W_i(M), \; i=0,1,2.\nonumber\\
&A(M) = \left( \sum_i b_i W_i(M) \right)^{-1}, \nonumber\\
&\begin{pmatrix}
1-b_i & . \\
. & b_i \\
\end{pmatrix} = e^{-ih_it}\begin{pmatrix}
0 & 0 \\
0 & 1 \\
\end{pmatrix}e^{ih_it},
\end{align}
where $b_i$ corresponds to the $\ket{g}$ state of the ancilla and the dots in a matrix in Eq.~(\ref{W_b_def}) are arbitrary numbers irrelevant to the shot cooling procedure. $W_0(M)$ is the probability of the Oracle system being in the answer state~$\ket{w}$. From Eq.~(\ref{W_b_def}) we can obtain
\begin{equation}\label{Wi-from-W0}
W_i(M) = \frac{W_i(0)b_i^M}{\sum_{j}W_j(0)b_j^M}.
\end{equation}
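As an illustration (our own sketch, not the code used for the figures), the closed form above, combined with the Rabi-type survival probabilities $b_i$ of the $2\times2$ blocks, reproduces the behavior quoted in the main text for the second strategy ($\gamma\approx0.059$, $\delta\approx0.236$, $t=2\pi$, $p_0=1/2$, $N=10^{23}$):

```python
import math

# Survival probability b = |<2| exp(-i h t) |2>|^2 of a 2x2 block
# h = [[a, d], [d, c]], via the Rabi formula with r = sqrt(((a-c)/2)^2 + d^2).
def survival(a, c, d, t):
    r = math.sqrt(((a - c) / 2.0) ** 2 + d ** 2)
    return 1.0 - (d / r) ** 2 * math.sin(r * t) ** 2

gamma, delta, t, N = 0.059, 0.236, 2.0 * math.pi, 1e23
b0 = survival(1.0 - gamma, gamma, delta, t)        # answer block, ~1
b1 = survival(-gamma, 1.0 + gamma, delta, t)
b2 = survival(1.0 - gamma, 1.0 + gamma, delta, t)  # suppressed block

# Ground-state weight W_0(M) from the closed form, starting at T = T0 (p0 = 1/2).
p0 = 0.5
p1 = (1.0 - p0) / (N - 1.0)

def ground_weight(M):
    num = p0 * b0 ** M
    return num / (num + p1 * b1 ** M + (N - 2.0) * p1 * b2 ** M)

print(ground_weight(3))  # exceeds 0.999 after three measurements
```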
Eq.~\eqref{Wi-from-W0} is used for the numerical simulation in Fig.~\ref{fig2}. Following~\cite{gsc_paper} we require simultaneously: (i) $b_0=1$ (dictated by the answer block) and (ii) $b_2\rightarrow0$ (corresponding to the second type block).
Using Eqs.~(\ref{h_blocks}) and~(\ref{W_b_def}), the requirement (i) is
\begin{equation}\label{first_condition}
\dfrac{\delta^2}{\delta^2 + (1/2 - \gamma)^2}\sin^2\sqrt{\delta^2 + (1/2 - \gamma)^2}t = 0.
\end{equation}
By setting $t=2\pi$ we get from Eq.~(\ref{first_condition})
\begin{equation}\label{ep-de-relation}
\delta^2 = \gamma(1-\gamma) + \frac{1}{4}\left( n^2 -1 \right), \; n=1,2,3\dots.
\end{equation}
We take~$n=1$. The requirement (ii) means that the following expression should have a maximum value:
\begin{equation}\label{second_condition}
\dfrac{\delta^2}{\delta^2 + \gamma^2}\sin^2\sqrt{\delta^2 + \gamma^2}t.
\end{equation}
The last requirement allows us to estimate $\delta$ and $\gamma$ for any given $t$.
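For $t=2\pi$ and the $n=1$ branch of Eq.~(\ref{ep-de-relation}), i.e.\ $\delta^2=\gamma(1-\gamma)$, expression~(\ref{second_condition}) reduces to $(1-\gamma)\sin^2(2\pi\sqrt{\gamma})$, and a simple grid search (our own sketch) reproduces the quoted optimal parameters:

```python
import math

def transition_prob(gamma):
    # requirement (ii) with delta^2 = gamma*(1 - gamma) and t = 2*pi,
    # which simplifies to (1 - gamma) * sin^2(2*pi*sqrt(gamma))
    return (1.0 - gamma) * math.sin(2.0 * math.pi * math.sqrt(gamma)) ** 2

# search gamma on a fine grid in (0, 0.25)
best_gamma = max((g * 1e-5 for g in range(1, 25000)), key=transition_prob)
best_delta = math.sqrt(best_gamma * (1.0 - best_gamma))
print(round(best_gamma, 3), round(best_delta, 3))  # 0.059 0.236
```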
\section*{Acknowledgments}
We acknowledge grant support from the Basque Government (Grant No. IT986-16), the Spanish MICINN (Project No. FIS2015- 69983-P), and the Basque Country University UFI (Project No. 11/55-01-2013), the National Key Research and Development Program of China (No. 2016YFA0301200) and the NSAF (Nos. U1330201 and U1530401).
\bibliographystyle{ieeetr}
\section{Introduction}
\label{Introduction}
The transition metal oxides based on nickel, or the nickelates for short, have witnessed a resurgence of interest in the last few years. Several recent papers have shown that nickelates are unique due to their strongly coupled charge, spin and lattice degrees of freedom, which can be manipulated to engineer novel electronic and magnetic phases (see for example Ref.~\citenum{PhysRevLett.103.156401, Zhao2014, PhysRevLett.113.227206}). Another reason for this resurgence can be attributed to the discovery of superconductivity in Nd$_{0.8}$Sr$_{0.2}$NiO$_{2}$ by Li et al. in the year 2019, which, in fact, led to the fulfillment of a long-sought-after quest for superconductivity in the nickelates \cite{Li2019}. Nearly two years before this momentous discovery, an ARPES study on single crystals of La$_4$Ni$_3$O$_{10}$, which is the $n =3$ member of the Ruddlesden Popper (RP) La$_{n+1}$Ni$_n$O$_{3n+1}$ $(n = 1, 2, 3 \dots \infty)$ series, revealed a large hole Fermi surface that closely resembles the Fermi surface of optimally hole-doped cuprates~\cite{Li2017} (see also Ref. \citenum{Zhang2017}). This discovery is important since the infinite layer NdNiO$_{2}$ (called the $T'$ phase) is related to the perovskite NdNiO$_3$ ($n = \infty$ member of RP series) from which it is obtained through a process of chemical reduction. In general, there is a whole range of infinite layer $T'$ phases given by $R_{n+1}$Ni$_n$O$_{2n+2}$ $(n = 1, 2, 3 \dots \infty)$, where $R$ is usually an alkaline earth or rare-earth ion, that are analogously related to their corresponding RP $R_{n+1}$Ni$_n$O$_{3n+1}$ phases. The nickelates of the RP series, therefore, constitute the primary phases with perovskite-type structure elements from which other nickelates, including the infinite layer $T'$ variants, can be derived.
A survey of the past literature on the nickelates of the RP series reveals that the $n = 1, 2, 3, \dots$ members of the RP series are relatively much less investigated -- an exception to this being La$_2$NiO$_{4-\delta}$ ($n = 1$), which shows an interesting phase diagram as a function of $\delta$ (see for example: Ref. \citenum{GREENBLATT1997174}). These intermediate members between $n = 1$ and $n = \infty$, in fact, exhibit a mixed-valent state, with the Ni valence ranging from $2+$ for $n = 1$ to $3+$ for $n = \infty$. Such mixed valency is well known to give rise to strongly coupled electronic and magnetic phases (see for example Ref. \citenum{CoeyAIP1999}). Hence, there has been significant interest in studying these compounds in recent years.
In particular, the $n = 3$ member of the RP series, consisting of the compounds $R_4$Ni$_3$O$_{10}$\ ($R =$ La, Pr and Nd) with an average Ni valence of $2.67$, is interesting to investigate. The compounds $R_4$Ni$_3$O$_{10}$ ($R =$ La, Pr and Nd) are relatively easy to synthesize in pure form, and can also be readily reduced to their corresponding infinite layer $T'$ analogs. Previous studies have shown that they undergo a metal-to-metal transition (MMT) in the temperature range $135$ K to $160$ K depending on the identity of the $R$ ion. Recently, they have drawn considerable attention (see for example: Ref. \citenum{Li2017,zhang2020intertwined, PhysRevB.97.115116, PhysRevB.101.195142, PhysRevB.101.104104, ZhangPRM2020, HuangfuPRR2020}). However, the majority of these studies have mainly focused on understanding the nature of the MMT, where the Ni $3d$ electrons play a crucial role. The magnetic ground state of the rare-earth sublattice, i.e., of the $4f$ electrons, and the interplay of the $3d$ and $4f$ electrons have not been studied in detail so far. Moreover, the question of whether or not a structural phase transition is associated with the MMT has remained unsettled over the years.
Here, we investigate the resistivity, thermopower, thermal conductivity, magnetic susceptibility and specific heat of $R_4$Ni$_3$O$_{10}$ ($R = $La, Pr and Nd) in considerable detail to explore and understand the low-temperature properties arising due to the $4f$ electrons. Further, to shed light on the nature of the MMT, the crystal structure of $R_4$Ni$_3$O$_{10}$ ($R =$ La, Pr and Nd) is examined over a broad temperature range spanning the MMT in all three compounds using very high-resolution synchrotron data on high-quality samples. This is complemented by high-resolution capacitance dilatometry to investigate the temperature dependence of the thermal expansion and the Gr\"{u}neisen parameter across the MMT.
We show that in Pr$_4$Ni$_3$O$_{10}$, the Pr moments in the rock-salt block layers undergo magnetic ordering near $T_N = 5$ K, while the Pr$^{3+}$ ions in the perovskite block layers exhibit a crystal-field-split non-magnetic singlet ground state. On the other hand, the Nd$^{3+}$ moments in Nd$_4$Ni$_3$O$_{10}$ show no long-range ordering down to $2$ K (the lowest temperature in our measurements). The paramagnetic Curie temperatures of these compounds are found to lie in the range $-40$ K to $-50$ K, indicating the presence of strong magnetic frustration. The effective carrier mass deduced from the specific heat and thermopower lies in the range 2 to 4 times the free-electron mass, indicating moderately enhanced electronic correlations. The resistivities of all three compounds show an upturn at low temperatures obeying a $-\sqrt{T}$ dependence. No evidence for the Kondo effect or heavy-fermion behavior in any of the $R_4$Ni$_3$O$_{10}$ compounds could be found, contradicting the recently published claim of a heavy-fermion state due to a Ni$^{3+}$-centered Kondo effect in Nd$_4$Ni$_3$O$_{10}$ \cite{PhysRevB.101.195142}.
The rest of the paper is organized as follows: The details of the experimental methods are given in section \ref{ED}. This is followed by the Results and Discussion section (\ref{III}), which is divided into several subsections for convenience. The details of the crystal structure appear under \ref{IIIA}. The electrical and thermal transport, and the magnetic susceptibility, are briefly discussed in \ref{ET}. This is followed by subsections on the specific heat (\ref{SH}) and thermal expansion (\ref{TE}), both of which form the crux of the paper. Finally, a summary of the important results and the conclusions drawn are presented under section \ref{SC}.
\section{Experimental Details}
\label{ED}
Conventional solid-state synthesis of the higher members of the RP family leads to the formation of mixed phases and intergrowth~\cite{A702424J,ZHANG1994402}, which greatly influences the physical properties of the compounds. Hence, we adopted a wet chemical method to synthesize these compounds in pure phase. Further details of the sample preparation are given in the Supplementary Material (Ref.~\citenum{SM}). The phase purity was monitored using a Bruker D8 Advance powder X-ray diffractometer. The chemical composition of the samples was analyzed using the energy dispersive X-ray analysis (EDX) technique in a Zeiss Ultra Plus scanning electron microscope. Since the structural and electronic properties of RP phases often show a strong dependence on the oxygen stoichiometry, we carried out complete decomposition of our samples under a $10\%$ Ar-H$_2$ atmosphere, employing a heating rate of $5$ K/min in a high-resolution TGA setup (Netzsch STA $449$ F1). From these experiments, we inferred the oxygen stoichiometry to lie in the range $97~\%$ to $98~\%$ of the ideal value for all the samples.
The high-resolution synchrotron powder X-ray diffraction experiments were carried out at the MSPD-BLO4 beamline of the ALBA synchrotron center, Barcelona, Spain. The samples were prepared in the form of finely ground powders placed in a borosilicate capillary tube of $0.5$ mm inner diameter. The sample was cooled using an Oxford Cryostream $700$ series nitrogen blower, and the diffractograms were collected in the range $0^\circ\leq 2\theta\leq30^\circ$ with a step size of $0.003^\circ$. The incident beam energy was set at $38$ keV ($\lambda =0.3263$~\AA), and a high-resolution MAD$26$ detector with an angular resolution of about $4 \cdot 10^{-4}$ was used to resolve any subtle structural modifications~\cite{Fauth2015}. The data at each temperature were collected at a rate of $30$ min/scan. The structural refinement was done by the Rietveld method using the FULLPROF suite~\cite{rodriguez1993recent}. During the refinement, the occupancies of the O sites were fixed at full occupancy, since X-ray diffraction is not sensitive to the positions of the lighter elements. The pseudo-Voigt function was used to model the line profile, and linear interpolation was used to define the background. To account for the anisotropic strain broadening of the peaks, the quartic form of the broadening model was used; in this model, only those $hkl$-dependent strain parameters ($S_{hkl}$) corresponding to the Laue class were refined. Further details are given in the Supplementary Material. The quality of the refinement was assessed both from visual inspection of the fitted patterns and difference plots, and quantitatively on the basis of $\chi^2$ and the R-factors ($R_\mathrm{WP}, R_\mathrm{EXP}$ and $R_\mathrm{P}$). For fitting the low-temperature data, the lattice parameters were refined along with the angle $\beta$, the overall isotropic displacement factor ($B_\mathrm{iso}$), and the strain coefficients.
Magnetization, resistivity, thermopower and specific heat measurements were done using a Physical Property Measurement System (PPMS), Quantum Design USA. Magnetization measurements were done under both zero-field-cooled (ZFC) and field-cooled (FC) conditions. Resistivity measurements were done on sintered rectangular samples of known dimensions using the four-probe method; gold wires were attached with silver conducting paste for the electrical contacts. Specific-heat measurements were performed using the relaxation method in the PPMS. The heat capacity of the sample holder and APIEZON N grease (addenda) was determined prior to the measurements.
The relative length changes $dL/L$ were studied on cuboid-shaped sintered samples of approximate dimensions $3 \times 2 \times 1~$mm$^{3}$. The measurements were done in zero magnetic field by means of a three-terminal high-resolution capacitance dilatometer\cite{kuchler2012compact}. From these data, the relative volume changes $dV/V= 3\, dL/L$ and the volume thermal expansion coefficient $\beta=3\alpha$, with $\alpha = (1/L)\, dL(T)/dT$, were derived.
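As a minimal numerical illustration of this conversion (the smooth quadratic $dL/L$ curve below is a purely hypothetical stand-in, not measured data), the dilatometer output can be differentiated as:

```python
# Illustrative sketch: converting relative length changes dL/L(T) into the
# linear (alpha) and volume (beta = 3*alpha) thermal-expansion coefficients
# of an effectively isotropic polycrystal. The dL/L curve is hypothetical.
Ts = [float(t) for t in range(10, 301, 10)]            # temperatures in K
dL_over_L = [1e-9 * t ** 2 for t in Ts]                # hypothetical dL/L(T)

# alpha(T) = d(dL/L)/dT by central differences
alpha = [(dL_over_L[i + 1] - dL_over_L[i - 1]) / (Ts[i + 1] - Ts[i - 1])
         for i in range(1, len(Ts) - 1)]
beta = [3 * a for a in alpha]    # volume coefficient, beta = 3*alpha
```

For the quadratic input, the central difference is exact: at $T = 20$~K it returns $\alpha = 4\times10^{-8}$~K$^{-1}$ and $\beta = 1.2\times10^{-7}$~K$^{-1}$.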
\begin{figure}[!]
\centering
\includegraphics[width= 1.1\columnwidth]{CS.pdf}
\caption {The crystal structure of trilayer $R_4$Ni$_3$O$_{10}$ ($R =$ La, Pr and Nd) nickelates. Here PB represents the perovskite block and RS represents the rocksalt layer. R1 and R3 denote $9-$fold and $12-$fold coordinated rare-earth ions located in RS and PB, respectively.}
\label{CS}
\end{figure}
\section{Results and Discussion}
\label{III}
\subsection{Crystal structure}
\label{IIIA}
\textit{La$_4$Ni$_3$O$_{10}$}: There is a great deal of ambiguity in the previous literature regarding the space group that correctly describes the crystal structure of La$_4$Ni$_3$O$_{10}$. The earliest work, by Sepp\"{a}nen et al., reported the orthorhombic space group $Fmmm$ \cite{seppanen1979crystal}. However, Tkalich et al. \cite{tkalich1991synthesis} and Voronin et al. \cite{VORONIN2001202} used the space group $Cmca$. Ling et al. \cite{LING2000517}, on the other hand, found the orthorhombic space group $Bmab$ (an unconventional setting of $Cmca$) to be more suitable for refining their neutron powder diffraction data. Zhang et al. carried out structural refinements on powders obtained by crushing high-pressure floating-zone-grown single-crystalline specimens \cite{ZhangPRM2020}. They propose that La$_4$Ni$_3$O$_{10}$ crystallizes in a mixture of $Bmab$ and $P2_1\slash a$---the fraction of the two phases being a function of the cooling conditions employed \cite{zhang2019high}. For example, the $Bmab$ phase transforms almost completely to $P2_1\slash a$ when annealed under flowing oxygen. Finally, in a recent synchrotron-based study by Kumar et al., the space group $P2_1/a$ ($Z = 4$) has been endorsed~\cite{KUMAR2020165915}.
\begin{figure*}[!]
\centering
\includegraphics[width= 2\columnwidth]{RR_4310.pdf}
\caption {Rietveld refinement results of the room-temperature synchrotron powder X-ray diffraction data for the three nickelates: (a1)~La$_4$Ni$_3$O$_{10}$, (a2)~Pr$_4$Ni$_3$O$_{10}$, (a3)~Nd$_4$Ni$_3$O$_{10}$. The black circles represent the observed data; the red line is the calculated intensity; the vertical green bars indicate the positions of the Bragg peaks; the blue line at the bottom is the difference plot. In panel (a1) the first, second, and third rows of Bragg peaks correspond to the $P2_1\slash a$, $Bmab$ and La$_3$Ni$_2$O$_7$ phases, respectively. In panels (a2) and (a3), only a single-phase refinement is done and the Bragg peaks (vertical green bars) are for $P2_1\slash a$. Panels (b), (c) and (d) show the temperature variation of the lattice parameters $a$, $b$ and $c$, respectively; panels (e) and (f) show the temperature dependence of the angle $\beta$ and the unit cell volume, respectively; panels (g) show the normalized unit cell parameters. In some cases the size of the error bars is smaller than that of the data points.}
\label{RR4310}
\end{figure*}
In order to identify the most appropriate space group from among those previously reported, we started by refining the structure using one space group at a time. To avoid biasing this procedure, each space group was tried until the refinement could not be improved further. Using this procedure (see the Supplementary Material for details \cite{SM}), we found that $P2_1\slash a$ (SG no.~$14, Z = 4$) best fits the experimental data. However, even with $P2_1\slash a$, the calculated profile around the high-intensity peaks in the range $2\theta = 6 ^\circ$ to $7^\circ$, and those around $2\theta = 9.7 ^\circ$, remains far from perfect, as shown in the Supplementary Material. A similar difference between the calculated and measured intensities can also be seen in the work of Kumar et al. (see Fig. $2a$ and $2b$ of Ref. \citenum{KUMAR2020165915}).
We therefore attempted a mixed-phase refinement wherein, besides the principal $P2_1\slash a$ phase, two additional phases were incorporated: (i) the orthorhombic $Bmab$ (SG no. $64$) phase, and (ii) a lower ($n = 2$) member, La$_3$Ni$_2$O$_7$, with the orthorhombic space group $Cmcm$ (SG no. $63$). As shown in the Supplementary Material \cite{SM}, inclusion of the $Bmab$ phase alone improves the quality of the fit significantly, with $P2_1\slash a : Bmab \equiv 86.3 : 13.7$. In order to see if an even better match with the observed intensities could be obtained, La$_3$Ni$_2$O$_7$ was also incorporated, which led to a further slight improvement. In this case, we find the ratio of the three phases to be $P2_1\slash a : Bmab :$~La$_3$Ni$_2$O$_7 \equiv 85.6: 7.8 : 6.6$. Clearly, in both the $2$-- and $3$--phase refinements, the phase fraction of the primary phase $P2_1\slash a$ remains more or less unchanged. Since the $R$-factors quantifying the quality of the fit are slightly lower for the $3$--phase refinement, we show its results in Fig.~\ref{RR4310}(a1). Finally, even in the $3$--phase refinement, some mismatch between the observed and calculated intensities around $2\theta = 10 ^\circ$ remains; this has also been reported in previous studies and may arise from stacking faults \cite{NAGELL20177}. It should also be remarked that a small extra peak, $\sim 1\%$ of the intensity of the main peak, is observed near $2\theta = 8.95^\circ$ (see Fig. \ref{LTXRDLa}(b), \ref{LTXRDLa}(f) or \ref{LTXRDLa}(j)), which indicates the presence of a small, unidentified parasitic phase.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{LTXRDLa.pdf}
\caption {Synchrotron powder X-ray diffraction data for La$_4$Ni$_3$O$_{10}$ at three representative temperatures: $90$ K (top row), $140$ K (middle row) and $300$ K (bottom row) over select $2\theta$ ranges. The dashed vertical lines are shown as a guide to the eye. The y-scale in each panel is kept the same. Asterisk indicates an unidentified peak. Analogous low-temperature data for Pr$_4$Ni$_3$O$_{10}$~and Nd$_4$Ni$_3$O$_{10}$~are shown in the Supplementary Material \cite{SM}.}
\label{LTXRDLa}
\end{figure}
Fig.~\ref{RR4310}(b1-f1) show the temperature variation of the lattice parameters of the $P2_1\slash a$ phase. The lattice parameters decrease monotonically upon cooling, exhibiting clearly discernible anomalies at $T_{\rm MMT}$. The $b-$axis, in fact, undergoes an expansion upon further cooling below $T_{\rm MMT}$. The diffraction patterns recorded below $T_{\rm MMT}$\ reveal neither the appearance of any new diffraction peak nor any peak splitting, which suggests that the structural reorganization across the MMT, if any, is rather subtle, without any noticeable change of the lattice symmetry (see Fig. \ref{LTXRDLa}). The negative thermal expansion along the $b-$axis is in agreement with that reported by Kumar et al. (Ref. \citenum{KUMAR2020165915}). The angle $\beta$, shown in panel \ref{RR4310}(e1), increases upon cooling, with a perceptible dip at $T_{\rm MMT}$. For comparison, the normalized lattice parameters are shown in Fig.~\ref{RR4310}(g1).
\textit{Pr$_4$Ni$_3$O$_{10}$}: Fig. \ref{RR4310}(a2) shows the results of the Rietveld refinement for Pr$_4$Ni$_3$O$_{10}$. In this case, the refinement was done using the monoclinic space group $P2_1\slash a$ (SG no. $14, Z = 4$) alone, which resulted in a satisfactory fit except near the highest-intensity peak, where the calculated profile does not exactly match the observed data. Inclusion of strain improved the fit to some extent but did not resolve the issue completely. A similar inconsistency over the same $2\theta$ range has also been observed previously~\cite{zhang2019high}. Whether stacking faults or the intergrowth of lower RP members is the reason could, however, not be reliably ascertained. Also, analogous to La$_4$Ni$_3$O$_{10}$, some intensity mismatch is observed near $2\theta = 10^\circ$ (peak $\bar{2} 2 1$), which may be due to stacking faults~\cite{NAGELL20177}.
As shown in Fig.~\ref{RR4310}(b2-e2), in the temperature range around $156$ K, where MMT is expected to occur, a clear anomaly in the lattice parameters is observed. The $b-$axis parameter shows an increase upon cooling below the MMT, analogous to La$_4$Ni$_3$O$_{10}$. In the temperature dependence of angle $\beta$, an appreciable non-monotonic variation has also been observed between the MMT and room-temperature.
\textit{Nd$_4$Ni$_3$O$_{10}$}: Fig.~\ref{RR4310}(a3) shows the results of the Rietveld refinement for Nd$_4$Ni$_3$O$_{10}$ at room temperature. The structural refinement in this case, too, was done using the monoclinic space group $P2_1\slash a$ (SG no. $14$; $Z = 4$) alone. Though all the observed peaks could be satisfactorily accounted for, the highest-intensity peak was found to be unusually broad, and strain model $2$ was used to account for it (see section \ref{ED}).
As shown in Fig.~\ref{RR4310}(b3-e3), the lattice parameters of Nd$_4$Ni$_3$O$_{10}$\ decrease monotonically upon cooling, with a weak anomaly around $160$~K, which coincides with the $T_{\rm MMT}$\ previously reported for this compound. This anomaly is most prominent in the variation of the $b-$parameter. However, unlike in La$_4$Ni$_3$O$_{10}$\ and Pr$_4$Ni$_3$O$_{10}$, the $b-$parameter in this case continues to decrease upon cooling below the MMT. The temperature variation of the angle $\beta$ is shown in panel~\ref{RR4310}(e3). Upon cooling below room temperature, $\beta$ first increases down to about $T = 200$~K and then decreases upon further cooling, showing a broad peak near $T = 200$~K. This may indicate a continuous but subtle, non-monotonic structural evolution occurring even above the MMT, analogous to the case of Pr$_4$Ni$_3$O$_{10}$; however, this should be further confirmed by collecting data at intermediate temperatures for all the samples.
\begin{table*}[!]
\setlength{\tabcolsep}{4pt}
\caption{Refinement parameters obtained using the high-resolution synchrotron data for the room-temperature crystal structure of $R_4$Ni$_3$O$_{10}$ ($R =$ La, Pr and Nd). The uncertainty in the lattice parameters is estimated to be of the order of $\pm 0.0002$~\AA.}
\label{RD}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{c c c c c c c c c c c c c}
\hline\hline
Specimen & Space group & SG No. & Phase Type & Phase \% & $a$ (\AA) & $b$ (\AA) & $c$ (\AA) & $\beta$ & $\chi^2$ & $R\rm_{WP}$ & $R\rm_{EXP}$ & $R\rm_P$\\
\hline
\tabularnewline
$La_4Ni_3O_{10}$ & $P2_1\slash a$ & 14 & M$^\dagger$ & 85.6 & 5.4243(5) & 5.4748(5) & 28.0053(4) & $90.192^\circ(3)$ & 6.15 & 14.7 & 5.90 & 11.7\\
& $Bmab$ & 64 & O$^\dagger$ & 7.8 & 5.4040 & 5.4621 & 28.5542 & $90^\circ$ & & & &\\
& $Cmcm$ & 63 & O$^\dagger$ & 6.6 & 20.1250 & 5.4638 & 5.4638 & $90^\circ$ & & & &\\
\tabularnewline
\hline
\tabularnewline
$Pr_4Ni_3O_{10}$ & $P2_1\slash a$ & 14 & M$^\dagger$ & 100 & 5.3826(4) & 5.4717(4) & 27.583(4) & $90.284^\circ(3)$ & 3.86 & 19.0 & 9.67 & 16\\
\tabularnewline
\hline
\tabularnewline
$Nd_4Ni_3O_{10}$ & $P2_1\slash a$ & 14 & M$^\dagger$ & 100 & 5.3719(4) & 5.46(5) & 27.4560(4) & $90.299^\circ(3)$ & 4.57 & 15.8 & 7.41 & 12.7\\
\tabularnewline
\hline\hline
\end{tabular}
}
\end{center}
\footnotetext{M$^\dagger$ : monoclinic and O$^\dagger$ : orthorhombic}
\end{table*}
Table~\ref{RD} summarizes the refinement details for the room-temperature crystal structures of $R_4$Ni$_3$O$_{10}$, $R = $ La, Pr and Nd. The room-temperature lattice parameters of all three samples agree well with the values reported in the previous literature \cite{KUMAR2020165915, BASSAT1998173, OLAFSEN200046}. The crystal structure of $R_4$Ni$_3$O$_{10}$ (monoclinic $P2_1/a$, $Z = 4$), shown in Fig.~\ref{CS}, comprises triple perovskite block (PB) layers ($R$NiO$_3$)$_3$, which consist of corner-linked NiO$_6$ octahedra. These triple PB layers are separated by $R$O layers with the rocksalt (RS) structure. There are four inequivalent $R$ atoms; two of these ($R3$, $R4$) are located within the PB layers and have a deformed $12-$fold coordination, analogous to the perovskites $R$NiO$_3$, as shown in Fig. \ref{CS}. The remaining two ($R1$, $R2$) are located within the RS layers with a $9-$fold coordination. Likewise, there are four distinct crystallographic sites for the Ni atoms. Borrowing the terminology of Ref. \citenum{OLAFSEN200046}, we label them Ni$1$ and Ni$2$, located in the inner layers (IL), and Ni$3$ and Ni$4$, located in the outer layers (OL) that face the $R$O layer on one side and the PB layer on the other. The various R--O and Ni--O bond distances for all three samples are given in the Supplementary Material \cite{SM}. In all three cases, the elongated Ni--O bonds are apical, pointing towards the RS layer, which is speculated to be a consequence of Ni$^{3+}$(OL)--Ni$^{2+}$(IL) charge ordering \cite{OLAFSEN200046}.
\subsection {Transport and Magnetization}
\label{ET}
\textit{Resistivity}: Fig.~\ref{Transport} shows the temperature dependence of the resistivity ($\rho$), thermopower ($S$), and thermal conductivity ($\kappa$) of all three samples. We first examine the electrical resistivity. Upon cooling below room temperature, $\rho(T)$ for all three samples decreases monotonically down to a temperature of approximately $136$~K (La), $156$~K (Pr) and $160$~K (Nd). Upon further cooling, $\rho$ increases in a step-like fashion, which can be identified with the MMT. The temperature at which the step occurs ($T_{\rm MMT}$) agrees well with the temperature at which the lattice parameters show an anomaly. The resistivity discontinuity ($\Delta \rho$) at $T_{\rm MMT}$\ appears to be first-order-like; however, no measurable thermal hysteresis at $T_{\rm MMT}$\ could be observed between the heating and cooling data.
\begin{figure}[!]
\centering
\includegraphics[width = 0.95\columnwidth]{Transport.pdf}
\caption{Panels (a), (b) and (c) show the temperature variation of the resistivity ($\rho$) and thermoelectric power ($S$) for La$_4$Ni$_3$O$_{10}$, Pr$_4$Ni$_3$O$_{10}$~and Nd$_4$Ni$_3$O$_{10}$, respectively. The temperature variation of their thermal conductivity ($\kappa$) is shown, respectively, in (d), (e) and (f).}
\label{Transport}
\end{figure}
Below the MMT, the resistivities of La$_4$Ni$_3$O$_{10}$\ and Pr$_4$Ni$_3$O$_{10}$\ continue to decrease down to a temperature $T_0$, below which an upturn, i.e., a region of negative $d\rho/dT$, persists down to $2$ K. $T_0$ is $\simeq20$ K for La$_4$Ni$_3$O$_{10}$ and $\simeq80$ K for Pr$_4$Ni$_3$O$_{10}$. These observations concerning the behavior of $\rho(T)$ in the La and Pr compounds are in good agreement with previous reports \cite{SAKURAI201327, BASSAT1998173, KUMAR2020165915, PhysRevB.101.195142}.
In Nd$_4$Ni$_3$O$_{10}$, however, $d\rho/dT \simeq 0$ down to about $100$ K and $d\rho/dT < 0$ upon further cooling, followed by a steep increase below about $50$ K. The upturn in this case is also more pronounced than for the La and Pr compounds. In previously published resistivity data for Nd$_4$Ni$_3$O$_{10}$, however, a region of negative $d\rho/dT$ extending from below $T_{\rm MMT}$\ down to about $50$ K is shown~\cite{PhysRevB.101.195142, ZHANG1995236}. Such variations may arise from slight differences in the oxygen off-stoichiometry between the various samples, assuming that other factors, such as purity and density, are the same~\cite{BASSAT1998173}. Typically, the extent of oxygen off-stoichiometry is controlled by the synthesis protocol. The Nd$_4$Ni$_3$O$_{10}$~sample used in Ref.~\citenum{PhysRevB.101.195142} was reported to have been prepared under a pressurized oxygen atmosphere of $5$ bar at $1100^\circ$C for $24$ h. Similarly, in Ref.~\citenum{ZHANG1995236}, the sample was annealed under an oxygen flow for close to $120$ h, as opposed to $24$ h in our case. From the TGA data (Fig. 1 of the Supplementary Material \cite{SM}), it is clear that our Nd sample is oxygen deficient, with an oxygen content of O$_{9.8}$ instead of the full O$_{10}$. The exact oxygen off-stoichiometry of the samples used in Refs.~\citenum{PhysRevB.101.195142} and \citenum{ZHANG1995236} is not known.
In previous studies, the low-temperature upturn has been interpreted in different ways. While it was attributed to weak localization due to inelastic electron-electron interactions in Ref. \citenum{KUMAR2020165915}, the Kondo effect was claimed to be the reason in Ref.~\citenum{PhysRevB.101.195142}. In order to resolve this issue, we replotted the low-temperature data for all three compounds on two different temperature scales: (i) a $T^{0.5}$ scale, and (ii) a $\ln T$ scale. The results are shown in Fig.~\ref{Res_fit}. Clearly, the data for all three samples are best described by a $-\sqrt{T}$ dependence, which persists down to the lowest temperature of $2$~K. The very slight departure from this scaling for Pr$_4$Ni$_3$O$_{10}$~and Nd$_4$Ni$_3$O$_{10}$~near $10$~K can be attributed to the short-range ordering of the rare-earth moments (\textit{vide infra}). On the contrary, the $-\ln T$ behavior does not describe the upturn in $\rho$ satisfactorily, or does so only over a narrow temperature range, with significant departures at low temperatures. Attempts to fit the low-temperature upturn to the Arrhenius or variable-range-hopping (VRH) models (with or without interactions) also did not give satisfactory results (not shown). This analysis clearly favors a $-\sqrt{T}$ dependence over the other functional forms commonly used to describe a low-temperature resistivity upturn. The validity of the $-\sqrt{T}$ behavior suggests that weak localization due to inelastic electron-electron scattering, typical of disordered metals and alloys \cite{RevModPhys.57.287}, is the likely cause of the resistivity upturn in all three compounds. Here, the structural disorder may take the form of the stacking faults and intergrowth whose presence is reflected in the powder X-ray diffraction. This conclusion agrees with Ref. \citenum{KUMAR2020165915}. The evidence for the Kondo effect in our data, on the other hand, is rather weak.
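Comparing the two scalings amounts to asking which linearization, $\rho$ vs $\sqrt{T}$ or $\rho$ vs $\ln T$, leaves the smaller least-squares residual. A self-contained sketch of this test on synthetic data (an assumed $\rho_0 - A\sqrt{T}$ upturn with noise; all numbers are illustrative, not our measured resistivities):

```python
import math
import random

def line_residual(xs, ys):
    """Sum of squared residuals of ys against a straight line a + b*xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

random.seed(0)
T = [2.0 + 0.5 * i for i in range(97)]                    # 2 K ... 50 K
rho = [1.0 - 0.02 * math.sqrt(t) + random.gauss(0.0, 1e-4)
       for t in T]                                        # synthetic upturn

res_sqrt = line_residual([math.sqrt(t) for t in T], rho)  # rho vs sqrt(T)
res_log = line_residual([math.log(t) for t in T], rho)    # rho vs ln(T)
# For a genuine -sqrt(T) upturn, res_sqrt is orders of magnitude smaller
# than res_log; the ln(T) linearization leaves a systematic residual.
```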
\begin{figure}[!]
\centering
\includegraphics[width = \columnwidth]{Res_fit.pdf}
\caption{Temperature ($T$) variation of the resistivity ($\rho$) for La$_4$Ni$_3$O$_{10}$~(a and b), Pr$_4$Ni$_3$O$_{10}$~(c and d) and Nd$_4$Ni$_3$O$_{10}$~(e and f), plotted on a $T^{0.5}$ scale (left panels) and a $\ln T$ scale (right panels), showing that $T^{0.5}$ fits the data better, except at low temperatures ($T < 10$ K), which can be attributed to the short-range ordering of the Pr or Nd moments (see text for details).}
\label{Res_fit}
\end{figure}
\textit{Thermopower}: The thermopower of the samples is shown in panels (a), (b) and (c) of Fig.~\ref{Transport}. The overall behavior and the range of variation of $S$ for the three samples are comparable to those previously reported \cite{SREEDHAR1994208,BASSAT1998173}. The temperature variation of $S$ parallels that of $\rho$ in the sense that at $T_{\rm MMT}$ , $S(T)$ exhibits a sharp jump. This can be understood from Mott's formula for the thermopower:
\begin{equation}
\centering
S = \frac{\pi ^2}{3} \frac{k_B ^2T}{e} \left ( \frac{\partial \ln\sigma(E)}{\partial E} \right) _{E ={E_F}}
\label{eq1}
\end{equation}
where $k_B$ is the Boltzmann constant, $\sigma (E)$ the electrical conductivity, $e$ the electronic charge, and $E_F$ the Fermi energy. Since $\sigma$ can be expressed as $\sigma = n(E)\,e\,\mu (E)$, where $n(E) = D(E)f(E)$, with $D(E)$ the density of states, $f(E)$ the Fermi-Dirac distribution function and $\mu$ the carrier mobility, eq. \ref{eq1} can be rewritten with a term in $S$ proportional to $dn/dE$ at $E_F$, i.e., the change in carrier concentration with energy at $E_F$. This quantity is expected to vary drastically due to the opening of a gap at $E_F$ below the MMT, as shown in previous ARPES studies~\cite{Li2017}.
We notice that, for $T > $~$T_{\rm MMT}$ , $|S|$ increases almost linearly with increasing temperature, as is typically seen for metals. Naively, one can use the single-parabolic-band approximation to rewrite the Mott formula of eq.~\ref{eq1} in the following form:
\begin{equation}
\centering
S = \frac{8\pi ^2 k_B^2 m^*}{3eh^2} \left ( \frac{\pi}{3n} \right ) ^\frac{2}{3} T
\label{eq3}
\end{equation}
where $m^*$ is the band effective mass of the charge carriers. By fitting $S$ above $T_{\rm MMT}$\ using $S = a_0T$, where $a_0$ is the prefactor in eq. \ref{eq3}, one can estimate $m^*$. For this purpose, we use $n$ obtained from the Hall coefficient $R_H \approx 10^{-3}$cm$^3$/C at $T = 300$~K \cite{kobayashi1996transport}. Following this procedure, we get $m^* \approx 3.0m_0$ for La$_4$Ni$_3$O$_{10}$, $\approx 3.7m_0$ for Pr$_4$Ni$_3$O$_{10}$, and $\approx 2.7m_0$ for Nd$_4$Ni$_3$O$_{10}$.
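This inversion of eq.~\ref{eq3} is a one-line computation, sketched below in SI units. The slope $a_0$ used here is a hypothetical value chosen to be consistent with $m^* \approx 3m_0$ (the fitted slopes are not quoted numerically in the text), while $n$ follows from $R_H$:

```python
import math

kB = 1.380649e-23       # Boltzmann constant, J/K
e = 1.602176634e-19     # elementary charge, C
h = 6.62607015e-34      # Planck constant, J s
m0 = 9.1093837015e-31   # free-electron mass, kg

# Carrier density from the Hall coefficient R_H ~ 1e-3 cm^3/C at 300 K
R_H = 1.0e-3 * 1e-6     # m^3/C
n = 1.0 / (R_H * e)     # ~6.2e27 m^-3

# Hypothetical thermopower slope a0 = dS/dT above T_MMT (illustrative only)
a0 = 0.06e-6            # V/K^2

# Inverting S = (8 pi^2 kB^2 m* / 3 e h^2)(pi/3n)^(2/3) T for m*:
mstar = (3 * e * h ** 2 * a0 / (8 * math.pi ** 2 * kB ** 2)
         * (3 * n / math.pi) ** (2 / 3))
print(mstar / m0)       # ~3.0, i.e. a moderately enhanced band mass
```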
\textit{Thermal conductivity}: The temperature variation of the thermal conductivity ($\kappa$) is shown in Fig.~\ref{Transport}(d--f). For all three samples, $T_{\rm MMT}$\ manifests itself in $\kappa$ as a small but clearly discernible kink. For $R =$ La and Pr, we measured the data both while heating and while cooling, and found some hysteresis around $T_{\rm MMT}$. However, since no hysteresis was found in $\rho$, it is difficult to conclude whether this is an intrinsic feature or a measurement issue. At low temperatures, $\kappa$ increases upon heating as $\sim T^3$, which suggests that in this temperature range the acoustic phonons contribute dominantly to $\kappa$. Upon further heating, a noticeable change in the functional form of $\kappa$ takes place for $T \gtrsim 50$~K: in La$_4$Ni$_3$O$_{10}$, $\kappa$ shows a broad peak in the range from $50$~K to $100$~K with a peak value of $3$ Wm$^{-1}$K$^{-1}$ around $80$~K; in Pr$_4$Ni$_3$O$_{10}$, $\kappa$ increases all the way up to $300$~K, albeit at a much slower rate for $T \gtrsim 50$ K; and in Nd$_4$Ni$_3$O$_{10}$, $\kappa$ gradually levels off at a saturated value of $\approx1$ Wm$^{-1}$K$^{-1}$ for $T > 100$~K. Thus, the behavior of $\kappa$ in all three cases is rather similar at low temperatures, but differs somewhat depending on $R$ for $T \gtrsim 50$ K.
It is interesting to note that, in spite of their reasonably high electrical conductivities (ranging from $100$ to $1000$ S cm$^{-1}$), the thermal conductivities of these nickelates, ranging from $1$ to $3$ Wm$^{-1}$K$^{-1}$, are rather low, which, in turn, implies that their lattice thermal conductivity is intrinsically very low. This may be related to their complex layered structure. The low thermal conductivity and the metal-like electrical conductivity above the MMT together indicate that the trilayer nickelates are potential oxide thermoelectric materials.
\begin{figure*}[t]
\centering
\includegraphics[width=2\columnwidth]{chi.pdf}
\caption{Zero-field-cooled (ZFC) and field-cooled (FC) susceptibility ($\chi$) of (a) La$_4$Ni$_3$O$_{10}$, (b) Pr$_4$Ni$_3$O$_{10}$, and (c) Nd$_4$Ni$_3$O$_{10}$~measured under an applied field of $5$~kOe. The inset in: (a) shows a kink in susceptibility at $T_{\rm MMT}$, (b) the low-temperature anomaly is emphasized in the first derivative plot, (c) the kink in susceptibility at $T_{\rm MMT}$.}
\label{chi}
\end{figure*}
\textit{Magnetic susceptibility}: The magnetic susceptibility ($\chi$) is shown in Fig. \ref{chi}. Our data show good agreement with previous reports \cite{PhysRevB.63.245120, kobayashi1996transport, KUMAR2020165915}. In~La$_4$Ni$_3$O$_{10}$,~$\chi$ exhibits a discernible kink at $T = 136$~K, which corresponds well with the MMT. In the temperature range $136$ K~$< T < 300$~K, $\chi(T)$ decreases upon cooling, which is uncharacteristic of a local-moment system. At low temperatures, however, it increases sharply upon cooling. The upturn in $\chi$ could be fitted using the modified Curie--Weiss (CW) law, $\chi = \chi_0 + C/(T - \theta_P)$, in the range $2$ K $ \leq T \leq 10$ K, yielding $\chi_0 = 10^{-3}$ emu mol$^{-1}$Oe$^{-1}$, the Curie constant $C = 1.7 \times 10^{-2}$ emu mol$^{-1}$Oe$^{-1}$K$^{-1}$, and the paramagnetic Curie temperature $\theta_P \approx 2.7$ K. From a previous ARPES study~\cite{Li2017}, we know that a gap of $\approx20$ meV opens up in the $d_{3z^2 - r^2}$ band below~$T_{\rm MMT}$, which may induce a localization of the $d_{3z^2 - r^2}$ electrons upon cooling, leading to the observed upturn below~$T_{\rm MMT}$. The overall magnetic behavior of La$_4$Ni$_3$O$_{10}$~thus exhibits a complex interplay of itinerant and local-moment behavior.
The magnetic susceptibility of Pr$_4$Ni$_3$O$_{10}$~is dominated by the CW behavior associated with the Pr$^{3+}$ moments. Additionally, a weak anomaly is observed around $T \simeq 5$~K. The high-temperature $\chi$ could be fitted using the modified CW law, yielding: $\chi_0 \approx2.8 \times 10^{-3}$ emu mol$^{-1}$Oe$^{-1}$, $C \approx 6.3$ emu mol$^{-1}$Oe$^{-1}$K$^{-1}$, and $\theta_p \approx -36$~K, in good agreement with the literature~\cite{BASSAT1998173}. The value of $\chi_0$ is positive and comparable in magnitude to that of La$_4$Ni$_3$O$_{10}$. The negative sign of $\theta_p$ indicates the antiferromagnetic nature of the exchange between the Pr$^{3+}$ moments. The experimental effective magnetic moment per formula unit can be estimated using the formula $\mu_\mathrm{{eff}} = \sqrt{8C}$, which gives $\approx 7.2$ $\mu_B$. Theoretically, $\mu_\mathrm{{eff}}$ per formula unit is given by $[4\mu^2_{\rm eff}({\rm Pr}) + 3\mu^2_{\rm eff}({\rm Ni})]^{1/2}$. Substituting the theoretical value $\mu_\mathrm{{eff}} = 3.58~\mu_B$ per Pr$^{3+}$ ion results in a relatively negligible moment on the Ni ions.
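The modified CW fit can be linearized: for the correct $\chi_0$, $1/(\chi-\chi_0)$ is a straight line in $T$ with slope $1/C$ and intercept $-\theta_p/C$. A self-contained sketch that recovers known parameters from noiseless synthetic data by scanning $\chi_0$ (illustrative only; not the fitting code used for the figures):

```python
def line_fit(xs, ys):
    """Least-squares line a + b*x; returns (a, b, sum of squared residuals)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b, sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Synthetic chi(T) with Pr4Ni3O10-like parameters (emu mol^-1 Oe^-1, K)
chi0_true, C_true, theta_true = 2.8e-3, 6.3, -36.0
Ts = [150.0 + 5.0 * i for i in range(31)]               # 150 K ... 300 K
chis = [chi0_true + C_true / (t - theta_true) for t in Ts]

# Scan chi0: the correct value makes 1/(chi - chi0) perfectly linear in T
best = None
for k in range(500):
    chi0 = k * 1e-5
    a, b, res = line_fit(Ts, [1.0 / (c - chi0) for c in chis])
    if best is None or res < best[0]:
        best = (res, chi0, 1.0 / b, -a / b)   # (residual, chi0, C, theta_p)

_, chi0_fit, C_fit, theta_fit = best
```

With noiseless input the scan lands on the generating parameters; with real data the same linearization underlies a least-squares fit of all three parameters simultaneously.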
Though the effective magnetic moments of free Nd$^{3+}$ and Pr$^{3+}$ ions are nearly the same, the low-temperature $\chi$ of Nd$_4$Ni$_3$O$_{10}$~is almost four times as large as that of Pr$_4$Ni$_3$O$_{10}$. This suggests the presence of a strong crystal-field effect that renders one-half of the Pr moments effectively non-magnetic at low temperature due to their singlet ground state, as shown later in the manuscript. The CW fit in this case yielded: $\chi_0 \sim 3.8\times10^{-3}$ emu mol$^{-1}$Oe$^{-1}$, $C \sim 6.3$ emu mol$^{-1}$Oe$^{-1}$K$^{-1}$, and $\theta_p = -46.5$~K. These values are in close agreement with those recently reported by Li et al.~\cite{PhysRevB.101.195142}. From $C$, the experimental $\mu_\mathrm{{eff}}$ is estimated to be $\approx 7.1\mu_B$ per formula unit, which is practically all due to the Nd$^{3+}$ ions, suggesting that the local moment associated with Ni is comparatively negligible. The magnitude of $\theta_p$ is large given the absence of any magnetic ordering, suggesting that strong magnetic frustration is at play in these nickelates. For more information, we refer the reader to Ref. \citenum{SM}.
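The moment estimates quoted above follow directly from the Curie constants; a quick numerical check (taking the free-ion value of Nd$^{3+}$ as $3.62~\mu_B$):

```python
import math

# Effective moment per formula unit from the Curie constant via
# mu_eff = sqrt(8C), with C in emu mol^-1 Oe^-1 K^-1 and mu_eff in mu_B
C = 6.3
mu_eff_fu = math.sqrt(8 * C)          # ~7.1 mu_B per formula unit

# Sharing this moment among the four Nd3+ ions per formula unit
# (Ni contribution assumed negligible, as argued in the text)
mu_per_Nd = mu_eff_fu / math.sqrt(4)  # ~3.55 mu_B per Nd3+
mu_Nd_free = 3.62                     # free-ion moment of Nd3+, in mu_B
```

The per-ion value of $\approx 3.55~\mu_B$ is close to the Nd$^{3+}$ free-ion moment, consistent with the conclusion that the Ni contribution is small.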
\subsection{Specific heat}\label{sectioncp}
\label{SH}
The specific heat ($c_p$) of the $R_4$Ni$_3$O$_{10}$ compounds exhibits a sharp anomaly at the respective MMTs; this anomaly is particularly pronounced for Pr$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$, which also show additional anomalies at low temperatures, associated with the rare-earth sublattice.
\textit{La$_4$Ni$_3$O$_{10}$}: In La$_4$Ni$_3$O$_{10}$ the specific heat anomaly occurs at $136$~K, as shown in Fig.~\ref{cp_Pr4310}. It should be emphasized that in La$_4$Ni$_3$O$_{10}$ crystallizing in the $Bmab$ space group the specific heat anomaly occurs at $\approx150$~K, while it is at $136$~K for the $P2_1/a$ phase \cite{zhang2019high}. This is consistent with our assessment of $P2_1/a$ as the majority phase in our samples. An applied magnetic field of $50$~kOe (not shown) was found to have practically no effect on this anomaly. At low temperatures, $c_p$ can be fitted using the equation $c_p = \gamma T + \beta T^3$, where $\gamma$ and $\beta$ represent the electronic and lattice contributions, respectively (see the lower inset in Fig.~\ref{cp_Pr4310}). The best fit yields: $\gamma \approx 15$~mJ mol$^{-1}$K$^{-2}$ and $\beta \approx 0.43$~mJ mol$^{-1}$K$^{-4}$. The Debye temperature ($\Theta_D$) is calculated from $\beta$ using the relation $\beta = 12\pi^4Nk_B/(5\Theta_D^3)$, which gives $\Theta_D \approx 450$~K. The values of $\Theta_D$ and $\gamma$ obtained here are comparable to those previously reported \cite{PhysRevB.63.245120}. From the value of $\gamma$ one can readily estimate the density of states at the Fermi energy, D(E$_F$), using the expression D(E$_F$)$ = 3\gamma/(\pi^2 k_B^2)$, which gives a value of $\approx3.0 \times 10^{22}$ states eV$^{-1}$ cm$^{-3}$. Now, using the carrier density $n$, one can estimate the corresponding density of states D$^{\circ}$(E$_F$) at $E_F$ within the free-electron model. Taking $n \approx6.3 \times 10^{21}$ cm$^{-3}$ \cite{kobayashi1996transport}, one gets D$^{\circ}$(E$_F$) $\approx7.6 \times 10^{21}$ states eV$^{-1}$ cm$^{-3}$.
From the ratio D(E$_F$)$/$D$^{\circ}$(E$_F$)$ = m^*/m_{\circ}$, we estimate the effective mass for La$_4$Ni$_3$O$_{10}$ to be $m^*\approx3.9\,m_{\circ}$, where $m_{\circ}$ is the bare electron mass; this is comparable to the value of $m^*$ from the thermopower ($\approx3.0\,m_{\circ}$). The small difference between the two may be due to a possible Fermi surface reconstruction below the MMT. Also, we have not accounted for the valley degeneracy, if any, which makes the effective mass derived from the density of states higher than the band effective mass by a factor $N^{2/3}$, where $N$ is the valley degeneracy. In any case, the important point is that from the value of $m^*$ one can conclude that the electronic correlations in La$_4$Ni$_3$O$_{10}$ are only modestly enhanced.\\
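The chain of estimates above ($\Theta_D$ from $\beta$, D(E$_F$) from $\gamma$, the free-electron comparison, and $m^*/m_\circ$) can be reproduced in a short numerical sketch. The molar volume $V_m \approx 124$ cm$^3$/mol used to convert to states eV$^{-1}$cm$^{-3}$ is our assumption (from the molar mass and a density of $\approx 7.2$ g/cm$^3$), not a number quoted in the text:

```python
import math

kB   = 1.380649e-23   # J/K
hbar = 1.054572e-34   # J s
me   = 9.109384e-31   # kg
eV   = 1.602177e-19   # J
R    = 8.314          # J mol^-1 K^-1

# Debye temperature from beta (17 atoms per La4Ni3O10 formula unit)
beta = 0.43e-3        # J mol^-1 K^-4
theta_D = (12 * math.pi**4 * 17 * R / (5 * beta)) ** (1 / 3)
print(round(theta_D))     # ~425 K with these rounded inputs (paper quotes ~450 K)

# Sommerfeld coefficient -> density of states at E_F
gamma = 15e-3                                   # J mol^-1 K^-2
D_mol = 3 * gamma / (math.pi**2 * kB**2)        # states J^-1 mol^-1
V_m   = 124.0                                   # cm^3/mol, assumed
D     = D_mol * eV / V_m                        # states eV^-1 cm^-3
print(f"{D:.1e}")                               # ~3.1e+22, cf. ~3.0e22 in the text

# Free-electron density of states at n = 6.3e21 cm^-3
n  = 6.3e21 * 1e6                               # m^-3
EF = hbar**2 * (3 * math.pi**2 * n) ** (2 / 3) / (2 * me)
D0 = 1.5 * 6.3e21 / (EF / eV)                   # states eV^-1 cm^-3
print(f"{D0:.1e}")                              # ~7.6e+21, as quoted
# mass enhancement (sensitive to the assumed V_m); the text quotes ~3.9
print(round(D / D0, 1))
```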
\textit{Pr$_4$Ni$_3$O$_{10}$}: Fig.~\ref{cp_Pr4310} shows the specific heat of Pr$_4$Ni$_3$O$_{10}$, where a sharp transition is observed at $156$~K, which agrees nicely with the anomaly associated with the MMT in the transport data. In this case, too, the position and shape of the anomaly remain unaffected by the application of an external magnetic field. Apart from the expected peak at the MMT, an additional broad anomaly is seen at low temperatures, centered around $T_1 = 5$~K, which coincides with the anomaly in $\chi$ at the same temperature. Interestingly, applied fields up to $50$~kOe have no significant effect on the shape or position of this anomaly, ruling out a Schottky-like origin.
\\
\begin{figure}[!]
\centering
\includegraphics[width = 0.95\columnwidth]{cp.pdf}
\caption{(a) Temperature ($T$) variation of the specific heat ($c_p$) of La$_4$Ni$_3$O$_{10}$\ and Pr$_4$Ni$_3$O$_{10}$. The upper inset in (a) highlights the presence of a broad anomaly in $c_p$ of Pr$_4$Ni$_3$O$_{10}$\ at low temperature, measured under zero field and a field of $50$~kOe. The insensitivity of this peak to the magnetic field rules out a Schottky-like origin. The lower inset shows $c_p/T$ vs. $T^2$ of La$_4$Ni$_3$O$_{10}$\ at low temperatures; the dashed line is a linear fit to the data. (b) $\frac{c_p}{T}$ against $T$. $c_{4f}$(Pr) represents the specific heat associated with the $4f$ electrons of Pr$^{3+}$, obtained by subtracting the specific heat of La$_4$Ni$_3$O$_{10}$\ from that of Pr$_4$Ni$_3$O$_{10}$\ (see text for details). The temperature variation of the entropy associated with the $4f$ electrons of Pr$^{3+}$ is shown as an inset.}
\label{cp_Pr4310}
\end{figure}
To examine the contribution of the $4f$ electrons associated with Pr to the specific heat (designated $c_{4f}^{Pr}$ in the following) at low temperatures, we subtracted the specific heat data of La$_4$Ni$_3$O$_{10}$ from that of Pr$_4$Ni$_3$O$_{10}$. Since both are isostructural, with very similar molecular weights, it is reasonable to approximate the lattice specific heat of Pr$_4$Ni$_3$O$_{10}$ with that of La$_4$Ni$_3$O$_{10}$. Furthermore, we assume that the small contribution of the Ni $3d$ electrons to the specific heat does not vary much upon going from La to Pr, at least well below the MMT. This is a reasonable approximation given that $T_{\rm MMT}$\ of these nickelates is not very sensitive to the choice of $R$, in stark contrast with the members of the $R$NiO$_3$ series, where the structural and physical properties are closely tied to the identity of the rare-earth ion \cite{Catalano_2018}.
$c_{4f}^{Pr}$ obtained using this procedure is shown in Fig.~\ref{cp_Pr4310}b (lower panel) over the temperature range $2$~K $\leq T \leq 100$~K. Interestingly, besides the peak at $T_1 = 5$~K, $c_{4f}^{Pr}$ also exhibits an additional broad peak around $T_2 = 36$~K. This new feature is likely a Schottky anomaly arising from the crystal field splitting of the lowest $J = 4$ multiplet of the Pr$^{3+}$ ions. To understand this further, we estimate the magnetic entropy ($s_{4f}$) buried under the peak at $T_1$ using the formula $s_{4f} =\int_{0}^{T} \ (c_{4f}^{Pr}/T'\ ) dT'$. For our rough estimate, we extrapolate $c_{4f}^{Pr}$ below $T = 2$~K linearly to $T = 0$~K. The calculated $s_{4f}$ is shown as an inset in the lower panel of Fig.~\ref{cp_Pr4310}. It shows a relatively steep rise up to $10$~K, but continues to increase, albeit at a slower rate, upon heating beyond $15$~K; the region between $10$~K and $15$~K marks the crossover between the faster ($T < 10$~K) and slower ($T > 15$~K) rates. The magnetic entropy released in the temperature range $T \leq 15$~K ($\approx3T_1$) is $\approx 11.5$ J mol$^{-1}$K$^{-1}$, i.e., $\approx2.9$~J Pr-mol$^{-1}$K$^{-1}$, which is approximately $\frac{1}{2}$ of $R\ln2$. This suggests that the peak at $T_1$ is likely due to the magnetic ordering of one-half of the Pr$^{3+}$ ions per Pr$_4$Ni$_3$O$_{10}$ formula unit, which is plausible since there are two types of Pr coordination in this structure: $9$--fold (RS layers) and $12$--fold (PB layers). Incidentally, Pr$^{3+}$ in the perovskite PrNiO$_3$ has a non-magnetic singlet ground state~\cite{PhysRevB.60.14857}. Since the coordination of the Pr$^{3+}$ ions in the PB layers of Pr$_4$Ni$_3$O$_{10}$ is analogous to that in PrNiO$_3$, it is reasonable to assume that they, too, have a singlet ground state with no magnetic ordering.
Therefore, we can tentatively associate the broad peak in the specific heat at $T_1$ with the magnetic ordering of the $9$--fold coordinated Pr$^{3+}$ ions. The increase in $s_{4f}$ beyond $15$~K can be attributed to the higher-lying crystal field levels, as discussed further below. A similar scenario has been previously reported for the compound Pr$_3$RuO$_7$, which has two types of Pr coordination, namely, eightfold and sevenfold, with the Pr ions in the sevenfold coordination having a crystal-field-split singlet ground state, and those in the eightfold coordination a doublet \cite{PhysRevB.72.014458}.
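The entropy argument above reduces to a one-line check: if only the two RS-layer Pr ions per formula unit order (out of four), the entropy released up to $\approx 3T_1$ should be $2R\ln2$ per formula unit, i.e., $\frac{1}{2}R\ln2$ per Pr. A sketch with the quoted numbers:

```python
from math import log

R = 8.314                 # J mol^-1 K^-1
s_released = 11.5         # J mol^-1 K^-1 per formula unit up to ~15 K (from text)

print(round(s_released / 4, 2))     # 2.88 J per Pr-mol per K
print(round(0.5 * R * log(2), 2))   # 2.88 = (1/2) R ln2: matches

# Equivalently, per *ordering* Pr ion (2 of the 4 per f.u.):
print(round(s_released / 2, 2))     # 5.75, vs. R ln2 = 5.76
```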
However, the question arises as to why the peak associated with the magnetic ordering of the Pr$^{3+}$ ions in the RS layers is not as sharp as is typically seen at long-range magnetic ordering transitions. To answer this question, one should note that the $9$--fold coordinated Pr$^{3+}$ ions in fact occupy two crystallographically distinct sites (Pr$1$ and Pr$2$), as discussed in Sec.~\ref{IIIA}. Due to minor differences in the bond angles and bond lengths around Pr$1$ and Pr$2$, the exchange integrals $J_{11}$ (within the Pr$1$ sublattice), $J_{22}$ (within the Pr$2$ sublattice), and the intersite $J_{12}$ may differ slightly, which could be one reason for the broadness of the $c_p$ anomaly at $T_1$ associated with the ordering of Pr$1$ and Pr$2$. The other reason could be that the Pr$^{3+}$ moments in an RS layer are only weakly coupled to the Pr$^{3+}$ moments in the RS layer above it (see Fig.~\ref{CS}), leading to a quasi-two-dimensional behavior.
Let us now turn our attention to the peak at $T_2$, which seems to arise from the crystal field splitting of the lowest $J$-multiplet of the Pr$^{3+}$ ions. In a previous inelastic neutron scattering study on the perovskite compound PrNiO$_3$ \cite{PhysRevB.60.14857}, it was found that the $9$--fold degenerate $J$-multiplet of the Pr$^{3+}$ ion splits into $9$ singlets due to the crystal field effect. The energy difference between the ground state singlet $(E_0^1)$ and the first excited state $(E_1^1)$ is $6.4$~meV, or approximately $70$~K. To a first approximation, the crystal field splitting of the Pr ions in the PB layers of Pr$_4$Ni$_3$O$_{10}$ can be assumed to be similar to that in PrNiO$_3$. Within this assumption, the Schottky anomaly due to the ground and first excited singlets is expected to be centered slightly below $T = (E_1^1 - E_0^1)/2k_B \approx 35$~K, which is remarkably close to the position of the peak at $T_2$. Since the second excited singlet for Pr in the PB layers is located around $E_2^1 = 15$~meV ($\approx 165$~K), it lies too high to have any significant effect on the Schottky anomaly arising from the $E_0^1$/$E_1^1$ pair.\\
It can therefore be concluded that the Pr ions in the PB layers have a singlet ground state due to the crystal field effect, with a broad Schottky anomaly associated with the ground and first excited singlet pair. On the other hand, the Pr ions in the RS layers have a crystal-field-split doublet as their ground state, and undergo magnetic ordering around $T_1$. The observed increase in $s_{4f}$ above $2\,T_1$ is partly due to the $E_0^1$/$E_1^1$ excitations associated with the Pr ions in the PB layers, and partly due to the higher-lying crystal-field-split levels of the Pr ions in the RS layers. In the absence of a detailed crystal field splitting scheme for the Pr ions in the RS layers, a quantitative analysis of the low-temperature specific heat is left as a future exercise.\\
\textit{Nd$_4$Ni$_3$O$_{10}$}: Fig.~\ref{cp_Nd4310}(a) shows the specific heat of Nd$_4$Ni$_3$O$_{10}$, which is characterized by a sharp anomaly at $T = 160$~K. The position of this anomaly is in fairly good agreement with the MMT inferred from the transport data, and is found to be independent of an applied magnetic field at least up to $50$~kOe. The low-temperature $c_p$ is characterized by an upturn below $T = 10$~K. Under an applied magnetic field, this upturn evolves into a broad peak, centered around $4$~K under $H = 50$~kOe, which progressively shifts to higher temperatures with increasing magnetic field. This behavior is reminiscent of a Schottky-like anomaly, which often arises in rare-earth based compounds due to the crystal field splitting.\\
\begin{figure}[!]
\centering
\includegraphics[width = \columnwidth]{cp_nd.pdf}
\caption{(a) Specific heat ($c_p$) of Nd$_4$Ni$_3$O$_{10}$. The lower inset shows $c_p$ in the low-temperature range for applied fields of $0$~kOe, $30$~kOe and $50$~kOe; $c_p$ of La$_4$Ni$_3$O$_{10}$\ is also shown for comparison. The upper inset shows an expanded view of the anomaly at the MMT under zero field and a field of $50$~kOe. (b) Low-temperature specific heat associated with the $4f$ electrons of Nd$_4$Ni$_3$O$_{10}$, plotted as $c_{4f}/T$ versus $T^2$; the inset shows $c_{4f}$ versus $T$ up to $T = 120$~K to highlight the pronounced Schottky anomaly near $T = 40$~K. The modified Schottky fits for three cases, $g^{(1)} = 1$, $g^{(2)} = 1$ (red), $g^{(1)} = 1$, $g^{(2)} = 2$ (blue), and $g^{(1)} = 1$, $g^{(2)} = 0.5$ (khaki), are also shown (see text for details).}
\label{cp_Nd4310}
\end{figure}
To investigate this further, we estimate the specific heat associated with the $4f$ electrons of Nd, labeled $c_{4f}^{Nd}$. The specific heat of La$_4$Ni$_3$O$_{10}$ is used as a lattice template, and also to subtract the small magnetic specific heat associated with the Ni sublattice. $c_{4f}^{Nd}$ obtained in this manner is displayed in the lower panel of Fig.~\ref{cp_Nd4310} (inset). At $T = 2$~K, it has a value of about $6.9$ J mol$^{-1}$K$^{-1}$, which decreases sharply upon heating but remains substantial ($\sim3.5$ J mol$^{-1}$K$^{-1}$) even at $T = 12$~K, and increases again upon further heating, exhibiting a broad Schottky-like anomaly near $T = 50$~K that can be attributed to the higher-lying crystal-field-split levels of the Nd$^{3+}$ ions. In NdNiO$_3$, for example, the lowest $^4I_{9/2}$ multiplet of the Nd$^{3+}$ ion splits into \textit{five} Kramers doublets, with the first excited doublet situated around $100$~K above the ground doublet \cite{bartolome1994low}. Since the Nd$^{3+}$ ions in the PB layers of Nd$_4$Ni$_3$O$_{10}$ are analogously coordinated, one can assume a similar crystal field splitting scheme for them. On the other hand, for the $9$--fold coordinated Nd$^{3+}$ ions the splitting scheme may be different. However, since Nd$^{3+}$ is a Kramers ion with $3$ electrons in the $f$-orbitals, in the absence of a magnetic field each crystal-field-split level must be at least twofold degenerate; i.e., for the $9$--fold coordinated Nd$^{3+}$ ions the ground and first excited crystal-field-split levels can have the following degeneracies:
$g_0 = 2$, $g_1 = 2$; $g_0 = 2$, $g_1 = 4$; or $g_0 = 4$, $g_1 = 2$. Thus, the ratio $\frac{g_1}{g_0}$, which appears in the expression for the Schottky anomaly, can take the values $1$, $2$ or $0.5$, respectively. Note that for the Nd$^{3+}$ ions in the PB layers this ratio will be $1$. With this as an input, one can try fitting the broad peak in $c_{4f}$ near $40$~K using the expression $c_{Sch} = c_{Sch}^{(1)} + c_{Sch}^{(2)}$, where:
\begin{equation}
c_{Sch}^{(i)} = 2R\left( \frac{\Delta_i}{T} \right)^2 \frac{g^{(i)}\exp\left(-\frac{\Delta_i}{T}\right)}{\left[1 + g^{(i)}\exp\left(-\frac{\Delta_i}{T}\right)\right]^2}
\label{eq5}
\end{equation}
In this expression, $R$ is the universal gas constant, $\Delta_i$ is the splitting between the ground and first excited states, and $g^{(i)}$ is the ratio $\frac{g_1}{g_0}$. The index $i$ labels the two types of coordination, \textit{viz.}, $i = 1$ corresponding to the $12$--fold coordination, and $i = 2$ to the $9$--fold. The prefactor $2$ accounts for the number of Nd$^{3+}$ ions per formula unit in each type of layer. The fitting results for $g^{(1)} = 1$, $g^{(2)} = 1$ (fit$1$), $g^{(1)} = 1$, $g^{(2)} = 2$ (fit$2$), and $g^{(1)} = 1$, $g^{(2)} = 0.5$ (fit$3$) are shown in the inset of Fig.~\ref{cp_Nd4310}(b). The corresponding values of $\Delta_1$ and $\Delta_2$ are: $98$~K and $98$~K (fit$1$), $150$~K and $95$~K (fit$2$), and $87$~K and $93$~K (fit$3$), respectively. Clearly, the best fit corresponds to fit$2$, which implies that the ground state of the Nd$^{3+}$ ions in the $9$--fold coordination is also a Kramers doublet, with a quartet as the first excited state.\\
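To see that the fit-$2$ parameters indeed produce a single broad anomaly in the observed range, Eq.~(\ref{eq5}) can be evaluated directly. This sketch simply sums the two Schottky terms with the best-fit values ($\Delta_1 = 150$~K, $g^{(1)} = 1$; $\Delta_2 = 95$~K, $g^{(2)} = 2$) and locates the maximum:

```python
from math import exp

R = 8.314  # J mol^-1 K^-1

def c_sch(T, delta, g):
    """One term of Eq. (5): two-level Schottky heat for 2 ions per f.u."""
    x = delta / T
    e = g * exp(-x)
    return 2 * R * x**2 * e / (1 + e) ** 2

# fit2: Delta1 = 150 K (PB layers, g=1), Delta2 = 95 K (RS layers, g=2)
temps = [5 + 0.1 * i for i in range(1150)]                      # 5 K ... 120 K
total = [c_sch(T, 150.0, 1.0) + c_sch(T, 95.0, 2.0) for T in temps]
T_peak = temps[total.index(max(total))]
print(round(T_peak))  # single broad maximum in the low-to-mid 40s K, near the fit peak
```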
Let us now turn our attention to the increase in $c_{4f}^{Nd}$ upon cooling below $T = 10$~K. In NdNiO$_3$ a similar upturn, leading to a broad peak around $T = 1.7$~K, had been previously reported \cite{bartolome1994low}. It was argued to arise from the exchange splitting of the ground state doublet. However, unlike in NdNiO$_3$, in Nd$_4$Ni$_3$O$_{10}$ the Ni moments are not ordered, and hence the Ni--Nd exchange field is almost non-existent. On the other hand, this upturn might be a precursor to an impending magnetic ordering of the Nd moments at still lower temperatures. After all, the Nd--Nd exchange, as inferred from the high-temperature Curie-Weiss fit, is about $-45$~K, which is rather high. This could then be a case closely analogous to that of Nd$_2$O$_3$, reported recently, which also exhibits a high $\theta_p \simeq -24$~K, but with long-range order setting in only below $T = 0.55$~K. Surprisingly, $c_p$ of Nd$_2$O$_3$ shows not only a sharp peak at $0.55$~K, corresponding to the long-range ordering of the Nd moments, but also a broad feature centered around $1.5$~K. The authors report that the entropy associated with this broad peak must be taken into account in order to recover the $R\ln2$ entropy expected from a ground state doublet, suggesting a complex two-step ordering of the Nd moments. The $c_p$ of Nd$_4$Ni$_3$O$_{10}$ also shows a broad peak at $T \approx 1.8$~K \cite{PhysRevB.101.195142}, which suggests that a phenomenology analogous to that of Nd$_2$O$_3$ might also be at play here. Further studies down to much lower temperatures would be interesting to explore this analogy further and to understand the true ground state of the Nd sublattice.\\
Finally, $c_{4f}^{Nd}/T$ versus $T^2$ is plotted in the lower panel of Fig.~\ref{cp_Nd4310}. The data from $12$~K to $20$~K can be fitted to a straight line whose intercept on the $y$-axis is $\sim150$ mJ mol$^{-1}$K$^{-2}$. Indeed, in Ref.~\citenum{PhysRevB.101.195142}, a high $\gamma$ value of $146$ mJ mol$^{-1}$K$^{-2}$ is reported by fitting $c_p/T$ versus $T^2$ to $\gamma + \beta T^2$ in this temperature range. However, caution must be exercised while interpreting the intercept in this case, since $c_p$ in this temperature range, as shown in the inset of Fig.~\ref{cp_Nd4310}b, is overwhelmed by the Schottky contribution arising from the crystal-field-split lowest $J$-multiplet of the Nd$^{3+}$ ions. For this reason, we believe that the erroneously high $\gamma$ value, a gross overestimation, misled the authors of Ref.~\citenum{PhysRevB.101.195142} to conclude a ``novel'' heavy-electron behavior in Nd$_4$Ni$_3$O$_{10}$. In fact, as shown in the supplementary material~\cite{SM}, if one uses the same procedure for Pr$_4$Ni$_3$O$_{10}$, a high $\gamma$ value of $\approx300$ mJ mol$^{-1}$K$^{-2}$ emerges; but we know from the work of Huangfu et al.~\cite{HuangfuPRR2020} that the resistivity of a Pr$_4$Ni$_3$O$_{10}$ single crystal decreases upon cooling at low temperature, i.e., no heavy-electron behavior is observed in the transport studies. However, as is well documented in the heavy-fermion literature, if the electronic specific heat coefficient $\gamma$ in such cases is derived by extrapolating the high-temperature specific heat data to $T=0$~K using $\gamma T + \beta T^3$, unusually large values emerge, which can be \textit{falsely} interpreted as arising from heavy-fermion behavior.
\subsection{Thermal Expansion and Gr\"{u}neisen analysis}
\label{TE}
The temperature dependence of the length changes studied by capacitive dilatometry, shown in Fig.~\ref{all-TE}, follows the volume dependence measured using the X-ray diffraction data (see Fig.~\ref{RR4310}(f1, f2 and f3)). However, while there is quantitative agreement for Pr$_4$Ni$_3$O$_{10}$, discrepancies are noticed for La$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$: the dilatometric length changes are about $25$\% and $45$\% larger, respectively, than suggested by X-ray diffraction. The data are isotropic, i.e., we find the same behavior when measuring along different directions of the polycrystalline cuboids, which excludes a simple non-random orientation effect as the cause of this discrepancy. Instead, the data suggest a non-uniform internal stress distribution within the polycrystalline samples, which in porous materials can lead to a larger thermal expansion than in the bulk~\cite{ho1998thermal}.\\
\begin{figure}[htb]
\centering
\includegraphics [width=0.95\columnwidth,clip] {All_TA.pdf}
\caption{Temperature dependence of the thermal expansion coefficient $\alpha$ of La$_4$Ni$_3$O$_{10}$, Pr$_4$Ni$_3$O$_{10}$, and Nd$_4$Ni$_3$O$_{10}$. The red line shows a polynomial estimate of the background (see text for details). The arrows mark the position of $T_{\rm MMT}$. The asterisk in the upper panel indicates an experimental artifact. The additional low-temperature peak in Pr$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$\ is likely due to crystal field excitations. The inset shows the length change ($dL/L$) around $T_{\rm MMT}$; the dotted line is a guide to the eye.} \label{all-TE}
\end{figure}
The length changes in $R_4$Ni$_3$O$_{10}$\ evidence a significant coupling of the electronic and structural degrees of freedom. Specifically, there are pronounced anomalies at $T_{\rm MMT}$\ in all studied materials. In La$_4$Ni$_3$O$_{10}$, the data in Fig.~\ref{all-TE} display a broad feature which signals a shrinking of the sample volume upon exiting the MMT phase while heating. Qualitatively, this implies a negative hydrostatic pressure dependence $dT_{\rm MMT}/dp < 0$. The minimum of the thermal expansion anomaly appears at $T_{\rm MMT}$\ $= 134$~K, suggesting either a weak first-order character of the transition or a somewhat truncated $\lambda$-like behavior similar to what is indicated by the specific heat anomaly (cf. Fig.~\ref{cp_Pr4310}).
In order to estimate the background contribution to the thermal expansion coefficient, a polynomial was fitted to the data well below and above the thermal expansion anomaly, as shown in Fig.~\ref{all-TE}~\cite{PhysRevB.65.174404}. The background $\alpha^{\rm bgr}$\ mainly reflects the phonon contribution. Due to the large size of the anomaly, using different temperature ranges for the determination of the background and/or choosing different fit functions does not change the result significantly. Subtracting $\alpha^{\rm bgr}$\ from the data yields the anomaly contribution to the thermal expansion coefficient, $\Delta \alpha$, as shown in Fig.~\ref{grueneisen}a. Recalling the discrepancy between the dilatometric and XRD length changes mentioned above for La$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$, for the following quantitative analysis of both we have scaled the dilatometric data to the XRD results. Quantitatively, our analysis then yields total anomalous length changes $\Delta_t L/L = \int \Delta \alpha dT = -4.2(9) \cdot 10^{-5}$.
\begin{table*}[!]
\setlength{\tabcolsep}{4pt}
\caption{Total anomalous length and entropy changes $\Delta_t L/L = \int \Delta \alpha dT$ and $\Delta_t S = \int \Delta c_{\rm p}^{\rm MMT}/T dT$, discontinuous length changes $\Delta_d L/L$, Gr\"{u}neisen parameter $\Gamma$ and hydrostatic pressure dependence of $T_{\rm MMT}$\ of $R_4$Ni$_3$O$_{10}$\ (see the text).}
\label{tab1}
\begin{center}
\begin{tabular}{c c c c c c}
\hline\hline
& $\Delta_t L/L$ & $\Delta_d L/L$ & $\Delta_t S$ & $\Gamma$ & $dT_{\rm MMT}/dp$ \\
\hline
\tabularnewline
La$_4$Ni$_3$O$_{10}$\ & $-4(1)\cdot 10^{-5}$ & - & 1.0(3)~J/(mol\,K) & $-4.9(9)\cdot 10^{-7}$~mol/J & $-8(2)$~K/GPa \\
Pr$_4$Ni$_3$O$_{10}$\ & $-5(1)\cdot 10^{-5}$ & $-3.1(6)\cdot 10^{-5}$& 3.1(6)~J/(mol\,K) & $-2.3(6)\cdot 10^{-7}$~mol/J & $-4(1)$~K/GPa \\
Nd$_4$Ni$_3$O$_{10}$\ & $-5.1(4)\cdot 10^{-5}$ & $-2.6(2)\cdot 10^{-5}$ & 3.5(9)~J/(mol\,K) & $-1.4(4)\cdot 10^{-7}$~mol/J & $-3(1)$~K/GPa \\
\tabularnewline
\hline\hline
\end{tabular}
\end{center}
\end{table*}
When replacing La by Pr and Nd in $R_4$Ni$_3$O$_{10}$, the anomalies in the thermal expansion at $T_{\rm MMT}$\ become significantly sharper and evidence rather discontinuous behavior (see Fig.~\ref{all-TE}). In addition, there are pronounced features at low temperatures (marked by arrows) that are associated with the rare-earth sublattice. In particular, the data clearly confirm a negative volume expansion in Nd$_4$Ni$_3$O$_{10}$\ below $\sim 20$~K. At higher temperatures, the sharp anomalies at $T_{\rm MMT}$\ $= 156$~K ($R$ = Pr) and $160$~K ($R$ = Nd) are accompanied by a regime of rather continuous length changes which extends from $T_{\rm MMT}$\ down to about $110$~K, i.e., it is significantly larger than the anomaly regime in La$_4$Ni$_3$O$_{10}$. Applying the background-determination procedure described above yields the thermal expansion anomalies displayed in Figs.~\ref{grueneisen}b and \ref{grueneisen}c for the two compounds.
\begin{figure}[htb]
\centering
\includegraphics [width=0.95\columnwidth,clip] {Grueneisen.pdf}
\caption{Anomalies in the specific heat and the negative thermal expansion coefficient of $R_4$Ni$_3$O$_{10}$\ with $R$ = La, Pr, and Nd. The anomaly size in (a) and (c) has been rescaled according to the X-ray diffraction results (see the text). Note the same scale of the thermal expansion ordinate in all graphs. } \label{grueneisen}
\end{figure}
The anomalies $\Delta \alpha$ in the thermal expansion coefficients at $T_{\rm MMT}$\ are presented in Fig.~\ref{grueneisen} together with the respective anomalies of the specific heat. The latter have been derived by estimating the background specific heat analogously to the procedure used for the thermal expansion data, using the same fitting regimes in both cases~\cite{PhysRevB.65.174404}. For each composition, the scaling of $\Delta c_{\rm p}$\ and $\Delta \alpha$ has been chosen to obtain the best overlap of the specific heat and thermal expansion data around $T_{\rm MMT}$\ and above. The fact that the thermal expansion and specific heat anomalies are proportional at $T_{\rm MMT}$\ implies a $T$-independent Gr\"{u}neisen parameter describing the ratio of the pressure and temperature dependences of the entropy changes in this temperature range, and hence the presence of a single dominant energy scale $\epsilon$~\cite{Gegenwart_2016}. In contrast, the fact that Gr\"{u}neisen scaling starts to fail at around $10$~K below $T_{\rm MMT}$\ indicates the presence of more than one relevant degree of freedom. In the temperature regime around $T_{\rm MMT}$\ and above, the corresponding scaling parameter is the Gr\"{u}neisen parameter~\cite{PhysRevB.73.214432}:
$$\Gamma = \frac{3\Delta \alpha}{\Delta c_{\rm p}} = \frac{1}{V}\left. \frac{\partial \ln \epsilon}{\partial p}\right|_T .$$
Our analysis yields the $\Gamma$ values summarized in Table~\ref{tab1}. Using the Ehrenfest relation, the obtained values of $\Gamma$ yield the hydrostatic pressure dependencies of the ordering temperature at vanishing pressure, i.e., $dT_{\rm MMT}/dp = T_{\rm MMT} V_{\rm m}\Gamma$. The results deduced using the molar volume $V_{\rm m}$ are shown in Table~\ref{tab1}.
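As a sanity check, the pressure dependences in Table~\ref{tab1} follow directly from $dT_{\rm MMT}/dp = T_{\rm MMT} V_{\rm m}\Gamma$. The molar volume is not quoted in the text; $V_{\rm m} \approx 1.2\times10^{-4}$ m$^3$/mol is our assumption (molar mass $\approx 0.9$ kg/mol, density $\approx 7.2$ g/cm$^3$):

```python
# dT_MMT/dp = T_MMT * V_m * Gamma, with Gamma values from Table I.
V_m = 1.2e-4  # m^3/mol, assumed molar volume (not quoted in the text)
samples = {
    "La": (136.0, -4.9e-7),   # (T_MMT in K, Gamma in mol/J)
    "Pr": (156.0, -2.3e-7),
    "Nd": (160.0, -1.4e-7),
}
for R_ion, (T_mmt, Gamma) in samples.items():
    dTdp = T_mmt * V_m * Gamma * 1e9   # K/GPa (1 GPa = 1e9 Pa)
    print(R_ion, round(dTdp, 1))       # ~ -8.0, -4.3, -2.7 K/GPa, cf. Table I
```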
The obtained initial slopes of the hydrostatic pressure dependence of $T_{\rm MMT}$\ are comparable to values reported from measurements of the electrical resistivity under pressure. Specifically, Wu~\textit{et~al.}\ report $-6.9$~K/GPa for La$_4$Ni$_3$O$_{10}$, which agrees nicely with the results of the Gr\"{u}neisen analysis presented above. The comparison with Nd$_4$Ni$_3$O$_{10}$\ studied in Ref.~\citenum{PhysRevB.101.195142} is, however, ambiguous. On the one hand, Li et al.~\cite{PhysRevB.101.195142} report a discontinuous shrinking of the unit cell volume at $T_{\rm MMT}$\ by $0.08$~\% while cooling, which contrasts with our data both qualitatively and quantitatively (cf.~inset of Fig.~\ref{all-TE}c). In particular, this value implies a $positive$ hydrostatic pressure dependence of about $+35$~K/GPa~\footnote{We have applied the Clausius-Clapeyron equation and used $\Delta S=2.8$~J/(mol\,K)\ as reported in Ref.~\cite{PhysRevB.101.195142}.}. However, at the same time an initial $negative$ hydrostatic pressure dependence of about $-8$~K/GPa is reported in Ref.~\citenum{PhysRevB.101.195142}, which thermodynamically contradicts the reported volume changes at $T_{\rm MMT}$\ but is reasonably consistent with the results of our Gr\"{u}neisen analysis.
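The $+35$~K/GPa figure in the footnote can be reproduced from the Clausius-Clapeyron relation $dT/dp = \Delta V/\Delta S$; again, $V_{\rm m} \approx 1.2\times10^{-4}$ m$^3$/mol is our assumed molar volume, not a number from the text:

```python
V_m = 1.2e-4            # m^3/mol, assumed molar volume
dV  = 0.0008 * V_m      # 0.08 % discontinuous volume change (Ref. value)
dS  = 2.8               # J mol^-1 K^-1 (Ref. value, see footnote)
dTdp = dV / dS * 1e9    # Clausius-Clapeyron slope in K/GPa
print(round(dTdp))      # ~34, i.e. close to the quoted +35 K/GPa
```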
The broad region of anomalous length changes between $T_{\rm MMT}$\ and $\sim100$~K signals a clear temperature variation of the Gr\"{u}neisen ratio in this temperature regime, the reason for which is not fully clear. In general, the fact that capacitance dilatometry is performed under a small but finite pressure, estimated in the case at hand to be about $0.6(1)$~MPa, may affect measurements, in particular on polycrystalline samples. The fact that the dilatometer detects a volume increase, however, renders a scenario as observed in recent studies of electronic nematicity in LaFeAsO rather unlikely, where the shear modulus $C_{66}$ is the elastic soft mode of the associated nematic transition, so that dilatometry under finite pressure results in an associated volume decrease~\cite{PhysRevB.80.094512,PhysRevLett.105.157003}. We also exclude that the variation of $\Gamma$ is associated with incompletely resolved strain from the discontinuous transition at $T_{\rm MMT}$, because the measurements have been performed upon heating and the temperature regime of the observed anomaly is very large. Instead, we conclude the presence of a competing ordering phenomenon, as suggested by the failure of Gr\"{u}neisen scaling~\cite{Gegenwart_2016}. Intriguingly, a temperature regime of unexpected behavior has also been detected in the out-of-plane resistivity $\rho_{\perp}$ of a Pr$_4$Ni$_3$O$_{10}$\ single crystal where, in contrast to the in-plane resistivity, an increase of $\rho_{\perp}$ upon cooling, i.e., insulating behavior, is observed over a large temperature regime \cite{PhysRevB.101.104104}. It is tempting to trace this intermediate temperature regime of $d\rho_{\perp}/dT<0$, i.e., a metal-to-insulator-like behavior of $\rho_{\perp}$ at $T_{\rm MMT}$, back to the competing degree of freedom which manifests itself in the thermal expansion coefficient and the change of the Gr\"{u}neisen parameter shown in Fig.~\ref{grueneisen}b.
\section{Summary \& Conclusions}
\label{SC}
We investigated the trilayer nickelates $R_4$Ni$_3$O$_{10}$~($R =$ La, Pr and Nd), which are the $n = 3$ members of the RP series. We focused our investigations on the following important aspects of the physical properties of these compounds: (i) what is the correct space group characterizing the room-temperature crystal structure, (ii) is there a structural phase transition at $T_{\rm MMT}$, (iii) how do various thermodynamic quantities, including the resistivity, magnetic susceptibility, specific heat, thermopower, thermal conductivity and thermal expansion coefficient, vary across the MMT, and (iv) what is the magnetic behavior of the rare-earth sublattices in Pr$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$.
In order to address these questions, we synthesized high-quality samples using the sol-gel method. These samples were then subjected to high-resolution synchrotron powder X-ray diffraction at the ALBA synchrotron source, both at $300$~K and at lower temperatures down to $90$~K. A thorough analysis confirms that these compounds crystallize in the monoclinic $P2_1/a$, $Z = 4$ phase. The absence of newly emerging peaks or splitting of the existing peaks ruled out any lowering of the lattice symmetry accompanying this transition. The thermal expansion coefficient also captured the anomaly at $T_{\rm MMT}$\ rather vividly. From the analysis of $\Delta \alpha$, we conclude that the MMT anomaly becomes more first-order-like as we go to smaller lanthanide ionic radii (and thereby larger distortions from the perovskite structure). This was further corroborated by the temperature variation of various physical properties.
The resistivity data of all samples exhibit a sharp jump or discontinuity at the respective $T_{\rm MMT}$, and an upturn, i.e., $d\rho/dT < 0$, at low temperatures. We show that this upturn is likely a consequence of weak localization arising from inelastic electron-electron interactions. This result is in agreement with Ref.~\onlinecite{KUMAR2020165915}, where the resistivity of La$_4$Ni$_3$O$_{10}$\ has been analyzed in considerable detail. In particular, we excluded a Kondo-like mechanism in the Ni sublattice leading to $d\rho/dT < 0$, as has been proposed recently \cite{PhysRevB.101.195142}. This result is further strengthened by the thermopower and specific heat experiments, from which we found the band effective mass of the charge carriers to range from around $3\,m_\circ$ to $4\,m_\circ$, indicating that the electronic correlations are at best moderately enhanced.
The magnetic ground state of the $R$-ions in Pr$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$\ is shown to be rather interesting. First, the Curie-Weiss temperature ($\theta_p$) for both compounds is of the order of $-40$ K; however, long-range ordering remains suppressed down to temperatures as low as $5$ K for Pr$_4$Ni$_3$O$_{10}$\ and below $2$ K for Nd$_4$Ni$_3$O$_{10}$, suggesting the presence of strong magnetic frustration, which may be related to their layered structure that renders the $R^{3+}$ moments located in the RS layers quasi-two-dimensional. From the analysis of $c_p$ and $\chi$, we infer that in Pr$_4$Ni$_3$O$_{10}$\ the Pr$^{3+}$ ions located in the PB layers exhibit a crystal-field-split non-magnetic singlet ground state, while those located in the RS layers show a ground-state doublet with antiferromagnetic ordering below about $5$ K.
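A standard way to quantify the suppression of ordering described above is the frustration index $f=|\theta_p|/T_{\rm N}$; $f\gg 1$ signals strong frustration. The sketch below simply evaluates it for the values quoted in the text ($\theta_p\approx-40$ K; ordering near $5$ K for Pr$_4$Ni$_3$O$_{10}$\ and no ordering above $2$ K for Nd$_4$Ni$_3$O$_{10}$, the latter yielding only a lower bound on $f$):

```python
def frustration_index(theta_cw_K, t_order_K):
    """f = |theta_CW| / T_order; f >> 1 indicates strong frustration."""
    return abs(theta_cw_K) / t_order_K

f_pr = frustration_index(-40.0, 5.0)      # Pr4Ni3O10 orders near 5 K
f_nd_min = frustration_index(-40.0, 2.0)  # Nd4Ni3O10: no order above 2 K -> lower bound
print(f_pr, f_nd_min)
```

With $f\approx 8$ (Pr) and $f\gtrsim 20$ (Nd), both compounds sit well above the value of order unity expected for an unfrustrated magnet.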
In Nd$_4$Ni$_3$O$_{10}$, on the other hand, all four Nd ions in the formula unit exhibit a Kramers doublet ground state, with the first excited state being a doublet for one-half of the Nd ions and a quartet for the remaining half, giving rise to a pronounced Schottky-type anomaly centered around $T = 35$ K. The low-temperature specific heat of both Pr$_4$Ni$_3$O$_{10}$\ and Nd$_4$Ni$_3$O$_{10}$\ is found to be dominated by the Schottky-like contributions arising from the crystal field excitations associated with the lowest $J$--multiplet of the rare-earth ions, which tends to \textit{falsely} inflate the value of $\gamma$.
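As an illustration of the Schottky contribution mentioned above, the following sketch evaluates the textbook two-level Schottky specific heat for a doublet ground state. The $85$ K splitting is a hypothetical value chosen only so that the anomaly peaks near $35$ K; it is not a fitted crystal-field parameter of these compounds:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def schottky_c(T, gap_K, g0=2, g1=2):
    """Two-level Schottky specific heat per mole of ions.
    gap_K: level splitting in kelvin; g0, g1: level degeneracies."""
    d = gap_K / np.asarray(T, dtype=float)
    r = g1 / g0
    e = np.exp(-d)
    return R * d**2 * r * e / (1.0 + r * e) ** 2

# Hypothetical doublet-doublet splitting placing the peak near 35 K.
T = np.linspace(2.0, 150.0, 500)
c = schottky_c(T, gap_K=85.0)
print(f"peak at T = {T[np.argmax(c)]:.1f} K, c_max = {c.max():.2f} J mol^-1 K^-1")
```

In the actual analysis the measured $c_p$ is a sum of such terms over the inequivalent rare-earth sites (doublet and quartet excited states enter through $g_1$), which is why the low-temperature $\gamma$ is artificially inflated if this contribution is not subtracted.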
In summary, the rare-earth sublattice in the $R_4$Ni$_3$O$_{10}$\ compounds with $R=$ Pr and Nd exhibits very intriguing behavior, which should be examined further using specific heat measurements down to much lower temperatures as well as elastic and inelastic neutron scattering. With the possibility of single-crystal growth, the interesting low-temperature behavior of these compounds as shown here should attract significant further interest.\\
\section*{Acknowledgments}
The powder x-ray diffraction experiments were performed at the MSPD-BL04 beamline at the ALBA Synchrotron with the collaboration of the ALBA staff. The authors thank the Department of Science and Technology, India (SR/NM/Z-07/2015) for financial support and the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR) for managing the project. SS acknowledges financial support from SERB (WMR/2016/003792). RK acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through KL 1824/13-1 and by the BMBF via SpinFun (no. 13XP5088).
Long duration Gamma--ray bursts (GRBs) are thought to originate from the collapse of massive stars.
Several lines of evidence point toward this conclusion, ranging from the association with type Ib/c supernovae (Woosley
\& Bloom 2006 and references therein) to the occurrence of GRBs in the most luminous parts of their host galaxies
(Svensson et al. 2010). The ambient medium in which GRBs explode is expected to be denser than the interstellar medium
and typical of star-forming regions.
The values of the absorbing column densities as measured in X--rays are high (Galama \& Wijers 2001;
Stratta et al. 2004; Campana et al. 2006; Watson et al. 2007).
An analysis of the intrinsic column densities of all {\it Swift} GRBs observed within 1,000 s of the trigger and with a known redshift has been
carried out by Campana et al. (2010). The selected sample, consisting of 93 GRBs, was however biased.
The distribution of the intrinsic X--ray absorption column density is consistent with a
lognormal distribution with mean $\log N_H(z)=21.9\pm0.1$ ($90\%$ confidence level).
This distribution is in agreement with the expected distribution for GRBs occurring randomly in giant molecular
clouds similar to those within the Milky Way (Campana et al. 2006; Reichart \& Price 2002).
Looking at the distribution of X--ray column densities versus redshift, previous studies found a lack of non-absorbed GRBs at high redshift and
a lack of heavily absorbed GRBs at low redshift (Campana et al. 2010); this might partly be the outcome of biases present in the sample.
The former might be explained in terms of more compact and dense star-forming
regions in the young Universe (or by a sizable contribution from intervening systems). The latter might be interpreted as due to
a change in the dust properties with redshift, with GRBs at redshift $z\lower.5ex\hbox{\ltsima} 2-3$ having a higher dust-to-gas ratio for
the same X--ray column density (e.g. a different grain size or composition). This would naturally account for the lack of heavily (X--ray) absorbed GRBs at small redshifts.
In the optical the presence of a large amount of absorbing material is much less clear, since a large number of GRB afterglows are not affected
by absorption (Kann, Klose \& Zeh 2006; Schady et al. 2010; Zafar et al. 2011; but see Greiner et al. 2011 and Covino et al. 2011, in preparation).
In this respect photoionisation of dust grains can play an important role (Lazzati, Perna \& Ghisellini 2001; Lazzati, Covino \& Ghisellini;
Draine \& Hao 2002; Campana et al. 2007).
Moreover, the absorbing column densities measured in the optical
based on damped Lyman-$\alpha$ absorption are a factor of $\sim 10$ lower than those measured in the X--ray band
(Campana et al. 2010; Fynbo et al. 2009). This has been interpreted as due to photoionization of the surrounding medium
by GRB photons (Campana et al. 2006, 2007; Watson et al. 2007; Campana et al. 2010; Schady et al. 2011).
The presence of a large amount of material is also attested by the existence of `dark' GRBs. There are several definitions of dark GRBs.
The simplest is that they do not show an optical counterpart (Fynbo et al. 2001). Since this definition clearly depends on the sensitivity
(and availability) of the instruments used for the follow-up, a more general definition is needed.
Based on the predictions of the fireball model (M\'esz\'aros \& Rees 1997) one can require that the optical to
X--ray spectral index $\beta_{OX}$ (i.e. the slope between the fluxes in the
$R$-band and at 3 keV at 11 hr after the burst) should be lower than 0.5 (Jakobsson et al. 2004).
This criterion singles out optically sub-luminous bursts, i.e. bursts fainter than expected from the fireball model.
Alternatively, with the advent of {\it Swift}, X--ray spectral slopes were more easily available and a somewhat different definition
was put forward by van der Horst et al. (2009) for which $\beta_{OX}$ is shallower than $\beta_X - 0.5$.
The darkness of a GRB can have different causes: the burst can be intrinsically optically faint; its optical light can be absorbed
by intervening material within the host galaxy; or the burst can lie at high redshift, so that its optical light is absorbed by the intergalactic medium.
Several works have addressed this topic in the {\it Swift} era when a number of facilities allowed a quick follow-up of the
afterglows. The fraction of dark bursts has been estimated to be $\sim 25-50\%$ according to Jakobsson's definition
(Melandri et al. 2008; Roming et al. 2009; Cenko et al. 2009; Greiner et al. 2011; Melandri et al. 2011). It is now believed that the faint
optical afterglow emission of dark bursts might be due to a moderate intrinsic extinction at moderate redshifts.
Greiner et al. (2011) estimated a $\sim 20\%$ contribution from high redshift ($z\lower.5ex\hbox{\gtsima} 4-5$) GRBs to the dark population only.
Salvaterra et al. (2011, see also Nava et al. 2011) selected a complete sample of bright GRBs based on optical observability (Jakobsson et al. 2006) and
{\it Swift} BAT peak flux $P\ge 2.6$ ph s$^{-1}$ cm$^{-2}$. The sample consists of 58 GRBs and is $90\%$ complete in spectroscopic redshift
(with $95\%$ of the GRBs having some constraint on the redshift).
This sample offers the opportunity to study in an unbiased way the distribution of the X--ray column densities and
its relation to GRB darkness.
The paper is organised as follows. In Section 2 we derive the X--ray absorbing column densities for the Salvaterra et al. (2011) sample
and briefly describe how the slope $\beta_{OX}$ has been computed for each GRB of the sample.
In Section 3 we discuss our findings and in Section 4 we draw our conclusions.
\section{X--ray absorbing column densities and spectral slope between optical and X--ray bands}
The intrinsic column densities were computed using the automated data products provided by
the {\it Swift}/XRT GRB spectra repository (Evans et al. 2009). The archive has been recently updated by
reprocessing all the on-line GRB data products using the latest software and calibration (Evans 2011).
Therefore some of these values supersede those reported in Campana et al. (2010).
In Table 1 we list the column density value at the host galaxy redshift $N_H(z)$.
These are obtained by fitting an absorbed power-law model to the data
in a time interval chosen to avoid strong spectral variations.
The absorption component is modeled with {\tt PHABS} within {\tt XSPEC}.
We consider two components: a Galactic one (held fixed) and one at the redshift of the GRB ({\tt ZPHABS}).
The Galactic column density for each burst is provided by the Leiden/Argentine/Bonn (LAB) Survey of Galactic HI
(Kalberla et al. 2005).
For those GRBs without redshift, we fix the redshift of the free absorption component to zero,
so that the resulting value provides a lower limit to the intrinsic column density.
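For these redshift-less bursts, the allowed intrinsic columns shown in Fig. 2 follow from scaling the $z=0$ fit value with $(1+z)^{2.6}$ (Galama \& Wijers 2001). A minimal sketch of this scaling, taking as input the Table 1 lower limit for GRB 060904A ($2.07\times10^{21}$ cm$^{-2}$):

```python
def nh_intrinsic(nh_z0, z):
    """Scale a column density fitted with the absorber at z = 0 to a candidate
    absorber redshift z, using the approximate (1+z)**2.6 behaviour of the
    effective photoelectric absorption (Galama & Wijers 2001)."""
    return nh_z0 * (1.0 + z) ** 2.6

nh0 = 2.07e21  # cm^-2, z = 0 lower limit for GRB 060904A (Table 1)
for z in (1.0, 2.0, 4.0):
    print(f"z = {z}: N_H(z) > {nh_intrinsic(nh0, z):.2e} cm^-2")
```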
In the next sections we will also use the $\beta_{OX}$ index. This index is computed as the spectral index
connecting the $R$-band flux and the (unabsorbed) 3 keV flux at 11 hr from the trigger. The collection of indexes and limits for the
burst in our sample can be found in Melandri et al. (2011).
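As a concrete illustration of this definition, the sketch below evaluates $\beta_{OX}$ from two flux densities and applies both darkness criteria. The flux values and the adopted $R$-band effective wavelength ($658$ nm) are illustrative assumptions, not entries of the sample:

```python
import math

H_PLANCK = 6.626e-34   # J s
EV = 1.602e-19         # J
C_LIGHT = 2.998e8      # m / s

def beta_ox(f_nu_R, f_nu_3keV, lambda_R_nm=658.0):
    """Spectral slope beta_OX between the R band and 3 keV,
    assuming F_nu ~ nu**(-beta); both flux densities in the same units."""
    nu_R = C_LIGHT / (lambda_R_nm * 1e-9)
    nu_X = 3e3 * EV / H_PLANCK
    return math.log10(f_nu_R / f_nu_3keV) / math.log10(nu_X / nu_R)

def dark_jakobsson(b_ox):
    return b_ox < 0.5                 # Jakobsson et al. (2004)

def dark_van_der_horst(b_ox, beta_x):
    return b_ox < beta_x - 0.5        # van der Horst et al. (2009)

b = beta_ox(10.0, 1.0)                # illustrative fluxes only
print(round(b, 2), dark_jakobsson(b), dark_van_der_horst(b, 1.0))
```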
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{nhdist.eps}
\end{center}
\caption{Column density ($N_H$) as derived from the X--ray data of the GRBs in our complete sample. The dotted
(orange) histogram shows the distribution for GRBs with $z<1.7$ and the dashed (light blue) histogram shows the distribution for
GRBs with $z>1.7$.}
\label{metal}
\vskip -0.1truecm
\end{figure}
\begin{table*}
\caption{Column densities for bright {\it Swift} GRBs in the Salvaterra et al. (2011) sample.}
\footnotesize{
\begin{tabular}{ccccccc}
GRB & $z$ & $N_H(z)$ &$\Gamma$ & $N_H(\rm Gal)$ & Time interv. & Comments\\
& & ($10^{21}\rm \ cm^{-2}$) & &($10^{20}\rm \ cm^{-2}$) & (s) & (exp. time ks)\\
\hline
050318 &1.44 &$0.7^{+0.6}_{-0.5}$ &$1.97\pm0.07$& 1.9 & $3000-7\times 10^4$ & PC (23.5) \\
050401 &2.90 &$18.2^{+2.3}_{-2.3}$ &$1.84\pm0.05$& 4.4 & $200-2000$ & WT (1.4) \\
050416A &0.65 &$6.4^{+0.9}_{-0.5}$ &$2.12\pm0.06$& 2.4 & $400-3\times 10^5$ & PC (92.3) \\
050525A &0.61 &$2.0^{+0.9}_{-0.9}$ &$2.12\pm0.16$& 9.1 & $6000-3\times10^5$ & PC (18.7) \\
050802 &1.71 &$1.6^{+1.6}_{-1.5}$ &$1.86\pm0.12$& 1.9 & $400-2000$ & PC (1.5)\\
050922C &2.20 &$3.5^{+2.2}_{-2.1}$ &$2.24\pm0.11$& 5.4 & $400-3\times 10^5$ & PC (7.3)\\
060206 &4.05 &$14.2^{+16.0}_{-9.8}$ &$2.04\pm0.28$& 0.9 & $700-10^5$ & PC (4.5) \\
060210 &3.91 &$24.6^{+2.9}_{-3.5}$ &$2.09\pm0.05$& 6.1 & $3000-10^6$ & PC (76.8) \\
060306 &3.50 &$97^{+47}_{-36}$ &$1.86\pm0.26$& 3.4 & $300-925$ & PC (0.6) \\
060614 &0.13 &$0.33^{ +0.18}_{-0.13}$&$1.90\pm0.06$& 1.9 & $4400-10^6$ & PC (28.4)\\
060814 &1.92 &$20.9^{+2.3}_{-2.2}$ &$1.94\pm0.07$& 2.3 & $250-500$ &WT (0.2)\\
060904A &-- &$>2.07$ &$3.53\pm0.43$& 1.3 & $185-225$ & WT (0.1)\\
060908 &1.88 &$6.2^{+2.8}_{-2.5}$ &$2.12\pm0.18$& 2.3 & $200-10^5$ & PC (11.6) \\
060912A &0.94 &$3.2^{+1.5}_{-1.3}$ &$2.02\pm0.18$& 3.9 & $200-2000$ & PC (1.7)\\
060927 &5.47 & $<36$ &$1.94\pm0.16$& 4.6 & $100-10^4$ & PC (3.5) \\
061007 &1.26 &$5.1^{+0.3}_{-0.3}$ &$1.85\pm0.02$& 1.8 & $90-2000$ & WT (1.9)\\
061021 &0.35 &$0.73^{+0.2}_{-0.1}$ &$1.99\pm0.03$& 4.5 & $3000-3\times10^5$ & PC (83.1)\\
061121 &1.31 &$5.4^{+0.8}_{-0.5}$ &$1.88\pm0.05$& 4.0 & $600-3\times10^5$ & PC (43.1) \\
061222A &2.09 &$44.8^{+5.4}_{-3.0}$ &$2.11\pm0.06$& 9.0 & $3\times 10^4-2\times 10^5$ & PC (49.5)\\
070306 &1.50 &$26.8^{+4.7}_{-4.3}$ &$1.88\pm0.12$& 2.9 & $10^4-4\times10^4$ & PC (6.0)\\
070328 & -- &$2.4^{+0.2}_{-0.2}$ &$2.24\pm0.05$& 2.6 & $350-1000$ & WT (0.7)\\
070521 &1.35 &$54^{+13}_{-11}$ &$1.78\pm0.20$& 2.9 & $3000-10^4$ & PC (1.9)\\
071020 &2.15 &$6.8^{+1.7}_{-1.6}$ &$1.87\pm$0.07& 5.1 & $70-300$ & WT (0.2)\\
071112C &0.82 &$1.4^{+0.5}_{-0.5}$ &$1.82\pm0.08$& 7.4 & $90-280$ & WT (0.2)\\
071117 &1.33 &$10.9^{+2.1}_{-3.1}$ &$2.05\pm0.18$& 2.3 & $2900-6.2\times 10^4$& PC (19.0)\\
080319B &0.94 &$1.7^{+0.1}_{-0.1}$ &$1.78\pm0.02$& 1.1 & $800-2000$ & WT (1.7) \\
080319C &1.95 &$5.5^{+2.5}_{-2.3}$ &$1.61\pm0.10$& 2.2 & $200-3\times10^5$ & PC (4.1) \\
080413B &1.10 &$1.9^{+0.6}_{-0.4}$ &$1.97\pm0.07$& 3.1 &$6\times 10^3-10^6$ &PC (40.6)\\
080430 &0.77 &$3.5^{+0.8}_{-0.6}$ &$2.03\pm0.10$& 1.0 & $5000-3\times10^5$ & PC (10.9)\\
080602A &1.40 &$6.7^{+2.4}_{-2.1}$ &$2.01\pm0.15$& 3.5 & $200-800$ & PC (0.6)\\
080603B &2.69 &$7.3^{+2.9}_{-2.7}$ &$1.84\pm0.10$& 1.2 & $100-250$ & WT (0.2)\\
080605 &1.64 &$9.0^{+0.9}_{-0.9}$ &$1.76\pm0.04$& 6.7 & $100-800$ & WT (0.6)\\
080607 &3.04 &$22.8^{+5.7}_{-4.2}$ &$2.14\pm0.10$& 1.7 & $4000-6\times10^4$ & PC (19.7)\\
080613B & -- &$>0.5$ &$1.31\pm0.12$& 3.2 &$105-190$ & WT (0.1)\\
080721 &2.59 &$10.4^{+0.6}_{-0.6}$ &$1.81\pm0.02$& 6.9 & $100-2000$ & WT (1.3)\\
080804 &2.20 &$1.4^{+1.9}_{-1.1}$ &$1.82\pm0.09$& 1.6 & $200-3\times10^5$ & PC (12.6)\\
080916A &0.69 &$8.0^{+3.2}_{-1.9}$ &$2.26\pm0.15$& 1.8 & $2\times10^4-10^6$ & PC (172.9)\\
081007 &0.53 &$4.8^{+0.9}_{-1.2}$ &$2.04\pm0.18$& 1.4 & $5000-4\times10^5$ & PC (9.7)\\
081121 &2.51 &$1.9^{+1.6}_{-1.5}$ &$1.93\pm0.06$& 4.0 & $3000- 2\times10^6$ & PC (36.3)\\
081203A &2.05 &$4.5^{+1.1}_{-1.0}$ &$1.74\pm0.05$& 1.7 & $200-600$ & WT (0.4)\\
081221 &2.26 &$26.1^{+3.8}_{-3.6}$ &$2.04\pm0.09$& 2.0 & $300-500$ & WT (0.2)\\
081222 &2.77 &$6.0^{+1.1}_{-1.0}$ &$1.96\pm0.04$& 2.2 & $60-1000$ & WT (0.8)\\
090102 &1.55 &$5.0^{+2.5}_{-2.2}$ &$1.73\pm0.13$& 4.1 & $700-7\times10^4$ &PC (1.8)\\
090201 & $<4$ &$>3.85$ &$2.01\pm0.16$& 4.9 &$3000-6000$ & PC (1.7) \\
090424 &0.54 &$4.1^{+0.6}_{-0.5}$ &$1.94\pm0.08$& 1.9 & $2000-3\times10^6$ & PC (14.9)\\
090709A &$<3.5$&$>1.82$ &$2.04\pm0.13$& 6.6 & $4000-1.5\times10^4$ & PC (4.4) \\
090715B &3.00 &$7.6^{+2.5}_{ -2.8}$ &$2.01\pm0.10$& 1.3 & $4000-10^5$ & PC (28.8)\\
090812 &2.45 &$12.0^{+7.2}_{-6.6}$ &$2.10\pm0.23$& 2.3 & $10^4-7\times10^4$ & PC (7.7) \\
090926B &1.24 &$13.9^{+1.6}_{-1.5}$ &$1.97\pm0.08$& 1.9 & $130-300$ & WT (0.2)\\
091018 &0.97 &$1.0^{+0.9}_{-0.8}$ &$1.84\pm0.12$& 2.8 & $150-1000$ & PC (0.9)\\
091020 &1.71 &$5.8^{+1.7}_{-1.6}$ &$1.82\pm0.10$& 1.4 & $200-400$ & WT (0.2)\\
091127 &0.49 &$0.76^{+0.35}_{-0.5}$ &$1.80\pm0.11$& 2.8 & $6000-2\times10^4$ & PC (2.0)\\
091208B &1.06 &$8.3^{+4.3}_{-3.4}$ &$2.16\pm0.27$& 4.9 & $200-600$ & PC (0.4)\\
100615A &-- &$>10.1$ &$2.43\pm0.32$& 3.3 & $200-2000$ & PC (1.4)\\
100621A &0.54 &$18.0^{+1.2}_{-1.1}$ &$2.86\pm0.09$& 2.9 & $130-200$ & WT (0.1)\\
100728B &2.11 &$4.3^{+3.1}_{-2.5}$ &$2.08\pm0.18$& 6.2 & $4000-3\times 10^4$ & PC (8.5)\\
110205A &2.22 &$3.5^{+1.9}_{-1.4}$ &$2.11\pm0.09$& 1.6 & $5000-5\times 10^4$ & PC (18.8)\\
110503A &1.61 &$3.53^{+0.67}_{-0.64}$ &$1.80\pm0.05$& 2.6 & $200-700$ & WT (0.5)\\
\hline
\end{tabular}
}
\noindent Errors and upper limits are at $90\%$ confidence level obtained with a $\Delta \chi^2=2.71$.
\end{table*}
\section{Discussion}
\subsection{Distribution of absorbing column densities}
The distribution of the X--ray column densities is shown in Fig. 1. The overall distribution can be well described by a Gaussian
with (logarithmic) mean 21.7 and standard deviation 0.5 (the median of the distribution is very close to the mean).
This is consistent with the column density distribution obtained
considering all GRBs with known redshift (Campana et al. 2010). The distribution of column densities as a function of redshift
is shown in Fig. 2. It is apparent from Fig. 2 that there is a trend of increasing column densities with redshift. However, the lack of
mildly absorbed GRBs at high redshift is not statistically overwhelming. To see if there is a real difference we
cut the sample at the mean redshift ($z=1.7$), and make a Kolmogorov-Smirnov (KS) test on the two distributions.
We obtain a probability of $9\%$ that the two distributions come from the same
parent population. No firm conclusions can therefore be drawn.
A cut at $z=1$ results in a $0.5\%$ KS probability. This might indicate a difference in the intrinsic absorption column densities
(see below).
With respect to a non-complete sample (Campana et al. 2010), we note that the high-column density region at low redshift is
here more populated. This indicates a bias present in the non-complete sample:
it is difficult to obtain the redshift of highly-absorbed low-redshift GRBs (likely due to a higher optical absorption).
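The split-sample comparison described above amounts to a two-sample KS test on the $\log N_H(z)$ values. A self-contained sketch (pure NumPy, with the standard asymptotic p-value approximation of Press et al.; the mock numbers are drawn around the sample mean of 21.7 dex and are not the measured columns):

```python
import numpy as np

def ks_2samp(a, b):
    """Two-sample KS statistic with the standard asymptotic p-value."""
    a, b = np.sort(a), np.sort(b)
    allv = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, allv, side="right") / len(a)
    cdf_b = np.searchsorted(b, allv, side="right") / len(b)
    d = float(np.max(np.abs(cdf_a - cdf_b)))
    if d == 0.0:
        return 0.0, 1.0
    ne = len(a) * len(b) / (len(a) + len(b))
    lam = (np.sqrt(ne) + 0.12 + 0.11 / np.sqrt(ne)) * d
    k = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, float(np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(0)
# Mock log N_H(z) values: a mild shift between the z < 1.7 and z > 1.7
# halves stands in for the observed trend with redshift.
log_nh_low_z = rng.normal(21.6, 0.5, 30)
log_nh_high_z = rng.normal(21.9, 0.5, 25)
d, p = ks_2samp(log_nh_low_z, log_nh_high_z)
print(f"D = {d:.2f}, p = {p:.3f}")
```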
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{zdist_inter.eps}
\end{center}
\caption{Column density distribution versus redshift. Error bars have been computed from X--ray spectral fitting and
are at $90\%$ confidence level. Upper limits are at $90\%$ confidence level. Light grey lines represent the possible column density values
for GRBs without redshift, scaling the $z=0$ value with $(1+z)^{2.6}$ within the range of allowed redshifts (Galama \& Wijers 2001).
Darker (orange) region limited by a continuous line marks the mean contribution to the observed column density $N_H(z)$
resulting from intervening systems as a function of redshift. The lighter (light orange) region limited by a dotted line
marks the maximal $90\%$ line of sight contribution of intervening systems to the observed column density.
The dashed line marks the mean contribution to $N_H(z)$ in the case of doubling the population of
intervening systems as suggested by the comparison of quasars and GRB studies of intervening systems (see text, section 3.1).}
\label{nhz}
\end{figure}
The lack of mildly-absorbed high redshift GRBs remains. This has been interpreted as due to the presence of
absorbing matter along the line of sight not related to the GRB host galaxy. This can be either diffuse (i.e. located in diffuse structures like the
filaments of the Warm-Hot Intergalactic medium - WHIM, Behar et al. 2011) or concentrated into intervening systems (i.e. galaxies or clouds along
the line of sight, Campana et al. 2010).
Based on quasar studies (Wolfe et al. 2005; P\'eroux et al. 2003; Noterdaeme et al. 2009; Prochaska, O'Meare \& Worseck 2010),
we can evaluate the contribution of the intervening systems
to the observed column density by simulating their distribution.
In particular, we assumed a number distribution of intervening systems, based on damped (and sub-damped) Lyman-$\alpha$ systems,
with a redshift dependence $n(z)\propto (1+z)^{0.26}$ for $z\le2.3$
and $n(z)\propto (1+z)^{2.78}$ for $z> 2.3$. For the column density distribution of the intervening systems we adopted
a three-segment broken power law (measuring column densities in cm$^{-2}$ units):
for $\log{N_H}<18.2$ we consider $n(\log{N_H})\propto\log{N_H}^{-1.9}$,
for $18.2\le \log{N_H}\le20.6$ $n(\log{N_H})\propto \log{N_H}^{-0.8}$,
and for $\log{N_H}>20.6$ $n(\log{N_H})\propto\log{N_H}^{-1.4}$
(Wolfe et al. 2005; P\'eroux et al. 2003).
The contribution of each intervening system is weighted considering the system as if it was at the redshift of the GRB, i.e. weighting its
intrinsic column density as $((1+z_{GRB})/(1+z_{DLA}))^{2.6}$ (where $z_{GRB}$ is the redshift of the GRB and $z_{DLA}$ the redshift of the intervening system).
We set up a Monte Carlo simulation considering systems in the $\log(N_H/{\rm cm^{-2}})=17.2-22$ range and probing 10,000 lines of sight.
Because the GRB column densities were calculated assuming solar metallicity, for comparison we also assumed solar metallicities
for the evaluation of the contribution of the intervening systems. It is important to note that GRB hosts have typically sub-solar metallicities,
in which case the assumption of solar metallicity would lead to the equivalent column density being underestimated.
All data points in Fig. 2 can therefore usually be considered lower limits on $N_H$.
The $90\%$ confidence envelope of the simulated lines of sight as a function of redshift is shown in Fig. 2 (dotted line).
The average contribution is also shown (continuous line and dark orange region).
These calculations clearly provide just an indicative estimate and are subject to uncertainties related to
modeling the number density evolution in redshift of these systems (e.g. Ribaudo, Lehner \& Howk 2011).
It is apparent from Fig. 2 that a GRB lying along the `average' line of sight experiences an increase of the
intrinsic column density due to intervening systems that is too small to match the observed increase of GRB column densities with redshift
(even if a less favorable line of sight might fully account for the observed increase at high redshifts).
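The Monte Carlo just described can be sketched as follows. The normalization of $n(z)$ (systems per unit redshift) and the reading of the three column density segments as power laws in $N_H$ are our own simplifying assumptions; the exponents, the $\log(N_H/{\rm cm^{-2}})=17.2-22$ range, and the $((1+z_{GRB})/(1+z_{DLA}))^{2.6}$ weighting follow the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def dndz(z):
    """n(z) ~ (1+z)^0.26 for z <= 2.3 and ~ (1+z)^2.78 beyond,
    made continuous at the break (arbitrary overall normalization)."""
    z = np.asarray(z, dtype=float)
    low = (1.0 + z) ** 0.26
    high = 3.3 ** (0.26 - 2.78) * (1.0 + z) ** 2.78
    return np.where(z <= 2.3, low, high)

EDGES = 10.0 ** np.array([17.2, 18.2, 20.6, 22.0])   # cm^-2
SLOPES = np.array([1.9, 0.8, 1.4])                   # n(N_H) ~ N_H^-s per segment

SEG_W = np.array([(EDGES[i + 1] ** (1 - s) - EDGES[i] ** (1 - s)) / (1 - s)
                  for i, s in enumerate(SLOPES)])
SEG_P = SEG_W / SEG_W.sum()

def sample_nh(size):
    """Inverse-CDF sampling from the broken power law, segment by segment."""
    seg = rng.choice(3, size=size, p=SEG_P)
    n1, n2, s = EDGES[seg], EDGES[seg + 1], SLOPES[seg]
    u = rng.uniform(size=size)
    return (n1 ** (1 - s) + u * (n2 ** (1 - s) - n1 ** (1 - s))) ** (1 / (1 - s))

def sample_z(zmax, size):
    """Rejection sampling from dndz (monotonically increasing in z)."""
    fmax = float(dndz(zmax))
    out = np.empty(0)
    while out.size < size:
        z = rng.uniform(0.0, zmax, 4 * size)
        out = np.concatenate([out, z[rng.uniform(0.0, fmax, 4 * size) < dndz(z)]])
    return out[:size]

def sightline_nh(z_grb, norm=0.2):
    """Total equivalent N_H from intervening systems along one sightline;
    'norm' (systems per unit redshift at z = 0) is a free parameter here."""
    zz = np.linspace(0.0, z_grb, 201)
    rate = norm * np.sum(dndz(0.5 * (zz[1:] + zz[:-1]))) * (zz[1] - zz[0])
    k = rng.poisson(rate)
    if k == 0:
        return 0.0
    nh, zs = sample_nh(k), sample_z(z_grb, k)
    # weight each system as if it sat at the GRB redshift
    return float(np.sum(nh * ((1.0 + z_grb) / (1.0 + zs)) ** 2.6))

totals = np.array([sightline_nh(4.0) for _ in range(2000)])
print(f"mean {totals.mean():.2e}  90th pct {np.percentile(totals, 90):.2e} cm^-2")
```

Averaging many such sightlines gives the mean curve of Fig. 2; the 90th percentile plays the role of the maximal envelope.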
Studies of strong intervening systems in quasar and GRB spectra have shown a larger occurrence of intervening systems along the latter lines of sight
(e.g. Prochter et al. 2006). These systems are mainly identified through strong, rest-frame equivalent width (EW) $> 1$ \AA\ Mg II
$\lambda\lambda$ 2796, 2803 absorption lines. In a study with a large sample of GRBs, Vergani et al. (2009)
confirm the presence of this effect and set the discrepancy to a factor of $\sim 2$.
Even if the reason for this discrepancy is still not fully understood, Budzynski \& Hewett (2011) show that
this discrepancy is likely related to a lack of quasars heavily absorbed along their line of sight.
Given this observational result, we artificially increased the number of intervening systems based on quasar studies by a factor of 2.
The resulting mean contribution derived from the intervening systems is shown in Fig. 2 with a dashed line. This line nicely follows the
increase of the intrinsic column density with redshift, providing a plausible explanation for this effect.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{dark.eps}
\end{center}
\caption{Column density ($N_H$) as a function of the spectral index $\beta_{OX}$ (Melandri et al. 2011) for the GRBs in our
complete sample. Limits on the column density and $\beta_{OX}$ are indicated with arrows.
The dashed line for $\beta_{OX}=0.5$ divides dark and non-dark GRBs according to Jakobsson et al. (2004).
Open squares (filled circles) indicate dark (non-dark) GRBs according to van der Horst et al. (2009).
GRBs with lower limits on the column density (i.e. without redshift information) are not shown.}
\label{darkbeta}
\end{figure}
\subsection{Relation between X--ray absorption and GRB `darkness'}
Given our complete sample of bright GRBs we investigate the connection between the X--ray absorption and the GRB darkness.
The GRB darkness can be caused by several effects that can be divided into two main classes: $i)$ intrinsic, i.e. due
to some physical mechanism hampering the optical emission, or $ii)$ environmental, i.e. due to intrinsic absorption and/or to high redshift.
Considering the $\beta_{OX}$ values computed in Melandri et al. (2011), there are 19 GRBs in our complete sample that can be
classified as dark according to Jakobsson et al. (2004) and 12 according to van der Horst et al. (2009).
Out of them, 4 (common to both definitions) do not have any redshift information.
In Fig. 3 we show the distribution of $\beta_{OX}$ as a function of the intrinsic column density for the GRBs of our complete sample
(we did not include the few GRBs without a redshift determination).
It is apparent that for bursts with $\beta_{OX}<0.5$ (i.e. dark according to Jakobsson's definition) all but three (all with $0.45<\beta_{OX}<0.5$)
have an intrinsic column density larger than $\log(N_H/{\rm cm^{-2}})\lower.5ex\hbox{\gtsima} 22$. This same $N_H$ limit is valid for all the bursts that are dark
according to the van der Horst's definition (Fig. 3).
Comparing the intrinsic column density distributions of dark and non-dark GRBs (taking only those with known redshift, i.e. 15 dark versus 38 non-dark GRBs for
Jakobsson's definition and 8 versus 45 for van der Horst's), we obtain a KS probability
of $2\times 10^{-6}$ ($4.8\,\sigma$) for the Jakobsson's definition and $1\times 10^{-5}$ ($4.4\,\sigma$) for the van der Horst's definition
(the lower value is due to the smaller number of dark GRBs).
We also note that if the 4 GRBs classified as dark and lacking redshift information had a redshift in line with the mean of the sample,
they would have an intrinsic column density $N_H(z)\lower.5ex\hbox{\gtsima} 10^{22}$ cm$^{-2}$.
These results indicate that the intrinsic absorption as evaluated in the X--ray band is highly correlated with the darkness of a GRB.
\section{Conclusions}
Salvaterra et al. (2011) selected a complete sample of bright GRBs with a high degree of completeness in redshift.
In a series of papers we investigate the impact of this sample on GRB studies. Here we focus on the properties of the sample
with respect to the intrinsic X--ray absorption.
We found that the intrinsic column density distribution of our complete sample is consistent with the total distribution of column
densities presented in Campana et al. (2010). The means of the two distributions are in fact $21.7\pm0.5$ and $21.9\pm0.1$,
respectively.
This likely indicates that the GRB brightness, as well as any other bias present in the total sample of GRBs with redshift (e.g. dust), does
not heavily influence the total distribution of intrinsic column densities.
At variance with the total distribution presented in Campana et al. (2010), we see in the complete sample presented here
that the region at high column densities and low redshift is now more populated by GRBs. This clearly reveals a bias present in the non-complete
sample, where this region is heavily underpopulated due to the lack of a redshift determination of dark bursts.
Even if not statistically compelling, there is an increase of the intrinsic column density with redshift (this is more apparent in the
full sample of GRBs with redshift, Campana et al. 2010). We evaluate the mean contribution to $N_H(z)$ due to the intervening systems
along the GRB line of sight. We find that, if we take into account the larger number of observed systems affecting the line of
sight of GRBs with respect to the quasar one (Vergani et al. 2009), the population of
sub-Damped Lyman-$\alpha$ and Damped Lyman-$\alpha$ systems can account for the increase with redshift of $N_H(z)$.
It would be interesting to confirm this directly through the study of high-$z$ GRB lines of sight.
Unfortunately this effect plays a significant role at very high redshift, where the number of GRB afterglow spectra is very low.
It is indeed difficult to measure absorption from Lyman-$\alpha$. However, the column density of neutral gas can still be traced by weakly ionised metal lines
(e.g. Zn II, Si II), which in fact is a more logical method of comparing absorption in X--rays and the optical, given that the X--rays are absorbed by metals and
not neutral hydrogen (e.g. Schady et al. 2011).
The X-shooter instrument mounted at the ESO/VLT offers the best opportunities for these studies.
Making use of the $\beta_{OX}$ computed by Melandri et al. (2011), we found a strong correlation between GRB darkness
and X--ray absorbing column densities. Since metals are a key ingredient for dust production (Draine 2003), our findings are consistent
with a picture in which the darkness of a GRB is in most cases due to absorption by circumburst material.
\section{Acknowledgments}
SC thanks Darach Watson and Phil Evans for useful conversations. This work has been supported by ASI grant I/004/11/0.
This work made use of data supplied by the UK {\it Swift} Science Data Centre at the University of Leicester.
\section{Introduction}
In a distributed multi-agent optimization problem, the inputs ({\it e.g.}, functions, variables, data) are spread over multiple computing agents ({\it e.g.}, nodes, processors) that are connected over some network, and the agents are required to communicate with each other to solve the problem. Distributed optimization has attracted a lot of attention due to the need to develop efficient methods for solving large-scale optimization problems \cite{boyd2011admm}, such as in deep neural network applications \cite{li2014scaling}. Decentralized optimization methods are algorithms where the agents seek to find a solution through local interactions (dictated by the network connection) with their neighboring agents. Decentralized methods have several advantages over centralized methods, which require all agents to communicate with a central coordinator, such as their robustness to failure and their privacy. Moreover, decentralized methods have been shown to enjoy lower communication cost compared to centralized methods under certain practical scenarios \cite{lian2017can,assran2019stochastic,chen2021accelerating}.
In this work, we consider a network (graph) of $n$ collaborative agents (nodes) that are interested in solving the following distributed stochastic optimization problem:
\begin{equation} \label{min_learning_prob}
\begin{aligned}
\minimize_{x \in \real^d} \quad f(x)=\frac{1}{n} \sum_{i=1}^n f_i(x), \quad f_i(x)\define\Ex_{\xi_i} [F_i(x;\xi_i)].
\end{aligned}
\end{equation}
In the above formulation, $F_i:\real^d \rightarrow \real$ is a smooth non-convex function privately known by agent $i$, and $\Ex_{\xi_i}$ denotes the expectation over the random variable $\xi_i$ ({\it e.g.}, random data samples), taken with respect to some local distribution. The above formulation is known as the consensus formulation since the agents share a common variable, on which they need to agree \cite{boyd2011admm}. We consider {\em decentralized methods} where the agents aim to find a solution of \eqref{min_learning_prob} through local interactions (each agent can only send and receive information to and from its immediate neighbors).
Two important measures of the performance of distributed (or decentralized) methods are the linear speedup and the transient time. A decentralized method is said to achieve {\em linear speedup} if the gradient computational complexity needed to reach a certain accuracy decreases linearly with the network size $n$. The {\em transient time} of a distributed method is the number of iterations needed to achieve linear speedup. A common method for solving problem \eqref{min_learning_prob} is the decentralized/distributed stochastic gradient descent (\textsc{Dsgd}) method \cite{ram2010distributed,cattivelli2010diffusion}, where each agent employs a local stochastic gradient descent update and a local gossip step (there are several variations based on the order of the gossip step, such as diffusion or consensus methods \cite{cattivelli2010diffusion
,chen2013distributed,nedic2009distributed}). \textsc{Dsgd} is simple to implement; moreover, it has been shown to achieve linear speedup asymptotically \cite{lian2017can}. This implies that the convergence rate of \textsc{Dsgd} asymptotically matches the network-independent rate of centralized (also known as parallel) stochastic gradient descent (\textsc{Psgd}) with a central coordinator. While attractive, \textsc{Dsgd} suffers from an error or bias term caused by the heterogeneity between the minimizers of the local cost functions ({\it e.g.}, heterogeneous data distributions across the agents) \cite{chen2013distributed,yuan2016convergence}. The existence of such a bias term slows down the convergence of \textsc{Dsgd}, and hence enlarges its transient time.
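To make the update concrete, a minimal NumPy sketch of \textsc{Dsgd} on a ring of $n$ agents is given below. The quadratic costs $f_i(x)=\tfrac12\|x-b_i\|^2$ and the additive Gaussian gradient noise are illustrative stand-ins for the general stochastic costs in \eqref{min_learning_prob}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 8, 5, 0.1

# Doubly stochastic gossip matrix W for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

B = rng.normal(size=(n, d))            # f_i(x) = 0.5 * ||x - b_i||^2

def stoch_grad(X, sigma=0.1):
    """Exact local gradients plus Gaussian noise (the stochastic part)."""
    return (X - B) + sigma * rng.normal(size=X.shape)

X = np.zeros((n, d))                   # row i holds agent i's iterate
for _ in range(500):
    X = W @ X - eta * stoch_grad(X)    # gossip step + local SGD step

x_bar = X.mean(axis=0)
print(np.linalg.norm(x_bar - B.mean(axis=0)))   # distance to global minimizer
```

Because the gossip step only mixes with immediate neighbors, the agents hover near the global minimizer $\bar b$ but retain a residual disagreement driven by the heterogeneity of the $b_i$ — the bias term discussed above.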
Several bias-correction methods have been proposed to remove the bias of \textsc{Dsgd} such as EXTRA \cite{shi2015extra}, Exact-Diffusion (ED) (a.k.a D$^2$ or NIDS) \cite{yuan2019exactdiffI,li2017nids,yuan2020influence,tang2018d}, and gradient-tracking (GT) methods \cite{xu2015augmented,di2016next,qu2017harnessing,nedic2017achieving}. Although these methods have been extensively studied, their convergence properties have not been fully understood as we now explain. Under convex stochastic settings, ED/D$^2$ is theoretically shown to improve upon the transient time of \textsc{Dsgd} \cite{yuan2021removing,huang2021improving}, especially under sparse topologies. However, existing non-convex results imply that ED/D$^2$ has worse transient time compared to \textsc{Dsgd} for sparse networks \cite{tang2018d}. Moreover, the transient time of GT-methods are theoretically worse than \textsc{Dsgd} under sparse networks even under convex settings \cite{pu2021distributed}. These existing theoretical results imply that under non-convex settings, bias-correction methods can suffer from worse transient time compared to \textsc{Dsgd}. However, empirical results suggest that both ED/D$^2$ and GT methods outperform \textsc{Dsgd} under sparse topologies (without any acceleration) \cite{lu2021optimal,xin2021improved,tang2018d,xin2021fast}. This phenomenon is yet to be explained.
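For concreteness, here is a sketch of one standard (adapt-then-combine) gradient-tracking recursion on a ring network; with deterministic gradients the tracker $y_i$ removes the heterogeneity bias and every agent converges to the exact global minimizer. The quadratic costs are again illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eta = 8, 5, 0.1

# Doubly stochastic ring gossip matrix.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

B = rng.normal(size=(n, d))
grad = lambda X: X - B                 # exact gradients of f_i(x) = 0.5||x - b_i||^2

X = np.zeros((n, d))
Y = grad(X)                            # tracker initialized with local gradients
for _ in range(500):
    X_new = W @ (X - eta * Y)          # adapt-then-combine step
    Y = W @ Y + grad(X_new) - grad(X)  # Y tracks the network-average gradient
    X = X_new

# every agent, not just the average, reaches the global minimizer
print(np.abs(X - B.mean(axis=0)).max())
```

With the same setup, plain \textsc{Dsgd} would retain a steady-state disagreement proportional to the heterogeneity of the $b_i$; the tracker drives it to numerical zero, which is precisely the bias removal these methods are designed for.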
In this work, we provide a novel unified convergence analysis of several decentralized bias-correction methods, including both ED/D$^2$ and GT methods, under {\em non-convex} settings. We establish refined and improved convergence rate bounds over existing results. Moreover, our results show that bias-correction methods such as Exact-Diffusion/D$^2$ and GT methods have better network topology dependent bounds compared to \textsc{Dsgd}. We also study these methods under the Polyak-{\L}ojasiewicz (PL) condition \cite{Pol63,karimi2016linear} and provide refined bounds over the existing literature. Before stating our main contributions, we go over the related works.
\subsection{Related Works}
There exist many works that study decentralized optimization methods under deterministic settings (full knowledge of gradients) -- see \cite{nedic2009distributed,shi2015extra,
nedic2017achieving,scutari2019distributed,scaman2019optimal
,alghunaim2019linearly,arjevani2020ideal,alghunaim2019decentralized,xu2021distributed} and references therein. For example, the works \cite{alghunaim2019decentralized,xu2021distributed,sundararajan2018canonical,sundararajan2020analysis,jakovetic2019unification} propose unified frameworks that cover several state-of-the-art-methods and study their convergence, albeit under deterministic and convex settings. This work considers {\em nonconvex} costs and focuses on the {\em stochastic learning} setting where each agent has access to a random estimate of its gradient at each iteration. For this setting, \textsc{Dsgd} is the most widely studied and understood method \cite{bianchi2012convergence,chen2015onthepart1,chen2015onthepart2
,sayed2014nowbook,tatarenko2017non,swenson2020distributed
,pu2019sharp,vlaski2019distributedII,jiang2017collaborative
,lian2017can,koloskova2019decentralized
,assran2019stochastic,koloskova2020unified,wang2021cooperative}. Under non-convex settings, the transient time of \textsc{Dsgd} is on the order of $O(n^3/(1-\lambda)^4)$ \cite{assran2019stochastic,lian2017can,koloskova2020unified}, where $1 - \lambda \in (0,1)$ denotes the network spectral gap that measures the connectivity of the network topology ({\it e.g.}, it goes to zero for sparse networks). As a result, improving the dependence of the convergence rate on the network topology quantity $1-\lambda$ is crucial to reducing the transient time of decentralized methods.
The severe dependence on the network topology in \textsc{Dsgd} is caused by the data heterogeneity between different agents \cite{yuan2020influence,koloskova2020unified}. Consequently, the dependence on the network topology can be ameliorated by removing the bias caused by data heterogeneity. For example, the transient time of ED/D$^2$ has been shown to have enhanced dependence on the network topology compared to \textsc{Dsgd} \cite{yuan2021removing,huang2021improving} under convex settings. However, it is unclear whether bias-correction methods can achieve the same results for non-convex settings \cite{tang2018d,zhang2019decentralized,lu2019gnsd,
lu2020decentralized,xin2021improved,yi2020primal}. In fact, the established transient times of bias-correction methods such as ED/D$^2$ and GT in the literature are even worse than that of \textsc{Dsgd}. For instance, the best known transient time for both ED/D$^2$ and GT is on the order of $O(n^3/(1-\lambda)^6)$ \cite{tang2018d,xin2021improved}, which is worse than \textsc{Dsgd} with transient time $O(n^3/(1-\lambda)^4)$. These counter-intuitive results naturally motivate us to study whether ED/D$^2$ and GT can enjoy an enhanced dependence on the network topology in the non-convex setting. It is also worth noting that the dependence on network topology established in existing GT references is worse than that of \textsc{Dsgd} even for convex scenarios \cite{pu2021distributed}. This work provides refined and enhanced convergence rates for both ED/D$^2$ and GT (as well as other methods such as EXTRA) under the non-convex setting.
In this work, we also study the convergence properties of decentralized methods under the Polyak-{\L}ojasiewicz (PL) condition \cite{Pol63}. The PL condition can hold for non-convex costs, yet it can be used to establish convergence rates similar to those obtained under strong convexity \cite{karimi2016linear}. For strongly-convex settings, the works \cite{huang2021improving,yuan2021removing} showed that the transient time of ED/D$^2$ is on the order of $O(n/(1-\lambda))$. These are the best available network bounds for decentralized methods so far under strongly-convex settings. It is still unclear whether bias-correction methods can achieve similar bounds to \textsc{Dsgd} under the PL condition. For example, the work \cite{xin2021improved} shows that under the PL condition, GT methods have transient time on the order of $O(n/(1-\lambda)^3)$.
We remark that this work only considers non-accelerated decentralized methods with a {\em single} gossip round per iteration. It has been shown that combining GT methods with {\em multiple} gossip rounds can further improve the dependence on network topology \cite{lu2021optimal,xin2021stochastic}, and this technique can also be incorporated into our studied algorithm and its analysis.
However, it is worth noting that the utilization of multiple gossip rounds in decentralized stochastic methods might suffer from several limitations. First, it requires knowledge of the quantity $\lambda$ to decide the number of gossip rounds per iteration, which might not be available in practice. Second, a multiple-gossip-rounds update can take even more time than a global averaging operation. For example, the experiments provided in \cite[Table 17]{chen2021accelerating} indicate that, under a certain practical scenario, one gossip step requires half or a third of the communication overhead of a centralized \textsc{Ring-Allreduce} operation \cite{patarasuk2009bandwidth}, which conducts global averaging. This implies that decentralized methods with as few as two or three gossip rounds per iteration can be more costly than global averaging. Third, the theoretical improvements brought by multiple gossip rounds rely heavily on gradient accumulation. Such gradient accumulation can easily lead to large batch-sizes, which are empirically and theoretically found to be harmful to generalization performance on unseen data \cite{you2017large,gurbuzbalaban2021heavy}.
\begin{table}[t]
\renewcommand{\arraystretch}{2}
\begin{center}
\caption{\small Comparison with existing {\em non-convex} convergence rates highlighting the network quantities. Here, $\varsigma_0^2 = \tfrac{1}{n} \sum_{i=1}^n \big\| \grad f_i(0)-\grad f(0) \big\|^2$ and $\varsigma^2$ satisfies $\tfrac{1}{n} \sum_{i=1}^n \big\| \grad f_i(x)-\grad f(x) \big\|^2 \leq \varsigma^2$ for all $x \in \real^{d}$ for \textsc{Dsgd}. The quantity $\lambda=\rho(W-\tfrac{1}{n} \one \one\tran)$ is the mixing rate of the network where $W$ is the network combination matrix. Compared with GT methods our result assumes that $W$ is symmetric and positive-semidefinite. }
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{cccc} \toprule
{\sc method} & {\sc Work} & {\sc Convergence rate} & {\sc Transient time} \\ \midrule
\textsc{Dsgd} & \cite{koloskova2020unified}
& $
O\left(\frac{1}{\sqrt{nK}}+\frac{\lambda^{2/3}}{(1-\lambda)^{1/3} K^{2/3}}+\frac{\lambda^{2/3}\varsigma^{2/3}}{(1-\lambda)^{2/3} K^{2/3}}\right)$
& $O\left(\frac{n^3}{(1-\lambda)^4}\right)$ \vspace{2mm}
\\ \hline
\multirow{2}{*}{ED/D$^2$} & \cite{tang2018d} & $O \left( \frac{ 1 }{ \sqrt{n K}}
+ \frac{n \lambda^2 }{ (1-\lambda)^3 K}
+ \frac{ n \varsigma_0^2}{ (1-\lambda)^2 K^2} \right)$ & $O\left(\frac{n^3}{(1-\lambda)^6}\right)$ \\
& \textbf{This work} & $O \left( \frac{ 1 }{ \sqrt{n K}} + \frac{n \lambda^2 }{ (1-\lambda) K}
+ \frac{ n \lambda^2 \varsigma_0^2}{ (1-\lambda)^2 K^2} \right)$
& $O\left( \frac{n^3}{(1-\lambda)^2} \right)$
\vspace{2mm} \\ \hline
\multirow{2}{*}{ATC-GT} & \cite{xin2021improved} &
$ O \left( \frac{1}{ \sqrt{n K}}
+ \frac{n \lambda^2 }{(1-\lambda)^3 K}
+ \frac{ \lambda^4 \sum_{i=1}^n \| \grad f_i(0) \|^2}{ (1-\lambda)^3 K^2} \right)$
&
$O\left(\frac{n^3}{(1-\lambda)^6}\right)$ \\
& \textbf{This work} & $O \left( \frac{1}{ \sqrt{n K}}
+ \frac{n \lambda^4 }{ (1-\lambda)K}
+ \frac{n \lambda^4 }{ (1-\lambda)^4 K^2}
+ \frac{ n \lambda^4 \varsigma_0^2}{ (1-\lambda)^3 K^2} \right)$
& $O\left( \max \left\{\frac{n^3}{(1-\lambda)^2} ,~\frac{n}{(1-\lambda)^{8/3}} \right\}\right)$
\vspace{2mm} \\ \bottomrule
\end{tabular}
\end{adjustbox}
\label{table_non_convex}
\end{center}
\end{table}
\subsection{Main Contributions}
Our main contributions are formally listed below.
\begin{itemize}
\item We unify the analysis of several well-known decentralized methods under {\em non-convex} and {\em stochastic} settings. In particular, we study the convergence properties of a general primal-dual algorithmic framework, called stochastic unified decentralized algorithm ({\bfseries \footnotesize SUDA}), which includes several existing methods such as EXTRA, ED/D$^2$, and GT methods as special cases.
\item We provide a novel analysis technique for these types of methods. In particular, we employ several novel transformations to {\bfseries \footnotesize SUDA}~that are key to establishing our refined convergence rate bounds (see Remark \ref{remark:gt_difference}). Our analysis provides improved network dependent bounds for the special cases of {\bfseries \footnotesize SUDA}~such as \textsc{ED/D$^2$} and \textsc{GT} methods compared to existing best known results. In addition, the established transient times of ED/D$^2$ and ATC-GT have improved network topology dependence compared to \textsc{Dsgd} -- see Table \ref{table_non_convex}.
\item We also study the convergence properties of {\bfseries \footnotesize SUDA}~under the PL condition. When specializing {\bfseries \footnotesize SUDA}~to ED/D$^2$, we achieve a network dependent bound matching the best known bounds established under strongly-convex settings. When specializing {\bfseries \footnotesize SUDA}~to GT methods, we achieve an improved network dependent bound compared to current results, even under strong-convexity. Table \ref{table_pl} compares the network dependent transient time bounds under the PL and strongly-convex settings.
\end{itemize}
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\caption{ \small Comparison with existing network dependent transient times under both strongly-convex and PL condition settings. Here, the quantity $\lambda=\rho(W-\tfrac{1}{n} \one \one\tran)$ is the mixing rate of the network where $W$ is the network combination matrix. Compared with GT methods our result assumes that $W$ is symmetric and positive-semidefinite. }
\begin{tabular}{cccc} \toprule
{\sc method} & {\sc Work} & {\sc Assumption} & {\sc Transient time} \\ \midrule
\textsc{Dsgd} & \cite{koloskova2020unified}
& Strongly-convex
& $O\left(\frac{n}{(1-\lambda)^2}\right)$
\vspace{1mm} \\ \hline
\multirow{2}{*}{ED/D$^2$} & \cite{yuan2021removing,huang2021improving} & Strongly-convex
& $O\left(\frac{n}{1-\lambda}\right)$ \\
& \textbf{This work} & PL condition
& $O\left(\frac{n}{1-\lambda}\right)$
\vspace{1mm} \\ \hline
\multirow{3}{*}{GT} & \cite{pu2021distributed} &
Strongly-convex
&
$O\left(\frac{n}{(1-\lambda)^3}\right)$\\ & \cite{xin2021improved} &
PL condition
&
$O\left(\frac{n}{(1-\lambda)^3}\right)$
\\
& \textbf{This work} & PL condition
& $O\left( \max \left\{\frac{n}{1-\lambda} ,~\frac{1}{(1-\lambda)^{4/3}} \right\}\right)$
\\ \bottomrule
\end{tabular}
\label{table_pl}
\end{center}
\end{table}
{\bf Notation.} Vectors and scalars are denoted by lowercase letters. Matrices are denoted using uppercase letters. We use $\col\{a_1,\ldots,a_n\}$ (or $\col\{a_i\}_{i=1}^n$) to denote the vector that stacks the vectors (or scalars) $a_i$ on top of each other. We use $\diag\{d_1,\ldots,d_n\}$ (or $\diag\{d_i\}_{i=1}^n$) to denote a diagonal matrix with diagonal elements $d_i$. We also use $\bdiag\{D_1,\ldots,D_n\}$ (or $\bdiag\{D_i\}_{i=1}^n$) to denote a block diagonal matrix with diagonal blocks $D_i$. The vector of all ones with size $n$ is denoted by $\one_n$ (or $\one$ when the size is known from context). The inner product of two vectors $a$ and $b$ is denoted by $\langle a,b \rangle$. The Kronecker product operation is denoted by $\otimes$. For a square matrix $A$, we let $\rho(A)$ denote the spectral radius of $A$, which is the largest absolute value of its eigenvalues. Upright bold symbols ({\it e.g.}, ${\mathbf{x}},{\mathbf{f}},{\mathbf{W}}$) are used to denote augmented network quantities.
\section{General Algorithm Description}
In this section, we describe the deterministic form of the studied algorithm and list several specific instances of interest to us.
\subsection{General Algorithm} \label{sec:uda}
To describe the algorithm, we introduce the network quantities:
\begin{subequations}
\begin{align}
{\mathbf{x}}& \define \col\{x_1,\dots,x_n\} \in \real^{dn}, \\
{\mathbf{f}}({\mathbf{x}})& \define \sum_{i=1}^n f_i(x_i).
\end{align}
\end{subequations}
We also introduce the matrix ${\mathbf{B}} \in \real^{dn \times dn}$ that satisfies
\begin{align} \label{null_B}
{\mathbf{B}} {\mathbf{x}}&=\zero \iff x_1=x_2=\dots=x_n.
\end{align}
Using the previous definitions, the general algorithmic framework can be described as follows. Set an arbitrary initial estimate $\mathbf{x}^{0} \in \real^{dn}$ and set $\mathbf{y}^{0}=\zero$. Repeat for $k=0,1,\ldots$
\begin{subequations} \label{UDA_alg}
\begin{align}
\mathbf{x}^{k+1} &= {\mathbf{A}} \big({\mathbf{C}} \mathbf{x}^{k}-\alpha \grad \mathbf{f}(\mathbf{x}^{k}) \big) - \mathbf{B} \mathbf{y}^{k}, \label{x_UDA} \\
\mathbf{y}^{k+1} &= \mathbf{y}^{k}+ \mathbf{B} \mathbf{x}^{k+1}. \label{dual_UDA}
\end{align}
\end{subequations}
Here, $\alpha>0$ is the step size (learning rate), and the matrices ${\mathbf{A}} \in \real^{dn \times dn}$ and ${\mathbf{C}} \in \real^{dn \times dn}$ are doubly stochastic matrices that are chosen according to the network combination matrix introduced next.
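To make the recursion concrete, the following Python/NumPy sketch simulates \eqref{UDA_alg} directly (an illustration only, not part of the analysis). The toy quadratic costs $f_i(x)=\tfrac{1}{2}\|x-b_i\|^2$, the lazy ring combination matrix, the step size, and the particular admissible choice ${\mathbf{A}}={\mathbf{W}}$, ${\mathbf{B}}=({\mathbf{I}}-{\mathbf{W}})^{1/2}$, ${\mathbf{C}}={\mathbf{I}}$ (the Exact-Diffusion choice discussed in the next subsection) are all assumptions of the example; the helper \texttt{psd\_sqrt} is our own.

```python
import numpy as np

def psd_sqrt(M):
    # Symmetric PSD square root via eigendecomposition: M^{1/2} = V sqrt(D) V^T.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

n, d = 8, 2
W = 0.5 * np.eye(n)                 # lazy ring: symmetric, doubly stochastic, PSD
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

rng = np.random.default_rng(0)
b = rng.standard_normal((n, d))
grad = lambda x: x - b              # stacked gradients: row i is grad f_i(x_i) = x_i - b_i

# One admissible choice covered by the framework (Exact-Diffusion):
# A = W, C = I, B = (I - W)^{1/2}, so that B x = 0 iff x_1 = ... = x_n.
A, C = W, np.eye(n)
B = psd_sqrt(np.eye(n) - W)

alpha = 0.5
x = rng.standard_normal((n, d))
y = np.zeros((n, d))                # y^0 = 0, as in the algorithm description
for _ in range(2000):
    x = A @ (C @ x - alpha * grad(x)) - B @ y   # primal update
    y = y + B @ x                               # dual update

x_star = b.mean(axis=0)             # minimizer of (1/n) sum_i f_i
assert np.allclose(x, np.tile(x_star, (n, 1)), atol=1e-6)
```

With $y^0=\zero$, the iterates reach consensus on $\bar b = \tfrac{1}{n}\sum_i b_i$, consistent with the fixed-point behavior established later for the deterministic recursion.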
\subsection{Network Combination matrix}
We let $W=[w_{ij}] \in \real^{n \times n}$ denote the network combination (weighting) matrix assumed to be symmetric. Here, the $(i,j)$th entry $w_{ij} \geq 0$ is used by agent $i$ to scale information received from agent $j$. We consider a decentralized setup where $w_{ij}=0$ if $j \notin {\mathcal{N}}_i$ where ${\mathcal{N}}_i$ is the neighborhood of agent $i$. If we introduce the augmented combination matrix $
{\mathbf{W}}=W \otimes I_d \in \real^{dn \times dn}$, then
the matrices ${\mathbf{A}},{\mathbf{B}},{\mathbf{C}}$ can be chosen as a function of ${\mathbf{W}}$ to recover several existing decentralized methods. Note that if ${\mathbf{u}}=\col\{u_i\}_{i=1}^n$ where $u_i \in \real^d$ is local to agent $i$, then, the $i$th block of ${\mathbf{W}}{\mathbf{u}}=\col\{\sum_{j \in {\mathcal{N}}_i} w_{ij} u_j\}_{i=1}^n$ can be computed by agent $i$ through local interactions with its neighbors.
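The locality of the combination step can be checked in a few lines. The sketch below (illustrative; the lazy ring weights are an assumed example of $W$) confirms that the $i$th block of ${\mathbf{W}}{\mathbf{u}}$, computed by agent $i$ from neighbor messages only, coincides with the global matrix-vector product.

```python
import numpy as np

n, d = 8, 3
W = 0.5 * np.eye(n)                # lazy ring: w_ii = 1/2, two neighbors at 1/4 each
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

u = np.random.randn(n, d)          # row i is agent i's local vector u_i

# Global view: the i-th block of (W ⊗ I_d) u is simply row i of W @ u.
global_mix = W @ u

# Local view: agent i combines only messages from its neighborhood N_i.
local_mix = np.zeros_like(u)
for i in range(n):
    for j in np.nonzero(W[i])[0]:  # j in N_i, i.e., w_ij > 0
        local_mix[i] += W[i, j] * u[j]

assert np.allclose(global_mix, local_mix)
```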
\subsection{Relation to Existing Decentralized Methods}
Below, we list several important well-known decentralized algorithms that are covered in our framework. Please see Appendix \ref{app:relation_to_other_methods} for more details.
\noindent \textbf{Exact-Diffusion/D$^2$ and EXTRA.} If we select ${\mathbf{A}}={\mathbf{W}}$, ${\mathbf{B}}=({\mathbf{I}}-{\mathbf{W}})^{1/2}$, and ${\mathbf{C}}={\mathbf{I}}$, then algorithm \eqref{UDA_alg} becomes equivalent to ED/D$^{2}$ \cite{yuan2019exactdiffI,tang2018d}:\begin{align} \label{exact_diff}
x_i^{k+2}=\sum_{j \in {\mathcal{N}}_i} w_{ij} \Big(2 x_j^{k+1} - x_j^{k}-\alpha \big( \grad f_j(x_j^{k+1})-\grad f_j(x_j^{k})\big) \Big),
\end{align}
with $x_i^1=\sum_{j \in {\mathcal{N}}_i} w_{ij} \big( x_j^0-\alpha \grad f_j(x_j^0)\big)$. If we instead select ${\mathbf{A}}={\mathbf{I}}$, ${\mathbf{B}}=({\mathbf{I}}-{\mathbf{W}})^{1/2}$, and ${\mathbf{C}}={\mathbf{W}}$, then algorithm \eqref{UDA_alg} is equivalent to EXTRA \cite{shi2015extra}:
\begin{align} \label{EXTRA}
x_i^{k+2}=\sum_{j \in {\mathcal{N}}_i} w_{ij} \big(2 x_j^{k+1} - x_j^{k} \big)-\alpha \big( \grad f_i(x_i^{k+1})-\grad f_i(x_i^{k})\big),
\end{align}
with $x_i^1=\sum_{j \in {\mathcal{N}}_i} w_{ij} x_j^0-\alpha \grad f_i(x_i^0)$.
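The stated equivalence can be verified numerically. The sketch below (toy quadratic costs, a lazy ring matrix, and an arbitrary step size -- all assumptions of the example) runs the primal-dual form \eqref{UDA_alg} with ${\mathbf{A}}={\mathbf{W}}$, ${\mathbf{B}}=({\mathbf{I}}-{\mathbf{W}})^{1/2}$, ${\mathbf{C}}={\mathbf{I}}$ side by side with the ED/D$^2$ recursion \eqref{exact_diff} and checks that the iterates coincide.

```python
import numpy as np

def psd_sqrt(M):
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

n, d, alpha, K = 8, 2, 0.3, 50
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

rng = np.random.default_rng(1)
b = rng.standard_normal((n, d))
grad = lambda x: x - b                       # row i is grad f_i(x_i) = x_i - b_i

# Primal-dual form with A = W, B = (I - W)^{1/2}, C = I, and y^0 = 0.
B = psd_sqrt(np.eye(n) - W)
x_pd = rng.standard_normal((n, d))
y = np.zeros((n, d))
pd_iterates = [x_pd.copy()]
for _ in range(K):
    x_pd = W @ (x_pd - alpha * grad(x_pd)) - B @ y
    y = y + B @ x_pd
    pd_iterates.append(x_pd.copy())

# Exact-Diffusion/D^2 form, started from the same x^0 with the stated x^1.
x_prev = pd_iterates[0]
x_curr = W @ (x_prev - alpha * grad(x_prev))
assert np.allclose(x_curr, pd_iterates[1])
for k in range(1, K):
    x_next = W @ (2 * x_curr - x_prev
                  - alpha * (grad(x_curr) - grad(x_prev)))
    assert np.allclose(x_next, pd_iterates[k + 1], atol=1e-8)
    x_prev, x_curr = x_curr, x_next
```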
\noindent \textbf{Gradient-Tracking (GT) methods.} Consider the adapt-then-combine gradient-tracking (ATC-GT) method \cite{xu2015augmented}:
\begin{subequations} \label{GT_atc_alg}
\begin{align}
x_i^{k+1}&=\sum_{j \in {\mathcal{N}}_i} w_{ij} (x_j^{k} - \alpha g_j^{k}) \\
g_i^{k+1} &= \sum_{j \in {\mathcal{N}}_i} w_{ij} \big(g_j^{k} + \grad f_j(x_j^{k+1})-\grad f_j(x_j^{k}) \big).
\end{align}
\end{subequations}
With proper initialization, the above is equivalent to \eqref{UDA_alg} when $
{\mathbf{A}}={\mathbf{W}}^2$, ${\mathbf{B}}={\mathbf{I}}-{\mathbf{W}}$, and ${\mathbf{C}}= {\mathbf{I}}$. We can also recover other gradient-tracking variants. For example, if we select $
{\mathbf{A}}={\mathbf{I}}$, ${\mathbf{B}}={\mathbf{I}}-{\mathbf{W}}$, and ${\mathbf{C}}= {\mathbf{W}}^2$, then \eqref{UDA_alg} becomes equivalent to the non-ATC-GT method \cite{qu2017harnessing}:
\begin{subequations} \label{GT_nonatc_alg}
\begin{align}
x_i^{k+1}&=\sum_{j \in {\mathcal{N}}_i} w_{ij} x_j^{k} - \alpha g_i^{k} \\
g_i^{k+1} &= \sum_{j \in {\mathcal{N}}_i} w_{ij} g_j^{k} + \grad f_i(x_i^{k+1})-\grad f_i(x_i^{k}).
\end{align}
\end{subequations}
Notice that in \eqref{GT_nonatc_alg} the communication (gossip) step only involves the terms $x_j^{k}$ and $g_j^{k}$ in the update of each vector. This is in contrast to the ATC structure \eqref{GT_atc_alg} where the communication (gossip) step involves all terms. Similarly, we can also cover the semi-ATC-GT variations \cite{di2016next} where only the update of $x_i^{k}$ or $g_i^{k}$ uses the ATC structure. Please see Appendix \ref{app:relation_to_other_methods} for details.
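A defining property of gradient tracking, under the standard initialization $g_i^0 = \grad f_i(x_i^0)$, is that the average of the trackers equals the average of the current local gradients at every iteration (a consequence of $W$ being doubly stochastic). The toy simulation below checks this invariant for ATC-GT \eqref{GT_atc_alg}; the quadratic costs, ring matrix, and step size are illustrative assumptions.

```python
import numpy as np

n, d, alpha = 8, 2, 0.1
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

rng = np.random.default_rng(2)
b = rng.standard_normal((n, d))
grad = lambda x: x - b                 # row i is grad f_i(x_i) = x_i - b_i

x = rng.standard_normal((n, d))
g = grad(x)                            # standard initialization g_i^0 = grad f_i(x_i^0)
for _ in range(100):
    x_new = W @ (x - alpha * g)                    # ATC primal update
    g = W @ (g + grad(x_new) - grad(x))            # tracking update
    x = x_new
    # Since W is doubly stochastic, the mean of the trackers equals the
    # mean of the current local gradients at every iteration.
    assert np.allclose(g.mean(axis=0), grad(x).mean(axis=0))
```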
\begin{remark}[\sc Relation with other frameworks] \rm
The unified decentralized algorithm (UDA) from \cite{alghunaim2019decentralized} is equivalent to \eqref{UDA_alg} if ${\mathbf{A}}$ and ${\mathbf{B}}^2$ commute ({\it i.e.}, ${\mathbf{A}} {\mathbf{B}}^2={\mathbf{B}}^2 {\mathbf{A}}$). Therefore, all the methods covered in \cite{alghunaim2019decentralized} are also covered by our framework (such as DLM \cite{ling2015dlm}). Moreover, under certain conditions, the frameworks from \cite{xu2021distributed} and \cite{sundararajan2018canonical} can also be related with \eqref{UDA_alg} -- see Appendix \ref{app:relation_to_other_methods}. The works \cite{alghunaim2019decentralized,xu2021distributed,sundararajan2018canonical} only studied convergence under deterministic and convex settings. In contrast, we study the {\em non-convex and stochastic} case, and more importantly, we establish tighter rates for the above special bias-correction methods, which is the main focus of this work.
\end{remark}
\section{Stochastic UDA and Assumptions} \label{sec:suda}
In this section, we describe the stochastic version of algorithm \eqref{UDA_alg} and list the assumptions used to analyze it.
As stated in problem \eqref{min_learning_prob}, we consider stochastic settings where each agent may only have access to a stochastic gradient $\grad F_i (x_i,\xi_i^k)$ at each iteration $k$ instead of the true gradient. This scenario arises in online learning settings, where the data are not known in advance; hence, we do not have access to the actual gradient. Moreover, even if all the data is available, the true gradient might be expensive to compute for large datasets and can be replaced by a gradient at one sample or a mini-batch.
Replacing the actual gradient by its stochastic approximation in \eqref{UDA_alg}, we get the \textbf{S}tochastic \textbf{U}nified \textbf{D}ecentralized \textbf{A}lgorithm ({\bfseries \footnotesize SUDA}):
\begin{subequations} \label{SUDA_alg}
\begin{align}
\mathbf{x}^{k+1} &= {\mathbf{A}} \big({\mathbf{C}} \mathbf{x}^{k}-\alpha \grad {\mathbf{F}} ({\mathbf{x}}^k,{\boldsymbol \xi}^k) \big) - \mathbf{B} \mathbf{y}^{k}, \label{x_SUDA} \\
\mathbf{y}^{k+1} &= \mathbf{y}^{k}+ \mathbf{B} \mathbf{x}^{k+1}, \label{dual_SUDA}
\end{align}
\end{subequations}
where
\begin{align*}
\grad {\mathbf{F}} ({\mathbf{x}},{\boldsymbol \xi}^k) \define \col\{ \grad F_1 (x_1,\xi_1^k),\dots,\grad F_n (x_n,\xi_n^k) \}.
\end{align*}
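For illustration, {\bfseries \footnotesize SUDA}~\eqref{SUDA_alg} can be simulated with synthetic gradient noise. In the sketch below, the quadratic toy costs, the additive Gaussian noise model for $\grad F_i$, the ring matrix, and the Exact-Diffusion choice of $({\mathbf{A}},{\mathbf{B}},{\mathbf{C}})$ are all assumptions of the example, not prescriptions of the framework.

```python
import numpy as np

def psd_sqrt(M):
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

n, d, alpha, sigma = 8, 2, 0.05, 0.1
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

rng = np.random.default_rng(4)
b = rng.standard_normal((n, d))
grad = lambda x: x - b                       # true gradients grad f_i(x_i) = x_i - b_i

A, C = W, np.eye(n)                          # Exact-Diffusion choice of matrices
B = psd_sqrt(np.eye(n) - W)

x = rng.standard_normal((n, d))
y = np.zeros((n, d))
for _ in range(3000):
    # Stochastic gradients: unbiased with bounded variance sigma^2.
    noisy_grad = grad(x) + sigma * rng.standard_normal((n, d))
    x = A @ (C @ x - alpha * noisy_grad) - B @ y
    y = y + B @ x

# With a small step size, the iterates hover near the consensus minimizer,
# with a steady-state error controlled by alpha and sigma^2.
x_star = b.mean(axis=0)
assert np.linalg.norm(x - np.tile(x_star, (n, 1))) < 0.5
```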
We next list the assumptions used in our analyses. Our first assumption is about the network combination matrix given next.
\begin{assumption}[\bfseries \small Combination matrix] \label{assump:network}
The combination matrix $W$ is assumed to be doubly stochastic, symmetric, and primitive. Moreover, we assume that the matrices ${\mathbf{A}},{\mathbf{B}}^2,{\mathbf{C}}$ are chosen as polynomial functions of ${\mathbf{W}}$:
\begin{align} \label{polynomials_matrices}
{\mathbf{A}}=\sum_{l=0}^p a_l {\mathbf{W}}^l, \quad {\mathbf{B}}^2=\sum_{l=0}^p b_l {\mathbf{W}}^l, \quad
{\mathbf{C}}=\sum_{l=0}^p c_l {\mathbf{W}}^l,
\end{align}
where $p \geq 1$. The constants $\{a_l,b_l,c_l\}_{l=0}^p$ are chosen such that ${\mathbf{A}}$ and ${\mathbf{C}}$ are doubly stochastic and the matrix ${\mathbf{B}}$ satisfies equation \eqref{null_B}. \hfill{$\square$}
\end{assumption}
\noindent Under Assumption \ref{assump:network}, the combination matrix $W$ has a single eigenvalue at one, denoted by $\lambda_1=1$. Moreover, all other eigenvalues, denoted by $\{\lambda_i\}_{i=2}^n$, are strictly less than one in magnitude \cite{sayed2014nowbook}, and the mixing rate of the network is:
\begin{align} \label{graph_mixing_rate}
\lambda \define \rho \big(W-\tfrac{1}{n} \one \one\tran\big) =\max_{i \in \{2,\ldots,n\}} |\lambda_i| <1.
\end{align}
Note that the assumptions on ${\mathbf{A}},{\mathbf{B}}^2,{\mathbf{C}}$ are mild and hold for all the special cases described before.
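As a quick numerical illustration of \eqref{graph_mixing_rate}, the mixing rate can be computed by deflating the eigenvalue at one. The helper names \texttt{ring\_W} and \texttt{mixing\_rate} below are our own, and the lazy ring weights are just one convenient choice of a symmetric, doubly stochastic, positive semidefinite $W$.

```python
import numpy as np

def ring_W(n):
    # Lazy ring: each agent averages itself (weight 1/2) with its two
    # neighbors (weight 1/4 each); symmetric, doubly stochastic, and PSD
    # (its eigenvalues are 0.5 + 0.5*cos(2*pi*k/n) in [0, 1]).
    W = 0.5 * np.eye(n)
    for i in range(n):
        W[i, (i + 1) % n] += 0.25
        W[i, (i - 1) % n] += 0.25
    return W

def mixing_rate(W):
    # lambda = rho(W - (1/n) 1 1^T): the largest-magnitude eigenvalue of W
    # after removing the eigenvalue 1 on the all-ones direction.
    n = W.shape[0]
    M = W - np.ones((n, n)) / n
    return max(abs(np.linalg.eigvalsh(M)))

W = ring_W(16)
lam = mixing_rate(W)
assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
assert 0 < lam < 1      # the spectral gap 1 - lambda shrinks as the ring grows
```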
We now introduce the main assumption on the objective function.
\begin{assumption}[\bfseries \small Objective function] \label{assump:smoothness} Each function $f_i: \real^d \rightarrow \real$ is $L$-smooth:
\begin{align} \label{smooth_f_eq}
\|\grad f_i(z) -\grad f_i(y)\| \leq L \|z -y\|, \quad \forall~z,y \in \real^d,
\end{align}
for some $L>0$. We also assume that the aggregate function $f(x)=\frac{1}{n} \sum_{i=1}^n f_i(x)$ is bounded below, {\it i.e.}, $f(x) \geq f^\star > -\infty$ $\forall~ x \in \real^d$, where $f^\star$ denotes the optimal value of $f$. \hfill{$\square$}
\end{assumption}
\noindent The above assumption is standard to establish convergence under non-convex settings. Note that we do not impose the strong assumption of bounded gradient dissimilarity, which is required to establish convergence of \textsc{Dsgd} -- see \cite{assran2019stochastic,lian2017can,koloskova2020unified}.
We next list our assumption on the stochastic gradient. To do that, we define the filtration generated by the random process \eqref{SUDA_alg}:
\begin{align}
\bm{{\mathcal{F}}}^k \define \{{\mathbf{x}}^0,{\mathbf{x}}^1,\ldots,{\mathbf{x}}^k\}.
\end{align}
The filtration $\bm{{\mathcal{F}}}^k$ can be interpreted as the collection of all information available on the past iterates up to time $k$.
\begin{assumption}[\bfseries \small Gradient noise] \label{assump:noise}
For all $i \in \{1,\ldots,n\}$ and $k=0,1,\ldots$, we assume that the following holds
\begin{subequations} \label{noise_bound_eq}
\begin{align}
\Ex \big[\grad F_i(x_i^k;\xi_i^k)-\grad f_i(x_i^k) ~|~ \bm{{\mathcal{F}}}^{k}\big] &=0, \label{noise_bound_eq_mean} \\
\Ex \big[\|\grad F_i(x_i^k;\xi_i^k)-\grad f_i(x_i^k)\|^2 ~|~ \bm{{\mathcal{F}}}^{k} \big] &\leq \sigma^2, \label{noise_bound_eq_variance}
\end{align}
\end{subequations}
for some $\sigma^2 \geq 0$. We also assume that, conditioned on $\bm{{\mathcal{F}}}^{k}$, the random data $\{\xi_i^t\}$ are independent of each other for all $i \in \{1,\ldots,n\}$ and $t \leq k$. \hfill{$\square$}
\end{assumption}
The previous assumptions will be used to analyze {\bfseries \footnotesize SUDA}~\eqref{SUDA_alg} for general non-convex costs. In the sequel, we will also study {\bfseries \footnotesize SUDA}~under the following additional assumption.
\begin{assumption}[\bfseries \small PL condition] \label{assump:PL} The aggregate function $f(x)=\frac{1}{n} \sum_{i=1}^n f_i(x)$ satisfies the PL inequality:
\begin{align} \label{PL_cond}
2\mu \big(f(x)-f^\star\big) \leq \|\grad f(x)\|^2, \quad \forall~ x \in \real^d,
\end{align}
for some $\mu >0$, where $f^\star$ denotes the optimal value of $f$. \hfill{$\square$}
\end{assumption}
\noindent The above condition implies that every stationary point is a global minimizer, which is weaker than many assumptions used to establish linear convergence without strong-convexity \cite{karimi2016linear}. Note that the PL condition is also referred to as the gradient dominated condition \cite{tang2020distributed}.
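A standard one-dimensional example (cf.\ \cite{karimi2016linear}) is $f(x)=x^2+3\sin^2 x$, which is non-convex ($f''(x)=2+6\cos 2x$ changes sign) yet satisfies \eqref{PL_cond} with $\mu=1/32$ and $f^\star=0$. The snippet below checks both facts on a grid; the grid range and tolerance are arbitrary choices of the illustration.

```python
import numpy as np

# f(x) = x^2 + 3 sin^2(x): non-convex but PL with mu = 1/32 and f_star = 0.
f = lambda x: x**2 + 3 * np.sin(x)**2
df = lambda x: 2 * x + 3 * np.sin(2 * x)
f_star, mu = 0.0, 1.0 / 32

xs = np.linspace(-10, 10, 100001)
assert np.any(2 + 6 * np.cos(2 * xs) < 0)                      # f'' < 0 somewhere
assert np.all(2 * mu * (f(xs) - f_star) <= df(xs)**2 + 1e-12)  # PL inequality
```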
\section{Fundamental Transformations}
The updates of {\bfseries \footnotesize SUDA}~\eqref{SUDA_alg}, while useful for implementation purposes, are not helpful for analysis purposes. In this section, we will transform {\bfseries \footnotesize SUDA}~\eqref{SUDA_alg} into an equivalent recursion that is fundamental to arrive at our results.
\subsection{Transformation I}
Using the change of variable
\begin{align}
{\mathbf{z}}^{k}\define {\mathbf{y}}^{k}-{\mathbf{B}} {\mathbf{x}}^{k}
\end{align}
in \eqref{x_SUDA}--\eqref{dual_SUDA}, we can describe \eqref{SUDA_alg} by the following equivalent non-incremental form:
\begin{subequations} \label{alg_uda_noninc}
\begin{align}
\mathbf{x}^{k+1} &= ({\mathbf{A}}{\mathbf{C}}-{\mathbf{B}}^2) \mathbf{x}^{k}-\alpha {\mathbf{A}} (\grad \mathbf{f}({\mathbf{x}}^{k}) +{\mathbf{w}}^k) - \mathbf{B} \mathbf{z}^{k}, \\
\mathbf{z}^{k+1} &= \mathbf{z}^{k}+ \mathbf{B} \mathbf{x}^{k},
\end{align}
\end{subequations}
where ${\mathbf{w}}^k$ is the gradient noise defined as:
\begin{align} \label{noise_gradient}
{\mathbf{w}}^k \define \grad {\mathbf{F}} ({\mathbf{x}}^k,{\boldsymbol \xi}^k) -\grad \mathbf{f}({\mathbf{x}}^{k}).
\end{align}
If we introduce the quantities
\begin{subequations}
\begin{align}
\bar{x}^k & \define \tfrac{1}{n} (\one_n\tran \otimes I_d) {\mathbf{x}}^{k}=\frac{1}{n} \sum_{i=1}^n x_i^k, \label{bar_x_def} \\
\bar{{\mathbf{x}}}^k& \define \one_n \otimes \bar{x}^k, \label{bar_x_augmented_def} \\
{\mathbf{s}}^{k}&\define{\mathbf{B}} {\mathbf{z}}^{k} +\alpha {\mathbf{A}} \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}), \label{s_definition}
\end{align}
\end{subequations}
then recursion \eqref{alg_uda_noninc} can be rewritten as:
\begin{subequations} \label{error0}
\begin{align}
{\mathbf{x}}^{k+1} &= ({\mathbf{A}}{\mathbf{C}}-{\mathbf{B}}^2) {\mathbf{x}}^{k} - {\mathbf{s}}^{k} -\alpha {\mathbf{A}} \big( \grad \mathbf{f}({\mathbf{x}}^{k}) - \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}) +{\mathbf{w}}^k\big) , \label{x_error0} \\
{\mathbf{s}}^{k+1} &= {\mathbf{s}}^{k}+ {\mathbf{B}}^2 {\mathbf{x}}^{k}+ \alpha {\mathbf{A}} \big( \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k+1})- \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})\big). \label{s_error0}
\end{align}
\end{subequations}
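The algebra behind the transformed recursion can be verified numerically: along the iterates of the non-incremental form \eqref{alg_uda_noninc} (with zero gradient noise), the quantity ${\mathbf{s}}^k$ computed from its definition \eqref{s_definition} must obey the recursion \eqref{s_error0} exactly. The toy costs, ring matrix, and Exact-Diffusion choice of matrices below are assumptions of the example.

```python
import numpy as np

def psd_sqrt(M):
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

n, d, alpha = 8, 2, 0.3
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

rng = np.random.default_rng(3)
bvec = rng.standard_normal((n, d))
grad = lambda x: x - bvec                         # row i is grad f_i(x_i) = x_i - b_i

A, C = W, np.eye(n)                               # Exact-Diffusion choice
B = psd_sqrt(np.eye(n) - W)
B2 = np.eye(n) - W                                # B^2 in closed form

xbar = lambda x: np.tile(x.mean(axis=0), (n, 1))  # consensus average, stacked
s_of = lambda x, z: B @ z + alpha * A @ grad(xbar(x))   # definition of s^k

x = rng.standard_normal((n, d))
z = np.zeros((n, d)) - B @ x                      # z^0 = y^0 - B x^0 with y^0 = 0
for _ in range(30):
    s = s_of(x, z)
    # Non-incremental (x, z) form, with zero gradient noise.
    x_new = (A @ C - B2) @ x - alpha * A @ grad(x) - B @ z
    z_new = z + B @ x
    # The transformed s-recursion must reproduce s^{k+1} exactly.
    s_pred = s + B2 @ x + alpha * A @ (grad(xbar(x_new)) - grad(xbar(x)))
    assert np.allclose(s_pred, s_of(x_new, z_new), atol=1e-10)
    x, z = x_new, z_new
```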
\begin{remark}[\sc Motivation behind ${\mathbf{s}}^k$] \rm
Suppose that $({\mathbf{x}},{\mathbf{s}})$ is a fixed point of the deterministic form of \eqref{error0} (gradient noise ${\mathbf{w}}^k=\zero$) where ${\mathbf{s}} \in \real^{dn}$ and ${\mathbf{x}}=(x_1,\dots,x_n) \in \real^{dn}$. Then, from \eqref{s_error0} it holds that
$\zero = {\mathbf{B}}^2 {\mathbf{x}} \iff x_1=\dots=x_n=x$ and from \eqref{x_error0}, we have $
{\mathbf{s}}=\zero$.
Therefore, using the definition of ${\mathbf{s}}^k$ in \eqref{s_definition} and $\bar{{\mathbf{x}}}=\one \otimes x$, it follows that there exists some ${\mathbf{z}} \in \real^{dn}$ such that
\begin{align*}
&{\mathbf{s}}= \alpha {\mathbf{A}} \grad \mathbf{f}(\bar{{\mathbf{x}}}) + {\mathbf{B}} {\mathbf{z}}=\zero \\
& \Rightarrow \tfrac{1}{n}(\one\tran \otimes I_d) \left(\alpha {\mathbf{A}} \grad \mathbf{f}(\bar{{\mathbf{x}}}) + {\mathbf{B}} {\mathbf{z}} \right) =\frac{\alpha}{n} \sum_{i=1}^n \grad f_i(x)=0.
\end{align*}
Hence, $x$ is a stationary point of problem \eqref{min_learning_prob}. \hfill{$\square$}
\end{remark}
\subsection{Transformation II}
We next exploit the structure of the matrices ${\mathbf{A}}$, ${\mathbf{B}}$, and ${\mathbf{C}}$ to further transform recursion \eqref{error0} into a more useful form. Under Assumption \ref{assump:network}, the combination matrix $W$ can be decomposed as
\begin{align*}
W=U \Lambda U\tran = \begin{bmatrix}
\frac{1}{\sqrt{n}} \one & \hat{U}
\end{bmatrix} \begin{bmatrix}
1 & 0 \\
0 & \hat{\Lambda}
\end{bmatrix} \begin{bmatrix}
\frac{1}{\sqrt{n}} \one\tran \vspace{0.6mm} \\ \hat{U}\tran
\end{bmatrix},
\end{align*}
where $\hat{\Lambda}=\diag\{\lambda_i\}_{i=2}^n$. The matrix $U$ is an orthogonal matrix ($UU\tran=U\tran U=I$) and $\hat{U}$ is an ${n \times (n-1)}$ matrix that satisfies $\hat{U}\hat{U}\tran=I_n-\tfrac{1}{n} \one \one\tran$ and $\one\tran \hat{U}=0$. It follows that
\begin{align*}
{\mathbf{W}}&={\mathbf{U}}\mathbf{\Lambda} {\mathbf{U}}\tran = \begin{bmatrix}
\frac{1}{\sqrt{n}} \one \otimes I_d & \hat{{\mathbf{U}}}
\end{bmatrix} \begin{bmatrix}
I_d & 0 \\
0 & \hat{\mathbf{\Lambda}}
\end{bmatrix} \begin{bmatrix}
\frac{1}{\sqrt{n}} \one\tran \otimes I_d \\ \hat{{\mathbf{U}}}\tran
\end{bmatrix},
\end{align*}
where $\hat{\mathbf{\Lambda}} \define \hat{\Lambda} \otimes I_d \in \real^{d(n-1)\times d(n-1)}$, ${\mathbf{U}} \in \real^{dn \times dn}$ is an orthogonal matrix, and $\hat{{\mathbf{U}}}\define \hat{U} \otimes I_d \in \real^{dn \times d(n-1)}$ satisfies:
\begin{align} \label{UUtran}
\hat{{\mathbf{U}}}\tran \hat{{\mathbf{U}}}={\mathbf{I}}, \quad \hat{{\mathbf{U}}}\hat{{\mathbf{U}}}\tran={\mathbf{I}}-\tfrac{1}{n} \one \one\tran \otimes I_d, \quad (\one\tran \otimes I_d) \hat{{\mathbf{U}}}=0.
\end{align}
Now, since ${\mathbf{A}},{\mathbf{B}}^2,{\mathbf{C}}$ are chosen as polynomial functions of ${\mathbf{W}}$ as described in Assumption \ref{assump:network}, it holds that
\begin{subequations} \label{ABC_decompositon}
\begin{align}
{\mathbf{A}}&={\mathbf{U}}\mathbf{\Lambda}_a {\mathbf{U}}\tran = \begin{bmatrix}
\frac{1}{\sqrt{n}} \one \otimes I_d & \hat{{\mathbf{U}}}
\end{bmatrix} \begin{bmatrix}
I_d & 0 \\
0 & \hat{\mathbf{\Lambda}}_a
\end{bmatrix} \begin{bmatrix}
\frac{1}{\sqrt{n}} \one\tran \otimes I_d \\ \hat{{\mathbf{U}}}\tran
\end{bmatrix}, \\
{\mathbf{C}}&={\mathbf{U}}\mathbf{\Lambda}_c {\mathbf{U}}\tran = \begin{bmatrix}
\frac{1}{\sqrt{n}} \one \otimes I_d & \hat{{\mathbf{U}}}
\end{bmatrix} \begin{bmatrix}
I_d & 0 \\
0 & \hat{\mathbf{\Lambda}}_c
\end{bmatrix} \begin{bmatrix}
\frac{1}{\sqrt{n}} \one\tran \otimes I_d \\ \hat{{\mathbf{U}}}\tran
\end{bmatrix}, \\
{\mathbf{B}}^2&={\mathbf{U}} \mathbf{\Lambda}_b^2 {\mathbf{U}}\tran = \begin{bmatrix}
\frac{1}{\sqrt{n}} \one \otimes I_d & \hat{{\mathbf{U}}}
\end{bmatrix} \begin{bmatrix}
0 & 0 \\
0 & \hat{\mathbf{\Lambda}}_b^2
\end{bmatrix} \begin{bmatrix}
\frac{1}{\sqrt{n}} \one\tran \otimes I_d \\ \hat{{\mathbf{U}}}\tran
\end{bmatrix},
\end{align}
\end{subequations}
where
\begin{align}
\hat{\mathbf{\Lambda}}_a=\diag\{\lambda_{a,i}\}_{i=2}^n \otimes I_d, \quad \hat{\mathbf{\Lambda}}_b^2=\diag\{\lambda_{b,i}^2\}_{i=2}^n \otimes I_d, \quad \hat{\mathbf{\Lambda}}_c=\diag\{\lambda_{c,i}\}_{i=2}^n \otimes I_d,\label{znznas82}
\end{align}
with $\lambda_{a,i} \define \sum_{l=0}^p a_l \lambda_i^l$, $\lambda_{b,i}^2 \define \sum_{l=0}^p b_l \lambda_{i}^l$, and $
\lambda_{c,i} \define \sum_{l=0}^p c_l \lambda_{i}^l$. Moreover, $\hat{\mathbf{\Lambda}}_b$ is positive definite due to the null space condition \eqref{null_B}. Multiplying both sides of \eqref{error0} by ${\mathbf{U}}\tran$ and using the structure \eqref{ABC_decompositon}, we get
\begin{subequations} \label{error1}
\begin{align}
{\mathbf{U}}\tran {\mathbf{x}}^{k+1} &= (\mathbf{\Lambda}_a \mathbf{\Lambda}_c-\mathbf{\Lambda}_b^2 ) {\mathbf{U}}\tran {\mathbf{x}}^{k}
- {\mathbf{U}}\tran {\mathbf{s}}^{k}
-\alpha \mathbf{\Lambda}_a {\mathbf{U}}\tran \big( \grad \mathbf{f}(\mathbf{x}^{k}) - \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})+{\mathbf{w}}^k \big) , \label{x_error1} \\
{\mathbf{U}}\tran {\mathbf{s}}^{k+1} &= {\mathbf{U}}\tran {\mathbf{s}}^{k}+ \mathbf{\Lambda}_b^2 {\mathbf{U}}\tran {\mathbf{x}}^{k}+ \alpha \mathbf{\Lambda}_a {\mathbf{U}}\tran \big( \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k+1})- \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}) \big). \label{s_error1}
\end{align}
\end{subequations}
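As a quick numerical sanity check of the simultaneous diagonalization in \eqref{ABC_decompositon}, the following Python sketch (illustrative coefficients, not those of any particular method; $d=1$ for brevity) builds polynomial matrices in a symmetric doubly-stochastic $W$ and verifies that the eigenvectors of $W$ diagonalize all three, with the eigenvalue-$1$ direction $\tfrac{1}{\sqrt{n}}\one$ carrying the top-left blocks $1$, $0$, $1$.

```python
import numpy as np

# Illustrative setup (not the paper's exact coefficients): a ring of n agents
# with a symmetric doubly-stochastic mixing matrix W, and polynomial weight
# matrices A = sum_l a_l W^l, B^2 = sum_l b_l W^l, C = sum_l c_l W^l (d = 1).
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

a = [0.5, 0.5]   # sum(a) = 1  -> lambda_{a,1} = 1
b = [0.5, -0.5]  # sum(b) = 0  -> lambda_{b,1}^2 = 0 (null-space condition)
c = [0.5, 0.5]

poly = lambda coef: sum(cl * np.linalg.matrix_power(W, l) for l, cl in enumerate(coef))
A, B2, C = poly(a), poly(b), poly(c)

# W is symmetric: its orthonormal eigenvectors U diagonalize every polynomial in W.
lam, U = np.linalg.eigh(W)
for M, coef in [(A, a), (B2, b), (C, c)]:
    D = U.T @ M @ U
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-10)  # simultaneously diagonal
    assert np.allclose(np.diag(D), sum(cl * lam**l for l, cl in enumerate(coef)))

# The eigenvector (1/sqrt(n)) * ones (eigenvalue 1 of W) gives the top-left blocks:
one = np.ones(n) / np.sqrt(n)
print(one @ A @ one, one @ B2 @ one, one @ C @ one)  # 1.0, 0.0, 1.0 up to rounding
```

The null-space condition on ${\mathbf{B}}$ corresponds here to $\sum_l b_l = 0$, which forces $\lambda_{b,1}^2 = 0$ while leaving $\hat{\mathbf{\Lambda}}_b^2$ positive definite.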
Note that
\begin{align}
( \one\tran \otimes I_d){\mathbf{s}}^k \overset{\eqref{s_definition}}{=}(\one\tran \otimes I_d) \big({\mathbf{B}} {\mathbf{z}}^{k} +\alpha {\mathbf{A}} \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}) \big)= \alpha \sum_{i=1}^n \grad f_i(\bar{x}^k).
\end{align}
Moreover, utilizing the structure of ${\mathbf{U}}$, we have
\begin{subequations}
\begin{align}
{\mathbf{U}}\tran {\mathbf{x}}^{k}&=\begin{bmatrix}
\sqrt{n}\, \bar{x}^k \vspace{0.5mm} \\
\hat{{\mathbf{U}}}\tran {\mathbf{x}}^{k}
\end{bmatrix},
\quad
{\mathbf{U}}\tran {\mathbf{s}}^{k} =\begin{bmatrix}
\sqrt{n}\, \alpha \overline{\grad {\mathbf{f}}}(\bar{{\mathbf{x}}}^k) \vspace{0.5mm} \\
\hat{{\mathbf{U}}}\tran {\mathbf{s}}^{k}
\end{bmatrix},
\\
{\mathbf{U}}\tran \grad \mathbf{f}(\mathbf{x}^k)&=\begin{bmatrix}
\sqrt{n}\, \overline{\grad {\mathbf{f}}}({\mathbf{x}}^k) \vspace{0.5mm} \\
\hat{{\mathbf{U}}}\tran \grad \mathbf{f}(\mathbf{x}^k)
\end{bmatrix},
\quad
{\mathbf{U}}\tran {\mathbf{w}}^{k} =\begin{bmatrix}
\sqrt{n}\, \bar{{\mathbf{w}}}^k \vspace{0.5mm} \\
\hat{{\mathbf{U}}}\tran {\mathbf{w}}^{k}
\end{bmatrix},
\end{align}
\end{subequations}
where
\begin{subequations}
\begin{align}
\overline{\grad {\mathbf{f}}}({\mathbf{x}}^k)& \define\tfrac{1}{n} (\one_n\tran \otimes I_d) \grad {\mathbf{f}}({\mathbf{x}}^k)=\frac{1}{n} \sum_{i=1}^n \grad f_i(x_i^k), \\
\bar{{\mathbf{w}}}^k& \define \tfrac{1}{n} (\one_n\tran \otimes I_d) {\mathbf{w}}^k=\frac{1}{n} \sum_{i=1}^n \big(\grad F_i(x_i^k,\xi^k_i)-\grad f_i(x_i^k) \big).
\end{align}
\end{subequations}
Hence, using the previous quantities and the structure of $\mathbf{\Lambda}_a,\mathbf{\Lambda}_b^2,\mathbf{\Lambda}_c$ given in \eqref{ABC_decompositon}, we find that
\begin{subequations}
\begin{align}
\bar{x}^{k+1} &=\bar{x}^{k} - \alpha \overline{\grad {\mathbf{f}}}({\mathbf{x}}^k)-\alpha \bar{{\mathbf{w}}}^k,
\\
\hat{{\mathbf{U}}}\tran {\mathbf{x}}^{k+1} &= (\hat{\mathbf{\Lambda}}_a \hat{\mathbf{\Lambda}}_c-\hat{\mathbf{\Lambda}}_b^2 ) \hat{{\mathbf{U}}}\tran {\mathbf{x}}^{k}
- \hat{{\mathbf{U}}}\tran {\mathbf{s}}^{k}
-\alpha \hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big( \grad \mathbf{f}(\mathbf{x}^{k}) - \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})+{\mathbf{w}}^k \big) ,
\\
\hat{{\mathbf{U}}}\tran {\mathbf{s}}^{k+1} &= \hat{{\mathbf{U}}}\tran {\mathbf{s}}^{k}+ \hat{\mathbf{\Lambda}}_b^2 \hat{{\mathbf{U}}}\tran {\mathbf{x}}^{k}+ \alpha \hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big( \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k+1})- \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})\big).
\end{align}
\end{subequations}
Multiplying the third equation by $\hat{\mathbf{\Lambda}}_b^{-1}$ and rewriting the previous recursion in matrix notation, we obtain:
\begin{subequations} \label{trans_uda_non_diag}
\begin{align}
\hspace{-0.75mm} \bar{x}^{k+1} &=\bar{x}^{k} - \alpha \overline{\grad {\mathbf{f}}}({\mathbf{x}}^k) -\alpha \bar{{\mathbf{w}}}^k, \label{trans_uda_non_diag_x} \\
\begin{bmatrix}
\hat{{\mathbf{U}}}\tran{\mathbf{x}}^{k+1} \\
\hat{\mathbf{\Lambda}}_b^{-1} \hat{{\mathbf{U}}}\tran{\mathbf{s}}^{k+1}
\end{bmatrix}&= \begin{bmatrix}
\hat{\mathbf{\Lambda}}_a \hat{\mathbf{\Lambda}}_c-\hat{\mathbf{\Lambda}}_b^2 & -\hat{\mathbf{\Lambda}}_b \\
\hat{\mathbf{\Lambda}}_b & ~~{\mathbf{I}}
\end{bmatrix} \begin{bmatrix}
\hat{{\mathbf{U}}}\tran{\mathbf{x}}^{k} \\
\hat{\mathbf{\Lambda}}_b^{-1} \hat{{\mathbf{U}}}\tran{\mathbf{s}}^{k}
\end{bmatrix} \nonumber \\
& \quad - \alpha \begin{bmatrix}
\hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big(\grad \mathbf{f}(\mathbf{x}^{k}) - \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})+{\mathbf{w}}^k\big) \\
\hat{\mathbf{\Lambda}}_b^{-1} \hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big(\grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}) -\grad \mathbf{f}(\bar{{\mathbf{x}}}^{k+1})\big)
\end{bmatrix}. \label{trans_uda_non_diag_y}
\end{align}
\end{subequations}
The convergence of \eqref{trans_uda_non_diag} is governed by the matrix:
\begin{align} \label{G_matrix}
{\mathbf{G}} \define \begin{bmatrix}
\hat{\mathbf{\Lambda}}_a \hat{\mathbf{\Lambda}}_c-\hat{\mathbf{\Lambda}}_b^2 & -\hat{\mathbf{\Lambda}}_b \\
\hat{\mathbf{\Lambda}}_b & ~~{\mathbf{I}}
\end{bmatrix} \in \real^{2d(n-1)\times 2d(n-1)}.
\end{align}
We next introduce a fundamental factorization of ${\mathbf{G}}$ that will be used to transform \eqref{trans_uda_non_diag} into our final key recursion. The next result is proven in Appendix \ref{app:lemma_diag_proof}.
\begin{lemma}[\bfseries \small Fundamental factorization] \label{lemma:diagonalization}
Suppose that the eigenvalues of ${\mathbf{G}}$ are strictly less than one in magnitude. Then, there exists an invertible matrix $\hat{{\mathbf{V}}}$ such that the matrix ${\mathbf{G}}$ admits the similarity transformation
\begin{align} \label{G_diagonalized}
{\mathbf{G}} = \hat{{\mathbf{V}}} \mathbf{\Gamma} \hat{{\mathbf{V}}}^{-1},
\end{align}
where $\mathbf{\Gamma}$ satisfies $ \|\mathbf{\Gamma}\|<1$. \hfill{$\square$}
\end{lemma}
\noindent For the convergence of \eqref{trans_uda_non_diag}, it is necessary that ${\mathbf{G}}$ is a stable matrix ({\it i.e.}, has eigenvalues strictly less than one in magnitude). To see this, suppose that $\grad {\mathbf{f}}({\mathbf{x}})={\mathbf{w}}^k=\zero$; then the convergence is dictated by \eqref{trans_uda_non_diag_y}, which diverges for unstable ${\mathbf{G}}$ if $\hat{{\mathbf{U}}}\tran{\mathbf{x}}^{0} \neq \zero$ or $\hat{{\mathbf{U}}}\tran{\mathbf{s}}^{0} \neq \zero$. Thus, we implicitly assume that ${\mathbf{G}}$ is a stable matrix. Explicit expressions for $\mathbf{\Gamma}$ and $\hat{{\mathbf{V}}}$ for ED/D$^2$, EXTRA, and GT methods are derived in Appendix \ref{app:bounds_special_cases}, where we also find exact expressions for the eigenvalues of ${\mathbf{G}}$ for these methods.
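To make the factorization concrete, here is a minimal numerical illustration for a single $2\times 2$ block of ${\mathbf{G}}$; the scalar eigenvalues below are illustrative ED/D$^2$-style choices and are an assumption of this sketch. When the block has distinct (here complex-conjugate) eigenvalues, a plain eigendecomposition already yields the similarity transformation with $\|\mathbf{\Gamma}\|<1$; the general case is handled in the appendix proof.

```python
import numpy as np

# One 2x2 block of G built from illustrative scalar eigenvalues
# lambda_{a,i}, lambda_{b,i}^2, lambda_{c,i} (an assumption, for illustration).
lam = 0.5                      # a nontrivial eigenvalue of W
la = lc = (1 + lam) / 2        # lambda_{a,i} and lambda_{c,i}
lb2 = (1 - lam) / 2            # lambda_{b,i}^2 > 0
lb = np.sqrt(lb2)

G = np.array([[la * lc - lb2, -lb],
              [lb,             1.0]])

gamma, V = np.linalg.eig(G)            # G = V diag(gamma) V^{-1}
assert np.max(np.abs(gamma)) < 1       # G is stable
Gamma = np.diag(gamma)
assert np.allclose(V @ Gamma @ np.linalg.inv(V), G)   # similarity transformation
print(np.linalg.norm(Gamma, 2))        # equals the spectral radius of G here, < 1
```

For this block the eigenvalues are complex conjugates with product $\det G = \lambda_a\lambda_c$, so $\|\mathbf{\Gamma}\|_2=\sqrt{\lambda_a\lambda_c}=0.75<1$.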
Finally, multiplying both sides of \eqref{trans_uda_non_diag_y} by $\frac{1}{\upsilon}\hat{{\mathbf{V}}}^{-1}$ for any $\upsilon >0$ and using the structure \eqref{G_diagonalized}, we arrive at the following key result.
\begin{lemma}[\bfseries \small Final transformed recursion] Under Assumption \ref{assump:network}, there exists an invertible matrix $\hat{{\mathbf{V}}}$ such that recursion \eqref{trans_uda_non_diag} can be transformed into
\begin{subequations} \label{error_diag_transformed}
\begin{align}
\bar{x}^{k+1} &=\bar{x}^{k} - \alpha \overline{\grad {\mathbf{f}}}({\mathbf{x}}^k)-\alpha \bar{{\mathbf{w}}}^k, \label{error_average_diag} \\
\hat{{\mathbf{e}}}^{k+1}&=\mathbf{\Gamma} \hat{{\mathbf{e}}}^{k} - \alpha \hat{{\mathbf{V}}}^{-1} \begin{bmatrix}
\frac{1}{\upsilon} \hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big(\grad \mathbf{f}(\mathbf{x}^{k}) - \grad \mathbf{f}(\bar{{\mathbf{x}}}^{k})+{\mathbf{w}}^k\big) \\
\frac{1}{\upsilon} \hat{\mathbf{\Lambda}}_b^{-1} \hat{\mathbf{\Lambda}}_a \hat{{\mathbf{U}}}\tran \big(\grad \mathbf{f}(\bar{{\mathbf{x}}}^{k}) -\grad \mathbf{f}(\bar{{\mathbf{x}}}^{k+1})\big)
\end{bmatrix}, \label{error_check_diag}
\end{align}
\end{subequations}
where $\mathbf{\Gamma}$ was introduced in \eqref{G_diagonalized}, $\upsilon$ is an arbitrary strictly positive constant, and
\begin{align} \label{e_hat_def}
\hat{{\mathbf{e}}}^{k} \define \frac{1}{\upsilon}\hat{{\mathbf{V}}}^{-1} \begin{bmatrix}
\hat{{\mathbf{U}}}\tran{\mathbf{x}}^{k} \\
\hat{\mathbf{\Lambda}}_b^{-1} \hat{{\mathbf{U}}}\tran{\mathbf{s}}^{k}
\end{bmatrix}.
\end{align}
\hfill{$\square$}
\end{lemma}
\begin{remark}[\sc Deviation from average] \rm
Recall that $\bar{{\mathbf{x}}}^k= \one \otimes \bar{x}^k$ where $\bar{x}^k=\tfrac{1}{n} \sum_{i=1}^n x^k_i$. Since $\hat{{\mathbf{U}}}\tran \hat{{\mathbf{U}}}={\mathbf{I}}$, it holds that
\begin{align*}
\|\hat{{\mathbf{U}}}\tran {\mathbf{x}}^k\|^2 = ({\mathbf{x}}^k)\tran \hat{{\mathbf{U}}} \hat{{\mathbf{U}}}\tran \hat{{\mathbf{U}}} \hat{{\mathbf{U}}}\tran {\mathbf{x}}^k =\|\hat{{\mathbf{U}}} \hat{{\mathbf{U}}}\tran {\mathbf{x}}^k\|^2 \overset{\eqref{UUtran}}{=}\|{\mathbf{x}}^k-\bar{{\mathbf{x}}}^k\|^2.
\end{align*}
Therefore, the quantity $\|\hat{{\mathbf{U}}}\tran {\mathbf{x}}^k\|^2$ measures the deviation of ${\mathbf{x}}^k$ from the average $\bar{{\mathbf{x}}}^k$. Similarly, $\|\hat{{\mathbf{U}}}\tran {\mathbf{s}}^k\|^2$ measures the deviation of ${\mathbf{s}}^k$ from the average $(\tfrac{1}{n}\one \one\tran \otimes I_d) {\mathbf{s}}^k=\one \otimes \tfrac{\alpha}{n}\sum_{i=1}^n \grad f_i(\bar{x}^k)$ (see \eqref{s_definition}). Now, using \eqref{e_hat_def}, we have
\begin{align} \label{hat_relation_avg_Dev}
\|\upsilon \hat{{\mathbf{V}}} \hat{{\mathbf{e}}}^{k} \|^2=\|\hat{{\mathbf{U}}}\tran{\mathbf{x}}^{k}\|^2 + \|\hat{\mathbf{\Lambda}}_b^{-1} \hat{{\mathbf{U}}}\tran{\mathbf{s}}^{k}\|^2.
\end{align}
Thus, the vector $\hat{{\mathbf{e}}}^{k}$ can be interpreted as a measure of a weighted deviation of ${\mathbf{x}}^k$ and ${\mathbf{s}}^k$ from $\bar{{\mathbf{x}}}^k$ and $\one \otimes \tfrac{\alpha}{n}\sum_{i=1}^n \grad f_i(\bar{x}^k)$, respectively. \hfill{$\square$}
\end{remark}
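The deviation-from-average identity above is easy to confirm numerically. The sketch below (illustrative only; $d=1$, with an arbitrary orthonormal completion $\hat{{\mathbf{U}}}$ of $\tfrac{1}{\sqrt{n}}\one$) checks that $\|\hat{{\mathbf{U}}}\tran {\mathbf{x}}\|^2 = \|{\mathbf{x}}-\bar{{\mathbf{x}}}\|^2$.

```python
import numpy as np

# Build an orthonormal U = [ones/sqrt(n), U_hat] via QR (d = 1 for brevity).
rng = np.random.default_rng(0)
n = 8
M = np.column_stack([np.ones(n) / np.sqrt(n), rng.standard_normal((n, n - 1))])
U, _ = np.linalg.qr(M)          # first column stays ones/sqrt(n) (up to sign)
U_hat = U[:, 1:]                # orthogonal complement of the consensus direction

x = rng.standard_normal(n)
xbar = x.mean() * np.ones(n)
# ||U_hat^T x||^2 equals the squared deviation of x from its average:
assert np.allclose(np.linalg.norm(U_hat.T @ x) ** 2,
                   np.linalg.norm(x - xbar) ** 2)
```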
\begin{remark} \label{remark:gt_difference}\rm
This work handles the deviation of the individual vectors from the average, $\mathbf{x}^k -\bar{\mathbf{x}}^k$, and the deviation of the ``gradient-tracking'' variable ${\mathbf{s}}^k$ from the averaged gradients $\one \otimes \tfrac{\alpha}{n}\sum_{i=1}^n \grad f_i(\bar{x}^k)$ as one augmented quantity. This leads to two coupled error terms given by \eqref{trans_uda_non_diag} and is one main reason that allows us to obtain tighter network-dependent rates compared to previous works. The rigorous factorization of the matrix $\mathbf{G}$ leads to the final transformed recursion \eqref{error_diag_transformed} with the contractive matrix $\mathbf{\Gamma}$, which is key for our result.
This is in contrast to other works. For example, in previous GT works (see {\it e.g.}, \cite{qu2017harnessing,xin2021improved,pu2021distributed}), the deviation of the individual vectors from the average $\mathbf{x}^k -\bar{\mathbf{x}}^k$, and the deviation of the gradient-tracking variable from the averaged-gradients are handled independently, and thus, they do not exploit the coupling matrix between these two network quantities. \hfill{$\square$}
\end{remark}
\section{Convergence Results}
In this section, we state and discuss the convergence results.
\subsection{Convergence Under General Non-Convex Costs}
The following result establishes the convergence under general non-convex smooth costs.
\begin{theorem} [\bfseries \small Convergence of SUDA] \label{thm_nonconvex}
Suppose that Assumptions \ref{assump:network}--\ref{assump:noise} hold and the step size satisfies $
\alpha \leq \min \left\{\frac{1}{2L},~ \frac{1-\gamma}{2L v_1v_2 \lambda_a}
,~ \frac{\sqrt{\underline{\lambda_b}(1-\gamma)}}{2 L \sqrt{v_1 v_2 \lambda_a }}
\right\}$. Then, the iterates $\{{\mathbf{x}}^k\}$ of {\bfseries \footnotesize SUDA}~with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) satisfy
\begin{equation} \label{eq:thm:nonconvex}
\begin{aligned}
\frac{1}{K} \sum_{k=0}^{K-1} \Ex \| \grad f(\bar{x}^{k}) \|^2 + \Ex \| \overline{\grad{\mathbf{f}}}({\mathbf{x}}^k)\|^2 &\leq
\frac{8 (f(x^{0})-f^\star)}{\alpha K}
+ \frac{ 4\alpha L \sigma^2 }{ n}
+ \frac{12 \alpha^2 L^2 v_1^2 v_2^2 \zeta_0^2}{ \underline{\lambda_b}^2 (1-\gamma)K}
\\
& \quad
+ \frac{16\alpha^2 L^2 v_1^2 v_2^2 \lambda_a^2 \sigma^2}{1-\gamma}+\frac{16\alpha^4 L^4 v_1^2 v_2^2 \lambda_a^2 \sigma^2}{ \underline{\lambda_b}^2(1-\gamma)^2n},
\end{aligned}
\end{equation}
where $v_1 \define \|\hat{{\mathbf{V}}}\|$, $v_2 \define \|\hat{{\mathbf{V}}}^{-1}\|$ and
\begin{align*}
\gamma &\define \|\mathbf{\Gamma}\|<1, \quad \underline{\lambda_b}\define\frac{1}{\|\hat{\mathbf{\Lambda}}_b^{-1}\|}, \quad \lambda_a \define \|\mathbf{\Lambda}_a\|, \\
\zeta_0 &\define \tfrac{1}{\sqrt{n}} \|({\mathbf{A}}-\tfrac{1}{n}\one \one\tran \otimes I_d ) \big(\grad \mathbf{f}({\mathbf{x}}^0) - \one \otimes \grad f(x^0) \big)\|.
\end{align*}
Consequently, if we set $\alpha=\tfrac{1}{2L \beta +\sigma \sqrt{K/n}}$, where $\beta \define 1
+\frac{ v_1v_2 \lambda_a}{1-\gamma}
+\frac{\sqrt{v_1 v_2 \lambda_a }}{\sqrt{\underline{\lambda_b}(1-\gamma)}}$, then we obtain the convergence rate
\begin{equation} \label{eq:thm:nonconvex_rate}
\begin{aligned}
\frac{1}{K} \sum_{k=0}^{K-1} \Ex \big\| \overline{\grad{\mathbf{f}}}({\mathbf{x}}^k) \big\|^2 \leq
O \left(
\frac{ \sigma }{ \sqrt{nK}}
+ \frac{1}{K}
+ \frac{ n \sigma^2}{n+\sigma^2 K}
+ \frac{ n \zeta_0^2}{ n K+\sigma^2 K^2} \right).
\end{aligned}
\end{equation}
\hfill{$\square$}
\end{theorem}
\noindent The proof of Theorem \ref{thm_nonconvex} is established in Appendix \ref{app:convergence_analysis}. The convergence rate \eqref{eq:thm:nonconvex_rate} shows that {\bfseries \footnotesize SUDA}~enjoys linear speedup since the dominating term is $O(1/\sqrt{nK})$ for sufficiently large $K$ \cite{lian2017can}. Note that the rate given in \eqref{eq:thm:nonconvex_rate} treats the network quantities $\{\gamma,\lambda_a,\underline{\lambda_b},v_1,v_2\}$ as constants. However, these quantities can have a significant influence on the convergence rate, as explained in the introduction. While the above result holds for EXTRA, ED/D$^2$ and GT methods, it is still unclear whether these methods can achieve enhanced transient time compared to \textsc{Dsgd}. To show the network effect and see the implication of Theorem \ref{thm_nonconvex} in terms of network quantities, we specialize Theorem \ref{thm_nonconvex} to ED/D$^2$ \eqref{exact_diff} and ATC-GT \eqref{GT_atc_alg} and reflect the values of $\{\gamma,\lambda_a,\underline{\lambda_b},v_1,v_2\}$ in the convergence rate.
\begin{corollary}[\bfseries \small ED/D$^2$ convergence] \label{corollary:ED}
Suppose that all the conditions given in Theorem \ref{thm_nonconvex} hold and assume further that $W$ is positive semi-definite. If we set $\alpha=\sqrt{n/K}$ and $
K \geq \max \left\{ 4 nL^2,
~\frac{32L^2 \lambda^2 n}{ (1-\sqrt{\lambda})^2 \underline{\lambda}} ,~ \frac{\sqrt{32} L^2 \lambda n}{ \sqrt{1-\lambda} (1-\sqrt{\lambda} ) \sqrt{\underline{\lambda}}} \right\}$, then ED/D$^2$ \eqref{exact_diff} with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) has convergence rate
\begin{equation} \label{ED_nonconvex_result}
\begin{aligned}
\frac{1}{K} \sum_{k=0}^{K-1} \Ex \| \overline{\grad{\mathbf{f}}}({\mathbf{x}}^k)\|^2 &\leq
O \bigg( \frac{ f(x^{0}) - f^\star }{ \sqrt{n K}}
+\frac{ \sigma^2 }{ \sqrt{nK}} \bigg)
\\
& \quad
+ O \bigg( \frac{n \lambda^2 \sigma^2}{ (1-\lambda) \underline{\lambda} K}
+ \frac{n \lambda^2 \sigma^2}{ (1-\lambda)^{3} \underline{\lambda} K^2}
+ \frac{ n L^2 \zeta_0^2}{ (1-\lambda)^2 \underline{\lambda} K^2} \bigg),
\end{aligned}
\end{equation}
where $\underline{\lambda}$ is the minimum non-zero eigenvalue of $W$. Moreover, we have
\begin{align*}
\zeta_0^2 =\textstyle \frac{1}{n} \sum_{i=1}^n \big\|\sum\limits_{j \in {\mathcal{N}}_i} w_{ij} \grad f_j(x^0)-\grad f(x^0) \big\|^2 \leq \lambda^2 \varsigma_0^2,
\end{align*}
where $\varsigma_0^2 \define \tfrac{1}{n} \sum_{i=1}^n \big\| \grad f_i(x^0)-\grad f(x^0) \big\|^2$.
\begin{proof}
The proof follows by substituting the bounds \eqref{exact_diff_bounds} established in Appendix \ref{app:bounds_special_cases} into \eqref{eq:thm:nonconvex}.
\end{proof}
\end{corollary}
\begin{corollary}[\bfseries \small ATC-GT convergence] \label{corollary:GT}
Suppose that all the conditions given in Theorem \ref{thm_nonconvex} hold and assume further that $W$ is positive semi-definite. If we let $\alpha=\sqrt{n/K}$ and $
K \geq \max \left\{ 4nL^2,~ \frac{432 L^2 \lambda^4 n}{(1-\lambda)^2}
,~ \frac{72 L^2 \lambda^2 n}{(1-\lambda)^2}
\right\}$, then ATC-GT \eqref{GT_atc_alg} with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) has convergence rate
\begin{equation} \label{ATC_GT_nonconvex_result}
\begin{aligned}
\frac{1}{K} \sum_{k=0}^{K-1} \Ex \| \overline{\grad{\mathbf{f}}}({\mathbf{x}}^k)\|^2
&\leq
O \bigg( \frac{ f(x^{0}) - f^\star }{ \sqrt{n K}}
+ \frac{ \sigma^2 }{ \sqrt{nK}} \bigg)
\\
& \quad + O\bigg( \frac{n \lambda^4 \sigma^2}{ (1-\lambda)K}
+ \frac{n \lambda^4 \sigma^2}{ (1-\lambda)^4 K^2}
+ \frac{ n \zeta_0^2}{ (1-\lambda)^3 K^2} \bigg).
\end{aligned}
\end{equation}
Moreover, we have
\begin{align*}
\zeta_0^2 = \textstyle \frac{1}{n} \sum_{i=1}^n \big\|\sum\limits_{j \in {\mathcal{N}}_i} [w_{ij}]^2 \grad f_j(x^0)-\grad f(x^0) \big\|^2 \leq \lambda^4 \varsigma_0^2,
\end{align*}
where $\varsigma_0^2 \define \tfrac{1}{n} \sum_{i=1}^n \big\| \grad f_i(x^0)-\grad f(x^0) \big\|^2$.
\begin{proof}
The proof follows by substituting the bounds \eqref{gt_diff_W_bound} established in Appendix \ref{app:bounds_special_cases} into Theorem \ref{thm_nonconvex}.
\end{proof}
\end{corollary}
\noindent
Note that linear speedup is achieved when the dominating term is $O(\tfrac{1}{\sqrt{nK}})$. This is the case when $K$ is sufficiently large so that the higher-order terms are less than or equal to $O(\tfrac{1}{\sqrt{nK}})$. For example, for the ED/D$^2$ bound in \eqref{ED_nonconvex_result}, linear speedup is achieved when
\begin{align*}
& O \bigg( \hspace{-0.5mm}\frac{n \lambda^2 }{ (1-\lambda) K}
\hspace{-0.5mm}+\hspace{-0.5mm} \frac{n \lambda^2 }{ (1-\lambda)^{3} K^2}
\hspace{-0.5mm}+\hspace{-0.5mm} \frac{ n }{ (1-\lambda)^2 K^2} \bigg)
\leq O \bigg( \frac{ 1 }{ \sqrt{n K}}
\bigg).
\end{align*}
The above holds when $K \geq O\left(n^3/(1-\lambda)^2 \right)$.
Here, we treated $\underline{\lambda}$ for ED/D$^2$ as a constant since, for example, if we set $W \leftarrow (1-\theta)W+\theta I$ for a constant $\theta \in (0,1)$, then it holds that $\underline{\lambda} \in (\theta, 1)$. Table \ref{table_non_convex} compares our results with existing works. It is clear that our bounds are tighter in terms of the spectral gap $1-\lambda$. Moreover, ED/D$^2$ and GT methods have enhanced transient time compared to \textsc{Dsgd}.
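The transient-time threshold above can be computed numerically. The sketch below is purely illustrative: absolute constants are dropped, and `transient_time` is a hypothetical helper (not from the paper) that bisects for the smallest $K$ at which the higher-order terms of the ED/D$^2$ bound fall below the linear-speedup term $1/\sqrt{nK}$.

```python
import math

# Illustrative computation (absolute constants dropped) of the ED/D^2
# transient time: the smallest K with
#   n*lam^2/((1-lam)*K) + n*lam^2/((1-lam)^3*K^2) + n/((1-lam)^2*K^2)
#     <= 1/sqrt(n*K).
def transient_time(n, lam):
    hot = lambda K: (n * lam**2 / ((1 - lam) * K)
                     + n * lam**2 / ((1 - lam)**3 * K**2)
                     + n / ((1 - lam)**2 * K**2))
    lo, hi = 1, 10**15
    while lo < hi:                       # bisection: hot(K)*sqrt(n*K) decreases in K
        mid = (lo + hi) // 2
        if hot(mid) <= 1 / math.sqrt(n * mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

n = 32
for lam in (0.9, 0.99):
    # grows on the order of n^3/(1-lam)^2, matching K >= O(n^3/(1-lam)^2)
    print(lam, transient_time(n, lam))
```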
\begin{remark}[\sc Step size selection] \label{remark:stepsize} \rm
We remark that in Corollaries \ref{corollary:ED} and \ref{corollary:GT}, the step size is chosen as $\alpha=\sqrt{n/K}$ to simplify the expressions. This choice is not optimal, and tighter rates can be obtained if we meticulously select the step size. For example, we can select the step size as in Theorem \ref{thm_nonconvex} and obtain a rate similar to \eqref{eq:thm:nonconvex_rate} where the dominating term is $O(\sigma/\sqrt{nK})$ instead of $O(\sigma^2/\sqrt{nK})$. We can even get a tighter rate by carefully selecting the step size similar to \cite{koloskova2020unified}. However, such choices do not affect the transient time order in terms of network quantities, which is the main conclusion of our results. \hfill{$\square$}
\end{remark}
\begin{remark}[\sc EXTRA and other GT variations] \rm
We remark that the matrix ${\mathbf{G}}$ defined in \eqref{G_matrix} is identical for both EXTRA \eqref{EXTRA} and ED/D$^2$. Hence, the convergence rate of ED/D$^2$ given in \eqref{ED_nonconvex_result} also holds for EXTRA with the exception of the value of $\lambda^2$ in the {\em numerators}, which should be replaced by one for EXTRA ($\lambda^2 \to 1$ in numerators). Likewise, for non-ATC-GT \eqref{GT_nonatc_alg} and semi-ATC-GT (see Appendix \ref{app:relation_to_other_methods}), the matrix ${\mathbf{G}}$ is identical to that of ATC-GT, and the convergence rate of ATC-GT given in \eqref{ATC_GT_nonconvex_result} holds for these other variations except for the value of $\lambda^2$ in the {\em numerators}, which is one for non-ATC-GT ($\lambda^2 \to 1$ in numerators) and $\lambda$ for semi-ATC-GT ($\lambda^2 \to \lambda$ in numerators). Please see Appendix \ref{app:bounds_special_cases} for more details.
Finally, note that for static and undirected graphs, our technique can be used to improve the network bounds for EXTRA, ED/D$^2$, and GT modifications in other cost-function settings, such as variance-reduced settings \cite{xin21hybrid}.
\hfill{$\square$}
\end{remark}
\subsection{Convergence Under PL Condition}
We next state the convergence of {\bfseries \footnotesize SUDA}~under the PL condition given in Assumption \ref{assump:PL}.
\begin{theorem} [\bfseries \small PL case] \label{thm_PL_linear_conv}
Suppose that Assumptions \ref{assump:network}--\ref{assump:PL} hold and the step size satisfies:
\begin{align} \label{pl_thm_ineq_constant}
\alpha \leq \min\left\{
\frac{1-\gamma}{3L},~
\frac{ \underline{\lambda_b}}{2 L },~
\frac{1-\gamma}{\sqrt{6} L v_1 v_2 \lambda_a}
,~
\left(\frac{\mu\underline{\lambda_b}^{2}(1-\gamma)}{8 L^{4} v_1^{2} v_2^{2} } \right)^{1/3}
\right\}.
\end{align}
Then, the iterates $\{{\mathbf{x}}^k\}$ of {\bfseries \footnotesize SUDA}~ with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) satisfy
\begin{equation} \label{eq:PL_thm_1}
\begin{aligned}
\frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{k}) - f^\star]
& \leq
\left(1-\frac{\alpha \mu}{2} \right)^k r_0
\\
& \quad
+O\left(\frac{\alpha L \sigma^2}{\mu n}
+\frac{\alpha^2 L^2 v_1^2 v_2^2 \lambda_a^2 \sigma^2}{\mu (1-\gamma)}
+\frac{ \alpha^4 L^4 v_1^2 v_2^2 \lambda_a^2 \sigma^2}{ \mu \underline{\lambda_b}^2(1-\gamma)^2 n } \right),
\end{aligned}
\end{equation}
where $r_0 =2\Ex [f(x^{0})-f^\star] +
\alpha^2 L v_1^2 v_2^2 \zeta_0^2/ \underline{\lambda_b}^2$ and the quantities $v_1$, $v_2$, $\gamma$, $\lambda_a$, $\underline{\lambda_b}$ and $\zeta_0$ are defined as in Theorem \ref{thm_nonconvex}.
\hfill{$\square$}
\end{theorem}
\noindent The proof of Theorem \ref{thm_PL_linear_conv} is given in Appendix \ref{app:thm_pl_lin_conv_proof}. To discuss the implication of the above result, we specialize it to ED/D$^2$ and ATC-GT.
\begin{corollary}[\bfseries \small ED/D$^2$ convergence under PL condition] \label{corollary:pl_ED}
\sloppy Let the same conditions as in Theorem \ref{thm_PL_linear_conv} hold and suppose further that $W$ is positive semi-definite. If the step size satisfies $\alpha \leq \min\left\{
\frac{1-\sqrt{\lambda}}{3L},~
\frac{ \sqrt{1-\lambda}}{2 L },~
\frac{(1-\sqrt{\lambda}) \underline{\lambda} }{2\sqrt{12} L \lambda}
,~
\left(\frac{\mu (1-\lambda)(1-\sqrt{\lambda}) \underline{\lambda}}{18 L^{4} } \right)^{1/3}
\right\}$, then ED/D$^2$ \eqref{exact_diff} with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) has convergence rate
\begin{equation} \label{ED_PL_result}
\begin{aligned}
\frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{k})-f^\star] &\leq
(1-\tfrac{\alpha \mu}{2})^k r_0 +
O\left(\frac{\alpha L \sigma^2}{\mu n}
+\frac{\alpha^2 L^2 \lambda^2 \sigma^2}{\mu (1-\lambda)\underline{\lambda} }
+\frac{ \alpha^4 L^4 \lambda^2 \sigma^2}{ \mu (1-\lambda)^3 \underline{\lambda} n } \right),
\end{aligned}
\end{equation}
where $r_0 =2\Ex [f(x^{0})-f^\star] +
8 \alpha^2 L \zeta_0^2/ (1-\lambda) \underline{\lambda}$, $\zeta_0^2 = \textstyle \frac{1}{n} \sum_{i=1}^n \big\|\sum\limits_{j \in {\mathcal{N}}_i} w_{ij} \grad f_j(x^0)-\grad f(x^0) \big\|^2$, and $\underline{\lambda}$ is the minimum non-zero eigenvalue of $W$. Hence, selecting $\alpha=2\ln( K^2)/\mu K$, we obtain
\begin{align} \label{rate_ED_pl}
\frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{K}) - f^\star]
& \leq
\frac{2\Ex [f(x^{0})-f^\star]}{ K^2}
\nonumber \\
& \quad + \tilde{O}\left(
\frac{ \sigma^2}{K n}
+\frac{ \lambda^2 \sigma^2}{K^2 (1-\lambda) \underline{\lambda}}
+ \frac{ \zeta_0^2}{K^4 (1-\lambda) \underline{\lambda} }
+
\dfrac{ \lambda^2 \sigma^2}{ K^4 (1-\lambda)^3 \underline{\lambda} n } \right),
\end{align}
where $\tilde{O}(\cdot)$ hides logarithmic factors.
\begin{proof}
Equation \eqref{ED_PL_result} follows from Theorem \ref{thm_PL_linear_conv} and the bounds \eqref{exact_diff_bounds} derived in Appendix \ref{app:bounds_special_cases}. Now, if we set $\alpha=2\ln( K^2)/\mu K$, then $
(1-\tfrac{\alpha \mu}{2})^K \leq \exp(- \alpha \mu K/2) =\frac{1}{K^2}$, where $\exp(\cdot)$ denotes the exponential function. Hence, for $\alpha=2\ln(K^2)/\mu K$ and large enough $K$, inequality \eqref{ED_PL_result} can be upper bounded by \eqref{rate_ED_pl}.
\end{proof}
\end{corollary}
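The step-size argument used in these proofs, namely that $\alpha=2\ln(K^2)/\mu K$ drives the contraction factor below $1/K^2$, can be sanity-checked numerically:

```python
import math

# Check that with alpha = 2*ln(K^2)/(mu*K), the contraction factor satisfies
#   (1 - alpha*mu/2)^K <= exp(-alpha*mu*K/2) = 1/K^2,
# using the elementary bound 1 - t <= exp(-t).
mu = 1.0
for K in (10, 100, 1000):
    alpha = 2 * math.log(K**2) / (mu * K)
    assert (1 - alpha * mu / 2) ** K <= math.exp(-alpha * mu * K / 2)
    assert math.isclose(math.exp(-alpha * mu * K / 2), 1 / K**2)
    print(K, (1 - alpha * mu / 2) ** K, 1 / K**2)
```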
\begin{corollary}[\bfseries \small ATC-GT convergence under PL condition] \label{corollary:pl_GT}
\sloppy Let the same conditions as in Theorem \ref{thm_PL_linear_conv} hold and assume further that $W$ is positive semi-definite. Then, ATC-GT \eqref{GT_atc_alg} with ${\mathbf{x}}^0=\one \otimes x^0$ ($x^0 \in \real^d$) and $\alpha \leq \min\left\{
\frac{1-\lambda}{6L}
,~
\frac{1-\lambda}{ 6\sqrt{18} L \lambda^2 }
,~
\left(\frac{\mu (1-\lambda)^3}{432 L^{4} } \right)^{1/3}
\right\}$ has convergence rate:
\begin{equation} \label{ATC_GT_PL_result}
\begin{aligned}
\frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{k})-f^\star] &\leq
(1-\tfrac{\alpha \mu}{2})^k r_0
+
O\left(\frac{\alpha L \sigma^2}{\mu n}
+\frac{\alpha^2 L^2 \lambda^4 \sigma^2}{\mu (1-\lambda)}
+\frac{ \alpha^4 L^4 \lambda^4 \sigma^2}{ \mu (1-\lambda)^4 n } \right),
\end{aligned}
\end{equation}
where $r_0 =2\Ex [f(x^{0})-f^\star] +
27 \alpha^2 L \zeta_0^2/ (1-\lambda)^2$ and $\zeta_0^2 = \textstyle \frac{1}{n} \sum_{i=1}^n \big\|\sum\limits_{j \in {\mathcal{N}}_i} [w_{ij}]^2 \grad f_j(x^0)-\grad f(x^0) \big\|^2$.
Hence, if we set $\alpha=2\ln( K^2)/\mu K$, then it holds that
\begin{align} \label{rate_GT_pl}
\frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{K}) - f^\star]
&\leq
\frac{2\Ex [f(x^{0})-f^\star]}{K^2}
\nonumber \\
& \quad + \tilde{O}\left(
\frac{ \sigma^2}{K n}
+\frac{ \lambda^4 \sigma^2}{K^2 (1-\lambda)}
+ \frac{ \zeta_0^2}{K^4 (1-\lambda)^2 }
+
\frac{ \lambda^4 \sigma^2}{ K^4 (1-\lambda)^4 n } \right),
\end{align}
where $\tilde{O}(\cdot)$ hides logarithmic factors.
\begin{proof}
Equation \eqref{ATC_GT_PL_result} follows from Theorem \ref{thm_PL_linear_conv} and the bounds \eqref{gt_diff_W_bound} derived in Appendix \ref{app:bounds_special_cases}. Inequality \eqref{rate_GT_pl} follows by substituting $\alpha=2\ln(K^2)/\mu K$ into \eqref{ATC_GT_PL_result} and using $
(1-\tfrac{\alpha \mu}{2})^K \leq \exp(- \alpha \mu K/2) =\frac{1}{K^2}$.
\end{proof}
\end{corollary}
\noindent We note that as in Remark \ref{remark:stepsize}, the selected step sizes in Corollaries \ref{corollary:pl_ED} and \ref{corollary:pl_GT} are not optimized. The hidden log factors can be removed if we adopt decaying step-size techniques ({\it e.g.}, \cite{pu2019sharp}). However, the main conclusion we want to emphasize is the network-dependent bounds, which do not change if we select better step sizes. Under the PL condition, linear speedup is achieved when $K$ is large enough such that the dominating term is $O(\tfrac{1}{nK})$. Table \ref{table_pl} lists the transient times implied by the above result and compares them with existing results. It is clear that our results significantly improve upon existing GT results. Moreover, our bound for ED/D$^2$ matches the existing bound, which is derived under the stronger assumption of strong convexity.
\begin{remark}[\sc Steady-state error] \rm
For a constant step size $\alpha$ independent of $k$, we can let $k$ go to $\infty$ in \eqref{ED_PL_result} and \eqref{ATC_GT_PL_result} to arrive at the following steady-state results.
\begin{itemize}
\item For ED/D$^2$ \eqref{exact_diff}, we have
\begin{equation} \label{ED_PL_ss_result}
\begin{aligned}
\limsup_{k \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{k})-f^\star] &\leq
O\left(\frac{\alpha L \sigma^2}{\mu n}
+\frac{\alpha^2 L^2 \lambda^2 \sigma^2}{\mu (1-\lambda) }
+\frac{ \alpha^4 L^4 \lambda^2 \sigma^2}{ \mu (1-\lambda)^3 n } \right).
\end{aligned}
\end{equation}
\item For ATC-GT \eqref{GT_atc_alg}, we have
\begin{equation} \label{ATC_GT_PL_ss_result}
\begin{aligned}
\limsup_{k \rightarrow \infty} \frac{1}{n}\sum_{i=1}^n \Ex [f(x_i^{k})-f^\star] &\leq
O\left(\frac{\alpha L \sigma^2}{\mu n}
+\frac{\alpha^2 L^2 \lambda^4 \sigma^2}{\mu (1-\lambda)}
+\frac{ \alpha^4 L^4 \lambda^4 \sigma^2}{ \mu (1-\lambda)^4 n } \right).
\end{aligned}
\end{equation}
\end{itemize}
The bound \eqref{ED_PL_ss_result} for ED/D$^2$ has the same network dependent bounds as in the strongly-convex case \cite{yuan2020influence}. Moreover, the bound \eqref{ATC_GT_PL_ss_result} for ATC-GT improves upon existing bounds for both strongly-convex \cite{pu2021distributed} and PL settings \cite{xin2021improved}, which are on the order of $
O\big(\alpha^2 \lambda^2 \sigma^2/ (1-\lambda)^3\big)
$.
\hfill{$\square$}
\end{remark}
\section{Simulation Results}\label{sec:simu-nonconvex}
In this section, we validate the established theoretical results with numerical simulations.
\subsection{Simulation for non-convex problems}
\noindent \textbf{The problem.} We consider the logistic regression problem with a non-convex regularization term \cite{antoniadis2011penalized,xin2021improved}.
The problem formulation is given by $\min_{x \in \real^d} \frac{1}{n}\sum_{i=1}^n f_i(x) + \rho\, r(x)$, where
\begin{equation} \label{non_convex_lr}
\begin{aligned}
f_i(x) = \frac{1}{L}\sum_{\ell=1}^L \ln\big(1 + \exp(-y_{i,\ell}h_{i,\ell}\tran x)\big) \quad \mbox{and} \quad r(x) = \sum_{j=1}^d \frac{x(j)^2}{ 1 + x(j)^2}.
\end{aligned}
\end{equation}
In the above problem, $x=\col\{x(j)\}_{j=1}^d \in \real^d$ is the unknown variable to be optimized, and $\{h_{i,\ell}, y_{i,\ell}\}_{\ell=1}^L$ is the training dataset held by agent $i$, in which $h_{i,\ell}\in \mathbb{R}^d$ is a feature vector and $y_{i,\ell} \in \{-1,+1\}$ is the corresponding label. The regularization term $r(x)$ is smooth but non-convex, and the regularization constant $\rho > 0$ controls its influence.
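For concreteness, the following is a minimal Python sketch (not the authors' code) of the local loss $f_i$, the non-convex regularizer $r$, and their gradients, validated with a finite-difference check:

```python
import numpy as np

# Minimal sketch of f_i(x) = (1/L) sum_l log(1 + exp(-y_l h_l^T x)) and
# r(x) = sum_j x_j^2/(1 + x_j^2), with analytic gradients (illustrative sizes).
rng = np.random.default_rng(1)
d, L = 5, 50
H = rng.standard_normal((L, d))                 # rows are the features h_{i,l}
y = rng.choice([-1.0, 1.0], size=L)

def f_i(x):
    return np.mean(np.log1p(np.exp(-y * (H @ x))))

def grad_f_i(x):
    s = 1.0 / (1.0 + np.exp(y * (H @ x)))       # sigmoid(-y_l h_l^T x)
    return -(H.T @ (y * s)) / L

def r(x):
    return np.sum(x**2 / (1 + x**2))

def grad_r(x):
    return 2 * x / (1 + x**2) ** 2

# Central finite-difference check of both gradients.
x = rng.standard_normal(d)
eps = 1e-6
for fn, gr in [(f_i, grad_f_i), (r, grad_r)]:
    g = gr(x)
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        num = (fn(x + e) - fn(x - e)) / (2 * eps)
        assert abs(num - g[j]) < 1e-5
```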
\vspace{1mm}
\noindent \textbf{Experimental settings.} In our experiments, we set $d=20$, $L=2000$ and $\rho = 0.001$. To control data heterogeneity across the agents, we first let each agent $i$ be associated with a local solution $x^\star_{i}$, generated as $x^\star_i = x^\star + v_i$, where $x^\star\sim {\mathcal{N}}(0, I_d)$ is a randomly generated vector and $v_i \sim {\mathcal{N}}(0, \sigma^2_h I_d)$ controls the similarity between the local solutions. Generally speaking, a large $\sigma^2_h$ results in local solutions $\{x_i^\star\}$ that are vastly different from each other. With $x_i^\star$ at hand, we can generate local data that follows distinct distributions. At agent $i$, we generate each feature vector $h_{i,\ell} \sim {\mathcal{N}}(0, I_d)$. To produce the corresponding label $y_{i,\ell}$, we draw a random variable $z_{i,\ell} \sim {\mathcal{U}}(0,1)$. If $z_{i,\ell} \le 1/\big(1 + \exp(-h_{i,\ell}\tran x_i^\star)\big)$, we set $y_{i,\ell} = 1$; otherwise $y_{i,\ell} = -1$. Clearly, the solution $x_i^\star$ controls the distribution of the labels. In this way, we can easily control data heterogeneity by adjusting $\sigma^2_h$. Furthermore, to easily control the influence of gradient noise, we generate the stochastic gradient by adding Gaussian noise to the true gradient, {\it i.e.}, $\widehat{\nabla f}_i(x) = {\nabla f}_i(x) + s_i$, where $s_i\sim {\mathcal{N}}(0, \sigma^2_{n} I_d)$. We can control the magnitude of the gradient noise by adjusting $\sigma^2_n$. The metric for all simulations is $\Ex \|\nabla f(\bar{x})\|^2$ where $\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i$.
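The data-generation procedure can be sketched as follows (again not the authors' code; the label rule is implemented here as standard logistic sampling, i.e., $y_{i,\ell}=1$ with probability $1/(1+\exp(-h_{i,\ell}\tran x_i^\star))$, which is an interpretation of the description above):

```python
import numpy as np

# Sketch of heterogeneous data generation: each agent perturbs a shared
# solution x_star by Gaussian noise of variance sigma_h2, then samples
# labels via logistic sampling under its own local solution.
rng = np.random.default_rng(2)
n, d, L = 4, 20, 2000
sigma_h2 = 0.2

x_star = rng.standard_normal(d)
data = []
for i in range(n):
    x_i_star = x_star + np.sqrt(sigma_h2) * rng.standard_normal(d)
    H = rng.standard_normal((L, d))                      # feature vectors h_{i,l}
    z = rng.uniform(size=L)
    y = np.where(z <= 1.0 / (1.0 + np.exp(-H @ x_i_star)), 1.0, -1.0)
    data.append((H, y))

# Larger sigma_h2 -> local solutions (and hence label distributions) differ more.
print([d_i[1].mean() for d_i in data])
```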
\vspace{1mm}
\noindent \textbf{Performances of SUDA with and without data heterogeneity.} In this set of simulations, we will test the performance of ED/D$^2$ and ATC-GT (which are covered by the SUDA framework) with constant and decaying step size, and compare them with \textsc{Dsgd}. In the simulation, we organize $n=32$ agents into an undirected ring topology.
Fig.~\ref{fig:peformance-const-decay-homo} shows the performances of these algorithms with homogeneous data, {\it i.e.}, $\sigma_h^2 = 0$. The gradient noise magnitude is set as $\sigma_n^2 = 0.001$.
In the left plot, we use a constant step size $\alpha = 0.01$. In the right plot, we set the initial step size to $0.01$ and then scale it by $0.5$ after every $100$ iterations. It is observed in Fig.~\ref{fig:peformance-const-decay-homo} that all stochastic algorithms perform similarly to each other with homogeneous data.
Fig.~\ref{fig:peformance-const-decay-slight-heterogeneity} shows the performance under heterogeneous data settings with $\sigma_h^2 = 0.2$. The gradient noise and the step size values are the same as in Fig.~\ref{fig:peformance-const-decay-homo}. It is clear from Fig.~\ref{fig:peformance-const-decay-slight-heterogeneity} that ED/D$^2$ and ATC-GT are more robust to data heterogeneity compared to \textsc{Dsgd}. We see that ED/D$^2$ can converge as well as \textsc{Psgd} while ATC-GT performs slightly worse than ED/D$^2$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ring_36nodes_const_10142021_homo_changeLegend.pdf}
\hspace{1cm}
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ring_36nodes_decay_10142021_homo_changeLegend.pdf}
\caption{Performance of different stochastic algorithms to solve problem \eqref{non_convex_lr} with homogeneous data. Constant and decaying learning rates are used in the left and right plots, respectively. All algorithms in both plots are over the ring topology with $\lambda = 0.99$.}
\label{fig:peformance-const-decay-homo}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ring_36nodes_const_10142021_changeLegend.pdf}
\hspace{1cm}
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ring_36nodes_decay_10142021_changeLegend.pdf}
\caption{Performance of different stochastic algorithms to solve problem \eqref{non_convex_lr} with heterogeneous data. Constant and decaying learning rates are used in the left and right plots, respectively. All algorithms in both plots are over the ring topology with $\lambda = 0.99$. }
\label{fig:peformance-const-decay-slight-heterogeneity}
\end{figure}
\vspace{1mm}
\noindent \textbf{Influence of network topology.} In this set of simulations, we will test the influence of the spectral gap $1-\lambda$ on various decentralized stochastic algorithms. We generate four topologies: the Erdos-Renyi graph with probability $0.8$, the Ring topology, the Grid topology, and the scaled Ring topology with $\lambda = (9 + \lambda_{\rm Ring})/10$. The value of the mixing rate $\lambda$ for each topology is listed in the caption of Fig.~\ref{fig:influence_of_the_topology}. We utilize a constant step size $0.01$ for each plot. It is observed in Fig.~\ref{fig:influence_of_the_topology} that each decentralized algorithm converges to a less accurate solution as $\lambda \to 1$ while \textsc{Psgd} is immune to the network topology. In addition, it is also observed that ED/D$^2$ is least sensitive to the network topology while \textsc{Dsgd} is most sensitive (under the heterogeneous setting), which is consistent with our results listed in Table \ref{table_non_convex}.
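As an illustration of where the mixing rate $\lambda$ comes from, the sketch below (our own construction, assuming uniform $1/3$ ring weights) computes $\lambda$ as the second-largest eigenvalue magnitude of the weight matrix; a "scaled Ring" can be obtained by the lazy-mixing trick $W' = 0.9\, I + 0.1\, W$, which maps each eigenvalue $\lambda_i$ to $0.9 + 0.1\lambda_i$ and hence realizes $\lambda' = (9+\lambda_{\rm Ring})/10$:

```python
import numpy as np

def ring_weights(n):
    """Doubly stochastic weights for an undirected ring: each node
    averages itself and its two neighbors with weight 1/3."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1/3
        W[i, (i - 1) % n] = 1/3
        W[i, (i + 1) % n] = 1/3
    return W

def mixing_rate(W):
    """lambda = second-largest eigenvalue magnitude of a symmetric W."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))
    return eig[-2]

n = 32
W_ring = ring_weights(n)
lam_ring = mixing_rate(W_ring)          # ~0.987 for n = 32

# Lazy mixing shifts every eigenvalue to 0.9 + 0.1*lambda_i,
# so lambda' = (9 + lambda_Ring)/10.
W_scaled = 0.9 * np.eye(n) + 0.1 * W_ring
lam_scaled = mixing_rate(W_scaled)
```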
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_randGraph_36nodes_const_10142021_changeLegend.pdf}
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_Grid_36nodes_const_10142021_changeLegend.pdf}
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ring_36nodes_const_10142021_changeLegend.pdf}
\includegraphics[width=0.4\textwidth]{figure/nonconvex_lr_ScaledRing_36nodes_const_10142021_changeLegend.pdf}
\caption{Performance of different stochastic algorithms to solve problem \eqref{non_convex_lr} with different topologies. Top-left: Erdos-Renyi random graph with $\lambda = 0.32$; Top-right: Grid with $\lambda = 0.94$; Bottom-left: Ring with $\lambda = 0.99$; Bottom-right: Scaled Ring with $\lambda = 0.999$.}
\label{fig:influence_of_the_topology}
\end{figure}
\subsection{Simulation results under the PL condition}
\noindent \textbf{The problem.} In this section we examine the performance of ED/D$^2$ and GT algorithms for the non-convex problem under the PL condition. We consider the same setup used in \cite{xin2021improved} where the problem formulation is given by $\min_{x \in \real^d} \frac{1}{n}\sum_{i=1}^n f_i(x)$ with $f_i(x) = x^2 + 3\sin^2(x) + a_i x \cos(x)$. By letting $\sum_{i=1}^n a_i = 0$, we have $f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x) = x^2 + 3\sin^2(x)$, which is a non-convex cost function that satisfies the PL condition \cite{karimi2016linear}.
\vspace{1mm}
\noindent \textbf{Experimental settings.} We set $n=32$ in all simulations. To generate $a_i$, we let $a_i = \sigma_h^2 \cdot i$ and $a_{n+1-i} = - a_i$ for $i \in \{1,\dots, n/2\}$, where $\sigma_h^2$ is used to control data heterogeneity. In this way, we can guarantee $\sum_{i=1}^n a_i = 0$. Similar to Sec.~\ref{sec:simu-nonconvex}, we construct the stochastic gradient by adding Gaussian noise to the true gradient, {\it i.e.}, $\widehat{\nabla f}_i(x) = {\nabla f}_i(x) + s_i$ in which $s_i\sim {\mathcal{N}}(0, \sigma^2_{n} I_d)$. The metric for all simulations is $\Ex f(\bar{x})-f^\star$ where $\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i$.
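The coefficient construction and the resulting averaged cost can be sketched as follows (a sketch under our own reading of the recipe: index $i$ is paired with $n+1-i$ so the assignments are disjoint and $\sum_i a_i = 0$ holds exactly):

```python
import numpy as np

def make_a(n=32, sigma_h2=2.0):
    """Antisymmetric coefficients: a_i = sigma_h^2 * i and a_{n+1-i} = -a_i
    for i = 1..n/2, so that sum_i a_i = 0."""
    a = np.zeros(n)
    for i in range(1, n // 2 + 1):
        a[i - 1] = sigma_h2 * i     # a_i      (1-based index i)
        a[n - i] = -a[i - 1]        # a_{n+1-i}
    return a

def f_i(x, a_i):
    """Local cost: x^2 + 3 sin^2(x) + a_i x cos(x)."""
    return x**2 + 3*np.sin(x)**2 + a_i * x * np.cos(x)

a = make_a()
xs = np.linspace(-3.0, 3.0, 7)
# The heterogeneous terms cancel in the average, leaving x^2 + 3 sin^2(x).
f_bar = np.mean([f_i(xs, ai) for ai in a], axis=0)
```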
\vspace{1mm}
\noindent \textbf{Influence of network topology.} We test the influence of the network topology on various stochastic decentralized methods. In the simulations, we set the data heterogeneity $\sigma_h^2 = 2$ and the gradient noise $\sigma_n^2 = 0.1$. We generate two types of topologies: an Erdos-Renyi random graph with $\lambda = 0.28$, and an Erdos-Renyi random graph with $\lambda = 0.87$. We employ a constant step size $0.008$ for all tested algorithms. It is observed in Fig.~\ref{fig:PL-peformance} that the performance of all algorithms deteriorates over badly-connected network topologies, i.e., as $\lambda \to 1$. We see that ED/D$^2$ is least sensitive to the network topology while \textsc{Dsgd} is most sensitive (under the heterogeneous setting), which is consistent with our results listed in Table \ref{table_pl}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figure/PL_nonconvex_rand_0p23_32nodes_const_10142021_changeLegend.pdf}
\hspace{1cm}
\includegraphics[width=0.4\textwidth]{figure/PL_nonconvex_rand_32nodes_const_10142021_changeLegend.pdf}
\caption{Performance of different stochastic algorithms to solve the non-convex problem under the PL condition with different topologies. Left: Erdos-Renyi random graph with $\lambda = 0.28$; Right: Erdos-Renyi random graph with $\lambda = 0.87$.}
\label{fig:PL-peformance}
\end{figure}
\section{Conclusion}
In this work, we analyzed the convergence properties of {\bfseries \footnotesize SUDA}~\eqref{SUDA_alg} for decentralized stochastic non-convex optimization problems. {\bfseries \footnotesize SUDA}~is a general algorithmic framework that includes several state-of-the-art decentralized methods as special cases, such as EXTRA, Exact-Diffusion/D$^2$, and gradient-tracking methods. We established the convergence of {\bfseries \footnotesize SUDA}~under both general non-convex and PL condition settings. Explicit convergence rate bounds are provided in terms of the problem parameters and the network topology. When specializing {\bfseries \footnotesize SUDA}~to the particular instances of ED/D$^2$, EXTRA, and GT-methods, we achieve improved network topology dependent rates compared to existing results under non-convex settings. Moreover, our rates show that ED/D$^2$, EXTRA, and GT-methods are less sensitive to the network topology compared to \textsc{Dsgd} under the heterogeneous data setting.
Finally, it should be noted that the lower bound from \cite{lu2021optimal} suggests that it could be possible to further improve these network-dependent rates. However, such improvements have only been established by utilizing multiple gossip rounds as discussed in the introduction. Therefore, one potential future direction is to investigate whether these rates can be further improved {\em without} utilizing multiple gossip rounds.
Heavy quarkonium plays an important role in high energy collider physics, as it presents an ideal laboratory for studying the perturbative and non-perturbative properties of quantum chromodynamics (QCD) within a controlled environment.
The nonrelativistic QCD (NRQCD) factorization formalism \cite{Bodwin:1994jh}, which was developed by Bodwin, Braaten, and Lepage, provides a systematic framework for the theoretical study of quarkonium production and decay.
According to the NRQCD factorization formalism, quarkonium production rates can be written as a sum of products of short distance coefficients and the long distance matrix elements (LDMEs).
The short distance coefficients can be calculated as perturbation series in the strong-coupling constant $\alpha_s$, and the LDMEs can be expanded in powers of the relative velocity $v$ of the heavy quarks in the bound state.
In this way, the theoretical prediction takes the form of a double expansion in $\alpha_s$ and $v$.
Although quarkonium production has been extensively investigated at various colliders, the existing studies are still not sufficient to clarify the underlying production mechanisms \cite{Campbell:2007ws,Gong:2008sn,Ma:2010yw,Butenschoen:2010rq}.
Quarkonium production in association with two heavy quarks via massless vector boson fusion, i.e. the $\gamma(g)+\gamma(g)\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1}$ process, where $Q_i$ represents a $c$ or $b$ quark, is an interesting topic to study.
Experimentally, heavy-quark tagging techniques are now routinely used with high efficiency; hence, the observation of these associated production processes is expected to be possible.
On the other hand, as has been indicated by previous studies \cite{Klasen:2001cu,Qiao:2003ba,Li:2009zzu,Sun:2015hhv,Chen:2016hju,Artoisenet:2007xi,Chang:1992jb,Chang:1994aw,Berezhnoy:1994bb,Kolodziej:1994uu}, these processes are the dominant color-singlet (CS) channels for corresponding single quarkonium inclusive production, and therefore be crucial for pinning down the contributions from CS model.
For $B_c$ meson production, a similar mechanism is even more important, as quark flavor conservation requires that the $B_c$ meson be produced in association with an additional $b\bar{c}$ pair.
Despite the admitted importance, these processes are not fully investigated due to the high technical difficulty.
At present, the only full next-to-leading order (NLO) study is in Ref. \cite{Chen:2016hju}, where the NLO QCD corrections to the $\gamma+\gamma\to J/\psi +c+\bar{c}$ process are calculated.
As a further step, in this work, we calculate the NLO QCD corrections to $\gamma+\gamma\to \eta_c +c+\bar{c}$, $\gamma+\gamma\to \eta_b +b+\bar{b}$, and $\gamma+\gamma\to B_c +b+\bar{c}$ processes.
The rest of the paper is organized as follows.
In Sec. II, we present the primary formulae employed in the calculation.
In Sec. III, we elucidate some technical details for the analytical calculation.
In Sec. IV, the numerical evaluation of the processes concerned is performed.
The last section is reserved for summary and conclusions.
\section{FORMULATION}
Photon-photon scattering can be achieved at an $e^+e^-$ collider such as SuperKEKB, where the initial photons are generated by the bremsstrahlung effect.
The energy spectrum of the bremsstrahlung photon is well described by the Weizsacker-Williams approximation (WWA) \cite{Frixione:1993yw}:
\begin{equation}
f_{\gamma}(x) = \frac{\alpha}{2\pi}\left(\frac{1+(1-x)^2}{x}\log\left(\frac{Q^{2}_{\rm max}}{Q^{2}_{\rm min}}\right)+2m^2_{e}x\left(\frac{1}{Q^{2}_{\rm max}}-\frac{1}{Q^{2}_{\rm min}}\right)\right),
\end{equation}
where $Q^{2}_{\rm min}=m^{2}_{e}x^{2}/(1-x)$, $Q^{2}_{\rm max}=(\theta_{c}\sqrt{s}/2)^2(1-x)+Q^{2}_{\rm min}$, $m_e$ is the electron mass, $x=E_{\gamma}/E_{e}$ is the energy fraction of photon, $\sqrt{s}$ is the collision energy of the $e^+e^-$ collider, $\theta_{c}=32$ mrad \cite{Klasen:2001cu} is the maximum scattering angle of the electron or positron.
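A direct transcription of the WWA spectrum may be useful for numerical work (our own code; the SuperKEKB value $\sqrt{s}=10.6$~GeV is used only as an illustrative default):

```python
import numpy as np

ALPHA = 1 / 137.065   # fine-structure constant (value used in this paper)
M_E = 0.511e-3        # electron mass in GeV

def f_wwa(x, sqrt_s=10.6, theta_c=32e-3):
    """Weizsacker-Williams photon spectrum f_gamma(x) of Eq. (1);
    x is the photon energy fraction, theta_c the electron cutoff angle."""
    q2_min = M_E**2 * x**2 / (1.0 - x)
    q2_max = (theta_c * sqrt_s / 2.0)**2 * (1.0 - x) + q2_min
    return ALPHA / (2*np.pi) * ((1 + (1 - x)**2) / x * np.log(q2_max / q2_min)
                                + 2 * M_E**2 * x * (1/q2_max - 1/q2_min))
```

As expected for a bremsstrahlung spectrum, $f_\gamma(x)$ is strongly peaked at small $x$.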
At a future $e^+e^-$ collider like the Circular Electron Positron Collider (CEPC), high energy photons can be produced through the Compton back-scattering of laser light off the electron or positron beam, namely the laser back scattering (LBS) effect.
The LBS photons mostly carry a large energy fraction of the incident electron or positron beam, and at the same time can achieve high luminosity.
The energy spectrum of LBS photon is \cite{Ginzburg:1981vm}
\begin{equation}
f_{\gamma}(x)=\frac{1}{N}\left(1-x+\frac{1}{1-x}-4r(1-r)\right),
\end{equation}
where $r=\frac{x}{x_{m}(1-x)}$ and N is the normalization factor:
\begin{equation}
N=\left(1-\frac{4}{x_{m}}-\frac{8}{x^{2}_{m}}\right)\log(1+x_{m})+\frac{1}{2}+\frac{8}{x_{m}}-\frac{1}{2(1+x_{m})^2}\ .
\end{equation}
Here $x_{m} \approx 4.83$ \cite{Telnov:1989sd}, and the maximum energy fraction of the LBS photon is restricted by $0 \leq x \leq \frac{x_m}{1+x_m}\approx 0.83$.
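The LBS spectrum of Eqs.~(2)-(3) can likewise be transcribed and checked numerically (our own code); with the normalization factor $N$ above, $f_\gamma(x)$ integrates to unity over $0 \le x \le x_m/(1+x_m)$:

```python
import numpy as np

X_M = 4.83
X_MAX = X_M / (1 + X_M)   # ~0.83, maximum photon energy fraction

def lbs_norm(x_m=X_M):
    """Normalization factor N of Eq. (3)."""
    return ((1 - 4/x_m - 8/x_m**2) * np.log(1 + x_m)
            + 0.5 + 8/x_m - 1/(2*(1 + x_m)**2))

def f_lbs(x, x_m=X_M):
    """Laser back-scattering photon spectrum f_gamma(x) of Eq. (2)."""
    r = x / (x_m * (1 - x))
    return (1 - x + 1/(1 - x) - 4*r*(1 - r)) / lbs_norm(x_m)

# Trapezoidal check that the spectrum is normalized to 1 on [0, X_MAX]:
xs = np.linspace(1e-6, X_MAX, 200001)
vals = f_lbs(xs)
integral = float(np.sum(0.5 * (vals[1:] + vals[:-1])) * (xs[1] - xs[0]))
```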
The total cross section can be obtained by convoluting the $\gamma+\gamma\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1}$ cross section with the photon distribution functions:
\begin{equation}
d\sigma=\int dx_{1}dx_{2} f_{\gamma}(x_{1})f_{\gamma}(x_{2})d \hat{\sigma}( \gamma+\gamma\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1})\ ,
\end{equation}
where $\mathcal{Q} = \eta_c$, $\eta_b$ or $B_c$, and $Q_i$ denotes charm or bottom quark accordingly.
The $d \hat{\sigma}$ is calculated perturbatively up to the NLO level,
\begin{equation}
d\hat{\sigma}( \gamma+\gamma\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1})= d\hat{\sigma}_{\rm born} + d\hat{\sigma}_{\rm virtual} + d\hat{\sigma}_{\rm real} + \mathcal{O}(\alpha^{2}\alpha^{4}_{s})\ .
\end{equation}
The Born level cross section, the virtual correction, and the real correction take the following forms:
\begin{equation}
\begin{split}
&d\hat{\sigma}_{\rm born}=\frac{1}{2\hat{s}}\overline{\sum}|\mathcal{M}_{\rm tree}|^{2}d{\rm PS}_{3}\ ,\\
&d\hat{\sigma}_{\rm virtual}=\frac{1}{2\hat{s}}\overline{\sum}2{\rm Re}(\mathcal{M}^{*}_{\rm tree}\mathcal{M}_{\rm oneloop})d{\rm PS}_{3}\ ,\\
&d\hat{\sigma}_{\rm real}=\frac{1}{2\hat{s}}\overline{\sum}|\mathcal{M}_{\rm real}|^{2}d{\rm PS}_{4}\ ,
\end{split}
\end{equation}
where $\hat{s}$ is the center-of-mass energy square for the two photons, $\overline{\sum}$ means sum (average) over the polarizations and colors of final (initial) state particles, $d{\rm PS}_{3}$ ($d{\rm PS}_{4}$) denotes final state three (four)-body phase space.
The computation of $d \hat{\sigma}$ can be carried out by using the covariant projection method \cite{Bodwin:2013zu}.
At the leading order of relative velocity expansion, the standard spin and color projection operator can be simplified to
\begin{equation}
\Pi = \frac{1}{2\sqrt{m_{\mathcal{Q}}}} \gamma^5 (\slashed{p}_{\mathcal{Q}} + m_{\mathcal{Q}})\otimes\left(\frac{\bf{1}_{c}}{\sqrt{N_{c}}}\right),\\
\label{projection}
\end{equation}
where $p_{\mathcal{Q}}$ and $m_{\mathcal{Q}}$ are the momentum and mass of the pseudoscalar quarkonium $\mathcal{Q}$, respectively; $\bf{1}_{c}$ represents the unit color matrix, and $N_c=3$ is the number of colors in QCD.
\section{ANALYTICAL CALCULATION}
\begin{figure}[thbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{rr2bctree.eps}
\caption{Typical LO Feynman diagrams for $\gamma+\gamma\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1}$ process.}
\label{figtree}
\end{center}
\end{figure}
\begin{figure}[thbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{rr2bcloop.eps}
\caption{Typical Feynman diagrams in virtual corrections.}
\label{figloop}
\end{center}
\end{figure}
At LO, there are twenty Feynman diagrams contributing to the $\gamma+\gamma\to \mathcal{Q}[Q_1\bar{Q_2}]+Q_2+\bar{Q_1}$ process. Half of them are shown in Fig.~\ref{figtree}, and the rest can be generated by exchanging the two initial photons.
The typical Feynman diagrams in virtual correction are shown in Fig. \ref{figloop}.
Therein, Loop N1-N5 and Loop N11 arise from the corrections to LO Feynman diagrams, and the rest are new topologies appearing at NLO.
Note that, according to charge-parity conservation, the contributions of the Loop N6-type diagrams vanish, which is verified by our explicit calculation.
Obviously, Loop N6-N10 do not appear in the $B_c$ production case.
In the computation of virtual corrections, the conventional dimensional regularization with $D=4-2\epsilon$ is employed to regularize the ultraviolet (UV) divergences, while the infrared (IR) divergences are regularized by introducing an infinitesimal gluon mass $\lambda$.
As a result, the UV and IR singularities appear as $1/\epsilon$ and $\ln (\lambda^2)$ terms, respectively.
In renormalized perturbation theory, the UV singularities are canceled by corresponding counter term diagrams, hence the final virtual corrections are UV finite.
Here, the relevant renormalization constants include $Z_{2}, Z_{3}, Z_{m}$ and $Z_{g}$, which correspond to the heavy quark field, gluon field, heavy quark mass and strong coupling constant, respectively.
Among them, $Z_{2}$ and $Z_{m}$ are defined in the on-mass-shell (OS) scheme, while others are defined in the modified minimal-subtraction ($\overline{\rm MS}$) scheme. The counterterms are
\begin{equation}
\begin{split}
&\delta Z^{\rm OS}_{2}=-C_{F}\frac{\alpha_{s}}{4\pi}\left[2\ln \frac{\lambda^2}{m^2} + \frac{1}{\epsilon_{\rm UV}}-\gamma_{E}+\ln\frac{4\pi\mu^{2}}{m^{2}}+4\right],\\
&\delta Z^{\rm OS}_{m}=-3C_{F}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm UV}}-\gamma_{E}+\ln\frac{4\pi\mu^{2}}{m^{2}}+\frac{4}{3}\right],\\
&\delta Z^{\overline{\rm MS}}_{3}=(\beta_{0}-2C_{A})\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm UV}}-\gamma_{E}+\ln(4\pi)\right],\\
&\delta Z^{\overline{\rm MS}}_{g}=-\frac{\beta_{0}}{2}\frac{\alpha_{s}}{4\pi}\left[\frac{1}{\epsilon_{\rm UV}}-\gamma_{E}+\ln(4\pi)\right],
\label{eq_ctterm}
\end{split}
\end{equation}
where $\gamma_E$ is the Euler's constant; $\mu$ is the renormalization scale, $m$ stands for $m_c$ or $m_b$; $\beta_{0}=\frac{11}{3}C_{A}-\frac{4}{3}T_{F}n_{f}$ is the one-loop coefficient of the QCD $\beta$-function, $n_{f}$ denotes the active quark flavor numbers; $C_{A}=3, C_{F}=\frac{4}{3}$ and $T_{F}=\frac{1}{2}$ are QCD color factors.
Note that, since there is no external gluon leg, the final result is independent of $\delta Z_3$.
The IR singularities in virtual corrections can be isolated by using the method proposed in \cite{Kramer:1995nb}.
Considering the scalar 5-point integral of Fig. \ref{figloop} Loop N4, the IR singularities originate from the $q\to 0$ region.
By performing power counting analysis, we have
\begin{equation}
\begin{split}
E_0 =
&\frac{1}{i\pi^2}\int \frac{d^{4}q}{(2\pi)^4}\frac{1}{q^2-\lambda^2}\frac{1}{(q-p_5)^2-m^2}\frac{1}{(q-p_4-p_5)^2-\lambda^2}\\
\times&\frac{1}{(q+p_6-p_2)^2-m^2}\frac{1}{(q+p_6)^2-m^2} \\
\overset{\rm soft}{\sim}&\frac{1}{(p_4+p_5)^2}\frac{1}{(p_6-p_2)^2-m^2} C_0\left(p_6^2,(p_5+p_6)^2,p_5^2,\lambda^2,m^2,m^2\right)\\
\overset{\rm soft}{\sim}&\frac{1}{s_{45}(t_{26}-m^2)}\frac{\ln(\lambda^2)}{\sqrt{s_{56}(s_{56}-4m^2)}}\left(\ln\frac{\sqrt{s_{56}}-\sqrt{s_{56}-4m^2}}{\sqrt{s_{56}}+\sqrt{s_{56}-4m^2}}+i\pi\right),
\end{split}
\end{equation}
where $s_{45}=(p_4+p_5)^2$, $s_{56}=(p_5+p_6)^2$, $t_{26}=(p_2-p_6)^2$.
The Coulomb singularities, which appear when a potential gluon is exchanged between the constituent quarks of a quarkonium, are also regularized by the infinitesimal gluon mass $\lambda$.
We obtain
\begin{equation}
2{\rm Re}(\mathcal{M}^{*}_{\rm tree}\mathcal{M}_{\rm oneloop}) \overset{\rm Coulomb}{\sim} |\mathcal{M}_{\rm tree}|^{2} \frac{2\alpha_s C_F m}{\lambda},
\label{eq_coulomb}
\end{equation}
which can be absorbed into the wave function of quarkonium.
Note, for $B_c$ production, the $m$ in Eq. (\ref{eq_coulomb}) should be replaced by $\frac{2m_bm_c}{m_b+m_c}$.
\begin{figure}[thbp]
\begin{center}
\includegraphics[width=0.9\textwidth]{rr2bcreal.eps}
\caption{Typical Feynman diagrams in real corrections.}
\label{figreal}
\end{center}
\end{figure}
Typical Feynman diagrams in real corrections are shown in Fig. \ref{figreal}.
The IR divergences here are also regularized by infinitesimal gluon mass.
To isolate the IR singularities, the subtraction method formulated in Ref. \cite{Dittmaier:1999mb} is employed.
As a result, the contribution of real corrections can be separated into two parts:
\begin{equation}
\hat{\sigma}_{\rm real}=\hat{\sigma}_{\rm real}^A + \hat{\sigma}_{\rm real}^B,
\end{equation}
with
\begin{align}
\hat{\sigma}_{\rm real}^A &= \frac{1}{2\hat{s}}\int d{\rm PS}_{4} (\overline{\sum}|\mathcal{M}_{\rm real}|^{2} - |\mathcal{M}_{\rm sub}|^{2}), \\
\hat{\sigma}_{\rm real}^B &= \frac{1}{2\hat{s}}\int d{\rm PS}_{4} |\mathcal{M}_{\rm sub}|^{2} = \frac{1}{2\hat{s}}\int d{\rm PS}_{3} \int [dp_g]|\mathcal{M}_{\rm sub}|^{2}.
\end{align}
Here $[dp_g]$ denotes the phase space of the additional emitted gluon, $|\mathcal{M}_{\rm sub}|^{2}$ is an auxiliary subtraction function which holds the same asymptotic behavior as $\overline{\sum}|\mathcal{M}_{\rm real}|^{2}$ in the soft limits.
Hence the difference $(\overline{\sum}|\mathcal{M}_{\rm real}|^{2} - |\mathcal{M}_{\rm sub}|^{2})$ is non-singular at each point of phase space, and the integral can be evaluated with $\lambda=0$ everywhere.
With an appropriate construction of $|\mathcal{M}_{\rm sub}|^{2}$ \cite{Dittmaier:1999mb}, the integral $ \int [dp_g]|\mathcal{M}_{\rm sub}|^{2}$ can be carried out analytically.
After adding $2{\rm Re}(\mathcal{M}^{*}_{\rm tree}\mathcal{M}_{\rm oneloop})$ and $ \int [dp_g]|\mathcal{M}_{\rm sub}|^{2}$, the IR singularities, i.e. $\ln(\lambda^2)$ terms, cancel with each other as expected.
\section{NUMERICAL RESULTS}
In the numerical calculation, the input parameters are taken as
\begin{align}
&\alpha=1/137.065,\quad m_e=0.511\ {\rm MeV},\quad m_c=1.5\ {\rm GeV},\quad m_b=4.8\ {\rm GeV};\nonumber \\
&|R_{\eta_c}^{\rm LO}(0)|^2=0.528\ {\rm GeV}^3,\quad |R_{\eta_c}^{\rm NLO}(0)|^2=0.907\ {\rm GeV}^3; \nonumber \\
&|R_{\eta_b}^{\rm LO}(0)|^2=5.22\ {\rm GeV}^3,\quad |R_{\eta_b}^{\rm NLO}(0)|^2=7.48\ {\rm GeV}^3; \nonumber \\
&|R_{B_c}(0)|^2=1.642\ {\rm GeV}^3 \nonumber.
\end{align}
Here, the $B_c$ wave function at the origin is estimated by using the Buchmueller-Tye potential \cite{Eichten:1994gt}.
According to the heavy quark spin symmetry of NRQCD at the leading order in relative velocity expansion \cite{Bodwin:1994jh}, here we take $R_{\eta_c}(0)=R_{J/\psi}(0)$ and $R_{\eta_b}(0)=R_{\Upsilon}(0)$.
The $J/\psi$ and $\Upsilon$ radial wave functions at the origin are extracted from their leptonic widths.
\begin{equation}
\Gamma(\mathcal{Q}\to e^+e^-)=\frac{\alpha^2e_Q^2}{m_Q^2}|R_{\mathcal{Q}}(0)|^2\left(1-4C_F\frac{\alpha_s(\mu_0)}{\pi}\right),\ e^{}_Q=\begin{cases}\frac{2}{3},\ {\rm if}\ \mathcal{Q}=J/\psi\\ \frac{1}{3},\ {\rm if}\ \mathcal{Q}=\Upsilon\end{cases},
\end{equation}
with $\mu_0=2m^{}_Q$, $\Gamma(J/\psi\to e^+e^-)=5.55$ keV, and $\Gamma(\Upsilon\to e^+e^-)=1.34$ keV \cite{ParticleDataGroup:2020ssz}.
Note that the LO and NLO extractions are employed in the LO and NLO calculations, respectively.
In the NLO calculation, the two-loop formula
\begin{equation}
\frac{\alpha_{s}(\mu)}{4\pi}=\frac{1}{\beta_{0}L}-\frac{\beta_{1}\ln L}{\beta^{3}_{0}L^{2}},
\end{equation}
for the running coupling constant is employed, in which, $L=\ln (\mu^{2}/\Lambda^{2}_{\rm QCD})$, $\beta_0=\tfrac{11}{3}C_A-\tfrac{4}{3}T_Fn_f$, $\beta_{1}=\frac{34}{3}C^{2}_{A}-4C_{F}T_{F}n_{f}-\frac{20}{3}C_{A}T_{F}n_{f}$, with $n_f=4$, $\Lambda_{\rm QCD}=297$ MeV for $\eta_c$ production, and $n_f=5$, $\Lambda_{\rm QCD}=214$ MeV for $\eta_b$ and $B_c$ production.
For LO calculation, the one-loop formula for the running coupling constant is used.
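For reference, the two-loop running coupling can be evaluated directly (our own code, using the $\beta$-function coefficients and $\Lambda_{\rm QCD}$ values quoted above):

```python
import numpy as np

CA, CF, TF = 3.0, 4.0/3.0, 0.5   # QCD color factors

def alpha_s(mu, nf, lam_qcd):
    """Two-loop running coupling alpha_s(mu); mu and lam_qcd in GeV."""
    b0 = 11.0/3.0*CA - 4.0/3.0*TF*nf
    b1 = 34.0/3.0*CA**2 - 4.0*CF*TF*nf - 20.0/3.0*CA*TF*nf
    L = np.log(mu**2 / lam_qcd**2)
    return 4*np.pi * (1.0/(b0*L) - b1*np.log(L)/(b0**3 * L**2))

# eta_c production: nf = 4, Lambda_QCD = 0.297 GeV, evaluated at mu = 2 m_c
a_charm = alpha_s(2*1.5, 4, 0.297)
# eta_b / B_c production: nf = 5, Lambda_QCD = 0.214 GeV, at mu = 2 m_b
a_bottom = alpha_s(2*4.8, 5, 0.214)
```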
\begin{table}[ht]
\caption{The LO and NLO total cross sections for $\eta_c+c+\bar{c}$ production via photon-photon fusion at the SuperKEKB collider. Here $m_c=1.5$ GeV, $\mu=r\sqrt{4m_c^2 + p_{t}^2}$ with $r=\{0.5,1,2\}$. The transverse momentum cut $ 0.2\ {\rm GeV} \le p_{t} \le 4.0$ GeV is imposed on the $\eta_c$ meson.}
\begin{center}
\begin{tabular}{p{2.5cm}<{\centering} p{2.5cm}<{\centering} p{2.5cm}<{\centering} p{2.5cm}<{\centering}}
\toprule
\hline
$r$ & $0.5$ & $1$ & $2$ \\
\hline
$\sigma_{\rm LO}$ (fb)&
$0.340$ &
$0.171$ &
$0.102$ \\
$\sigma_{\rm NLO}$ (fb)&
$0.647$ &
$0.366$ &
$0.239$ \\
\botrule
\end{tabular}
\end{center}
\label{tabscale}
\end{table}
\begin{table}[ht]
\caption{The LO and NLO total cross sections for $\eta_c+c+\bar{c}$ production via photon-photon fusion at the SuperKEKB collider. Here $m_c=\{1.4,1.5,1.6\}$ GeV, $\mu=\sqrt{4m_c^2 + p_{t}^2}$. The transverse momentum cut $ 0.2\ {\rm GeV} \le p_{t} \le 4.0$ GeV is imposed on the $\eta_c$ meson.}
\begin{center}
\begin{tabular}{p{2.5cm}<{\centering} p{2.5cm}<{\centering} p{2.5cm}<{\centering} p{2.5cm}<{\centering}}
\toprule
\hline
$m_c\ (\rm{GeV})$ & $1.4$ & $1.5$ & $1.6$ \\
\hline
$\sigma_{\rm LO}$ (fb)&
$0.393$ &
$0.171$ &
$0.074$ \\
$\sigma_{\rm NLO}$ (fb)&
$0.821$ &
$0.366$ &
$0.161$ \\
\botrule
\end{tabular}
\end{center}
\label{tabmc}
\end{table}
We investigate the production of $\eta_c+c+\bar{c}$ with WWA photons as the initial state at the SuperKEKB collider, where the beam energies of the positron and electron are 4 GeV and 7 GeV respectively, yielding a center-of-mass energy of 10.6 GeV.
In order to estimate the theoretical uncertainties induced by renormalization scale and charm quark mass, we set $\mu=r\sqrt{4m_c^2+p_t^2}$ with $r=\{0.5,1,2\}$, and $m_c=\{1.4,1.5,1.6\}$ GeV.
The corresponding results are shown in Table \ref{tabscale} and Table \ref{tabmc}, respectively.
It can be seen that the NLO corrections are significant, and the total cross sections are enhanced by a factor (defined as the $K$ factor) of about $2.1$.
To quantify the dependence of the cross section on the renormalization scale and the charm quark mass, we define $R_{\mu}=\frac{\sigma|_{r=0.5}-\sigma|_{r=2}}{\sigma|_{r=1}}$ and $R_{m}=\frac{\sigma|_{m_c=1.4}-\sigma|_{m_c=1.6}}{\sigma|_{m_c=1.5}}$.
Then we have $R_\mu^{\rm LO}=1.39$, $R_\mu^{\rm NLO}=1.11$, $R_{m}^{\rm LO}=1.86$ and $R_{m}^{\rm NLO}=1.80$, which indicates that the theoretical uncertainties are slightly reduced by NLO corrections.
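These numbers follow directly from Tables~\ref{tabscale} and \ref{tabmc}; the arithmetic can be reproduced as:

```python
# Cross sections (fb) from Table I (scale variation) and Table II (m_c variation)
sig_lo_r  = {0.5: 0.340, 1: 0.171, 2: 0.102}
sig_nlo_r = {0.5: 0.647, 1: 0.366, 2: 0.239}
sig_lo_m  = {1.4: 0.393, 1.5: 0.171, 1.6: 0.074}
sig_nlo_m = {1.4: 0.821, 1.5: 0.366, 1.6: 0.161}

K        = sig_nlo_r[1] / sig_lo_r[1]                          # ~ 2.1
R_mu_lo  = (sig_lo_r[0.5]  - sig_lo_r[2])   / sig_lo_r[1]      # ~ 1.39
R_mu_nlo = (sig_nlo_r[0.5] - sig_nlo_r[2])  / sig_nlo_r[1]     # ~ 1.11
R_m_lo   = (sig_lo_m[1.4]  - sig_lo_m[1.6]) / sig_lo_m[1.5]    # ~ 1.86
R_m_nlo  = (sig_nlo_m[1.4] - sig_nlo_m[1.6])/ sig_nlo_m[1.5]   # ~ 1.80
```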
In the coming year, the instantaneous luminosity of the SuperKEKB collider may reach $8\times10^{35}\ {\rm cm}^{-2}{\rm s}^{-1}$ \cite{urlsuperkekb}.
Then the number of $\eta_c+ c+\bar{c}$ events produced per year is estimated to be $(4.05\sim 20.7)\times 10^3$.
In experiment, $\eta_c$ can be reconstructed through its $K\bar{K}\pi$ decay channel with the branching ratio ${\rm Br}(\eta_c\to K\bar{K}\pi)=7.3\%$ \cite{ParticleDataGroup:2020ssz}, and the tagging efficiency of charm quark is about $41\%$ \cite{ATLAS:2018mgv}.
Therefore we expect to obtain $49\sim 253$ $\eta_c+ c+\bar{c}$ events per year.
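This event-rate estimate can be reproduced with a back-of-envelope calculation (our own sketch; it assumes one year $\approx 3.15\times 10^{7}$~s of running at the quoted luminosity, takes the NLO cross-section range from the $m_c$ scan in Table~\ref{tabmc}, and applies the charm-tagging efficiency to both heavy-quark jets):

```python
SIGMA_NLO_FB = (0.161, 0.821)   # NLO cross-section range (fb) from the m_c scan
LUMI = 8e35                     # instantaneous luminosity, cm^-2 s^-1
YEAR = 3.15e7                   # seconds in one year of running (assumption)
FB_TO_CM2 = 1e-39               # 1 fb = 1e-39 cm^2
BR_KKPI = 0.073                 # Br(eta_c -> K Kbar pi)
EPS_C = 0.41                    # single charm-tagging efficiency

produced = [s * FB_TO_CM2 * LUMI * YEAR for s in SIGMA_NLO_FB]
# produced: roughly (4.1 .. 20.7) x 10^3 events per year
reconstructed = [n * BR_KKPI * EPS_C**2 for n in produced]
# reconstructed: roughly 50 .. 254 events per year
```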
\begin{table}[ht]
\caption{The LO (in brackets) and NLO total cross sections for $\eta_c +c+\bar{c}$, $\eta_b+ b+\bar{b}$, $B_c+ b+\bar{c}$ production via photon-photon fusion at $\sqrt{s}=250$ GeV. Here the cut 1 GeV$\le p_{t} \le 50$ GeV is imposed.}
\begin{center}
\begin{tabular}{p{2.5cm}<{\raggedright} p{2.5cm}<{\raggedright} p{2.5cm}<{\raggedright} p{2.5cm}<{\raggedright}}
\toprule
\hline
photon & $\sigma_{\eta_c c\bar{c}}$ (fb) & $\sigma_{\eta_b b\bar{b}}$ (fb) & $\sigma_{B_c b\bar{c}}$ (fb) \\
\hline
WWA&
$217.8(126.7)$ &
$0.073(0.055)$ &
$1.051(0.778)$ \\
LBS&
$1127(606)$ &
$3.84(2.08)$ &
$34.0(23.1)$ \\
\botrule
\end{tabular}
\end{center}
\label{tabcepc}
\end{table}
At future high energy $e^+e^-$ colliders such as the CEPC, the collision energy may reach 250 GeV \cite{CEPCStudyGroup:2018ghi}.
The LBS photon collision can be realized by directing a laser beam onto each electron (positron) beam.
Therefore, we investigate the $\eta_c +c+\bar{c}, \eta_b+ b+\bar{b}$ and $B_c+b+\bar{c}$ productions under both WWA and LBS photon collisions with $\sqrt{s}=250$ GeV.
The corresponding LO and NLO total cross sections are presented in Table~\ref{tabcepc}.
As the energy scale of CEPC is higher than that of SuperKEKB, the $K$ factors here are less than 2.
Taking a typical luminosity $\mathcal{L}=10^{34}\ {\rm cm}^{-2}{\rm s}^{-1}$ \cite{ParticleDataGroup:2020ssz}, the number of reconstructed $\eta_c+ c+\bar{c}$ candidates per year is about $1.15 \times 10^4$ for the WWA photon case, $5.97 \times 10^4$ for the LBS photon case.
For $\eta_b+ b+\bar{b}$ production, the observation is somewhat difficult due to the insignificant production rates.
For $B_c$ inclusive production, it is not necessary to reconstruct the produced $b$ and $\bar{c}$ jets.
Assuming $B_c$ is reconstructed through $B_c^{\pm} \rightarrow J/\psi(1S)\pi^{\pm}$, whose branching fraction is predicted to be $0.5\%$ \cite{Chang:1992pt}, and $J/\psi$ is reconstructed through $J/\psi \rightarrow l^+l^-(l=e,\mu)$ with a branching fraction of about $12\%$ \cite{ParticleDataGroup:2020ssz}, the number of the reconstructed $B_c$ candidates for LBS photon case would reach 12 per year.
Note that, since almost all $B_c^*$ decay to $B_c$, a more precise prediction of the $B_c$ yield should take into account the $\gamma+\gamma\to B_c^*+b+\bar{c}$ process, and we leave it for future study.
\begin{figure}[!thbp]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{ptsuperkekb.eps}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{ptcepc.eps}}
\caption{The $p_t$ distribution for the $\eta_c c\bar{c}$ production via photon-photon fusion at (a) the SuperKEKB collider; (b) the CEPC. Here the renormalization scale $\mu=\sqrt{4m_{c}^2+p^{2}_{t}}$, and the transverse momentum cuts $0.2 \le p_{t} \le 4$ GeV and $1 \le p_{t} \le 10$ GeV are imposed on the $\eta_c$, respectively.}
\label{figpt}
\end{figure}
As the number of events corresponding to $\eta_c+c+\bar{c}$ production is large, it is worthwhile to perform a more elaborate phenomenological analysis.
The differential cross sections versus $p_t$, i.e., the transverse momentum of the $\eta_c$, at the SuperKEKB collider and the CEPC are shown in Fig.~\ref{figpt}.
It can be seen that the NLO corrections cause an upward shift of the LO distributions, and leave the shapes nearly unchanged.
\section{SUMMARY AND CONCLUSIONS}
In this work, we investigate the $\eta_c+c+\bar{c}$, $\eta_b+b+\bar{b}$, $B_c+b+\bar{c}$ production via photon-photon fusion at the NLO QCD accuracy in the framework of NRQCD factorization formalism.
The total cross sections and the differential cross sections versus transverse momentum at the SuperKEKB collider and the CEPC are given.
Numerical results show that, after including the NLO corrections, the total cross sections are significantly enhanced, and their dependence on the renormalization scale and the heavy-quark mass is reduced as expected.
Due to the high luminosity of the SuperKEKB collider, the $\eta_c +c+\bar{c}$ production via photon-photon fusion is expected to be observable in the near future.
At a higher energy collider like the CEPC, the production rate of $\eta_c +c+\bar{c}$ is largely enhanced, which leads to a promising number of events.
If the LBS photon collision can be realized, the observation of $B_c$ meson production at the CEPC can also be expected.
\vspace{1.4cm} {\bf Acknowledgments}
This work was supported in part by the National Key Research and Development Program of China under Contracts No. 2020YFA0406400,
and the National Natural Science Foundation of China (NSFC) under the Grants No. 11975236, No. 11635009, and No. 12047553.
\section{Introduction}
Recently, there has been an increased interest in the possibility of investigating processes characteristic of extreme astrophysical objects in laboratories using powerful laser systems \cite{Remington2006, Bulanov2015a}. In particular, much attention is paid to the study of the magnetic reconnection process, which is widely encountered in space. Typical experiments in this area use two nanosecond laser pulses with energies ranging from a few Joules to several kilojoules and irradiate a metal surface. As a result of the interaction, ablation of the heated material takes place, in which azimuthal magnetic fields are self-generated due to the Biermann battery effect. Such magnetized expanding plumes may be arranged to collide with each other, initiating reconnection of oppositely directed magnetic field lines in the interaction region \cite{Nilson2006, Fox2011, Fiksel2014, Rosenberg2015, Matteucci2018}.
In most of the experiments carried out, however, the case of a nonrelativistic plasma was investigated. At the same time, magnetic reconnection in the relativistic regime is expected to occur in the vicinity of extreme astrophysical objects such as pulsars, magnetars, and active galactic nuclei. A key parameter in this case is the cold magnetization parameter, equal to the ratio of the magnetic pressure to the electron rest energy density, $\sigma = B^2 / (\mu_0n_em_ec^2) $, where $B$ is the magnetic field induction, $\mu_0$ is the magnetic constant, $n_e$ and $m_e$ are the concentration and mass of electrons, respectively, and $c$ is the speed of light \cite{Lyubarsky2005}. The case of magnetized relativistic plasma corresponds to $\sigma>1$, and until recently such values were unattainable in the laboratory.
In recent works, however, the $\sigma>1$ regime was obtained by two methods. One of them is similar to the approach of the nonrelativistic experiments, but it used either a pair of 2~J, 40~fs laser pulses from the HERCULES facility, or 500~J, 20~ps pulses from the OMEGA EP facility \cite{Raymond2018}. The high power of the laser pulses made it possible to attain intensities of the order of $10^{18}$--$10^{19}$~W/cm$^2$ at the focus, which, as is known, is sufficient for the energy of the electrons to become comparable to their rest energy \cite{Mourou2006}. As a result, a field of about 1~kT was generated in the expanding plasma plume, which made it possible to reach $\sigma\sim10$. An integral property of the scheme used, however, was the high value of the parameter $\beta = \mu_0n_ekT_e/B^2$, equal to the ratio of the electron kinetic pressure to the magnetic pressure (here $k$ is the Boltzmann constant and $T_e$ is the electron temperature). In the experiment, its value was estimated at $\beta\sim50$. At the same time, in the aspect of astrophysical applications, a cold plasma with a dominant role of the magnetic field and $\beta\ll 1 $ is usually considered.
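These two definitions are easy to evaluate numerically. The Python sketch below computes $\sigma$ and $\beta$ for illustrative field, density and temperature values of the order discussed later in this paper ($B\sim100$~MG, $n_e\sim10^{20}$~cm$^{-3}$, $kT_e\sim100$~keV); the specific numbers are assumptions chosen for illustration, not simulation outputs.

```python
import math

# Physical constants (SI)
mu0 = 4 * math.pi * 1e-7      # magnetic constant, H/m
m_e = 9.1093837015e-31        # electron mass, kg
c = 2.99792458e8              # speed of light, m/s
e = 1.602176634e-19           # elementary charge, C

def sigma(B, n_e):
    """Cold magnetization: magnetic pressure over electron rest energy density."""
    return B**2 / (mu0 * n_e * m_e * c**2)

def beta(B, n_e, kTe_J):
    """Ratio of electron kinetic pressure to magnetic pressure."""
    return mu0 * n_e * kTe_J / B**2

# Illustrative values: B = 100 MG = 1e4 T, n_e = 1e20 cm^-3 = 1e26 m^-3,
# kT_e = 100 keV (assumed order-of-magnitude inputs)
B, n_e = 1e4, 1e26
kTe = 100e3 * e  # 100 keV in joules
print(sigma(B, n_e))       # ~10: magnetized relativistic regime
print(beta(B, n_e, kTe))   # ~0.02: magnetically dominated ("cold") plasma
```

With these inputs $\sigma\approx10$ and $\beta\approx0.02$, i.e. exactly the cold magnetized relativistic regime of interest.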
The second method for achieving high degrees of magnetization in a laser-plasma experiment is based on the use of a picosecond laser pulse of kilojoule energy to generate a magnetic field in a microcoil \cite{Law2020}. When the coil was irradiated with laser radiation at opposite ends, counterstreaming currents of the order of 1~MA were generated in it, which led to the creation of antiparallel fields above 1~kT. According to estimates, the experiment reached $\sigma\sim20$--$100$ at $\beta\sim 0.01$--$0.1$. The main disadvantage of such experiments is the limited accessibility of laser systems of this class.
Other methods of organizing relativistic magnetic reconnection have been theoretically proposed as alternative approaches. For example, in the works \cite{Gu2015a,Gu2016b,Gu2016c,Gu2018,Gu2019,Gu2021} it is proposed to observe an antiparallel configuration of magnetic fields during the parallel propagation of two ultrashort superintense laser pulses in transparent plasma. In the plasma wakefields generated by them, an azimuthal magnetic field is observed. When the two wakefields come into contact, for example, when they gradually expand in plasma with a negative density gradient, those fields begin to annihilate. The disadvantage of this method is that the electrons in the interaction region are relativistic, and their Larmor radii exceed the size of the region of localization of the magnetic fields. That is, reconnection is observed only in the electron diffusion region in the very vicinity of an X-point.
Finally, the authors of \cite{Yi2018} propose to use micro-slabs irradiated with laser pulses along the surface to observe relativistic magnetic reconnection. It was shown numerically that when using a femtosecond laser pulse of multi-terawatt power, $\sigma\sim 1$ can be achieved at $\beta\sim 0.1$, and when the power is increased to the petawatt level, $\sigma>100$ can be achieved. The main difficulty in the experimental implementation of such a scheme is the need to use high-contrast laser pulses and precise targeting of a relatively tightly focused pulse onto the end of a slab of submicron thickness.
In this paper, we propose another method for generating cold ($\beta\sim 0.1$) magnetized ($\sigma\sim 10$) relativistic plasma, which is based on the interaction of powerful subpicosecond pulses with thin targets. As is known, the interaction of relativistically intense radiation with the surface leads to efficient generation of beams of energetic electrons injected by the field into the target \cite{Brunel1988, Mulser2012a, Liseykina2015}. These electrons, possessing relativistic energies, freely pass through targets up to several tens of microns thick. Escaping from the back side, they are decelerated by an emerging charge separation field and initiate ionization of the rear surface of the target and the expansion of the resulting plasma. As a result, a plasma sheath is formed, with electrons heated to relativistic temperatures and a laminar flow of cold ions \cite{Wilks2001}.
The continuing escape of electron bunches forms a current on the axis of the sheath and generates an azimuthal magnetic field \cite{Robinson2009a}. At low laser pulse intensities and durations, this field does not significantly alter the expansion dynamics, although it can lead to observable changes in the spectrum of the expanding ions. Recently, however, it was discovered that for pulses of several hundred femtoseconds duration and intensity above $10^{20}$~W/cm$^2$, the magnetic field is sufficient to magnetize the electrons and thereby appreciably slow down the expansion of the plasma \cite{Nakatsutsumi2018, Huang2021a}.
In this work, we studied in more detail the properties of the resulting plasma and found that conditions for a cold magnetized relativistic plasma are realized in it, which makes it possible to use this mechanism for observing relativistic magnetic reconnection under laboratory conditions.
\section{Methods}
An analysis of the formation of the magnetized plasma during the interaction of laser radiation with thin dense targets was carried out by means of particle-in-cell simulations using the PICADOR software package \cite{Surmin2016a}. The modeling was carried out in two-dimensional geometry. The length of the box was selected in the range from 280 to 380~$\mu$m, depending on the intensity and duration of the laser pulse, so that by the end of the simulation the plasma did not reach the right boundary. The box width was 80~$\mu$m. The grid step along both axes was 0.02~$\mu$m. The calculation time was 3000~fs, and the time step was 0.02~fs. Absorbing boundary conditions were used at all boundaries, both for particles and for fields. The number of particles per cell at the initial moment of time was 200.
An aluminum foil with a thickness of 2~$\mu$m was used as the target. It was located at a distance of 30~$\mu$m from the left boundary of the box. Ions were preionized to the Al$^{9+}$ state. Their concentration at the initial moment of time was $5\times 10^{22}$~cm$^{- 3}$. The boundaries of the target were initially assumed to be perfectly sharp. A layer of fully ionized hydrogen with a thickness of 0.02~$\mu$m was added to the rear surface of the target, which imitated a natural contamination of the target surface, and also accelerated the expansion of the plasma due to the smaller mass of protons compared to the mass of aluminum ions.
A p-polarized laser pulse was incident on the target from the left along the normal to the surface and focused into a spot 4~$\mu$m in diameter at FWHM (full width at half maximum). The paper presents the results for pulses with a duration of 700~fs at FWHM; however, calculations were also performed for pulses with durations of 100, 400, and 1000~fs. The results for pulse durations of 400~fs and longer are in qualitative agreement with each other, while for 100~fs pulses the plasma stayed unmagnetized in the investigated range of intensities up to $10^{21}$~W/cm$^2$. The pulse envelope was Gaussian in both coordinates.
The noisy distributions of electron density, electron energy density and magnetic field obtained as a result of the modeling were smoothed by a Gaussian filter with a width of 0.2~$\mu$m. All subsequent transformations were carried out on the smoothed distributions. To obtain the magnetic field gradient, we used a convolution of the initial distribution of the magnetic field with the first derivative of the Gaussian function in both directions, with the same width of 0.2~$\mu$m.
\section{Results}
Fig.~\ref{fig:vs-time-1} shows the time evolution of the electron density $n_e$, the electron temperature $T_e$ (obtained as a ratio of the local kinetic energy density of electrons to their concentration) and the transverse component of the magnetic field $B_z$ at different times for a pulse with a duration of 700~fs and an intensity of $10^{20}$~W/cm$^2$.
\begin{figure}[H]
\includegraphics[width=13cm]{pics/vs-time-1}
\caption{Distributions of electron concentration (left column), electron temperature (central column) and transverse magnetic field (right column) at (a)-(c) $t=0$~fs, (d)-(f) $t=500$~fs, (g)-(i) $t=1000$~fs, (j)-(l) $t=1500$~fs, (m)-(o) $t=2000$~fs after start of the simulation. The parameters of the simulation are in the text.\label{fig:vs-time-1}}
\end{figure}
It can be seen that at the beginning (Fig.~\ref{fig:vs-time-1} (d,e)) the target is deformed under the action of radiation and a noticeable pre-plasma layer is formed in the region $x<0$~$\mu$m, which enhances the absorption of radiation. On the rear side of the target, the expulsion of energetic electrons is observed. It triggers the expansion of the plasma and the formation of the plasma sheath. At a later moment in time (Fig.~\ref{fig:vs-time-1} (g)--(i)), an increase in the azimuthal magnetic field is observed in the formed plasma sheath. As the incident intensity grows in time, the magnitude of the generated magnetic field also grows, reaching a maximum at the moment when the maximum of the laser pulse arrives at the target (Fig.~\ref{fig:vs-time-1} (j)--(l)).
At this moment, the magnetic field at its peak reaches a value of the order of 400~MG, and is of the order of 100~MG in most of the plasma sheath. Note that the electron concentration in the same region is in the range $10^{20}$--$10^{21}$~cm$^{-3}$. As for the electron temperature, a high-energy electron current is observed on the sheath axis; however, off the axis the electron temperature is much lower, and in a sizeable region near the target it is well below the relativistic value, $kT_e<m_ec^2$. Since such a plasma has no effective electron cooling mechanisms, these cold electrons were most likely drawn in by the cold return current that compensates the charge of the hot electrons energetic enough to leave the interaction region.
At later times, the plasma continues to expand and cools down slightly, but the magnetic field dissipates slowly and changes in magnitude insignificantly. An interesting feature is that, as follows from the performed calculations, the magnitude of the magnetic field at later times is practically independent of the intensity of the laser radiation. This can be seen from Fig.~\ref{fig:bz-vs-time}, which shows the time dependence of the peak value of the magnetic field for different intensities of laser pulses with fixed duration. Although the field value at the maximum of the laser pulse increases with intensity, it relaxes to an almost identical value of about 100~MG after the pulse leaves.
\begin{figure}[H]
\includegraphics[width=10cm]{pics/bz-vs-time}
\caption{Time evolution of the peak magnetic field generated in the expanding plasma sheath for different laser intensities.\label{fig:bz-vs-time}}
\end{figure}
Let us now turn to an analysis of the main parameters of the resulting plasma. Fig.~\ref{fig:vs-time-2} shows the distributions of the previously introduced parameters $\beta = \mu_0n_ekT_e/B_z^2$ and $\sigma = B_z^2/\mu_0n_em_ec^2$, and also a quantity $\delta = R_L / (B_z / |\nabla B_z|)$, equal to the ratio of the thermal Larmor radius $R_L = \sqrt{m_ekT_e}/(eB_z)$ to the characteristic scale of the magnetic field variation $L_B = B_z / |\nabla B_z|$. The analysis of $\delta$ is important because for effective plasma magnetization it is not enough to have a sufficiently large magnetic field; it is also required that the electrons can complete a full revolution in the magnetic field before leaving the region of its localization. This can be a limiting factor in femtosecond laser plasma due to the small interaction volume and high gradients of the generated magnetic field. In particular, this is one of the limiting factors for the generation of magnetized plasma by pulses with a duration of less than 100~fs.
Within the discussed range of parameters, however, such a problem does not arise, although, as can be seen from Fig.~\ref{fig:vs-time-2}, the region in which $\delta<1$ turns out to be noticeably smaller than the region in which $\sigma>1$ and $\beta<1$.
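The condition $\delta<1$ can be estimated in the same spirit. A minimal sketch, using the standard thermal (nonrelativistic) Larmor radius $R_L=\sqrt{m_ekT_e}/(eB_z)$ and an assumed field scale $L_B$ of a couple of microns (an illustrative value of the order seen in the field maps, not a measured one):

```python
import math

m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C

def larmor_radius(kTe_J, B):
    """Thermal (nonrelativistic) electron Larmor radius, sqrt(m_e*kT_e)/(e*B)."""
    return math.sqrt(m_e * kTe_J) / (e * B)

# Illustrative inputs: kT_e = 100 keV, B = 100 MG = 1e4 T,
# L_B = 2 um (assumed field scale)
kTe = 100e3 * e
B = 1e4
L_B = 2e-6
R_L = larmor_radius(kTe, B)
delta = R_L / L_B
print(R_L)    # ~7.5e-8 m, i.e. well below a micron
print(delta)  # ~0.04 << 1: electrons gyrate well inside the field region
```

For these inputs $\delta\approx0.04$, confirming that electron magnetization is not limited by the field gradients at these scales.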
Besides that, it can be seen that at the beginning of the interaction the plasma is non-magnetized (Fig.~\ref{fig:vs-time-2} (b)). However, by the arrival of the radiation maximum, a noticeable region of cold magnetized plasma appears (Fig.~\ref{fig:vs-time-2} (d)--(e)), which then begins to expand (Fig.~\ref{fig:vs-time-2} (g)--(h)). The achievable values of $\sigma$ are relatively small and do not exceed $10$.
\begin{figure}[H]
\includegraphics[width=13cm]{pics/vs-time-2}
\caption{Distributions of $\beta$ parameter (left column), $\sigma$ parameter (central column) and $\delta$ parameter (right column) at (a)-(c) $t=1000$~fs, (d)-(f) $t=1500$~fs, (g)-(i) $t=2000$~fs after start of the simulation. The parameters of the simulation are the same as at Fig.~\ref{fig:vs-time-1}.\label{fig:vs-time-2}}
\end{figure}
Larger values can be expected at higher intensities; therefore, we performed calculations in the range of intensities $3\times 10^{19}$--$10^{21}$~W/cm$^2$. The results are shown in Fig.~\ref{fig:vs-intensity}. It can be seen that at low intensities (Fig.~\ref{fig:vs-intensity} (a)--(c)) no magnetized plasma is formed. Although there is a noticeable region of low electron temperature in the sheath, the magnetic field is too weak to exceed the rest energy density of the electrons. With an increase in intensity, the plasma expands with higher and higher speed, in accordance with the theory of collisionless plasma expansion into vacuum \cite{Gurevich1966a, Mora1979a}. The magnitude of the generated magnetic fields also increases, and a significant region of cold magnetized plasma is formed. In this case, the degree of magnetization also increases and reaches $\sigma\sim 10$ for $10^{21}$~W/cm$^2$. Simultaneously, the value of $\delta$ remains small.
\begin{figure}[H]
\includegraphics[width=13cm]{pics/vs-intensity}
\caption{The same as at the Fig.~\ref{fig:vs-time-2} but at $t=2500$~fs after start of the simulations for laser intensities of (a)-(c) $3\times10^{19}$~W/cm$^2$, (d)-(f) $10^{20}$~W/cm$^2$, (g)-(i) $3\times10^{20}$~W/cm$^2$, (j)-(l) $10^{21}$~W/cm$^2$.\label{fig:vs-intensity}}
\end{figure}
\section{Discussion}
In conclusion, our numerical simulations showed that, as a result of irradiation of thin (a few microns thick) foils by relativistically intense laser pulses with a duration of several hundred femtoseconds, an expanding plasma sheath is formed on the rear side of the target, the conditions in which correspond to a cold magnetized plasma with parameters $\beta\sim 0.01$--$0.1$ and $\sigma\sim 1$--$10$. To create such conditions, an intensity above $10^{20}$~W/cm$^2$ is required, which is already available in experiments. To organize magnetic reconnection, two such pulses are required, focused at a distance of about 50--100~$\mu$m from each other. This opens up new possibilities for the study of relativistic magnetic reconnection in the laboratory.
Additionally, we note that next-generation laser facilities carrying hundreds of joules in subpicosecond pulses can be expected to create conditions in which the effects of radiation cooling and electron-positron pair generation become observable. Recently, it has been proposed that these effects can play an important role in relativistic magnetic reconnection \cite{Hakobyan2019, Hakobyan2021}. The study of such a possibility is an interesting subject for future work.
\funding{This research was supported by Center of Excellence ``Center of Photonics'' funded by The Ministry of Science and Higher Education of the Russian Federation, contract 075-15-2020-906. The simulations were performed on resources provided by the Joint Supercomputer Center of the Russian Academy of Sciences.}
\dataavailability{The data that support the findings of this study are available from the corresponding author upon reasonable request.}
\conflictsofinterest{The author declares no conflict of interest.}
\end{paracol}
\reftitle{References}
\externalbibliography{yes}
\section{Introduction}
In this note we give a new characterization of spaces with curvature $\ge 0$ in the sense of Alexandrov.
Our work is inspired by \cite{berg-nikolaev} and \cite{sato}, where an analogous definition was given for curvature $\le 0$.
\begin{thm}{Main theorem}
Let $\mathcal X$ be a complete space with intrinsic metric.
Then $\mathcal X$ is an Alexandrov space with curvature $\ge 0$
if and only if any quadruple $p,x,y,z\in \mathcal X$
satisfies the following inequality
$$|p x|^2+|p y|^2+|p z|^2
\ge
\tfrac13\cdot\!\l(|x y|^2+|y z|^2+|z x|^2\r).
\eqlbl{*}$$
\end{thm}
The inequality \ref{*} is quite weak.
For example, one can%
\footnote{Say, consider the metric on $\{p,x,y,z\}$ defined as $|p x|=|p y|=|p z|=1$, $|x y|=|x z|=2$ and $|y z|=\eps$ where $\eps>0$ is sufficiently small;
it satisfies \ref{*} for each relabeling but not \ref{1+3}.}
construct a metric space $\mathcal F$ with 4 points
which satisfies \ref{*} for each relabeling by $p,x,y,z$,
such that $\mathcal F$ does not admit an isometric embedding into
any Alexandrov space with curvature $\ge 0$.
Similar conditions known before simply describe all 4-point sets in a nonnegatively curved space.
For instance, the following inequality for model angles:
$$\angk{}{p}{x}{y}+\angk{}{p}{y}{z}+\angk{}{p}{z}{x}\le2\cdot\pi.
\eqlbl{1+3}$$
In fact, if a 4-point metric space satisfies \ref{1+3} for each relabeling
then it can be isometrically embedded into
Euclidean plane
or a $2$-sphere of some radius $R>0$ (the proof is left to the reader).
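The footnote's example can be checked by direct computation. The Python sketch below (with $\eps=0.1$) verifies that the metric satisfies \ref{*} for every relabeling, while the comparison angles around $p$ sum to more than $2\cdot\pi$, so \ref{1+3} fails:

```python
import math
from itertools import permutations

eps = 0.1
# The 4-point metric from the footnote: |px|=|py|=|pz|=1, |xy|=|xz|=2, |yz|=eps
d = {}
def set_d(a, b, v):
    d[a, b] = d[b, a] = v
for q in 'xyz':
    set_d('p', q, 1.0)
set_d('x', 'y', 2.0)
set_d('x', 'z', 2.0)
set_d('y', 'z', eps)

def star(p, x, y, z):
    """Inequality (*) with apex p."""
    lhs = d[p, x]**2 + d[p, y]**2 + d[p, z]**2
    rhs = (d[x, y]**2 + d[y, z]**2 + d[z, x]**2) / 3
    return lhs >= rhs

def comparison_angle(p, a, b):
    """Euclidean comparison angle at p of triangle (p,a,b), via the law of cosines."""
    u, v, w = d[p, a], d[p, b], d[a, b]
    return math.acos(max(-1.0, min(1.0, (u*u + v*v - w*w) / (2*u*v))))

# (*) holds for every relabeling ...
assert all(star(*perm) for perm in permutations('pxyz'))
# ... yet the comparison angles at p sum to more than 2*pi, violating (1+3)
angle_sum = (comparison_angle('p', 'x', 'y') + comparison_angle('p', 'y', 'z')
             + comparison_angle('p', 'z', 'x'))
print(angle_sum > 2 * math.pi)  # True
```

Here the triangles $[pxy]$ and $[pxz]$ are degenerate (each comparison angle equals $\pi$), so any $\eps>0$ already pushes the sum past $2\cdot\pi$.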
\parbf{Why do we care.}
Since the condition \ref{*} is so weak,
it should be useful as a test to check that a given space has curvature $\ge0$ in the sense of Alexandrov.
However, we are not aware of a single case when it makes life easier.
To explain the real reason why we are interested in this topic,
we need to reformulate our Main theorem using the language similar to one given in \cite[Section 1.19$_+$]{gromov}.
Let us denote by $\mathbf{M}^4$ the set of isometry classes of 4-point metric spaces.
Let $\mathfrak A$ and $\mathfrak B$ be the sets of isometry classes in $\mathbf{M}^4$
which satisfy correspondingly \ref{*} and \ref{1+3} for all relabeling of points by $p,x,y,z$.
(As mentioned above, $\mathfrak B\subsetneq\mathfrak A$.)
Further, given a metric space $\mathcal X$,
denote by $\mathbf{M}^4(\mathcal X)$ the set of isometry classes of 4-point subspaces in $\mathcal X$.
Main theorem says that if the space $\mathcal X$ has intrinsic metric and $\mathbf{M}^4(\mathcal X)\subset \mathfrak A$ then $\mathbf{M}^4(\mathcal X)\subset \mathfrak B$.
From the above, the set $\mathfrak B$ is the smallest set which satisfies this property for any $\mathcal X$.
It would be interesting to find a general pattern of such phenomena.
Assume you start with an arbitrary $\mathfrak A\subset \mathbf{M}^4$: can you figure out what the corresponding $\mathfrak B$ is,
or can one describe the properties of the sets $\mathfrak B$ which might appear this way?
Note that the Globalization theorem (see \cite{BGP})
as well as the Berg--Nikolaev characterization of $\mathrm{CAT}(0)$ spaces
both admit interpretations in the above terms.
Also, in \cite{FOERTSCH-LYTCHAK-SCHROEDER}, it was shown that the set defined by the Ptolemy inequality can appear as $\mathfrak B$.
\parbf{Acknowledgment.}
We want to thank Alexander Lytchak for an interesting discussion in the M\"unster's botanical garden.
\section{The proof}
The ``only if'' part follows directly from generalized Kirszbraun's theorem, see \cite{lang-schroeder}.
One only has to check that the inequality \ref{*} holds in the model plane.
(Alternatively, one can prove \ref{1+3}~$\Rightarrow$~\ref{*} directly.)
\parit{``if'' part.}
We may assume that $\mathcal X$ is geodesic, otherwise pass to its ultraproduct.
It is sufficient to show that if $z$ is a midpoint of geodesic $[p q]$ in $\mathcal X$ then
$$2\cdot|x z|^2
\ge
|x p|^2+|x q|^2-\tfrac12\cdot|p q|^2,\eqlbl{**}$$
for any $x\in \mathcal X$.
Directly from \ref{*} we have the following weaker estimate%
$$
3\cdot|x z|^2\ge |x p|^2+|x q|^2-\tfrac12\cdot |p q|^2
\eqlbl{***}
$$
\begin{wrapfigure}{r}{22mm}
\begin{lpic}[t(-6mm),b(0mm),r(0mm),l(0mm)]{pics/mediana(0.5)}
\lbl[r]{0,0;$p$}
\lbl[l]{42,1;$q$}
\lbl[l]{34,47;$x_0$}
\lbl[l w]{26,16;$x_1$}
\lbl[l w]{24,6;$x_2$}
\lbl[t]{21,-1;$z$}
\end{lpic}
\end{wrapfigure}
Set $x_0=x$ and consider a sequence of points
$x_0,x_1,\dots$ on $[x z]$ such that $|x_n z|=\tfrac1{3^n}\cdot|x z|$.
Let $\alpha_n$ be a sequence of real numbers such that
$$\alpha_n\cdot|x_n z|^2
=
|x_n p|^2+|x_n q|^2-\tfrac12\cdot|p q|^2.$$
Applying \ref{*}, we get
$$|x_{n+1}p|^2+|x_{n+1}q|^2+|x_{n+1}x_{n}|^2
\ge
\tfrac13\cdot(|x_n p|^2+|x_n q|^2+|p q|^2).$$
Subtract $\tfrac12\cdot|p q|^2$ from both sides;
after simplification you get
$$\alpha_{n+1}\ge 3\cdot\alpha_n-4.$$
Now assume \ref{**} does not hold, i.e., $\alpha_0>2$;
then $\alpha_n\to\infty$ as $n\to\infty$.
On the other hand, from \ref{***}, we get $\alpha_n\le 3$, a contradiction.
\qeds
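The divergence used in this contradiction is elementary to illustrate. The sketch below iterates the extremal case $\alpha_{n+1}=3\cdot\alpha_n-4$ of the recursion (the actual $\alpha_n$ dominate this sequence term by term); any starting value above the fixed point $2$ escapes past the bound $3$ within a few steps:

```python
def iterate(alpha0, steps):
    """Iterate the extremal recursion a -> 3*a - 4, returning the whole orbit."""
    a = alpha0
    seq = [a]
    for _ in range(steps):
        a = 3 * a - 4
        seq.append(a)
    return seq

print(iterate(2.1, 6))  # roughly 2.1, 2.3, 2.9, 4.7, 10.1, ... past the cap 3
print(iterate(2.0, 3))  # alpha_0 = 2 is the fixed point: the orbit stays at 2
```

So $\alpha_0>2$ is incompatible with the upper bound $\alpha_n\le 3$ from \ref{***}, which is exactly the contradiction in the proof.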
\section*{P.S.: Arbitrary curvature bound}
One can obtain analogous characterization of Alexandrov spaces with curvature $\ge\kappa$ for any $\kappa\in\RR$.
Here are the inequalities for cases $\kappa=1$ and $-1$ which correspond to \ref{*} for quadruple $p,x^1,x^2,x^3$:
$$\l(\sum_{i=1}^3\cos|p x^i|\r)^2
\le
\sum_{i,j=1}^3\cos|x^ix^j|.
\eqno\text{\ref{*}}^{+}$$
$$\l(\sum_{i=1}^3\cosh|p x^i|\r)^2
\ge
\sum_{i,j=1}^3\cosh|x^ix^j|;
\eqno\text{\ref{*}}^{-}
$$
Note that in both cases we have equality if $p$ is the point of intersections of medians of the triangle $[x^1x^2x^3]$ in the corresponding model plane.
(In the model planes, medians also pass through incenter.)
The proof goes along the same lines.
The case $\kappa=1$ also follows from Main theorem
applied to the cone over the space.
\section{Introduction}
One of the most intriguing open questions in particle physics
is understanding the origin of CP-violation.
Even though the observable CP-violating effects in kaon decays
can be accommodated within the Standard Model via the
Cabibbo-Kobayashi-Maskawa (CKM) mechanism [1], the real nature of
CP-violation
is still to be uncovered.
A careful study of CP-violating effects in B-decays would reveal whether the
CKM model provides an adequate description of CP-violation in nature.
In particular, the CKM model implies the existence of a nondegenerate
unitarity triangle [2]: the condition of orthogonality of the columns or rows
of a unitary matrix can be represented by a triangle in the complex plane.
The angles of this triangle correspond to the relative complex phases
between the CKM matrix elements. For example, since sizeable complex phases
are
expected in the CKM matrix elements involving the first and third generations,
a triangle formed by $V(ub),\;V^{*}(td)$, and $-V(cb)\sin \theta_{c}$
should have relatively large angles: their typical values range from
$15^{\circ}$ to $145^{\circ}$ (see, for example, [3]).
Independent measurements of the sides and angles of this triangle would
overdetermine the unitarity triangle allowing us to check explicitly
the validity of the CKM approach. Any appreciable deviation from
the Standard
Model predictions would indicate the presence of ``New Physics"
such as the existence of a fourth family, supersymmetry, etc.
In this letter we study implications of an alternative to the
CKM approach - a model in which CP-symmetry is broken spontaneously.
In particular, we will consider minimal susy models and concentrate on
CP-violating effects in B-decays which allow us to extract information
about the angles of the unitarity triangle.
\section{Spontaneous CP-violation in SUSY Models}
The possibility of spontaneous CP-violation (SCPV) in the minimal susy models
has recently drawn considerable attention [6-13]. The basic idea is that,
in a general two Higgs doublet model, the Higgs fields can acquire complex
VEV's if the scalar potential is not Peccei-Quinn (PQ) invariant [4].
Phenomenologically acceptable susy models present a fertile ground for
SCPV since they require the presence of at least two Higgs doublets.
In the minimal supersymmetric Standard Model (MSSM), the desired PQ-breaking
terms can be generated only via radiative corrections [6] and, as a
result of the Georgi-Pais theorem [5], such a scenario predicts
the existence
of a light axion. Consequently, even though SCPV is possible in the MSSM
in principle [8,9], it is ruled out by experimental constraints on the
mass of the lightest Higgs boson [7].
The next-to-minimal supersymmetric
Standard Model (NMSSM) has been shown to be free of this problem [10]
and is, at the moment, experimentally viable.
The implications of this model for observable CP-violating effects in
$K-\bar K$ systems were first
studied by Pomarol [11].
He has shown that,
in a favorable region of the parametric
space, this scenario can predict correct values of $\epsilon$ and $\epsilon '$
while complying with the experimental bounds on the Neutron Electric Dipole
Moment (NEDM). Our more recent analysis [12] showed that the requirement
that the squarks be sufficiently heavy (300-400 GeV) allows one
to enlarge the available region of the parametric space. Yet, even in this
case some fine-tuning is required and
the values of the CP-phases would have to be small: from 0.01 to 0.1.
The experimental information on CP-violation in $K-\bar K$ systems and
constraints on the NEDM cannot distinguish between the
CKM model and susy models with spontaneously broken CP. To discriminate
against one of them, we have to combine
these data with B-decay phenomenology. In what follows, we estimate
the characteristic values of the angles of the unitarity triangle
$\alpha_{i}$ in the context of
spontaneous CP-violation in the NMSSM (or similar models).
We will see that they are significantly different from their Standard Model
counterparts.
\section{CP-violation in B Decays}
Let us consider nonleptonic B decays into CP eigenstates. The three angles of
the unitarity triangle $\alpha_{1-3}${\footnote{We follow the
notation of Ref.[3]}} can be probed, for example, via
the following decays
\begin{eqnarray}
&& B_{d} \rightarrow \psi K_{s} \;\;\;\;\sim \sin 2\alpha_1 \;,\\
&& B_{d} \rightarrow \pi^{+} \pi^{-} \;\;\sim \sin 2\alpha_2 \;,\\
&& B_{s} \rightarrow \rho^0 K_{s} \;\;\;\sim \sin 2\alpha_3 \;.
\end{eqnarray}
In these decays, CP-violation manifests itself as a deviation of the decay
rate from a pure exponent $e^{-\Gamma t}$. Since the CKM model predicts
enormous
(up to 30$\%$) CP-asymmetries [14], these decays offer excellent
opportunities for observing
large CP-violating effects.
Neglecting a small{\footnote {For the $B_{s}-\bar B_{s}$ system it is
not negligible. The corresponding CP-asymmetry can
be read off from
the $ e^{-{1\over2}\Delta \Gamma t}\sin \Delta m t$
term in the decay rate evolution.}} $B_{L}-B_{H}$
lifetime difference, the proper time evolution of the decay rate
can be written as
[3]
$$\Gamma (B^{0}(t)\rightarrow f_{i}) \;\propto e^{-\Gamma t} \biggl(
1- \sin 2 \alpha_{i} \;\sin \Delta m t \biggr)\;,$$
$$\Gamma (\bar B^{0}(t)\rightarrow f_{i}) \;\propto e^{-\Gamma t} \biggl(
1+ \sin 2 \alpha_{i} \;\sin \Delta m t \biggr)\;,$$
with $\Delta m$ being the $B_{L}-B_{H}$ mass difference.
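The decay-rate formulas above translate directly into a time-dependent asymmetry. The following Python sketch (with illustrative values of $\sin 2\alpha$, $\Gamma$ and $\Delta m$, not measured ones) shows that the common exponential cancels, so the asymmetry directly traces $\sin 2\alpha\,\sin \Delta m t$:

```python
import math

def decay_rates(t, Gamma, dm, sin2alpha):
    """Proper-time rates of B0 and B0bar into a CP eigenstate (up to a common norm)."""
    env = math.exp(-Gamma * t)
    return (env * (1 - sin2alpha * math.sin(dm * t)),
            env * (1 + sin2alpha * math.sin(dm * t)))

def asymmetry(t, Gamma, dm, sin2alpha):
    """CP asymmetry (Gbar - G)/(Gbar + G); the exponential envelope cancels."""
    gb, gbbar = decay_rates(t, Gamma, dm, sin2alpha)
    return (gbbar - gb) / (gbbar + gb)

# Illustrative numbers: sin 2alpha = 0.7, and t chosen so that dm*t = pi/2
Gamma, dm, s2a = 1.0, 0.5, 0.7
t = (math.pi / 2) / dm
print(asymmetry(t, Gamma, dm, s2a))  # ~0.7: the asymmetry measures sin 2alpha
```

At the oscillation maximum the measured asymmetry equals $\sin 2\alpha_i$ itself, which is why these modes give clean access to the angles of the unitarity triangle.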
Here we have taken into account that [14]
\begin{eqnarray}
\biggl| {A(\bar B^{0}\rightarrow f_{i})\over A(B^{0}\rightarrow f_{i})}
\biggr| &\approx &1 \;,\\
\biggl| {q\over p} \biggr| &\approx &1\;,
\end{eqnarray}
where $p,q$ are the Pais-Treiman coefficients [15] defining the mass
eigenstates in terms
of the flavor eigenstates $B^{0},\bar B^{0}$. The CP-asymmetry results
from an interference between the two processes
$$B^{0}\rightarrow f_{i} \;\;{\rm and}\;\; B^{0}\rightarrow \bar B^{0}\rightarrow
f_{i}\;,$$
and includes CP-violating effects in both mixing ($\vert\Delta B\vert=2$)
and decays ($\vert\Delta B\vert=1$).
The angles of the unitarity triangle
can be expressed as
\begin{eqnarray}
&& \sin 2\alpha_{i} =\eta^{CP}_{i} \; {\rm Im}\; \biggl[ {q\over p}\;
{\langle f_{i}\vert {\mathcal{L^{CP}}}\vert B^{0}\rangle
\over \langle f_{i} \vert {\mathcal{L}}\vert B^{0}\rangle } \biggr]
= -\eta^{CP}_{i} \; \sin (2\phi_{D_{i}} + \phi_{M})\;,
\end{eqnarray}
where $\eta^{CP}_{i}$ denotes the CP-parity of the final state ($-1$ for
$\psi K_{s}$ and $\rho^0 K_{s}$, and $+1$ for $\pi^{+} \pi^{-}$) and
$\phi_{D_{i}}, \phi_{M}$ are the weak phases entering the
$b\rightarrow q{\bar q}Q$ decay and $B-\bar B$ mixing diagrams, respectively.
Note that no hadronic uncertainties are involved in this formula
and, in the Standard Model, the $\alpha_{i}$ are functions of the CKM
matrix elements only. Even though one cannot predict the exact values of
these angles due to large uncertainties in the CKM matrix, eq.(6) allows us
to verify generic features of the CKM approach such as the existence of the
unitarity triangle. Should $\alpha_{i}$ not add up to $180^{\circ}$,
the necessity for an alternative theory of CP-violation would be manifest.
Thus, B decays into CP-eigenstates provide a useful and precise
tool in the search for physics beyond the Standard Model.
\section{Implications of Spontaneous CP-violation for B Decays}
Let us now proceed to evaluating the angles $\{ \alpha_{i} \}$ in the
context of the NMSSM. This model includes the MSSM superfields along
with an extra singlet superfield $\hat N$ and was first introduced
to rectify the so-called ``$\mu$-problem'' [16].
A list of relevant interactions can be found in Refs.[11] and [17].
We assume that the initial Lagrangian conserves CP and
it is only the vacuum that breaks it.
In the process of electroweak symmetry breaking the neutral Higgs
components develop the following VEV's:
\begin{eqnarray}
&&\langle H_1^0 \rangle =v_1,\;\langle H_2^0 \rangle=v_2 e^{i\rho},
\langle N \rangle=n e^{i\xi}\;. \nonumber
\end{eqnarray}
If $\rho$ and/or $\xi$ are not equal to an integer multiple of $\pi$,
CP symmetry is violated. Through various interactions
these complex phases will enter the mass matrices and interactions
of the matter fields leading to observable CP-violating effects.
To estimate the consequent asymmetries in B decays,
first we will have to find the weak phase $\phi_{M}$
coming from the $B-\bar B$ mixing diagram.
Following the line of Ref.[11],
we adopt the following super-CKM ansatz (a squark version of the CKM matrix)
\begin{eqnarray}
&&{\tilde V} \approx \pmatrix{1&O(\epsilon)&O(\epsilon^2) \cr
O(\epsilon)&1&O(\epsilon)\cr
O(\epsilon^2)&O(\epsilon)&1\cr}
\end{eqnarray}
Here $\epsilon$ is of the order of Cabibbo mixing angle $\theta_{C}$.
Note that all the entries are real since we are considering spontaneous
CP-violation.
The real part of the $B-\bar B$
mixing is dominated by the SM box and chargino super-box diagrams (Fig.1a,b).
For simplicity, we assume that the gluino is sufficiently heavy and its
contribution to the $B-\bar B$ mixing is negligible.{\footnote{As long as the
gluino contribution does not dominate the $B-\bar B$ mixing, the essential
results of this paper remain unchanged.}}
There are three major contributions to the imaginary part
of the $B-\bar B$ mixing coming from the CP-violating diagrams in
Fig.2a,b and c.
The box diagram with Higgs exchange (Fig.2a) involves a complex phase
in the top quark mass insertion (this phase, of course, can be
absorbed into the Higgs vertex by a phase redefinition of $t_{R}$).
However, this diagram is suppressed by a factor of $(m_{b}/m_{W})^2$ and
can safely be neglected. The diagrams in Fig.2b,c contain phases in
propagators of the superparticles. In the case of the $K-\bar K$ system,
an analog of the diagram 2b is responsible for a nonzero value of
$Re\;\epsilon$ [11].
To estimate its effect for the $B-\bar B$ system, we can repeat
the $K-\bar K$ mixing analysis
with heavy squarks $m_{\tilde q}^2 \gg m_{\tilde W}^2$ [12]. Note that
the diagram 2c, which did not play any role for the $K-\bar K$ system,
can give a significant contribution to the imaginary part of the
$B-\bar B$ mixing.
The corresponding $\Delta B=2$ operator is given by
\begin{eqnarray}
&& O_{\Delta B_{q}=2} =(k_{q}+ e^{i\phi} l_{q} + e^{2i\phi} l_{q}' )\;
\bar d \gamma^{\mu} P_{L} b\;
\bar d \gamma_{\mu} P_{L} b ,
\end{eqnarray}
where $k_{q}$, $e^{i\phi} l_{q} $ and $e^{2i\phi} l_{q}'$ ($q=d,s$)
result from
the diagrams shown in Fig.1, Fig.2b and Fig.2c, respectively.
The weak phase $\phi$ is a function of the complex phases of the Higgs VEV's
and is constrained to be between 0.01 and 0.1 (for the sake of definiteness, we
assume it to be positive) from the $K-\bar K$ and NEDM
analyses [11,12].
The Standard Model contribution is well known [19] and can be approximated by
\begin{eqnarray}
&&k_{q}^{SM} \approx {g^4\over 256 \pi^2 M_{W}^2}\; (V_{tb}V_{tq})^2\;.
\end{eqnarray}
Assuming that the first and second generation squarks are degenerate in mass
and the stop mass is different, we estimate the super-box contribution
(Fig. 1b) to be [18]
\begin{eqnarray}
&&k_{q}^{susy} \approx {g^4\over 192 \pi^2 m^2_{\tilde q }}\;
({\tilde V_{tb}}{\tilde V_{tq}})^2\;
\end{eqnarray}
with $m^2_{\tilde q }$ being the average squark mass.
The CP-violating super-box (Fig.2b) generates [12]
\begin{eqnarray}
&& l_{q} \approx
{g^4 \over 128 \pi^2} \biggl({gm_{t}\over \sqrt{2} m_{W} \sin\beta} \biggr)^2
\;\frac{v \;z\; m^2_{LR}}
{m_{\tilde W} m^4_{\tilde q } }
\;
({\tilde V_{tb}} {\tilde V_{tq}})^2 \;.
\end{eqnarray}
Here $z\sim 1$ is a partial cancellation factor [11];
$v=\sqrt{v_1^2 + v_2^2}$;
$m_{\tilde W}$ and $m_{t}$ denote the chargino
and top quark masses, respectively;
$m_{LR}$ is the left-right squark mixing, and $\tan\beta=v_2 /v_1$.
Finally, the diagram in Fig.2c gives rise to
\begin{eqnarray}
&& l_{q}' \approx
{g^4 \over 256 \pi^2} \biggl({gm_{t}\over \sqrt{2} m_{W} \sin\beta} \biggr)^4
\;\frac{v^2 \;z'\; m^4_{LR}}
{m^8_{\tilde q } }
\;
({\tilde V_{tb}} {\tilde V_{tq}})^2 \;.
\end{eqnarray}
The factor $z'\sim 1$ results from the diagram
2c in which positions of $\tilde t_{L}$ and $\tilde t_{R}$ (as well as
$\tilde W$ and $\tilde H$) are interchanged. Such a diagram contributes with
opposite phase and leads to a partial cancellation.
To estimate a relative size of these couplings, let us assume a maximal
left-right mixing, $\tan\beta \sim 1$, $m_{\tilde W} \sim 100~{\rm GeV}$ and
$m_{\tilde q }\sim 300~{\rm GeV}$.
Using the super-CKM ansatz (7), it is not hard to see that
\begin{eqnarray}
&&l_{d}, l_{d}' \ll k_{d} \;,\\
&&l_{s}, l_{s}' \sim k_{s}\;.
\end{eqnarray}
An appreciable contribution of the CP-violating super-box to the
$B_{s}-\bar B_{s}$ mixing is an artifact of the chosen
super-CKM form (7). However, it reflects a general tendency for
models in which the mixing between the second and third generation squarks
is enhanced as compared to that of quarks.
As a result, the $O_{\Delta B_{s}=2}$ operator attains an overall
phase factor of $e^{i O(\phi )}$ and the corresponding weak phases are
\begin{eqnarray}
&& \phi_{M}(B_{s}) \sim \phi \;,\nonumber\\
&& \phi_{M}(B_{d}) \sim 0\;,
\end{eqnarray}
with $\phi \leq 0.1$.
Now we can proceed to calculating the remaining weak phase $\phi_{D_{i}}$.
In the Standard Model, the $b\rightarrow q{\bar q}Q$ decay is dominated
by the tree level process (Fig.3) and the weak phase results
from the complex CKM matrix elements entering the vertices. However, in our
case these entries are real. CP-violation must enter through a loop effect.
The simplest 1-loop diagram which involves complex phases in the propagators
of the superpartners is shown in Fig.4 (it is a version of the so-called
``Superpenguin'' diagram). Its $s\rightarrow q{\bar q}d$ analog
was calculated in [11,12] and shown to
successfully describe the observed value of $\epsilon'$ in $K$ decays.
To get the corresponding 4-fermion effective interaction for the
$b\rightarrow u{\bar u}d$ decay,
we simply need to change the super-CKM entries at the vertices. Then,
in the case $m_{\tilde q}^2 \gg m_{\tilde W}^2$, we obtain [12]
\begin{eqnarray}
&&O_{\Delta B=1}^{s.p.}= e^{i\phi}\;\vert f\vert\;
\bar d_{L} \gamma_{\mu} T^{a} b_{L}\;
\bar q_{R} \gamma^{\mu} T^{a} q_{R}\;,
\end{eqnarray}
with
\begin{eqnarray}
&& \vert f\vert \leq
{g^2_3\; g^2 \over 576 \pi^2} \biggl({gm_{t}\over \sqrt{2} m_{W} \sin\beta}
\biggr)^2
\;\frac{v \; m^2_{LR}}
{m_{\tilde q}^5 } \;
\vert {\tilde V_{td}}{\tilde V_{tb}} \vert \;.
\end{eqnarray}
Here $g_3, g$ are the strong and weak
couplings, respectively.
It is easy to see that for a reasonable choice of the parameters
($\tan \beta \sim 1,\; m_{\tilde q} \sim 300~{\rm GeV},\;m_{LR}/m_{\tilde q}
\leq 1$) the effective coupling $f$ is negligible as compared
to the Fermi constant which describes the tree level process in Fig.3.
The same argument equally applies to the decay mode
$b\rightarrow c{\bar c}s$.
Hence, direct CP-violating effects in decay processes are strongly
suppressed and
the weak decay phases $\phi_{D_{1-3}}$ can be neglected. All CP-violation
in our scenario has to come from the $B-\bar B$ mixing and, consequently,
there is a universal phase which describes all CP-violating effects.
This is known as a {\it superweak} scheme of CP violation [20].
As a result, no CP-violation can be observed in $B_{d}-\bar
B_{d}$ systems. Eq.(6) now takes on the form
\begin{eqnarray}
&&\sin 2\alpha_{1}\approx 0 \;,\nonumber\\
&&\sin 2\alpha_{2}\approx 0 \;,\nonumber\\
&&\sin 2\alpha_{3}\approx \sin \phi \;.
\end{eqnarray}
Apparently, the
$\{ \alpha_{i} \}$ fail to add up to $180^{\circ}$. However, the discrepancy
may be too small to be detected: since $\phi \leq 0.1$, $\alpha_3$ is
no larger than a few degrees.
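As a quick arithmetic check (illustrative only; it simply inverts the relation $\sin 2\alpha_{3}\approx \sin \phi$ on its smallest-angle branch), the bound $\phi \leq 0.1$ indeed keeps $\alpha_3$ below a few degrees:

```python
import math

def alpha3_deg(phi):
    """Smallest positive solution of sin(2*alpha_3) = sin(phi),
    converted to degrees."""
    return math.degrees(0.5 * math.asin(math.sin(phi)))

print(alpha3_deg(0.1))   # upper end of the allowed range: ~2.86 degrees
print(alpha3_deg(0.01))  # lower end: ~0.29 degrees
```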
So far we have considered the implications of SCPV using a specific form
of the super-CKM matrix (7). With a more general super-CKM matrix,
one naturally distinguishes two possibilities:
1. The CP-asymmetries in $B,\bar B$ decays are negligible
leading to a flat unitarity triangle (this, for
example, happens when the super-CKM matrix duplicates the standard CKM
matrix). This is very different from the SM prediction
since, in the CKM model, all angles of the unitarity triangle
are typically larger than $10^{\circ}$ [21].
2. CP-violation in B decays is noticeable. Then some of the angles
$\alpha_{i}$ are measurably
different from zero (this requires a favorable super-CKM matrix, for example,
(7)). Since direct CP-violation is negligible in this model,
eq.(6) takes on the form
\begin{eqnarray}
&&\sin 2\alpha_{1}=\;\;\;\sin \phi_1 \;,\nonumber\\
&&\sin 2\alpha_{2}=-\sin \phi_1 \;,\nonumber\\
&&\sin 2\alpha_{3}=\;\;\;\sin \phi_2 \;
\end{eqnarray}
where $\phi_{1,2} \leq 0.1$. This case is represented
by a squashed ``triangle'' formed, for example, by $\phi_1 /2,\;
\pi\; -\;\phi_1 /2$, and $\phi_2 /2$. A deviation of
$\alpha_1 + \alpha_2 +\alpha_3$ from $180^{\circ}$ can be as large as
a few degrees. This is in direct contradiction with the Standard Model.
In both cases the deviations from the SM predictions are significant.
Note that the CP-asymmetries in this model are quite small. That happens
because the same phase is responsible for both the NEDM and the CP-violating
effects in neutral meson systems. Since the EDM of individual quarks is
generated already at one loop level, the CP-violating phase has to be small
to comply with the experimental bound on the NEDM. As a result, large
CP-asymmetries cannot be accommodated within this model.
Another signature of spontaneous CP-violation may come from
independent measurements of the sides of the unitarity triangle:
$\vert V(ub)\vert,\;\vert V(td)\vert$, and $\vert V(cb)\sin \theta_{c} \vert$.
Since CP-violation and quark mixing have different origins in SCPV models,
the relative values of these quantities do not have to be consistent
with the angles $\{ \alpha_{i} \}$.
In fact, $\vert V(ub)\vert,\;\vert V(td)\vert$,
and $\vert V(cb)\sin \theta_{c} \vert$
must form a completely flat triangle because all the CKM entries
have to be real. This observation combined with the constraints
on $\{ \alpha_{i} \}$ provides a very sensitive probe of the model.
To summarize, we have analyzed implications of
spontaneous CP-violation in the
simplest supersymmetric models for observable CP-asymmetries in B-decays.
We have argued that the SCPV approach realizes the {\it superweak}
scenario of CP-violation: all CP-violating effects are due to
$B-\bar B$ mixing. The expected asymmetries are significantly
smaller than those predicted by the Standard Model. A drastic
deviation from the SM predictions can, in principle, be observed
in decays (1)-(3):
the corresponding CP-phases do not form the unitarity triangle
(this, however, would require quite precise experimental data).{\footnote{Large
CP-asymmetries in $B,\bar B$ decays ($\sin 2\alpha_1 \geq 0.4$) recently
reported by CDF collaboration [22] suggest that SCPV is not likely to be
the only source of CP-violation. However, the statistics at this time
does not preclude the scenario discussed in this paper.}}
Finally, it is worth mentioning that if both spontaneous and
(super-)CKM mechanisms of CP-violation are present then the
former can be responsible only for small corrections to the
SM values of the angles $\{ \alpha_{i} \}$. Since a deviation
of $\alpha_1 + \alpha_2 +\alpha_3$ from $180^{\circ}$ due to SCPV
does not exceed a few degrees, this model would be indistinguishable
from a susy model with general complex squark mixings [3].
To conclude, we see that a thorough study of B-phenomenology
can reveal the origin of CP-violation and shed light on the source
of new physics.
The author is grateful to Lay Nam Chang and Tatsu Takeuchi for discussions
and critical reading of the manuscript.
\newpage
\section{Computational aspects of DP-ECMS}
Improving the computation time of the DP is relevant for frequently updating the velocity
profile, which is important when road and traffic conditions change, especially if information is available through V2X or V2V communication \cite{grumert2017using}. Therefore,
it is necessary to investigate different methods of reducing the computation
time of the DP-ECMS algorithm.
\subsection{Parallelization of the DP algorithm}
The computations of the DP algorithm at each distance step can be
parallelized since they are not depending on each other. If there are infinite
resources for parallel computations, the lower bound of the total computational
time is proportional to the time it takes to evaluate the system model times $T$.
Some experiments on parallelization of the DP algorithm has been conducted
on a computer with 12 cpus where each cpu has two cores. Fig.~\ref{fig:cpu_time_par}
shows the computation time as a function of the number of cores.
When the computations are distributed on one core on each of the 12 cpus, the
computation time is reduced by $50\%$ when doubling the number of cpus. When
multiple cores are used on each cpu, the improvements are not as significant.
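The stage-wise independence exploited above can be sketched as follows. This is an illustrative Python fragment, not the implementation used in the experiments; the dynamics, stage cost, and grids are placeholders, and a thread pool stands in for the process-level parallelism used in practice:

```python
from concurrent.futures import ThreadPoolExecutor

def stage_cost(x, controls, next_cost):
    """Minimal cost-to-go for one state-grid point; the dynamics
    and stage cost below are placeholders."""
    best = float("inf")
    for u in controls:
        x_next = 0.9 * x + u                      # placeholder dynamics
        j = min(range(len(next_cost)), key=lambda i: abs(i - x_next))
        best = min(best, u * u + next_cost[j])    # stage cost + cost-to-go
    return best

def dp_stage_parallel(states, controls, next_cost, workers=4):
    """Grid points of one distance step are mutually independent, so
    they can be farmed out to a worker pool (a process pool would be
    used in practice to sidestep the interpreter lock)."""
    with ThreadPoolExecutor(workers) as pool:
        futs = [pool.submit(stage_cost, x, controls, next_cost)
                for x in states]
        return [f.result() for f in futs]
```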
\begin{figure}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {
\resizebox{1\columnwidth}{!}{\includegraphics{Figures/cpu_time_par}}
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\end{scope}
\end{tikzpicture}
\caption{Computation time of DP-ECMS as a function of the number of cores on a computer with 12 CPUs with 2 cores each. The computation time is cut in half each time the number of CPUs doubles. When multiple cores are used on each CPU, the computation time is still reduced, but not as quickly.}
\label{fig:cpu_time_par}
\end{figure}
\subsection{Look-ahead interval analysis}
Instead of solving the DP over the whole route, a limited look-ahead distance can be used, similar to MPC.
The effects of the distance horizon on the DP-ECMS solution are analyzed in Section~\ref{sec::evaluation}.
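A minimal sketch of such an MPC-like scheme, assuming a hypothetical `solve_dp` routine that returns a control sequence for a short window and trivial placeholder dynamics:

```python
def receding_horizon_dp(x0, route, horizon, solve_dp, terminal_cost):
    """MPC-style loop: at each position, solve a DP over the next
    `horizon` distance steps only, apply the first control, advance.
    `solve_dp` and `terminal_cost` are hypothetical interfaces."""
    x, applied = x0, []
    for k in range(len(route)):
        window = route[k:k + horizon]
        policy = solve_dp(x, window, terminal_cost)  # DP over short window
        u = policy[0]                                # apply first move only
        applied.append(u)
        x = x + u                                    # placeholder dynamics
    return x, applied
```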
\section{Conclusion}
\label{sec::conclusion}
This paper proposes a multi-layer hierarchical control framework, termed DP-ECMS, for HEV eco-driving applications. The proposed control architecture uses GPS and route information to jointly compute the optimal vehicle velocity and powertrain torque split trajectories, with the aim of significantly improving the energy economy while providing a pathway to real-time implementation in a vehicle. Embedding the ECMS algorithm within the DP framework offers computational benefits without appreciable compromise in the performance of the optimization.
To accommodate en route variabilities and/or uncertainty in route information in a computationally tractable manner, a look-ahead optimization scheme using the DP-ECMS algorithm is developed. Here, the dynamic program is solved using the rollout algorithm, a technique based on approximation in the value space. The corresponding $\lambda$-tuning strategy evaluates multiple $\lambda$ values in parallel to pick the optimal candidate that results in the minimum cost-to-go approximation. It is shown that the proposed technique results in nominally charge-sustaining behavior over the entire trip.
Relative to the benchmark DP optimization, it is seen that the cost increments from the full-route DP-ECMS are always less than $\SI{2}{\%}$ over the evaluated mixed route. Further, the look-ahead DP-ECMS algorithm is consistently less than $\SI{1}{\%}$ sub-optimal over different $\lambda$ grid sizes. While the choice of the horizon length and route features are recognized to impact the results obtained, the proposed DP-ECMS method significantly reduces the computational costs incurred in the optimization and is a promising solution for real-time HEV eco-driving applications.
\section{Evaluation}
\label{sec::evaluation}
In this section, the different DP-ECMS approaches discussed in Sections \ref{sec::DPECMS_const_lambda_FR} and \ref{sec::DPECMS_grd_lambda_RH} are evaluated. Before the simulation results are presented, the selected test route and the benchmark solution developed are introduced.
\subsection{Test Route}
The test route over which the simulation study is performed is based on a mixed (urban/highway) route in the Columbus, OH region. This route is $\SI{7}{km}$ in length and includes 6 stop signs, as shown in Fig. \ref{fig::R19_features}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/R19_features_SI.pdf}
\caption{Speed limits and locations of stop signs along the test route.}
\label{fig::R19_features}
\end{figure}
\subsection{Benchmark Case}
The benchmark case developed considers the same problem formulated in Section \ref{sec::objective_problem}. A 2-state, 2-input DP is used to solve the full route optimization comprising $N$ steps. Specifically, the state variables chosen are the vehicle velocity and the battery SoC: $x_{b,k} = \left[v_k, \xi_k \right]^\mathsf{T}, \quad \forall k = 1,\dots,N-1$. The engine torque and BSG torque are chosen as the control variables: $u_{b,k} = [T_{eng,k}, T_{bsg,k}]^\mathsf{T}$. For consistency, the vehicle and powertrain models used here are identical to those used for developing the DP-ECMS method.
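The backward induction underlying such a benchmark can be sketched generically as follows (placeholder grids, dynamics and costs; the actual benchmark operates on the 2-state, 2-input vehicle model described above):

```python
def backward_dp(n_steps, states, controls, step, cost, terminal):
    """Tabulate the optimal cost-to-go J[x] by backward induction.
    `step` maps (x, u) -> next state (or None if infeasible),
    `cost` is the stage cost, `terminal` the terminal cost; all
    are placeholder callables for this sketch."""
    J = {x: terminal(x) for x in states}
    policy = []
    for k in reversed(range(n_steps)):
        Jk, pk = {}, {}
        for x in states:
            best, best_u = float("inf"), None
            for u in controls:
                xn = step(x, u)
                if xn is None or xn not in J:
                    continue                      # infeasible transition
                c = cost(x, u) + J[xn]
                if c < best:
                    best, best_u = c, u
            Jk[x], pk[x] = best, best_u
        J = Jk
        policy.insert(0, pk)
    return J, policy
```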
\subsection{Evaluation Metrics}
The optimization schemes developed are compared both qualitatively and quantitatively against the benchmark full-route DP optimization. For qualitative comparison, the resulting Pareto fronts from the multi-objective optimizations are compared, along with sample state trajectories. To quantitatively evaluate the performance of the DP-ECMS optimization routine, the cumulative cost that it incurs is compared with that incurred by the benchmark DP. The cumulative cost function used for comparison is shown below:
\begin{equation*}
\begin{aligned}
J^*(x_1) &= \sum_{k = 1}^{N} \left(\gamma \cdot \frac{\dot{m}_{f,k}^*}{\dot{m}_f^{norm}} + (1-\gamma)\right) \cdot t_k^* \\
&= \gamma \cdot \frac{m_{f,N}^*}{\dot{m}_f^{norm}} + (1-\gamma) \cdot \tau_N^*
\end{aligned}
\end{equation*}
where $m_{f,N}^*,\tau_N^*$ are respectively the resulting cumulative fuel consumption and total travel time from applying the optimal control policy. For fair evaluation, the cumulative costs for both the DP-ECMS and benchmark DP algorithms are calculated while ensuring battery SoC neutrality over the entire driving mission.
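As a small illustration (placeholder trajectory data), the cumulative cost above can be evaluated directly from the discrete fuel-rate and time-step trajectories:

```python
def cumulative_cost(mdot_f, t_step, gamma, mdot_norm):
    """J* = sum_k (gamma * mdot_f[k] / mdot_norm + (1 - gamma)) * t[k],
    which telescopes to gamma * m_f,N / mdot_norm + (1 - gamma) * tau_N."""
    return sum((gamma * m / mdot_norm + (1.0 - gamma)) * t
               for m, t in zip(mdot_f, t_step))

# Placeholder trajectories: fuel rate [g/s] and time per step [s]
J = cumulative_cost([2.0, 2.0], [1.0, 1.0], gamma=0.5, mdot_norm=2.0)
print(J)  # 2.0
```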
\subsection{Evaluation of Full-Route DP-ECMS}
To recall, the full-route DP-ECMS refers to the approach in which the DP-ECMS algorithm is developed to solve the full-route optimization problem and the $\lambda$-tuning strategy consists of computing a single $\lambda$ value over the entire trip using the shooting method. In this particular implementation of the full-route DP-ECMS, an unconstrained optimization routine is set up to tune the equivalence factor for achieving charge-sustaining behavior.
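The shooting idea can be sketched as a scalar root-finding problem on the terminal SoC (the `simulate_trip` interface is hypothetical, and bisection is used here for simplicity, whereas the implementation above uses an unconstrained optimization routine):

```python
def tune_lambda(simulate_trip, xi_target, lam_lo=1.0, lam_hi=5.0,
                tol=1e-3, max_iter=50):
    """Bisection on the equivalence factor: a larger lambda makes
    battery energy more expensive, raising the final SoC.
    `simulate_trip(lam)` returns the terminal SoC (hypothetical)."""
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        xi_final = simulate_trip(lam)
        if abs(xi_final - xi_target) < tol:
            return lam
        if xi_final < xi_target:   # battery over-used: raise lambda
            lam_lo = lam
        else:                      # battery under-used: lower lambda
            lam_hi = lam
    return lam
```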
Fig. \ref{fig::dp_ecms_full_route_pareto} shows sample results from the full-route DP-ECMS optimization over the mixed test route. A general note on the Pareto fronts is that lower values of $\gamma$ represent increasingly aggressive drivers, characterized by decreased travel time and increased fuel consumption. On comparison with the Pareto curve from the benchmark DP optimization, it is seen that the sub-optimal DP-ECMS optimization consumes only $\SIrange{0.5}{1}{\%}$ more fuel while taking $\SIrange{1.5}{2}{\%}$ longer to travel the same route.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/dp_ecms_full_route_pareto.pdf}
\caption{Pareto curve from DP-ECMS for full-route optimization and comparison with benchmark DP.}
\label{fig::dp_ecms_full_route_pareto}
\end{figure}
In Fig. \ref{fig::dp_ecms_full_route_sample_results}(a), the resulting vehicle velocity and battery SoC profiles for ($\gamma = 0.65, \lambda_0 = 2.87$) are compared with the corresponding benchmark DP solution. While the vehicle velocity trajectory is nearly identical to that from the DP, the battery usage from the DP-ECMS control strategy is more conservative but charge sustaining nonetheless. In the selected case shown in Fig. \ref{fig::dp_ecms_full_route_sample_results}(b), the DP-ECMS routine consumes only $\SI{0.9}{\%}$ more fuel cumulatively than the benchmark over the entire driving mission.
\begin{figure}[!htbp]
\centering
\vspace{-3mm}
\subfloat[Vehicle velocity and battery SoC profile, $\gamma = 0.65$]{\includegraphics[width=\columnwidth]{Figures/dp_ecms_full_route_V_veh_SOC_SI.pdf}}
\hfil
\subfloat[Cumulative fuel consumption]{\includegraphics[width=0.7\columnwidth]{Figures/dp_ecms_full_route_cum_fuel.pdf}}
\caption{Sample results from full-route optimization}
\label{fig::dp_ecms_full_route_sample_results}
\vspace{-1mm}
\end{figure}
\begin{table}[!htbp]
\centering
\begin{tabular}{cccc}
\hline
\multirow{2}{*}{$\gamma$} & Benchmark DP & DP-ECMS & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cost Increment\\ $[\SI{}{\%}]$\end{tabular}} \\
& $J^* [\SI{}{-}]$ & $J^* [\SI{}{-}]$ & \\ \hline
0.3 & 318 & 322 & +1.4 \\
0.4 & 286 & 289 & +1.1 \\
0.5 & 255 & 257 & +0.5 \\
0.65 & 204 & 205 & +0.6 \\
0.7 & 186 & 189 & +1.4 \\
0.75 & 168 & 170 & +0.8 \\
0.8 & 150 & 152 & +1.3 \\
0.82 & 143 & 145 & +1.6
\end{tabular}
\caption{Quantitative evaluation of full-route optimization using DP-ECMS controller}
\label{tab::DPECMS_FR_eval}
\end{table}
Table \ref{tab::DPECMS_FR_eval} summarizes the quantitative evaluation of the full-route DP-ECMS controller. The resulting cumulative cost is compared against that obtained from the benchmark DP optimization. For brevity, the argument of the cumulative cost is suppressed. The results from the proposed DP-ECMS controls are promising, as cost increments relative to the benchmark DP are always less than $\SI{2}{\%}$ over the entire mixed test route.
\subsection{Evaluation of Look-Ahead DP-ECMS}
To recall, the look-ahead DP-ECMS refers to the approach in which a look-ahead control problem is formulated to adapt to en route variabilities, and solved using the proposed DP-ECMS algorithm. Further, the dynamic $\lambda$-tuning strategy consists of exploring a uniform $\lambda$ grid such that an optimal equivalence factor is selected for each receding horizon. In the look-ahead DP-ECMS implemented, to achieve faster computation times, the DP routines corresponding to each $\lambda_i$ in the grid are run in parallel.
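The parallel $\lambda$-grid evaluation can be sketched as follows (hypothetical `solve_dp_ecms` interface returning the horizon cost for one equivalence factor; threads stand in for the parallel DP instances):

```python
from concurrent.futures import ThreadPoolExecutor

def best_lambda(lambda_grid, solve_dp_ecms, workers=4):
    """Run one DP-ECMS instance per candidate equivalence factor in
    parallel and keep the lambda with the smallest cost-to-go."""
    with ThreadPoolExecutor(workers) as pool:
        costs = list(pool.map(solve_dp_ecms, lambda_grid))
    i = min(range(len(costs)), key=costs.__getitem__)
    return lambda_grid[i], costs[i]
```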
Fig. \ref{fig::dp_ecms_look_ahead_pareto} shows the resulting Pareto front from the look-ahead DP-ECMS optimization over the mixed test route. Here, the equivalence factor is tuned dynamically from a $\lambda$-grid containing 10 equally spaced points. With a discretization step size of $\SI{10}{m}$ used in the DP-ECMS, a realistic choice is made for the horizon length, $N_H = 20$ (or $\SI{200}{m}$). On comparison with the Pareto curve from the benchmark DP, it is seen that the look-ahead DP-ECMS optimization consumes at most $\SI{2.5}{\%}$ more fuel while taking no more than $\SI{0.5}{\%}$ longer to travel the same route.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\columnwidth]{./Figures/dp_ecms_look_ahead_pareto_red.pdf}
\caption{Pareto curve from DP-ECMS for look-ahead control and comparison with benchmark DP.}
\label{fig::dp_ecms_look_ahead_pareto}
\end{figure}
Fig. \ref{fig::dp_ecms_look_ahead_sample_results}(a) shows the vehicle velocity, battery SoC, and dynamic equivalence factor profiles for ($\gamma = 0.65$). The vehicle velocity trajectory is nearly identical to that from the benchmark DP. In contrast to Fig. \ref{fig::dp_ecms_full_route_sample_results}(b), it is seen that the look-ahead DP-ECMS controller utilizes the battery to a greater extent, resulting in larger swings of the SoC. As desired, the torque split strategy is nominally charge-sustaining over the entire trip. Further, the behavior of the dynamic equivalence factor is as expected, where larger $\lambda$ values make battery use more expensive. In the selected case shown in Fig. \ref{fig::dp_ecms_look_ahead_sample_results}(b), the DP-ECMS routine consumes around $\SI{2}{\%}$ more fuel cumulatively than the benchmark over the entire driving mission.
\begin{figure}[!htbp]
\centering
\vspace{-3mm}
\subfloat[Vehicle velocity, battery SoC profile and dynamic equivalence factor, $\gamma = 0.65$]{\includegraphics[width=\columnwidth]{Figures/dp_ecms_look_ahead_V_veh_SOC_lambda_SI.pdf}}
\hfil
\subfloat[Cumulative fuel consumption]{\includegraphics[width=0.7\columnwidth]{Figures/dp_ecms_look_ahead_cum_fuel.pdf}}
\caption{Sample results from look-ahead control}
\label{fig::dp_ecms_look_ahead_sample_results}
\vspace{-1mm}
\end{figure}
\begin{table}[!htbp]
\centering
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}$\lambda$-Grid\\ $n_i$\end{tabular}} & \multirow{2}{*}{$\gamma$} & Benchmark DP & DP-ECMS & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cost Increment\\ $[\SI{}{\%}]$\end{tabular}} \\
& & $J^* [\SI{}{-}]$ & $J^* [\SI{}{-}]$ & \\ \hline
\multirow{8}{*}{4} & 0.3 & 318 & 320 & +0.9 \\
& 0.4 & 286 & 289 & +0.8 \\
& 0.5 & 255 & 256 & +0.2 \\
& 0.65 & 204 & 205 & +0.6 \\
& 0.7 & 186 & 188 & +0.6 \\
& 0.75 & 168 & 170 & +0.4 \\
& 0.8 & 150 & 152 & +0.8 \\
& 0.82 & 143 & 144 & +0.6 \\ \hline
\multirow{8}{*}{10} & 0.3 & 318 & 320 & +0.7 \\
& 0.4 & 286 & 288 & +0.7 \\
& 0.5 & 255 & 255 & +0 \\
& 0.65 & 204 & 205 & +0.4 \\
& 0.7 & 186 & 187 & +0.4 \\
& 0.75 & 168 & 169 & +0.3 \\
& 0.8 & 150 & 151 & +0.3 \\
& 0.82 & 143 & 143 & +0.3 \\ \hline
\multirow{8}{*}{40} & 0.3 & 318 & 319 & +0.6 \\
& 0.4 & 286 & 287 & +0.5 \\
& 0.5 & 255 & 255 & +0 \\
& 0.65 & 204 & 205 & +0.3 \\
& 0.7 & 186 & 187 & +0.4 \\
& 0.75 & 168 & 169 & +0.3 \\
& 0.8 & 150 & 150 & +0.1 \\
& 0.82 & 143 & 143 & +0
\end{tabular}
\caption{Quantitative evaluation of look-ahead control using DP-ECMS}
\label{tab::DPECMS_RH_eval}
\end{table}
Table \ref{tab::DPECMS_RH_eval} summarizes the quantitative evaluation of the look-ahead DP-ECMS controller. Further, the impact of $\lambda$ grid size (i.e. number of potential equivalence factor options) on the optimization results is evaluated by considering three different sizes $n_i = \{4,10,40\}$. The benchmark DP solution for each of these cases remains unchanged.
The cumulative cost from the look-ahead DP-ECMS is now compared against that obtained from the benchmark DP. It is seen that the cost increments relative to the benchmark DP optimization are further reduced compared to the constant $\lambda$ approach over the full route. Here, the look-ahead DP-ECMS scheme is always less than $\SI{1}{\%}$ sub-optimal over the entire mixed test route. These results suggest that being able to adapt the equivalence factor en route can bring noticeable improvement in both the battery usage as well as the ``proximity to optimality''.
\subsection{A Note on Computational Effort and Choice of Horizon Length}
The search spaces of the benchmark DP and the DP-ECMS algorithms can be used as intuitive measures of their respective time and space complexity.
The complexity of the DP-ECMS optimization method, with respect to the benchmark DP formulation with computational complexity \eqref{eq::computation_DP}, is given by:
\begin{equation}
\label{eq::computation_DP_ECMS}
\tilde{n}_{c} = \mathcal{O} \left( N \cdot \prod_{i = 1}^{n} \tilde{n}_{x,i} \cdot \prod_{i = 1}^{m-1} \tilde{n}_{u,i} \cdot \tilde{n}_{\bar{u}}\right)
\end{equation}
where $N$ is the number of stages, $\tilde{n}_x$ and $\tilde{n}_u$ are the number of discretized points in the state and control grids respectively and $\tilde{n}_{\bar{u}}$ refers to the discretization of the ECMS control (effectively the candidates for determining the torque split). Note that the dimension of the input space is now reduced by one since the DP-ECMS only explores over different powertrain torque values, with the power split being optimized within the ECMS. Here, the number of computations is further reduced as a smaller portion of the model is evaluated in simulating the ECMS controller. Further, a significant part of the model evaluations can be reused when performing the $\lambda$ grid search over different equivalence factors in \eqref{eq::DPECMS_grid}, thereby not significantly increasing the computational complexity.
When a vanilla DP algorithm is used to determine the optimal torque split strategy, the discretization of the $\xi_k$ grid is quite dense to minimize interpolation errors and avoid infeasibilities. In the DP-ECMS, the sole purpose of retaining the SoC as a state is to ensure constraint satisfaction over the optimization horizon. Simulation studies have revealed that the granularity of the $\xi_k$ grid has minimal impact on the actual torque split optimization as this is handled by the embedded ECMS routine. Thus, the proposed structure provides the freedom to select a highly coarsened $\xi_k$ grid without sacrificing on optimality and constraint satisfaction.
Simulations have shown that the ECMS allows for having a coarser discretization $n_{\bar{u}}$ compared to $n_u$. Further, these computations are much simpler in nature (with very few, low-dimensional interpolations). As a result, depending on the system configuration and application, the DP-ECMS optimization routine can offer over $10\times$ reduction in the number of computations performed.
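As an illustrative count (the grid sizes below are assumed for the sketch, not those of the actual study), the two complexity expressions compare as:

```python
def dp_count(N, n_x, n_u):
    """Search-space size: N * prod(state grids) * prod(control grids)."""
    p = N
    for n in n_x + n_u:
        p *= n
    return p

# Assumed grids: 2 states (velocity 50 pts, SoC 40 pts), 2 controls
# (engine torque 30 pts, BSG torque 20 pts), N = 700 distance steps.
n_benchmark = dp_count(700, [50, 40], [30, 20])
# DP-ECMS: coarse SoC grid (5 pts), one torque control (30 pts) plus a
# coarse ECMS torque-split candidate set (8 pts).
n_dpecms = dp_count(700, [50, 5], [30, 8])
print(n_benchmark / n_dpecms)  # 20.0x fewer model evaluations
```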
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/dp_ecms_look_ahead_pareto_N_H_comp_V2.pdf}
\caption{Pareto curve from DP-ECMS for look-ahead optimization and comparison with benchmark DP.}
\label{fig::dp_ecms_look_ahead_pareto_N_H_study}
\end{figure}
Fig. \ref{fig::dp_ecms_look_ahead_pareto_N_H_study} shows the impact of horizon length on the optimality of the resulting solution. Here, the Pareto curves corresponding to $N_H = \{5,10,20,30 \}$ are all enclosed within the Pareto curves from the benchmark DP and full-route DP-ECMS. This observation becomes fairly obvious by recognizing that the performance of the benchmark DP cannot be mathematically improved, and by considering the upper limit of $N_H$, which eventually spans the entire (or remaining) route. As the value function of the full-route optimization is used as the terminal cost of the look-ahead control, it is expected that shorter $N_H$ will result in a Pareto front that more closely matches the benchmark DP. From these results, a choice of $N_H = 20$ (as made in Fig. \ref{fig::dp_ecms_look_ahead_pareto}) is justified as a reasonable compromise between adaptability to en route variabilities (while preserving information from the original full-route DP) and the computational cost incurred. However, it is to be noted that this choice of horizon length cannot be generalized. Some of these observations are subject to change depending on the powertrain configuration and features of the test route, i.e., speed limits, density of stop signs, elevation and so on.
\section{Introduction}
\label{sec::introduction}
Recent years have seen intensive research efforts and rapid advancements in technologies for connected and autonomous driving in the automotive industry. These technologies promise significant improvements in energy economy, safety and user convenience. In particular, the access to more information, increased computational power, and precision control have expanded the energy-saving potential of these Connected and Automated Vehicles (CAVs) \cite{ozatay2014cloud, grumert2015analysis}.
Eco-driving typically refers to velocity control for minimizing the energy use over a driving mission \cite{jin2016power, sciarretta2015optimal}. The eco-driving controls use route information such as speed limits, stop sign locations, grade and so on to compute the energy-optimal speed trajectory. Depending on the level of vehicle automation, this velocity trajectory is used as a driver advisory \cite{wan2016optimal} or provided as a reference to the Advanced Driver Assistance Systems (ADAS) such as an Adaptive Cruise Controller (ACC).
When a conventional powertrain (having an internal combustion engine) or a battery electric vehicle (BEV) is considered, the eco-driving control scenario is simpler. A Dynamic Programming (DP) based approach for velocity control is developed for a conventional powertrain in \cite{mensing2013trajectory} and for a BEV in \cite{dib2014optimal}. In \cite{hellstrom2009look,hellstrom2010design}, a real-time velocity optimization algorithm using DP is proposed. The authors in \cite{ozatay2014cloud} solve a DP optimization in the cloud for optimizing the velocity reference that is tracked by a human driver.
Eco-driving applications in vehicles equipped with hybrid electric powertrains present an additional degree of complexity, as an intelligent energy management strategy (EMS) has to be designed along with the velocity optimization. Depending on the approach, the powertrain and velocity controls can be handled separately or optimized simultaneously. In \cite{ozatay2017velocity}, Pontryagin's Minimum Principle is used to solve the optimization problem. In \cite{uebel2017optimal}, a controller that combines the Pontryagin's Minimum Principle with DP is proposed. The look-ahead optimization scheme in \cite{olin2019reducing} utilizes DP for combined velocity and torque split optimization in a mild-hybrid electric vehicle. The DP-based approach in \cite{heppeler2014fuel} considers intelligent battery management along with velocity optimization. A Model Predictive Control (MPC) based scheme is developed in \cite{sun2014velocity}, that utilizes a neural network to predict the velocity profile. In \cite{xiang2017energy} as well, neural networks are used for predicting the velocity, with the energy management handled by a nonlinear MPC that uses a forward dynamic programming algorithm. An emerging area of interest is the use of reinforcement learning for eco-driving control, as in \cite{liu2017reinforcement}.
In the literature, a hierarchical decision system is commonly adopted in eco-driving control scenarios \cite{homchaudhuri2016fast, sun2018robust, lim2016distance}. In \cite{heppeler2017predictive}, a multi-layer scheme is considered for optimizing the velocity and energy management over long and short distance ranges, with an on-line Equivalent Consumption Minimization Strategy for real-time powertrain control. This type of architecture is suitable for autonomous systems to handle the complex control problem of combining vehicle motion planning, velocity trajectory optimization and powertrain control \cite{paden2016survey}. When a travel route is selected, the velocity can be optimized with respect to route information, traffic, and the powertrain energy management system, such that the predicted fuel consumption is minimized. The optimized velocity trajectory, together with the resulting energy management strategy, is then used as input to the low-level real-time control of the vehicle.
\section{Model of Parallel Mild-Hybrid Vehicle}
\label{sec::model}
The parallel mHEV topology studied in this work is illustrated in Fig. \ref{fig::NEXTCAR_mHEV_topology}. A Belted Starter Generator (BSG) replaces the conventional alternator, and is connected to the crankshaft of a 1.8L turbocharged gasoline engine and a 48V battery pack.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figures/NEXTCAR_mhEV_Topology.pdf}
\caption{Block diagram of P0 mHEV topology.}
\label{fig::NEXTCAR_mHEV_topology}
\end{figure}
A forward-looking dynamic powertrain model is developed for fuel economy evaluation and control strategy verification over prescribed routes. It contains the following key elements:
\begin{itemize}
\item Quasi-static models of the engine and BSG;
\item Low-frequency, dynamic model of the battery;
\item Quasi-static models of the torque converter and transmission;
\item Low-frequency model of the vehicle longitudinal dynamics.
\end{itemize}
The structure of this forward model has been depicted in Fig. \ref{fig::plant_model}. The inputs to the plant model are the desired BSG torque ($T_{bsg,t}^{des}$) and desired IMEP ($p_{ime,t}^{des}$). These are obtained from a (simplified) model of the Electronic Control Unit (ECU), that contains a baseline torque split strategy and other essential functions that allow conversion from driver’s input (pedal position) to powertrain commands.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figures/Plant_Model.pdf}
\caption{Block diagram of 48V P0 mild-hybrid drivetrain.}
\label{fig::plant_model}
\end{figure}
\subsection{Engine Model}
The engine fuel consumption is modeled as a static nonlinear map $\psi_t(\cdot,\cdot)$ of engine torque and speed. The brake mean effective pressure, actual brake torque and fuel flow rate obtained from this model can be respectively expressed as:
\begin{equation}
\label{eq::engine_model_T_eng_fuel}
\begin{aligned}
p_{bme,t} &= p_{ime,t}^{des} - p_{fme,t}(T_{eng,t}^{des},\omega_{eng,t}) - p_{pme,t}(\omega_{eng,t})\\
T_{eng,t} &= p_{bme,t} \cdot \frac{V_d}{4\pi} \\
\dot{m}_{f,t} &= \psi_t(T_{eng,t},\omega_{eng,t})
\end{aligned}
\end{equation}
where $\omega_{eng,t}$ is the engine speed, $p_{fme,t}$ is the friction mean effective pressure and $p_{pme,t}$ is the pumping mean effective pressure. These maps are based on steady-state engine test bench data provided by a supplier; the same maps and torque structure are used in the simplified ECU model.
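As an illustrative sketch, the engine model equations above can be evaluated as follows. The map functions and parameter values here are simple placeholders, not the supplier data (the actual model uses test-bench maps for $\psi_t$, $p_{fme,t}$ and $p_{pme,t}$):

```python
import math

V_D = 1.8e-3  # engine displacement [m^3] (1.8L)

def p_fme(T_eng_des, w_eng):
    """Friction mean effective pressure [Pa]; illustrative linear fit."""
    return 0.8e5 + 25.0 * w_eng

def p_pme(w_eng):
    """Pumping mean effective pressure [Pa]; illustrative linear fit."""
    return 0.2e5 + 10.0 * w_eng

def fuel_rate(T_eng, w_eng):
    """Fuel flow map psi_t [g/s]; illustrative Willans-type fit."""
    return max(0.0, 1e-4 * T_eng * w_eng + 0.05)

def engine_model(p_ime_des, T_eng_des, w_eng):
    """Brake MEP, brake torque and fuel rate per the engine model equations."""
    p_bme = p_ime_des - p_fme(T_eng_des, w_eng) - p_pme(w_eng)
    T_eng = p_bme * V_D / (4.0 * math.pi)  # 4-stroke engine: V_d / (4*pi)
    mdot_f = fuel_rate(T_eng, w_eng)
    return p_bme, T_eng, mdot_f
```

For instance, at $\omega_{eng} = 200$ rad/s and a desired IMEP of 10 bar, these placeholder maps yield a brake torque of roughly 128 Nm.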
\subsection{BSG Model}
A simplified, quasi-static model is used to compute the BSG torque and electrical power output:
\begin{equation}
\label{eq::BSG_model}
\begin{aligned}
\omega_{bsg,t} &= r_{belt}\cdot \omega_{eng,t}\\
P_{bsg,t} &= T_{bsg,t} \cdot \omega_{bsg,t} \cdot \bar{\eta}_{bsg,t} \\
\bar{\eta}_{bsg,t} &= {\begin{cases}
\eta_{bsg,t}(\omega_{bsg,t},T_{bsg,t}), & T_{bsg,t} < 0 \\
\frac{1}{\eta_{bsg,t}(\omega_{bsg,t},T_{bsg,t}) }, & T_{bsg,t} > 0
\end{cases}}
\end{aligned}
\end{equation}
where $r_{belt}$ is the belt ratio, $P_{bsg,t}$ is the electrical power required to produce a torque $T_{bsg,t}$ at speed $\omega_{bsg,t}$, and $\eta_{bsg,t}$ is the map-based BSG efficiency.
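A sketch of this quasi-static BSG computation follows; the belt ratio and the constant efficiency value are assumptions (the actual model uses a map-based $\eta_{bsg,t}$):

```python
R_BELT = 2.5  # assumed belt ratio r_belt

def bsg_efficiency(w_bsg, T_bsg):
    """Stand-in for the map-based BSG efficiency."""
    return 0.85

def bsg_power(T_bsg, w_eng):
    """Electrical power needed to produce T_bsg at the BSG shaft."""
    w_bsg = R_BELT * w_eng
    eta = bsg_efficiency(w_bsg, T_bsg)
    # Generating (T_bsg < 0): less electrical power is recovered than the
    # mechanical power absorbed; motoring (T_bsg > 0): more electrical power
    # is drawn than the mechanical power delivered.
    eta_bar = eta if T_bsg < 0 else 1.0 / eta
    return T_bsg * w_bsg * eta_bar
```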
\subsection{Battery Model}
A zero-th order equivalent circuit model is used to compute the battery State-of-Charge (SoC) and voltage. The model equations are:
\begin{equation}
\label{eq::battery_model}
\begin{aligned}
I_{batt,t} &= \frac{V_{oc,t}(\xi_t) - \sqrt{V^2_{oc,t}(\xi_t) -4R_0\cdot P_{bsg,t}}}{2R_0} \\
\bar{I}_{batt,t} &= I_{batt,t} + I_{bias} \\
\frac{\mathrm{d}\xi_t}{\mathrm{d}t} &= -\frac{1}{C_{nom}}\cdot \bar{I}_{batt,t}
\end{aligned}
\end{equation}
where $V_{oc,t}$ is the battery open-circuit voltage, $R_0$ is a map-based approximation of the battery internal resistance, $I_{batt,t}$ is the battery current, $\xi_t$ is the battery SoC, and $C_{nom}$ is the nominal capacity of the battery. Further, a calibration term $I_{bias}$ is introduced as a highly simplified representation of the on-board electrical auxiliary loads.
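The current and SoC update can be sketched as below. All parameter values are placeholders, and the open-circuit voltage and internal resistance are taken constant here, whereas the actual model uses SoC-dependent maps:

```python
import math

V_OC = 48.0            # open-circuit voltage [V] (assumed constant)
R_0 = 0.05             # internal resistance [ohm] (assumed)
C_NOM = 8.0 * 3600.0   # nominal capacity, 8 Ah expressed in [As] (assumed)
I_BIAS = 2.0           # auxiliary-load current bias [A] (assumed)

def battery_step(soc, p_bsg, dt):
    """One Euler step of the 0th-order equivalent circuit model."""
    # Discharge current solving P = V_oc*I - I^2*R_0 (smaller root)
    i_batt = (V_OC - math.sqrt(V_OC**2 - 4.0 * R_0 * p_bsg)) / (2.0 * R_0)
    i_bar = i_batt + I_BIAS
    soc_next = soc - dt * i_bar / C_NOM
    return i_batt, soc_next
```

Note that the discriminant also encodes a power limit of the pack, $P_{bsg} \leq V_{oc}^2/(4R_0)$, beyond which no real current solution exists.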
\subsection{Torque Converter Model}
A simplified torque converter model is developed with the purpose of computing power losses during traction and regeneration modes. The model equations are:
\begin{equation}
\label{eq::TC_model}
\begin{aligned}
\omega_{p,t} &= \omega_{turb,t} + \omega_{slip,t}^{des}(n_{gr,t}(v_t,T_{eng,t}),\omega_{eng,t},T_{eng,t}) \\
\omega_{eng,t} &= \begin{cases}
\omega_{p,t}, & \omega_{p,t} \geq \omega_{eng,stall} \\
\omega_{idle}, & 0\leq \omega_{p,t} < \omega_{eng,stall} , stop = 0 \\
0, & 0\leq \omega_{p,t} < \omega_{eng,stall} , stop = 1
\end{cases}\\
T_{turb,t} &= T_{pt,t}
\end{aligned}
\end{equation}
where $\omega_{p,t}$ is the speed of the torque converter pump, $\omega_{turb,t}$ is the speed of the turbine and $\omega^{des}_{slip,t}$ is the torque converter clutch slip (determined by the ECU based on engine operating condition); $\omega_{eng,stall}$ is the speed at which the engine stalls, $\omega_{idle}$ is the idle speed (target) of the engine, $stop$ is a flag from the ECU indicating engine shut-off when the vehicle is stationary, $T_{turb,t}$ is the turbine torque, and $T_{pt,t}$ is the powertrain torque.
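The engine-speed case logic can be sketched as follows (the stall and idle speed values are placeholders):

```python
W_ENG_STALL = 90.0  # assumed engine stall speed [rad/s]
W_IDLE = 80.0       # assumed idle speed target [rad/s]

def engine_speed(w_turb, w_slip_des, stop):
    """Engine speed from turbine speed, desired clutch slip and stop flag."""
    w_p = w_turb + w_slip_des  # torque converter pump speed
    if w_p >= W_ENG_STALL:
        return w_p
    # Below stall: hold idle speed, unless the ECU commands engine shut-off
    return 0.0 if stop else W_IDLE
```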
\subsection{Transmission Model}
The transmission is modeled as a static gearbox with efficiency $\eta_{trans,t}$, which is determined empirically from vehicle test data:
\begin{equation}
\label{eq::transmission_model}
\begin{aligned}
\omega_{turb,t} &= r_{f} \cdot r_{gr,t}(n_{gr,t}(v_{t},T_{eng,t})) \cdot \frac{v_{t}}{R_w} \\
T_{out,t} &= r_{f} \cdot r_{gr,t}(n_{gr,t}(v_{t},T_{eng,t})) \cdot T_{turb,t} \cdot \bar{\eta}_{tran,t}\\
\bar{\eta}_{tran,t} &= \begin{cases}
\eta_{tran,t}(\omega_{turb,t},T_{turb,t}), & T_{turb,t} \geq 0 \\
\dfrac{1}{\eta_{tran,t}(\omega_{turb,t},T_{turb,t})}, & T_{turb,t} < 0
\end{cases}
\end{aligned}
\end{equation}
where $r_{f}$ is the final drive ratio, $r_{gr,t}$ is the gear ratio, $R_w$ is the rolling radius of the wheel, and $T_{out,t}$ is the transmission output shaft torque.
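A sketch of the static gearbox relations follows; the final drive ratio, wheel radius and the constant efficiency are assumptions (the actual $\eta_{tran,t}$ is map-based):

```python
R_FINAL = 3.6  # assumed final drive ratio r_f
R_W = 0.32     # assumed wheel rolling radius [m]

def transmission(v, T_turb, r_gear, eta=0.95):
    """Turbine speed and output torque of the static gearbox model."""
    w_turb = R_FINAL * r_gear * v / R_W
    # Traction (T_turb >= 0): losses reduce the torque at the wheels;
    # negative turbine torque (regeneration): efficiency enters as 1/eta.
    eta_bar = eta if T_turb >= 0 else 1.0 / eta
    T_out = R_FINAL * r_gear * T_turb * eta_bar
    return w_turb, T_out
```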
\subsection{Vehicle Longitudinal Dynamics Model}
This model is based on the road-load equation, in which only longitudinal dynamics is considered to obtain the balance of tractive force at the wheel and road-load \cite{guzzella2007vehicle}:
\begin{equation}
\label{eq::longitudinal_dyn_model}
\begin{aligned}
\frac{\mathrm{d}v_t}{\mathrm{d}t} &= \frac{F_{trc,t} - F_{road,t}(v_t)}{M}\\
F_{trc,t} &= \frac{T_{out,t}}{R_w} - F_{brk,t}
\end{aligned}
\end{equation}
where $v_t$ is the vehicle velocity, $M$ is the vehicle mass, $F_{trc,t}$ is the net force exerted by the propulsion system (including the braking force $F_{brk,t}$), and $F_{road,t}$ is the road-load, defined as the force imparted on a vehicle while driving at constant speed over a smooth level surface, arising from sources such as tire rolling resistance, driveline losses, and aerodynamic drag:
\begin{equation}
\label{eq::road_load}
\begin{aligned}
F_{road,t}\left(v_{t} \right) = \dfrac{1}{2}C_x\rho_a A_f v_{t}^2 &+ Mg \cos{\alpha} \cdot C_{r,t}\left(v_{t} \right) \\
& + Mg\sin{\alpha}
\end{aligned}
\end{equation}
where $C_x$ is the aerodynamic drag coefficient, $\rho_a$ is the air density, $A_f$ is the effective aerodynamic frontal area, $C_{r,t}$ is the rolling resistance coefficient, and $\alpha$ is the road grade (expressed in \emph{rad}).
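The road-load balance can be sketched numerically as below, with illustrative vehicle parameters (not the test vehicle's values) and a constant rolling resistance coefficient:

```python
import math

M = 1600.0   # vehicle mass [kg] (assumed)
C_X = 0.30   # aerodynamic drag coefficient (assumed)
RHO_A = 1.2  # air density [kg/m^3]
A_F = 2.2    # frontal area [m^2] (assumed)
C_R = 0.009  # rolling resistance coefficient (taken constant here)
G = 9.81     # gravitational acceleration [m/s^2]
R_W = 0.32   # wheel rolling radius [m] (assumed)

def road_load(v, alpha):
    """Aerodynamic drag + rolling resistance + grade force."""
    return (0.5 * C_X * RHO_A * A_F * v**2
            + M * G * math.cos(alpha) * C_R
            + M * G * math.sin(alpha))

def acceleration(v, T_out, F_brk, alpha):
    """Longitudinal acceleration from the road-load equation."""
    F_trc = T_out / R_W - F_brk
    return (F_trc - road_load(v, alpha)) / M
```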
\subsection{Model Verification}
\begin{figure}[!htbp]
\centering
\vspace{-3mm}
\subfloat[Vehicle speed and battery SoC comparison]{\includegraphics[width=\columnwidth]{Figures/FTP_V_veh_SOC_comp_SI.pdf}}
\hfil
\subfloat[Cumulative fuel consumption comparison]{\includegraphics[width=0.7\columnwidth]{Figures/FTP_fuel_comp.pdf}}
\caption{Validation of forward vehicle model over FTP cycle.}
\label{fig::model validation_FTP_results}
\vspace{-1mm}
\end{figure}
The forward model is calibrated and verified using experimental data for the FTP regulatory drive cycle. Fig. \ref{fig::model validation_FTP_results} shows the results of vehicle model verification over the FTP regulatory drive cycle. The experimental data for validation is obtained from chassis dynamometer testing.
The fuel consumed over the cycle is well estimated by the model, with an error on the final value of less than 4\% relative to the actual engine. Mismatches in fuel economy are caused by differences in the model of the production powertrain control strategy, which does not account for the different calibrations of the engine and BSG during cold-start conditions. Given the approximations made, the calibration is considered satisfactory for the purpose of predicting fuel consumption and battery SoC profiles over user-defined routes.
\section{Motivation and Proposed Contributions}
\label{sec::motivation_contrib}
\begin{figure*}[!t]
\centering
\includegraphics[width=1.5\columnwidth]{Figures/DPECMS_Ctrl_Architecture_Simp.pdf}
\caption{Simplified rendition of proposed control architecture for eco-driving applications.}
\label{fig::DPECMS_ctrl_architecture}
\end{figure*}
Dynamic Programming (DP) has been used extensively in the literature to numerically solve HEV energy management and eco-driving problems, due to its ability to handle discontinuities, integer states and nonlinearities in states or control. DP uses the Bellman equation to break down an optimization problem into a sequence of sub-problems and solve them in a backward manner \cite{bellman1966dynamic}. Starting from the terminal stage, whose cost is defined as $J_N(x_N) = g_N(x_N)$, the intermediate calculation steps in DP are given by the following recursion \cite{guzzella2007vehicle}:
\begin{equation}
\label{eq::bellman}
\begin{aligned}
J_k(x_k) = \min_{\mu_k(x_k)} \quad J_{k+1}(f_k(x_k,\mu_k(x_k))) + g_k(x_k,\mu_k(x_k)), & \\
\forall k = 1,\dots,N-1 &
\end{aligned}
\end{equation}
where the optimal control policy $\mathcal{M}^* = \left(\mu_{1}^*, \dots, \mu_{N-1}^* \right)$ for an initial condition $x_1$ is given by the trajectory that minimizes $J_1(x_1)$, and $J_{k+1}(f_k(x_k,u_k))$ is the optimal cost-to-go from the projected state to the terminal state. In this way, the cost functional defined in \eqref{eq::prb_cost_fn_gen} is minimized systematically using a constrained DP algorithm. A general drawback of DP is the computational cost that grows exponentially with the dimensions of the state and action spaces, a phenomenon Bellman refers to as the curse of dimensionality \cite{bellman1964dynamic}. The total number of computations in DP is given by:
\begin{equation}
\label{eq::computation_DP}
n_{c} = \mathcal{O} \left(N \cdot \prod_{i = 1}^{n} n_{x,i} \cdot \prod_{i = 1}^{m} n_{u,i} \right)
\end{equation}
where $N$ is the number of stages, $n_x$ and $n_u$ are the number of discretized points in the state and control grids respectively, and $n,m$ refers to the dimensions of the state and action spaces respectively \cite{Larson:1982:PDP:578147}. To reduce the computational cost, models of HEVs for fuel economy optimization are often limited to a few states (vehicle velocity, battery State-of-Charge, and selected gear, for example).
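To make the recursion in \eqref{eq::bellman} and the cost scaling concrete, the following toy backward DP sketch uses a hypothetical one-dimensional state with nearest-grid interpolation and illustrative costs:

```python
def backward_dp(N, x_grid, u_grid, f, g, g_N):
    """Tabular backward DP: returns the stage-1 cost-to-go and the policy."""
    J = {x: g_N(x) for x in x_grid}  # terminal cost J_N
    policy = []
    for k in range(N - 1, 0, -1):    # k = N-1, ..., 1
        J_new, mu = {}, {}
        for x in x_grid:             # n_x grid points ...
            best = float("inf")
            for u in u_grid:         # ... times n_u candidate controls
                # project the state and snap it to the grid (interpolation)
                x_next = min(x_grid, key=lambda z: abs(z - f(x, u)))
                c = g(x, u) + J[x_next]
                if c < best:
                    best, mu[x] = c, u
            J_new[x] = best
        J, policy = J_new, [mu] + policy
    return J, policy
```

Even this scalar example performs on the order of $N \cdot n_x \cdot n_u$ cost evaluations; every additional state or control dimension multiplies the count by the new grid size, which is the curse of dimensionality captured by \eqref{eq::computation_DP}.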
An alternative to DP for on-line hybrid powertrain energy management is the Equivalent Consumption Minimization Strategy (ECMS). The ECMS is a real-time heuristic control strategy derived from the Pontryagin's Minimum Principle (PMP) by assuming that the co-state in the PMP formulation remains constant over the trip \cite{paganelli2002equivalent}. In the ECMS, the objective function comprises the actual fuel consumption and an equivalent cost term that represents the electrical energy consumption from the battery. The following instantaneous minimization problem is solved at each stage $k$:
\begin{equation}
\label{eq::ECMS_opt_gen}
\begin{aligned}
\min_{\bar{u}_k} \quad \dot{m}_{f,k}(x_k,\bar{u}_k) + \lambda \cdot f_{pen,k}(x_k) \cdot \frac{P_{batt,k}^{des}(x_k,\bar{u}_k)}{Q_{lhv}}, &\\
\forall k = 1,\dots,N-1 &
\end{aligned}
\end{equation}
where $\bar{u}_k$ is the control input optimized by the ECMS (essentially the torque split), $Q_{lhv}$ is the energy density of the fuel (lower heating value), ${P_{batt,k}^{des}}$ is the power requested from the battery, $\lambda$ is the equivalence factor, a trade-off parameter that defines the battery-fuel equivalence and represents the future fuel cost needed for recharging the battery, and $f_{pen,k}$ is a SoC-dependent penalty function to ensure boundedness of the SoC state. The equivalence factor is route-dependent and needs to be tuned for achieving charge sustaining behavior. Adaptive tuning strategies have been proposed, see for instance \cite{musardo2005ecms, pisu2007comparative}.
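The instantaneous minimization in \eqref{eq::ECMS_opt_gen} can be sketched over a discretized BSG torque grid. The torque-split convention ($T_{eng} = T_{pt} - T_{bsg}$) and the fuel and battery-power models passed in below are illustrative placeholders:

```python
Q_LHV = 44.0e6  # lower heating value of gasoline [J/kg] (approximate)

def ecms_step(t_pt, w_eng, lam, t_bsg_grid, fuel_fn, p_batt_fn, f_pen=1.0):
    """Pick the BSG torque minimizing fuel plus equivalent battery cost."""
    best_u, best_cost = None, float("inf")
    for t_bsg in t_bsg_grid:
        t_eng = t_pt - t_bsg  # engine supplies the remaining torque
        cost = (fuel_fn(t_eng, w_eng)
                + lam * f_pen * p_batt_fn(t_bsg, w_eng) / Q_LHV)
        if cost < best_cost:
            best_cost, best_u = cost, t_bsg
    return best_u, best_cost
```

A small $\lambda$ makes battery energy cheap and favors electric assist; a large $\lambda$ makes it expensive and favors charging, which is exactly the trade-off the equivalence factor encodes.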
\subsection{Proposed Contribution}
In this work, a novel control architecture is proposed for application to eco-driving in a HEV, as shown in Fig. \ref{fig::DPECMS_ctrl_architecture}. This architecture harnesses and further extends the benefits from both the DP and ECMS optimization schemes while reducing the computational and calibration effort required for real-time implementation. The DP-ECMS approach allows one to reduce the dimensions of the state and action spaces, thereby significantly reducing the computational effort with minimal loss in optimality.
One of the outputs from the DP-ECMS algorithm is the optimal vehicle velocity that can be fed as a reference for an on-board Adaptive Cruise Controller (ACC). Furthermore, the DP-ECMS algorithm and the $\lambda$-tuning strategy determine the optimal equivalence factor. This is used by the on-line ECMS controller based on \eqref{eq::ECMS_opt_gen}, along with the desired powertrain torque request from the ACC, to optimally determine the HEV powertrain torque split strategy. This implementation can significantly reduce the calibration effort to integrate the eco-driving controls with the existing on-board vehicle controllers.
\subsection{Reformulation of Optimization Problem}
\label{sec::problem_reformulation}
The problem discussed in Section \ref{sec::objective_problem} is now reformulated with additional details. The state variables used in the DP-ECMS optimization are the energy (or more precisely, the square of the vehicle velocity, $v_{k}^2$) and the battery SoC ($\xi_k$). One of the key benefits of using the energy as the state instead of the velocity is that the simple Euler forward method can be used with adequate solution characteristics, as discussed in \cite{hellstrom2010design}.
The typical control variables of choice when DP is employed to solve the classical HEV energy management problem are the engine brake torque ($T_{eng,k}$) and BSG torque ($T_{bsg,k}$). Here, at each discretized (state) grid point the DP explores several combinations of these two discretized control inputs. A computationally economical alternative to this approach is adopted in the proposed DP-ECMS algorithm, where this exploration is performed over the discretized powertrain torque instead, while the torque split is optimized by the ECMS. A clear benefit of this structure is that the overall optimization problem is still cast within the DP framework (which returns robust, closed-loop optimal policies), while reducing the dimension of the action space. The optimization states and control actions are summarized as follows:
\begin{equation}
\begin{aligned}
x_k = \begin{bmatrix}
v_{k}^2 \\
\xi_k
\end{bmatrix} \in \mathcal{X}_k \subset \mathbb{R}^2,
\hspace{0.5cm}
u_k = \begin{bmatrix}
T_{pt,k}
\end{bmatrix} \in \mathcal{U}_k \subset \mathbb{R}
\label{eq::states_inputs_DP_ECMS}
\end{aligned}
\end{equation}
where $T_{pt,k}$ is the powertrain torque. In the ECMS developed in this work, the control variable considered is the electric motor torque (specifically, the Belted Starter Generator or BSG torque): $\bar{u}_k = [T_{bsg,k}] \in \bar{\mathcal{U}}_k \subset \mathbb{R}$. Further, the cost index from \eqref{eq::prb_cost_fn_gen} that the DP-ECMS controller aims to minimize is reformulated as shown below:
\begin{equation}
\label{eq::prb_cost_fn_gen_DPECMS}
\begin{aligned}
J(\mathcal{M}; \lambda) = g_N(x_N) + \sum_{k = 1}^{N-1} g_k(x_k,u_k ; \lambda)
\end{aligned}
\end{equation}
where $g_k$ is the per stage cost function that is now parametrized by the equivalence factor $\lambda$ as the DP framework now contains the ECMS algorithm as well.
\section*{Acknowledgment}
Our team gratefully acknowledges the support from the United States Department of Energy, Advanced Research Projects Agency – Energy (award number DE-AR0000794). The authors are grateful to Mr. Leo Bauer for the discussions and research efforts as a part of his Master's thesis \cite{bauer2018distance}, which eventually matured into this work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{ieeetr}
\section{Objective and Problem Formulation}
\label{sec::objective_problem}
The objective in this work is to develop a control system for autonomous velocity optimization and powertrain control of a mild-hybrid electric vehicle (mHEV). It is assumed that GPS data and route information, such as speed limits, road slope, and locations of stop signs, are available. Further, surrounding traffic is implicitly handled by a traffic management system that is assumed to control the traffic flow with variable speed limits to avoid collisions \cite{grumert2017using}. To adapt to such varying route conditions, it is necessary to continuously update the velocity optimization and energy management strategy during the driving mission.
To this end, a nonlinear dynamic optimization is formulated to minimize the fuel consumption of the vehicle over the entire driving mission by controlling its velocity trajectory and torque split strategy. This problem is cast in the spatial domain to simplify the process of incorporating position-based route information. Consider a discrete-time dynamic control problem having the form:
\begin{equation}
x_{k+1} = f_k(x_k,u_k) \quad \forall k = 1, \ldots, N-1
\end{equation}
where $k$ is the grid point along the route, $x_k \in \mathcal{X}_k \subset \mathbb{R}^n$ is the state, $u_k \in \mathcal{U}_k \subset \mathbb{R}^m$ is the control input, and $f_k$ is the state transition function. Typical control inputs in optimal eco-driving and classical HEV energy management problems include the engine and electric motor torques \cite{olin2019reducing, sciarretta2015optimal}. The control action and state are constrained by a function $h_k: \mathcal{X}_k \times \mathcal{U}_k \to \mathbb{R}^r$ that takes the form:
\begin{equation}
h_k(x_k,u_k) \leq 0, \quad \forall k = 1, \dots, N-1
\end{equation}
This may include route speed limits, operating limits of physical actuators and subsystems, constraints for drive comfort and so on. An admissible control map at grid point $k$ is defined as a map $\mu_k : \mathcal{X}_k \to \mathcal{U}_k$ such that $h_k(x,\mu_k(x)) \leq 0, \forall x \in \mathcal{X}_k$. The collection of admissible control maps, $\mathcal{M} := \left(\mu_{1}, \dots, \mu_{N-1} \right)$, is referred to as the control policy. The controller aims at minimizing a cost, given by:
\begin{equation}
\label{eq::prb_cost_fn_gen}
\begin{aligned}
J(\mathcal{M}) = g_N(x_N) + \sum_{k = 1}^{N-1} g_k(x_k,u_k)
\end{aligned}
\end{equation}
where $g_k : \mathcal{X}_k \times \mathcal{U}_k \to \mathbb{R}$ is the per stage cost function, considered in this work as a weighted average of the fuel consumption and travel time:
\begin{equation}
\label{eq::prb_cost_fn}
\begin{aligned}
g_k(x_k,u_k) = \left(\gamma \cdot \frac{\dot{m}_{f,k}(x_k,u_k)}{\dot{m}_f^{norm}} + (1-\gamma)\right) \cdot t_k \\
\end{aligned}
\end{equation}
where $\gamma$ is a tuning parameter that represents driver aggressiveness by penalizing the travel time, $\dot{m}_{f,k}$ is the fuel consumption rate, $\dot{m}_f^{norm}$ is a cost normalizing factor and $t_k$ is the per stage travel time.
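A direct transcription of this per stage cost (with an illustrative normalization) reads:

```python
def stage_cost(mdot_f, t_k, gamma=0.5, mdot_norm=1.0):
    """Weighted fuel/time stage cost g_k; gamma=1 weights fuel only,
    gamma=0 weights travel time only."""
    return (gamma * mdot_f / mdot_norm + (1.0 - gamma)) * t_k
```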
\section{Full-Route Optimization using DP-ECMS}
\label{sec::DPECMS_const_lambda_FR}
To solve the constrained optimization problem described in Section \ref{sec::motivation_contrib}, a custom DP-ECMS algorithm is developed and employed. The notation defined in Section \ref{sec::objective_problem} is used here as well, with frequent references to the DP and ECMS equations introduced in Section \ref{sec::motivation_contrib}.
This algorithm requires the state dynamics introduced in \eqref{eq::longitudinal_dyn_model}, \eqref{eq::battery_model} to be discretized and transformed to spatial domain. For each grid point $k$ along the route:
\begin{equation}
\label{eq::state_equations_DP}
\begin{aligned}
v_{k+1}^2 &= v_k^2 + 2 \Delta d_k \cdot \biggl(\frac{F_{trc,k} - F_{road,k}(v_k)}{M} \biggr) \\
\xi_{k+1} &= \xi_k - \frac{\Delta d_k}{\bar{v}_{k}}\cdot \frac{\bar{I}_{batt,k}}{C_{nom}}, \quad \forall k = 1,\dots,N-1
\end{aligned}
\end{equation}
where $\Delta d_k$ is the distance over a stage (i.e. $\Delta d_k = d_{k+1}-d_k$, with $d_k$ as the distance traveled along the route at grid point $k$) and $\bar{v}_k \left(= \frac{v_k + v_{k+1}}{2} \right)$ is the average velocity over one stage. In this formulation, the signal phase information of each traffic light is deterministically incorporated as part of the initialization process before the trip begins. Varying timing information (i.e. time in each phase) however, cannot be utilized in the full-route optimization routine.
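One spatial step of \eqref{eq::state_equations_DP} can be sketched as follows (the vehicle mass and battery capacity defaults are placeholder values):

```python
def spatial_step(v_k, soc_k, F_trc, F_road, i_bar, dd,
                 M=1600.0, C_nom=8.0 * 3600.0):
    """Advance v^2 and the battery SoC over one spatial stage of length dd."""
    v_sq_next = v_k**2 + 2.0 * dd * (F_trc - F_road) / M
    v_next = max(v_sq_next, 0.0) ** 0.5
    v_bar = 0.5 * (v_k + v_next)  # average velocity over the stage
    soc_next = soc_k - (dd / v_bar) * i_bar / C_nom
    return v_next, soc_next
```

The ratio $\Delta d_k / \bar{v}_k$ converts the spatial stage back into the stage travel time needed by the SoC dynamics.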
In this section, the DP-ECMS algorithm is introduced for the global optimization comprising $N$ discrete grid points. In this methodology, a constant equivalence factor is used over the entire driving mission. The constraints of the $N$-step optimization are:
\begin{equation}
\label{eq::prb_constr_opt}
\begin{aligned}
v_{k} &\in \left[v_{k}^{min}, v_{k}^{max} \right], \quad \forall k = 2, \dots, N \\
\xi_k &\in \left[\xi^{min}, \xi^{max} \right], \quad \forall k = 2, \dots, N \\
v_1 &= v_1^{min}, \quad \xi_1 \in \left[\xi^{min}, \xi^{max} \right] \\
a_k &\in \left[a^{min}, a^{max} \right], \quad \forall k = 1, \dots, N \\
T_{eng,k} &\in \left[T_{eng,k}^{min}\left(v_k \right), T_{eng,k}^{max}\left(v_k \right) \right], \forall k = 1, \dots, N-1 \\
T_{bsg,k} &\in \left[T_{bsg,k}^{min}\left(v_k \right), T_{bsg,k}^{max}\left(v_k \right) \right], \forall k = 1, \dots, N-1
\end{aligned}
\end{equation}
where $v_{k}^{min}$ and $v_{k}^{max}$ are the minimum and maximum route speed limits, respectively; $\xi^{min}$ and $\xi^{max}$ represent the static limits applied on the battery SoC variation; $T_{eng,k}^{min}$ and $T_{eng,k}^{max}$ are the state-dependent minimum and maximum engine torque limits, respectively, and $T_{bsg,k}^{min}$ and $T_{bsg,k}^{max}$ are the state-dependent minimum and maximum BSG torque limits, respectively. To ensure SoC-neutrality over the global optimization, a terminal constraint is applied on the battery SoC: $\xi_1 = \xi_N$. Further, dynamical constraints are imposed by the vehicle model dynamics, described in Equations \ref{eq::engine_model_T_eng_fuel} - \ref{eq::longitudinal_dyn_model}.
A key element in the DP-ECMS method is the mapping between the DP and the ECMS optimization problems. For a given powertrain torque (control input from DP), the ECMS yields the optimal torque split. Define at a grid point $k$, $\nu_k : u_k \to \bar{u}_k$ such that $\bar{u}_k^* = \nu_k^*(u_k)$ is the optimal BSG torque from the ECMS computed for each powertrain torque $u_k$. This map is used by the following DP problem, starting with terminal cost $J_N(x_N) = g_N(x_N)$:
\begin{equation}
\label{eq::DP_bellman_reformulated}
\begin{aligned}
J_k(x_k) = \min_{\mu_k(x_k)} \quad J_{k+1}(f_k(x_k,\mu_k(x_k), \nu_k^* \circ \mu_k(x_k))) &\\
+ g_k(x_k,\mu_k(x_k), \nu_k^* \circ \mu_k(x_k)), &\\
\forall k = 1,\dots,N-1 &
\end{aligned}
\end{equation}
where the $\circ$ operator denotes the composition of respective functions. In the model developed, the stage cost and cost-to-go functions are defined using the states and the torque split. As the control $u_k = \mu_k(x_k)$ contains only the powertrain torque, it is necessary to augment \eqref{eq::bellman} with $\nu_k^*(u_k)$ from the ECMS, that is obtained by solving the following instantaneous minimization problem at each stage $k$:
\begin{equation}
\label{eq::DPECMS_ECMS_eqn}
\begin{aligned}
\bar{J}_k^*(u_k) = \min_{\nu_k(u_k)} \quad &\dot{m}_{f,k}(x_k,\nu_k(u_k)) \\
&+ \lambda \cdot f_{pen,k}(x_k) \cdot \frac{P_{batt,k}^{des}(x_k,\nu_k(u_k))}{Q_{lhv}}
\end{aligned}
\end{equation}
Equations \ref{eq::DP_bellman_reformulated} and \ref{eq::DPECMS_ECMS_eqn} provide the optimal solution to the problem formulated in Section \ref{sec::problem_reformulation}. The DP-ECMS algorithm is compactly represented using the flowchart in Fig. \ref{fig::DPECMS_schematic}. At each grid point $k$ along the route, the DP explores all feasible combinations of the discretized state $x_k$ and control $u_k$ grids. At each discretized powertrain torque, the embedded ECMS algorithm determines the optimal torque split that is used in the DP cost-to-go recursion. Starting from the terminal stage (destination), the backward recursion yields the closed-loop optimal control policy as well as the corresponding optimal state trajectories over the full route.
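The nesting of the ECMS inside the DP recursion can be sketched as follows. This is a toy instance with a scalar state and placeholder model functions; the actual algorithm operates on the two-dimensional state grid of \eqref{eq::states_inputs_DP_ECMS}:

```python
def dp_ecms(N, x_grid, u_grid, lam, split, transition, stage_cost,
            terminal_cost):
    """Backward DP over powertrain torque; ECMS resolves the torque split."""
    J = {x: terminal_cost(x) for x in x_grid}
    for k in range(N - 1, 0, -1):
        J_new = {}
        for x in x_grid:
            best = float("inf")
            for u in u_grid:              # u: candidate powertrain torque
                u_bar = split(x, u, lam)  # embedded ECMS: optimal BSG torque
                x_proj = transition(x, u, u_bar)
                x_next = min(x_grid, key=lambda z: abs(z - x_proj))
                best = min(best, stage_cost(x, u, u_bar, lam) + J[x_next])
            J_new[x] = best
        J = J_new
    return J
```

Compared with a DP that grids both $T_{eng}$ and $T_{bsg}$, the inner loop here runs over a single control grid, which is where the reduction of the action space comes from.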
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figures/DPECMS_Schematic_1.pdf}
\caption{Schematic of the DP-ECMS algorithm for full-route optimization.}
\label{fig::DPECMS_schematic}
\end{figure}
As evident from the formulation, this approach utilizes a constant $\lambda$ over the entire driving mission. Additional calibration effort is necessary to appropriately select both $\lambda$ and the penalty function $f_{pen,k}$ that ensures boundedness of the SoC state:
\begin{equation}
\lambda \cdot f_{pen,k}(\xi_k) = \lambda_0 + \tan(-(\xi_k - \xi^{des})\cdot \lambda_1)
\end{equation}
where $\xi^{des}$ is the target value, commonly chosen as the initial SoC $\xi_1$, and $(\lambda_0,\lambda_1)$ are constant calibration terms. A $\tan(\cdot)$ function is chosen because its slope in the neighborhood of $\xi^{des}$ is small (tunable using $\lambda_1$), while the penalty grows steeply as the SoC approaches its bounds. The optimal value of $\lambda_0$ that results in a charge-sustaining strategy depends on the route characteristics. It is tuned using the shooting method: the full-route optimization (which contains the ECMS routine) is run with different values of $\lambda_0$ to iteratively compute its optimal value.
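The penalty shaping and the shooting iteration on $\lambda_0$ can be sketched as below. Here `run_trip` is a stand-in for the full-route optimization returning the final SoC, and monotonicity of the final SoC in $\lambda_0$ is assumed:

```python
import math

def equiv_factor(soc, lam0, lam1, soc_des):
    """lambda * f_pen: constant part plus the tan-shaped SoC penalty."""
    return lam0 + math.tan(-(soc - soc_des) * lam1)

def tune_lambda0(run_trip, lo, hi, soc_target, tol=1e-3, iters=40):
    """Shooting via bisection: find lam0 giving a charge-sustaining trip."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        soc_final = run_trip(mid)
        if abs(soc_final - soc_target) < tol:
            return mid
        # a larger lam0 makes battery energy more expensive, raising final SoC
        if soc_final < soc_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```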
\section{Look-Ahead Control using DP-ECMS}
\label{sec::DPECMS_grd_lambda_RH}
\begin{figure*}[!t]
\centering
\includegraphics[width=1.9\columnwidth]{Figures/DPECMS_Schematic_RH_1.pdf}
\caption{Schematic of the DP-ECMS algorithm for look-ahead control.}
\label{fig::DPECMS_RH_schematic}
\end{figure*}
To accommodate variability in route conditions and/or uncertainty in route information in a computationally tractable manner, a look-ahead control scheme using DP-ECMS is proposed. Here, the full-route horizon of $N$ steps is truncated to $N_H \ll N$ steps. The goal of the receding horizon optimization in this paper is to add the capability of adapting to short-term changes in route conditions while yielding results comparable to the full-route optimization in the absence of such uncertainties or variabilities. To achieve this within the DP-ECMS framework, two key challenges need to be addressed: the choice of terminal cost and constraints, and the determination of an appropriate $\lambda$ for the ECMS, particularly over short horizons.
The terminal cost of the receding horizon optimization problem is constructed using principles in Approximate DP, specifically techniques based on approximation in the value space. For some valid map $\nu_k$, consider the following $N_H$-step look-ahead control problem:
\begin{equation*}
\tilde{J}_{j+N_H}(x_{j+N_H}) = g_{j+N_H}(x_{j+N_H}), \quad \forall j = 1,\dots,N-N_H
\end{equation*}
\begin{equation}
\label{eq::DP_bellman_RH_approx}
\begin{aligned}
\tilde{J}_{k}(x_k) = \min_{\hat{\mu}_{k}(x_k)} \quad \tilde{J}_{k+1}(f_{k}(x_k,\hat{\mu}_{k}(x_k), \nu_{k}^* \circ \hat{\mu}_{k}(x_k))) &\\
+ g_{k}(x_k,\hat{\mu}_{k}(x_k), \nu_{k}^* \circ \hat{\mu}_{k}(x_k)), &\\
\quad \forall k = j,\dots,j+N_H-1 &
\end{aligned}
\end{equation}
where $\tilde{J}_{j+N_H}$ (and as a result $\tilde{J}_{k+1}$) is the cost-to-go of a known suboptimal policy (a base policy), and $\hat{\mathcal{M}}^* := \left(\hat{\mu}_j^*, \dots, \hat{\mu}_{j+N_H-1}^* \right)$ is the rollout policy. This type of formulation, which falls under the class of rollout algorithms, offers the desirable property of cost improvement: as long as the base policy produces a feasible solution, the rollout algorithm also produces a feasible solution, whose cost is no worse than that of the base policy (proof in \cite{bertsekas1995dynamic}).
In this work, the base policy is assigned as the optimal cost-to-go function from the full-route DP optimization. Approximating the terminal cost in \eqref{eq::DP_bellman_RH_approx} as the optimal cost-to-go function from the full-route optimization results in \eqref{eq::DP_bellman_reformulated}, a derivative of the Bellman equation. In this way, while providing policy improvement and adaptation to short-term variabilities, this approach also provides a closed-loop optimal policy that matches the full-route optimization if minimal or no variabilities are experienced en route.
The methodology prescribed in Section \ref{sec::DPECMS_const_lambda_FR} for the full-route (or a long horizon) optimization involves defining a terminal constraint on the battery SoC and tuning the $\lambda$ for charge-sustaining behavior. Applying the same methodology to shorter horizons results in a highly conservative torque split strategy. One of the inherent benefits from adopting the aforementioned rollout algorithm approach is that it eliminates the need to define terminal constraints on the battery SoC, while ensuring approximately charge-sustaining behavior over the entire trip.
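The cost-improvement property of the rollout scheme can be illustrated on a toy grid problem (hypothetical dynamics and costs, unrelated to the actual vehicle model): the base policy's cost-to-go is tabulated by a backward pass, and the one-step look-ahead policy built on it is never worse.

```python
N, NX = 6, 5                      # horizon length, number of state bins
U = (-1, 0, 1)
clip = lambda x: max(0, min(NX - 1, x))
g = lambda k, x, u: (x - 2) ** 2 + abs(u)   # toy stage cost, target bin 2
gN = lambda x: 10 * abs(x - 2)              # toy terminal cost

# Base policy: "do nothing" (u = 0); tabulate its cost-to-go J_base[k][x].
J_base = [[0] * NX for _ in range(N + 1)]
J_base[N] = [gN(x) for x in range(NX)]
for k in range(N - 1, -1, -1):
    for x in range(NX):
        J_base[k][x] = g(k, x, 0) + J_base[k + 1][clip(x)]

def rollout_cost(x0):
    """One-step look-ahead using the base-policy cost-to-go as terminal proxy."""
    x, total = x0, 0.0
    for k in range(N):
        u = min(U, key=lambda u: g(k, x, u) + J_base[k + 1][clip(x + u)])
        total += g(k, x, u)
        x = clip(x + u)
    return total + gN(x)

# Cost improvement: the rollout policy is never worse than the base policy.
assert all(rollout_cost(x0) <= J_base[0][x0] for x0 in range(NX))
```

In this toy instance the rollout policy steers the state toward the cheap bin immediately, so its cost can be far below that of the base policy while the guarantee above always holds.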
The DP-ECMS algorithm for look-ahead control is compactly represented using the flowchart in Fig. \ref{fig::DPECMS_RH_schematic}. In this approach, the optimal $\lambda$ for the ECMS algorithm is selected dynamically from a uniform $\lambda$ grid. For each value $\lambda_i$ in the grid, the following minimization problem is solved $\forall k = j,\dots,j+N_H-1, \quad \forall j = 1,\dots,N-N_H$:
\begin{equation}
\label{eq::DPECMS_grid}
\begin{aligned}
\bar{J}_{k,i}^*(u_k) = \min_{\nu_{k}(u_k)}& \quad \dot{m}_{f,k}(x_k,\nu_{k}(u_k)) \\
&+ \lambda_i \cdot f_{pen,k}(x_k) \cdot \frac{P_{batt,k}^{des}(x_k,\nu_{k}(u_k))}{Q_{lhv}} \\
& \qquad \qquad \qquad \qquad \qquad \forall i = 1,\dots,n_i
\end{aligned}
\end{equation}
where $n_i$ is the number of equally spaced $\lambda$ candidates considered. To further reduce the computational cost of this instantaneous minimization, note that the terms $\dot{m}_{f,k}(x_k,\nu_{k}(u_k))$ and $\frac{P_{batt,k}^{des}(x_k,\nu_{k}(u_k))}{Q_{lhv}}$ remain the same $\forall i$. These terms are computed only once, and the cost $\bar{J}_{k,i}^*(u_k)$ is then obtained as a linear combination of them, with only $\lambda_i$ changing for each $i$. The map $\nu_{k,i}^*(u_k)$ that defines the optimal torque split for each $u_k$ and $\lambda_i$ is used by the rollout algorithm below:
\begin{equation*}
\tilde{J}_{j+N_H}(x_{j+N_H}) = g_{j+N_H}(x_{j+N_H}), \quad \forall j = 1,\dots,N-N_H
\end{equation*}
\begin{equation}
\label{eq::DP_bellman_RH_DPECMS}
\begin{aligned}
\hat{J}_{k,i}(x_k) = \min_{\hat{\mu}_{k,i}(x_k)} \quad \hat{J}_{k+1,i}(f_{k,i}(x_k,\hat{\mu}_{k,i}(x_k), \nu_{k,i}^* \circ \hat{\mu}_{k,i}(x_k))) &\\
+ g_{k,i}(x_k,\hat{\mu}_{k,i}(x_k), \nu_{k,i}^* \circ \hat{\mu}_{k,i}(x_k)) &\\
\forall i = 1,\dots,n_i, \quad \forall k = j,\dots,j+N_H-1 &
\end{aligned}
\end{equation}
where $\left(\hat{\mu}_{j,i}^*, \dots, \hat{\mu}_{j+N_H-1,i}^* \right)$ is the rollout policy for each $\lambda_i$. This is subject to the same constraints defined in \eqref{eq::prb_constr_opt}, except that they are suitably truncated over the $N_H$ horizon. Finally, the $\lambda_i$ that results in the minimum approximate cost-to-go $\hat{J}_j(x_j)$ (defined below) is selected as the optimal equivalence factor for that receding horizon.
\begin{equation*}
\hat{J}_j(x_j) = \min \quad \{\hat{J}_{j,1}(x_j),\dots,\hat{J}_{j,n_i}(x_j) \}
\end{equation*}
The corresponding optimal control policy is applied as the torque split strategy. The highlights of this approach are that this structure still conforms to the assumptions made in the classical ECMS approach, in which the optimal $\lambda$ results in charge-sustaining behavior over the entire trip, and that the $\lambda$ exploration can be easily parallelized.
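The term-reuse trick behind the $\lambda$-grid evaluation can be sketched as follows; the candidate splits, cost models, and parameter values are illustrative placeholders, not the actual vehicle maps.

```python
def dp_ecms_grid(splits, m_f, p_batt, f_pen, q_lhv, lambdas):
    """For each equivalence-factor candidate lambda_i, pick the torque split
    minimizing  m_f + lambda_i * f_pen * P_batt / Q_lhv.  The two physical
    terms do not depend on lambda_i, so they are evaluated once and reused."""
    terms = [(m_f(s), p_batt(s) / q_lhv) for s in splits]    # computed once
    best = []
    for lam in lambdas:
        costs = [mf + lam * f_pen * pb for mf, pb in terms]  # linear combination
        i_min = min(range(len(splits)), key=costs.__getitem__)
        best.append((splits[i_min], costs[i_min]))
    return best

# Toy example: split s = electric fraction; fuel use falls and battery power
# rises with s.  A cheap equivalence factor favors electric, a costly one fuel.
best = dp_ecms_grid(splits=[0.0, 0.5, 1.0],
                    m_f=lambda s: 1.0 - s, p_batt=lambda s: s,
                    f_pen=1.0, q_lhv=1.0, lambdas=[0.5, 2.0])
# best -> [(1.0, 0.5), (0.0, 1.0)]
```

Because only the scalar $\lambda_i$ changes between grid points, the per-candidate work is a cheap linear combination, which also makes the $\lambda$ exploration easy to parallelize.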
\section{Introduction}
\label{intro}
Waves play an important role in the dynamics of the solar
atmosphere. They are believed to be carriers of photospheric energy
into the corona, leading to plasma heating. Waves are intensively
observed in the solar chromosphere and corona by SOHO (Solar and
Heliospheric Observatory) and TRACE (Transition Region and Coronal
Explorer) (Nakariakov \& Verwichte 2005). Dominant intensity
oscillations in the chromospheric network have a period of
$\sim$3~min. The 3-min oscillations are believed to be either
standing waves in a chromospheric cavity (Leibacher \& Stein 1981)
or propagating waves, which arise as the response of the atmosphere
to a general disturbance set at its base (Fleck \& Schmitz 1991).
Coupling of these oscillations to Alfv{\'e}n waves is a crucial
process in both cases, as Alfv{\'e}n waves may easily pass
through the transition region, taking the energy into the corona.
On the other hand, recent numerical (Rosenthal et al., 2002),
analytical (Zaqarashvili \& Roberts, 2006) and observational
(Muglach et al., 2005) studies point out the importance of the
$\beta\sim 1$ region of the solar atmosphere in wave coupling
phenomena.
Here we consider the 3-min oscillations as standing acoustic
waves oscillating along a uniform vertical magnetic field and study
their weakly nonlinear coupling to Alfv{\'e}n waves near the
chromospheric $\beta\sim 1$ regions.
\section[]{The wave coupling}
We use Cartesian coordinate system $(x,y,z)$, where spatially
inhomogeneous (along the $x$ axis) magnetic field is directed along
the $z$ axis, i.e. $B_0=(0,0,B_z(x))$. Unperturbed plasma pressure
$p_0(x)$ and density $\rho_0(x)$ are also assumed to vary along $x$.
In the equilibrium, magnetic and hydrodynamic pressures satisfy the
transverse balance, i.e. $p_0(x) + {{B^2_z(x)}/{8\pi}}=const$.
Plasma $\beta$ is defined as $\beta = {{8\pi p_0(x)}/{B^2_z(x)}}=
{{2c^2_s}/{\gamma v^2_A(x)}}$, where $c_s=\sqrt{\gamma p_0/\rho_0}$
and $v_A(x)=B_z/\sqrt{4 \pi \rho_0}$ are the sound and Alfv{\'e}n
speeds, respectively, and $\gamma$ is the ratio of specific heats. For
simplicity, the temperature is assumed to be homogeneous, so that the
sound speed does not depend on the $x$ coordinate.
We consider wave propagation along the $z$ axis (thus along the
magnetic field) and wave polarisation in the $yz$ plane. In this
case only sound and linearly polarised Alfv{\'e}n waves arise. The
velocity component of the sound wave is polarised along the $z$ axis and
the velocity component of the Alfv{\'e}n wave is polarised along the $y$
axis. The ideal MHD equations can then be written as
\begin{equation}
{{{\partial b_y}}\over {\partial t}} + u_z{{{\partial b_y}}\over
{\partial z}} = - b_y{{{\partial u_z}}\over {\partial z}} +
B_z(x){{{\partial u_y}}\over {\partial z}},
\end{equation}
\begin{equation}
{\rho}{{{\partial u_y}}\over {\partial t}} + {\rho}u_z{{{\partial
{u_y}}}\over {\partial z}} = {{B_z(x)}\over {4\pi}}{{\partial
b_y}\over {\partial z}},
\end{equation}
\begin{equation}
{{{\partial {\rho}}}\over {\partial t}}= - {\rho}{{{\partial
{u_z}}}\over {\partial z}} - u_z{{{\partial {\rho}}}\over {\partial
z}},
\end{equation}
\begin{equation}
{\rho}{{{\partial u_z}}\over {\partial t}} + {\rho}u_z{{{\partial
{u_z}}}\over {\partial z}} = - {{{\partial p}}\over {\partial z}} -
{{\partial}\over {\partial z}}{{b^2_y}\over {8\pi}},
\end{equation}
\begin{equation}
{{{\partial p}}\over {\partial t}} = - {\gamma}p{{{\partial
{u_z}}}\over {\partial z}} - u_z{{{\partial p}}\over {\partial z}},
\end{equation}
where $p=p_0 + p_1$ and $\rho={\rho}_0 + \rho_1$ denote the total
(unperturbed plus perturbed) hydrodynamic pressure and density,
$u_y$ and $u_z$ are velocity perturbations (of the Alfv{\'e}n and
sound waves, respectively), $b_y$ is the perturbation in the
magnetic field. Note, that the $x$ coordinate stands as a parameter
in these equations.
Eqs. (1)-(5) describe the fully nonlinear behavior of sound and
linearly polarized Alfv{\'e}n waves propagating along the magnetic
field. However, the sound waves may be trapped in a chromospheric
cavity between the photosphere and transition region, leading to
standing patterns. Therefore here we consider sound waves
oscillating along the $z$ axis and bounded at the points $z=0$ and $z=l$.
Thus we have the standing patterns
\begin{equation}
u_z=v(t)\sin(k_nz),\,\,\,\,\,\, \rho_1 = {\tilde
\rho}(t)\cos(k_nz),
\end{equation} where $k_n$ is the wavenumber of the sound wave such that
$ k_nl={{2\pi l}/{\lambda_n}}= n\pi$, so $ {l/{\lambda_n}}=n/2$,
where $n=1,2,\ldots$ denotes the order of the corresponding harmonic.
In bounded systems almost the whole oscillation energy is naturally
stored in the fundamental harmonic. Therefore here we
consider the first ($n=1$) harmonic of the acoustic oscillations;
however, the same can be applied to harmonics with arbitrary $n$.
Recently, Zaqarashvili and Roberts (2006) found that the harmonics
of acoustic and Alfv{\'e}n waves are coupled when the wavelength of
the acoustic wave is half that of the Alfv{\'e}n wave. Therefore we express
the Alfv{\'e}n wave components as
\begin{equation}
u_y=u(t)\sin(k_Az),\,\,\,\,\,\, b_y=b(t)\cos(k_Az),
\end{equation}
where $k_A$ is the wavenumber of the Alfv{\'e}n waves and the
condition $k_1=2k_A$ is satisfied.
Then the substitution of expressions (6)-(7) into Eqs. (1)-(5) and
averaging over $z$ in the interval $(0,l)$ leads to
\begin{equation}
{{{\partial b}}\over {\partial t}} = k_AB_0u + k_Avb,
\end{equation}
\begin{equation}
{{{\partial u}}\over {\partial t}} = - {{k_AB_0}\over {4\pi\rho_0}}b
- k_Auv,
\end{equation}
\begin{equation}
{{{\partial v}}\over {\partial t}} = {{k_1c^2_s}\over
{\rho_0}}{\tilde \rho} + {{k_A}\over {8\pi\rho_0}}b^2,
\end{equation}
\begin{equation}
{{{\partial {\tilde \rho}}}\over {\partial t}} = - \rho_0k_1v.
\end{equation}
Here Eqs. (10)-(11) describe the time evolution of the acoustic
oscillation forced by the ponderomotive force of Alfv{\'e}n waves, while
Eqs. (8)-(9) govern the dynamics of Alfv{\'e}n waves forced by
acoustic waves in a parametric way. It must be noted that the
coupling between sound and Alfv{\'e}n waves at $\beta \sim 1$ has
been recently studied by Zaqarashvili \& Roberts (2006). They
consider the general case of propagating waves and show an alternate
energy exchange between the waves during the propagation. Here we
consider the coupling between standing patterns of the waves, which
is a particular case of their results.
Substituting $u$ from Eq. (8) into Eq. (9) and neglecting all
third-order terms leads to a second-order differential equation
for the Alfv{\'e}n waves
\begin{equation}
{{{\partial^2 b}}\over {\partial t^2}} + k^2_Av^2_A\left [1 -
{{2\tilde \rho}\over {\rho_0}} \right ] b= 0.
\end{equation}
This equation reflects the parametric influence of the standing acoustic
wave through the density variation. The particular time
dependence of the density perturbation then determines the type of equation
and consequently its solutions. If the initial amplitude
of the Alfv{\'e}n waves is smaller than the amplitude of the acoustic waves,
then the term with $b^2$ in Eq. (10) can be neglected. Physically this
means that the back reaction of the Alfv{\'e}n waves due to the
ponderomotive force is small. The solution of Eqs. (10)-(11) is then
simply a harmonic function of time, $ {\tilde \rho} = \alpha \rho_0
\cos(\omega_1 t)$, where $\omega_1$ is the frequency of the first
harmonic of the standing acoustic wave and $\alpha>0$ is the relative
amplitude. Here we consider small-amplitude acoustic waves,
$\alpha \ll 1$, so the nonlinear steepening due to the generation of
higher harmonics is negligible. Substituting this
expression into Eq. (12) leads to the Mathieu equation
\begin{equation}
{{{\partial^2 b}}\over {\partial t^2}} + k^2_Av^2_A\left [1 -
{{2\alpha}}\cos(\omega_1 t) \right ] b= 0.
\end{equation}
The solution of this equation with frequency ${\omega_1}/2$
has an exponentially growing character; thus the main resonant
solution occurs when
\begin{equation}
{\omega_A}= v_Ak_A ={{\omega_1}\over 2},
\end{equation}
where $\omega_A$ is the frequency of the Alfv{\'e}n waves. Since
$k_A=k_1/2$, the resonance takes place when $v_A =c_s$. Because the
Alfv{\'e}n speed $v_A(x)=B_0(x)/\sqrt{4 \pi \rho_0(x)}$ is a
function of the $x$ coordinate, this relation will be satisfied
at particular locations along the $x$ axis. Therefore the acoustic
oscillations will be resonantly transformed into Alfv{\'e}n waves
near this layer. We call this region the {\it swing layer} (see a similar
consideration in Shergelashvili et al. 2005).
Under the resonant condition (14) the solution of Eq. (13) is
\begin{equation}
{b}(t)=b_0\exp{\left ({{\left |{\alpha}{\omega_1}\right |}\over 4}t
\right )}\left [{\cos}{{\omega_1}\over 2}t - {\sin}{{\omega_1}\over
2}t \right ],
\end{equation}
where $b_0= b(0)$.
The solution (15) has a resonant character within the frequency
interval ${\left |{\omega_A} - {{\omega_1}/ 2} \right |}<{\left
|{{\alpha \omega_1}/ 2} \right |}$. This expression can be rewritten
as ${\left |{{v_A}/ c_s} - 1 \right |}<{\left |{{\alpha }} \right
|}$. Thus the thickness of the resonant layer depends on the
acoustic wave amplitude. Therefore the acoustic oscillations are
converted into Alfv{\'e}n waves not only at the surface $v_A =c_s$
but in the whole region where $c_s \left (1 - {{\alpha }}\right )< v_A <
c_s \left (1 + {{\alpha }}\right )$. Hence the resonant layer can be
significantly wider for larger-amplitude acoustic oscillations.
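The exponential growth of solution (15) and the finite width of the resonance band can be checked by direct integration of Eq. (13). The sketch below (normalized units with $\omega_1=1$ and an illustrative $\alpha=0.1$) uses a hand-rolled fourth-order Runge-Kutta scheme:

```python
import math

def integrate_mathieu(omega_A, omega1=1.0, alpha=0.1, t_end=400.0, dt=0.01):
    """RK4 integration of  b'' + omega_A^2 (1 - 2 alpha cos(omega1 t)) b = 0
    with b(0) = 1, b'(0) = 0; returns the largest |b| reached."""
    def rhs(t, y):
        b, db = y
        return (db, -omega_A**2 * (1.0 - 2.0 * alpha * math.cos(omega1 * t)) * b)
    y, t, bmax = (1.0, 0.0), 0.0, 1.0
    for _ in range(int(t_end / dt)):
        k1 = rhs(t, y)
        k2 = rhs(t + dt/2, (y[0] + dt/2*k1[0], y[1] + dt/2*k1[1]))
        k3 = rhs(t + dt/2, (y[0] + dt/2*k2[0], y[1] + dt/2*k2[1]))
        k4 = rhs(t + dt, (y[0] + dt*k3[0], y[1] + dt*k3[1]))
        y = (y[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += dt
        bmax = max(bmax, abs(y[0]))
    return bmax

# Resonant case (omega_A = omega1/2) is amplified exponentially;
# a detuned case outside the band |omega_A - 1/2| < alpha/2 stays bounded.
grow = integrate_mathieu(omega_A=0.5)
flat = integrate_mathieu(omega_A=0.35)
```

For these parameters the resonant amplitude grows by several orders of magnitude over the integration time, consistent with the $\exp(|\alpha\omega_1|t/4)$ envelope of solution (15), while the detuned run stays of order unity.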
Note that the resonant Alfv{\'e}n waves expressed by Eqs. (7) and
(15) are standing patterns with the velocity node at the bottom
boundary ($z=0$) and antinode at the top boundary ($z=l$); the
wavelength of the Alfv{\'e}n waves is twice that of the acoustic
oscillations due to the condition $k_A=k_1/2$. Therefore the
oscillation of the magnetic field lines at the upper boundary may excite
waves in the external plasma, which carry energy away.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{fig1.eps}
\caption{Numerical simulations of wave conversion in the $c_s=v_A$ region. Upper panels
show the density and velocity components of the standing acoustic
oscillations, while lower panels show the magnetic field and
velocity components of the Alfv{\'e}n waves. The relative amplitude of
the acoustic oscillations is 0.1. Alfv{\'e}n waves with twice the
period of the acoustic oscillations show a rapid exponential increase in time.}
\end{center}
\label{FigVibStab}
\end{figure}
It must be noted that the Alfv{\'e}n waves may undergo a phase
mixing due to the $x$ dependence of the Alfv{\'e}n speed (Tsiklauri
\& Nakariakov 2002). However the aim of this paper is to show the
energy conversion from 3-min oscillations into Alfv{\'e}n waves
only, therefore we do not consider the effect of phase mixing here.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{fig2.eps}
\caption{The same as in Fig. 1, but for the $c_s=0.7v_A$ region. The amplitude of the Alfv{\'e}n waves (lower panels)
shows no increase in time, so there is no wave coupling in this region.}
\end{center}
\label{FigVibStab2}
\end{figure}
Numerical solutions of Eqs. (8)-(11) (here the back reaction of the
Alfv{\'e}n waves, i.e. the second term on the right-hand side of Eq.
(10), is again neglected) are presented in Figs. 1-2. Figure 1 shows
the wave dynamics in the $c_s=v_A$ region, and we see the rapid growth of
Alfv{\'e}n waves with twice the period of the acoustic oscillations.
By contrast, Fig. 2 shows the wave dynamics away from the resonant layer
(in the $c_s = 0.7v_A$ region), and we see no energy exchange between the
waves. Thus there is good agreement between the numerical results and the
analytical solutions.
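A minimal reproduction of this experiment (in normalized units with $\rho_0=1$, $c_s=1$, $k_1=1$, and $b$ measured in velocity units; parameter values illustrative) integrates Eqs. (8)-(11) with the back-reaction term dropped:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt*ki for yi, ki in zip(y, k3)])
    return [yi + dt/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def simulate(v_A, c_s=1.0, k1=1.0, alpha=0.1, b0=1e-3, t_end=300.0, dt=0.01):
    """Integrate Eqs. (8)-(11) (back reaction neglected) in normalized units
    (rho_0 = 1, b in Alfven-speed units); returns the max |b| reached."""
    kA = k1 / 2.0
    def rhs(t, y):
        b, u, v, rho = y
        return [kA * v_A * u + kA * v * b,     # Eq. (8)
                -kA * v_A * b - kA * u * v,    # Eq. (9)
                k1 * c_s**2 * rho,             # Eq. (10), b^2 term dropped
                -k1 * v]                       # Eq. (11)
    y, t, bmax = [b0, 0.0, 0.0, alpha], 0.0, abs(b0)
    for _ in range(int(t_end / dt)):
        y = rk4_step(rhs, t, y, dt)
        t += dt
        bmax = max(bmax, abs(y[0]))
    return bmax

resonant = simulate(v_A=1.0)   # c_s = v_A: Alfven amplitude grows
detuned  = simulate(v_A=0.7)   # away from the swing layer: no growth
```

As in Figs. 1-2, the Alfv{\'e}n amplitude is amplified by orders of magnitude only in the $v_A=c_s$ run, while the detuned run stays at the level of its small seed amplitude.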
It must be mentioned that the equilibrium used in this paper is
simplified, as gravitational stratification, which is important in
the solar chromosphere, is ignored. Stratification leads to a
Klein-Gordon equation for propagating waves with a cut-off for wave
frequencies (Roberts, 2004). The 3-min oscillations and Alfv{\'e}n
waves are above the cut-off, so they may propagate in the
chromosphere. Unfortunately, stratification greatly complicates the
mathematical description of the nonlinear coupling. On the other
hand, the plasma $\beta$ can be constant along a vertical magnetic tube
even in the case of stratification (Roberts, 2004); this is the case
when the sound and Alfv{\'e}n speeds vary with height but their ratio
remains constant. Therefore if the waves are coupled at the $\beta \sim
1$ region in an unstratified atmosphere, the same can be
expected in a stratified medium. The wave coupling in a
stratified atmosphere needs further detailed study, which is beyond the
scope of the present paper. Our goal is only to show that the 3-min
oscillations may transfer energy into incompressible Alfv{\'e}n
waves in the $\beta \sim 1$ region, as recently observed by
Muglach et al. (2005).
\section{Conclusions}
Here we show that acoustic oscillations trapped in a transversally
inhomogeneous medium can be resonantly absorbed by Alfv{\'e}n waves
near the layer where $v_A \approx c_s$. The spatial width of the layer
depends on the amplitude of the acoustic oscillations and can be
significantly wider for strong-amplitude oscillations.
We consider the observed 3-min oscillations as the standing
fundamental harmonic of acoustic waves trapped in the solar
chromospheric cavity with vertical magnetic field and show that
their nonlinear coupling to Alfv{\'e}n waves may take place near
$\beta \sim 1$ layer. The coupling may explain the recent
observational evidence of compressible wave absorption near the
$\beta\sim 1$ region of the lower solar atmosphere (Muglach et al.,
2005). The amplified Alfv{\'e}n waves, with a period of $\sim$6
min, may carry energy into the corona, where they may deposit the
energy back into density perturbations, leading to the observed intensity
variations in coronal spectral lines.
Thus the $\beta \sim 1$ layer may play the role of an {\it energy
channel} for otherwise trapped acoustic oscillations, guiding the
photospheric energy into the solar corona. Therefore the process of
wave coupling can be of importance for the coronal heating problem, but
requires further study, especially for a stratified atmosphere.
\section{Acknowledgements}
The work was supported by the grant of Georgian National Science
Foundation GNSF/ST06/4-098 and the NATO Reintegration Grant FEL.RIG
980755. A part of the work is supported by the ISSI International
Programme "Waves in the Solar Corona".
\section{Introduction}
\label{sec:introduction}
Setting goals for yourself in different aspects of your life and publicly announcing those goals is nothing new. All year long, our social media feeds are filled with various instances of month- or year-long fitness \cite{site:fitness_challenge,ehrlen2020shared}, coding \cite{site:coding_challenge}, or art \cite{site:art_challenge} challenges, where users announce they are going to be doing a certain activity for a duration of time or share their progress. Sharing of New Year's resolutions is also a common occurrence these days \cite{site:instagram_new_years}. Hashtags such as \#paintingchallenge and \#codingchallenge each have tens of thousands of posts on Instagram, showing many partake in such public advertisements of their goals and progress. But does the setting and announcement of goals on public platforms help or hurt your actual progress?\\
The media and numerous individuals \cite{site:should_you_talk_about_goals,site:dont_talk_about_goals,site:talk_about_your_goals,sivers2010keep} have weighed in with their thoughts on the effects of sharing your goals with others, with some supporting the act and some strictly against it. There are also many scientific studies on the matter, which we will thoroughly review in Section \ref{sec:related_work}. In this study, we will be focusing on a specific instance of this question: reading goals. To this end, we will be analyzing Goodreads.\\
Goodreads is a social media platform for readers. Users can form social connections and add their readings on the website. One feature of the website, which is of great interest to this study, is the "\textit{Goodreads Reading Challenge}". The challenges are yearly events where users can set a certain number of books they plan to read that year (known as "\textit{pledging}" on the platform) and monitor their progress throughout the year. Goodreads makes it a point for users to see their challenges by showing their progress on their homepage, as well as adding it to their profile. Additionally, users can view the pledged counts and advances of other participants.\\
In this study, we aim to take a closer look at these challenges and answer the following questions:
\begin{itemize}
\item How has challenge participation changed throughout the years? Have pledged counts and success rates changed since the feature was first introduced?
\item How do demographic variables influence reading habits?
\item Does challenge participation (and the public commitment to reading a certain number of books) increase the number of books read?
\end{itemize}
We further extend the study by looking at discussion of the topic on more mainstream social networks, such as Instagram and Twitter, to see how users feel about these challenges and what they share with people outside their reading community.\\
The rest of this paper is structured as follows: In Section \ref{sec:related_work}, a brief overview of related studies is presented. Next, our data collection methods and dataset statistics are discussed in Section \ref{sec:data}. Our results are presented in Section \ref{sec:results}, and finally Section \ref{sec:conclusion} concludes the paper.
\section{Related Work}
\label{sec:related_work}
We begin this section with a review of studies conducted on Goodreads. Then we briefly review studies on goal setting and sharing and the effects they could have on performance.\\
As Goodreads is primarily a platform for users to add and talk about the books they have read, a large proportion of the body of work on Goodreads is on the analysis of book reviews. In \cite{kousha2017goodreads}, the viability of using Goodreads as a means to assess book impact is put to the test. With a focus on academic books, the impact factor and Goodreads engagement of the books were compared. The authors report that the engagements on the website, while prone to manipulation, could be used to assess books' impact. Additionally, book reading behavior on Goodreads has been shown to predict how well a book will do regarding sales \cite{maity2017book}. The content of reviews is also extensively studied \cite{driscoll2019faraway,parksepp2019sentiment,reisler2019cognitive}. \cite{hajibayova2019investigation} investigates the linguistic features of reviews, demonstrating that user-generated reviews are unreliable as they are mostly positive and attempt to persuade others to read the book as well. \cite{shahsavari2020automated} use aggregate reviews as a means for story-graph creation and find their method to be quite accurate compared to the book's true network.\\
Moreover, some studies have looked at specific instances and events concerning Goodreads. For instance, its acquisition by Amazon \cite{albrechtslund2017negotiating}, or some policy changes made by the platform \cite{matthews2016professionals}. \\
Other studies have also investigated the social aspect of Goodreads by examining how users behave on the platform and who they form friendships with \cite{nakamura2013words,thelwall2017goodreads,thelwall2019reader}. \cite{thelwall2017goodreads} finds that men and women mostly have similar behaviors, with women usually adding more books and rating them less favorably. Another study reports significant gender differences in rating books of different genres, finding that users usually rate books by authors of their own gender more favorably \cite{thelwall2019reader}. \cite{sabri2020cross} is another study that takes advantage of the website's users' multinational nature to investigate cross-country reading preferences and the factors that influence these preferences. However, to the best of our knowledge, no studies have been conducted on Goodreads challenges.\\
Goals are defined as "\textit{Desired states that people seek to obtain, maintain, or avoid}" \cite{emmons1989personal,klein2008goal}. Goal-setting theory and goal-choice have been studied extensively, but most prominently in the context of organizations and work. Research has found that more challenging and specific goals result in higher levels of performance \cite{lunenburg2011goal,locke2006new}. Goal-choice is significantly affected by gender and self-esteem \cite{levy1991effects}. \\
Sharing your goals with others is often viewed through a couple of different lenses. The first is premature praise and the intention-behavior gap. Praise is defined as "\textit{positive evaluations made by a person of another's products, performances, or attributes, where the evaluator presumes the validity of the standards on which the evaluation is based}" \cite{kanouse1981semantics,delin1994praise}. \cite{haimovitz2011effects} explores how person and process praise could affect performance and reports that more generally, process praise increases motivation while personal praise decreases it. \cite{gollwitzer2009intentions} similarly finds that identity-related behavioral intentions that were noticed by others would result in less intense actions. However, since reading is not an identity-related behavior, the finding of \cite{gollwitzer2009intentions} might not apply, and the sharing of progress could potentially result in process praise, which could help improve performance.\\
Another view is that of accountability. Accountability is defined as "\textit{stewardship with responsibility for creation and use of resources and a public reckoning of how they are used}" \cite{hubbell2007quality}. A thorough review of the literature on accountability is available in \cite{lerner1999accounting}. While some studies have found accountability to help performance, others show that this is not always the case. Whether sharing improves performance appears to depend on whom you share your goals with \cite{klein2020goals,site:share_but_be_careful_with_who}: the group with whom you share your goal must be perceived by you to have a higher status for the sharing to be effective \cite{klein2020goals}. \\
Social attention is another matter to consider. Research has shown that people act differently when they know they could be observed compared to when they are alone \cite{herman2003effects,kurzban2007audience}. In more detail, studies have demonstrated that the performance of simple tasks is improved (in terms of speed and accuracy) in the presence of an audience \cite{zajonc1965social}, while the performance of complex tasks is worsened \cite{bond1983social}. \cite{steinmetz2017beyond} provides an in-depth review of studies on the topic. So whether we consider reading to be a simple or a complex task, the effects predicted by this theory would differ.
\section{Data Collection and Preparation}
\label{sec:data}
In this section, the data collection process is first described, then the steps we took to clean and extract various features from the data to prepare it for further analysis are explained.
\subsection{Challenges}
Annual Goodreads challenges are one of the features that help the Goodreads community define a goal, specifying how many books they want to read in the following year. This number is known as the "\textit{pledged}" number of books. These challenges begin every January and finish when the year comes to an end. Users can keep track of the number of books they read during this time, and every book they read will get them closer to their goal. The Goodreads reading challenge data and the associated books are the main datasets used in this research to help us understand how reading challenges affect users' reading habits. The exact features available in this dataset are depicted in Table \ref{tab:challenge}. The data is collected through Goodreads' public API \footnote{https://www.goodreads.com/api}. This dataset includes 5,523,896 instances of challenge data for 4,363,093 unique users and 289,078 books associated with these challenges. We query the API with random challenge identifiers to retrieve this data. Since identifiers are assigned incrementally, with higher numbers corresponding to newer years, we make sure to request numbers that are far apart. With this method, at least 25,549 challenge entries were retrieved for each year from 2011 to 2020. At the time of writing this paper, the users' challenges in 2020 were still in progress, so we omitted them from the dataset. Moreover, we excluded the information of challenges with more than 500 pledged books or more than 200 read books, as they tend to be outliers. Also, as Goodreads users are members of a reading community, it is logical for them to read at least one book while participating in a challenge; therefore, we did not consider challenges with 0 read books.
One possible explanation is that users with 0 read books did not know that they had to record the finish time of the corresponding books or update their read books, as Goodreads is not as popular as other social platforms such as Instagram or Twitter. 2,233,517 out of 5,523,896 entries were deleted for this reason; therefore, 3,254,382 challenges for 2,251,574 users remained. The information regarding the number of read and pledged books during these challenges can be seen in Table \ref{tab:challenge_pledged_read}. This information indicates that users tend to overestimate their reading abilities. \\
In addition to the data explained above, we collected the data from users' profiles indicating all the challenges they have participated in, including the number of pledged books and read books during each one. This data was then used in order to compare users' performance while participating in a yearly challenge versus the years they were not part of an annual challenge. This dataset has 10,649 entries belonging to 4,558 unique users; the users were selected randomly from the aforementioned pool of users. This data also contained 283 challenges with 0 read books, which were deleted due to the same reason mentioned above. \\
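The filtering criteria above can be summarized in a short routine; the field names below are illustrative, not the actual schema of the Goodreads API response.

```python
def clean_challenges(challenges):
    """Filter raw challenge entries per the criteria in this section:
    drop 2020 entries (still in progress), pledged > 500 or read > 200
    (outliers), and read == 0 (likely unrecorded finish dates)."""
    return [c for c in challenges
            if c["year"] != 2020
            and c["pledged"] <= 500
            and c["read"] <= 200
            and c["read"] > 0]

raw = [
    {"user": 1, "year": 2019, "pledged": 24, "read": 13},
    {"user": 2, "year": 2019, "pledged": 600, "read": 50},   # outlier pledge
    {"user": 3, "year": 2018, "pledged": 12, "read": 0},     # no books read
    {"user": 4, "year": 2020, "pledged": 30, "read": 10},    # still in progress
]
kept = clean_challenges(raw)   # only user 1 survives
```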
\begin{table*}[htbp]
\centering
\captionsetup{justification=centering}
\caption{Reading Challenge Information}
\label{tab:challenge}
\begin{tabular}{c c}
\hline
Feature&Description\\
\hline
Challenge ID&Unique identifier for a challenge \\
\hline
User ID&Unique identifier for the user associated with this challenge \\
\hline
Read Count&Number of books this user completely read in this challenge \\
\hline
Pledged Count&Number of books this user planned to read in this challenge \\
\hline
Year & The year this challenge took place \\
\hline
Book IDs& A list of unique identifiers for the books read in this challenge\\
\hline
\end{tabular}
\end{table*}
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{Reading Challenge Pledged and Read Counts}
\label{tab:challenge_pledged_read}
\begin{tabular}{c c c}
\hline
&Pledged&Read\\
\hline
mean&36.59&23.30\\ \hline
median&25.0&13.0\\ \hline
standard deviation&34.13&29.25\\
\hline
\end{tabular}
\end{table}
\subsection{Users and Books}
To analyze demographic features and how such features affect users' participation and reading habits, users' personal information is needed. For this purpose, Goodreads' public API was used again. In order to use the API, user IDs were extracted from corresponding challenges and used to retrieve the aforementioned data. Moreover, books' information, such as their format, is needed for further analysis. This information can also be retrieved similarly. User information and book information dataset columns are shown in Table \ref{tab:user} and Table \ref{tab:book}, respectively.
In total, 35,695 instances of user information were retrieved in the Users dataset, among which only 186 users' \textit{gender}, 1,977 users' \textit{age}, and 5,735 users' \textit{location} were initially available. This should not be confused with the Challenges dataset. As previously mentioned, 3,254,382 instances of challenge data (after deleting challenges with 0 read books) for 2,251,574 unique users were retrieved, but we do not have these users' personal information; we only know that these challenges belonged to 2,251,574 unique users based on their \textit{user ID}. Since only a small number of users' \textit{gender}, \textit{age}, and \textit{location} were initially available, these features for the rest of the 35,695 users were extracted by analyzing their profile pictures and personal details. In particular, for extracting \textit{countries}, both the \textit{location} and \textit{about} features were analyzed, and the user's country, city, or state name was retrieved when available using the \textit{geotext}\footnote{https://pypi.org/project/geotext/} and \textit{pycountry}\footnote{https://pypi.org/project/pycountry/} libraries. This data was then checked manually, and the required corrections were made. Gender detection was done using each user's \textit{name}, \textit{about}, and \textit{image} columns. Moreover, age was detected from the \textit{about} column by finding age keywords such as \textit{age}, \textit{years}, \textit{y/o}, and \textit{year}, and was then categorized into ranges. After doing so, users with ages less than 9 or more than 100 were deleted from the dataset, as such values are unlikely to be real. In total, 10\% of users' \textit{age} values were detected; the others had not mentioned anything regarding their age. \\
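The age-keyword step described above can be sketched with a small regular expression. The exact patterns and boundary handling the authors used are not specified, so the pattern and the 9--100 cutoff applied below are illustrative assumptions:

```python
import re

# Hypothetical pattern covering the keywords named in the text
# ("age", "years", "y/o"); the authors' actual rules may differ.
AGE_PATTERN = re.compile(
    r"\b(\d{1,3})\s*(?:years?\s*old|y/o|yo)\b|\bage[:\s]+(\d{1,3})\b",
    re.IGNORECASE,
)

def extract_age(about_text):
    """Return a plausible age found in a profile's 'about' text, else None."""
    m = AGE_PATTERN.search(about_text)
    if not m:
        return None
    age = int(m.group(1) or m.group(2))
    # ages outside a plausible range are treated as noise, as in the paper
    return age if 9 <= age <= 100 else None
```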
There is also one more dataset that shows which users have read which books, regardless of their participation in reading challenges. This data is then used to compare users' reading habits while they are not participating in any challenges as opposed to the time they are. The columns for this dataset are depicted in Table \ref{tab:userBook}.
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{User Information}
\label{tab:user}
\begin{tabular}{c c}
\hline
Column&Description\\
\hline
User ID&Unique identifier for a user \\ \hline
Name&Name of this user \\ \hline
Image&Profile picture of this user \\ \hline
Location&User's location - can be country, city, or state \\ \hline
Gender&Specified as male, female, or unknown \\ \hline
Age&User's age, between 9 and 110 \\ \hline
About&User's "\textit{about me}" information
\\
\hline
\end{tabular}
\end{table}
\begin{table*}[htbp]
\centering
\captionsetup{justification=centering}
\caption{Book Information}
\label{tab:book}
\begin{tabular}{c c}
\hline
Column&Description\\
\hline
Book ID & Unique identifier for a book\\ \hline
Name & Name of this book\\ \hline
Format & Format which can be Hardcover, Paperback, Audio, etc.\\ \hline
Number of Pages & This book's page count\\
\hline
\end{tabular}
\end{table*}
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{User-Book Information}
\label{tab:userBook}
\begin{tabular}{c c}
\hline
Column&Description\\
\hline
User ID & Unique identifier for a user\\ \hline
Book ID & Unique identifier for a book this user read \\ \hline
Read At & Date the book was marked as read\\ \hline
Read Count & Number of times the user read the book \\
\hline
\end{tabular}
\end{table}
\subsection{Twitter and Instagram}
To analyze how people talk about their reading challenges outside their reading community, we collected Instagram posts and tweets containing the \#readingChallenge and \#goodreadsChallenge hashtags. These hashtags were used in different years, and therefore the posts belonged to various years. For Instagram specifically, we also searched for hashtags that included the year, such as \#2015goodreadsChallenge and \#2015readingChallenge. This was not done for Twitter, as we had enough data for each year. Furthermore, since our analysis was based on the distribution of these posts across months, we did not necessarily need data for all years. In total, 418,913 Instagram posts and 48,320 tweets were collected. By manually studying these posts and tweets, we found that they contained book reviews and challenge updates. Using the timestamp of each entry, the month it was posted was extracted, and from the text or caption, the sentiment of each was retrieved using the \textit{TextBlob} library\footnote{https://textblob.readthedocs.io/en/dev/}. Texts with sentiment between -0.1 and 0.1 were considered neutral, whereas sentiments between -1 and -0.1 and between 0.1 and 1 were considered negative and positive, respectively. Several examples of tweets and post captions are shown in Table \ref{tab:sentiment_example}. This data was then used to analyze the distribution of posts across the months of the year; the results are reported in the following sections.
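The polarity thresholds described above map to labels as follows. TextBlob supplies the polarity score itself; only the thresholding is shown here, with the handling of the exact boundary values being our assumption:

```python
def sentiment_label(polarity):
    """Map a TextBlob-style polarity score in [-1, 1] to a sentiment label."""
    if polarity < -0.1:
        return "negative"
    if polarity > 0.1:
        return "positive"
    return "neutral"  # scores in the [-0.1, 0.1] band
```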
\begin{table*}[htbp]
\centering
\captionsetup{justification=centering}
\caption{Posts and Tweets' Sentiment Examples}
\label{tab:sentiment_example}
\begin{tabular}{c c}
\hline
Text&Sentiment\\
\hline
I woke up with a cold!! Oh no!!! At least it got me out of work so now I can work on the& -0.65\\
\#YASavesChallenge \#readingchallenge \\ \hline
October is the perfect month to read a book that takes place at night! Check out our recommendations & 0.75 \\
to fulfill this reading challenge prompt. Our picks are seasonally appropriate with witches \\and vampires! \#booklist \#readingchallenge \#bookclubbelles\\ \hline
I have read and reviewed 6 books from my TBR list. I am woefully behind schedule when it comes & -0.5\\
to meeting my Goodreads 2020 challenge, but I'm alright with it because these reads were fantast-\\
ic! \#ReadingChallenge2020 \#readersofinstagram \#romancereadersofinstagram \#wonderfulbook\\ \#AmReadingRomance \#slowreader \#amreading \#readingromance \\ \hline
"The Perfect Girlfriend" is a pretty good read so far. I usually only say this if I'm 50 pages & 0.45 \\
in but at 31 pages it's got a pace set. So far, so good. \#readingtime \#readingchallenge2020 \#bookworm \\
\hline
\end{tabular}
\end{table*}
\section{Results}
\label{sec:results}
\subsection{Throughout the years}
Our goal is to find how reading habits have changed through the years and whether people read less or more compared to the past. To this end, we tried to find the trend in the read and pledged counts during challenges. By studying the average counts of pledged and read books, we found that the average pledged count is 36.59 per challenge; since each challenge corresponds to a year, this means pledging 36.59 books every year, or 3.04 books every month. In contrast, the average number of books read during challenges is 23.30, which means finishing 1.94 books per month. This gap shows that people tend to aim higher than they read. It is crucial to bear in mind that this number of read books is likely higher within a reading community and cannot be generalized to the entire society. For instance, according to one report \cite{site:average_number_of_read_books}, a person reads 12 books a year on average, or in other words, one book a month.\\
We also wanted to see how this overestimation has changed throughout the years. In order to find out, we grouped the challenge data by their year and calculated the median and mean pledged and read count for each year separately. The difference between these two counts can be seen in Figure \ref{fig:pledged_read_years}.
Based on these figures, the difference between the pledged count and read count has mainly decreased, and people tend to better estimate their abilities compared to the past. The reason to plot both the mean and the median is that the median is less sensitive to outliers. It is also apparent that people are generally reading and pledging less than before.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{Figures/Fig1.png}
\captionsetup{justification=centering}
\caption{The left-side plots show the median (top) and mean (bottom) count of pledged and read books for each year from 2011 to 2019. The right plots show the difference between the two for each year.}
\label{fig:pledged_read_years}
\end{figure}
\subsection{Do challenges make people read more?}
In this part, we used the data collected from users' profiles containing all the challenges they had participated in. Each book that these users had read during different years was also retrieved and collected in a separate dataset. This way, we could find out how many books these users had read through the years, along with which years they were participating in an annual challenge. After calculating and comparing the average number of books each user reads in years with and without participation in a yearly challenge, we found that 81\% of people read more on average while being part of a reading challenge. This analysis covers only the users for whom we have reading records both in years they were participating in challenges and in years they were not, namely 787 unique users. Users read 298\% more books on average while participating in a challenge than when they are not. In order to find out whether this difference is significant, we performed a hypothesis test: $H_0: \mu_1 - \mu_2 = 0$ and $H_A: \mu_1 - \mu_2 \ne 0$, where $\mu_1$ and $\mu_2$ are the average numbers of books read during challenges and outside them, respectively.
This test's p-value is effectively zero, which shows that the observed difference is statistically significant and that people read more books while taking part in yearly challenges. The reason could be that users feel more motivated to reach their goal when publicly announcing it, or are more likely to update their Goodreads profile and reading progress while participating in a challenge.
\\
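The mean-difference test above can be sketched as follows. The paper does not state the exact test statistic used, so this sketch applies an unpaired large-sample z approximation, and the reading counts below are made up for illustration:

```python
import math
from statistics import mean, stdev

def mean_diff_pvalue(x, y):
    """Two-sided p-value for H0: mu_x - mu_y = 0 (unpaired, large-sample z)."""
    z = (mean(x) - mean(y)) / math.sqrt(
        stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    # two-sided tail probability: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))
```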
Another interesting result is the number of users who have read a specific number of books in a challenge. This data is depicted in Figure \ref{fig:zipf_vs_powerlaw}. Before plotting this histogram, we assumed that it would follow a normal distribution, as it corresponds to human performance. However, the result showed a completely different distribution. We tried to fit power law and Zipf distributions, which had sums of squared errors of 0.070 and 0.0027, respectively.
\\
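The distribution comparison above can be reproduced schematically. This sketch fits a finite (truncated) Zipf form by a grid search over the exponent and scores it by the sum of squared errors; the exact fitting procedure is not given in the paper, and the counts used in the test are synthetic:

```python
def zipf_pmf(s, kmax):
    """Truncated Zipf pmf with exponent s over support 1..kmax."""
    weights = [k ** -s for k in range(1, kmax + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def best_zipf_sse(empirical_counts):
    """Grid-search the exponent s in 0.5..3.0; return (best_s, sse)."""
    kmax = len(empirical_counts)
    total = sum(empirical_counts)
    freqs = [c / total for c in empirical_counts]
    best = None
    for i in range(5, 31):
        s = i / 10
        pmf = zipf_pmf(s, kmax)
        sse = sum((f - p) ** 2 for f, p in zip(freqs, pmf))
        if best is None or sse < best[1]:
            best = (s, sse)
    return best
```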
\begin{figure}[ht]
\centering
\includegraphics[scale=0.25]{Figures/Fig2.png}
\captionsetup{justification=centering}
\caption{Power Law vs.\ Zipf Distributions for the Read-Count Histogram}
\label{fig:zipf_vs_powerlaw}
\end{figure}
\subsection{Are demographic variables influential in reading habits?}
This section investigates whether people's demographic identities, including gender and place of residence, significantly influence their reading habits and preferences. \\
The questions below were studied in this section:
\begin{itemize}
\item What are the reading challenge success rates in different countries?
\item Is gender statistically significant in whether people read audiobooks or not?
\item Is gender statistically significant in succeeding at Goodreads reading challenges?
\end{itemize}
After extracting the country names for users based on their location and personal details, each country's success rate was calculated. To avoid bias, countries with fewer than 100 challenges were not considered in this calculation. After the filtering, 27 countries were left. The success rate for these countries is shown in Figure~\ref{fig:success_countries}. The most successful countries were Poland, Portugal, and Italy, with success rates of 0.4656, 0.4552, and 0.4522, respectively. Furthermore, the number of challenges in each of these countries is shown in Figure \ref{fig:count_countries}. This figure shows how much of our data belonged to each country. The top three countries are the United States, Canada, and the United Kingdom, with 3557, 888, and 790 challenges, respectively. \\
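The per-country computation with the minimum-count filter can be sketched as follows. Treating a challenge as successful when the read count reaches the pledged count is our assumption of the paper's criterion, and the records in the test are made up:

```python
from collections import defaultdict

def country_success_rates(records, min_challenges=100):
    """records: iterable of (country, pledged, read) tuples."""
    totals, successes = defaultdict(int), defaultdict(int)
    for country, pledged, read in records:
        totals[country] += 1
        if read >= pledged:  # assumed success criterion
            successes[country] += 1
    # drop countries below the minimum challenge count to avoid bias
    return {c: successes[c] / totals[c]
            for c in totals if totals[c] >= min_challenges}
```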
Another question that we address here is whether gender has a significant role in users' tendency to read audiobooks during challenges. To find the gender of each user in the dataset containing users and their books, the \textit{gender-guesser}\footnote{https://pypi.org/project/gender-guesser/} library was used; this dataset was created by performing an inner join on the challenge dataset and the books read during these challenges. There were 124,263 male and 575,406 female users in this data, and 303,245 users' gender could not be detected. A two-proportion hypothesis test is used to investigate whether gender has a significant role in choosing audiobooks over other book formats during challenges (see Table \ref{tab:audiobook_gender}).
Based on the results of a hypothesis test where $H_0: p_{male} - p_{female} = 0$ and $H_1: p_{male} - p_{female} > 0$, the one-tailed p-value is $< 0.00001$, and the result is significant at the 0.05 significance level. We can therefore conclude that male users are more likely to choose audiobooks over other forms of books during challenges.\\
The other question was whether gender is a significant factor in the reading challenge success rate (see Table \ref{tab:success_gender}).
Based on a hypothesis test where $H_0: p_{female} - p_{male} = 0$ and $H_1: p_{female} - p_{male} > 0$, the p-value is $< 0.00001$, and the result is significant at the 0.05 significance level. More specifically, considering the count of challenge participants of each gender and their success rates, it can be concluded that gender has a significant influence on Goodreads users' success in annual reading challenges, and that women are more successful in these challenges.\\ \\
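The two-proportion test for audiobook usage can be reproduced directly from the counts in Table \ref{tab:audiobook_gender} (male: 1731 of 124,263; female: 6424 of 575,406). The pooled-variance form used below is a standard choice; the paper does not state which variant it used:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """One-tailed two-proportion z-test for H1: p1 - p2 > 0."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_one_tailed = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z)
    return z, p_one_tailed

# counts from Table 5: audiobook readers out of all users per gender
z, p = two_prop_z(1731, 124263, 6424, 575406)
```

With these counts the statistic is strongly positive, consistent with the paper's reported p-value below 0.00001.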
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{People Reading Audiobooks}
\label{tab:audiobook_gender}
\begin{tabular}{c c c}
\hline
Gender&Audiobook reader count&Total count\\
\hline
Male & 1,731 & 124,263 \\ \hline
Female & 6,424 & 575,406 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{People Success Count}
\label{tab:success_gender}
\begin{tabular}{c c c}
\hline
Gender&Successful count&Total count\\
\hline
Male & 154,010 & 533,153 \\ \hline
Female & 424,566 & 1,394,773 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[ht] \centering
\includegraphics[scale=0.255]{Figures/Fig3.png}
\captionsetup{justification=centering}
\caption{Success Rate in Countries}
\label{fig:success_countries}
\end{figure}
\begin{figure}[ht] \centering
\includegraphics[scale=0.255]{Figures/Fig4.png}
\captionsetup{justification=centering}
\caption{Challenge Count in Countries}
\label{fig:count_countries}
\end{figure}
\subsection{Other social networks}
We analyzed the number of reading challenge tweets and posts across the months of the year. Generally, people announce their goals at the beginning of the year, as these reading challenges are annual events and people usually set such goals as new year's resolutions. For instance, 19\% of Instagram posts and 22\% of tweets regarding reading challenges were posted in January. Posting then declines month by month until December, when the challenge nears its end and a sudden increase in the number of posts is observed. The numbers of reading challenge Instagram posts and tweets per month are depicted in Figure \ref{fig:insta_count_months} and Figure \ref{fig:twitter_count_months}, respectively, confirming the pattern described above.
The sentiment analysis results show that users' posts regarding reading challenges are rarely negative. Users might merely post a report on their progress, express their feelings about a specific book, or describe how successful they are in their challenge. They might also express their disappointment at falling behind during the challenge. Some users also post book reviews with their opinions, which can be positive or negative. The average sentiment per month on Instagram and Twitter is shown in Figure \ref{fig:insta_sentiment_months} and Figure \ref{fig:twitter_sentiment_months}, respectively. On Instagram, the general attitude towards reading challenges is similar across months, while the Twitter data shows some fluctuations in the tweets' sentiments. However, the difference between the highest and lowest monthly average sentiments is merely 0.123, and therefore we can assume that the general attitude is similar.
The percentages of posts with negative and positive sentiments on the two social platforms are almost identical, as depicted in Table \ref{tab:sentiment}. This shows that people on both platforms have similar attitudes toward their reading challenges.
\begin{figure}[ht]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.25]{Figures/Fig5.png}
\caption{Instagram Posts Count through Months}
\label{fig:insta_count_months}
\end{figure}
\begin{figure}[ht]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.25]{Figures/Fig6.png}
\caption{Tweets Count through Months}
\label{fig:twitter_count_months}
\end{figure}
\begin{figure}[ht]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.25]{Figures/Fig7.png}
\caption{Average Sentiment of Instagram Posts through Months}
\label{fig:insta_sentiment_months}
\end{figure}
\begin{figure}[ht]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.25]{Figures/Fig8.png}
\caption{Average Sentiment of Tweets through Months}
\label{fig:twitter_sentiment_months}
\end{figure}
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{Sentiment Percentage}
\label{tab:sentiment}
\begin{tabular}{c c c c}
\hline
Platform&Positive&Negative&Neutral\\
\hline
Instagram &50.54\% & 4.77\%&44.67\% \\ \hline
Twitter & 47.74\% & 4.64\%&47.60\% \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this study, we examined the effects of public reading challenges on how much people read. Specifically, we showed that people are significantly more likely to read more once they have taken part in a challenge, in comparison to their normal performance. We further showed that gender is a significant factor in how well people perform in their challenges. \\
One limitation of our study should be noted. This analysis was conducted on Goodreads, and while the selection process was completely random, the website itself tends to attract mostly avid readers. Hence, the sample consists of people with a keen interest in reading, which introduces bias into our analysis.
\section{Declarations}
\subsection{Funding}
Not applicable / No funding was received.
\subsection{Conflicts of interest}
The authors declare that they have no competing interests.
\subsection{Availability of data and material}
Although all data was collected from public accounts, we have decided not to make this data publicly available due to the availability of user information in our data.
\subsection{Code availability}
Not applicable
\bibliographystyle{spmpsci}
More recent development of QA systems \cite{song2018exploring, de2018question, zhong2019coarse} has started to focus on multi-hop reasoning on text passages, aiming to propose more sophisticated models beyond the shallow matching between questions and answers. Multi-hop reasoning requires the ability to gather information from multiple different passages to correctly answer the question, and generally the task would be unsolvable by using only similarities between the question and answer.
Recent multi-hop QA datasets, such as WikiHop~\cite{welbl2018constructing}, ComplexWebQuestions~\cite{talmor2018repartitioning}, and HotpotQA \cite{yang2018hotpotqa}, have accelerated the rapid progress of QA models for multi-hop reasoning problems.
There have been several reading comprehension models proposed to address the problem. Some methods \cite{yang2018hotpotqa,zhong2019coarse} rely on cross-attention among the question and evidence passages. BERT \cite{devlin2018bert} is one successful model of such an approach.
Moreover, a substantial number of query reformulation approaches \cite{weston2014memory,wu2016ask, shen2017reasonet,das2018multistep} have been proposed. Most of these methods adopt a soft version of reformulation, i.e., modifying the question embeddings based on the attention computed at each reasoning step. Similarly, some hard query reformulation approaches \cite{buck2018ask} propose to rewrite the question in the original language space. These methods provide more transparency into the reasoning process. However, their performance usually lags behind their soft counterparts when no supervision on re-writing is available.
This paper aims to investigate the following two questions for multi-hop reasoning QA systems:
\textbf{\emph{Do existing models indeed have the multi-hop reasoning ability?}}
To answer this question, we design a dataset with chains of passages ordered by the ground-truth reasoning path. Then we conduct the comparisons between two settings: (1) training and evaluating the models with the correct ordering of the passage chains~(\textbf{the ordered-oracle setting}); (2) training and evaluating the models with only the single passage that contain the answer~(\textbf{the single-oracle setting}).
We hypothesize that if the dataset indeed requires multi-hop reasoning and if a model could conduct multi-hop reasoning, it should perform significantly better in the first setting.
However, we discovered that, for all the existing multi-hop reading comprehension models, the performance improvement with the ordered passages is rather limited, with the highest F1 improvement, from BERT, being only 1.29\%.
\textbf{\emph{Is it beneficial to explore the usage of the reasoning chains?}}
To answer this question, we try to find a reader model that could indeed make better use of the ordered passage information to improve performance.
Inspired by the recent progress on the co-matching approaches for answer option selection \cite{wang2018co, zhang2019dual}, we propose to adopt a similar idea for multi-hop question answering. We extend both the HotpotReader \cite{yang2018hotpotqa} and the BERT model~\cite{devlin2018bert} with co-matching and observe 3.88\% and 2.91\% F1 improvement in the ordered-oracle setting over the single-oracle setting.
These results confirm that the utilization of passage chains is important for multi-hop question answering, and there is untapped potential of designing new models that could perform ``real'' multi-hop reasoning.
\section{Analysis Methods} \label{analysis-method}
The goal of this analysis is to validate each model's multi-hop reasoning ability by a specifically designed dataset with three comprehensive experiment settings.
\subsection{Dataset}
We conduct the analysis over the recently released multi-hop QA dataset HotpotQA~\cite{yang2018hotpotqa}.
We created a new empirical setting based on the HotpotQA distractor setting: for each question-answer pair, two supporting passages, sufficient for answering the question, are labeled by human annotators.
We release the data of our analysis setting, to make our results comparable for future works.\footnote{\url{https://gofile.io/?c=FDsda1}.}
There have been several multi-hop QA datasets released, but none of them has ground-truth reasoning chains annotated. The reason we choose HotpotQA is that the provided supporting passages serve as a good starting point for identifying the approximately correct reasoning chain of passages, based on the heuristics described below.\footnote{HotpotQA also contains a subset of \emph{comparison} questions, which aim to select between two options by comparing a property of theirs queried by the question, e.g., \emph{Did LostAlone and Guster have the same number of members?}. These are not typical multi-hop questions from the perspective of deductive reasoning, so in this analysis we focus on non-comparison questions.}
The key idea to recover the reasoning chain is that the chain must end at a passage that contains the answer.
Specifically, consider a question-answer pair $(q, a)$ and its two supporting passages\footnote{This heuristic only works for chains of length 2. To investigate longer chains, more complex rules are required to deal with noise in distant supervision. Popular datasets generally do not require more than 2 hops to answer questions correctly; for example, all the questions in HotpotQA have no more than 2 hops. We thus leave this to future work.} $p_0$ and $p_1$. Each passage $p_i$ is the abstract paragraph of a Wikipedia page, thus corresponding to a topic entity $e_i$ that is the title of the page.
To determine the reasoning chain of passages, we have the following steps:
\noindent$\bullet$ We first check whether the answer $a$ appears in any of the passages. If there is only one passage $p_i$ containing the answer, then we have a reasoning chain with $p_i$ as the final link of the chain, i.e., $p_{1-i} \rightarrow p_i$.
\noindent$\bullet$ If both passages contain $a$, then we use the following rule to determine the order: we check whether topic entity $e_i$ appears in $p_{1-i}$. If true, we have the chain $p_{1-i} \rightarrow p_i$. If there are still multiple matches, we simply discard the question.
For a chain $p_i \rightarrow p_j$, we denote the first passage as the \textbf{context passage} and the second as the \textbf{answer passage}.
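The two steps above can be sketched as a single function. Plain substring containment stands in for the matching used by the authors, whose exact matching rules are not specified:

```python
def order_chain(answer, p0, p1, e0, e1):
    """Return (context_idx, answer_idx) for the chain, or None if discarded."""
    in0, in1 = answer in p0, answer in p1
    if in0 and not in1:
        return (1, 0)      # chain p1 -> p0
    if in1 and not in0:
        return (0, 1)      # chain p0 -> p1
    if in0 and in1:
        # both passages contain the answer: break the tie with topic entities
        if (e1 in p0) and not (e0 in p1):
            return (0, 1)  # p0 mentions e1, so p0 -> p1
        if (e0 in p1) and not (e1 in p0):
            return (1, 0)
    return None            # answer absent or still ambiguous: discard
```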
\subsection{Analytical Method for the Ability of Multi-Hop Reasoning}
\label{ssec:analysis_method}
Based on the aforementioned dataset, we propose a systematical approach to assess the multi-hop reasoning ability of different QA models. We design three experiment settings for different passage chain compositions.
\noindent $\bullet$ \textbf{Single-Oracle}, similar to the conventional QA setting, in which only the question and the answer passage are provided while any context passages are omitted.
\noindent $\bullet$ \textbf{Ordered-Oracle}, in which the question and the extracted context and answer passages are provided in the correct order.
\noindent $\bullet$ \textbf{Random}, similar to \textbf{Ordered-Oracle} but with the passages randomly ordered.
Based on the three settings,\footnote{Please note that both the Single-Oracle and the Ordered-Oracle settings are not valid realizations of the full task, since they require a-priori knowledge of the answers. The settings are used in this paper for analysis purposes only.} we conduct the following analyses, each of which answers a research question related to the multi-hop ability of the reading comprehension models:
First, we evaluate existing models on these settings, to answer the question \textbf{\emph{Q1: whether the existing models have the multi-hop reasoning ability}}. To answer the question, we mainly look at the gap between \emph{Single-Oracle} and \emph{Ordered-Oracle}.
A model with strong multi-hop reasoning capacity should have better performance in the \emph{Ordered-Oracle} setting as the reasoning path is given.
Second, if the existing methods do not show great improvement when the reasoning paths are given, we hope to confirm \textbf{\emph{Q2: whether our dataset does not require multi-hop reasoning because of some data biases}} (see Section \ref{sec:discussion} for examples and discussions of such biases).
It is difficult to answer Q2 directly; therefore, in our analysis we try to answer a related question,
\textbf{\emph{Q2$'$: whether the existing models can be further improved on the same dataset with better reasoning techniques}}.
Obviously, if there exists a technique that does better with the oracle-order information, then the reasoning paths can indeed introduce additional information in our settings, and the answer to \emph{Q2} is likely \emph{yes}.
Therefore our dataset and settings can be used as a criterion for evaluating different models' multi-hop reasoning ability, i.e., for answering \emph{Q1}.
\section{Baseline Models}
For all methods, there are three inputs for the model: $q$ represents the question, $p_1$ the context passage, and $p_2$ the answer passage. Accordingly, the word-level encoded hidden sequences for these three inputs are $H^{q} \in \mathbb{R}^{l \times Q}$, $H^{p_1} \in \mathbb{R}^{l \times P_1}$, and $H^{p_2} \in \mathbb{R}^{l \times P_2}$ respectively.
\subsection{Baseline Models}
\paragraph{Bi-Attention Reader (HotpotReader)} One common state-of-the-art QA system is the HotpotReader \cite{yang2018hotpotqa} which is reported to benefit from the context passages. The system includes self-attention and bi-attention which are the standard practice in many question answering systems. We take this as one baseline as many other methods \cite{liu2017stochastic, xiong2017dcn+} generally have similar model architectures.
\paragraph{BERT Reader} Another strong baseline is to use the pre-trained BERT model to encode $q$, $p_1$, and $p_2$ all together, expecting the inner-attention mechanism to capture the order information.
Given that BERT can only take one input sequence containing the question and answer text separated by ``[SEP]", one straightforward approach is to encode all three inputs by concatenating the two passages $p_1$ and $p_2$ to form the answer text ``$q$ [SEP] $p_1$ $p_2$". A more explicit way to introduce the separation of the two passages is to include a learnable boundary token by using the reserved token ``[unused0]". Therefore we design another input for BERT as ``$q$ [SEP] $p_1$ [unused0] $p_2$". We adopt both approaches for completeness.
\section{Multi-hop Reasoning Approaches}
We seek to extend these two baseline models with two commonly used approaches for multi-hop reasoning, i.e.
query-reformulation and co-matching.
\subsection{Query-Reformulation Approach}
Query-reformulation is an idea widely used in many multi-step reasoning QA models~\cite{wu2016ask,shen2017reasonet,das2018multistep}.
The key idea is that after the model reads a paragraph, the question representation should be modified according to the matching results between the question and the paragraph. In this way, when the next paragraph comes, the model could focus on ``what is not covered'' from the history.
Most of the previous methods represent the question as a single vector so that the reformulation is performed in the embedding space.
However, representing a question with a single vector performs badly in our task, which is not surprising since most of the top systems on recent QA leaderboards adopt word-by-word attention mechanisms.
Therefore, to have a fair comparison, we need to extend the existing methods from reformulating single vectors to reformulating the whole hidden state sequences $H^q$.
To compare the first passage $H^{p_1}$ with the question $H^q$, we apply the $BiAtt$ function, which results in the matching states $\tilde{H}^q \in \mathbb{R}^{l\times Q}$, where each $\tilde{H}^q[:,i]$ states how the $i$th word of the question is matched by the passage $p_1$. Then we use these matching states to reformulate $H^q$ as follows:
\begin{equation}
\small
\begin{aligned}
\tilde{H}^{q} &= BiAtt(H^{p_1}, H^q)\\
M^q &=\gamma H^q + (1-\gamma) \mathrm{tanh}(W[H^q:\tilde{H}^{q}:H^q-\tilde{H}^{q}]) \\
\tilde{H}^{p_2} &= BiAtt(M^{q}, H^{p_2})\\
M &= BiLSTM(\tilde{H}^{p_2})\\
M' &= SelfAtt(M)
\end{aligned}
\label{eq:soft_reform}
\end{equation}
where $\gamma = \sigma(W_g[\tilde{H}^{q}:{H}^{q}:H^q-\tilde{H}^{q}])$ is a gate function. For the reformulation of $M^q$, we also tried several other popular options, including $M^q = \mathrm{tanh}(W[H^q:\tilde{H}^{q}:H^q-\tilde{H}^{q}])$, $M^q=BiLSTM([\tilde{H}^{q}:{H}^{q}:H^q-\tilde{H}^{q}])$ and directly setting
$M^q = \tilde{H}^{q}$. Among them, our gated function achieves the best performance.
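The gated update for $M^q$ can be sketched in a few lines of numpy. This is a minimal reconstruction, not the training code; the exact dimensionality of the gate is our assumption, with hidden size $l$, question length $Q$, and both $W$ and $W_g$ mapping the concatenated $3l$-dimensional features back to $l$ dimensions.

```python
import numpy as np

# Minimal sketch of the gated reformulation M^q from Eq. (soft_reform).
# Assumed shapes: H^q and the matched states are (l, Q) matrices; W and
# Wg are (l, 3l), so the gate gamma is computed per hidden unit.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reformulate(Hq, Hq_tilde, W, Wg):
    feats = np.concatenate([Hq, Hq_tilde, Hq - Hq_tilde], axis=0)  # (3l, Q)
    gamma = sigmoid(Wg @ feats)                                    # (l, Q) gate
    return gamma * Hq + (1.0 - gamma) * np.tanh(W @ feats)         # (l, Q)

rng = np.random.default_rng(0)
l, Q = 4, 6
Hq = rng.normal(size=(l, Q))
Ht = rng.normal(size=(l, Q))
Mq = reformulate(Hq, Ht, rng.normal(size=(l, 3 * l)), rng.normal(size=(l, 3 * l)))
```

The gate interpolates between keeping the original question state ($\gamma \to 1$) and replacing it with the matched update ($\gamma \to 0$).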
\subsection{Co-Matching Approach}
The work of \cite{wang2018co} proposed a co-matching mechanism to jointly encode the question and answer with the context passage. We extend the idea to conduct multi-hop reasoning in our setup. Specifically, we integrate co-matching into the baseline readers by first applying the bi-attention described in Equation \ref{bi-attention} on ($H^{q}$, $H^{p_2}$) and ($H^{p_1}$, $H^{p_2}$), using the same set of parameters.
\begin{equation}
\small
\begin{aligned}
\bar{H}^{q} &= {H}^{q}{G}^{q} \\
{G}^{q} &= SoftMax(({W}^{g}{H}^{q} + {b}^{g}\otimes{e}_{Q})^T{H}^{p_2}) \\
\bar{H}^{p_1} &= {H}^{p_1}{G}^{p_1} \\
{G}^{p_1} &= SoftMax(({W}^{g}{H}^{p_1} + {b}^{g}\otimes{e}_{P_1})^T{H}^{p_2})
\end{aligned}
\label{bi-attention}
\end{equation}
where ${W}^{g} \in \mathbb{R}^{l \times l}$ and ${b}^{g} \in \mathbb{R}^{l}$ are learnable parameters, and ${e}_{Q} \in \mathbb{R}^{Q}$ (resp.\ ${e}_{P_1} \in \mathbb{R}^{P_1}$) denotes a vector of all $1$s, used to repeat the bias vector across the corresponding sequence length.
We further concatenate the two output hidden sequences $\bar{H}^{q}$ and $\bar{H}^{p_1}$ and apply a BiLSTM to obtain the final hidden sequence for answer prediction, as shown in Equation \ref{co-match}. The start and end of the answer span are predicted based on $M$.
\begin{equation}
\small
M = BiLSTM([\bar{H}^{q}:\bar{H}^{p_1}]) \\
\label{co-match}
\end{equation}
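The co-matching attention can be sketched with numpy as follows. This is an illustrative reconstruction under our assumptions: sequences are stored column-wise as $H \in \mathbb{R}^{l \times \mathrm{len}}$, the softmax normalizes over the positions of the attended sequence, and the bias is broadcast instead of forming the explicit outer product with a ones vector.

```python
import numpy as np

# Minimal numpy sketch of the co-matching attention in Eq. (bi-attention).
# Assumptions: hidden size l; H^q is (l, Q), H^{p1} is (l, P1), H^{p2} is
# (l, P2); the softmax runs over the attended sequence's positions so that
# every p2 token gets a distribution over the other sequence.

def softmax(x, axis=0):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def co_match(H, Hp2, Wg, bg):
    scores = (Wg @ H + bg[:, None]).T @ Hp2   # (len_H, P2)
    G = softmax(scores, axis=0)               # attention over H's positions
    return H @ G                              # (l, P2), aligned to p2

rng = np.random.default_rng(1)
l, Q, P1, P2 = 4, 5, 7, 6
Hq, Hp1, Hp2 = (rng.normal(size=(l, n)) for n in (Q, P1, P2))
Wg, bg = rng.normal(size=(l, l)), rng.normal(size=l)

Hbar_q = co_match(Hq, Hp2, Wg, bg)    # (l, P2)
Hbar_p1 = co_match(Hp1, Hp2, Wg, bg)  # (l, P2), same parameters
M_in = np.concatenate([Hbar_q, Hbar_p1], axis=0)  # fed to the final BiLSTM
```

Both attended views are aligned to $p_2$'s positions, which is what allows the concatenation before the final BiLSTM.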
\paragraph{Co-Matching in HotpotReader}
We apply the above co-matching approach directly to HotpotReader's output.
\paragraph{Co-Matching in BERT}
One straightforward way to achieve co-matching in BERT is to separately encode the question, the first passage and the second one with BERT, and then apply the above co-matching functions on the output hidden sequence as proposed in \cite{zhang2019dual}.
However, as observed in the experiments, we believe the inter-attention mechanism (i.e., cross-paragraph attention) could capture the order information in an implicit way. Therefore, we still hope to benefit from the cross-passage attention inside BERT, while making it cooperate better with the three inputs. After the original encoding from BERT, we apply co-matching\footnote{To follow the original BERT's setup, we also apply the same attention dropout with a probability of 0.9 on the attention scores.} on the output sequence to explicitly encourage the reasoning path. $H^{q}$, $H^{p_1}$, and $H^{p_2}$ can easily be obtained by masking the output sequence according to the original text.
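Recovering the three sub-sequences from BERT's single output sequence amounts to boolean masking; a minimal sketch follows, where the per-token segment ids are assumed bookkeeping from preprocessing rather than part of the model.

```python
import numpy as np

# Sketch of recovering H^q, H^{p1} and H^{p2} from BERT's single output
# sequence by masking. The per-token segment ids (0 = question, 1 = p1,
# 2 = p2, -1 = special token) are assumed bookkeeping from preprocessing.

def split_hidden(hidden, seg_ids):
    seg_ids = np.asarray(seg_ids)
    return [hidden[:, seg_ids == s] for s in (0, 1, 2)]

hidden = np.arange(2 * 7, dtype=float).reshape(2, 7)  # (l = 2, seq_len = 7)
seg = [-1, 0, 0, -1, 1, -1, 2]  # [CLS] q q [SEP] p1 [unused0] p2
Hq, Hp1, Hp2 = split_hidden(hidden, seg)
```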
\section{Experiments}
\subsection{Settings}
We train and evaluate each model separately for each setting.
Following previous work \cite{yang2018hotpotqa}, we report the exact-match and F1 score for the answer prediction task.
\subsection{Results}
\label{ssec:exp_results}
In Table \ref{baseline-result},
the original HotpotReader method does not show significant performance improvement when comparing the Single-Oracle setting with the Ordered-Oracle setting. BERT was able to get a small improvement from its inner cross passage attention which introduces some weak reasoning. Surprisingly, overall the context passage in the reasoning path does not inherently contribute to the performance of these methods, which indicates that the models are not learning much multi-hop reasoning as previously thought.
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\bf Model} & \multicolumn{2}{c}{\bf Single-Oracle} & \multicolumn{2}{c}{\bf Ordered-Oracle} \\
& \bf EM & \bf F1 & \bf EM & \bf F1 \\
\midrule
HotpotReader & 55.07 & 70.00 & 55.17 & 70.75 \\
BERT & 64.08 & 77.86 & 65.03 & 79.15 \\
\bottomrule
\end{tabular}
\caption{Baseline results for HotpotReader and BERT}
\label{baseline-result}
\end{table}
We show our proposed improvements in Table \ref{hotpotreader-result} and \ref{bert-result}.
Compared to the Single-Oracle baseline (HotpotReader), when applying the co-matching mechanism in
the Ordered-Oracle setting, there is a significant improvement of 4.38\% in exact match and 4.26\% in F1.
The soft query reformulation also improves the performance but not as significantly.
In order to confirm that the improvement of co-matching does come from the usage of reasoning paths (instead of the higher model capacity), we make another comparison that runs the co-matching model over the Single-Oracle setting. To achieve this, we duplicate the single oracle passage twice as $p_1$ and $p_2$. Our results show that this method does not give any improvement.
Therefore the co-matching method indeed contributes to the performance gain of multi-hop reasoning.
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\bf Model} & \bf Order &\multicolumn{2}{c}{\bf Performance}\\
& & \bf EM & \bf F1 \\
\midrule
\multirow{3}{*}{HotpotReader} & Random & 52.23 & 69.80 \\
& Single-Oracle & 55.07 & 70.00\\
& Ordered-Oracle & 55.17 & 70.75\\
\midrule
\quad w/ Query-Reform & Ordered-Oracle & 56.89 & 71.69 \\
\midrule
\multirow{2}{*}{\quad w/ Co-Matching} & Single-Oracle & 55.00 & 70.23 \\
& Ordered-Oracle & \bf 59.45 & \bf 74.26 \\
\bottomrule
\end{tabular}
\caption{Results for HotpotReader on 3 oracle settings}
\label{hotpotreader-result}
\end{table}
BERT achieved promising results even in the Single-Oracle setting, which demonstrates its strong base capacity for QA.
The original BERT was improved by 1.23\% in exact match when both the context passage and the answer passage are provided and separated by an extra token. On top of that,
the co-matching mechanism contributes an additional 1.66\% exact-match improvement, which indicates the success of co-matching for reasoning. The co-matching result also shows that the passage chain contains additional information, and thus multi-hop ability is necessary in our analysis setting.
\begin{table}[!htbp]
\small
\centering
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\bf Model} & \bf Order &\multicolumn{2}{c}{\bf Performance}\\
& & \bf EM & \bf F1 \\
\midrule
\multirow{3}{*}{BERT} & Random & 59.18 & 75.27 \\
& Single-Oracle & 64.08 & 77.86\\
& Ordered-Oracle & 65.03 & 79.15\\
\quad w/ split token & Ordered-Oracle & 65.31 & 79.49 \\
\midrule
\quad w/ Co-Matching & Ordered-Oracle & \bf 66.97 & \bf 80.77 \\
\bottomrule
\end{tabular}
\caption{Results for BERT on 3 oracle settings}
\label{bert-result}
\end{table}
Of the two approaches, co-matching shows a promising performance improvement, especially for the well pre-trained BERT model. This suggests that the co-matching mechanism is able to conduct multi-hop reasoning following the passage chains.
Finally, both models perform worse in the Random setting than in the Single-Oracle setting, although the Random setting contains sufficient information about the whole reasoning chain. From the analysis, we find that it is difficult for the models to correctly predict the order of the randomly-ordered passages. For example, we created a binary classification task to predict which passage is the context passage and which is the answer passage; the BERT model achieves an accuracy of only 87.43\% on this task.
This gives further evidence that the existing models do not have appropriate inductive biases for utilizing the reasoning chains.
\paragraph{Answers to our research questions}
The above results answer our research questions as follows: (1) in our experimental setting, the reasoning paths are indeed useful, thus multi-hop reasoning is necessary, as there exists a method, i.e., co-matching, that has demonstrated significant improvement; (2) existing reader models usually cannot fully make use of the reasoning paths, indicating their limited reasoning abilities. Among the existing methods, BERT can do slightly better on making use of the reasoning paths. Our new proposed co-matching approach improves the reasoning abilities over both the two different base models (HotpotReader and BERT).
\section{Discussion}
\label{sec:discussion}
\paragraph{Difference from prior work}
Our work conducts the first analysis of \emph{models' behaviors}.
In comparison, a concurrent analysis work~\cite{min2019compositional}, which is also conducted on HotpotQA, focuses more on the properties of the dataset.
For example, \citep{min2019compositional} finds that for 80\% of the questions in HotpotQA, humans do not need the full paths of paragraphs to answer correctly.
One of the major reasons is the bias of factoid questions that look for certain types of entities as answers. For example, a question asking ``\emph{which sports team}'' can be directly answered if there is only one sports team mentioned in the documents.
Our analysis focuses on whether the full reasoning paths can help the \emph{machine learning models} to (1) improve their performance on those 80\% of the questions, as well as (2) cover the remaining 20\% of questions that indeed require the multi-hop ability.
Moreover, compared to the prior analysis, we are the first to analyze the effects of reasoning paths in an explicit way, and construct a dataset for this purpose.
\paragraph{The effect of data biases on our analysis}
The aforementioned biases make the full reasoning paths less useful for a large portion of data, therefore making it more challenging for reader models to improve with full reasoning paths.
Because of the data bias, it is critical to verify that the dataset we created can still benefit from the improved reasoning skills. That is why answering \emph{Q2} in Section \ref{ssec:analysis_method} is important for the whole analysis.
The results in Section \ref{ssec:exp_results} show that our co-matching methods can indeed benefit from the reasoning paths, confirming the effectiveness of our proposed dataset and settings for the analysis purpose.
\paragraph{Encouraging model design with better evaluation}
Finally, continuing from the previous paragraph, we hope to highlight the problem that the less biased a dataset is, the more easily a model can benefit from the availability of reasoning paths.
On many existing benchmark datasets that are biased, models are less likely to achieve improvements from designs specifically aimed at multi-hop reasoning.
This makes multi-hop reasoning a less important factor when people design models for these multi-hop QA datasets, if the goal is simply to improve the answer accuracy.
To encourage model design towards real reasoning instead of fitting the data biases, we believe that an improved evaluation is necessary. To this end, one way is certainly to create datasets with fewer biases. Our analysis also suggests another way: we can keep the biased training data but create small evaluation datasets with human-labeled reasoning paths. Then during evaluation, we compute the accuracy of the predicted reasoning paths. This is an extension of the idea of HotpotQA, which jointly evaluates support selection and answer extraction, but with a more explicit focus on the reasoning processes.
\section{Conclusion}
In this paper, we analyze QA models' capability in multi-hop reasoning by assessing whether the reasoning chain could help existing multi-hop readers. We observed a general weakness of state-of-the-art models in multi-hop reasoning and proposed a co-matching based method to mitigate it. Although co-matching is designed to encode only three input sequences and thus achieves limited multi-hop reasoning, we consider it the most promising approach among those studied, as it demonstrates concrete reasoning capability and has the potential for real multi-hop reasoning.
\section*{Acknowledgments}
We thank the anonymous reviewers for their very valuable comments and suggestions.
\section{The nature of the problem}
Probably one of the most interesting problems in modern physics, since it
appears to be one of the most fundamental and, at the same time, one
of the simplest, is the problem of interference. The measurements of Tonomura {\it et al.} \cite{tonomura89}, displayed in Fig. \ref{fig001}, reveal the impacts of single electrons on a detector screen, which only gradually and without apparent regularity build up into the familiar interference pattern of a two-slit interferometer.
In Feynman's view this experiment ``has been designed to contain all of the mystery of quantum mechanics, to put you up against the paradoxes and mysteries and peculiarities of nature one hundred percent'' \cite{feynman63}. These pictures give his judgement a particularly striking illustration. The effect is commonly associated with the notion of ``wave-particle duality'' \cite{selleri92} and is seen as one of the main examples where our classical concepts of reality break down, since, after all, how can an electron passing through one slit ``know'' whether the second aperture is open, a knowledge it seems to possess, because its impact on the screen depends on the setup. These measurements can formally be described by
the quantum theory of motion \cite{philippidis79,philippidis82}, but although the description is highly suggestive, it does not solve the problem of what the ``quantum-mechanical'' \cite{bohm52a} potential, substantial in this model, is actually supposed to mean. In this sense the claim that the problem is solved \cite{goldstein98} seems to be unjustified.
We propose, in this paper, a solution in the form of a step-by-step procedure,
where every step in the mathematical formalism can be justified by precise
and conceptually consistent physical arguments. The solution will be based on
our recent analysis of measurement processes \cite{hofer99a}, which in turn is
founded on the notion of extended particles \cite{hofer98a}. It will be seen
that the solution given by the quantum theory of motion is formally correct,
although it requires a full understanding of the ensemble structure of
quantum theory to be physically sound.
The paper is organized as follows: first we restate the classical solution to
a single slit experiment and show that the actual interference effect cannot
be localized. Based on a conjecture about the interaction with the detector
screen we propose a measurement to detect a possible incompleteness of
the standard model. Then the ensemble structure of quantum theory is briefly
discussed, and it will be shown that the probability interpretation of $\psi$
makes quantum theory (QT) inherently non-local. Based on the concept of quantum
ensembles a new interpretation will be given to the quantum potential Q
\cite{holland93}, and Q will be found, as Bohm suggested \cite{bohm84},
related to information, since it describes the change of ensembles due to the
physical environment. Finally we shall analyze the solution for the
two-slit interferometer in the quantum
theory of motion (QTM); in this case the analysis leads to the result that
the simplest way to account for varying amplitudes and volumes of the
extended and wave-like objects underneath the fundamental statements of QT
is to describe ensembles by varying densities of trajectories of point-like
objects. In this sense the ``particle'' in QTM has a double meaning: it is
a single and well defined physical object but, due to the quantum potential,
also a member of the full quantum ensemble. It will be advocated that this
is the actual meaning of the term ``guiding wave'', which figures prominently in
de Broglie's original concept \cite{broglie27}.
\section{Single-slit diffraction}
To analyze the main conceptual difficulties and
the inherent non-locality in the classical
treatment of interference problems, it is sufficient to consider, in the
scalar approximation, a single slit experiment. We assume that the
electrons incident in our interferometer can be described as solutions
of the Helmholtz equation of their wave-function $\psi(\vec{r})$, the
wave shall cover a sufficient region for the relation to make sense.
With this definition we wish to avoid the discussion of boundaries and
coherence of the wave, since it will be seen that the fundamental
feature, which is the inherent non-locality, arises as soon as we
admit a continuous wave-field as a suitable description. Helmholtz's
equation in the vacuum states:
\begin{equation}
\left(\Delta + k^2 \right) \psi(\vec{r}) = 0
\end{equation}
Using Green's theorem and the vacuum Green's function, $\psi(\vec{r})$ is transformed into an integral over the boundary $R(V)$:
\begin{eqnarray}
\psi(\vec{r}) & = & \oint_{R(V)} d^2 \vec{f}'
\frac{e^{ik |\vec{r} - \vec{r}'|}}{4 \pi |\vec{r} - \vec{r}'|} \times
\nonumber \\
& \times & \left[ \nabla' \psi (\vec{r}') +
\frac{2 \pi i \psi (\vec{r}')}{\lambda}
\frac{\vec{r} - \vec{r}'}{|\vec{r} - \vec{r}'|}
\left( 1 + \frac{i}{k |\vec{r} - \vec{r}'|} \right) \right]
\end{eqnarray}
\begin{figure}
\epsfxsize=0.9\hsize
\epsfbox{fig001.eps}
\vspace{0.5cm}
\caption{Electron interference measurement according to Tonomura {\it et al.}
[1]. The cumulative pattern is generated by electrons sent one by
one through a two-slit interferometer. Number of electrons: (a) 100,
(b) 3000, (c) 20000, (d) 70000}
\label{fig001}
\end{figure}
Using a $\delta$-functional to localize the wavelet in our system
we may rewrite $\psi(\vec{r})$ as:
\begin{equation}
\psi(\vec{r}') = \int_{- \infty}^{+ \infty} dt
\psi_{t}(\vec{r}',t) =: \int_{- \infty}^{+ \infty} dt
\delta^3 \left(\vec{r}' - \vec{c}_{p}t\right)
e^{ik |\vec{R} - \vec{c}_{p}t|}
\end{equation}
where $-\vec{R}$ is the source position and $\vec{c}_{p}$ the velocity of the ``particle''. If we neglect the derivatives of $\delta$, which shall only signify the existence of single entities, i.e.\ the fact that a single electron has a limited extension and is small compared to the system, then we get at the moment $t = 0$:
\begin{eqnarray}
\psi_{t=0}(\vec{r}) & = & e^{ikR} \Gamma(k,\vec{r}) \\
\Gamma(k,\vec{r}) & = & \frac{i}{2 \lambda} \oint_{R(V)}
d^2 \vec{f}' \frac{\vec{r} - \vec{r}'}{|\vec{r} - \vec{r}'|^2}
\left( 1 + \frac{i}{k |\vec{r} - \vec{r}'|} \right)
e^{ik |\vec{r} - \vec{r}'|} \nonumber
\end{eqnarray}
With a Kirchhoff approximation the calculation yields the familiar results of classical electrodynamics: the amplitude $\Gamma(k,\vec{r})$ depends on the setup geometry of the interferometer and the wavelength $\lambda$. But it provides the result already at the moment when the ``particle'' passes the slit ($t = 0$). Clearly, therefore, it does not describe the causal propagation of single particles but the probabilities of their impacts on the screen, even if this probability is contained in the intensity of the wave. A different way of stating the same result is to say that the effect cannot be localized.
Now let the interferometer operate in the Fraunhofer limit of diffraction (distance between screen and slit sufficiently large); then the variations of the wavefunction are \cite{landau81}:
\begin{equation}
\psi (k,\theta) \propto \frac{\sin (k \theta)}{k \theta}
\end{equation}
where $k$ is the wavevector and $\theta$ the azimuthal angle.
A phase $\alpha$ of the wave at its origin $-\vec{R}$ only adds a phase factor, which has no effect on the intensity in the conventional model. In the realistic model \cite{hofer98a} it does have an effect, though, if the energy effecting the measurement is either only the kinetic component or only the field component of the particle energy.
Since the field component can be described as the real part
of the wave $ \psi$, the intensity from this energy component alone is given by:
\begin{equation}
dI (k, \theta, \alpha) = \left|
\frac{\sin (k \theta)}{k \theta} \right|^2
\cos^2 \alpha d \theta
\end{equation}
The intensity as a function of the azimuthal angle $\theta$ and the phase
$\alpha$ is shown in Fig. \ref{fig002}.
Therefore, a measurement of the interference fringes using waves of a defined phase may add to our information about the interaction process. Either the result is independent of $\alpha$, in which case the interaction energy can only be the total energy and the conventional model is sufficient; or the result depends, as in Fig. \ref{fig002}, on the phase $\alpha$, in which case the interaction energy depends not on the total energy but on an unbalanced composition of field energy and kinetic energy. In either case the result yields an increase of information about the process; this applies to photons as well as electrons.
\begin{figure}
\epsfxsize=1.0\hsize
\epsfbox{fig002.eps}
\vspace{0.5cm}
\caption{Intensity of interference fringes due to the field component of
energy of the particle. The initial phase $\alpha$ determines the visibility
of the pattern on the screen in the far-field limit (Fraunhofer limit) of
the interferometer.}
\label{fig002}
\end{figure}
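The phase-dependent intensity $dI(k,\theta,\alpha)$ above is easy to evaluate numerically. The following sketch (arbitrary units, an illustrative value of $k$) reproduces the behavior shown in Fig. \ref{fig002}: the fringe pattern is fully visible for $\alpha = 0$ and vanishes for $\alpha = \pi/2$.

```python
import numpy as np

# Numerical sketch of the phase-modulated single-slit intensity
# dI ~ (sin(k*theta) / (k*theta))^2 * cos(alpha)^2.
# Units are arbitrary and the value of k is illustrative.

def intensity(theta, alpha, k=50.0):
    x = k * theta
    # np.sinc(t) = sin(pi t)/(pi t), so np.sinc(x/pi) = sin(x)/x
    envelope = np.sinc(x / np.pi) ** 2
    return envelope * np.cos(alpha) ** 2

theta = np.linspace(-0.3, 0.3, 601)
full = intensity(theta, alpha=0.0)        # field component in phase: full pattern
dark = intensity(theta, alpha=np.pi / 2)  # field component vanishes: no pattern
```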
Summing up this short classical analysis, it can be said that in classical field theory the origin of the interference effect cannot be localized; its result is obtained essentially by a summation of all components over the whole system. It is therefore, in a sense, non-local, because the setup can be chosen in such a way that the separation between the detector screen and the slit environment is space-like for a given problem. And since the amplitude is given already at the moment when the photon passes the slit, its trajectory contains information about the whole system in a non-local manner.
\section{Ensembles in quantum theory}
In non-relativistic QT a system is generally described by a suitable
Schr\"odinger equation \cite{schrodinger26}:
\begin{equation}
\hat{H} \psi = E \psi
\end{equation}
which we assume to be given also in the case of a two-slit interferometer. It shall
consist of a Hamiltonian with potentials suitable to describe the slit environment.
The main conceptual problem here is that the single impacts, e.g.\ those measured by Tonomura {\it et al.} \cite{tonomura89}, are not described by the wavefunction $\psi (\vec{r})$, where $\vec{r}$ is a location on the detector screen, since $\psi (\vec{r})$ only yields the overall intensity on the screen but not the single hit. One does not need Einstein's fundamental criticism of QT \cite{einstein35} to be left unsatisfied with a situation where one observes events that cannot be explained and that, if one follows the orthodox view \cite{bohr48}, are unexplainable in principle.
Bohr, for example, considered any real event on the atomic level (which is what these impacts amount to) as beyond the means of scientific analysis; in his view ``such an analysis is {\it in principle} excluded'' \cite{bohr49}.
We shall, in the following, interpret the single impacts as a result of the
ensemble structure of QT, which lies underneath its fundamental equations, and which
recently has been consistently analyzed for the first time \cite{hofer99a}.
The origin of the quantum ensemble is the unknown phase of $\psi$, together with
an intrinsic and field-like energy component of particle propagation
\cite{hofer98a}, not accounted for in the conventional framework of QT. In
particular it was shown that the uncertainty relations can be referred to
this intrinsic energy. Considering electrons of defined energy where
\begin{equation}
E_{T} = \hbar \omega = m u^2
\end{equation}
is the total energy including the intrinsic components,
an external potential $V (\vec{r})$ leads to an ensemble wavefunction,
described by an integral over k-space:
\begin{eqnarray}
\psi (\vec{r}) & = & \frac{1}{(2 \pi)^{3/2}} \int_{0}^{k_{1}} d^3 k \, \chi_{0} (\vec{k})
e^{i \vec{k} \vec{r}} \nonumber \\
k_{1} & = & k_{1}(\vec{r}) = \sqrt{\frac{m}{\hbar^2} \left(E_{T} - V(\vec{r}) \right)}
\end{eqnarray}
The integral limit $k_{1}$ describes the error margin due to the undefined intrinsic
energy components in quantum theory.
Since the amplitude $\chi_{0} (\vec{k})$ is undefined, the condition that the density
of a single plane-wave component integrated over real space equals the mass of one
electron:
\begin{equation}
\int_{- \infty}^{+ \infty} d^3 r \left| \chi (\vec{r}, \vec{k}) \right|^2 =
\chi_{0}^2 (\vec{k}) =: m_{e}
\end{equation}
leads, together with the probability interpretation of $\psi$, to the
following integral:
\begin{eqnarray}
\int_{- \infty}^{+ \infty} d^3 r \left|\psi (\vec{r}, \vec{k}) \right|^2 & = &
\frac{m_{e}}{(2 \pi)^3} \int_{- \infty}^{+ \infty} d^3 r \times
\nonumber \\ & \times &
\int_{0}^{k_{1}} d^3 k \, d^3k' e^{i \vec{r} (\vec{k}' - \vec{k})}
\end{eqnarray}
For a constant potential in the system, $U = const.$, the integral reduces to an integration over a sphere in k-space:
\begin{equation}
\int_{- \infty}^{+ \infty} d^3 r \left|\psi (\vec{r}, \vec{k}) \right|^2 =
\frac{4 \pi m_{e}}{3} k_{1}^3 \qquad
k_{1} = \sqrt{\frac{m}{\hbar^2} \left(E_{T} - U\right)}
\end{equation}
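The role of this normalization can be checked in a one-dimensional analog (a sketch, not the three-dimensional calculation): for $\psi(x) = \int_0^{k_1} dk\, e^{ikx}$, Parseval's theorem gives $\int dx\, |\psi|^2 = 2\pi k_1$, the one-dimensional counterpart of the k-sphere volume above.

```python
import numpy as np

# One-dimensional analog of the normalization integral (a sketch, not the
# 3-D computation). For psi(x) = int_0^{k1} dk exp(i k x), one has
# |psi(x)|^2 = 4 sin^2(k1 x / 2) / x^2, and Parseval's theorem gives
# int dx |psi|^2 = 2 pi k1 -- the 1-D counterpart of the k-sphere volume.

k1 = 1.0
x = np.linspace(-1000.0, 1000.0, 2_000_001)
# written via np.sinc (= sin(pi t)/(pi t)) so that x = 0 is handled smoothly
density = k1 ** 2 * np.sinc(k1 * x / (2.0 * np.pi)) ** 2
integral = float(np.sum(density) * (x[1] - x[0]))  # approaches 2*pi*k1
```

The small deviation from $2\pi k_1$ comes from truncating the slowly decaying $1/x^2$ tail of the density.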
In general the probability interpretation of $\psi$ thus has the following effects:
(i) The amplitude $\chi_{0} (\vec{k})$ has to be renormalized according to the
integration of $\psi$ over the whole system under the condition that
\cite{born26}:
\begin{equation}
\int_{- \infty}^{+ \infty} d^3 r \left|\psi (\vec{r}, \vec{k}) \right|^2
=: 1
\end{equation}
And (ii) the ensemble wavefunction at a given location within the
system then depends on the potentials and amplitudes in all the other parts of
the system. A physical process, described via the wavefunction $\psi$ therefore
cannot be localized at a specific point $ \vec{r}$ of the system.
\section{Ensembles in the quantum theory of motion}
If we assume, as in QT, that the two-slit interference problem can be solved
by solving the Schr\"odinger equation:
\begin{equation}
\hat{H} \psi = E_{in} \psi
\end{equation}
where $\hat{H}$ shall be a suitable Hamiltonian and $\psi$ the wavefunction of
the problem, while $E_{in}$ denotes the kinetic energy of the incident electrons, then the potentials of the problem should be known. Considering now that $\psi$ is the wavefunction of the ensemble, it carries two different pieces of information: (i) the information about the behavior of single electrons in the potential environment, the {\em physical} side of the problem, and (ii) the information about the change of the quantum ensemble due to the changes of the environment, the {\em statistical} side of the problem.
Since the magnetic properties of electrons are not considered, an analysis of Tonomura's experiments may be limited to the scalar problem, which means that the polarizations of intrinsic fields may be disregarded. From the viewpoint of single wavelets the unknown properties are therefore the amplitude and/or the volume of single ``particles'' \cite{hofer99a,hofer98a}. For practical reasons we could assume the volume of a single electron to be constant and, in the simplest possible model, even point-like compared to all relevant distances within our system.
Then a single point-particle has to comply with two different constraints:
(i) The potentials in our system, the physical constraints, and (ii) the
ensemble structure of solutions of Schr\"odinger's equation due to the variation
of the ensemble with the external potentials. While the first is a strictly
classical term, in this formalization of the problem described by classical
mechanics, the latter is essentially non-classical and originates from the
probability interpretation of the wavefunction.
Comparing with the standard expressions of QTM, where the change of $S$, the exponent of the ensemble wavefunction, is described by:
\begin{equation}\label{eq015}
- \frac{\partial S}{\partial t} = \underbrace{V + \frac{(\nabla S)^2}{2 m_{e}} }_{I}
- \underbrace{ \frac{\hbar^2}{2 m_{e}} \frac{\nabla^2 R}{R} }_{II = Q}
\end{equation}
it is obvious that (I) describes the classical energy components of particle motion,
while (II), where R is the amplitude of the ensemble wavefunction:
\begin{equation}
\psi = R \cdot e^{i S/ \hbar}
\end{equation}
is a non-classical term. The main difference here is our interpretation of $R$ and $Q$. In QTM it is assumed that ``matter ... [is endowed] ... with a field aspect, mathematically described by $R^2$ and $S$ ... which enables ... [QTM] ... to avoid the paradox of an individual's properties apparently depending on an ensemble'' \cite{holland93_65}. In our view Q, the ``quantum-mechanical'' \cite{bohm52a} potential, is not due to some non-classical and yet physical field, but describes the change of the ensemble due to the change of the ``physical'' potential $V(\vec{r})$. In this sense it is a basically statistical term, even if it appears as an energy, and it is, as Bohm suspected \cite{bohm84}, related to the information about the ensemble rather than to any physical quality of the single ``particle''. The classical limit of motion is then the case where $\psi$ does not influence the trajectory $\vec{x}(t)$ of any single member.
\begin{figure}
\epsfxsize=1.0\hsize
\epsfbox{fig003.eps}
\vspace{0.5cm}
\caption{Quantum potential for two Gaussian slits of the interferometer, the
potential is viewed from the detector screen, its origin is the change of the
quantum ensemble due to the change of the physical environment (from
Philippidis {\it et al.} [4])}
\label{fig003}
\end{figure}
That this interpretation of Q describes its real meaning better than the current one can also be seen from the fact that the ``particle equation of motion is a deduction from the Schr\"odinger equation'' \cite{holland93_79}, but not vice versa, and, most importantly, that although the particle ``responds to the local value of the field in its vicinity (via Q) ... there is no reciprocal action of the particle on the wave'' \cite{holland93_79}. If the quantum potential were of physical origin, the last statement would be equivalent to a violation of Newton's third law ({\it actio = reactio}). Therefore it cannot be of physical origin.
From a formal point of view the realistic interpretation and QTM consider the Schr\"odinger equation valid, and it has been held against Bohm's concept that it does not provide additional information \cite{zeh98}. In this case it must be conceded that QTM is {\it logically} equivalent to standard QT, which equally applies to the realistic interpretation. Then the result that the Schr\"odinger equation is not an exact equation (a result of the realistic interpretation \cite{hofer98a}) requires the existence of additional terms describing the evolution of a system, terms which are not related to the evolution of any single, well defined member. And in this case one arrives at the same conclusion: the additional term, showing up in QTM as the quantum potential, can {\it only} be related to this evolution of the system, described by the Schr\"odinger equation. In this sense it is also correct to say that QT does not know any well defined object, because of its fundamental equation.
\begin{figure}
\epsfxsize=1.0\hsize
\epsfbox{fig004.eps}
\vspace{0.5cm}
\caption{Trajectories of single particles in a two-slit
interferometer from [4]. The unusual curvature of the
trajectories is due to the quantum potential Q.}
\label{fig004}
\end{figure}
It is interesting to note that, as Peter Holland pointed out, most orthodox physicists attack the quantum theory of motion for two mutually exclusive reasons: for being too classical (in its conception of particles, trajectories etc.) and for being not classical enough (in its non-locality, the concept of the quantum potential etc.) \cite{holland93_26}. Zeh recently gave a similar example in an article with the programmatic title ``Why Bohm's Quantum Theory?'' \cite{zeh98}, when he criticized the theory both for being logically equivalent to QT (``successful only because it keeps Schr\"odinger's (exact) wave mechanics'') and for going beyond QT (although ``the rest of it is observationally meaningless'', the trajectories it describes are thought to result from ``unobservable causes for stochastic events''). Now QTM is, of course, logically equivalent to the Schr\"odinger equation. And once it is accepted that it is a simplified account of events (the point particle is the simplest possible model for the single wavelets in QT), then it teaches us something new, since it combines the notion of an individual with the notion of an ensemble in a consistent and instructive manner, something QT has not been able to achieve in seventy years of arguing. The GRW model \cite{ghirardi86} mentioned there cannot be considered on an equal footing, since it depends on a non-linear addition to the Hamiltonian: where this term should actually come from remains a mystery, even if Zeh refers it to ``fundamentally irreversible {\it spontaneous localization}''.
In his treatment of the two-slit problem Philippidis begins by calculating the
wavefunction $ \psi$ after the slits, assumed Gaussian for convenience,
which is then employed to derive the quantum potential Q from Eq. (\ref{eq015}).
The numerical calculation used characteristic experimental data
of a 1961 electron interference measurement by J\"onsson \cite{joensson61};
Q was computed for the whole region between the slits and the detector screen.
Here the quantum potential is essentially due to the local constraints of the
wavefunction at the two slits, in contrast to the previous chapter, where our
main emphasis was on constraints due to external potentials. Given the symmetry
of QT (and also QTM) concerning real space and momentum space, the difference
should not change the picture: a change of the ensemble gives rise, in QTM, to
a quantum potential Q.
Fig. \ref{fig003} reproduces the result of the calculation, the two parabolic
peaks in the back coincide with the slit positions. The particle trajectories in
QTM are calculated by integrating the equation:
\begin{equation}
\vec{p} = \nabla S
\end{equation}
where $\vec{p}$ is the momentum of the particle and $S$ the phase of the wavefunction.
Initially, the trajectories from the two slits fan out as for a single-slit
interference measurement; it is only where Q becomes appreciable that distinct
kinks appear, which are due to a rapid acceleration at the troughs of the quantum
potential. The trajectories of single particles are displayed in Fig.
\ref{fig004}. The single hits on the screen reproduce the overall pattern of the intensity
calculated in the conventional manner, but the single impacts are at distinct
locations: in that sense QTM reproduces the full extent of the experimental findings
(in contrast to QT, where only the intensity is given). And while in QT the electron
is a particle and a wave simultaneously, it is in QTM a particle, guided by its
quantum potential Q, which represents the full quantum ensemble of a given
environment.
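The integration of these trajectories can be illustrated with a small numerical sketch. The following Python fragment is not part of the original calculation by Philippidis; all parameters (slit separation, packet width, units with $\hbar = m = 1$) are illustrative assumptions, and the guidance law $\vec{p} = \nabla S$ is used in the equivalent form $v = {\rm Im}(\partial_x \psi / \psi)$ for a superposition of two freely spreading Gaussian wavelets:

```python
import numpy as np

# Free evolution of a 1-D Gaussian wave packet centred at x0 (hbar = m = 1).
def gaussian_packet(x, t, x0, sigma0):
    st = sigma0 * (1 + 1j * t / (2 * sigma0**2))   # complex spreading width
    return (2 * np.pi * st**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma0 * st))

# Bohmian guidance law: v = Im( (d psi / dx) / psi ), evaluated numerically.
def velocity(x, t, x0a, x0b, sigma0, h=1e-5):
    psi  = lambda y: gaussian_packet(y, t, x0a, sigma0) + gaussian_packet(y, t, x0b, sigma0)
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)
    return np.imag(dpsi / psi(x))

# Euler integration of one trajectory x(t); all parameters are illustrative.
def trajectory(x_init, t_max=6.0, dt=0.01, x0a=-1.0, x0b=1.0, sigma0=0.2):
    xs, x, t = [x_init], x_init, 0.0
    while t < t_max:
        x += velocity(x, t, x0a, x0b, sigma0) * dt
        t += dt
        xs.append(x)
    return np.array(xs)

# A trajectory starting near the upper slit; by symmetry the guidance
# velocity vanishes on the central axis, so trajectories cannot cross it.
path = trajectory(1.1, t_max=1.0)
```

Since the superposition is symmetric about the central axis, the guidance velocity vanishes there, which is why the trajectories in Fig. \ref{fig004} from one slit never cross into the other half-plane.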
\section{Non-locality, or what?}
The question whether or not an individual electron ``knows'' of the
setup must remain open, since, as the analysis reveals, none of our
current theories gives a local description of the problem. In classical
field theories this fact has been known for quite some time; it is what
we mean when we speak about ``waves''. In QT, due to the essentially
abstract framework, the same feature is less obvious.
Non-locality, it has been shown, enters the framework essentially via
the normalization of the wavefunction, because this cannot be done without
considering the ensemble over the whole system.
But while in a classical context the non-locality could be argued on the
basis of the rather high field extension, this seems no longer a possibility
considering current experimental techniques. With femto-lasers and,
most strikingly, atom interference measurements \cite{leibfried98} the
typical extension of a single wavelet is well below the separation of
the slits of an interferometer. Conceptually, one either has to concede
that the formal non-locality is also an experimental fact, because it is
hard to support the notion that interactions of an atom with one slit environment
depend on the arrangement of atoms some $10^4$ atom diameters away
(the distance to the second slit is about 8 $\mu m$ \cite{leibfried98}). Or,
along a completely different line of reasoning, one argues that the
coherence of the beam is the ultimate origin of non-locality, since it
guarantees that a single atom fits into the overall pattern (e.g. via its phase).
This is a possibility we shall explore in future publications.
The main improvement, compared to the current state of affairs in QTM,
is conceptual. While, for example, the calculation of the quantum potential
(a {\it physical} cause for the motion of a single particle) from the
wavefunction (according to current belief \cite{born26} a {\it statistical}
measure of events and locations) must remain logically inconsistent,
the procedure becomes perfectly sound, once the wavefunction gains a double
meaning (a fundamental result of the realistic interpretation \cite{hofer99a})
and if we concede that the point-particle is only a very crude approximation.
In this sense Bohm's quantum theory of motion appears to be the simplest
mathematical form the realistic interpretation can take, which also means
that its extension to all fields, e.g. to the atomic domain, might not make
too much sense. Because in fundamental processes, where the physics of a system
is as well known as in hydrogen \cite{hofer98b}, the simplifications
of the quantum theory of motion may well
lead to a completely distorted picture.
\section{Introduction}
The topic of pattern avoidance has received a lot of attention since Knuth's work in~\cite{knu:acp3}.
To define pattern avoidance we start with saying that two words of integers $a_1a_2 \ldots a_k$ and $b_1 b_2 \ldots b_k$ are {\it order isomorphic} when $a_i \leq a_j$ if and only if $b_i \leq b_j$ and $a_i \geq a_j$ if and only if $b_i \geq b_j$ for all $i$ and $j$. A permutation $\sigma \in {\mathfrak S}_n$ of the set $[n]=\{1,2,\dots, n\}$ is said to {\it contain the pattern} $\pi \in {\mathfrak S}_k$ if there exists an increasing sequence of indices $m_1, m_2, \ldots , m_k$ such that $\sigma(m_1)\sigma(m_2)\ldots \sigma(m_k)$ is order isomorphic to $\pi$. We say that $\sigma$ {\it avoids} the pattern $\pi$ if $\sigma$ does not contain the pattern $\pi$. We notate these pattern avoidance classes by
$${\mathfrak S}_n(\pi) = \{ \sigma \in {\mathfrak S}_n : \sigma \text{ avoids } \pi \}.$$
Two patterns $\pi_1$ and $\pi_2$ are {\it Wilf-equivalent} if $|{\mathfrak S}_n(\pi_1)| =|{\mathfrak S}_n(\pi_2)| $ for all $n\geq 0$. Knuth~\cite{knu:acp3} found that for patterns $\pi\in{\mathfrak S}_3$ there is only one Wilf-equivalence class, with $|{\mathfrak S}_n(\pi)|=\frac{1}{n+1}\binom{2n}{n}$, the $n$th Catalan number.
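The containment definition translates directly into a brute-force check. The following Python sketch (our own illustrative code; the helper names are not from any cited work) tests order isomorphism on every length-$k$ subsequence and recovers the Catalan counts:

```python
from itertools import combinations, permutations

def contains(sigma, pi):
    """Brute-force test: does sigma contain the pattern pi?"""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        # order isomorphic <=> all pairwise comparisons agree
        if all((sigma[idx[a]] < sigma[idx[b]]) == (pi[a] < pi[b])
               for a in range(k) for b in range(k)):
            return True
    return False

def avoiders(n, pi):
    return [s for s in permutations(range(1, n + 1)) if not contains(s, pi)]

# Knuth: every length three pattern gives the Catalan numbers 1, 2, 5, 14, 42, ...
counts = [len(avoiders(n, (1, 3, 2))) for n in range(1, 6)]
```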
For length four patterns there are three Wilf-equivalence classes. Though the original proof of these classes includes the work of~\cite{S96} and~\cite{W95}, it is sufficient to use symmetries, Stankova's equality in~\cite{S94} $|{\mathfrak S}_n(4132)|=|{\mathfrak S}_n(3142)|$ (Theorem 3.1), and Backelin, West, and Xin's result~\cite{BWX07} (Theorem 2.1) that $12\dots i \pi(i+1)\dots\pi(k)$ and $i\dots 21 \pi(i+1)\dots\pi(k)$ are Wilf-equivalent. The class represented by 1234 was enumerated by Gessel~\cite{G90} (page 281) using symmetric functions and the class represented by 1342 was enumerated by B\'{o}na~\cite{B97} (Theorem 3); however, the last class, represented by 1324, has not yet been enumerated.
Simion and Schmidt in~\cite{ss:rp} enumerated classes avoiding multiple length three patterns. They also considered pattern avoidance for the subclasses of involutions ${\mathcal I}_n=\{\sigma\in {\mathfrak S}_n:\sigma^2=\text{id}\}$, even permutations and odd permutations.
If we denote $${\mathcal I}_n(\pi) = \{ \iota \in {\mathcal I}_n : \iota \text{ avoids } \pi \}$$ as the pattern avoidance class for involutions avoiding $\pi$ we say that $\pi_1$ and $\pi_2$ are {\it ${\mathcal I}$-Wilf equivalent} if $|{\mathcal I}_n(\pi_1)| =|{\mathcal I}_n(\pi_2)| $.
Simion and Schmidt found that there are two ${\mathcal I}$-Wilf equivalent classes for patterns in ${\mathfrak S}_3$.
\begin{thm}[Simion and Schmidt~\cite{ss:rp} Propositions 3, 5 and 6] There are two ${\mathcal I}$-Wilf equivalence classes for patterns in ${\mathfrak S}_3$.
$$
|{\mathcal I}_n(\pi)| = \left\{
\setstretch{1.5}
\begin{array}{ll}
\binom{n}{\ceil{ n/2}} &\pi \in \{123, 132, 321, 213\}, \\
2^{n-1} & \pi \in\{ 231, 312\}.
\end{array}
\right.
$$
\label{thm:SimionSchmidt}
\vspace{-.4in}
\hfill \qed
\vspace{.5cm}
\end{thm}
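Theorem~\ref{thm:SimionSchmidt} can be verified by brute force for small $n$. The sketch below (our own illustrative code) enumerates involutions directly from the definition $\sigma^2 = \text{id}$:

```python
from itertools import combinations, permutations
from math import comb

def involutions(n):
    """All iota in S_n with iota^2 = id (one-line notation, 1-based values)."""
    return [s for s in permutations(range(1, n + 1))
            if all(s[s[i] - 1] == i + 1 for i in range(n))]

def contains(sigma, pi):
    k = len(pi)
    return any(all((sigma[idx[a]] < sigma[idx[b]]) == (pi[a] < pi[b])
                   for a in range(k) for b in range(k))
               for idx in combinations(range(len(sigma)), k))

for n in range(1, 7):
    avoid_123 = sum(1 for s in involutions(n) if not contains(s, (1, 2, 3)))
    avoid_231 = sum(1 for s in involutions(n) if not contains(s, (2, 3, 1)))
    assert avoid_123 == comb(n, (n + 1) // 2)   # binom(n, ceil(n/2))
    assert avoid_231 == 2 ** (n - 1)
```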
For length four patterns there are eight different ${\mathcal I}$-Wilf equivalence classes. The original classification of these classes includes the work of~\cite{GPP01} and~\cite{J02}; however, we can piece together the eight classes from the following. From Guibert's work~\cite{G95} we have $|{\mathcal I}_n(3412)|=|{\mathcal I}_n(4321)|$.
Bousquet-M\'{e}lou and Steingr\'{i}msson in~\cite{BS05} (Theorem 1) proved that the map Backelin, West, and Xin defined in~\cite{BWX07} commutes with taking inverses, and so showed that $12\dots i \pi(i+1)\dots\pi(k)$ and $i\dots 21 \pi(i+1)\dots\pi(k)$ are ${\mathcal I}$-Wilf equivalent. Bloom and Saracino in~\cite{BS12} present a shortened proof using growth diagrams. Using some implications of the Robinson-Schensted-Knuth map and symmetries, the classification can be completed.
Of these eight classes only a few have been enumerated. Regev~\cite{R81} (Section 4.5) enumerated the class represented by 1234, whose counting sequence is the Motzkin numbers. This class has received special attention as equalities between the pattern avoidance sets were established using bijections to 1-2 trees with $n$ edges~\cite{G95,GPP01,J02}. The class represented by 2413 was enumerated by Brignall, Huczynska and Vatter~\cite{BHV08} (Example 6.6), where $|{\mathcal I}_n(2413)|$ equals the number of separable involutions. Two other classes, represented by 2341 and 1342, were enumerated by B\'{o}na et al.~\cite{BHPV16} (Sections 5 and 6), who also did work on the asymptotic growth of all classes.
The cardinalities for multiple pattern avoidance in involutions have been classified and enumerated by Guibert and Mansour in~\cite{GM02a} (Examples 2.6, 2.8, 2.12, 2.18, 2.20)
who consider sets of patterns containing 132, Egge and Mansour~\cite{EM04} who enumerate the sets containing the pattern 231, and Wulcan who enumerates all pairs of length three patterns in~\cite{W02}.
Though there is much more work on pattern avoidance in permutations and involutions for longer patterns, multiple patterns and many other kinds of restrictions or generalizations, we instead turn our focus to the more refined equivalence class defined by the distribution of statistics.
Sagan and Savage~\cite{SS12} asked about the distribution of statistics over pattern avoidance classes of permutations, which Dokos et al.\ answer in~\cite{DDJSS12}.
In~\cite{DDJSS12} they consider the distribution of two permutation statistics, number of inversions and major index, over the permutation avoidance classes for any set of length three patterns.
An {\it inversion} is a pair of indices $i<j$ such that $\pi(i)>\pi(j)$. The {\it set of inversions} is
$$\Inv (\sigma) = \{ (i,j):i<j \text{ and } \sigma (i)>\sigma(j) \}$$
and the {\it inversion number} is $\inv(\sigma) = |\Inv(\sigma)|$. The {\it descent set} of an integer word $w=w_1w_2\dots w_n$ is $\Des(w) = \{i\in [n-1]:w_i>w_{i+1}\}$ from which we define $\des(w)=|\Des(w)|$ and the {\it major index},
$$\maj(w)=\sum_{i\in \Des(w)} i.$$
One reason these statistics hold interest is a result of Major Percy MacMahon, who found that the generating function of ${\mathfrak S}_n$ for $\maj$ or $\inv$ is
$$\sum_{\sigma\in{\mathfrak S}_n}q^{\maj(\sigma)}=\sum_{\sigma\in{\mathfrak S}_n}q^{\inv(\sigma)}=[n]_q!=[n]_q[n-1]_q\cdots[1]_q$$
the standard $q$-analogue for $n!$ where $[n]_q=1+q+\cdots + q^{n-1}$ is the standard $q$-analogue for $n$. This result can be found in~\cite{S97} (Corollary 1.3.13 and Proposition 1.4.6).
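MacMahon's equidistribution is easy to confirm computationally for small $n$. The sketch below (our own illustrative code) computes both statistics over ${\mathfrak S}_4$ and compares the resulting coefficient distributions, which should both equal the coefficients of $[4]_q!$:

```python
from itertools import permutations
from collections import Counter

def inv(w):
    return sum(1 for i in range(len(w))
               for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):
    # Des(w) = {i : w_i > w_{i+1}} with 1-based positions
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

# MacMahon's equidistribution: maj and inv have the same distribution
# over S_n, namely the coefficients of [n]_q!.
n = 4
dist_inv = Counter(inv(s) for s in permutations(range(1, n + 1)))
dist_maj = Counter(maj(s) for s in permutations(range(1, n + 1)))
assert dist_inv == dist_maj
```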
This function has many beautiful properties including symmetry and log-concavity~\cite{S97} (Exercise 1.50(e)). A polynomial $a_0+a_1q+\cdots +a_kq^k$ is said to be {\it symmetric} if $a_{i}=a_{k-i}$ for all $i$ and is {\it log-concave } if $a_i^2 \geq a_{i-1}a_{i+1}$ for all $i$. Log-concavity is particularly interesting because it implies that the polynomial is {\it unimodal}~\cite{S97} (Exercise 1.50(a)), that $a_0\leq a_1\leq \dots \leq a_j \geq a_{j+1}\geq \dots\geq a_k$ for some $j$.
The associated generating functions for the restricted class of involutions have been studied by Dukes~\cite{D07}. He found that the generating function for $\maj$ over ${\mathcal I}_n$ is symmetric (Corollary 2.4); he conjectured it to be additionally log-concave and proved some partial results about its unimodality.
The generating function for involutions and $\inv$ was studied by D\'{e}sarm\'{e}nien~\cite{D82} who related the function to $q$-Hermite polynomials, however, this function is not unimodal nor log-concave.
Dokos et al.\ in~\cite{DDJSS12} have a full study of the permutation generating functions for $\inv$ and $\maj$ over the avoidance class of any subset of ${\mathfrak S}_3$. Specifically, the generating functions they studied were
$$I_n(\pi) = I_n(\pi;q) = \sum_{\sigma \in {\mathfrak S}_n(\pi)} q^{\inv (\sigma)}$$
and
$$M_n(\pi) = M_n(\pi ;q) = \sum_{\sigma \in {\mathfrak S}_n(\pi)} q^{\maj (\sigma)}.$$
They defined that $\pi_1$ and $\pi_2$ are {\it $I$-Wilf equivalent} if $I_n(\pi_1) =I_n(\pi_2)$ and {\it $M$-Wilf equivalent} if $M_n(\pi_1) =M_n(\pi_2)$. Let $[\pi]_I$ and $[\pi]_M$ denote the associated equivalence classes. They determined these classes for length three patterns and described the generating functions.
\begin{thm}[Dokos et al.~\cite{DDJSS12} Theorems 2.3 and 2.6] The $I$-Wilf equivalence and $M$-Wilf equivalence classes for single length three patterns are as follows.
\begin{enumerate}[(i)]
\item The non-singular $I$-Wilf classes are $[132]_I=\{132,213\}$ and $[231]_I=\{231,312\}$.
\item The non-singular $M$-Wilf classes are $[132]_M=\{132,231\}$ and $[213]_M=\{213,312\}$.\hfill \qed
\end{enumerate}
\label{thm:Dokos}
\end{thm}
Since all these pattern classes are counted by the Catalan numbers the generating functions described are all $q$-analogues for $C_n$. Further work on these generating functions can be found in~\cite{B14,C15,CEKS13,GM09,T15,YGZ15}.
Our work in this paper parallels the work of Dokos et al.\ since we aim to describe the generating functions
$$I{\mathcal I}_n (\pi)= I{\mathcal I}_n(\pi;q) = \sum_{\iota \in {\mathcal I}_n(\pi)} q^{\inv (\iota)}$$
and
$$M{\mathcal I}_n(\pi) = M{\mathcal I}_n(\pi ;q) = \sum_{\iota \in {\mathcal I}_n(\pi)} q^{\maj (\iota)}$$
for single and later multiple patterns of length three as well as determine which patterns give equal generating functions.
This is the first full study of these functions, though some of them have been well-studied individually by others.
The generating function for ${\mathcal I}_n(132)$ has been studied before by Guibert and Mansour in~\cite{GM02} (Theorem 4.2), who studied the generating function for $\des$ and the number of occurrences of the pattern $12\dots k$, which equals $\binom{n}{2}$ minus $\inv$ when $k = 2$.
The function $M{\mathcal I}_n(321)$ was studied by Barnabei et al.~\cite{BBES14} (Theorem 3.3) who found that this function is the standard $q$-analogue for the central binomial coefficient, where the standard $q$-analogue for a general binomial coefficient is
\begin{equation}{n\brack{k}}_q=\frac{[n]_q!}{[n-k]_q![k]_q!}.
\label{eq:qbinom}
\end{equation}
In their proof they establish a connection to hook decompositions. We independently determined this result with a shorter proof that establishes a connection to core, a notion that is usually used to prove symmetric chain decompositions in poset theory. Additionally, our proof is easily modified to prove another interpretation of the standard $q$-analogue for any binomial coefficient, which is a result that also appears in~\cite{BBES16} (Corollary 14) by Barnabei et al. Some ideas of the bijection we present later can be seen in~\cite{EFPT15, BBES16} in their discussions of associating involutions avoiding 321 to Dyck paths, though our phrasing of it in terms of core is new.
In~\cite{E04} Egge considers the length four pattern 3412 and studies $I{\mathcal I}_n(3412)$. There seems to be no more work done on these generating functions for longer patterns.
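Equation~\eqref{eq:qbinom} can be checked computationally. The sketch below (illustrative Python; the representation of polynomials as coefficient lists and all helper names are ours) recovers the Gaussian binomial by exact polynomial division and exhibits its symmetry:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def q_int(n):                      # [n]_q = 1 + q + ... + q^(n-1)
    return [1] * n

def q_factorial(n):                # [n]_q! = [n]_q [n-1]_q ... [1]_q
    out = [1]
    for k in range(1, n + 1):
        out = poly_mul(out, q_int(k))
    return out

def q_binomial(n, k):
    """Coefficients of [n choose k]_q via exact polynomial long division."""
    num = q_factorial(n)[:]
    den = poly_mul(q_factorial(k), q_factorial(n - k))
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = num[i + len(den) - 1] // den[-1]   # quotient is integral
        for j, b in enumerate(den):
            num[i + j] -= quot[i] * b
    return quot

c = q_binomial(4, 2)      # 1 + q + 2q^2 + q^3 + q^4
```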
One goal of this paper is to determine which length three patterns give equivalent generating functions.
We will say that $\pi_1$ and $\pi_2$ are {\it $I{\mathcal I}$-Wilf equivalent} if $I{\mathcal I}_n(\pi_1) =I{\mathcal I}_n(\pi_2)$, {\it $M{\mathcal I}$-Wilf equivalent} if $M{\mathcal I}_n(\pi_1) =M{\mathcal I}_n(\pi_2)$ and write $[\pi]_{I{\mathcal I}}$ and $[\pi]_{M{\mathcal I}}$ for the associated equivalence classes. However, these equivalence classes can be established quickly, so this paper moreover considers the description of the generating functions. Some of these generating functions already have explicit descriptions. Since ${\mathfrak S}_n(231,312)={\mathcal I}_n(231)$, which can be concluded from the work in~\cite{ss:rp} (Propositions 6 and 8), all generating functions regarding the pattern $231$ have been determined by Dokos et al.~\cite{DDJSS12}. We include a description for all patterns and generating functions for the completeness of this study.
This paper is organized as follows. In the next section we introduce some background information about using symmetries of the square and writing permutations as inflations. Section~\ref{inv} focuses on describing $I{\mathcal I}_n(\pi)$ for length three patterns and considers the fixed-point-free case, $\iota(i)\neq i$ for all $i$. We find a connection to a $q$-Catalan analogue defined by Carlitz and Riordan in~\cite{CR64} when determining $I{\mathcal I}_n(132)$ in Proposition~\ref{II132}.
This section finishes with the result that any $\iota\in{\mathcal I}_{2k+1}(123)$ has $\inv(\iota)$ even if and only if $k$ is even in Corollary~\ref{cor:123inv}. In Section~\ref{maj} we present our results about $M{\mathcal I}_n(\pi)$ including our result about $M{\mathcal I}_n(321)$ and $q$-analogues for binomial coefficients in Theorem~\ref{theorem:321qanalougeequiv}. For each pattern we also consider the fixed-point-free case. In this section we also re-present the symmetry $M{\mathcal I}_n(\pi_1;q)=q^{\binom{n}{2}}M{\mathcal I}_n(\pi_2;q^{-1})$ between the patterns $\pi_1 =123$ and $\pi_2=321$ in Proposition~\ref{thm:123&321 symmetry}, which has been shown before in~\cite{BBES14,DRS07,ss:rp}. We particularly present this symmetry because we also prove this symmetry for the pair of patterns 132 and 213 in Theorem~\ref{thm:132symm213}. We then summarize in Section~\ref{multi} the generating functions for multiple pattern avoidance in involutions. We finish the paper with Section~\ref{permsymm} that includes a result about the symmetries for the larger class of permutations, $M_n(\pi_1;q)=q^{\binom{n}{2}}M_n(\pi_2;q^{-1})$, between patterns 123 and 321 as well as 132 and 213. Both of these results are natural and the result about the pair of patterns 123 and 321 is classical as it is an elegant generalization of the involution case. What is innovative is the map we define between ${\mathfrak S}_n(132)$ and ${\mathfrak S}_n(213)$ in equations~\eqref{eq:theta1} and~\eqref{eq:theta2} that restricts to involutions. We conclude with Conjectures~\ref{conj} and~\ref{conj:invo} that this symmetry always happens for involutions and permutations for the pair of patterns $k(k-1)\dots 1(k+1)(k+2)\dots m$ and $12\dots (k-1) m(m-1)\dots k$ for any $1\leq k \leq m$.
\section{Diagrams and inflations of permutations}
The proofs behind the $M{\mathcal I}$-Wilf and $I{\mathcal I}$-Wilf equivalence classes are quick and can be shown using a geometrical approach to permutations. The {\it diagram} of a permutation $\sigma \in {\mathfrak S}_n$ is the collection of points $(i,\sigma(i))$ in the coordinate plane inside the square with corners at $(1,1)$ and $(n,n)$. Figure \ref{fig:boxexample} illustrates the involution $\iota = 216543$.
\begin{figure}[h]
\centering
\begin{tikzpicture} [scale=.5]
\filldraw [black]
(1,2) circle (3pt)
(2,1) circle (3pt)
(3,6) circle (3pt)
(4,5) circle (3pt)
(5,4) circle (3pt)
(6,3) circle (3pt);
\draw (1,1) -- (6,1) -- (6,6) -- (1,6) -- (1,1);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture} [scale=.5]
\filldraw [black]
(2.3,5.1) circle (3pt)
(1.8,4.6) circle (3pt)
(1.3,4.1) circle (3pt)
(3,1.8) circle (3pt)
(3.5,1.3) circle (3pt)
(4.1,2.9) circle (3pt)
(4.6,2.4) circle (3pt)
(5.1,3.4) circle (3pt)
(5.7,5.7) circle (3pt);
\draw (1,34/9) rectangle (24/9,49/9);
\draw (34/9,19/9) rectangle (49/9,34/9);
\draw (24/9,1) rectangle (34/9,19/9);
\draw (49/9,49/9) rectangle (6,6);
\draw (1,1) -- (6,1) -- (6,6) -- (1,6) -- (1,1);
\end{tikzpicture}
\caption{From left to right we have the diagrams of $216543$ and $3124[{\mathfrak i}_3,{\mathfrak d}_2,213,1] = 678214359$.}
\label{fig:boxexample}
\end{figure}
On this square we can do a number of operations that preserve the square. We can reflect the square across a line through the center of the square. If the square is preserved then the reflected diagram represents a permutation and we will notate this new permutation $r_m(\sigma)$ where $m$ is the slope of the line. We can also perform rotations about the center of the square and those rotations that preserve the square will also produce a diagram associated to a permutation $R_{\theta}(\sigma)$ where we rotate counterclockwise by $\theta$. The reflections $r_1$, $r_0$, $r_{-1}$, and $r_{\infty}$ and rotations $R_0$, $R_{90}$, $R_{180}$ and $R_{270}$ all preserve the square and give us bijections ${\mathfrak S}_n\rightarrow{\mathfrak S}_n$. We will say equivalence classes between patterns are proven {\it trivially} if they can be proven purely from these maps.
Since we are particularly interested in involutions we are only going to be interested in the operations that map an involution to another involution. A two-cycle $(i,j)$ in a permutation implies we have the points $(i,j)$ and $(j,i)$, which are symmetric around the line with slope $m=1$. If a permutation has only two-cycles and one-cycles then the diagram must be symmetric around the line with slope $m=1$ or its main diagonal. This means that any operation that maps a permutation with this symmetry to another with this symmetry will be a bijection ${\mathcal I}_n\rightarrow{\mathcal I}_n$.
\begin{lemma} We have the following properties for the operations on the square.
\begin{enumerate}[(i)]
\item The operations $r_1$, $r_{-1}$, $R_0$ and $R_{180}$ are bijective maps ${\mathcal I}_n\rightarrow {\mathcal I}_n$.
\item The maps $r_1$ and $R_0$ are both the identity map on involutions.
\item The maps $r_{-1}$ and $R_{180}$ are the same map ${\mathcal I}_n\rightarrow {\mathcal I}_n$. \hfill \qed
\end{enumerate}
\label{lemma:operations and involutions}
\end{lemma}
We say that a map {\it preserves a statistic} if the statistic remains unchanged under the map. For example if we have a map $\phi :{\mathfrak S}_n \rightarrow {\mathfrak S}_n$ that preserves $\inv$ then we mean that $\inv(\sigma)=\inv(\phi(\sigma))$ for all $\sigma$ in the domain ${\mathfrak S}_n$.
Dokos et al.~\cite{DDJSS12} detailed which operations preserve $\inv$ and $\maj$. We find that all the maps that map involutions to involutions preserve $\inv$ and no operation except the identity preserves $\maj$.
\begin{lemma}[Dokos et al.~\cite{DDJSS12} Lemma 2.1] For $\iota \in {\mathcal I}_n$ the operations $r_1$, $r_{-1}$, $R_0$ and $R_{180}$ preserve $\inv$. \hfill \qed
\label{lemma:inv preserved}
\end{lemma}
Another tool that we will use often in this paper is describing a permutation as an inflation of another permutation. A {\it block} in a permutation is a subsequence on some indices $[i,j]=\{i,i+1,\dots, j\}$ whose values $\sigma(i),\dots,\sigma(j)$ in union form an interval $[a,b]$ for some $b-a=j-i$. Given a permutation $\tau\in {\mathfrak S}_k$ and a collection of permutations $\sigma_1,\sigma_2,\dots,\sigma_k$ the {\it inflation} of $\tau$ by the collection $\sigma_1,\sigma_2,\dots,\sigma_k$ is the permutation we get from $\tau$ by replacing the point $(i,\tau(i))$ with a block order-isomorphic to $\sigma_i$. Note that this definition also works if we have an empty block $\sigma_j=\epsilon\in {\mathfrak S}_0$ for some $j$. Often we will have blocks order-isomorphic to a strictly increasing or decreasing sequence, so for convenience we define ${\mathfrak i}_k=12\dots k$ and ${\mathfrak d}_k=k\dots 21$. For example $3124[{\mathfrak i}_3,{\mathfrak d}_2,213,1] = 678214359$, which is displayed in Figure~\ref{fig:boxexample}.
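Inflation is straightforward to implement. The following sketch (our own illustrative code) assigns a value interval to each point of $\tau$ and reproduces the example $3124[{\mathfrak i}_3,{\mathfrak d}_2,213,1] = 678214359$:

```python
def inflate(tau, blocks):
    """Inflation tau[sigma_1, ..., sigma_k] of the permutation tau."""
    k = len(tau)
    # size of the block sitting at each *value* of tau, from smallest value up
    sizes_by_value = [len(blocks[tau.index(v)]) for v in range(1, k + 1)]
    start = [1]
    for s in sizes_by_value[:-1]:
        start.append(start[-1] + s)     # first value used by each interval
    result = []
    for i, block in enumerate(blocks):
        base = start[tau[i] - 1] - 1
        result.extend(base + x for x in block)
    return result

i3, d2 = [1, 2, 3], [2, 1]
example = inflate([3, 1, 2, 4], [i3, d2, [2, 1, 3], [1]])   # the example above
```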
One can describe many pattern avoidance classes using inflations. The following proposition contains several well-known descriptions of permutations that avoid a certain pattern as inflations.
\begin{prop}We have the following descriptions of pattern avoiding permutations.
\begin{enumerate}[(i)]
\item If $\sigma$ avoids 132 then $\sigma=231[\sigma_1,1,\sigma_2]$ for some $\sigma_1,\sigma_2$ that avoid 132.
\item If $\sigma$ avoids 213 then $\sigma=312[\sigma_1,1,\sigma_2]$ for some $\sigma_1,\sigma_2$ that avoid 213.
\item If $\sigma$ avoids 231 then $\sigma=132[\sigma_1,1,\sigma_2]$ for some $\sigma_1,\sigma_2$ that avoid 231.
\item If $\sigma$ avoids 312 then $\sigma=213[\sigma_1,1,\sigma_2]$ for some $\sigma_1,\sigma_2$ that avoid 312. \hfill \qed
\end{enumerate}
\label{prop:decomp}
\end{prop}
We illustrate this for the pattern 132 in Figure~\ref{fig:132}. Typically knowing how to write $\sigma$ as an inflation makes calculating $\maj$ or $\inv$ easier. For a set $A=\{a_1,a_2,\dots, a_i\}$ we write $A+j=\{a_1+j,a_2+j,\dots,a_i+j\}$ to be the set of all the elements in $A$ increased by $j$ and let $|\sigma|$ be the length of $\sigma$. We can calculate the descent set, $\Des(\sigma)$, by considering the descents in the blocks and the descents between the blocks of the inflation.
For example say $\sigma$ avoids 132 and is written as $\sigma=231[\sigma_1,1,\sigma_2]$ for some $\sigma_1,\sigma_2$ that avoid 132 with $|\sigma_2|\neq 0$. We then have $\Des(\sigma)=\Des(\sigma_1)\cup\{|\sigma_1|+1\}\cup (\Des(\sigma_2)+|\sigma_1|+1)$. Further, we can calculate $\maj$ by adding up the descents between the blocks and the descents in each block of the inflation, noting that the $\maj$ contributed by the descents in $(\Des(\sigma_2)+|\sigma_1|+1)$ is $\maj(\sigma_2)+(|\sigma_1|+1)\des(\sigma_2)$, so $\maj(\sigma)=\maj(\sigma_1)+|\sigma_1|+1+\maj(\sigma_2)+(|\sigma_1|+1)\des(\sigma_2)$.
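As a sanity check of this bookkeeping, the short sketch below (illustrative Python; names are ours) builds $\sigma=231[\sigma_1,1,\sigma_2]$ directly from its value structure and compares both sides of the $\maj$ formula on a concrete pair of 132-avoiding blocks:

```python
def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def des(w):
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def inflate_231(s1, s2):
    """sigma = 231[s1, 1, s2]: s1 gets the middle values, then the
    maximum value, then s2 on the lowest values."""
    n1, n2 = len(s1), len(s2)
    return [x + n2 for x in s1] + [n1 + n2 + 1] + list(s2)

s1, s2 = [2, 1, 3], [3, 1, 2]          # both avoid 132
sigma = inflate_231(s1, s2)            # = [5, 4, 6, 7, 3, 1, 2]
lhs = maj(sigma)
rhs = maj(s1) + len(s1) + 1 + maj(s2) + (len(s1) + 1) * des(s2)
assert lhs == rhs
```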
\section{Number of inversions and length three patterns}
\label{inv}
We find that the $I{\mathcal I}$-Wilf equivalence classes for length three patterns are trivially determined. As a result, most of this section will be spent discussing the decomposition of involutions that avoid a single pattern of length three with the goal of describing the generating functions for $\inv$. Some of these generating functions have been studied by others, including Guibert and Mansour~\cite{GM02} (Theorem 4.2) who studied involutions avoiding 132. Their generating function counts the number of occurrences of the pattern ${\mathfrak i}_k$, which equals $\binom{n}{2}$ minus the number of inversions when $k=2$. Dokos et al.~\cite{DDJSS12} studied permutations avoiding length three patterns and their generating functions, and since ${\mathfrak S}_n(231,312)={\mathcal I}_n(231)$, by Simion and Schmidt~\cite{ss:rp}, their work determines the generating function for involutions avoiding the pattern $231$. The goal of this section is to give a complete description of the generating functions for all length three patterns.
In this section we show connections to some $q$-analogues of the Catalan numbers and standard Young Tableau. We prove a formula that quickly computes $\inv$ for $\iota\in{\mathcal I}_n(321)$ using the two-cycles in Lemma~\ref{lemma:321inv} and for $\iota\in{\mathcal I}_{2k+1}(123)$ we discover that $\inv(\iota)$ is even if and only if $k$ is even, which is stated in Corollary~\ref{cor:123inv}.
We describe some generating functions, not directly, but in steps by first considering the subset of fixed-point-free involutions. A permutation $\sigma$ has a {\it fixed point} if there exists a $j$ such that $\sigma(j)=j$ and we call $\sigma$ {\it fixed-point-free} if $\sigma$ does not have any fixed points. To notate the subsets we will write $F{\mathcal I}_n=\{\iota\in {\mathcal I}_n:\iota(i)\neq i, \forall i\}$, $F{\mathcal I}_n(\pi) = {\mathcal I}_n(\pi)\cap F{\mathcal I}_n$ and let the associated generating function be
$$IF{\mathcal I}_n(\pi)=IF{\mathcal I}_n(\pi;q)=\sum_{\iota\in F{\mathcal I}_n(\pi)} q^{\inv (\iota)}.$$
Since there are no fixed-point-free involutions of odd length we will let $IF{\mathcal I}_n(\pi)=0$ when $n$ is odd.
We also find in some cases it is easier to determine the generating function for the number of coinversions rather than the number of inversions. A {\it coinversion} of $\sigma \in {\mathfrak S}_n$ is a pair of indices $(i,j)$ such that $i<j$ and $\sigma(i)<\sigma(j)$. Let $\Coinv(\sigma)$ be the {\it set of coinversions} and the {\it number of coinversions} be $\coinv(\sigma)=|\Coinv(\sigma)|$. The associated generating functions are
$$\overline{ I{\mathcal I}}_n(\pi)=\overline{I{\mathcal I}}_n(\pi;q)=\sum_{\iota \in {\mathcal I}_n(\pi)} q^{\coinv(\iota)}$$
and
$$\overline{ IF{\mathcal I}}_n(\pi)=\overline{IF{\mathcal I}}_n(\pi;q)=\sum_{\iota \in F{\mathcal I}_n(\pi)} q^{\coinv(\iota)}.$$
The two statistics $\inv$ and $\coinv$ are closely related, and their generating functions determine each other.
\begin{lemma}We have the following equalities involving inversions and coinversions.
\begin{enumerate}[(i)]
\item For $\sigma \in {\mathfrak S}_n$ we have $\inv(\sigma)=\binom{n}{2}-\coinv(\sigma)$.
\item $\displaystyle q^{\binom{n}{2}}I{\mathcal I}_n(\pi;q^{-1})=\overline{ I{\mathcal I}}_n(\pi;q).$
\item $\displaystyle q^{\binom{n}{2}}IF{\mathcal I}_n(\pi;q^{-1})=\overline{ IF{\mathcal I}}_n(\pi;q).$
\end{enumerate}
\label{lemma:invproperties}
\end{lemma}
\begin{proof}
For a length $n$ permutation the total number of pairs of indices $(i,j)$ such that $i<j$ is $\binom{n}{2}$. Since all such pairs are either an inversion or a coinversion we have the equality in (i). The equations in (ii) and (iii) follow.
\end{proof}
For some patterns it will be simpler to describe properties using ascent sets rather than descent sets. The {\it ascent set} of $\sigma \in {\mathfrak S}_n$ is $\Asc(\sigma)=\{i:\sigma(i)<\sigma(i+1)\}$ with the {\it number of ascents} equal to $\asc(\sigma)=|\Asc(\sigma)|$.
\subsection{The $I{\mathcal I}$-Wilf equivalence classes for length three patterns}
The $I{\mathcal I}$-Wilf equivalence classes are all determined trivially.
\begin{prop} There are only two non-singular $I{\mathcal I}$-Wilf equivalence classes for length three patterns, namely $\{132,213\}$ and $\{231,312\}$.
\label{prop:IIWilfequiv}
\end{prop}
\begin{proof} An involution $\iota$ avoids 132 if and only if $r_{-1}(\iota)$ avoids 213 since $r_{-1}(132)=213$.
By Lemma~\ref{lemma:operations and involutions} the operation $r_{-1}$ is a bijection from ${\mathcal I}_n$ to itself so restricts to a bijection between ${\mathcal I}_n(132)$ and ${\mathcal I}_n(213)$. Finally since this operation also preserves $\inv$ by Lemma~\ref{lemma:inv preserved} we must have that $213$ and $132$ are $I{\mathcal I}$-Wilf equivalent.
By a similar argument using the map $r_1$ we can show that $231$ and $312$ are $I{\mathcal I}$-Wilf equivalent. Lastly, we can see that we have four distinct classes just by looking at the case of $n=3$.
\end{proof}
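One can confirm Proposition~\ref{prop:IIWilfequiv}, and test the conjecture below for small $n$, by brute force. A Python sketch (the helper names are ours, not a standard library):

```python
from itertools import combinations, permutations
from collections import Counter

def avoids(p, pat):
    # True when p contains no subsequence order-isomorphic to pat
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def involutions(n):
    # all involutions in S_n, as tuples in one-line notation
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def inv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def II(pat, n):
    # coefficients of II_n(pat; q) as {power of q: coefficient}
    return Counter(inv(p) for p in involutions(n) if avoids(p, pat))
```

For $n\leq 6$ this confirms that $132$ and $213$, as well as $231$ and $312$, are $I{\mathcal I}$-Wilf equivalent, while $123$ and $321$ give different generating functions already at $n=3$.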
We conjecture that all $I{\mathcal I}$-Wilf equivalence classes are trivially determined.
\begin{conj}
The $I{\mathcal I}$-Wilf equivalence class for any pattern $\pi$ is $[\pi]_{I{\mathcal I}}=\{\pi, r_1(\pi), r_{-1}(\pi), R_{180}(\pi)\}$.
\end{conj}
The above conjecture is confirmed for patterns up to length 5. However, for permutations the $I$-Wilf equivalence classes are not always determined trivially. Chan~\cite{C15} (Proposition 5) proved that if $\pi_1$ and $\pi_2$ are shape and $I$-Wilf equivalent, a stronger condition than $I$-Wilf equivalence, then so are $12[\pi_1,\gamma]$ and $12[\pi_2,\gamma]$ for any permutation $\gamma$. This particularly applies to the pair $12[231,231]$ and $12[312,231]$ (Chan~\cite{C15} Corollary 6), which are not in the same symmetry class on the square. We note that this particular pair is not $I{\mathcal I}$-Wilf equivalent because the generating functions are not equal when $n=8$.
\subsection{The patterns $231$ and $312$}
It turns out that any permutation that avoids both $231$ and $312$ is actually an involution, and these involutions are precisely those that avoid $231$, which was first determined by Simion and Schmidt~\cite{ss:rp}. They also determined the decomposition of involutions in ${\mathcal I}_n(231)$, so the generating function $I{\mathcal I}_n(231)$ has been previously determined by
Dokos et al.~\cite{DDJSS12}. This section includes this result for completeness.
\begin{prop}[Simion and Schmidt~\cite{ss:rp} Proposition 6]
All involutions $\iota \in {\mathcal I}_n(231)$ have decomposition ${\mathfrak i}_k[{\mathfrak d}_{j_1},{\mathfrak d}_{j_2},\dots ,{\mathfrak d}_{j_k}]$ for some $k$ with $j_i\geq1$ for all $i\in[k]$.\hfill \qed
\label{prop:SSdecompI(231)}
\end{prop}
Using this we can show any permutation that avoids both $231$ and $312$ is actually an involution in ${\mathcal I}_n(231)$.
\begin{prop}[Simion and Schmidt~\cite{ss:rp} Propositions 6 and 8] For $n\geq 1$ we have ${\mathfrak S}_n(231,312)={\mathcal I}_n(231)$ and further ${\mathcal I}_n(231)={\mathcal I}_n(312)$.
\label{prop:S(213,312)=I(231)}
\end{prop}
\begin{proof}
Obviously ${\mathcal I}_n(231)\subseteq{\mathfrak S}_n(231,312)$. Since every $\sigma \in {\mathfrak S}_n(231,312)$ avoids $231$, it remains to show that $\sigma$ is an involution. We will do this by showing by induction that $\sigma= {\mathfrak i}_k[{\mathfrak d}_{j_1},{\mathfrak d}_{j_2},\dots ,{\mathfrak d}_{j_k}]$ for some $k$ with $j_i\geq1$ for all $i\in[k]$. Since any permutation of this form is an involution, we will be done at that point.
This is easy to see for $n=1$, so we assume $n>1$ and that all permutations in ${\mathfrak S}_m(231,312)$ have this form for $m<n$. Since $\sigma$ avoids $231$ we can write $\sigma= 132[\sigma_1,1,\sigma_2]$ as we noted in Proposition~\ref{prop:decomp} for some $\sigma_1\in {\mathfrak S}_{n_1}(231,312)$ and $\sigma_2\in {\mathfrak S}_{n_2}(231,312)$ with $n_1<n$. Since $\sigma_2$ also avoids $312$ we know $\sigma_2$ must have no ascents so is equal to ${\mathfrak d}_{n_2}$. Hence, $\sigma= 12[\sigma_1,{\mathfrak d}_{n_2+1}]$. By induction $\sigma_1$ has the stated decomposition, so we can conclude that $\sigma$ does as well.
The map $r_1$ and the decomposition in Proposition~\ref{prop:SSdecompI(231)} imply ${\mathcal I}_n(231)={\mathcal I}_n(312)$.
\end{proof}
Using the set equality shown in Proposition~\ref{prop:S(213,312)=I(231)} we have $I{\mathcal I}_n(231)$, which was originally shown by Dokos et al., who determined $I_n(132,213)=\overline{I{\mathcal I}}_n(231)$ using~\cite{DDJSS12} (Lemma 2.1).
\begin{prop} [Dokos et al.~\cite{DDJSS12} Proposition 4.3] With $I{\mathcal I}_0(231)=1$ we have for $n\geq 1$
that
$$\displaystyle I{\mathcal I}_n(231)=\sum_{j=1}^n q^{\binom{j}{2}}I{\mathcal I}_{n-j}(231).$$
\end{prop}
\begin{proof}
We define $I{\mathcal I}_0(231)=1$, so let $n>0$. For $\iota \in {\mathcal I}_n(231)$ we can use Simion and Schmidt's decomposition in Proposition~\ref{prop:SSdecompI(231)} to write $\iota = 12[{\mathfrak d}_j,\tau]$ for some $j\geq 1$ and $\tau\in {\mathcal I}_{n-j}(231)$. Since there are $\binom{j}{2}$ inversions in ${\mathfrak d}_j$ and no inversions between ${\mathfrak d}_j$ and $\tau$ we find $\inv(\iota)=\binom{j}{2}+\inv(\tau)$, which proves the equation.
\end{proof}
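The recurrence can be checked against a direct enumeration. The sketch below (helper names are ours, for illustration only) compares the two for small $n$.

```python
from itertools import combinations, permutations
from collections import Counter
from math import comb

def avoids(p, pat):
    # True when p contains no subsequence order-isomorphic to pat
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def inv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def II231(n):
    # brute-force coefficients of II_n(231; q)
    return Counter(inv(p) for p in involutions(n) if avoids(p, (2, 3, 1)))

def II231_rec(n):
    # the recurrence II_n(231) = sum_{j=1}^n q^{binom(j,2)} II_{n-j}(231)
    if n == 0:
        return Counter({0: 1})
    out = Counter()
    for j in range(1, n + 1):
        for e, c in II231_rec(n - j).items():
            out[e + comb(j, 2)] += c
    return out
```

For example, $I{\mathcal I}_3(231)=1+2q+q^3$, matching the four involutions $123$, $132$, $213$ and $321$ that avoid $231$.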
\subsection{The patterns $132$ and $213$}
\label{subsec132inv}
Guibert and Mansour in~\cite{GM02} study involutions avoiding 132 and describe a decomposition and a generating function that counts the number of occurrences of the patterns ${\mathfrak i}_k$. When $k = 2$ this counts the number of coinversions. Specifically, their Theorem 4.2 in~\cite{GM02} produces the generating function for involutions $C_I(x,q)=\sum_{\iota \text{ avoids }132}x^{|\iota|}q^{\coinv(\iota)}$ in terms of the generating function for permutations $C_S(x,q)=\sum_{\sigma \text{ avoids }132}x^{|\sigma|}q^{\coinv(\sigma)}$, namely
$$C_I(x,q)=\frac{1+xC_I(xq,q)}{1-x^2C_S(x^2q^2,q^2)}\text{ where } C_S(x,q)=\frac{1}{1-xC_S(xq,q)}.$$
We begin this section by recounting a decomposition of involutions in ${\mathcal I}_n(132)$ that can be found in~\cite{GM02} and~\cite{ss:rp} and then give a recursive definition of the generating function $\overline{I{\mathcal I}}_n(132)$. We also describe the generating function for fixed-point-free involutions avoiding 132 as this will be very useful in determining $\overline{I{\mathcal I}}_n(132)$.
\begin{lemma} [Guibert and Mansour~\cite{GM02} Proposition 3.17] The set ${\mathcal I}_n(132)$ is the disjoint union of
\begin{enumerate}[(i)]
\item $\{12[\alpha,1]: \alpha \in {\mathcal I}_{n-1}(132)\}$ and
\item $\{45312[\alpha,1, \beta, r_1(\alpha),1]: \alpha \in {\mathfrak S}_{k-1}(132), \beta \in {\mathcal I}_{n-2k}(132) \text{ and } 1\leq k\leq \floor{\frac{n}{2}}\}$.
\item Also, $F{\mathcal I}_{2m}(132)=\{21[\alpha,r_1(\alpha)]:\alpha \in {\mathfrak S}_{m}(132)\}$.
\end{enumerate}
\label{lemma:132decomp}
\end{lemma}
\begin{proof}
First we will show that all $\iota \in {\mathcal I}_n(132)$ have a decomposition as in (i) or (ii). Since $\iota \in {\mathfrak S}_n(132)$, it is known that $\iota = 231[\sigma_1,1,\sigma_2]$ for some permutations $\sigma_1$ and $\sigma_2$ that avoid $132$, as on the left in Figure~\ref{fig:132}. If $|\sigma_2|=0$ then $\iota$ is part of the set in (i).
Otherwise, $|\sigma_2|\neq 0$. First we argue that $|\sigma_1|< |\sigma_2|$. We know that $\iota(|\sigma_1|+1)=n$ and $\sigma_2$ occurs in $\iota$ using the values in $[1,|\sigma_2|]$. Since $\iota$ is an involution $\iota(n)=|\sigma_1|+1\in [1,|\sigma_2|]$, which shows $|\sigma_1|< |\sigma_2|$.
Since involutions are symmetric about the main diagonal and $|\sigma_1|< |\sigma_2|$ we have $\sigma_2=312[\beta,r_{1}(\sigma_1),1]$ for some $\beta \in {\mathcal I}_{n-2k}(132)$ if $|\sigma_1|=k-1$. This assures that $r_1(\iota)=\iota$, which proves that $\iota$ is an element of the set in (ii).
Next we will show that an involution with decomposition as in (i) or (ii) avoids $132$. Suppose we have an involution as stated in (i), $\iota =12[\alpha, 1]$ for $\alpha \in {\mathcal I}_{n-1}(132)$, and a subsequence $abc$ that is a pattern 132. The subsequence $abc$ cannot lie entirely in $\alpha$ because $\alpha$ avoids $132$. We must have that $n$ is part of the pattern, but $n$ can only play the role of 3 in the pattern, which is not possible because $n$ occurs at the rightmost index. Hence, $\iota$ avoids $132$. Now consider an involution as described in (ii), $\iota =45312[\alpha,1,\beta,r_{1}(\alpha),1]$ for some $ \alpha\in {\mathfrak S}_{k-1}(132)$ and $ \beta\in {\mathcal I}_{n-2k}(132)$ with $1\leq k\leq\floor{n/2}$. Let $abc$ be a subsequence of $\iota$. We will show $abc$ is not the pattern 132 by considering how $abc$ occurs in the five blocks. If all three letters occur in the same block then $abc$ is not the pattern $132$ since every block avoids $132$. If they occur in three different blocks then the pattern is still not $132$ since $45312$ avoids $132$. If $ab$ is in one block that does not contain $c$ then due to the decomposition $c>\max\{a,b\}$ or $c<\min\{a,b\}$, which implies $abc$ is not the pattern 132. Say that $bc$ is in one block that does not contain $a$; then due to block sizes $bc$ is in either the third or fourth block, which implies $a>\max\{b,c\}$ and that $abc$ is not the pattern $132$. Hence all permutations described in (i) and (ii) avoid $132$.
Lastly, we will show (iii), the decomposition for fixed-point-free involutions. If $\iota\in F{\mathcal I}_0(132)$, then $\iota=21[\epsilon,\epsilon]$, so assume $m>0$. In this case $\iota \in F{\mathcal I}_{2m}(132)$, which implies that $\iota$ is not part of the set in (i) because $\iota$ is fixed-point-free. Since $\iota$ falls under case (ii) we have that $\iota = 45312[\alpha,1,\beta,r_1(\alpha),1]$ as stated in this lemma. The involution $\beta$ must also avoid $132$ and be fixed-point-free, so by induction $\beta = 21[\gamma,r_1(\gamma)]$ for some $\gamma\in {\mathfrak S}_{m-k}(132)$.
Hence, $\iota=21[\tau,r_1(\tau)]$ for some $\tau=231[\alpha,1,\gamma] \in {\mathfrak S}_m(132)$.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .4]
\draw (8,4.5) rectangle (3.5,0);
\draw (2.5,7.5) rectangle (0,5);
\filldraw [black]
(3,8) circle (5pt);
\draw (0,0) -- (8,0) -- (8,8) -- (0,8) -- (0,0);
\draw (5.8,2) node {$\sigma_2$};
\draw (1.3,6.3) node {$\sigma_1$};
\end{tikzpicture}
\hspace{30mm}
\begin{tikzpicture} [scale = .4]
\draw (0,0) rectangle (7,7);
\filldraw [black]
(8,8) circle (5pt);
\draw (0,0) -- (8,0) -- (8,8) -- (0,8) -- (0,0);
\draw (3.5,3.5) node {$\alpha$};
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture} [scale = .4]
\draw (7.5,2.5) rectangle (5,0);
\draw (2.5,7.5) rectangle (0,5);
\draw (4.5,4.5) rectangle (3.5,3.5);
\filldraw [black]
(3,8) circle (5pt)
(8,3) circle (5pt);
\draw (0,0) -- (8,0) -- (8,8) -- (0,8) -- (0,0);
\draw (4,3.95) node {$\beta$};
\draw (6.3,1.3) node {$r_1(\alpha)$};
\draw (1.3,6.3) node {$\alpha$};
\end{tikzpicture}
\end{center}
\caption{On the left, a general $\sigma \in {\mathfrak S}_n(132)$. In the middle and on the right, the two possible diagrams for $\iota \in {\mathcal I}_n(132)$.}
\label{fig:132}
\end{figure}
The structure for involutions in ${\mathcal I}_n(213)$ can be similarly determined.
\begin{lemma} The set ${\mathcal I}_n(213)$ is the disjoint union of
\begin{enumerate}[(i)]
\item $\{ 12[1,\alpha]: \alpha \in {\mathcal I}_{n-1}(213)\}$ and
\item $\{45312[1,\alpha,\beta,1,r_{1}(\alpha)]: \alpha\in {\mathfrak S}_{k-1}(213), \beta\in {\mathcal I}_{n-2k}(213),1\leq k\leq\floor{n/2}\}$.
\item Also, $ F{\mathcal I}_{2m}(213)=\{21[\alpha,r_1(\alpha)]: \alpha \in {\mathfrak S}_{m}(213)\}$.
\end{enumerate}
\label{lemma:213decomposition}
\end{lemma}
\begin{proof}
All these results follow from Lemma~\ref{lemma:132decomp} by applying the map $r_{-1}$, since $r_{-1}(132)=213$.
\end{proof}
We find that these generating functions are related to the $q$-Catalan numbers, $\tilde{C}_n(q)$, defined by Carlitz and Riordan~\cite{CR64}. In our calculations we will particularly use $C_n(q)=q^{\binom{n}{2}}\tilde{C}_n(q^{-1})$, which is recursively defined by
$C_0(q)=1$ and
$$C_n(q)=\sum_{k=0}^{n-1}q^kC_k(q)C_{n-k-1}(q).$$
We use the result by Dokos et al.~\cite{DDJSS12} (Theorem 3.1) that the generating function for ${\mathfrak S}_n(132)$ with respect to coinversions is $C_n(q)$.
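For concreteness, $C_n(q)$ and the coinversion generating function over ${\mathfrak S}_n(132)$ can be computed and compared directly; a sketch (function names are ours, for illustration only):

```python
from itertools import combinations, permutations
from collections import Counter

def catalan_q(n):
    # coefficients of C_n(q), via C_n(q) = sum_k q^k C_k(q) C_{n-k-1}(q)
    if n == 0:
        return Counter({0: 1})
    out = Counter()
    for k in range(n):
        for e1, c1 in catalan_q(k).items():
            for e2, c2 in catalan_q(n - k - 1).items():
                out[k + e1 + e2] += c1 * c2
    return out

def avoids(p, pat):
    # True when p contains no subsequence order-isomorphic to pat
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def coinv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] < p[j])

def coinv_gf_S132(n):
    # sum of q^{coinv(sigma)} over sigma in S_n(132), as a coefficient Counter
    return Counter(coinv(p) for p in permutations(range(1, n + 1))
                   if avoids(p, (1, 3, 2)))
```

Setting $q=1$ recovers the ordinary Catalan numbers, e.g. $C_5(1)=42$.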
\begin{thm} For $m\geq 0$ we have $\overline{IF{\mathcal I}}_{2m}(132)=\overline{IF{\mathcal I}}_{2m}(213)$ and
$$\displaystyle \overline{IF{\mathcal I}}_{2m}(132)=C_m(q^2).$$
\label{thm:barFI_n(132)}
\end{thm}
\begin{proof}
We can write $\iota\in {\mathcal I}_{2m}(132)$ by Lemma~\ref{lemma:132decomp} as $\iota = 21[\alpha,r_{1}(\alpha)]$ for $\alpha\in {\mathfrak S}_m(132)$. By Lemma~\ref{lemma:inv preserved} $r_{1}$ preserves $\inv$ and so preserves $\coinv$ as well, which tells us $\coinv(\iota)=2\coinv(\alpha)$.
As a result $\overline{IF{\mathcal I}}_{2m}(132)$ equals the generating function for ${\mathfrak S}_m(132)$ using $\coinv$ with the substitution of $q^2$ for $q$. Dokos et al.~\cite{DDJSS12} found that this generating function for ${\mathfrak S}_m(132)$ using $\coinv$ is $C_m(q)$, which proves the result.
Since the map $r_1$ is a bijection from ${\mathcal I}_n(132)$ to ${\mathcal I}_n(213)$ that preserves $\inv$, $\coinv$ and the number of fixed points we must have $\overline{IF{\mathcal I}}_{2m}(132)=\overline{IF{\mathcal I}}_{2m}(213)$.
\end{proof}
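Theorem~\ref{thm:barFI_n(132)} can likewise be confirmed for small $m$ by enumerating fixed-point-free involutions directly (an illustrative sketch with our own helper names):

```python
from itertools import combinations, permutations
from collections import Counter

def catalan_q(n):
    # coefficients of C_n(q), via C_n(q) = sum_k q^k C_k(q) C_{n-k-1}(q)
    if n == 0:
        return Counter({0: 1})
    out = Counter()
    for k in range(n):
        for e1, c1 in catalan_q(k).items():
            for e2, c2 in catalan_q(n - k - 1).items():
                out[k + e1 + e2] += c1 * c2
    return out

def avoids(p, pat):
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def coinv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] < p[j])

def fpf_involutions(n):
    # fixed-point-free involutions in S_n
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 and p[i] != i + 1 for i in range(n)):
            yield p

def barIFI(pat, n):
    # coefficients of the coinversion generating function over FI_n(pat)
    return Counter(coinv(p) for p in fpf_involutions(n) if avoids(p, pat))
```

For $m=2$ the two fixed-point-free involutions $3412$ and $4321$ avoiding $132$ give $1+q^2=C_2(q^2)$.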
We now have what we need to describe $I{\mathcal I}_n(132)$. Recall from Theorem~\ref{thm:SimionSchmidt} that the cardinality of ${\mathcal I}_n(132)$ is the central binomial coefficient, so the generating function $\overline{I{\mathcal I}}_n(132)$ will be a $q$-analogue of the central binomial coefficient. This is not the standard $q$-analogue, but one that will parallel the following identity. A corollary of Gould and Kaucky's work in~\cite{GK66} is
$$a_n=a_{n-1}+\sum_{k=1}^{\floor{n/2}}C_{k-1}a_{n-2k}$$
where $a_n=\binom{n}{\ceil{n/2}}.$ This identity appears in Simion and Schmidt's paper~\cite{ss:rp} (equation 5) with a discussion about integer lattice paths.
\begin{prop} With $\overline{I{\mathcal I}}_0(132)=1$ we have for $n\geq 1$ that
$$ \overline{I{\mathcal I}}_n(132)=q^{n-1} \overline{I{\mathcal I}}_{n-1}(132)+\sum_{k=1}^{\floor{n/2}} q^{2(k-1)}C_{k-1}(q^2)\overline{I{\mathcal I}}_{n-2k}(132).$$
\label{II132}
\end{prop}
\begin{proof}
By Lemma~\ref{lemma:132decomp} if $\iota(n)=n$ then $\iota = 12[\alpha,1]$ for some $\alpha\in {\mathcal I}_{n-1}(132)$, which implies $\coinv(\iota)=\coinv(\alpha)+n-1$. In any other case $\iota(n)=k\neq n$ and
$\iota = 45312[\alpha,1,\beta,r_1(\alpha),1]$ for some $\alpha\in {\mathfrak S}_{k-1}(132)$ and $\beta \in {\mathcal I}_{n-2k}(132)$.
It follows from the decomposition of $\iota$ and from Lemma~\ref{lemma:inv preserved} that $r_1$ preserves $\coinv$ and so $\coinv(\iota)=2\coinv(\alpha)+\coinv(\beta)+2(k-1)$. Putting this all together and using Theorem~\ref{thm:barFI_n(132)} we get the above equality.
\end{proof}
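A brute-force check of Proposition~\ref{II132} for small $n$ (illustrative sketch; all names are ours):

```python
from itertools import combinations, permutations
from collections import Counter

def catalan_q(n):
    # coefficients of C_n(q), via C_n(q) = sum_k q^k C_k(q) C_{n-k-1}(q)
    if n == 0:
        return Counter({0: 1})
    out = Counter()
    for k in range(n):
        for e1, c1 in catalan_q(k).items():
            for e2, c2 in catalan_q(n - k - 1).items():
                out[k + e1 + e2] += c1 * c2
    return out

def avoids(p, pat):
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def coinv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] < p[j])

def involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def barII132(n):
    # brute-force coefficients of bar-II_n(132; q)
    return Counter(coinv(p) for p in involutions(n) if avoids(p, (1, 3, 2)))

def barII132_rec(n):
    # the recurrence of the proposition above
    if n == 0:
        return Counter({0: 1})
    out = Counter()
    for e, c in barII132_rec(n - 1).items():
        out[e + n - 1] += c                      # the 12[alpha, 1] case
    for k in range(1, n // 2 + 1):               # the 45312[...] cases
        for e1, c1 in catalan_q(k - 1).items():
            for e2, c2 in barII132_rec(n - 2 * k).items():
                out[2 * (k - 1) + 2 * e1 + e2] += c1 * c2
    return out
```

For example, $\overline{I{\mathcal I}}_4(132)=1+q+q^2+q^3+q^5+q^6$, a $q$-analogue of $\binom{4}{2}=6$.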
\subsection{The pattern $321$}
\label{sec:inv321}
In this section we describe involutions avoiding $321$, focusing in particular on the structure of the two-cycles and the associated standard Young tableaux, a concept we briefly introduce below. When listing the two-cycles of an involution, in this section and future sections we use the convention of listing all two-cycles $(s_1,t_1), (s_2,t_2),..., (s_m,t_m)$ so that each cycle is written with its minimum element on the left, $s_i<t_i$, and the cycles themselves are ordered so that their minimum elements increase, $s_i<s_{i+1}$.
To introduce standard Young tableaux we first define an {\it integer partition} $\lambda=(\lambda_1, \lambda_2, \dots, \lambda_k)$ of $n$, written $\lambda\vdash n$, which is a weakly decreasing sequence of positive integers that sum to $n$. Given an integer partition we can construct its {\it Young diagram}, which has $\lambda_i$ boxes in row $i$, left-justified, with rows labeled from top to bottom. We label the columns from left to right and define the {\it size}, $|\lambda |$, of a Young diagram to be its number of boxes. A standard Young tableau, SYT, of size $n$ is a Young diagram of size $n$ filled with the numbers $1,2,\dots, n$ so that each box contains a unique number, each row strictly increases from left to right and each column strictly increases from top to bottom. We will call the numbers in the boxes {\it fillings} and the underlying integer partition its {\it shape}. See Figure~\ref{fig:RSK_invo} for an example. The descent set, $\Des(P)$, of a SYT $P$ is the collection of all fillings $i$ such that $i+1$ appears in a lower row. There is a well-known bijection from permutations to pairs of SYT of the same shape called the Robinson-Schensted-Knuth, RSK, correspondence. This correspondence has many beautiful properties and we state the ones relevant to this paper in the next proposition. For more information see~\cite{S01} or~\cite{S99}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale = 1]
\begin{scope}[shift={(0,0)}]
\draw (0,0) node { \young({}{}{}{}{}{},{}{}{}{}{}{},{}{}{}{}{},{}{}{}) };
\draw (3.7,0) node { \young(135,24,6) };
\draw (7,0) node { \young(134,256,78) };
\end{scope}
\end{tikzpicture}
\end{center}
\caption{From left to right we have the Young diagram for $\lambda=(5,5,4,2)$, the SYT associated to the involution $\hat\iota =216453$ and the SYT associated to $21785634 =\hat\iota + (4,8)$.}
\label{fig:RSK_invo}
\end{figure}
\begin{prop} Let $\sigma\in {\mathfrak S}_n$ correspond by RSK to the pair of SYT $(P,Q)$ of the same shape and size $n$.
\begin{enumerate}[(i)]
\item The descent sets $\Des(\sigma)=\Des(Q)$ are equal.
\item The pair of SYT for the inverse $\sigma^{-1}$ is $(Q,P)$.
\item If $\sigma$ is an involution then $P=Q$.
\item The length of the longest increasing sequence in $\sigma$ equals the length of the longest row in $P$.
\item The length of the longest decreasing sequence in $\sigma$ equals the length of the longest column in $P$.
\item The number of fixed points in $\sigma$ equals the number of columns in $P$ with odd length. \hfill \qed
\end{enumerate}
\label{SYTfacts}
\end{prop}
By (ii) in the above proposition we know an involution $\iota$ corresponds by RSK to $(P,P)$. This means we can associate an involution by RSK to a single SYT $P$. The next lemma is our first about the structure of two-cycles in an involution avoiding 321 and the fillings of the associated SYT. We note that part (ii) in Lemma~\ref{lemma:321structure} can be seen as a corollary of B\'{o}na and Smith's Proposition 3.1 in~\cite{BS16}. We provide an alternative proof using an algorithm by Beissinger~\cite{B87} that we introduce before the proof. The first part of the next lemma appears in Manara and Perelli Cippo's paper~\cite{MP11} (Proposition 2.3).
\begin{lemma}
Let $\iota \in {\mathcal I}_n(321)$ and suppose its two-cycles are
$(s_1,t_1), (s_2,t_2),..., (s_m,t_m)$.
\begin{enumerate}[(i)]
\item We must have that $t_1 < t_2 < ... < t_m$.
\item If $\iota$ is fixed-point-free then the associated SYT has two rows of equal length, $m$ columns, and the $i$th column is filled with $s_i$ and $t_i$.
\end{enumerate}
\label{lemma:321structure}
\end{lemma}
Among the hundreds of sets that are counted by the Catalan numbers, with recurrence $C_0=1$ and
\begin{equation}C_n=\sum_{k=0}^{n-1}C_kC_{n-k-1},\label{Catalan}\end{equation}
there are even more established bijections. See Stanley's book~\cite{S99} for more information. To prove the lemma above we show one of these bijections between $F{\mathcal I}_n(321)$ and SYT with two rows of equal length using the RSK correspondence. To aid us in this proof and some later proofs we will present a short-cut algorithm by Beissinger~\cite{B87} that quickly determines the SYT of involutions. Say that we have an involution $\hat\iota\in {\mathcal I}_{n-2}$ and we want to add another two-cycle $(i,j)$ with $1\leq i<j \leq n$. What we mean is, in one-line notation, increase all numbers $\geq i$ in $\hat\iota$ by one, further increase all numbers $\geq j$ by another one and have this word written in the indices $[n]\setminus \{i,j\}$. We get our new involution by placing $i$ at index $j$ and $j$ at index $i$. Beissinger notates this new involution $\iota=\hat\iota+(i,j)$. Her algorithm describes the SYT of $\iota$ based on the SYT of $\hat\iota$ in the case $j=n$, so that $\iota=\hat\iota+(i,n)$. If $\hat{T}$ is the SYT for $\hat\iota$ we get the SYT for $\iota$ following three steps.
\begin{enumerate}
\item Increase all fillings $\geq i$ in $\hat{T}$ by one.
\item Insert $i$ as the Robinson-Schensted-Knuth bumping algorithm dictates. Say the bumping algorithm ends on row $r$.
\item Insert $n$ at the end of row $r+1$.
\end{enumerate}
With these three steps we arrive at the SYT for $\iota=\hat\iota+(i,n)$. See Figure~\ref{fig:RSK_invo} for an example.
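The three steps above are easy to implement. The following Python sketch (our own implementation, not Beissinger's code; `beissinger_add` is our name) carries out $\hat\iota+(i,n)$ directly on a tableau given as a list of rows, reproducing the example in Figure~\ref{fig:RSK_invo}.

```python
def beissinger_add(T, i):
    """Return the SYT for iota = hat-iota + (i, n), given the SYT T of hat-iota.

    T is a list of rows; n is two more than the number of cells of T."""
    n = sum(len(row) for row in T) + 2
    # Step 1: increase all fillings >= i by one.
    T = [[x + 1 if x >= i else x for x in row] for row in T]
    # Step 2: insert i by RSK row bumping; r is the row where bumping ends.
    x, r = i, 0
    while True:
        if r == len(T):           # bumping spills into a new row
            T.append([x])
            break
        j = next((k for k, y in enumerate(T[r]) if y > x), None)
        if j is None:             # x is largest in row r: place it at the end
            T[r].append(x)
            break
        T[r][j], x = x, T[r][j]   # bump the leftmost entry larger than x
        r += 1
    # Step 3: insert n at the end of row r + 1.
    if r + 1 == len(T):
        T.append([n])
    else:
        T[r + 1].append(n)
    return T
```

Starting from the SYT of $\hat\iota=216453$ and adding the two-cycle $(4,8)$ produces the SYT of $21785634$ shown in the figure.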
We now give the proof of Lemma~\ref{lemma:321structure}.
\begin{proof}[Proof for Lemma~\ref{lemma:321structure}]
To show (i) let $\iota\in {\mathcal I}_n(321)$ have two-cycles as stated but $t_j>t_{j+1}$ for some $j$. In this case we have the subsequence $t_jt_{j+1}s_{j+1}$, which is an occurrence of the pattern $321$.
Now consider $\iota \in F{\mathcal I}_{2m}(321)$ with two-cycles $(s_1,t_1), (s_2,t_2),..., (s_m,t_m)$. We will show (ii) by inducting on $m$. Part (ii) is vacuously true if $m=0$, so we assume that $m>0$ and part (ii) is true for all fixed-point-free involutions with length less than $2m$. By part (i) of this lemma the sequence of $t_i$'s increases so the Young diagram with two rows of length $m$ and $s_i,t_i$ filling the $i$th column in increasing order is indeed a SYT. Call this SYT $T$. We will show that $T$ is also the SYT corresponding to $\iota$. Consider the involution $\hat\iota$ with two-cycles $(\hat s_1,\hat t_1), (\hat s_2,\hat t_2),..., (\hat s_{m-1},\hat t_{m-1})$ where $\hat s_i=s_i$ or $\hat s_i=s_i-1$ (similarly $\hat t_i=t_i$ or $\hat t_i=t_i-1$) depending on whether $s_i<s_m$ or $s_i>s_m$ respectively. This assures that $\hat\iota$ is a fixed-point-free involution avoiding $321$ and $\iota = \hat\iota+(s_m,t_m)$. By our inductive assumption the SYT $\hat T$ for $\hat\iota$ has two rows of length $m-1$ with $\hat{s}_i$ and $\hat{t}_i$ in the $i$th column.
Since $\iota= \hat\iota+(s_m,t_m)$ and $t_m=2m$ we can use Beissinger's algorithm. First increase all fillings $\geq s_m$ in $\hat T$ by one, which means we have $s_i$ and $t_i$ are in the $i$th column for $i<m$. Step two has us insert $s_m$ via the bumping algorithm. Since the maximum of the first row is $s_{m-1}$, which is less than $s_m$, we place $s_m$ at the end of the first row, i.e. $m$th column. Since the algorithm ends on the first row we place $t_m=2m$ at the end of the second row, i.e. the $m$th column. Hence our final tableau is $T$, which proves part (ii).
\end{proof}
We can calculate $\inv(\iota)$ easily from the two-cycles of an involution $\iota$ that avoids $321$.
\begin{lemma}
Let $\iota \in {\mathcal I}_n(321)$ and suppose its two-cycles are
$(s_1,t_1), (s_2,t_2),..., (s_m,t_m)$.
If $(a,b)\in \Inv(\iota)$ then $a=s_i$ and $b=t_j$ for some $i$ and $j$ and additionally
$$\displaystyle \inv(\iota)=\sum_{i=1}^{m}(t_i-s_i).$$
\label{lemma:321inv}
\end{lemma}
\begin{proof}
Consider $(a,b)\in \Inv(\iota)$ for $\iota\in {\mathcal I}_n(321)$ with two-cycles written as in this lemma. There are three possibilities for both $a$ and $b$ as each can be equal to an $s_i$, a $t_i$, or a fixed point. By Lemma \ref{lemma:321structure} the $s_i$'s and $t_i$'s form increasing sequences in $\iota$ so we know $a$ and $b$ cannot both be $s_i$'s nor can they both be $t_i$'s. If $a$ and $b$ are fixed points then $(a,b)$ is not an inversion. If instead $a$ is a fixed point and $b$ is not then
$ba\iota(b)$ is an occurrence of the pattern $321$. Similarly $b$ cannot be a fixed point, so neither $a$ nor $b$ is a fixed point. Take the case where $a=t_i$ and $b=s_j$ for some $i$ and $j$; then $\iota(t_i)<t_i<s_j<\iota(s_j)$ and $(a,b)$ is not an inversion. The only remaining case possible is $a=s_i$ and $b=t_j$ for some $i$ and $j$.
Next we will show for any $i$ that the number of inversions $(s_i,b)\in \Inv(\iota)$ is $t_i-s_i$, and by the first part this is enough to complete the proof. Note that $s_j\in [s_i,t_i-1]$ implies that index $t_j$ is to the right of index $s_i$ so $(s_i,t_j)$ is an inversion. Similarly, $t_j\in [s_i,t_i-1]$ implies that $(s_i,t_j)$ is an inversion. Because $\iota$ avoids $321$ the interval $[s_i,t_i-1]$ is comprised of only $s_j$'s and $t_j$'s, which implies that the number of inversions $(s_i,b)$ is at least the size of the interval $|[s_i,t_i-1]|=t_i-s_i$.
This counts all possible inversions $(s_i,b)$ because of the following.
From the first part of the proof we know if $(s_i,b)$ is an inversion that $b=t_j$ for some $j$ and since $(s_i,t_j)$ is an inversion we must have $\iota(s_i)=t_i>\iota(t_j)=s_j$.
There are then two cases $s_j\in [s_i,t_i-1]$ or $s_j<s_i$ and this second case implies $t_j\in [s_i,t_i-1]$.
\end{proof}
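Lemma~\ref{lemma:321inv} is easy to test exhaustively for small $n$, and the hypothesis that $\iota$ avoids $321$ is essential, as $\iota=321$ itself shows. A sketch (helper names are ours):

```python
from itertools import combinations, permutations

def avoids(p, pat):
    # True when p contains no subsequence order-isomorphic to pat
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def inv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def two_cycles(p):
    # the two-cycles (s, t) with s < t of an involution in one-line notation
    return [(i + 1, p[i]) for i in range(len(p)) if p[i] > i + 1]
```

For $\iota=321$ the lemma's sum gives $t_1-s_1=2$ while $\inv(321)=3$, so the avoidance hypothesis cannot be dropped.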
Now we have all the tools we need to establish the generating functions for $IF{\mathcal I}_{2m}(321)$ and $I{\mathcal I}_n(321)$. The generating function for the fixed-point-free case parallels the Catalan recurrence in equation~\eqref{Catalan}.
\begin{thm} We have $IF{\mathcal I}_{0}(321)=1$ and for $m>0$
$$ IF{\mathcal I}_{2m}(321)=\sum_{k=1}^{m}q^{2k-1}IF{\mathcal I}_{2k-2}(321)IF{\mathcal I}_{2(m-k)}(321).$$
\label{thm:fpfinv321}
\end{thm}
\begin{proof}
By Lemma~\ref{lemma:321structure} any involution $\iota \in F{\mathcal I}_{2m}(321)$ with two-cycles $(s_1,t_1), (s_2,t_2),..., (s_m,t_m)$ is associated to an SYT, $P$, with two rows of length $m$ and the $i$th column filled with $s_i$ and $t_i$. Let $k$ be the smallest column index such that the collection of fillings in the first $k$ columns is the set $[2k]$. As a result the first $k$ columns form a SYT, $P_1$, which is associated to an involution $\tau_1$ with two-cycles $(s_1,t_1), (s_2,t_2),..., (s_k,t_k)$. The remaining columns, $(k+1)$st through $m$th, are filled with the numbers $[2k+1,2m]$ and if these fillings are decreased by $2k$ then we have a SYT, $P_2$, associated to an involution $\tau_2$ with two-cycles $(s_{k+1}-2k,t_{k+1}-2k),..., (s_m-2k,t_m-2k)$. Note that by Lemma~\ref{lemma:321inv} we have $\inv(\iota)=\inv(\tau_1)+\inv(\tau_2)$.
Because rows and columns in SYT increase the maximum filling in any rectangle of squares is located in the lower-right corner and the minimum is in the upper-left corner.
Recall our choice of $k$. As a result, for any $i<k$ the first $i$ columns are not filled with all of $[2i]$, and since columns and rows increase the filling in column $i$, row $2$ must be larger than $2i$. Let $s\in[2i]$ be the smallest filling in columns $i+1$ through $k$. Since rows and columns increase, $s$ must be in column $i+1$ and row $1$. As a result we can see in $P_1$ that any filling in the second row is larger than the filling of its upper-right neighbor.
Define $P_1'$ to be $P_1$ where we remove the upper-left square filled with $1$, remove the lower-right square filled with $2k$, left align the squares and decrease all fillings by one. Because of what we have noted $P_1'$ has two rows of length $k-1$ filled with $[2(k-1)]$, is increasing along rows and columns and so must be a SYT associated to the involution $\tau_1'$ with two-cycles $(s_2-1,t_1-1),(s_3-1,t_2-1),\dots,(s_k-1,t_{k-1}-1)$. Using Lemma~\ref{lemma:321inv} again we have $\inv(\tau_1)=\inv(\tau_1')+2k-1$. Putting it all together $\inv(\iota)=\inv(\tau_1')+\inv(\tau_2)+2k-1$, which implies our recurrence.
\end{proof}
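The recurrence of Theorem~\ref{thm:fpfinv321} can be compared with a direct enumeration (illustrative sketch; all names are ours):

```python
from itertools import combinations, permutations
from collections import Counter

def avoids(p, pat):
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def inv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def fpf_involutions(n):
    # fixed-point-free involutions in S_n
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 and p[i] != i + 1 for i in range(n)):
            yield p

def IFI321(n):
    # brute-force coefficients of IFI_n(321; q)
    return Counter(inv(p) for p in fpf_involutions(n) if avoids(p, (3, 2, 1)))

def IFI321_rec(m):
    # IFI_{2m}(321) = sum_{k=1}^m q^{2k-1} IFI_{2k-2}(321) IFI_{2(m-k)}(321)
    if m == 0:
        return Counter({0: 1})
    out = Counter()
    for k in range(1, m + 1):
        for e1, c1 in IFI321_rec(k - 1).items():
            for e2, c2 in IFI321_rec(m - k).items():
                out[2 * k - 1 + e1 + e2] += c1 * c2
    return out
```

For $m=2$ the two fixed-point-free involutions $2143$ and $3412$ avoiding $321$ give $q^2+q^4$.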
We now use the fixed-point-free case to describe the generating function $I{\mathcal I}_n(321)$.
\begin{prop} With $I{\mathcal I}_0(321)=I{\mathcal I}_1(321)=1$ we have for $n>1$
$$I{\mathcal I}_n(321)=IF{\mathcal I}_{n}(321)+\sum_{k=0}^{\ceil{n/2}-1} IF{\mathcal I}_{2k}(321) I{\mathcal I}_{n-2k-1}(321).$$
\end{prop}
\begin{proof}
Consider $\iota \in {\mathcal I}_n(321)$. If $\iota$ is fixed-point-free then $n$ is even, which gives us the term $IF{\mathcal I}_{n}(321)$. Otherwise $\iota$ will have a fixed point. Let $f$ be the smallest index of any fixed point. Note that $f$ is preceded by a fixed-point-free involution of even length, which implies $f=2k+1$ is odd. Because $\iota$ avoids $321$, it can be written as the inflation $123[\tau_1,1,\tau_2]$ where $\tau_1\in F{\mathcal I}_{2k}(321)$ and $\tau_2\in {\mathcal I}_{n-2k-1}(321)$. Since $\inv(\iota)=\inv(\tau_1)+\inv(\tau_2)$ we have our result.
\end{proof}
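Again the recurrence can be checked by brute force for small $n$ (illustrative sketch; names are ours, and we take $I{\mathcal I}_0(321)=I{\mathcal I}_1(321)=1$ as base cases):

```python
from itertools import combinations, permutations
from collections import Counter

def avoids(p, pat):
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def inv(p):
    return sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])

def involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def fpf_involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 and p[i] != i + 1 for i in range(n)):
            yield p

def II321(n):
    return Counter(inv(p) for p in involutions(n) if avoids(p, (3, 2, 1)))

def IFI321(n):
    # empty for odd n, since no fixed-point-free involution has odd length
    return Counter(inv(p) for p in fpf_involutions(n) if avoids(p, (3, 2, 1)))

def II321_rec(n):
    # II_n(321) = IFI_n(321) + sum_{k=0}^{ceil(n/2)-1} IFI_{2k}(321) II_{n-2k-1}(321)
    if n <= 1:
        return Counter({0: 1})
    out = IFI321(n)
    for k in range((n + 1) // 2):
        for e1, c1 in IFI321(2 * k).items():
            for e2, c2 in II321_rec(n - 2 * k - 1).items():
                out[e1 + e2] += c1 * c2
    return out
```

For instance, $I{\mathcal I}_2(321)=1+q$ from the involutions $12$ and $21$.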
\subsection{The pattern 123}
In this section we describe $\overline{I{\mathcal I}}_n(123)$. We show that when $n$ is odd $\overline{I{\mathcal I}}_n(123)$ has non-zero terms only at even powers of $q$, which is also true for $\overline{IF{\mathcal I}}_{2m}(123)$.
Our approach in this section is to decompose $\iota$ by writing the involution as an addition of several two-cycles using Beissinger's~\cite{B87} notation defined in Section~\ref{sec:inv321}. We first determine which two-cycles we can add to an involution and preserve the avoidance of 123.
\begin{lemma} We have that
\begin{enumerate}[(i)]
\item if $\hat\iota\in{\mathcal I}_{n-2}(123)$ and $a\leq\min\Asc(\hat\iota)+1$ where $\Asc(\hat\iota)\neq\emptyset$ then $\iota = \hat\iota + (a,n)$ avoids 123, and
\item if $\iota\in{\mathcal I}_n(123)$ and $\iota = \hat\iota + (a,n)$ then either $\Asc(\hat\iota)=\emptyset$ or $a\leq\min\Asc(\hat\iota)+1$.
\end{enumerate}
\label{lem:123andminasc}
\end{lemma}
\begin{proof}Assume $\hat\iota\in{\mathcal I}_{n-2}(123)$, $a\leq\min\Asc(\hat\iota)+1$, $\Asc(\hat\iota)\neq\emptyset$ and $\iota = \hat\iota + (a,n)$. We will show $\iota$ avoids 123 by supposing instead that $\iota$ contains the pattern 123. Because $\hat\iota$ avoids 123 the occurrence of 123 in $\iota$ must involve $n$ or $a$. The occurrence cannot use $n$ because $a\leq\min\Asc(\hat\iota)+1$ implies $\iota$ decreases before $n$. If the occurrence involves $a$ then $a$ plays the role of 3 and there exists a coinversion $(i,j)$ with $i,j<a$ so $\iota(i)<\iota(j)$, which contradicts $\iota$ decreasing before $a$. Hence $\iota$ avoids 123.
Now assume that $\iota\in{\mathcal I}_n(123)$ and $\iota = \hat\iota + (a,n)$. If we have $\Asc(\hat\iota)=\emptyset$ then we are done. In any other case $\Asc(\hat\iota)\neq \emptyset$. We know that $\iota$ avoids 123 so $\iota$ is decreasing before $n$ on the indices in $[a-1]$. This means that $\hat\iota$ must also be decreasing on the indices $[a-1]$, which implies that $a\leq\min\Asc(\hat\iota)+1$.
\end{proof}
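Both directions of Lemma~\ref{lem:123andminasc} (together with the easy observation, ours, that $\iota=\hat\iota+(a,n)$ avoids $123$ whenever $\Asc(\hat\iota)=\emptyset$) can be verified exhaustively for small $n$. The sketch below implements $\hat\iota+(a,n)$ on one-line notation; all names are ours.

```python
from itertools import combinations, permutations

def avoids(p, pat):
    # True when p contains no subsequence order-isomorphic to pat
    return not any(
        all((s[a] < s[b]) == (pat[a] < pat[b])
            for a, b in combinations(range(len(pat)), 2))
        for s in combinations(p, len(pat)))

def involutions(n):
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)):
            yield p

def add_two_cycle(perm, a):
    # Beissinger's perm + (a, n) with j = n = len(perm) + 2, in one-line notation
    n = len(perm) + 2
    vals = iter(x + 1 if x >= a else x for x in perm)
    return tuple(n if pos == a else a if pos == n else next(vals)
                 for pos in range(1, n + 1))

def min_asc(p):
    # smallest ascent position, or None if the ascent set is empty
    return min((i + 1 for i in range(len(p) - 1) if p[i] < p[i + 1]),
               default=None)
```

As a sanity check, `add_two_cycle` applied to $216453$ with the two-cycle $(4,8)$ returns $21785634$, agreeing with Figure~\ref{fig:RSK_invo}.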
Given an involution $\iota$, we say we {\it pull off a two-cycle} when we write $\iota=\hat\iota +(a,|\iota|)$ for $a<|\iota|$. Consider $\iota$ where we pull off many two-cycles,
$$\iota = \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{\ell},m+2\ell).$$
We can continue to do this until $\tau$ has a fixed point at the end or $\tau$ is the empty permutation.
However, to better describe the sequence $(a_1,a_2,\dots,a_{\ell})$ we will pull off two-cycles until $\tau$ ends in a fixed point or until $\tau$ for the first time has an empty ascent set.
First, consider the case where $\tau$ ends with a fixed point. Note that since $\tau$ avoids 123 we must have $\tau$ decreasing before this fixed point, which implies that $\tau = 12[{\mathfrak d}_{m-1},1]$ and $\min\Asc(\tau) =m-1$. Further this means by Lemma~\ref{lem:123andminasc} that $a_1\in [m]$.
This also tells us the minimum ascent of $\tau + (a_1,m+2)$ is $a_1-1$ if $a_1\neq 1$. By Lemma~\ref{lem:123andminasc} the values $a_2$ can take are $1\leq a_2\leq a_1$. This seems to imply that the sequence $(a_1,a_2,\dots,a_{\ell})$ is weakly decreasing, which is not fully the case. If we had $a_2 = 1$ then $\min\Asc( \tau + (a_1,m+2)+(a_2,m+4))$ is $a_1$ so instead of having $a_3\leq a_2$ we actually have $a_3\leq a_1+1$, which brings us to the following definition.
Let $A_{m,\ell}$ denote the set of sequences $(a_1,a_2,\dots,a_{\ell})$ of positive integers such that
\begin{enumerate}
\item $a_1\leq m$,
\item if $a_1,a_2,\dots, a_{i}$ are all $1$ then $a_{i+1}\leq m+i$ and
\item if $a_{i}\neq 1$ and $a_{i+1},a_{i+2},\dots,a_{i+r}$ are all equal to 1 then $a_{i+r+1}\leq a_{i}+r$.
\end{enumerate}
Next consider the case where $\tau$ has an empty ascent set so $\tau = {\mathfrak d}_{m}$. Recall the earlier assumption that we had stopped pulling off two-cycles because $\tau + (a_1,m+2)$ had an ascent but $\tau$ did not. This does not change the requirements for the sequence $(a_1,a_2,\dots a_{\ell})$ except now $a_1\neq 1$ and $a_1\leq m+1$. For this we define
$$B_{m,\ell}=\{(a_1,a_2,\dots,a_{\ell})\in A_{m,\ell}:a_1\neq 1\}.$$
\begin{lemma} For all $\iota \in {\mathcal I}_n(123)$ we can write $\iota$ uniquely as $$\iota = \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{\ell},m+2\ell)$$
\begin{enumerate}[(i)]
\item where $\tau =12[{\mathfrak d}_{m-1},1]$, $m>1$ and $(a_1,a_2,\dots,a_{\ell})\in A_{m,\ell}$ or
\item $\tau ={\mathfrak d}_m$ and $(a_1,a_2,\dots,a_{\ell})\in B_{m+1,\ell}$.
\end{enumerate}
Conversely, any $\iota$ from case (i) or (ii) will avoid 123.
\label{lem:123decomp}
\end{lemma}
\begin{proof}
Certainly if $\iota$ avoids 123 we can write $\iota = \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{\ell},m+2\ell)$ where $\tau$ has a fixed point at $m$, $\tau=12[{\mathfrak d}_{m-1},1]$, or $\ell$ was the smallest integer where $\tau$ has an empty ascent set, $\tau={\mathfrak d}_m$. These two cases intersect when $\tau= 1$, so to make these cases distinct the case when $\tau=1$ will fall under $\tau={\mathfrak d}_1$ and we will only let $\tau$ fall under the case $\tau=12[{\mathfrak d}_{m-1},1]$ when $m>1$. If we do not have one case or the other then we could pull off another two-cycle from $\tau$. This means to show the first part of the lemma we only have to show that the sequence $(a_1,a_2,\dots,a_{\ell})$ is in $A_{m,\ell}$ or $B_{m+1,\ell}$ respectively. For ease, define $\tau_i = \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{i},m+2i)$ so $\iota=\tau_{\ell}$. If $\tau=12[{\mathfrak d}_{m-1},1]$ then $\min\Asc(\tau)=m-1$ so by Lemma~\ref{lem:123andminasc} $a_1\in [m]$. If instead
$\tau={\mathfrak d}_m$ where $\tau_1$ has an ascent then $2\leq a_1\leq m+1$. This proves the condition on $a_1$ in both cases. We show the second and third conditions by inducting on $\ell$ and assume that $(a_1,\dots,a_{\ell-1})$ satisfies the second and third conditions. Say that there exists an $a_{i}\neq 1$ but $a_{i+1},a_{i+2},\dots, a_{\ell-1}$ all equal 1.
This means that $\min\Asc(\tau_i)=a_{i}-1$ and $\min\Asc(\tau_{\ell-1})=a_{i}+\ell-i-2$ where $\ell-i-1$ is the number of terms in $[i+1,\ell-1]$ so $a_{\ell}\leq a_{i}+\ell-i-1$. This proves the third condition so all we have left is to consider the case where $a_i= 1$ for all $i<\ell$. In this case $a_1,\dots,a_{\ell-1}$ are all 1, which can only happen in the case where $\tau=12[{\mathfrak d}_{m-1},1]$. So $\min\Asc(\tau_{\ell-1})=m+\ell-2$, which implies that $a_{\ell}\leq m+\ell-1$ and shows that we satisfy the second condition.
Conversely, assume that $\iota$ is as in case (i) or (ii) from this lemma. We will show that $\iota$ avoids 123 by induction on $\ell$. Certainly by Lemma~\ref{lem:123andminasc} we know $12[{\mathfrak d}_{m-1},1]+(a_1,m+2)$ avoids 123 since $a_1\in [m]$ and $\min\Asc(12[{\mathfrak d}_{m-1},1])=m-1$. Since ${\mathfrak d}_m+(a_1,m+2)$ avoids 123 and gains an ascent for any choice of $a_1\in[2,m+1]$, any $\iota$ from case (i) or (ii) avoids 123 and has an ascent when $\ell=1$. Assume $\ell>1$ and $\tau_{\ell-1}$ avoids 123 and has an ascent. If $a_{\ell-1}\neq 1$ then $\min\Asc(\tau_{\ell-1})=a_{\ell-1}-1$. Whether $(a_1,a_2,\dots,a_{\ell})$ is in $A_{m,\ell}$ or $B_{m+1,\ell}$ we still have $a_{\ell-1}\geq a_{\ell}$, so by Lemma~\ref{lem:123andminasc} we must have that $\tau_{\ell}=\iota$ avoids 123. Next consider the case when $a_{i}\neq 1$ but $a_{i+1},a_{i+2},\dots, a_{\ell-1}$ all equal 1. Whether $(a_1,a_2,\dots,a_{\ell})$ is in $A_{m,\ell}$ or $B_{m+1,\ell}$ we still have $a_{\ell}\leq a_{i}+\ell-i-1$, $\min\Asc(\tau_{i})=a_i-1$ and $\min\Asc(\tau_{\ell-1})=a_i+\ell-i-2$, which by Lemma~\ref{lem:123andminasc} implies $\tau_{\ell}=\iota$ avoids 123. Our last case is when $a_i= 1$ for all $i<\ell$ so $a_{\ell}\leq m+\ell-1$, which only happens in the case when $\tau=12[{\mathfrak d}_{m-1},1]$. Hence, $\min\Asc(\tau_{\ell-1})=m+\ell-2$, which implies by Lemma~\ref{lem:123andminasc} that $\tau_{\ell}=\iota$ avoids 123.
\end{proof}
This lemma describes how we can decompose $\iota$ avoiding 123 uniquely as an addition of two-cycles. We are particularly interested in this because we can calculate $\inv(\iota)$ from an addition of two-cycles. However, our calculation turns out nicer when considering $\coinv$ instead.
\begin{lemma}
If $|\tau|=m$ and $\iota = \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{\ell},m+2\ell)$
then $$\coinv(\iota)=\coinv(\tau)+2(a_1+a_2+\dots + a_{\ell}-\ell).$$
\label{lem:123inv}
\end{lemma}
\begin{proof}
We will first consider $\iota = \tau + (a,n)$ for $|\iota|=n$. The coinversions of $\iota$ come from the coinversions of $\tau$, the coinversions from index $n$ and the coinversions from index $a$. The number of coinversions from $\tau$ is $\coinv(\tau)$. The number of coinversions from index $a$ is $a-1$ because every index $i$ to the left of $\iota(a)=n$ in $\iota$ forms a coinversion $(i,a)$, as $\iota(i)<n$. The number of coinversions from index $n$ is $a-1$ because $\iota(n)=a$ is at the end of $\iota$ and all numbers smaller than $a$ are to its left. There is no coinversion between indices $a$ and $n$, so $\coinv(\iota)=\coinv(\tau)+2a-2$.
Applying this to the full sum of two-cycles, $\iota= \tau + (a_1,m+2) + (a_2,m+4) + \dots + (a_{\ell},m+2\ell)$, gives us $\coinv(\iota)=\coinv(\tau) + 2(a_1+a_2+\dots + a_{\ell}-\ell).$
\end{proof}
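The two-cycle addition and the count of coinversions are easy to check by machine. The following Python sketch (our encoding of the insertion $\tau + (a,n)$ used in the proof; all names are ours) verifies the formula over every $\tau$ of length four.

```python
from itertools import permutations

def add_two_cycle(tau, a):
    # the insertion tau + (a, n): bump every value >= a, put n in
    # position a and a in position n (1-indexed), where n = len(tau) + 2
    n = len(tau) + 2
    vals = [v + 1 if v >= a else v for v in tau]
    vals.insert(a - 1, n)
    vals.append(a)
    return vals

def coinv(p):
    # number of pairs i < j with p(i) < p(j)
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] < p[j])

# check coinv(tau + (a, n)) = coinv(tau) + 2a - 2 for every tau in S_4
for tau in permutations(range(1, 5)):
    for a in range(1, 6):
        assert coinv(add_two_cycle(list(tau), a)) == coinv(tau) + 2 * a - 2
```

Note that the derivation in the proof never uses that $\tau$ is an involution, so the check can safely run over all of ${\mathfrak S}_4$.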
Using this lemma we can calculate the generating function $\overline{I{\mathcal I}}_n(123)$ from the set of sequences $(a_1,a_2,\dots,a_{\ell})$ in $A_{m,\ell}$ or $B_{m,\ell}$, so we define the function
$$A_{m,\ell}(q)=\sum_{(a_1,a_2,\dots,a_{\ell})\in A_{m,\ell}}q^{a_1+a_2+\dots +a_{\ell}-\ell}$$
for the set $A_{m,\ell}$ and define $B_{m,\ell}(q)$ similarly for the set $B_{m,\ell}$. We give a recurrence on $\ell$ for these functions and then describe $\overline{I{\mathcal I}}_n(123)$.
\begin{lemma}Given $A_{m,0}(q)=1$ for all $m\geq 1$ we have for $m,\ell\geq 1$ that
$$A_{m,\ell}(q)=A_{m+1,\ell-1}(q)+\sum_{i = 2}^{m}q^{i-1}A_{i,\ell-1}(q),$$
and with $B_{1,\ell}(q)=0$ for $\ell\geq 0$, $B_{m,0}(q)=1$ for all $m>1$ we have for $m>1$ and $\ell\geq 1$
$$B_{m,\ell}(q)=\sum_{i = 2}^{m}q^{i-1}A_{i,\ell-1}(q).$$
\end{lemma}
\begin{proof} First, we will prove the recurrence for $A_{m,\ell}(q)$. Consider the sequence $(a_1,a_2,\dots, a_{\ell})\in A_{m,\ell}$. Because the associated term is $q^{a_1+a_2+\dots +a_{\ell}-\ell}$ we can say that each $a_i$ contributes $q^{a_i-1}$ to the product. For any $a_1\in[2,m]$ we know $(a_2,\dots, a_{\ell})\in A_{a_1,\ell-1}$, which gives us the terms in the summation. If instead $a_1 = 1$ then $a_2$ can be at most $m+1$ so $(a_2,\dots, a_{\ell})\in A_{m+1,\ell-1}$, which gives us the term $A_{m+1,\ell-1}(q)$ and completes the proof for the first recurrence.
For the second recurrence consider $(a_1,a_2,\dots, a_{\ell})\in B_{m,\ell}$, so we always have $a_1\in[2,m]$. For any such $a_1$ it follows that $(a_2,\dots, a_{\ell})\in A_{a_1,\ell-1}$, since $a_2$ is again allowed to be 1. Because $a_1\neq 1$ there is no analogue of the first term from the previous recurrence, which finishes the proof of the second recurrence.
\end{proof}
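The first recurrence can be tested against a direct enumeration of $A_{m,\ell}$ (the check for $B_{m,\ell}(q)$ is entirely analogous). A Python sketch, with all names ours:

```python
from collections import Counter

def gen_A(m, ell):
    # enumerate A_{m,ell} via the running bound from its definition
    out = []
    def rec(prefix):
        if len(prefix) == ell:
            out.append(tuple(prefix))
            return
        bound = m + len(prefix)
        for i in range(len(prefix) - 1, -1, -1):
            if prefix[i] != 1:
                bound = prefix[i] + (len(prefix) - 1 - i)
                break
        for a in range(1, bound + 1):
            rec(prefix + [a])
    rec([])
    return out

def A_poly(m, ell):
    # coefficients of A_{m,ell}(q): exponent -> multiplicity
    return Counter(sum(s) - ell for s in gen_A(m, ell))

# A_{m,l}(q) = A_{m+1,l-1}(q) + sum_{i=2}^m q^{i-1} A_{i,l-1}(q)
for m in range(1, 5):
    for ell in range(1, 5):
        rhs = Counter(A_poly(m + 1, ell - 1))
        for i in range(2, m + 1):
            for e, c in A_poly(i, ell - 1).items():
                rhs[e + i - 1] += c
        assert A_poly(m, ell) == rhs
```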
We now have everything we need to determine $\overline{I{\mathcal I}}_n(123)$, but we will do so in two cases. The first will be when $n$ is odd and the second when $n$ is even. This distinction will be important since we consider the number of fixed points in $\iota$, which is tied to the parity of $n$. If an involution has $k$ two-cycles then these two-cycles form a perfect matching and together use $2k$ indices. The remaining $n-2k$ indices are fixed points, so the number of fixed points in $\iota$ always shares parity with $n$. Any involution avoiding 123 will have at most two fixed points, since three fixed points would form the pattern 123. If $n=2k+1$ is odd then certainly there is exactly one fixed point.
\begin{thm}For $n=2k+1$ and $k\geq 0$ we have
$$\overline{I{\mathcal I}}_{2k+1}(123)=\sum_{j=1}^kq^{2j}A_{2j+1,k-j}(q^{2})+\sum_{j = 0}^kB_{2j+2,k-j}(q^{2}).$$
\label{thm:123invodd}
\end{thm}
\begin{proof}
If $\iota$ avoids 123 and has length $n=2k+1$ then $\iota$ must have exactly one fixed point. Looking at Lemma~\ref{lem:123decomp} we have two cases. If $\iota$ falls under case (i) then $\iota = 12[{\mathfrak d}_{2j},1] + (a_1,2j+3) + \dots + (a_{k-j},2k+1)$ for some $1\leq j\leq k$ where $(a_1,\dots, a_{k-j})\in A_{2j+1,k-j}$. Also this tells us by Lemma~\ref{lem:123inv} that $\coinv(\iota)=2j+2(a_1+\dots+a_{k-j}-(k-j))$, which gives us the first summation.
If $\iota$ instead falls under (ii) of Lemma~\ref{lem:123decomp} then $\iota = {\mathfrak d}_{2j+1} + (a_1,2j+3) + \dots + (a_{k-j},2k+1)$ for some $0\leq j\leq k$ where $(a_1,\dots, a_{k-j})\in B_{2j+2,k-j}$. By Lemma~\ref{lem:123inv} we have that $\coinv(\iota)=2(a_1+\dots+a_{k-j}-(k-j))$, which gives us the second summation.
\end{proof}
If $\iota$ avoids 123 and $n=2k$ is even then there can be either no fixed point or two fixed points.
\begin{thm}For $n=2k$ and $k\geq 1$ we have
$$\overline{I{\mathcal I}}_{2k}(123)=\sum_{j=1}^kB_{2j+1,k-j}(q^{2})+\sum_{j = 1}^kq^{2j-1}A_{2j,k-j}(q^{2}).$$
\label{thm:123inveven}
\end{thm}
\begin{proof}
If $\iota$ avoids 123 and has length $n=2k$ then $\iota$ must have zero or two fixed points. Considering the case where $\iota$ has zero fixed points, $\iota$ must fall under case (ii) in Lemma~\ref{lem:123decomp} and $\iota = {\mathfrak d}_{2j}+ (a_1,2j+2) + \dots + (a_{k-j},2k)$ for some $1\leq j\leq k$ where $(a_1,\dots, a_{k-j})\in B_{2j+1,k-j}$. By Lemma~\ref{lem:123inv} we have $\coinv(\iota)=2(a_1+\dots+a_{k-j}-(k-j))$, which gives us the first summation.
If $\iota$ instead has two fixed points then $\iota$ falls under case (i) in Lemma~\ref{lem:123decomp}, meaning $\iota = 12[{\mathfrak d}_{2j-1},1]+ (a_1,2j+2) + \dots + (a_{k-j},2k)$ for some $1\leq j\leq k$ where $(a_1,\dots, a_{k-j})\in A_{2j,k-j}$. By Lemma~\ref{lem:123inv} we have $\coinv(\iota)=2j-1+2(a_1+\dots+a_{k-j}-(k-j))$, which gives us the second summation.
\end{proof}
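Both theorems can be checked against a direct search. The Python sketch below (all function names are ours) builds the coefficient tables of the summands from the sequence sets and compares them with a brute-force computation of $\coinv$ over ${\mathcal I}_n(123)$ for $n\leq 7$.

```python
from itertools import permutations
from collections import Counter

def gen_A(m, ell):
    # enumerate A_{m,ell} via the running bound from its definition
    out = []
    def rec(prefix):
        if len(prefix) == ell:
            out.append(tuple(prefix))
            return
        bound = m + len(prefix)
        for i in range(len(prefix) - 1, -1, -1):
            if prefix[i] != 1:
                bound = prefix[i] + (len(prefix) - 1 - i)
                break
        for a in range(1, bound + 1):
            rec(prefix + [a])
    rec([])
    return out

def A2(m, ell, shift=0):
    # coefficients of q^shift * A_{m,ell}(q^2)
    return Counter(2 * (sum(s) - ell) + shift for s in gen_A(m, ell))

def B2(m, ell, shift=0):
    # same for B_{m,ell}(q^2), i.e. sequences with first term != 1
    return Counter(2 * (sum(s) - ell) + shift
                   for s in gen_A(m, ell) if not (s and s[0] == 1))

def formula(n):
    k, out = n // 2, Counter()
    if n % 2:                       # the theorem for n = 2k + 1
        for j in range(1, k + 1):
            out += A2(2 * j + 1, k - j, shift=2 * j)
        for j in range(k + 1):
            out += B2(2 * j + 2, k - j)
    else:                           # the theorem for n = 2k
        for j in range(1, k + 1):
            out += B2(2 * j + 1, k - j)
            out += A2(2 * j, k - j, shift=2 * j - 1)
    return out

def brute(n):
    # coinv distribution over 123-avoiding involutions, by direct search
    c = Counter()
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)) and \
           not any(p[i] < p[j] < p[k] for i in range(n)
                   for j in range(i + 1, n) for k in range(j + 1, n)):
            c[sum(1 for i in range(n) for j in range(i + 1, n)
                  if p[i] < p[j])] += 1
    return c

for n in range(1, 8):
    assert formula(n) == brute(n)
```

Setting $q=1$ recovers the familiar count $|{\mathcal I}_n(123)|=\binom{n}{\lfloor n/2\rfloor}$, e.g.\ ten involutions for $n=5$.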
We can use the proof of Theorem~\ref{thm:123inveven} to determine the generating function for $\coinv$ over fixed-point-free involutions avoiding 123.
\begin{cor}For $n=2k$ and $k\geq 1$ we have
$$\overline{IF{\mathcal I}}_{2k}(123)=\sum_{j=1}^k B_{2j+1,k-j}(q^{2}).$$
\vspace{-1cm}
\hfill \qed
\vspace{.2cm}
\end{cor}
One interesting observation about $\overline{I{\mathcal I}}_n(123)$ happens for odd $n=2k+1$. From the formula in Theorem~\ref{thm:123invodd} we can see that $\overline{I{\mathcal I}}_{2k+1}(123)$ can only have even powers of $q$. As a result, $\coinv(\iota)$ is even for all $\iota\in {\mathcal I}_{2k+1}(123)$. This is similarly true for $\iota\in F{\mathcal I}_{2k}(123)$.
\begin{cor}
For $\iota$ avoiding $123$, $\coinv(\iota)$ is odd if and only if $|\iota|$ is even and $\iota$ has a fixed point. \hfill \qed
\label{cor:123inv}
\end{cor}
\section{Length three patterns and maj}
\label{maj}
Just like for inversions we find that the $M{\mathcal I}$-Wilf equivalence classes for length three patterns are determined trivially, so this section's focus will be on describing the generating functions. Some of these functions have been studied by others: Dokos et al.~\cite{DDJSS12} determined $M{\mathcal I}_n(231)$ and Barnabei et al.~\cite{BBES14} independently found that $M{\mathcal I}_n(321)$ is the standard central $q$-binomial coefficient, with a proof that gives a connection to hook decompositions. The bijection we present later in Section~\ref{subsec321} is shorter and gives a connection to core, an unrelated concept that is used for proving symmetric chain decompositions in poset theory.
To be complete, we present a description of the generating function for every length three pattern.
The functions $M{\mathcal I}_n(123)$ and $M{\mathcal I}_n(213)$ will be described using $M{\mathcal I}_n(321)$ and $M{\mathcal I}_n(132)$ respectively because we will additionally be proving the symmetry $M{\mathcal I}_n(\pi_1;q)=q^{\binom{n}{2}}M{\mathcal I}_n(\pi_2;q^{-1})$ between the pairs of respective patterns in Theorem~\ref{thm:132symm213} and Proposition~\ref{thm:123&321 symmetry}. The symmetry between the patterns 123 and 321 can be proven using the Robinson-Schensted-Knuth correspondence and transposing tableaux, a map that has been studied and used in many papers including Simion and Schmidt~\cite{ss:rp}, Barnabei et al.~\cite{BBES14} and Deutsch et al.~\cite{DRS07}. This map has also been described in more explicit detail by B\'{o}na and Smith in~\cite{BS16} (Section 3), whose description bypasses the RSK algorithm and transposition. Despite being a similar symmetry, the symmetry between the patterns 132 and 213 will require a different map to prove.
Mostly for the pattern 132, it will in some cases be easier to describe the generating function using the different but related statistics $\comaj$ and $\asc$, defined early in Section~\ref{inv}, as well as the generating function in the fixed-point-free case.
The associated generating functions will be notated
$$\overline{ M{\mathcal I}}_n(\pi;q,t)=\sum_{\iota \in {\mathcal I}_n(\pi)} q^{\comaj(\iota)}t^{\asc(\iota)}$$
and
$$\overline{ MF{\mathcal I}}_n(\pi;q,t)=\sum_{\iota \in F{\mathcal I}_n(\pi)} q^{\comaj(\iota)}t^{\asc(\iota)}.$$
Determining these functions is equivalent to determining the ones for the major index due to the following identities.
\begin{lemma}
We have the following equalities involving the statistics $\maj$, $\des$, $\comaj$ and $\asc$.
\begin{enumerate}
\item For $\sigma \in {\mathfrak S}_n$ we have $\maj(\sigma)=\binom{n}{2}-\comaj(\sigma)$ and $\des(\sigma)=n-1-\asc(\sigma)$.
\item $\displaystyle q^{\binom{n}{2}}t^{n-1}\overline{M{\mathcal I}}_n(\pi;q^{-1},t^{-1})=\sum_{\iota \in {\mathcal I}_n(\pi)} q^{\maj(\iota)}t^{\des(\iota)}.$
\item $\displaystyle q^{\binom{n}{2}}t^{n-1}\overline{MF{\mathcal I}}_n(\pi;q^{-1},t^{-1})=\sum_{\iota \in F{\mathcal I}_n(\pi)} q^{\maj(\iota)}t^{\des(\iota)}.$
\end{enumerate}
\label{lemma:majproperties}
\end{lemma}
\begin{proof}
All these equalities follow from the fact that every $i\in[n-1]$ is either an ascent or a descent of a permutation $\sigma$ of length $n$, so $[n-1]$ is the disjoint union $\Des(\sigma)\cup\Asc(\sigma)$.
\label{lemma:ascdessymm}
\end{proof}
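These identities amount to a two-line computation, and a quick machine check is immediate (the function name below is ours):

```python
from itertools import permutations
from math import comb

def stats(p):
    # (maj, des, comaj, asc) of a permutation given in one-line notation
    n = len(p)
    D = [i for i in range(1, n) if p[i - 1] > p[i]]
    A = [i for i in range(1, n) if p[i - 1] < p[i]]
    return sum(D), len(D), sum(A), len(A)

# part (1) of the lemma, checked over all of S_6
n = 6
for p in permutations(range(1, n + 1)):
    maj, des, comaj, asc = stats(p)
    assert maj == comb(n, 2) - comaj and des == n - 1 - asc
```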
These bivariate functions determine the generating function for $\maj$ since $q^{\binom{n}{2}}\overline{M{\mathcal I}}_n(\pi;q^{-1},1)=M{\mathcal I}_n(\pi)$.
\subsection{$M{\mathcal I}$-Wilf equivalence classes for length three patterns}
The equivalence classes for the major index are trivially determined.
\begin{prop} The only non-singleton $M{\mathcal I}$-Wilf equivalence class for length three patterns is $[231]_{M{\mathcal I}}=\{231,312\}$.
\label{prop:MIWilfequiv}
\end{prop}
\begin{proof} From Proposition~\ref{prop:S(213,312)=I(231)} we know that ${\mathcal I}_n(231)={\mathcal I}_n(312)$ so these patterns are in the same $M{\mathcal I}$-Wilf equivalence class. From the case of $n=3$ we can see that this is the only non-singleton class for length three patterns.
\end{proof}
Using the map $r_1$ we can easily show that the patterns $\pi$ and $r_1(\pi)=\pi^{-1}$ are always in the same $M{\mathcal I}$-Wilf equivalence class since $r_1$ is the identity map on involutions. From computational data it does seem that the $M{\mathcal I}$-Wilf equivalence classes are precisely these formed by a pattern $\pi$ and its inverse.
\begin{conj}
The only non-singleton $M{\mathcal I}$-Wilf equivalence classes are $[\pi]_{M{\mathcal I}}=\{\pi, r_1(\pi)\}$ when $\pi$ is not an involution.
\end{conj}
The above conjecture has been verified by computer for patterns up to length 5. For permutations the $M$-Wilf equivalence classes can be much larger; for example, Dokos et al.~\cite{DDJSS12} conjectured $[1423]_M=\{1423,2314,2413\}$ and $[3142]_M =\{3142,3241,4132\}$, which was proven by Bloom~\cite{B14} (Theorem 2.1 and Corollary 2.1). Dokos et al.\ also conjectured that $132[{\mathfrak i}_m,1,{\mathfrak d}_k]$ and $231[{\mathfrak i}_m,1,{\mathfrak d}_k]$ are $M$-Wilf equivalent and that $213[{\mathfrak d}_m,1,{\mathfrak i}_k]$ and $312[{\mathfrak d}_m,1,{\mathfrak i}_k]$ are as well. Yan, Ge and Zhang~\cite{YGZ15} (Theorem 1.3) proved this conjecture in the case $k = 1$.
\subsection{The patterns 231 and 312}
By Proposition~\ref{prop:S(213,312)=I(231)} we know ${\mathfrak S}_n(312,231)={\mathcal I}_n(231)$, so the generating function has already been determined by Dokos et al.~\cite{DDJSS12} to be the following.
\begin{prop}[Dokos et al., Proposition 5.2] We have for $n\geq 1$
$$M{\mathcal I}_n(231)=\prod_{k=1}^{n-1}(1+q^k).$$
\end{prop}
\begin{proof}The decomposition of an involution avoiding 231 by Proposition~\ref{prop:SSdecompI(231)} is ${\mathfrak i}_k[{\mathfrak d}_{j_1},{\mathfrak d}_{j_2},\dots ,{\mathfrak d}_{j_k}]$ where $j_i\geq1$ for all $i$. This determines the unique descent set $\Des(\iota)=[n-1]\setminus\{j_1,j_1+j_2,\dots, j_1+\dots+ j_{k-1}\}$. Conversely, given a set $D\subseteq [n-1]$ we can construct $\iota$ with $\Des(\iota)=D$. This tells us $$M{\mathcal I}_n(231)=\sum_{D\subseteq [n-1]} q^{\sum_{i\in D}i},$$ which is known to be $\prod_{k=1}^{n-1}(1+q^k).$
\end{proof}
The argument presented above is an extension of the argument used by Simion and Schmidt~\cite{ss:rp} (Proposition 6) to count ${\mathcal I}_n(231)$.
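The product formula can be confirmed by brute force for small $n$. In the Python sketch below (all names ours) the product $\prod_{k=1}^{n-1}(1+q^k)$ is expanded as a coefficient table, an empty product for $n=1$, and compared with a direct search.

```python
from itertools import permutations
from collections import Counter

def maj_poly_I231(n):
    # maj distribution over involutions of length n avoiding 231
    c = Counter()
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)) and \
           not any(p[k] < p[i] < p[j] for i in range(n)
                   for j in range(i + 1, n) for k in range(j + 1, n)):
            c[sum(i for i in range(1, n) if p[i - 1] > p[i])] += 1
    return c

def product_poly(n):
    # coefficients of prod_{k=1}^{n-1} (1 + q^k)
    c = Counter({0: 1})
    for k in range(1, n):
        nxt = Counter()
        for e, m in c.items():
            nxt[e] += m
            nxt[e + k] += m
        c = nxt
    return c

for n in range(1, 8):
    assert maj_poly_I231(n) == product_poly(n)
```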
\subsection{The pattern 132}
The generating function $M{\mathcal I}_n(132)$ is different from most in that it has internal zeros. A polynomial has an {\it internal zero} if there is a term $q^k$ with a zero coefficient but there exist two other terms $q^i$ and $q^j$ with $i<k<j$ that have non-zero coefficients. The internal zeros of $M{\mathcal I}_n(132)$ occur on a single interval of powers just before the term $q^{\binom{n}{2}}$, which can even be seen when $n=3$. After proving this fact we describe $M{\mathcal I}_n(132)$ recursively in several steps using nothing more than the decomposition of involutions avoiding 132.
\begin{prop}If $\iota \in {{\mathcal I}}_n(132)$ then
\begin{enumerate}[(i)]
\item $\maj(\iota)=\binom{n}{2}$ or $\maj(\iota) \leq \binom{n}{2}-{\lceil n/2 \rceil}$,
\item this bound is sharp and
\item for every non-negative $k\leq \binom{n}{2}-{\lceil n/2 \rceil}$ there exists some $\iota \in {{\mathcal I}}_n(132)$ with $\maj(\iota)=k$.
\end{enumerate}
\label{thm:132internalzeros}
\end{prop}
\begin{proof}
To show (i) we will show that any $\iota\in {\mathcal I}_n(132)$ has either $\Asc(\iota)=\emptyset$ or an $i \geq {\lceil n/2 \rceil}$ with $i\in \Asc(\iota)$ by induction on $n$. This is easy to see for $n =1,2$.
Let $n>2$. According to Lemma~\ref{lemma:132decomp} we have two cases for $\iota\in {\mathcal I}_n(132)$. The first is that $\iota(n)=n$, in which case $\iota$ has an ascent at $n-1$ that is at least ${\lceil n/2 \rceil}$ so we are done. The second case is that $\iota(n)=k\neq n$, so $\iota$ has the form $\iota =45312[\alpha,1, \beta, r_1(\alpha),1]$ where $\alpha \in {\mathfrak S}_{k-1}(132)$, $\beta \in {\mathcal I}_{n-2k}(132)$ and $1\leq k\leq \floor{\frac{n}{2}}$, which can be seen in Figure~\ref{fig:132}. If $\alpha$ is not the empty permutation then we again have an ascent at $n-1$ and we are done just like in the first case. Consider the case where $\alpha$ is empty so $\iota =321[1, \beta, 1]$ with $\beta\in {\mathcal I}_{n-2}(132)$. By our inductive assumption we could have $\Asc(\beta)=\emptyset$, which implies $\Asc(\iota)=\emptyset$ so we are done. Otherwise by induction there is some $i \geq {\lceil (n-2)/2 \rceil}$ that is in $\Asc(\beta)$. This implies that $i+1\geq {\lceil n/2 \rceil}$ is in $\Asc(\iota)$ and we have finished the proof of (i).
Note that (iii) implies (ii). Given $k \in [0,\binom{n}{2}-{\lceil n/2 \rceil}]$, there exists a choice of $a$ and $b$ with $b\leq a$ such that $k = \binom{a+1}{2}-b$. If $b\leq {\lfloor n/2 \rfloor}$ then consider
$$\iota =453126[{\mathfrak d}_b,1,{\mathfrak d}_{a-2b},{\mathfrak d}_b,1,{\mathfrak i}_{n-a-2}]$$
that has $\Des(\iota)=[a]-\{b\}$ and $\maj(\iota)=\binom{a+1}{2}-b$. If $b> {\lfloor n/2 \rfloor}$ then consider
$$\iota =42315[{\mathfrak d}_{a-b},{\mathfrak d}_{2b-a},1,{\mathfrak d}_{a-b},{\mathfrak i}_{n-a-1}]$$
that has $\Des(\iota)=[a]-\{b\}$ and $\maj(\iota)=\binom{a+1}{2}-b$. See Figure~\ref{fig:132allmaj} for an example. This proves (ii) and (iii).
\end{proof}
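Together, parts (i)--(iii) say that the achievable values of $\maj$ on ${\mathcal I}_n(132)$ are exactly $[0,\binom{n}{2}-\lceil n/2\rceil]\cup\{\binom{n}{2}\}$, the top value being attained by ${\mathfrak d}_n$. A brute-force Python check (names ours):

```python
from itertools import permutations
from math import ceil, comb

def maj_values_I132(n):
    # the set of maj values over involutions of length n avoiding 132
    vals = set()
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 for i in range(n)) and \
           not any(p[i] < p[k] < p[j] for i in range(n)
                   for j in range(i + 1, n) for k in range(j + 1, n)):
            vals.add(sum(i for i in range(1, n) if p[i - 1] > p[i]))
    return vals

for n in range(2, 8):
    top = comb(n, 2)
    assert maj_values_I132(n) == set(range(top - ceil(n / 2) + 1)) | {top}
```

For $n=3$ this gives $\{0,1,3\}$, exhibiting the internal zero at $q^2$.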
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .4]
\draw (0,5) rectangle (1,6);
\draw (5,0) rectangle (6,1);
\draw (1.5,1.5) rectangle (4.5,4.5);
\draw (7,7) rectangle (8.5,8.5);
\filldraw [black]
(1.5,6.5) circle (5pt)
(6.5,1.5) circle (5pt)
(5.5,.5) circle (5pt)
(.5,5.5) circle (5pt)
(2,4) circle (5pt)
(4,2) circle (5pt)
(2.6,3.3) circle (5pt)
(3.3,2.6) circle (5pt)
(7.5,7.5) circle (5pt)
(8,8) circle (5pt);
\draw (0,0) -- (8.5,0) -- (8.5,8.5) -- (0,8.5) -- (0,0);
\end{tikzpicture}
\caption{The diagram for $\iota =453126[{\mathfrak d}_1,1,{\mathfrak d}_4,{\mathfrak d}_1,1,{\mathfrak i}_2]$ with $\maj(\iota)=20=\binom{7}{2}-1$.}
\end{center}
\label{fig:132allmaj}
\end{figure}
We describe the generating function for the pattern 132 in several steps just as we did for $\inv$ in Section~\ref{subsec132inv}, by first determining the generating function in the fixed-point-free case. First, we present a useful lemma that describes a fact about $\maj$ and $\des$ for involutions avoiding 132.
\begin{lemma} For $\iota \in F{\mathcal I}_{2m}(132)$ with $\iota=21[\alpha,r_{1}(\alpha)]$ and $\alpha \in {\mathfrak S}_m(132)$ we have
\begin{enumerate}[(i)]
\item $\asc(\iota)=2\asc(\alpha)$ and
\item $\comaj(\iota)=\comaj(\alpha)+\comaj(r_{1}(\alpha))+m\asc(\alpha)$.
\end{enumerate}
\label{lemma:132majasc}
\end{lemma}
\begin{proof}Let $\iota \in F{\mathcal I}_{2m}(132)$ with $\iota=21[\alpha,r_{1}(\alpha)]$ as stated. Part (ii) quickly follows from the fact that $\Asc(\iota)=\Asc(\alpha)\cup (\Asc(r_{1}(\alpha))+m)$ and part (i).
For (i) it will be sufficient to show that $\asc(\alpha)=\asc(r_{1}(\alpha))$ for any $\alpha \in {\mathfrak S}_m(132)$ using induction on $m$. It is easy to see this is true for $m=1$ so let $m>1$. We can decompose $\alpha = 231[\alpha_1,1,\alpha_2]$ for some
$\alpha_1$ and $\alpha_2$ that avoid 132.
By induction we know $\asc(\alpha_1)=\asc(r_{1}(\alpha_1))$ and $\asc(\alpha_2)=\asc(r_{1}(\alpha_2))$. First consider the case where $|\alpha_1|\neq 0$. Since in this case $\asc(\alpha)=\asc(\alpha_1)+\asc(\alpha_2)+1$ and $\asc(r_{1}(\alpha))=\asc(r_{1}(\alpha_2))+\asc(r_{1}(\alpha_1))+1$ we quickly can conclude that $\asc(\alpha)=\asc(r_{1}(\alpha))$. If instead $|\alpha_1|=0$ then $\asc(\alpha)=\asc(\alpha_2)$ and $\asc(r_1(\alpha))=\asc(r_1(\alpha_2))$ so $\asc(\alpha)=\asc(r_1(\alpha))$.
\end{proof}
Using the lemma above we describe the fixed-point-free generating function for $\maj$ and involutions avoiding 132.
\begin{prop} Define $F_{2m}(q,t)= \overline{MF{\mathcal I}}_{2m}(132;q,t)$. We have $F_0(q,t)=1$ and for $m\geq 1$
$$F_{2m}(q,t)=F_{2(m-1)}(q,qt)+\sum_{k=1}^{m-1}q^{2m+k-1}t^2F_{2k}(q,q^{\frac{2m-2k-1}{2}}t)F_{2(m-k-1)}(q,q^{k+1}t).$$
\end{prop}
\begin{proof}
Let $\iota \in F{\mathcal I}_{2m}(132)$.
By Lemma~\ref{lemma:132decomp} we know that $\iota = 21[\alpha,r_{1}(\alpha)]$ for some $\alpha \in {\mathfrak S}_m(132)$. Since $\alpha$ avoids 132 we can write $\alpha = 231[\alpha_1,1,\alpha_2]$ for some $\alpha_1 \in {\mathfrak S}_k(132)$, $\alpha_2 \in {\mathfrak S}_{m-k-1}(132)$ and $0\leq k\leq m-1$. Also, we will define $x = 21[\alpha_1,r_{1}(\alpha_1)]$ and $y= 21[\alpha_2,r_{1}(\alpha_2)]$.
First consider the case where $k=0$; then $\comaj(\iota)=\comaj(y)+\asc(y)$ and $\asc(\iota)=\asc(y)$. Summing over all possible $y\in F{\mathcal I}_{2(m-1)}(132)$ gives us the term $F_{2(m-1)}(q,qt)$ in our sum.
Next consider the case where $1\leq k\leq m-1$. We have $\comaj(\alpha)=\comaj(\alpha_1)+k+\comaj(\alpha_2)+(k+1)\asc(\alpha_2)$. Similarly, $\comaj(r_{1}(\alpha))=\comaj(r_{1}(\alpha_2))+\comaj(r_{1}(\alpha_1))+(m-k-1)\asc(\alpha_1)+m-1$. Also, $\asc(\alpha)=\asc(\alpha_1)+\asc(\alpha_2)+1$. Using the result of Lemma~\ref{lemma:132majasc} we get
$$\comaj(\iota)=\comaj(x)+\comaj(y)+\frac{2m-2k-1}{2}\asc(x)+(k+1)\asc(y)+2m+k-1$$
and
$$\asc(\iota)
=\asc(x)+\asc(y)+2.$$
Summing over all possible $x\in F{\mathcal I}_{2k}(132)$ and $y\in F{\mathcal I}_{2(m-k-1)}(132)$ gives us the term
in the summation and we are done. \end{proof}
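The recurrence can be checked against brute force by encoding a bivariate polynomial as a table of $(\comaj,\asc)$ pairs; the half-integer power $q^{(2m-2k-1)/2}$ is harmless because $\asc(x)$ is always even by Lemma~\ref{lemma:132majasc}. A Python sketch (all names ours):

```python
from itertools import permutations
from collections import Counter

def brute_F(m):
    # (comaj, asc) distribution over fixed-point-free involutions
    # of length 2m avoiding 132
    n, c = 2 * m, Counter()
    for p in permutations(range(1, n + 1)):
        if all(p[p[i] - 1] == i + 1 and p[i] != i + 1 for i in range(n)) and \
           not any(p[i] < p[k] < p[j] for i in range(n)
                   for j in range(i + 1, n) for k in range(j + 1, n)):
            asc = [i for i in range(1, n) if p[i - 1] < p[i]]
            c[(sum(asc), len(asc))] += 1
    return c

def subst(poly, num, den=1):
    # substitute t -> q^(num/den) t; exact because every asc value is even
    return Counter({(e + a * num // den, a): mu
                    for (e, a), mu in poly.items()})

def mult(p1, p2):
    out = Counter()
    for (e1, a1), m1 in p1.items():
        for (e2, a2), m2 in p2.items():
            out[(e1 + e2, a1 + a2)] += m1 * m2
    return out

F = {0: Counter({(0, 0): 1})}
for m in range(1, 5):
    Fm = Counter(subst(F[m - 1], 1))                 # F_{2(m-1)}(q, qt)
    for k in range(1, m):
        term = mult(subst(F[k], 2 * m - 2 * k - 1, 2),
                    subst(F[m - k - 1], k + 1))
        for (e, a), mu in term.items():
            Fm[(e + 2 * m + k - 1, a + 2)] += mu     # times q^{2m+k-1} t^2
    F[m] = Fm
    assert Fm == brute_F(m)
```

For $m=2$, for instance, both sides give $1+q^4t^2$, coming from $4321$ and $3412$.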
Now that we have described the fixed-point-free generating function for the pattern 132 we can describe the generating function $M{\mathcal I}_n(132;q,t)$ recursively.
\begin{prop} Defining $M_n(q,t)=\overline{M{\mathcal I}}_n(132;q,t)$ and $F_{n}(q,t)=\overline{MF{\mathcal I}}_n(132;q,t)$ we have $M_0(q,t)=M_1(q,t)=1$ and for $n\geq 2$,
$$ M_n(q,t)=q^{n-1}tM_{n-1}(q,t)+M_{n-2}(q,qt)+\sum_{k=2}^{\floor{n/2}}q^{n+k-2}t^2F_{2(k-1)}(q,q^{\frac{n-2k+1}{2}}t){M}_{n-2k}(q,q^kt).$$
\end{prop}
\begin{proof}Given an involution $\iota$ that avoids 132 and has $\iota(n)=n$, by Lemma~\ref{lemma:132decomp} we can write $\iota = 12[\tau,1]$ for some $\tau\in {\mathcal I}_{n-1}(132)$. In this case $\comaj(\iota)=\comaj(\tau)+n-1$ and $\asc(\iota)=\asc(\tau)+1$, which gives us the term $q^{n-1}tM_{n-1}(q,t)$.
In any other case we can write $\iota = 45312[\alpha, 1, \beta, r_1(\alpha), 1]$ for some $\alpha\in {\mathfrak S}_{k-1}(132)$, $\beta \in {\mathcal I}_{n-2k}(132)$ and $1\leq k\leq \floor{\frac{n}{2}}$, which can be seen in Figure~\ref{fig:132}. Let $x=21[\alpha,r_1(\alpha)]$. If $k=1$ then $\alpha$ is empty and the only ascents in $\iota$ come from $\beta$. Particularly, $\comaj(\iota)=\comaj(\beta)+\asc(\beta)$ and $\asc(\iota)=\asc(\beta)$, which gives us the term $M_{n-2}(q,qt)$.
If instead $k>1$ then we have additional ascents at $k-1$ and $n-1$. Using Lemma~\ref{lemma:132majasc} in this case we have
$$\comaj(\iota)=\comaj(x)+\frac{n-2k+1}{2}\asc(x)+\comaj(\beta)+k\asc(\beta)+n+k-2$$
and
$$\asc(\iota)=\asc(x)+\asc(\beta)+2.$$
This gives us the term in the summation and we are done.
\end{proof}
\subsection{The pattern 213}
\label{maj213}
We find that $M{\mathcal I}_n(132)$ and $M{\mathcal I}_n(213)$ display the symmetry $M{\mathcal I}_n(132)=q^{\binom{n}{2}}M{\mathcal I}_n(213;q^{-1})$. However, the reason for this does not come from the map $r_{-1}$ and is also different from the reason for the similar symmetry between the patterns $123$ and $321$, which we discuss in Section~\ref{maj123}.
The map used to prove this will be defined in stages, but as an overview it will take $\iota\in {\mathcal I}_n(213)$ and map it to $\Des(\iota)$, which is a unique element in
$$G_n=\{ \{a_1,a_2,\dots,a_{\ell}\}\subseteq[n-1]: a_i<a_{i+1} \text{ and }a_i+a_{\ell -i+1}\geq n\}.$$
This descent set is then mapped to its complement $[n-1]\setminus \Des(\iota)$ that lies in
$$L_n=\{ \{a_1,a_2,\dots,a_{\ell}\}\subseteq[n-1]: a_i<a_{i+1} \text{ and } a_i+a_{\ell -i+1}\leq n\},$$
which is associated to a unique involution in ${\mathcal I}_n(132)$ with this as its descent set. See Figure~\ref{fig:213<->132} for an example. Though we prove the bijection from the involutions to their descent sets using induction, the full map cannot be defined directly using this induction.
\begin{figure}
$$798456132\in {\mathcal I}_9(213)\overset{\varphi}{\longleftrightarrow} \{2,3,6,8\}\in G_9 \overset{f}{\longleftrightarrow} \{1,4,5,7\}\in L_9\overset{\psi}{\longleftrightarrow} 867952314\in {\mathcal I}_9(132)$$
\caption{An example of an involution $\iota\in {\mathcal I}_n(213)$ that is mapped to $\tau\in {\mathcal I}_n(132)$ with $\Des(\iota)=[n-1]\setminus \Des(\tau)$. }
\label{fig:213<->132}
\end{figure}
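The whole composite map can be verified by machine for small $n$: the descent sets of ${\mathcal I}_n(213)$ are exactly $G_n$, each achieved once, and complementation in $[n-1]$ carries them onto the descent sets of ${\mathcal I}_n(132)$. A Python sketch (all names ours):

```python
from itertools import combinations, permutations

def des_set(p):
    return frozenset(i for i in range(1, len(p)) if p[i - 1] > p[i])

def avoids(p, pat):
    # True if no subsequence of p is order-isomorphic to pat
    k = len(pat)
    for idx in combinations(range(len(p)), k):
        v = [p[i] for i in idx]
        if all((v[a] < v[b]) == (pat[a] < pat[b])
               for a in range(k) for b in range(a + 1, k)):
            return False
    return True

def inv_des_list(n, pat):
    return [des_set(p) for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n)) and avoids(p, pat)]

def G(n):
    # subsets {a_1 < ... < a_l} of [n-1] with a_i + a_{l-i+1} >= n
    out = set()
    for mask in range(1 << (n - 1)):
        a = [i + 1 for i in range(n - 1) if mask >> i & 1]
        if all(a[i] + a[len(a) - 1 - i] >= n for i in range(len(a))):
            out.add(frozenset(a))
    return out

for n in range(1, 8):
    d213 = inv_des_list(n, (2, 1, 3))
    d132 = inv_des_list(n, (1, 3, 2))
    # each descent set occurs once, and together they are exactly G_n
    assert len(d213) == len(set(d213)) and set(d213) == G(n)
    # complementation carries them onto the descent sets of I_n(132)
    comp = {frozenset(range(1, n)) - D for D in d213}
    assert len(d132) == len(set(d132)) and comp == set(d132)
```

Since complementation sends $\maj$ to $\binom{n}{2}-\maj$, this check also confirms the claimed symmetry for $n\leq 7$.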
In
Section~\ref{sec:inv321}
we introduced the notation Beissinger defined in her paper~\cite{B87} to insert another two-cycle into an involution. We will use this notation to deconstruct an involution $\iota$ as $\iota = \hat{\iota} +(i,n)$ for $i< n$. This will be one key step in describing our inductive map, with the next lemma giving us the conditions for preserving the avoidance of the pattern 213 as well as describing the resulting descent set.
\begin{lemma} Assume that $\iota = \hat{\iota} +(i,n)$, $i< n$, $\hat{\iota}\in {\mathcal I}_{n-2}$ and $\iota\in {\mathcal I}_n$. Let $d=n$ if $ \Des(\hat{\iota})=\emptyset$ and otherwise $d=\min\Des(\hat{\iota})$. Then $\iota$ avoids $213$ if and only if $i\leq d+1$ and $\hat{\iota}$ avoids 213. Also,
$$
\Des(\iota)= \left\{
\begin{array}{ll}
\Des(\hat{\iota})+1 & i=d+1 \\
(\Des(\hat{\iota})+1)\cup\{i,n-1\}& i<d+1.
\end{array}\right.
$$
\label{lemma:213Des}
\end{lemma}
\begin{proof}
Let $\iota = \hat{\iota} +(i,n)$, $i< n$, $\hat{\iota} \in {\mathcal I}_{n-2}$, and $\iota\in {\mathcal I}_n$. First consider the case where $\Des(\hat{\iota})=\emptyset$ or equivalently $\hat{\iota} = 12\dots (n-2)$. In this case we let $d=n$ and for all $i<n$ it is easy to see that $\hat{\iota}+(i,n)$ avoids $213$ and has descent set $\{i,n-1\}$. The second part of the lemma holds as well in this case since $i<d+1$.
In any other case $\Des(\hat{\iota})\neq \emptyset$ and we let $d=\min \Des(\hat{\iota})$. First we will assume that $\iota$ avoids $213$. It follows that $\hat{\iota}$, a subword of $\iota$, must also avoid $213$. Since $\iota$ avoids $213$ we know $\iota$ increases before $n$ at index $i$, which implies $i\leq d+1$. Conversely, assume that $\hat{\iota}$ avoids $213$ and $i\leq d+1$. If $\iota$ were to have the pattern $213$ then either $n$ or $i$ must be part of the pattern since $\hat{\iota}$ avoids $213$. The only possibility is that $n$ plays the role of $3$ and $\iota$ has a descent before index $i$, which is impossible because this descent would come from a descent in $\hat{\iota}$ and we assumed that $i\leq d+1$.
Finally, we will finish by showing the second part of the lemma in the case where $\Des(\hat{\iota})\neq \emptyset$ by determining the descent set of $\iota=\hat{\iota}+(i,n)$ from the descent set of $\hat{\iota}$. The general descent set of $\iota$ is the union of $\Des(\hat{\iota})\cap [1,i-2]$, $\{i\}$ and $(\Des(\hat{\iota})\cap [i,n-3])+1$ with additionally $\{n-1\}$ if $\hat{\iota}(n-2)\geq i$. Because $i\leq d+1$ we must have $\Des(\hat{\iota})\cap [1,i-2]=\emptyset$. Consider the case where $i=d+1$. Because $\hat{\iota}$ avoids $213$ we must have no descent before the occurrence of $n-2$ in $\hat{\iota}$ so $\hat{\iota}(n-2)=d$. Since $i=d+1$ we have $\hat{\iota}(n-2)<i$ so $n-1\notin\Des(\iota)$ and $\Des(\iota)=\{i\}\cup((\Des(\hat{\iota})\cap [i,n-3])+1)=\Des(\hat{\iota})+1$.
If instead $i<d+1$ we still have $\hat\iota(n-2)=d$ but now $\hat{\iota}(n-2)\geq i$ so $\Des(\iota)=\{i,n-1\}\cup ((\Des(\hat{\iota})\cap [i,n-3])+1)=(\Des(\hat{\iota})+1)\cup \{i,n-1\}$. With this we are done.
\end{proof}
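Lemma~\ref{lemma:213Des} can be confirmed by machine for small cases. The sketch below (names ours) implements the insertion $\hat{\iota}+(i,n)$ and tests both the avoidance criterion and the descent-set formula.

```python
from itertools import permutations

def add_two_cycle(tau, i):
    # hat-iota + (i, n): bump values >= i, place n at position i and i at
    # position n, where n = len(tau) + 2 (1-indexed)
    n = len(tau) + 2
    vals = [v + 1 if v >= i else v for v in tau]
    vals.insert(i - 1, n)
    vals.append(i)
    return vals

def des_set(p):
    return {j for j in range(1, len(p)) if p[j - 1] > p[j]}

def avoids213(p):
    n = len(p)
    return not any(p[j] < p[i] < p[k] for i in range(n)
                   for j in range(i + 1, n) for k in range(j + 1, n))

for m in range(1, 6):            # |hat-iota| = m, |iota| = n = m + 2
    for hat in permutations(range(1, m + 1)):
        if not (all(hat[hat[x] - 1] == x + 1 for x in range(m))
                and avoids213(hat)):
            continue
        D = des_set(hat)
        d = min(D) if D else m + 2        # d = n when Des is empty
        for i in range(1, m + 2):
            iota = add_two_cycle(list(hat), i)
            assert avoids213(iota) == (i <= d + 1)
            if i == d + 1:
                assert des_set(iota) == {x + 1 for x in D}
            elif i < d + 1:
                assert des_set(iota) == {x + 1 for x in D} | {i, m + 1}
```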
We next proceed through some technical lemmas that will step by step prove our bijection ${\mathcal I}_n(213)\rightarrow{\mathcal I}_n(132)$.
\begin{lemma} We have a bijection $\varphi:{\mathcal I}_n(213)\rightarrow G_n$ such that $\varphi(\iota)=\Des(\iota)$.
\label{lemma:varphi}
\end{lemma}
\begin{proof} Let $\iota \in {\mathcal I}_n(213)$. We first show that $\varphi$ is well defined, that is $\Des(\iota)\in G_n$, which we will show using induction. It is not hard to see this in the case of $n=1$ or $2$. We now assume that $n>2$ and all $\hat{\iota}\in {\mathcal I}_k(213)$ have $\Des(\hat{\iota})\in G_k$ for any $k<n$.
If $n$ is a fixed point then we must have that $\iota = 12\dots n$ because $\iota$ avoids $213$ so then $\Des(\iota)=\emptyset \in G_n$.
Otherwise we have by Lemma~\ref{lemma:213Des} that $\iota = \hat{\iota}+(i,n)$ for some $i\leq d+1$ where $d=\min \Des(\hat{\iota})$ if $\Des(\hat{\iota})\neq \emptyset$ and otherwise $d = n$. By our inductive assumption $\Des(\hat{\iota})=\{a_1,\dots, a_k\}\in G_{n-2}$. By Lemma~\ref{lemma:213Des} if $i=d+1$ then $\Des(\iota)=\Des(\hat{\iota})+1$ and it follows that $\Des(\iota)\in G_n$. If instead $i<d+1$ then by Lemma~\ref{lemma:213Des} we have $\Des(\iota)=\{i,a_1+1,\dots, a_k+1,n-1\}$, which again implies $\Des(\iota)\in G_n$.
Next we define the inverse map $\varphi^{-1}:G_n\rightarrow {\mathcal I}_n(213)$ inductively. Let $A\in G_n$. We define $\varphi^{-1}(\emptyset)=12\dots n$ and otherwise for $A\neq \emptyset$
\begin{equation}
\hat{A} = \left\{
\begin{array}{ll}
(A\setminus \{\min A,n-1\})-1 & n-1\in A,\\
A-1 & n-1\notin A.
\end{array}
\right.
\label{eq:setB}
\end{equation}
so
$$\varphi^{-1}(A)=\varphi^{-1}(\hat{A})+(\min A,n).$$
This is well defined because $\hat{A}\in G_{n-2}$ and $\varphi^{-1}(A)$ avoids $213$ since $\min A\leq \min \hat{A}+1$ by Lemma~\ref{lemma:213Des}.
We lastly need to show that these two maps are indeed inverses. The cases of $n=1$ or $2$ are easy, so we can assume that $n>2$ and that $\varphi$ is a bijection ${\mathcal I}_k(213)\rightarrow G_k$ for $k<n$. Let $A\in G_n$. If $A=\emptyset$ then $\varphi \circ \varphi^{-1}(A)=\varphi(12\dots n)=A$. In any other case we define $\hat{A}$ as in equation~\eqref{eq:setB} and we have $\varphi \circ \varphi^{-1}(A)=\Des(\varphi^{-1}(\hat{A})+(\min A,n))$. By induction $\Des(\varphi^{-1}(\hat{A}))=\hat{A}$. Consider the case where $n-1\notin A$; then we defined $\hat{A}=A-1$ so $\min A=\min \hat{A}+1$ and by Lemma~\ref{lemma:213Des} this implies $\Des(\varphi^{-1}(\hat{A})+(\min A,n))=\hat{A}+1=A$ and we are done. Otherwise $n-1\in A$ and $\hat{A}=(A\setminus\{\min A,n-1\})-1$, which implies $\min A <\min \hat{A}+1$ so by Lemma~\ref{lemma:213Des} we have $\Des(\varphi^{-1}(\hat{A})+(\min A,n))=(\hat{A}+1)\cup \{\min A,n-1\}=A$ and we are done.
For the other direction we need to show for $\iota \in {\mathcal I}_n(213)$ that $\varphi^{-1}\circ\varphi(\iota)=\iota$. If $n$ is a fixed point of $\iota$ then $\iota = 12\dots n$ because $\iota$ avoids $213$. Then $\varphi^{-1}(\varphi(12\dots n))=\varphi^{-1}(\emptyset)=12\dots n$. We will now assume that $n$ is not a fixed point and $\iota = \hat{\iota}+ (i,n)$ for $i<n$ and $\hat{\iota}\in {\mathcal I}_{n-2}(213)$. We next consider the set $\varphi(\iota)=\Des(\iota)=A$ and its associated $\hat{A}$ set determined by equation~\eqref{eq:setB}. Note that $i=\min A$ because $\iota$ avoids $213$ and if we have $\hat{A}=\Des(\hat{\iota})$ then we have $\varphi^{-1}(\hat{A})=\hat{\iota}$ by induction so $\varphi^{-1}(A)=\hat{\iota}+(\min A,n)=\iota$. So all we have to show is that $\Des(\hat{\iota})=\hat{A}$. Consider the case where $i= \min \Des(\hat{\iota})+1$ so then
by Lemma~\ref{lemma:213Des} we know $\Des(\iota)=\Des(\hat{\iota})+1$, which implies that $n-1\notin \Des(\iota)$ so by equation~\eqref{eq:setB} $\hat{A}=\Des(\hat{\iota})$. In the other case $i<\min \Des(\hat{\iota})+1$ so we have $\Des(\iota)=(\Des(\hat{\iota})+1)\cup \{i,n-1\}$ and $n-1\in \Des(\iota)$. Also in this case $\hat{A}=\Des(\hat{\iota})$, hence, $\varphi$ is a bijection.
\end{proof}
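Since $\varphi$ sends an involution to its descent set, Lemma~\ref{lemma:varphi} implies in particular that distinct $213$-avoiding involutions of the same length have distinct descent sets, and, combined with the later lemmas, that $|{\mathcal I}_n(213)|=|{\mathcal I}_n(132)|$. Both consequences can be confirmed by brute force for small $n$; the Python sketch below is purely illustrative and all helper names are ours, not from the text.

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def des_set(p):
    # Des(p) = {i : p(i) > p(i+1)}
    return frozenset(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def varphi_check(n):
    av213 = [p for p in involutions(n) if avoids(p, (2, 1, 3))]
    av132 = [p for p in involutions(n) if avoids(p, (1, 3, 2))]
    # Des is injective on I_n(213), and the two avoidance classes are
    # equinumerous, as the chain of bijections predicts
    return (len({des_set(p) for p in av213}) == len(av213)
            and len(av213) == len(av132))
```

For instance, at $n=4$ both avoidance classes have six elements and the six descent sets of the $213$-avoiders are pairwise distinct.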
We have a similar lemma describing the conditions for when an involution $\hat{\iota} +(1,i)$ avoids 132 and its resulting descent set.
\begin{lemma} Assume that $\iota = \hat{\iota} +(1,i)$, $i>1$, $\hat{\iota} \in {\mathcal I}_{n-2}$ and $\iota\in {\mathcal I}_n$. Let $d=-1$ if $ \Des(\hat{\iota})=\emptyset$ and otherwise $d=\max\Des(\hat{\iota})$. Then $\iota$ avoids $132$ if and only if $i\geq d+2$ and $\hat{\iota}$ avoids 132. Also,
$$
\Des(\iota)= \left\{
\begin{array}{ll}
\Des(\hat{\iota})+1 & i=d+2 \\
(\Des(\hat{\iota})+1)\cup\{1,i-1\}& i>d+2.\hfill
\end{array}\right.
$$
\vspace{-1cm}
\hfill \qed
\end{lemma}
We exclude the proof because it is similar to the proof of Lemma~\ref{lemma:213Des}. There is also a map ${\mathcal I}_n(132)\rightarrow L_n$ similar to $\varphi:{\mathcal I}_n(213)\rightarrow G_n$.
\begin{lemma}
There is a bijection $\psi:{\mathcal I}_n(132)\rightarrow L_n$ where $\iota$ is sent to $\Des(\iota)$. \hfill \qed
\label{lemma:psi}
\end{lemma}
The proof is similar to the proof of Lemma~\ref{lemma:varphi}, so it is left out. The last piece of the bijection we need is the one between the sets $G_n$ and $L_n$.
In this next lemma for sets $A=\{a_1,a_2,\dots ,a_k\}$ and $B=\{b_1,b_2,\dots ,b_k\}$, where we write the elements in increasing order, define $A\leq B$ if $a_i\leq b_i$ for all $i$. This relation is only for sets with equal cardinality.
\begin{lemma}The map $f:G_n\rightarrow L_n$ defined by $f(A)=[n-1]\setminus A$ is a bijection.
\label{lemma:f}
\end{lemma}
\begin{proof}
This map has an inverse, which is itself, so our work in this proof will be to show that this map is well defined. For this proof we will let $A=\{a_1,a_2,\dots, a_k\}\subseteq [n-1]$ where the elements are written in increasing order and $S=\{s_1,s_2,\dots, s_k\}\subseteq[n-1]$ where $s_i=n-a_{k-i+1}$ so that the elements of $S$ are also in increasing order. Similarly, we will let $B=[n-1]\setminus A=\{b_1,b_2,\dots, b_{n-k-1}\}$ with $b_i<b_{i+1}$ and $T=\{t_1,t_2,\dots, t_{n-k-1}\}\subseteq[n-1]$ where $t_i=n-b_{n-k-i}$ so that the elements of $T$ increase. Our goal is to show $A\in G_n$ if and only if $B\in L_n$.
First we will note that $S=[n-1]\setminus T$. Secondly we will note that $a_{k-i+1}+s_i=n$ so $a_i+a_{k-i+1}\geq n$ is equivalent to $a_i\geq s_i$. Thus, $A\in G_n$ if and only if $A\geq S$. Similarly $B\in L_n$ if and only if $B\leq T$.
We will next argue that if $S\leq A$ then $[n-1]\setminus S\geq [n-1]\setminus A$. We will argue this by inducting on the cardinality $|S|=|A|=k$. The base case is $k=0$ where $S=A=\emptyset$, in which case vacuously we have $S\leq A$ and certainly $[n-1]\setminus \emptyset \geq [n-1]\setminus \emptyset$. Otherwise $S$ and $A$ have minimum elements $s_1\leq a_1$ respectively. It is not hard to see that $S\setminus \{s_1\}\leq A\setminus \{a_1\}$ so by induction $[n-1]\setminus(S\setminus \{s_1\})\geq [n-1]\setminus(A\setminus \{a_1\})$ or equivalently $([n-1]\setminus S)\cup \{s_1\}\geq ([n-1]\setminus A)\cup \{a_1\}$. Generally it is not hard to see for sets $|U|=|V|$ that if $U\leq V$ with $u\in U$, $v\in V$ and $v\leq u$ then $U\setminus\{u\}\leq V\setminus \{v\}$. From this since $s_1\leq a_1$ we have $[n-1]\setminus S \geq [n-1]\setminus A$.
Putting everything together $A\in G_n$ if and only if $S\leq A$ if and only if $[n-1]\setminus S \geq [n-1]\setminus A$. This is equivalently $T\geq B$, which is true if and only if $B\in L_n$.
\end{proof}
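The membership tests extracted in this proof ($A\in G_n$ if and only if $a_i+a_{k-i+1}\geq n$ for all $i$, and $B\in L_n$ if and only if $b_i+b_{j-i+1}\leq n$ for all $i$, where $j=|B|$) make the lemma easy to confirm by exhaustive search over all subsets of $[n-1]$ for small $n$. The Python sketch below assumes exactly these characterizations; the function names are ours.

```python
from itertools import chain, combinations

def in_G(A, n):
    # assumed characterization from the proof: A in G_n iff
    # a_i + a_{k-i+1} >= n for all i, where A = {a_1 < ... < a_k}
    a = sorted(A)
    k = len(a)
    return all(a[i] + a[k - 1 - i] >= n for i in range(k))

def in_L(B, n):
    # assumed characterization from the proof: B in L_n iff
    # b_i + b_{j-i+1} <= n for all i, where B = {b_1 < ... < b_j}
    b = sorted(B)
    j = len(b)
    return all(b[i] + b[j - 1 - i] <= n for i in range(j))

def complement_check(n):
    ground = set(range(1, n))
    subsets = chain.from_iterable(combinations(ground, r)
                                  for r in range(len(ground) + 1))
    # f(A) = [n-1] \ A should send G_n exactly onto L_n and vice versa
    return all(in_G(set(A), n) == in_L(ground - set(A), n) for A in subsets)
```

For example, with $n=3$ the set $\{2\}$ lies in $G_3$ and its complement $\{1\}$ lies in $L_3$, while $\{1\}$ fails the $G_3$ condition and its complement $\{2\}$ fails the $L_3$ condition.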
Now putting all our maps together we get a bijection from ${\mathcal I}_n(213)$ to ${\mathcal I}_n(132)$ such that if $\iota$ is mapped to $\tau$ we have that $\Des(\iota)=[n-1]\setminus \Des(\tau)=\Asc(\tau)$.
\begin{thm} For $n\geq 0$,
$$M{\mathcal I}_n(213) = q^{\binom{n}{2}}M{\mathcal I}_n(132;q^{-1}).$$
\label{thm:132symm213}
\end{thm}
\begin{proof}
This equality is true because there is a bijection $\psi^{-1}\circ f\circ\varphi:{\mathcal I}_n(213)\rightarrow {\mathcal I}_n(132)$ using Lemmas~\ref{lemma:varphi},~\ref{lemma:psi} and~\ref{lemma:f}. Further, if $\iota \in {\mathcal I}_n(213)$ then $\varphi(\iota)=\Des(\iota)$, $f(\Des(\iota))=[n-1]\setminus\Des(\iota)$ and $\psi^{-1}([n-1]\setminus \Des(\iota))=\tau$ so $\Des(\tau)=[n-1]\setminus \Des(\iota)$.
\end{proof}
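Theorem~\ref{thm:132symm213} amounts to saying that the multiset $\{\maj(\iota):\iota\in{\mathcal I}_n(213)\}$ equals the multiset $\{\binom{n}{2}-\maj(\tau):\tau\in{\mathcal I}_n(132)\}$, which can be checked directly for small $n$. A brute-force Python sketch (helper names ours):

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def maj(p):
    # sum of the descent positions of p
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def symm_213_132(n):
    top = n * (n - 1) // 2          # binom(n, 2)
    m213 = sorted(maj(p) for p in involutions(n) if avoids(p, (2, 1, 3)))
    m132 = sorted(top - maj(p) for p in involutions(n) if avoids(p, (1, 3, 2)))
    return m213 == m132
```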
As a corollary we can prove that $M{\mathcal I}_n(213)$ has similar but symmetric internal zeros as were found in $M{\mathcal I}_n(132)$, which was proven in Proposition~\ref{thm:132internalzeros}.
\begin{cor} If $\iota \in {{\mathcal I}}_n(213)$ then
\begin{enumerate}[(i)]
\item $\maj(\iota)=0$ or $\maj(\iota) \geq {\lceil n/2 \rceil}$
\item this bound is sharp and
\item for every $k\geq {\lceil n/2 \rceil}$, $k\leq \binom{n}{2}$ there exists some $\iota \in {{\mathcal I}}_n(213)$ with $\maj(\iota)=k$.
\end{enumerate}
\end{cor}
\begin{proof}Using Proposition~\ref{thm:132internalzeros} and the map in Theorem~\ref{thm:132symm213} we quickly get this result.
\end{proof}
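The internal-zero behavior described in the corollary is also easy to confirm exhaustively for small $n$; the Python sketch below (names ours) checks parts (i)--(iii) at once.

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def maj(p):
    # sum of the descent positions of p
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def internal_zeros_213(n):
    majs = {maj(p) for p in involutions(n) if avoids(p, (2, 1, 3))}
    lo = -(-n // 2)                    # ceil(n/2)
    hi = n * (n - 1) // 2              # binom(n, 2)
    gap = all(m == 0 or m >= lo for m in majs)        # part (i)
    onto = all(k in majs for k in range(lo, hi + 1))  # parts (ii) and (iii)
    return gap and onto
```

At $n=4$, for instance, the attained values of $\maj$ over ${\mathcal I}_4(213)$ are exactly $\{0,2,3,4,5,6\}$.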
\subsection{The pattern 321}
\label{subsec321}
In this section we will show another interpretation for the standard $q$-analogue of the binomial coefficient defined in equation~\eqref{eq:qbinom}.
It turns out that $M{\mathcal I}_n(321)$ is equal
to the standard $q$-analogue for the central binomial coefficient. This result was proven independently by Barnabei et al.~\cite{BBES14} (Theorem 3.3) whose proof gives a connection to hook decompositions. Our proof has the advantage of being shorter and gives a connection to the concept of a core. The core is a concept due to Greene and Kleitman~\cite{GK76} (page 82) that originated in the study of posets. It has traditionally been used to prove that a poset has a symmetric-chain decomposition, but our use of it is new and quite different from the original. Also, our proof can be easily generalized to give another interpretation for the general $q$-analogue for the binomial coefficient, not just the central one, which we present at the end of this section in Corollary~\ref{cor:321t<=k}. This result also appears in~\cite{BBES16} (Corollary 14) by Barnabei et al. Parts of the bijection we present can be seen in~\cite{BBES16} and~\cite{EFPT15} in their association between involutions avoiding 321 and Dyck paths.
Given a length $n$ word composed of left parentheses and right parentheses the core is a subsequence of the word and is defined inductively. To find the core we begin by matching a left parenthesis with a right parenthesis if they are adjacent and the left parenthesis is on the left. Excluding all previously matched parentheses we continue to match more pairs in a similar manner until there are no more possible matchings. The subsequence that contains all of the matched parentheses is called the {\it core}. We say that a specific parenthesis is in the core if that parenthesis is part of a matching. Similarly, we will say an index $i$ is part of the core if the parenthesis at index $i$ is part of the core. For example the word $(()()))((()($ has the core $(()())()$ and the indices $\{1, 2, 3, 4, 5, 6, 10, 11\}$ are in the core.
Given a binary word of $0$s and $1$s we can similarly define its core. Consider all $1$s to be left parentheses and all $0$s to be right parentheses. With this we equate the word $(()()))((()($ to $110100011101$ and its core is $11010010$, which still occurs on the indices $\{1,2,3,4,5,6,10,11\}$. Note that the core itself is a perfect matching whose index set inside the word can be broken down uniquely into disjoint intervals with the following properties. The first property is that the subsequence of the core associated to any one of the intervals is itself a perfect matching. The second is that no interval can be broken into two intervals that both satisfy the first property. We will call each interval, or the subsequence of the core associated to that interval, a {\it block}. In our example we have two blocks that are $110100$ and $10$.
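Computing the core is a single left-to-right pass with a stack, exactly as when matching parentheses. A minimal Python sketch of this procedure (ours, not from the text):

```python
def core_indices(w):
    # 1-based indices of the core of a binary word w, treating each 1 as a
    # left parenthesis and each 0 as a right parenthesis; a 0 is matched
    # with the nearest unmatched 1 to its left
    stack, matched = [], set()
    for i, letter in enumerate(w, start=1):
        if letter == 1:
            stack.append(i)
        elif stack:              # a 0 with an unmatched 1 available
            matched.add(stack.pop())
            matched.add(i)
    return matched
```

On the running example $110100011101$ this returns $\{1,2,3,4,5,6,10,11\}$, in agreement with the core $11010010$ found above.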
The next two lemmas will establish some basic facts about the core and about descents in involutions avoiding 321.
\begin{lemma}
Let $\iota \in {\mathcal I}_n(321)$.
If $d\in \Des(\iota)$ then $\iota$ has the two-cycle $(d,d+1)$ or two distinct two-cycles $(d,t)$ and $(s,d+1)$ such that $d<t$ and $s<d+1$.
\label{lemma:321Des}
\end{lemma}
\begin{proof}
Let $d\in \Des(\iota)$, which implies that $(d,d+1)$ is an inversion. By Lemma~\ref{lemma:321structure} we then have two-cycles $(d,t)$ with $d<t$ and $(s,d+1)$ with $s<d+1$ that may not be distinct. If the two-cycles are distinct we are done, and if they are not then we have the two-cycle $(d,d+1)$.
\end{proof}
\begin{lemma}
Let $w$ be a binary word.
\begin{enumerate}[(i)]
\item The subword of $w$ composed of all elements not in the core is weakly increasing.
\item If the $i$th $1$ inside the core occurs at index $s$ in $w$ and the $i$th $0$ occurs at $t$ in $w$ then $s<t$ and all indices in the interval $[s,t]$ are inside the core.
\end{enumerate}
\label{lemma:binarystructure}
\end{lemma}
\begin{proof}
Say that the subword of $w$ composed of all elements outside the core is not weakly increasing, which means there is some strict decrease. For a binary word to have a strict decrease it would need a $1$ directly followed by a $0$. By the inductive construction of the core, these two letters would be matched and be inside the core, which is a contradiction.
Each block of the core has an equal number of $1$s and $0$s since it is a perfect matching. As a result, if we let $m_i$ denote the total number of matched pairs in the first $i$ blocks (with $m_0=0$), the first block will have the $1$st through $m_1$th $1$ and $0$, the second block will have the $(m_1+1)$st through $m_2$th $1$ and $0$, and generally the $i$th block will have the $(m_{i-1}+1)$st through $m_i$th $1$ and $0$. This means that the $j$th $1$ and the $j$th $0$ will always be in the same block. Since blocks occur on consecutive indices, all letters between the $j$th $1$ and the $j$th $0$ are in the core. \end{proof}
Now we are ready to delve into the main topic of this section, that $M{\mathcal I}_n(321)$ is a standard $q$-analogue for the central binomial coefficient. Our method of proof is to show that there is a bijection from ${\mathcal I}_n(321)$ to another combinatorial object whose generating function is well established to be the standard $q$-analogue of the binomial coefficient, see~\cite{S97} (exercise 1.56).
\begin{prop}If $W_{n, k}$ is the set of binary words of length $n$ with
$n-k$ zeros and $k$ ones then
$${n \brack k}_q = \sum_{w\in W_{n,k}} q^{\maj(w)}.$$
\label{prop:qbinom}
\vspace{-1cm}
\hfill \qed
\end{prop}
In proving the next theorem we establish a bijection from involutions avoiding 321 to binary words that preserves the descent set using the concept of core.
\begin{thm}
For $n\geq 0$ we have the following equality of $q$-analogues,
$$M{\mathcal I}_n(321)={n \brack {\ceil{n/2}}}_q.$$
\label{theorem:321qanalougeequiv}
\end{thm}
\begin{proof}
To prove the equality we will use a well-known interpretation of the $q$-analogue for binomial coefficients stated in Proposition~\ref{prop:qbinom}. We will construct a bijection $\phi:{\mathcal I}_n(321)\rightarrow W_{n,\ceil{n/2}}$ that preserves the descent set. Preserving the descent set will preserve the major index, which will give us the equalities
$$M{\mathcal I}_n(321)= \sum_{\iota \in {\mathcal I}_n(321)} q^{\maj (\iota)}= \sum_{w\in W_{n,\ceil{ n/2}}} q^{\maj(w)}={n \brack {\ceil{ n/2}}}_q.$$
Let $\iota \in {\mathcal I}_n(321)$ have two-cycles $(s_1,t_1), (s_2,t_2),\dots, (s_m,t_m)$ such that $s_i<t_i$ and $s_i<s_{i+1}$. Also, let the fixed points be $f_1,f_2,\dots, f_{n-2m}$ such that $f_i<f_{i+1}$ for all $i$. We want to define a binary word $\phi(\iota)=w=w_1\dots w_n$ that has ${\lceil n/2 \rceil}$ ones and ${\lfloor n/2 \rfloor}$ zeros. Note that $2m\leq n$ so $m\leq \floor{n/2}$, which means $a_1=\ceil{n/2}-m \geq 0$ and $a_0=\floor{n/2}-m\geq 0$. We define $w$ to be the binary word with $w_{s_i}=1$, $w_{t_i}=0$ and we replace the remaining letters that form the subword $w_{f_1}w_{f_2}\dots w_{f_{n-2m}}$ with $0^{a_0}1^{a_1}$ where $i^j$ is the word of $j$ consecutive $i$'s. We can easily see that $w$ has ${\ceil{ n/2 }}$ ones and ${\floor{n/2}}$ zeros, so $\phi$ is well defined. For example $\phi(132458967)=010111100$.
Say that we have a binary word $w\in W_{n,\ceil{n/2}}$.
Let $s_i$ be the index in $w$ at which the $i$th $1$ in the core appears and $t_i$ be the index in $w$ at which the $i$th $0$ in the core appears. Note that $s_i<s_{i+1}$ since the $i$th 1 is before the $(i+1)$st $1$. Similarly $t_i<t_{i+1}$. We define the involution $\phi^{-1}(w)=\iota$ to be
$$ \iota(j) = \left\{
\begin{array}{lr}
t_i & j=s_i,\\
s_i & j=t_i,\\
j&\text{else}.
\end{array}
\right.$$
We can easily see since all $s_i$'s and $t_i$'s are distinct so $\iota$ has the two-cycles $(s_i,t_i)$ and everything else is a fixed point. This means that $\iota$ is an involution. Say $f_1,f_2,\dots, f_{n-2m}$ are the fixed points listed so that $f_i<f_{i+1}$.
We will now show that this involution avoids $321$.
Consider the subword $x$ of $\iota$ that occurs at the indices $\{s_1,\dots, s_m,f_1,\dots, f_{n-2m}\}$. We will show that this subword is increasing by showing that it does not have any inversions. Since $\iota(s_i)<\iota(s_{i+1})$ and $f_i<f_{i+1}$ any inversion would have to occur between a pair of indices $s_i$ and $f_j$. By Lemma~\ref{lemma:binarystructure} since $s_i<t_i$ we know that all indices in the interval $[s_i,t_i]$ are in the core. Say $s_i<f_j$. Since index $f_j$ is not in the core and all indices in $[s_i,t_i]$ are in the core we must have that $\iota(s_i)=t_i<f_j$. Similarly if $f_j<s_i$ then $f_j<t_i$. This shows that $x$ does not have any inversions and is increasing.
The subword $\iota(t_1)\iota(t_2)\dots \iota(t_m)$ is also increasing. This means that $\iota$ is composed of two disjoint increasing subsequences, so the longest decreasing subsequence has length at most two. From this we can conclude that $\iota$ avoids $321$ and $\phi^{-1}$ is well defined.
Next, we will show that the two maps are inverses. It suffices to show that $\phi(\phi^{-1}(w))=w$ since $|{\mathcal I}_n(321)|=|W_{n,\ceil{n/2}}|$. Let $w\in W_{n,\ceil{n/2}}$, $\phi^{-1}(w)=\iota$ and $\phi(\iota)=v$. If $w_i$ is the $r$th $1$ in the core of $w$ then $\iota$ has a two-cycle $(i,j)$ where $w_j$ is the $r$th $0$ in the core of $w$.
Since $w_{j}$ is the index of the $r$th $0$ in the core and according to Lemma~\ref{lemma:binarystructure} the $r$th $1$ occurs before the $r$th $0$, we have that $i<j$. This means that $v_i=1$ and $v_j=0$, so for all indices $i$ inside the core of $w$ we have that $w_i=v_i$.
Say the core of $w$ has $2m$ elements then $\iota$ has $m$ two-cycles. Let $a_1=\ceil{n/2}-m$ and $a_0=\floor{n/2}-m$. By definition of $\phi$, the subword of $v$ corresponding to fixed points of $\iota$, or equivalently the indices outside the core of $w$, is $0^{a_0}1^{a_1}$.
Note that the subword of $w$ composed of letters outside the core is made of $a_1$ ones and $a_0$ zeros. By Lemma~\ref{lemma:binarystructure} this subword is weakly increasing and so must equal $0^{a_0}1^{a_1}$. Hence $v_i=w_i$ for all indices $i$ outside the core of $w$ so $w=v$.
Lastly, we will show that if $\phi(\iota)=w$ then $\Des(\iota)=\Des(w)$. First we will make a quick note about $\phi^{-1}$. If $w_i=1$ is the $r$th one in the core of $w$, then the $r$th zero occurs at $w_j$ for some $i<j$. This means that the corresponding involution $\iota$ has the two-cycle $(i,j)$ with $i<j=\iota(i)$. Similarly, if $w_j$ is the $r$th $0$ in the core then $\iota$ has the two-cycle $(i,j)$ with $i =\iota(j)<j$. Say $d\in \Des(w)$ then $w_d=1$ and $w_{d+1}=0$, which implies that both these indices are in the core.
From the map $\phi^{-1}$ since $w_d=1$ is in the core $\iota$ must have a two-cycle $(d,\iota(d))$ with $d<\iota(d)$.
Similarly, $\iota$ must have a two-cycle $(\iota(d+1),d+1)$ with $\iota(d+1)<d+1$. It is possible for these two-cycles to be the same, but in either case this implies that $\iota(d)>\iota(d+1)$ and $d\in \Des(\iota)$.
Conversely consider $d\in \Des(\iota)$. According to Lemma~\ref{lemma:321Des} we must have the two-cycle $(d,d+1)$ or a pair of two-cycles $(d,\iota(d))$ and $(\iota(d+1),d+1)$ with $\iota(d+1)<d+1$ and $d<\iota(d)$. In either case this implies that $w_d=1$, $w_{d+1}=0$ and $d\in \Des(w)$. Hence $\Des(w)=\Des(\iota)$.
\end{proof}
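The map $\phi$ of the preceding proof is straightforward to implement directly from its description: two-cycle minima become $1$s, two-cycle maxima become $0$s, and the fixed points receive $0^{a_0}1^{a_1}$ read left to right. The following hedged Python sketch (names ours) also lets one confirm for small $n$ that $\phi$ is a descent-preserving bijection onto $W_{n,\ceil{n/2}}$.

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def des_set(p):
    # Des(p) = {i : p(i) > p(i+1)}; works for involutions and binary words
    return frozenset(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def phi(iota):
    # two-cycle minima -> 1, two-cycle maxima -> 0; fixed points, read
    # left to right, receive a0 zeros then a1 ones
    n = len(iota)
    m = sum(iota[i - 1] > i for i in range(1, n + 1))  # number of two-cycles
    a0 = n // 2 - m
    w, seen_fixed = [None] * n, 0
    for i in range(1, n + 1):
        if iota[i - 1] > i:
            w[i - 1] = 1
        elif iota[i - 1] < i:
            w[i - 1] = 0
        else:
            w[i - 1] = 0 if seen_fixed < a0 else 1
            seen_fixed += 1
    return tuple(w)

def phi_check(n):
    av = [p for p in involutions(n) if avoids(p, (3, 2, 1))]
    images = {phi(p) for p in av}
    ones = -(-n // 2)                 # ceil(n/2)
    return (len(images) == len(av)
            and all(sum(w) == ones for w in images)
            and all(des_set(p) == des_set(phi(p)) for p in av))
```

For the example in the proof, `phi((1, 3, 2, 4, 5, 8, 9, 6, 7))` returns the word $010111100$.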
By slightly modifying the proof from Theorem~\ref{theorem:321qanalougeequiv} we derive another interpretation for the standard $q$-analogue for the binomial coefficients.
This result also appears in~\cite{BBES16} by Barnabei et al.
\begin{cor}[Barnabei et al.~\cite{BBES16} {Corollary 14}]
Let $t(\iota)$ be the number of two-cycles in $\iota$ and $k\leq n/2$. Then we have the following equality of $q$-analogues,
$$ \underset{t(\iota)\leq k}{\sum_{\iota \in {\mathcal I}_n(321)}} q^{\maj(\iota)}= {n\brack k}_q.$$
\label{cor:321t<=k}
\end{cor}
\begin{proof} This proof will be similar to the proof of Theorem~\ref{theorem:321qanalougeequiv}. The bijection will instead be defined from length $n$ binary words with $k\leq n/2$ ones and $n-k$ zeros to involutions in ${\mathcal I}_n(321)$ that have at most $k$ two-cycles. The map and its inverse will be defined exactly the same except for a small modification in $\phi$ where we alter the number of ones and zeros we want in our binary word. The bijection will be well defined since the changed number of ones and zeros will bound the maximum number of possible two-cycles.
\end{proof}
Barnabei et al.~\cite{BBES16} used Corollary~\ref{cor:321t<=k} to describe the generating function for $\maj$ over involutions avoiding 321 where the number of two-cycles is fixed.
\begin{cor}[Barnabei et al.~\cite{BBES16} {Corollary 14}] For $n\geq 1$ and $k\leq n/2$ we have
$$\underset{t(\iota)= k}{\sum_{\iota\in{\mathcal I}_{n}(321)}}q^{\maj(\iota)}={n\brack k}_q - {n\brack k-1}_q.$$
\label{cor:321t=k}
\end{cor}
\begin{proof}Using Corollary~\ref{cor:321t<=k} we have the series of equalities
$$\underset{t(\iota)= k}{\sum_{\iota\in{\mathcal I}_{n}(321)}}q^{\maj(\iota)}=\underset{t(\iota)\leq k}{\sum_{\iota\in{\mathcal I}_{n}(321)}}q^{\maj(\iota)}-\underset{t(\iota)\leq k-1}{\sum_{\iota\in{\mathcal I}_{n}(321)}}q^{\maj(\iota)}={n\brack k}_q - {n\brack k-1}_q,$$
which finishes the proof.
\end{proof}
Though fixed-point-free involutions were used to determine $I{\mathcal I}_n(321)$ in Section~\ref{sec:inv321} we see in this section that they are not required to determine $M{\mathcal I}_n(321)$. However, using the previous corollary we can ascertain the generating function in the fixed-point-free case.
\begin{cor} For $n=2m\geq 2$ we have
$$MF{\mathcal I}_{2m}(321)={2m\brack m}_q - {2m\brack m-1}_q.$$
\vspace{-1cm}
\hfill \qed
\vspace{.5cm}
\label{cor:321fpf}
\end{cor}
\subsection{The pattern 123}
\label{maj123}
There is a similar symmetry regarding $M{\mathcal I}_n(123)$ and $M{\mathcal I}_n(321)$ as we found for the patterns 132 and 213. This symmetry is not present when restricting to fixed-point-free involutions because the enumerations for $F{\mathcal I}_{2m}(321)$ and $F{\mathcal I}_{2m}(123)$ have been shown to be different by Deutsch, Robertson, and Saracino in~\cite{DRS07} who enumerated the avoidance classes by number of fixed points. They found $|F{\mathcal I}_{2m}(123)|=\binom{2m-1}{m}$ but $|F{\mathcal I}_{2m}(321)|=C_m$ in~\cite{DRS07} (Theorem 2.1).
It has been shown by many including Simion and Schmidt~\cite{ss:rp}, Barnabei et al.~\cite{BBES14} and Deutsch et al.~\cite{DRS07} that there is a symmetry between $M{\mathcal I}_n(123)$ and $M{\mathcal I}_n(321)$, specifically $M{\mathcal I}_n(123)=q^{\binom{n}{2}}M{\mathcal I}_n(321;q^{-1})$, which is shown again here using the RSK correspondence and tableau transposition. Another detailing of this map by B\'{o}na and Smith can be found in~\cite{BS16} (Section 3) whose description bypasses the RSK algorithm and transposition. This symmetry is essential in determining the form of $M{\mathcal I}_n(123)$ since we have established that $M{\mathcal I}_n(321)$ is the standard $q$-analogue for the central binomial coefficient in Theorem~\ref{theorem:321qanalougeequiv}.
\begin{prop}
For $n\geq 0$,
$$M{\mathcal I}_n(123) =q^{\binom{n}{2}} M{\mathcal I}_n(321;q^{-1}).$$
\label{thm:123&321 symmetry}
\end{prop}
The standard proof is a bijection between involutions using SYT, which were defined in Section~\ref{sec:inv321}. There are two facts that we will need to recall from Proposition~\ref{SYTfacts}. The first is that the length of the longest decreasing sequence in a permutation is equal to the length of the first column of its SYT, and the length of the longest increasing sequence in a permutation is equal to the length of the first row of its SYT. The second is that $d$ is a descent of an involution $\iota$ if and only if $d+1$ appears in a lower row than $d$ in the associated SYT.
\begin{proof}[Proof of Proposition~\ref{thm:123&321 symmetry}]
It suffices to define a map from ${\mathcal I}_n(321)$ to ${\mathcal I}_n(123)$ such that $\iota$ is mapped to an involution with descent set $[n-1]\setminus \Des(\iota)$.
This is sufficient because then $\iota$ will be mapped to an involution with $\maj$ equal to $\binom{n}{2}-\maj(\iota)$.
The set ${\mathcal I}_n(321)$ contains involutions with longest decreasing sequences of length one or two. Similarly, the set ${\mathcal I}_n(123)$ contains involutions with longest increasing sequences of length one or two.
So the collection of SYT associated to ${\mathcal I}_n(321)$ is all SYT of size $n$ with at most two rows, and the collection of SYT associated to ${\mathcal I}_n(123)$ is all SYT of size $n$ with at most two columns. Note that the transpose of a SYT with at most two columns is a SYT with at most two rows. So if we apply the RSK correspondence to $\iota \in {\mathcal I}_n(321)$ to get a SYT $P$, transpose it to get $P^T$ and then apply the inverse RSK correspondence to $P^T$ to get another involution $\tau\in {\mathcal I}_n(123)$, we obtain a well-defined bijection from ${\mathcal I}_n(321)$ to ${\mathcal I}_n(123)$. This map is illustrated in Figure~\ref{fig:321to123}.
Let $\iota \in {\mathcal I}_n(321)$, $P$ be its SYT and $\iota\mapsto\tau$ by the map described in the previous paragraph. We will show that $i\in \Des(\iota)$ if and only if $i\notin \Des(\tau)$, which will imply that $\Des(\tau)=[n-1]\setminus \Des(\iota)$.
It is known that if $i\in \Des(\iota)$ then $i+1$ is in a row below $i$ in $P$. Because rows and columns strictly increase, this further implies that $i+1$ is in the same column as $i$ or in a column to the left of $i$ in $P$. Hence in $P^T$ we have $i+1$ in the same row as $i$ or in a row above $i$, and thus $i\notin\Des(\tau)$. For a very similar reason if $i\notin \Des(\iota)$ then $i\in \Des(\tau)$, which completes the proof.
\end{proof}
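Proposition~\ref{thm:123&321 symmetry} says that the multiset of major indices over ${\mathcal I}_n(123)$ equals the multiset $\{\binom{n}{2}-\maj(\tau):\tau\in{\mathcal I}_n(321)\}$, which is quickly confirmed by brute force for small $n$. A Python sketch (names ours):

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def maj(p):
    # sum of the descent positions of p
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def symm_123_321(n):
    top = n * (n - 1) // 2           # binom(n, 2)
    m123 = sorted(maj(p) for p in involutions(n) if avoids(p, (1, 2, 3)))
    m321 = sorted(top - maj(p) for p in involutions(n) if avoids(p, (3, 2, 1)))
    return m123 == m321
```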
\begin{figure}
\begin{align*}
\begin{tikzpicture}
\draw (0,0) node {${\mathcal I}_n(321)$};
\draw (0,1) node {3516247};
\draw (2,.5) node {$\longrightarrow$};
\end{tikzpicture}
&&
\begin{tikzpicture}
\draw (0,2) -- (2,2);
\draw (0,1.5) -- (2,1.5);
\draw (0,1) -- (1.5,1);
\draw (0,1) -- (0,2);
\draw (.5,1) -- (.5,2);
\draw (1,1) -- (1,2);
\draw (1.5,1) -- (1.5,2);
\draw (2,1.5) -- (2,2);
\draw (1,0) node {$P$};
\draw (.25,1.75) node {$1$};
\draw (.75,1.75) node {$2$};
\draw (1.25,1.75) node {$4$};
\draw (1.75,1.75) node {$7$};
\draw (.25,1.25) node {$3$};
\draw (.75,1.25) node {$5$};
\draw (1.25,1.25) node {$6$};
\draw (3,.5) node {$\longrightarrow$};
\end{tikzpicture}
&&
\begin{tikzpicture}
\draw (0,3) -- (1,3);
\draw (0,2.5) -- (1,2.5);
\draw (0,2) -- (1,2);
\draw (0,1.5) -- (1,1.5);
\draw (0,1) -- (.5,1);
\draw (0,1) -- (0,3);
\draw (.5,1) -- (.5,3);
\draw (1,1.5) -- (1,3);
\draw (1,.5) node {$P^{T}$};
\draw (.25,2.75) node {$1$};
\draw (.25,2.25) node {$2$};
\draw (.25,1.75) node {$4$};
\draw (.25,1.25) node {$7$};
\draw (.75,2.75) node {$3$};
\draw (.75,2.25) node {$5$};
\draw (.75,1.75) node {$6$};
\end{tikzpicture}
&&\begin{tikzpicture}
\draw (0,0) node {${\mathcal I}_n(123)$};
\draw (0,1) node {4271653};
\draw (-2,.5) node {$\longrightarrow$};
\end{tikzpicture}
\end{align*}
\caption{Illustration of $\phi:{\mathcal I}_n(321)\rightarrow {\mathcal I}_n(123)$ with $\Des(\phi(\iota))=[n-1]\setminus \Des(\iota)$.}
\label{fig:321to123}
\end{figure}
Combining Theorem~\ref{theorem:321qanalougeequiv} and Proposition~\ref{thm:123&321 symmetry} immediately determines $M{\mathcal I}_n(123)$.
\begin{cor}[Barnabei et al.~\cite{BBES14} Corollary 4.3]
We have for $n\geq 0$
$$M{\mathcal I}_n(123)=q^{\binom{n}{2}} {{n}\brack{\ceil{n/2}}}_{q^{-1}}.$$
\vspace{-1cm}
\hfill \qed
\vspace{.2cm}
\end{cor}
For every pattern excluding 231 we have considered the generating function in the fixed-point-free case. We do so here for the pattern 123. We rely on the result in Corollary~\ref{cor:321t=k}.
\begin{cor}We have for $m\geq 1$,
$$\overline{MF{\mathcal I}}_{2m}(123)=\sum_{k=0}^{2\floor{m/2}}(-1)^{k}{2m\brack k}_{q}.$$
\end{cor}
\begin{proof}
Proposition~\ref{SYTfacts} says if $\iota$ avoids 123 then the SYT $P$ of $\iota$ has one or two columns. Also, if $\iota$ is fixed-point-free then both of these columns have even length and $|\iota|=2m$ for some $m$. The $\tau\in{\mathcal I}_{2m}(321)$ associated to $P^T$ must then have at most two rows of even length. This is equivalent to $\tau$ avoiding 321 and having an even number of two-cycles. Since $\comaj(\iota)=\maj(\tau)$ we have the equality
$$\sum_{\iota\in F{\mathcal I}_{2m}(123)}q^{\comaj(\iota)}=\underset{t(\iota) \text{ is even}}{\sum_{\iota\in{\mathcal I}_{2m}(321)}}q^{\maj(\iota)}.$$
If the number of two-cycles is $2j$ with $1\leq j\leq \floor{m/2}$ then by Corollary~\ref{cor:321t=k} we have
$$\underset{t(\iota)=2j}{\sum_{\iota\in{\mathcal I}_{2m}(321)}}q^{\maj(\iota)}={2m\brack 2j}_q - {2m\brack 2j-1}_q.$$
Summing over all $1\leq j\leq \floor{m/2}$ and including the identity at $j=0$ gives us the result.
\end{proof}
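The corollary can be confirmed for small $m$ by computing the comajor-index distribution over $F{\mathcal I}_{2m}(123)$ by brute force and comparing it with the alternating sum of $q$-binomial coefficients, computed via the standard $q$-Pascal recursion ${n\brack k}_q={n-1\brack k-1}_q+q^k{n-1\brack k}_q$. The Python sketch below is illustrative only; all names are ours.

```python
from itertools import combinations, permutations

def involutions(n):
    # all involutions of {1,...,n} in one-line notation
    return [p for p in permutations(range(1, n + 1))
            if all(p[p[i] - 1] == i + 1 for i in range(n))]

def avoids(p, pat):
    # True if p contains no subsequence order-isomorphic to pat
    k = len(pat)
    return not any(all((c[i] < c[j]) == (pat[i] < pat[j])
                       for i in range(k) for j in range(i + 1, k))
                   for c in combinations(p, k))

def comaj(p):
    # sum of the ascent positions of p
    return sum(i + 1 for i in range(len(p) - 1) if p[i] < p[i + 1])

def gauss(n, k):
    # coefficient list of [n choose k]_q via the q-Pascal recursion
    # [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a, b = gauss(n - 1, k - 1), gauss(n - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return out

def fpf_123_check(m):
    n = 2 * m
    dist = {}
    for p in involutions(n):
        if all(p[i] != i + 1 for i in range(n)) and avoids(p, (1, 2, 3)):
            dist[comaj(p)] = dist.get(comaj(p), 0) + 1
    size = n * (n - 1) // 2 + 1
    total = [0] * size
    for k in range(0, 2 * (m // 2) + 1):
        sign = 1 if k % 2 == 0 else -1
        for i, c in enumerate(gauss(n, k)):
            total[i] += sign * c
    return all(total[i] == dist.get(i, 0) for i in range(size))
```

For $m=2$ both sides give $1+q^2+q^4$, coming from the three fixed-point-free $123$-avoiders $2143$, $3412$ and $4321$ of length four.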
\section{Multiple patterns}
\label{multi}
In this section we consider ${\mathcal I}_n(\pi_1,\pi_2,\dots, \pi_j)={\mathcal I}_n(S)$ the set of all involutions $\iota \in {\mathcal I}_n$ that avoid all the patterns in $S=\{\pi_1,\pi_2,\dots, \pi_j\}\subseteq {\mathfrak S}_3$ where $S$ contains more than one pattern. Two sets $S$ and $T$ of patterns are ${\mathcal I}$-Wilf equivalent if $|{\mathcal I}_n(S)|=|{\mathcal I}_n(T)|$ and we write $[S]_{{\mathcal I}}=[\pi_1,\pi_2,\dots ,\pi_j]_{{\mathcal I}}$ for the collection of sets of patterns ${\mathcal I}$-Wilf equivalent to $S$. The cardinalities for multiple pattern avoidance in involutions have been classified and enumerated by Guibert and Mansour in~\cite{GM02a} (Examples 2.6, 2.8, 2.12, 2.18, 2.20)
who classify all pattern sets containing 132, Egge and Mansour~\cite{EM04} who enumerate the sets containing the pattern 231 and Wulcan who enumerates all pairs of length three patterns in~\cite{W02}.
Also in this section we further describe the generating functions for $\inv$ and $\maj$ for multiple patterns. Since the avoidance classes and associated generating functions for multiple patterns are fairly simple we instead consider the statistics $\inv$, $\maj$ and $\des$ altogether as a single generating function.
For a set of patterns $S$ define
$$F_n(S;p,q,t)=F_n(S)=\sum_{\iota\in{\mathcal I}_n(S)}p^{\inv(\iota)}q^{\maj(\iota)}t^{\des(\iota)}$$
and similarly define
$$\bar{F}_n(S;p,q,t)=\bar{F}_n(S)=\sum_{\iota\in{\mathcal I}_n(S)}p^{\coinv(\iota)}q^{\comaj(\iota)}t^{\asc(\iota)}.$$
The functions $F_n(S)$ and $\bar{F}_n(S)$ determine each other since
$$F_n(S)=(pq)^{\binom{n}{2}}t^{n-1}\bar{F}_n(S;p^{-1},q^{-1},t^{-1}).$$
Two sets $S$ and $T$ of patterns are $I{\mathcal I}$-Wilf equivalent if $F_n(S;q,1,1)=F_n(T;q,1,1) $ and $M{\mathcal I}$-Wilf equivalent if $F_n(S;1,q,1)=F_n(T;1,q,1) $. Let $[S]_{I{\mathcal I}}=[\pi_1,\dots ,\pi_j]_{I{\mathcal I}}$ be the set of sets of patterns that are $I{\mathcal I}$-Wilf equivalent to $S$ and we similarly define
$[S]_{M{\mathcal I}}$ for the major index. We include the description for all equivalence classes and generating functions for all sets of multiple patterns for completeness of this study, but do not include the proofs.
We find, as before, any avoidance class that contains the pattern $231$ can be described as the avoidance class of permutations, which were studied by Dokos et al.~\cite{DDJSS12}. In~\cite{BBES16} (Section 4.3) Barnabei et al.\ find $M{\mathcal I}_n(213,321)$. Since the avoidance class for any set that contains 123 and 321 becomes empty for $n>5$ we exclude all sets with these two patterns.
\begin{prop} The decompositions for involutions that avoid two patterns are as follows.
\begin{enumerate}[(i)]
\item If $\iota \in {\mathcal I}_n(123, 132)$ then either $\iota = 12[ {\mathfrak d}_{n-1}, 1]$ or $\iota = 45312[ {\mathfrak d}_k,1, \tau, {\mathfrak d}_k, 1]$ where $\tau \in {\mathcal I}_{n-2k-2}(123, 132)$ and $0\leq k<\floor{n/2}$.
\item If $\iota \in {\mathcal I}_n(123, 213)$ then either $\iota = 12[1, {\mathfrak d}_{n-1}]$ or $\iota = 45312[1, {\mathfrak d}_k, \tau, 1, {\mathfrak d}_k]$ where $\tau \in {\mathcal I}_{n-2k-2}(123, 213)$ and $0\leq k<\floor{n/2}$.
\item If $\iota \in {\mathcal I}_n(123, 231)={\mathcal I}_n(123, 312)$ then $\iota = 12[{\mathfrak d}_{k},{\mathfrak d}_{n-k}]$ for $k \in [n]$ .
\item If $\iota \in {\mathcal I}_n(132, 231)={\mathcal I}_n(132, 312)$ then $\iota = 12[{\mathfrak d}_k, {\mathfrak i}_{n-k}]$ for $1\leq k \leq n$.
\item If $\iota \in {\mathcal I}_n(132,321)$ then $\iota = 213[{\mathfrak i}_k, {\mathfrak i}_k, {\mathfrak i}_{n-2k}]$ for some $0\leq k\leq \floor{n/2}$.
\item If $\iota \in {\mathcal I}_n(132, 213)$ then $\iota={\mathfrak i}_n$ or $\iota = 321[{\mathfrak i}_k,\tau, {\mathfrak i}_k]$ with $\tau \in {\mathcal I}_{n-2k}(213, 132)$ and $1\leq k\leq \floor{n/2}$.
\item If $\iota \in {\mathcal I}_n(213, 231)={\mathcal I}_n(213, 312)$ then $\iota = 12[{\mathfrak i}_k, {\mathfrak d}_{n-k}]$ for $0\leq k \leq n-1$.
\item If $\iota \in {\mathcal I}_n(213,321)$ then $\iota = 132[{\mathfrak i}_{n-2k}, {\mathfrak i}_k, {\mathfrak i}_k]$ for some $0\leq k\leq \floor{n/2}$.
\item If $\iota \in {\mathcal I}_n(312,321)$ then $\iota = {\mathfrak i}_k[\tau_1,\ldots , \tau_l]$ where $\tau_j =1$ or $\tau_j=21$ for all $j$. \hfill \qed
\end{enumerate}
\label{prop:twoavoid}
\end{prop}
From these decompositions we can quickly determine the cardinalities.
\begin{prop} The ${\mathcal I}$-Wilf equivalence classes for pairs of permutations in ${\mathfrak S}_3$ are as follows.
\begin{enumerate}[(i)]
\item $[123,132]_{{\mathcal I}}=\{\{123, 132\},\{123, 213\}, \{132,213\}\}$ with $|{\mathcal I}_n(123,132)|=2^{\lfloor n/2 \rfloor}$.
\item $[123, 231]_{{\mathcal I}}=\{\{123, 231\}, \{213, 231\}, \{132,231\}\}$ with $|{\mathcal I}_n(123,231)|=n$.
\item $[132,321]_{{\mathcal I}}=\{\{132,321\}, \{213,321\}\}$ with $|{\mathcal I}_n(321,132)|=\lfloor n/2 \rfloor +1$.
\item $[231,321]_{{\mathcal I}}=\{ \{231,321\} \}$ with $|{\mathcal I}_n(231,321)|=F_n$, the $n$th Fibonacci number. \hfill \qed
\end{enumerate}
\end{prop}
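As a sanity check (not part of the original argument), the cardinalities above are easy to confirm by brute force for small $n$; the helper names below are our own.

```python
from itertools import combinations, permutations

def contains(w, p):
    """True if w contains the pattern p as an order-isomorphic subsequence."""
    for idx in combinations(range(len(w)), len(p)):
        sub = [w[i] for i in idx]
        order = sorted(sub)
        if tuple(order.index(v) + 1 for v in sub) == tuple(p):
            return True
    return False

def involutions(n):
    """All involutions in S_n, as tuples."""
    return [w for w in permutations(range(1, n + 1))
            if all(w[w[i] - 1] == i + 1 for i in range(n))]

def count(n, patterns):
    """|I_n(patterns)|: involutions of S_n avoiding every pattern in the list."""
    return sum(1 for w in involutions(n)
               if not any(contains(w, p) for p in patterns))
```

For instance, `count(4, [(1,2,3),(1,3,2)])` returns $4=2^{\lfloor 4/2\rfloor}$, and the counts for $\{231,321\}$ satisfy the Fibonacci recurrence.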
We can also quickly determine the generating function $F_n(S)$ for any pair of patterns, which we present in Table~\ref{double}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$S=\{\pi_1,\pi_2\}$ & $F_n(S)$ or $\bar{F}_n(S)$\\
\hline
$\{123, 132\}$ & $\displaystyle \bar{F}_n(S;p,q,t)=(pq)^{n-1}t+\bar{F}_{n-2}(S;p,q,qt)+\sum_{k=1}^{\floor{n/2}-1} p^{2k}q^{n+k-1}t^2\bar{F}_{n-2k-2}(S;p,q,q^{k+1}t)$\\
\hline
$\{123, 213\}$ & $\displaystyle\bar{F}_n(S;p,q,t)=p^{n-1}qt+\bar{F}_{n-2}(S;p,q,qt)+\sum_{k=1}^{\floor{n/2}-1} p^{2k}q^{n-k+1}t^2\bar{F}_{n-2k-2}(S;p,q,q^{k+1}t)$\\
\hline
$\{123, 231\}$ & $\displaystyle \bar{F}_n(S;p,q,t)=1+\sum_{k = 1}^{n-1}p^{k(n-k)}q^{k}t$\\
\hline
$\{132, 231\}$ & $\displaystyle F_n(S;p,q,t)=\sum_{k = 1}^n(pq)^{\binom{k}{2}}t^{k-1}$\\
\hline
$\{132,321\}$ & $\displaystyle F_n(S;p,q,t)=1+\sum_{k = 1}^{\floor{n/2}}p^{k^2}q^kt$\\
\hline
$\{132, 213\}$ & $\displaystyle \bar{F}_n(S;p,q,t)=(pq)^{\binom{n}{2}}t^{n-1}+\sum_{k = 1}^{ \floor{n/2}}p^{k(k-1)}q^{n(k-1)}t^{2(k-1)}\bar{F}_{n-2k}(S;p,q,q^{k}t)$\\
\hline
$\{213, 312\}$ & $\displaystyle F_n(S;p,q,t)=\sum_{k = 0}^{n-1}p^{\binom{n-k}{2}}q^{\binom{n-k}{2}+k(n-k-1)}t^{n-k-1}$\\
\hline
$ \{213,321\}$ & $ \displaystyle F_n(S;p,q,t)=1+\sum_{k = 1}^{\floor{n/2}}p^{k^2}q^{n-k}t$\\
\hline
$ \{312,321\}$ & $F_n(S)=F_{n-1}(S)+pq^{n-1}tF_{n-2}(S)$\\
\hline
\end{tabular}
\end{center}
\caption{The generating functions for doubletons. }
\label{double}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$S$ & $F_n(S)$ or $\bar{F}_n(S)$\\
\hline
$\{123,132,213\}$ & $\displaystyle \bar{F}_n(S;p,q,t)=\bar{F}_{n-2}(S;p,q,qt)+p^{2}q^{n}t^2\bar{F}_{n-4}(S;p,q,q^2t)$\\
\hline
$\{123,132,231\}$ & $\displaystyle \bar{F}_n(S)=1+(pq)^{n-1}t$\\
\hline
$\{123,213,231\}$ & $\displaystyle \bar{F}_n(S)=1+p^{n-1}qt$\\
\hline
$\{132,213,321\}$&$\displaystyle F_{2k+1}(S)=1$ or $\displaystyle F_{2k}(S)=1+p^{k^2}q^kt$\\
\hline
$\{132,231,321\}$&$\displaystyle F_n(S)=1+pqt$\\
\hline
$\{132,213,231\}$&$\displaystyle F_n(S)=1+(pq)^{\binom{n}{2}}t^{n-1}$\\
\hline
$\{213,231,321\}$&$\displaystyle F_n(S)=1+pq^{n-1}t$\\
\hline
\end{tabular}
\end{center}
\caption{The generating functions for tripletons. }
\label{triple}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$S$ & $F_n(S)$ or $\bar{F}_n(S)$\\
\hline
$\{123,213,132,312\}$ & $F_n(S)=(pq)^{\binom{n}{2}}t^{n-1}$\\
\hline
$\{321,213,132,312\}$ & $F_n(S)=1$ \\
\hline
\end{tabular}
\end{center}
\caption{The generating functions for four patterns. }
\label{four}
\end{table}
We next describe the same for triples of patterns. Again, when describing the sets and functions, we exclude triples that contain both 231 and 312 or both 123 and 321.
\begin{prop} The decompositions for involutions that avoid three patterns are as follows.
\begin{enumerate}[(i)]
\item If $S = \{123,132,213\}$ then for $\iota \in {\mathcal I}_n(S)$ we have $\iota = 321[1, \tau, 1]$ for $\tau \in {\mathcal I}_{n-2}(S)$ or $\iota = 321[12, \tau, 12]$ for $\tau \in {\mathcal I}_{n-4}(S)$.
\item If $\iota \in {\mathcal I}_n(123,132,231)$ then $\iota$ is $12[{\mathfrak d}_{n-1},1]$ or ${\mathfrak d}_n$.
\item If $\iota \in {\mathcal I}_n(123,213,231)$ then $\iota$ is $12[1,{\mathfrak d}_{n-1}]$ or ${\mathfrak d}_n$.
\item If $\iota \in {\mathcal I}_n(321,132,213)$ then $\iota$ is ${\mathfrak i}_n$ and if $n=2k$ then $\iota$ could also be $21[{\mathfrak i}_k,{\mathfrak i}_k]$.
\item If $\iota \in {\mathcal I}_n(321,132,231)$ then $\iota$ is ${\mathfrak i}_n$ or $12[21,{\mathfrak i}_{n-2}]$.
\item If $\iota \in {\mathcal I}_n(321,213,231)$ then $\iota$ is ${\mathfrak i}_n$ or $12[{\mathfrak i}_{n-2},21]$.
\item If $\iota \in {\mathcal I}_n(213,132,231)$ then $\iota$ is ${\mathfrak i}_n$ or ${\mathfrak d}_n$. \hfill \qed
\end{enumerate}
\end{prop}
From the decompositions we can quickly determine the cardinalities.
\begin{prop} The ${\mathcal I}$-Wilf equivalence classes for triples of permutations in ${\mathfrak S}_3$ are as follows.
\begin{enumerate}[(i)]
\item If $S = \{123,132,213\}$ then $|{\mathcal I}_n(S)|=F_{\floor{n/2}}$.
\item If $S\in \{\{123,132,231\},\{123,213,231\},\{321,132,231\},\{321,213,231\},\{213,132,231\}\}$ then $|{\mathcal I}_n(S)|=2$.
\item If $S=\{321,132,213\}$ then $|{\mathcal I}_n(S)|=2$ when $n$ is even and $|{\mathcal I}_n(S)|=1$ when $n$ is odd. \hfill \qed
\end{enumerate}
\end{prop}
The generating functions for three patterns are in Table~\ref{triple}.
\begin{prop} The decompositions for involutions that avoid four patterns are as follows.
\begin{enumerate}[(i)]
\item If $\iota\in{\mathcal I}_n(123,213,132,312)$ then $\iota = {\mathfrak d}_n$.
\item If $\iota\in{\mathcal I}_n(321,213,132,312)$ then $\iota = {\mathfrak i}_n$. \hfill \qed
\end{enumerate}
\end{prop}
The generating functions for four patterns are in Table~\ref{four}.
Also, the $I{\mathcal I}$-Wilf and $M{\mathcal I}$-Wilf equivalence classes for multiple patterns mirror those in the singleton case. Given a set $S$ of patterns define $r_m(S)=\{r_m(\pi):\pi\in S\}$ and similarly define $R_{\theta}(S)$.
\begin{thm}
The $I{\mathcal I}$-Wilf and $M{\mathcal I}$-Wilf equivalence classes for multiple patterns in ${\mathfrak S}_3$ are described as follows.
\begin{enumerate}[(i)]
\item The only equalities between $I{\mathcal I}$-Wilf equivalence classes for sets $S\subseteq {\mathfrak S}_3$ of the same size are between $S$, $r_1(S)$, $r_{-1}(S)$ and $R_{180}(S)$.
\item The only equalities between $M{\mathcal I}$-Wilf equivalence classes for sets $S\subseteq {\mathfrak S}_3$ of the same size are between $S$ and $r_1(S)$. \hfill \qed
\end{enumerate}
\end{thm}
\section{Symmetries for permutations}
\label{permsymm}
In Sections~\ref{maj213} and~\ref{maj123} we demonstrated that the pairs of patterns 123 and 321 as well as 132 and 213 exhibit the symmetry $M{\mathcal I}_n(\pi_1)=q^{\binom{n}{2}}M{\mathcal I}_n(\pi_2;q^{-1})$. In both cases this symmetry holds for the larger class of permutations in that $M_n(\pi_1)=q^{\binom{n}{2}}M_n(\pi_2;q^{-1})$. We thank Vasu Tewari for asking about this generalization. We prove this by describing maps ${\mathfrak S}_n(\pi_1)\rightarrow{\mathfrak S}_n(\pi_2)$ that commute with $r_1$ (i.e.\ taking inverses) and satisfy $\Asc(\sigma_1)=\Des(\sigma_2)$ whenever $\sigma_1\mapsto\sigma_2$.
The map ${\mathfrak S}_n(123)\rightarrow {\mathfrak S}_n(321)$ with the stated properties is classical and an elegant generalization of the map for involutions in Proposition~\ref{thm:123&321 symmetry}.
\begin{prop}There exists a map ${\mathfrak S}_n(123)\rightarrow{\mathfrak S}_n(321)$ that
\begin{enumerate}
\item commutes with $r_1$ and
\item if $\sigma_1\mapsto\sigma_2$ then $\Asc(\sigma_1)=\Des(\sigma_2)$.
\end{enumerate}
\end{prop}
\begin{proof} In this proof we use facts stated in Proposition~\ref{SYTfacts}. We will define a bijective map ${\mathfrak S}_n(123)\rightarrow{\mathfrak S}_n(321)$ that has the two stated properties. To define the map we first take a permutation $\sigma\in {\mathfrak S}_n(123)$, which by RSK corresponds to a pair of SYT $(P,Q)$ of the same shape. This shape has at most two columns because the longest increasing sequence of $\sigma$ has length at most two. The transposed pair $(P^T,Q^T)$ of SYT of the same shape has at most two rows, so it corresponds to another permutation whose longest decreasing sequence has length at most two and which therefore lies in ${\mathfrak S}_n(321)$. This certainly defines a bijection ${\mathfrak S}_n(123)\rightarrow{\mathfrak S}_n(321)$.
We proved in Proposition~\ref{thm:123&321 symmetry} that $\Des(Q)=\Asc(Q^T)$. If $\sigma_1\mapsto\sigma_2$ and $(P,Q)$ and $(P^T,Q^T)$ correspond to $\sigma_1$ and $\sigma_2$ by RSK respectively, we know that $\Des(\sigma_1)=\Des(Q)$ and $\Des(\sigma_2)=\Des(Q^T)=\Asc(\sigma_1)$, which proves property (ii).
To show that this map commutes with $r_1$ we only need to show that if $\sigma_1\mapsto\sigma_2$ then $r_1(\sigma_1)\mapsto r_1(\sigma_2)$. Because $r_1(\sigma)$ is the inverse of $\sigma$ we must have that $r_1(\sigma_1)$ corresponds to $(Q,P)$. Then $r_1(\sigma_1)$ will map to the permutation associated to $(Q^T,P^T)$ that is the inverse of $\sigma_2$ or equivalently $r_1(\sigma_2)$, which proves property (i).
\end{proof}
\begin{cor}
For $n\geq 0$ we have the symmetry
$$M_n(123)=q^{\binom{n}{2}}M_n(321;q^{-1}).$$
\vspace{-1.1cm}
\hfill \qed
\end{cor}
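This symmetry is also easy to confirm numerically. The sketch below (helper names ours, not from the paper) compares the major-index distributions over ${\mathfrak S}_n(123)$ and ${\mathfrak S}_n(321)$ for small $n$.

```python
from itertools import combinations, permutations

def contains(w, p):
    """True if w contains the pattern p as an order-isomorphic subsequence."""
    for idx in combinations(range(len(w)), len(p)):
        sub = [w[i] for i in idx]
        order = sorted(sub)
        if tuple(order.index(v) + 1 for v in sub) == tuple(p):
            return True
    return False

def maj(w):
    """Major index: the sum of the descent positions of w."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def maj_dist(n, pattern):
    """Coefficients of M_n(pattern; q), as a dict {maj value: count}."""
    dist = {}
    for w in permutations(range(1, n + 1)):
        if not contains(w, pattern):
            dist[maj(w)] = dist.get(maj(w), 0) + 1
    return dist
```

The corollary says the distribution for 123 is the reversal of the one for 321, i.e.\ the coefficient of $q^k$ on one side matches that of $q^{\binom{n}{2}-k}$ on the other.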
Though the proof for Proposition~\ref{thm:123&321 symmetry} generalizes quickly to permutations, the proof provided in Theorem~\ref{thm:132symm213} for the pair of patterns 132 and 213 does not leave room for an obvious generalization. However, just defining a map to prove the symmetry between these two patterns for permutations is not too difficult either. One can define a map $\tilde\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ inductively by mapping $\sigma=231[\alpha,1,\beta]$ that avoids 132 to $\tilde\theta(\sigma)=312[\tilde\theta(\alpha),1,\tilde\theta(\beta)]$, which avoids 213. This map certainly changes ascents to descents so it will map a permutation with ascent set $A$ to a permutation with descent set $A$. On the other hand, this map certainly does not commute with $r_1$, a property we are interested in.
The rest of this section is dedicated to defining a map $\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ that has the additional property of commuting with $r_1$, $\theta \circ r_1=r_1\circ\theta$. Before we define the map we need some definitions. We say that $\sigma(i)$ is a {\it left-to-right maximum} of $\sigma$ if $\sigma(i)$ is larger than everything to its left, $\sigma(i)=\max\{\sigma(1)\dots \sigma(i)\}$. The {\it LR maximums} of $\sigma$ will refer to the subsequence of all {\it left-to-right maximums} of $\sigma$. Similarly, $\sigma(i)$ is a {\it right-to-left minimum} of $\sigma\in {\mathfrak S}_n$ if $\sigma(i)$ is smaller than everything to its right, $\sigma(i)=\min\{\sigma(i)\dots \sigma(n)\}$. The {\it RL minimums} of $\sigma$ will refer to the subsequence of all {\it right-to-left minimums} of $\sigma$. We can similarly define RL maximums. For example the LR maximums of $371958264$ are $3,7,9$, the RL minimums are $1,2,4$ and the RL maximums are $9,8,6,4$. See Figure~\ref{fig:LRmaxExample} for an illustration. Note that the RL minimums and the LR maximums are increasing sequences but the RL maximums form a decreasing sequence. This is a fact we use often throughout the rest of the section. The next lemma details a few more specific properties of these subsequences that will be needed in proving properties of our map $\theta$.
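All three subsequences are computable by a single linear scan; the sketch below (helper names ours) reproduces the example $371958264$.

```python
def lr_maxima(w):
    """Left-to-right maximums: entries larger than everything to their left."""
    out, best = [], 0
    for v in w:
        if v > best:
            out.append(v)
            best = v
    return out

def rl_minima(w):
    """Right-to-left minimums: entries smaller than everything to their right."""
    out, best = [], len(w) + 1
    for v in reversed(w):
        if v < best:
            out.append(v)
            best = v
    return out[::-1]

def rl_maxima(w):
    """Right-to-left maximums: entries larger than everything to their right."""
    out, best = [], 0
    for v in reversed(w):
        if v > best:
            out.append(v)
            best = v
    return out[::-1]
```

On $w=371958264$ these return $3,7,9$ and $1,2,4$ (both increasing) and $9,8,6,4$ (decreasing), as noted above.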
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .4]
\begin{scope}[shift={(0,0)}]
\draw[gray,fill] (3,1) rectangle (1,9);
\draw[gray,fill] (7,2) rectangle (1,9);
\draw[gray,fill] (9,4) rectangle (1,9);
\draw[step=1cm,black,dashed] (1,1) grid (9,9);
\filldraw [black]
(1,3) circle (5pt)
(2,7) circle (5pt)
(3,1) circle (5pt)
(4,9) circle (5pt)
(5,5) circle (5pt)
(6,8) circle (5pt)
(7,2) circle (5pt)
(8,6) circle (5pt)
(9,4) circle (5pt);
\draw (1,1) rectangle (9,9);
\end{scope}
\begin{scope}[shift={(11,0)}]
\draw[gray,fill] (1,3) rectangle (9,1);
\draw[gray,fill] (2,7) rectangle (9,1);
\draw[gray,fill] (4,9) rectangle (9,1);
\draw[step=1cm,black,dashed] (1,1) grid (9,9);
\filldraw [black]
(1,3) circle (5pt)
(2,7) circle (5pt)
(3,1) circle (5pt)
(4,9) circle (5pt)
(5,5) circle (5pt)
(6,8) circle (5pt)
(7,2) circle (5pt)
(8,6) circle (5pt)
(9,4) circle (5pt);
\draw (1,1) rectangle (9,9);
\end{scope}
\end{tikzpicture}
\caption{The LR maximums of $371958264$ are $3,7,9$, the RL minimums are $1,2,4$ and the RL maximums are $9,8,6,4$.}
\label{fig:LRmaxExample}
\end{center}
\end{figure}
\begin{lemma} We have the following facts about RL minimums and LR maximums.
\begin{enumerate}[(i)]
\item Given $x\in {\mathfrak S}_n$ with RL minimums $x(i_1),x(i_2),\dots ,x(i_a)$, the LR maximums of $r_1(x)$ are $i_1,i_2,\dots, i_a$.
\item Given any $x,y$ that avoid 213 the LR maximums occur on consecutive indices and the RL minimums occur on consecutive values. This means the LR maximums of $y$ are $y(1),y(2),\dots, y(b)$ for some $b$ with $y(b)=n$ and the RL minimums of $x$ are $x(i_1),x(i_2),\dots, x(i_a)=1,2,\dots, a$ for some $a$ where $i_a=n$.
\item Given $y\in {\mathfrak S}_n(213)$ the union of the LR maximums and RL minimums form the pattern $132[{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t]$ in $y$.
\end{enumerate}
\label{lem:LRmaxFacts}
\end{lemma}
\begin{proof} To prove (i) we will consider the diagram for a permutation $x$.
Given the diagram, shade, for every point, everything above it, everything to its left, and the region between. The right-to-left minimums will be those dots on the edge of the shading. We illustrate this shading in Figure~\ref{fig:LRmaxExample}. Similarly, given the diagram of $x$, shade, for every point, everything below it, everything to its right, and the region between. The left-to-right maximums will be those dots on the edge of the shading. Using these facts it is easy to see that if the RL minimums of $x$ are $x(i_1),x(i_2),\dots ,x(i_a)$ then the LR maximums of $r_1(x)$ are $i_1,i_2,\dots, i_a$.
Next, we prove (ii). Let $y$ avoid 213 and $y(j_1),y(j_2),\dots, y(j_b)$ be its LR maximums. Certainly $j_1=1$. Assume there exists some index $k<j_b$ where $y(k)$ is not a left-to-right maximum. Since $y(k)$ is not a left-to-right maximum, there is some left-to-right maximum $y(j_p)>y(k)$ to its left. The subsequence $y(j_p)y(k)y(j_b)$ forms the pattern 213 and we have a contradiction. Hence the LR maximums of $y$ are $y(1),y(2),\dots, y(b)$ for some $b$ with $y(b)=n$. Recall that $r_1(x)$ avoids 213 if $x$ avoids 213. We get the second part of (ii) using part (i), so the RL minimums of $x$ are $x(i_1),x(i_2),\dots, x(i_a)=1,2,\dots, a$ for some $a$ where $i_a=n$.
Lastly we prove (iii). Using part (ii) we know that the LR maximums of $y$ are $y(1),y(2),\dots, y(b)$ for some $b$ with $y(b)=n$ and the RL minimums are $y(i_1),y(i_2),\dots ,y(i_c)=1,2,\dots, c$ for some $c$ where $i_c=n$.
Consider the case where $y(1)\neq y(i_1)=1$. This implies that $c<y(1)$ since otherwise $y(1)1c$ forms the pattern 213 in $y$. Because $c<y(1)$ the two sequences form the pattern $21[{\mathfrak i}_b,{\mathfrak i}_c]$. Now consider if $y(1)= y(i_1)=1$ and further $y(i)=i$ for all $i\leq j$ for some $j$ with $y(j+1)\neq j+1$. This implies that our two sequences intersect for the first $j$ terms and then form the pattern $21[{\mathfrak i}_{b-j},{\mathfrak i}_{c-j}]$. All together we have that the LR maximums and the RL minimums form the pattern $132[{\mathfrak i}_j,{\mathfrak i}_{b-j},{\mathfrak i}_{c-j}]$, which finishes part (iii).
\end{proof}
In order to more easily define the map $\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ we will define two operations $*$ and $\star$ on permutations and prove some properties about these operations. Once these lemmas are in place, defining our map $\theta$ and proving all of its properties will be effortless.
We define an operation $x*y$ on permutations $x$ and $y$, which will be a key feature in our map.
\begin{enumerate}
\item Let $x(i_1),x(i_2),\dots, x(i_a)$ be the sequence of RL minimums of $x$ and $y(j_1),y(j_2),\dots, y(j_b)$ be the sequence of LR maximums of $y$.
\item Note that $x(i_1),\dots, x(i_a),y(j_1),\dots ,y(j_b)$ forms the pattern $21[{\mathfrak i}_a,{\mathfrak i}_b]$ in $21[x,y]$. Replace this pattern of $21[{\mathfrak i}_a,{\mathfrak i}_b]$ with ${\mathfrak i}_{a+b}$ in $21[x,y]$ to get $x*y$.
\end{enumerate}
See Figure~\ref{fig:*andstarExample} for an example. This operation $*$ has some nice properties all of which we prove in the next lemma.
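The two steps above translate directly into code; the sketch below (function name ours) reproduces the left half of the figure.

```python
def star_prod(x, y):
    """The operation x * y on permutations given as lists of values."""
    k, l = len(x), len(y)
    w = [v + l for v in x] + list(y)   # the permutation 21[x, y]
    # positions of the RL minimums of the x part and LR maximums of the y part
    rl = [i for i in range(k) if w[i] == min(w[i:k])]
    lr = [k + j for j in range(l) if y[j] == max(y[:j + 1])]
    pos = rl + lr                      # these positions carry the pattern 21[i_a, i_b]
    vals = sorted(w[p] for p in pos)
    for p, v in zip(pos, vals):        # replace that pattern with i_{a+b}
        w[p] = v
    return w
```

For example `star_prod([4,2,3,1], [1,3,2])` returns `[7,5,6,1,3,4,2]`, matching $4231*132=7561342$.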
\begin{lemma} Let $x\in {\mathfrak S}_k$ and $y\in {\mathfrak S}_{\ell}$. The operation $*$ has the following properties.
\begin{enumerate}[(i)]
\item It is associative on permutations that avoid 213.
\item If $x$ and $y$ avoid 213 then $x*y$ avoids 213.
\item $r_1(x*y)=r_1(y)*r_1(x)$.
\item $\Des(x*y)=\Des(x)\cup(\Des(y)+k)$.
\item If $x$ and $y$ avoid 213 then the left $k$ points of $x*y$ form the pattern $x$ and the bottom $\ell$ points of $x*y$ form the pattern $y$.
\end{enumerate}
\label{lem:*facts}
\end{lemma}
\begin{proof} First we will show (i) that $*$ is associative on permutations that avoid 213. Let $x$, $y$ and $z$ be permutations avoiding 213. We will show that $(x*y)*z=x*(y*z)$.
By Lemma~\ref{lem:LRmaxFacts} if $y$ avoids 213 the union of the LR maximums and RL minimums form the pattern $132[{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t]$. The RL minimums of $x$ will form the pattern ${\mathfrak i}_a$ and the LR maximums of $z$ will form the pattern ${\mathfrak i}_c$. All together these LR maximums and RL minimums form the pattern $52431[{\mathfrak i}_a,{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t,{\mathfrak i}_c]$ in $321[x,y,z]$. We will show that we replace this pattern with $132[{\mathfrak i}_r,{\mathfrak i}_{a+s},{\mathfrak i}_{t+c}]$ in either $(x*y)*z$ or $x*(y*z)$, which will prove $(x*y)*z=x*(y*z)$.
When determining $x*y$ we find the RL minimums of $x$ and the LR maximums of $y$ and then replace the pattern $21[{\mathfrak i}_a,{\mathfrak i}_{r+s}]$ in $21[x,y]$ with ${\mathfrak i}_{a+r+s}$ and get $x*y=\tau$.
In a larger view the union of the LR maximums of $y$ and the RL minimums of $x$ and $y$ form the pattern $4132[{\mathfrak i}_a,{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t]$ in $21[x,y]$ that we replace with $132[{\mathfrak i}_r,{\mathfrak i}_{a+s},{\mathfrak i}_t]$ to get $\tau = x*y$ with the ${\mathfrak i}_r$ and ${\mathfrak i}_t$ portion forming the RL minimums of $\tau$.
So the RL minimums of $\tau$ are from the pattern ${\mathfrak i}_{r+t}$. To find $\tau*z$ we need the LR maximums of $z$, which form the pattern ${\mathfrak i}_c$. We replace the pattern $21[{\mathfrak i}_{r+t},{\mathfrak i}_{c}]$ in $21[\tau,z]$ with ${\mathfrak i}_{r+t+c}$. In conclusion we have replaced the pattern $52431[{\mathfrak i}_a,{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t,{\mathfrak i}_c]$ in $321[x,y,z]$ with $132[{\mathfrak i}_r,{\mathfrak i}_{a+s},{\mathfrak i}_{t+c}]$. By a very similar argument when determining $x*(y*z)$ we replace the pattern $52431[{\mathfrak i}_a,{\mathfrak i}_r,{\mathfrak i}_s,{\mathfrak i}_t,{\mathfrak i}_c]$ in $321[x,y,z]$ with $132[{\mathfrak i}_r,{\mathfrak i}_{a+s},{\mathfrak i}_{t+c}]$, which proves $(x*y)*z=x*(y*z)$.
Secondly, we will show (v). Note that the RL minimums of $x$ in $21[x,y]$ decrease in value in forming $x*y$ but remain an increasing subsequence. The values in the $x$ part of $21[x,y]$ not part of the RL minimums of $x$ remain unchanged in $x*y$. Since $x$ avoids 213 we know from part (ii) of Lemma~\ref{lem:LRmaxFacts} the RL minimums of $x$ are $x(i_1),x(i_2),\dots, x(i_a)=1,2,\dots, a$ for some $a$ where $i_a=n$. This means that the left $|x|$ points of $x*y$ are order isomorphic to $x$. Using part (i) of Lemma~\ref{lem:LRmaxFacts} we can conclude that the bottom $|y|$ points of $x*y$ are order isomorphic to $y$.
Next we show (ii) by showing that $x*y$ avoids 213 if both $x$ and $y$ avoid 213. We will do so by induction on the length of $x$. Let $x\in {\mathfrak S}_k(213)$ and $y\in {\mathfrak S}_{n-k}(213)$. The base case is when $k=0$ and $\epsilon *y=y$ avoids 213 by assumption. We now assume that $k>0$. We must have $n$ occurring at some index $i$ in the $x$ part of $21[x,y]$.
Let $\bar{x}$ be $x$ with $x(i)$ removed. The first case is if $x(i)$ is part of the RL minimums of $x$. This would mean, since $x$ avoids 213, that $i=k$, $x = {\mathfrak i}_k$ and $\bar{x} = {\mathfrak i}_{k-1}$. The LR maximums of $y$ by Lemma~\ref{lem:LRmaxFacts} must be $y(1),y(2),\dots ,y(b)$ for some $b$. We then have that $\bar{x}*y$ is $y(1)\dots y(b)(n-k+1)(n-k+2)\dots (n-1)y(b+1)\dots y(n-k)$, which avoids 213 by induction. Further we know that ${x}*y$ is $y(1)\dots y(b)(n-k+1)(n-k+2)\dots (n) y(b+1)\dots y(n-k)$. If ${x}*y$ contained a 213 then $n$ must play the role of 3, which is impossible because ${x}*y$ strictly increases before $n$.
The next case is when $x(i)$ is not part of the RL minimums of $x$. Then $x*y$ is $\bar{x}*y$ but we insert $n$ at index $i$ in $\bar{x}*y$. By induction $\bar{x}*y$ avoids 213 so if $x*y$ contains a 213 then $n$ plays the role of 3 and the pattern is in the left $k$ indices of $x*y$. By part (v) the left $k$ points of $x*y$ are order isomorphic to $x$ so if $x*y$ contains a 213 in the left $k$ points then $x$ contains the pattern 213, which is a contradiction.
Next we will show (iii), that $r_1(x*y)=r_1(y)*r_1(x)$. Note that in forming $x*y$ we needed to find $x(i_1),x(i_2),\dots ,x(i_a)$ the RL minimums of $x$ and $y(j_1),y(j_2),\dots, y(j_b)$ the LR maximums of $y$. These points form the pattern $21[{\mathfrak i}_a,{\mathfrak i}_b]$ in $21[x,y]$ and we replace this pattern with ${\mathfrak i}_{a+b}$. By Lemma~\ref{lem:LRmaxFacts} the RL minimums of $r_1(y)$ are $j_1,j_2,\dots, j_b$ and the LR maximums of $r_1(x)$ are $i_1,i_2,\dots, i_a$. So in forming $r_1(y)*r_1(x)$ we replace the pattern $21[{\mathfrak i}_b,{\mathfrak i}_a]$ in $21[r_1(y),r_1(x)]$ with ${\mathfrak i}_{a+b}$, which is the same thing as $r_1(x*y)$.
Finally we prove (iv) that if $x\in {\mathfrak S}_k(213)$ and $y\in {\mathfrak S}_{\ell}(213)$ then $\Des(x*y)=\Des(x)\cup(\Des(y)+k)$. Consider $\tau=x*y$. By part (v) the left $k$ points of $x*y$ are order isomorphic to $x$. Thus, the first $k$ indices of $\tau$ have the same descents as $x$. By Lemma~\ref{lem:LRmaxFacts} the LR maximums of $y$ are $y(1),y(2),\dots,y(b)$. In forming $\tau$ these points increase in value. The subsequence $\tau(k+1)\tau(k+2)\dots\tau(n)$, while not order isomorphic to $y$, does have the same ascents and descents. Because $i_a=k$, the last point in $x$ is a RL minimum and the first point in $y$ is a LR maximum, so we must have an increase at $k$ in $\tau$. All together this implies that $\Des(x*y)=\Des(x)\cup(\Des(y)+k)$.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .5]
\begin{scope}[shift={(0,1)}]
\filldraw [black]
(0,3) circle (5pt)
(1,1) circle (5pt)
(2,2) circle (5pt)
(3,0) circle (5pt);
\draw (0,0) rectangle (3,3);
\draw (4,1.5) node {$*$};
\end{scope}
\begin{scope}[shift={(5,1.5)}]
\filldraw [black]
(0,0) circle (5pt)
(1,2) circle (5pt)
(2,1) circle (5pt);
\draw (0,0) rectangle (2,2);
\draw (3,1) node {$=$};
\end{scope}
\begin{scope}[shift={(9,0)}]
\filldraw [black]
(0,6) circle (5pt)
(1,4) circle (5pt)
(2,5) circle (5pt)
(3,0) circle (5pt)
(4,2) circle (5pt)
(5,3) circle (5pt)
(6,1) circle (5pt);
\draw (0,0) rectangle (6,6);
\end{scope}
\begin{scope}[shift={(18,1)}]
\filldraw [black]
(0,2) circle (5pt)
(1,3) circle (5pt)
(2,1) circle (5pt)
(3,0) circle (5pt);
\draw (0,0) rectangle (3,3);
\draw (5,1.5) node {$\star$\hspace{.05cm} $1=$};
\end{scope}
\begin{scope}[shift={(25,.5)}]
\filldraw [black]
(0,2) circle (5pt)
(1,4) circle (5pt)
(2,3) circle (5pt)
(3,1) circle (5pt)
(4,0) circle (5pt);
\draw (0,0) rectangle (4,4);
\end{scope}
\end{tikzpicture}
\caption{On the left $4231*132=7561342$ and on the right $3421\star 1=35421$.}
\label{fig:*andstarExample}
\end{center}
\end{figure}
We next define our second operation $\star$ for $\sigma\star 1$ when $\sigma$ avoids 213. Because $\sigma$ avoids $213$ we can write $\sigma=21[x,y]$ with $x,y\neq \epsilon$ or $\sigma=12[1,z]$. We now define $\sigma\star 1$.
\begin{enumerate}
\item We define $1\star 1=21$.
\item If $\sigma=12[1,z]$ then $\sigma\star 1=12[1,z\star 1]$.
\item If $\sigma=21[x,y]$ with $x,y\neq \epsilon$ let $\bar{x}=x\star 1$ and $\bar{y}=y\star 1$. Say $\bar{x}$ has RL minimums $\bar{x}(i_1),\bar{x}(i_2),\dots, \bar{x}(i_a)$ and $\bar{y}$ has LR maximums $\bar{y}(j_1),\bar{y}(j_2),\dots ,\bar{y}(j_b)$. These together form the pattern $21[{\mathfrak i}_a,{\mathfrak i}_{b}]$ in $21[\bar{x},\bar{y}]$. We replace this pattern with ${\mathfrak i}_{a+b}$ but remove the point at $(i_a,\bar{y}(j_b))$. We define $\sigma\star 1$ to be this new permutation.
\end{enumerate}
Note that step (3) is like defining $21[x,y]\star 1 = (x\star 1) * (y\star 1)$ but we remove the point $(i_a,\bar{y}(j_b))=(|x|+1,|y|+1)$ where $|x|$ gives the length of a permutation. In Figure~\ref{fig:*andstarExample} we illustrate $3421\star 1=35421$. The operation $\star$ has some nice properties that we prove in the next lemma.
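The recursive definition can be sketched as follows (function names ours). Since the removed point is only pinned down after standardizing, we read the removal in step (3) as deleting the entry in position $|x|+2$ of $(x\star 1)*(y\star 1)$ and then standardizing; under our assumptions this reading reproduces the worked example $3421\star 1=35421$ and the base cases.

```python
def star_prod(x, y):
    """The * operation from above (repeated so this sketch is self-contained)."""
    k, l = len(x), len(y)
    w = [v + l for v in x] + list(y)
    rl = [i for i in range(k) if w[i] == min(w[i:k])]
    lr = [k + j for j in range(l) if y[j] == max(y[:j + 1])]
    pos = rl + lr
    vals = sorted(w[p] for p in pos)
    for p, v in zip(pos, vals):
        w[p] = v
    return w

def standardize(w):
    """Replace the entries of w by their ranks."""
    order = sorted(w)
    return [order.index(v) + 1 for v in w]

def star_one(s):
    """sigma ⋆ 1 for a 213-avoiding permutation s (our positional reading)."""
    n = len(s)
    if n == 1:
        return [2, 1]                      # step (1): 1 ⋆ 1 = 21
    if s[0] == 1:                          # step (2): sigma = 12[1, z]
        z = [v - 1 for v in s[1:]]
        return [1] + [v + 1 for v in star_one(z)]
    # step (3): sigma = 21[x, y]; split off the shortest prefix of top values
    k = 1
    while set(s[:k]) != set(range(n - k + 1, n + 1)):
        k += 1
    x = [v - (n - k) for v in s[:k]]
    y = list(s[k:])
    w = star_prod(star_one(x), star_one(y))
    del w[k + 1]                           # remove the extra point, then standardize
    return standardize(w)
```

For example `star_one([3, 4, 2, 1])` returns `[3, 5, 4, 2, 1]`, and the descent set grows by exactly $\{n\}$ as the lemma below asserts.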
\begin{lemma}Let $\sigma\in {\mathfrak S}_n(213)$. The operation $\star$ has the following properties.
\begin{enumerate}[(i)]
\item It is well defined in that the decomposition choice of $\sigma=21[x,y]$ plays no role in the output.
\item If $\sigma$ avoids 213 then $\sigma\star 1$ avoids 213.
\item $r_1(\sigma\star 1)=r_1(\sigma)\star 1$.
\item $\Des(\sigma\star 1)=\Des(\sigma)\cup\{n\}$.
\end{enumerate}
\label{lem:starfacts}
\end{lemma}
\begin{proof}
First we will prove that the decomposition choice of $\sigma=21[x,y]$ plays no role in the output $\sigma\star 1$. Consider $\sigma=321[x,y,z]$ that avoids 213. Let $\alpha = 21[x,y]$. By definition $\alpha \star 1$ is $(x\star 1)*(y\star 1)$ with $(|x|+1,|y|+1)$ removed and $21[\alpha,z]\star 1$ is $(\alpha\star 1)*(z\star 1)$ with $(|\alpha|+1,|z|+1)$ removed. Putting this together we have $321[x,y,z]\star 1$ equal to $((x\star 1)*(y\star 1))*(z\star 1)=(x\star 1)*(y\star 1)*(z\star 1)$ by Lemma~\ref{lem:*facts} with points $(|x|+1,|z|+|y|+2)$ and $(|x|+|y|+2,|z|+1)$ removed. This is the same output we would get if we had instead combined $y$ and $z$ first.
Next we will show (ii) that if $\sigma$ avoids 213 then $\sigma\star 1$ avoids 213 by inducting on the length $|\sigma|=n$. Certainly if $n=1$ then $\sigma$ avoids 213 and $\sigma\star 1=21$ avoids 213. Now assume $n>1$. Because $\sigma$ avoids 213 we have either $\sigma=21[x,y]$ with $x,y\neq\epsilon$ or $\sigma=12[1,z]$. If $\sigma=21[x,y]$ then because $x$ and $y$ have length smaller than $n$ we can use our inductive assumption and can assume that $\bar{x}=x\star 1$ and $\bar{y}=y\star 1$ avoid 213. By Lemma~\ref{lem:*facts} we know that $\bar{x}*\bar{y}$ avoids 213 and since $\sigma\star 1$ is a pattern of $\bar{x}*\bar{y}$ we can conclude $\sigma\star 1$ avoids 213 in this case. Our other case is when $\sigma=12[1,z]$ and we defined $\sigma \star 1=12[1,z\star 1]$. Because $z$ has length smaller than $n$ we can assume that $z\star 1$ avoids 213, which implies that $\sigma \star 1=12[1,z\star 1]$ avoids 213.
We will also show (iii), $r_1(\sigma\star 1)=r_1(\sigma)\star 1$, by induction. Certainly $r_1(\sigma\star 1)=r_1(\sigma)\star 1$ if $\sigma = 1$. Assume the length of $\sigma$ is greater than 1. Again because $\sigma$ avoids 213 we have either $\sigma=21[x,y]$ with $x,y\neq\epsilon$ or $\sigma=12[1,z]$. If $\sigma=21[x,y]$ then $\sigma\star 1=(x\star 1)*(y\star 1)$ with the point $(|x|+1,|y|+1)$ removed. Consider $r_1(\sigma)=21[r_1(y),r_1(x)]$ then $r_1(\sigma)\star 1=(r_1(y)\star 1)*(r_1(x)\star 1)$ with $(|y|+1,|x|+1)$ removed. By our inductive assumption $r_1(x\star 1)=r_1(x)\star 1$ and $r_1(y\star 1)=r_1(y)\star 1$ so using Lemma~\ref{lem:*facts} we have that $r_1(\sigma)\star 1=r_1(y\star 1)*r_1(x\star 1)=r_1((x\star 1)*(y\star 1))$ with $(|y|+1,|x|+1)$ removed because $(|x|+1,|y|+1)$ was removed from $(x\star 1)*(y\star 1)$. This proves for this case $r_1(\sigma\star 1)=r_1(\sigma)\star 1$ as we wanted. The other case is when $\sigma=12[1,z]$. Then $r_1(\sigma\star 1)=r_1(12[1,z\star 1])=12[1,r_1(z)\star 1]$ because $r_1(z\star 1)=r_1(z)\star 1$ by our inductive assumption. This proves $r_1(\sigma\star 1)=12[1,r_1(z)]\star 1=r_1(\sigma)\star 1$.
Finally, we show (iv) that $\Des(\sigma\star 1)=\Des(\sigma)\cup\{n\}$, which we also prove by inducting on $|\sigma|=n$. Certainly if $n=1$ then $\Des(\sigma\star1)=\{1\}$ so we can assume that $n>1$. Again, since $\sigma$ avoids 213 we have either $\sigma=21[x,y]$ with $x,y\neq\epsilon$ or $\sigma=12[1,z]$. In the first case let $|x|=k$ and $|y|=\ell$, so if $\sigma=21[x,y]$ then $\Des(\sigma)=\Des(x)\cup \{k\}\cup(\Des(y)+k)$. By induction $\Des(x\star1)=\Des(x)\cup\{k\}$ and $\Des(y\star 1)=\Des(y)\cup\{\ell\}$. By Lemma~\ref{lem:*facts} we have that $\Des((x\star1)*(y\star1))=\Des(x)\cup\{k\}\cup (\Des(y)+k+1)\cup\{\ell+k+1\}$. Now we only have to consider the removal of the point $(k+1,\ell+1)$. We have that $k\in \Des(x\star 1)$ so the point at index $k$ is higher than the one at index $k+1$ implying that the value at index $k$ is not a RL minimum of $x\star 1$. Further the value at $k+1$ in $21[x\star 1,y\star 1]$ is the maximum of the union of RL minimums of $x\star 1$ and LR maximums of $y\star 1$ in $21[x\star 1,y\star 1]$. That means when we do the pattern replacement in forming $(x\star1)*(y\star1)$ we still have a descent at index $k$ even after removing the point $(k+1,\ell+1)$. As a result $\Des(\sigma\star 1)=\Des(x)\cup\{k\}\cup (\Des(y)+k)\cup\{\ell+k\}=\Des(\sigma)\cup\{n\}$. Next we consider the case where $\sigma=12[1,z]$ so $\Des(\sigma)=\Des(z)+1$. By induction $\Des(z\star 1)=\Des(z)\cup\{n-1\}$ so since $\sigma\star 1 = 12[1,z\star 1]$ has descent set $\Des(z\star 1)+1$ we can conclude that $\Des(\sigma\star 1)=\Des(\sigma)\cup\{n\}$.
\end{proof}
At this point we have defined all the operations and facts we need to define $\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ and swiftly prove that $\theta$ commutes with $r_1$ and $\Asc(\sigma)=\Des(\theta(\sigma))$. We will define $\theta$ inductively. Consider a permutation $\sigma$ that avoids 132. Let $\theta(1)=1$ and assume now that $\sigma$ has length at least two. Either $\sigma$ can be decomposed as $\sigma=21[\alpha,\beta]$ for $\alpha,\beta\neq \epsilon$ or $\sigma=12[\gamma, 1]$. In the first case we define
\begin{equation}
\theta(21[\alpha,\beta])=\theta(\alpha)*\theta(\beta)
\label{eq:theta1}
\end{equation}
and in the second case
\begin{equation}
\theta(12[\gamma, 1])=\theta(\gamma)\star 1.
\label{eq:theta2}
\end{equation}
See Figure~\ref{fig:132to213permex} for an example. We will now prove that $\theta$ is well-defined, commutes with $r_1$ and has $\Asc(\sigma)=\Des(\theta(\sigma))$.
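Putting the pieces together, $\theta$ can be sketched as below (function names, and the positional reading of the removed point in $\star$, are ours); on ${\mathfrak S}_3(132)$ it sends $123,213,231,312,321$ to $321,132,312,231,123$ respectively.

```python
def star_prod(x, y):
    """The * operation (repeated here so the sketch is self-contained)."""
    k, l = len(x), len(y)
    w = [v + l for v in x] + list(y)
    rl = [i for i in range(k) if w[i] == min(w[i:k])]
    lr = [k + j for j in range(l) if y[j] == max(y[:j + 1])]
    pos = rl + lr
    vals = sorted(w[p] for p in pos)
    for p, v in zip(pos, vals):
        w[p] = v
    return w

def standardize(w):
    order = sorted(w)
    return [order.index(v) + 1 for v in w]

def star_one(s):
    """sigma ⋆ 1, with the removed point read positionally."""
    n = len(s)
    if n == 1:
        return [2, 1]
    if s[0] == 1:
        return [1] + [v + 1 for v in star_one([v - 1 for v in s[1:]])]
    k = 1
    while set(s[:k]) != set(range(n - k + 1, n + 1)):
        k += 1
    w = star_prod(star_one([v - (n - k) for v in s[:k]]), star_one(list(s[k:])))
    del w[k + 1]
    return standardize(w)

def theta(s):
    """theta: S_n(132) -> S_n(213), following the two displayed equations."""
    n = len(s)
    if n == 1:
        return [1]
    p = s.index(n)                     # position of the largest value
    if p == n - 1:                     # sigma = 12[gamma, 1]
        return star_one(theta(list(s[:p])))
    alpha = [v - (n - p - 1) for v in s[:p + 1]]   # sigma = 21[alpha, beta]
    beta = list(s[p + 1:])
    return star_prod(theta(alpha), theta(beta))
```

On these small cases one can check directly that ascent sets turn into descent sets and that $\theta$ commutes with taking inverses.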
\begin{lemma} The map $\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ defined above is well defined,
\begin{enumerate}[(i)]
\item commutes with $r_1$ and
\item has $\Asc(\sigma)=\Des(\theta(\sigma))$.
\end{enumerate}
\label{lem:almostTheta}
\end{lemma}
\begin{proof}
Let $\sigma\in {\mathfrak S}_n(132)$. We have two cases to consider in proving all the properties, which are either $\sigma=21[\alpha,\beta]$ with $\alpha,\beta\neq\epsilon$ or $\sigma=12[\gamma,1]$. First we will mention why this map is well defined and show that $\theta(\sigma)$ avoids 213 by inducting on $n$. The $n=1$ case is straightforward so assume $n>1$. In the first case if $\sigma=21[\alpha,\beta]$ then $\theta(\sigma)=\theta(\alpha)*\theta(\beta)$. By our inductive assumption $\theta(\alpha)$ and $\theta(\beta)$ avoid 213. Using Lemma~\ref{lem:*facts} we can conclude that $\theta(\alpha)*\theta(\beta)=\theta(\sigma)$ also avoids 213. In the second case where $\sigma=12[\gamma,1]$ we have that $\theta(\sigma)=\theta(\gamma)\star 1$. By our inductive assumption $\theta(\gamma)$ avoids 213 so by Lemma~\ref{lem:starfacts} we can conclude that $\theta(\gamma)\star 1$ also avoids 213.
Next, we show that $\theta$ commutes with $r_1$. Consider $\theta\circ r_1(\sigma)$. If we have the first case that $\sigma=21[\alpha,\beta]$ then $\theta\circ r_1(\sigma)=\theta(21[r_1(\beta),r_1(\alpha)])=r_1(\beta)*r_1(\alpha)=r_1(\alpha*\beta)$ by Lemma~\ref{lem:*facts}. Since $r_1(\alpha*\beta)=r_1\circ\theta(\sigma)$ we can conclude that $\theta$ and $r_1$ commute in this case. Next consider when $\sigma=12[\gamma,1]$ then $(\theta\circ r_1)(\sigma)=\theta(12[r_1(\gamma),1])=r_1(\gamma)\star 1=r_1(\gamma\star 1)$ by Lemma~\ref{lem:starfacts}. Since $r_1(\gamma\star 1)=r_1\circ \theta(\sigma)$ we can conclude that $\theta$ and $r_1$ commute in all cases.
Finally, we will prove that $\Asc(\sigma)=\Des(\theta(\sigma))$. We will prove this by inducting on $n$. The case of $n=1$ is again straightforward so we will assume $n>1$.
First consider if $\sigma=21[\alpha,\beta]$ where $\alpha\in {\mathfrak S}_k(132)$, $\beta\in {\mathfrak S}_{\ell}(132)$ and $k,\ell \neq 0$. By induction we know $\Asc(\alpha)=\Des(\theta(\alpha))$ and $\Asc(\beta)=\Des(\theta(\beta))$.
Also we know $\Des(\sigma)=\Des(\alpha)\cup\{k\}\cup(\Des(\beta)+k)$, so all we need to show is that $\Asc(\sigma)=\Asc(\alpha)\cup(\Asc(\beta)+k)$. We have $\theta(\sigma)=\theta(\alpha)*\theta(\beta)$ so $\Des(\theta(\sigma))=\Des(\theta(\alpha))\cup(\Des(\theta(\beta))+k)$ by Lemma~\ref{lem:*facts}. This equals $\Asc(\alpha)\cup(\Asc(\beta)+k)$ by our inductive assumptions so we are done in this case. Now consider if $\sigma=12[\gamma,1]$ so $\Des(\sigma)=\Des(\gamma)$. We want to show that $\Des(\theta(\sigma))=\Asc(\gamma)\cup\{n-1\}$. Using Lemma~\ref{lem:starfacts} we get $\Des(\theta(\sigma))=\Des(\theta(\gamma)\star 1)=\Des(\theta(\gamma))\cup\{n-1\}$. By our inductive assumption $\Asc(\gamma)=\Des(\theta(\gamma))$ so we further have $\Des(\theta(\sigma))=\Asc(\gamma)\cup\{n-1\}$, which completes the proof.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .5]
\begin{scope}[shift={(0,0)}]
\draw (0,0) rectangle (3,3);
\filldraw [black]
(4,4) circle (5pt)
(0,3) circle (5pt)
(1,0) circle (5pt)
(2,1) circle (5pt)
(3,2) circle (5pt);
\draw (0,0) rectangle (4,4);
\draw (5,2) node {$\overset{\theta}{\rightarrow}$};
\end{scope}
\begin{scope}[shift={(6,0)}]
\filldraw [black]
(0,2) circle (5pt)
(1,4) circle (5pt)
(2,3) circle (5pt)
(3,1) circle (5pt)
(4,0) circle (5pt);
\draw (0,0) rectangle (4,4);
\end{scope}
\begin{scope}[shift={(15,0)}]
\draw (0,2.5) rectangle (3.5,6);
\draw (3.5,0) rectangle (6,2.5);
\filldraw [black]
(0,5) circle (5pt)
(1,6) circle (5pt)
(2,3) circle (5pt)
(3,4) circle (5pt)
(4,1) circle (5pt)
(5,0) circle (5pt)
(6,2) circle (5pt);
\draw (0,0) rectangle (6,6);
\draw (7,3) node {$\overset{\theta}{\rightarrow}$};
\end{scope}
\begin{scope}[shift={(23,0)}]
\filldraw [black]
(0,6) circle (5pt)
(1,4) circle (5pt)
(2,5) circle (5pt)
(3,0) circle (5pt)
(4,2) circle (5pt)
(5,3) circle (5pt)
(6,1) circle (5pt);
\draw (0,0) rectangle (6,6);
\end{scope}
\end{tikzpicture}
\caption{On the left $\theta(41235)=35421$ and on the right $\theta(6745213)=7561342$.}
\label{fig:132to213permex}
\end{center}
\end{figure}
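The left example in Figure~\ref{fig:132to213permex} can be verified by brute force: $41235$ avoids $132$, its image $35421$ avoids $213$, and the ascent set of the former equals the descent set of the latter. A minimal Python sketch (the helper functions are ours, written only for this check):

```python
from itertools import combinations

def contains(sigma, pat):
    # brute-force check for an occurrence of the classical pattern pat
    m = len(pat)
    for idx in combinations(range(len(sigma)), m):
        if all((sigma[idx[a]] < sigma[idx[b]]) == (pat[a] < pat[b])
               for a in range(m) for b in range(a + 1, m)):
            return True
    return False

def asc(s):  # ascent set: positions i (1-indexed) with s(i) < s(i+1)
    return {i + 1 for i in range(len(s) - 1) if s[i] < s[i + 1]}

def des(s):  # descent set: positions i with s(i) > s(i+1)
    return {i + 1 for i in range(len(s) - 1) if s[i] > s[i + 1]}

sigma, theta_sigma = (4, 1, 2, 3, 5), (3, 5, 4, 2, 1)
assert not contains(sigma, (1, 3, 2))        # sigma avoids 132
assert not contains(theta_sigma, (2, 1, 3))  # theta(sigma) avoids 213
assert asc(sigma) == des(theta_sigma) == {2, 3, 4}
```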
So far we have shown that $\theta$ has all the properties we want save for showing that this map is indeed a bijection. We first define some helpful terminology. Let $\sigma\in {\mathfrak S}_n(213)$. We will say that $\sigma$ is {\it $*$-splittable} if $\sigma=x*y$ for some permutations $x,y\neq\epsilon$ that avoid 213. Consider the sequence $\sigma(m_1),\sigma(m_2),\dots, \sigma(m_l)$ of the RL maximums of $\sigma$. Also let $u_i=m_{i}-m_{i-1}$ and $m_0=0$. This keeps track of the number of indices to the left of the $i$th RL maximum but to the right of the $(i-1)$st RL maximum including the $i$th RL maximum. Let $v_i=\sigma(m_{i})-\sigma(m_{i+1})$ and $\sigma(m_{l+1})=0$. This keeps track of the number of indices below the $i$th RL maximum but above the $(i+1)$st RL maximum including the $i$th RL maximum. Because $\sigma$ avoids 213 we know that $\sigma$ will be strictly increasing for all points that are to the left and below of $(m_i,\sigma(m_i))$. Let $p_i$ equal the number of points that are to the left and below of $(m_i,\sigma(m_i))$ including $(m_i,\sigma (m_i))$. We illustrate this geometrically in Figure~\ref{fig:uivipiDiagram}. We will use this notation of $u_i$, $v_i$, $p_i$ and $\sigma(m_i)$ for the next couple of lemmas. Recall that when constructing $x*y$ we find $x(i_1)\dots x(i_a)$ the RL minimums of $x$ and $y(j_1)\dots y(j_b)$ the LR maximums for $y$. These values form the pattern $21[{\mathfrak i}_a,{\mathfrak i}_b]$ in $21[x,y]$, which we replace with ${\mathfrak i}_{a+b}$. The result of this is that the point $(y(j_b),x(i_a))$ becomes a RL maximum in $\sigma=x*y$. Say this RL maximum is $\sigma(m_j)$. We further have that $u_j\geq b+1$, $v_j\geq a+1$ and $p_j=a+b$. From this we can conclude that $u_j+v_j\geq p_j+2$. This condition is actually sufficient for $\sigma$ to be $*$-splittable.
\begin{figure}
\begin{center}
\begin{tikzpicture} [scale = .5]
\draw[fill,gray] (0,0) rectangle (3.6,3.6);
\draw[dashed] (1,5)--(0,5);
\draw[dashed] (1,5)--(1,0);
\draw[dashed] (3.5,3.5)--(0,3.5);
\draw[dashed] (3.5,3.5)--(3.5,0);
\draw[dashed] (5,1)--(5,0);
\draw[dashed] (5,1)--(0,1);
\filldraw [black]
(1,5) circle (3pt)
(3.5,3.5) circle (3pt)
(5,1) circle (3pt);
\draw (0,0) rectangle (6,6);
\draw (-.3,5.5) node {\tiny $(m_{i-1},\sigma(m_{i-1}))$};
\draw (5,4) node {\tiny$(m_{i},\sigma(m_{i}))$};
\draw (6.6,1.5) node {\tiny$(m_{i+1},\sigma(m_{i+1}))$};
\draw (2.3,4.3) node {\tiny$u_i$};
\draw [decorate,decoration={brace,amplitude=6pt},rotate=0] (1.1,3.7) -- (3.5,3.7);
\draw [decorate,decoration={brace,amplitude=6pt},rotate=270] (-1.1,0) -- (-3.5,0);
\draw (-.7,2.2) node {\tiny$v_i$};
\draw (2,2) node {\tiny$p_i$};
\end{tikzpicture}
\caption{Given the RL maximums $(m_i,\sigma(m_i))$ of $\sigma$ we illustrate the values $u_i=m_{i}-m_{i-1}$, $v_i=\sigma(m_{i})-\sigma(m_{i+1})$ and $p_i$, which is the number of points to the left and below the $i$th RL maximum.}
\label{fig:uivipiDiagram}
\end{center}
\end{figure}
\begin{lemma}A permutation $\sigma$ is $*$-splittable if there exists a RL maximum $\sigma(m_j)$ such that $u_j+v_j\geq p_j+2$. Particularly if $\sigma$ avoids 213 and $\sigma = x*y$ then both $x$ and $y$ avoid 213.
\label{lem:*split}
\end{lemma}
\begin{proof}
Consider $\sigma$ with all the mentioned conditions. The $p_j$ points to the left and below of $(m_j,\sigma(m_j))$ and including $(m_j,\sigma(m_j))$ form the pattern ${\mathfrak i}_{p_j}$. We will replace this pattern with $21[{\mathfrak i}_k,{\mathfrak i}_{p_j-k}]$ for some $k\in [p_j-1]$. In all cases our new permutation has the decomposition $21[x,y]$. Note that if $p_j-k\leq u_j-1$ then the right-most part of ${\mathfrak i}_k$ is a RL minimum of $x$. Also if $k\leq v_j-1$ then the right-most part of ${\mathfrak i}_{p_j-k}$ becomes a LR maximum of $y$. If we choose $k=p_j-u_j+1$, so that $p_j-k=u_j-1$, then both conditions $p_j-k\leq u_j-1$ and $k\leq v_j-1$ hold, the latter because $u_j+v_j\geq p_j+2$. It follows that the points from ${\mathfrak i}_k$ in $x$ become the RL minimums of $x$ and the points from ${\mathfrak i}_{p_j-k}$ in $y$ become the LR maximums of $y$. This implies $\sigma=x*y$ so $\sigma$ is $*$-splittable.
Consider the case where $\sigma$ is $*$-splittable as in the previous paragraph. We want to show that $x$ and $y$ avoid 213. By part (v) of Lemma~\ref{lem:*facts} the left $|x|$ points of $\sigma=x*y$ are order isomorphic to $x$. Because $\sigma$ avoids 213 we must have that $x$ does as well. By part (v) of Lemma~\ref{lem:*facts} the bottom $|y|$ points of $\sigma=x*y$ are order isomorphic to $y$. Because $\sigma$ avoids 213 we must have that $y$ does as well.
\end{proof}
Now consider the case where $\sigma\in {\mathfrak S}_n(213)$ is not $*$-splittable, so $u_i+v_i\leq p_i+1$ for all $i$. We want to show in this case that $\sigma$ is {\it $\star$-splittable} where we can write $\sigma=z\star 1$ for some $z$. Recall when constructing $z\star 1$ where $z=21[x,y]$ with $x\in {\mathfrak S}_s(213)$ and $y\in {\mathfrak S}_t(213)$ that we had $z\star 1=(x\star 1)*(y\star 1)$
with the point $(s+1,t+1)$ removed. Certainly $(x\star 1)*(y\star 1)$ is $*$-splittable so by Lemma~\ref{lem:*split} there is some index $j$ with $u_j+v_j\geq p_j+2$. Note that the point $(s+1,t+1)$ is above the $(j+1)$st RL maximum and to the right of the $(j-1)$st RL maximum, so when removing
the point $(s+1,t+1)$ to get $z\star1$ we decrease $u_j$, $v_j$ and $p_j$ by one. This implies our inequality $u_j+v_j\geq p_j+2$ becomes $u_j+v_j\geq p_j+1$. This together with the assumption that $\sigma$ is not $*$-splittable implies that $u_j+v_j= p_j+1$. We will show that $u_i+v_i\leq p_i+1$ for all $i$ is a sufficient condition for a permutation avoiding 213 to be $\star$-splittable.
\begin{lemma}A permutation $\sigma$ avoiding 213 is $\star$-splittable if $u_i+v_i\leq p_i+1$ for all $i$. Particularly if $\sigma$ avoids $213$ and $\sigma = w\star 1$ then $w$ avoids $213$.
\label{lem:starsplit}
\end{lemma}
\begin{proof}
Consider $\sigma$ with all the mentioned conditions.
Note that we are only concerned with the cases where $n=|\sigma|>1$.
Also assume that there exists a $j$ such that $u_j+v_j=p_j+1$ for a RL maximum that is not the first or the last. We will first show that $\sigma$ is $\star$-splittable with this condition by induction on $|\sigma|$. If $|\sigma|=2$ then only $\sigma=21=1\star 1$ satisfies the conditions but then $\sigma$ is $\star$-splittable. Let $|\sigma|>2$. The $p_j$ points to the left and below of $(m_j,\sigma(m_j))$ and including $(m_j,\sigma(m_j))$ form the pattern ${\mathfrak i}_{p_j}$. Consider the subcollection of points counted by $p_j$ that are to the right of $(m_{j-1},\sigma(m_{j-1}))$ and above $(m_{j+1},\sigma(m_{j+1}))$. This subcollection is not empty since it contains $(m_j,\sigma(m_j))$ so there exists a left-most point in the subcollection. We will insert a new point $(s,t)$ just below and to the left of this point. Note that with this we increase $u_j$, $v_j$ and $p_j$ by one, which we will denote by $\check{u}_j$, $\check{v}_j$ and $\check{p}_j$ so we have $\check{u}_j+\check{v}_j=\check{p}_j+2$. If we take the pattern ${\mathfrak i}_{\check{p}_j}$ created by these $\check{p}_j$ points and replace it with $21[{\mathfrak i}_{\check{p}_j-\check{u}_j+1},{\mathfrak i}_{\check{u}_j-1}]$ we create a permutation with decomposition $21[\bar{x},\bar{y}]$ just as we had in Lemma~\ref{lem:*split} with $\sigma=\bar{x}*\bar{y}$.
Note that this means that $s=|\bar{x}|$ and $t=|\bar{y}|$ where $(s,t)$ was the point we added earlier.
The associated $u^{\bar{x}}_i$, $v^{\bar{x}}_i$ and $p^{\bar{x}}_i$ values for $\bar{x}$ are the old values $u^{\bar{x}}_i=u_i$, $v^{\bar{x}}_i=v_i$ and $p^{\bar{x}}_i=p_i$ for $1\leq i\leq j-1$ and $u^{\bar{x}}_j=1$, $v^{\bar{x}}_j=\check{p}_j-\check{u}_j+1$ and $p^{\bar{x}}_j=\check{p}_j-\check{u}_j+2$. So $u^{\bar{x}}_i+v^{\bar{x}}_i\leq p^{\bar{x}}_i+1$ for all $i$. By induction $\bar{x}$ is $\star$-splittable and $\bar{x}=x\star 1$ for some $x$ avoiding 213.
The associated $u^{\bar{y}}_i$, $v^{\bar{y}}_i$ and $p^{\bar{y}}_i$ values for $\bar{y}$ are the old values $u^{\bar{y}}_i=u_i$, $v^{\bar{y}}_i=v_i$ and $p^{\bar{y}}_i=p_i$ for $j+1\leq i\leq l$ and $u^{\bar{y}}_j=\check{u}_j+1$, $v^{\bar{y}}_j=1$ and $p^{\bar{y}}_j=\check{u}_j+2$. So $u^{\bar{y}}_i+v^{\bar{y}}_i\leq p^{\bar{y}}_i+1$ for all $i$.
By induction $\bar{y}$ is $\star$-splittable and $\bar{y}=y\star 1$ for some $y$ avoiding 213.
We now have $\sigma=(x\star 1)*(y\star 1)$ with the point $(|x|+1,|y|+1)=(s,t)$ removed. Hence $\sigma=21[x,y]\star 1$ is $\star$-splittable.
We will now show in this case that if $\sigma$ avoids 213 and $\sigma = w\star 1$ then $w$ avoids 213. We will do so by induction on $|\sigma|$. The base case is straightforward so assume $|\sigma|>2$. We will first argue that when we introduced the point $(s,t)$ into $\sigma$ we did not create the pattern 213. By the way we included this point we know that there is another point $(s+1,t+1)$ since we inserted $(s,t)$ just to the left and below of another point. If we did create the pattern 213 by inserting $(s,t)$ then this pattern cannot involve $(s+1,t+1)$. However, we can replace the point $(s,t)$ in the 213 pattern with the point $(s+1,t+1)$ and maintain the 213 pattern, which is a contradiction since $\sigma$ avoids 213. Since $\sigma$ with this point included is $\bar{x}*\bar{y}$ we can conclude that $\bar{x}$ and $\bar{y}$ avoid 213 by Lemma~\ref{lem:*split}. By induction this means both $x$ and $y$ avoid 213 so $21[x,y]$ avoids 213, which completes our argument in this case because $\sigma=21[x,y]\star 1$.
Next we consider the case where $u_i+v_i\neq p_i+1$ for all $i\neq 1,l$ so $u_i+v_i\leq p_i$ for all $i\neq 1,l$ and $u_i+v_i\leq p_i+1$ for $i= 1,l$. If $l$, the number of RL maximums, is $1$ then $\sigma={\mathfrak i}_n$ and $u_1=v_1=p_1$, which contradicts $u_1+v_1\leq p_1+1$ unless $n=1$. If $l=2$ then we have $u_1=p_1$ and $v_2=p_2$, which implies $v_1=u_2=1$ and $\sigma=12[{\mathfrak i}_{n-2},{\mathfrak d}_2]$. By the definition of $\star$ we have $\sigma = {\mathfrak i}_{n-1}\star 1$ so $\sigma$ is $\star$-splittable. Now assume $l>2$.
We will consider several cases depending on $\sigma(1)$ and will end up showing $\sigma=12[1,\tau]$. The first case is when $\sigma(1)$ is a RL maximum, which forces $\sigma(1)=n$ where $|\sigma|=n$. It follows that the second RL maximum is $(m_2,n-1)$ and all the points counted by $p_2$ are also counted by $u_2$. It follows that $u_2+v_2\geq p_2+1$ and because $l>2$ we have a contradiction. Our second case is when $\sigma(1)$ is below the first RL maximum but above the second. Because $\sigma$ avoids 213 all the points to the left of the first RL maximum must be weakly above $(1,\sigma(1))$. Because the second RL maximum is below $(1,\sigma(1))$ we have $u_1=v_1=p_1$. For the inequality $u_1+v_1\leq p_1+1$ to hold we need $p_1=1$, which contradicts $(1,\sigma(1))$ not being a RL maximum.
Our third case is when $\sigma(1)$ is below the second RL maximum but is above the last. Say that $\sigma(1)$ is above $(m_j,\sigma(m_j))$ but is below the $(j-1)$st RL maximum. Consider the $p_{j-1}$ points associated to the $(j-1)$st RL maximum. Because $\sigma$ avoids 213 none of these points can be below $(1,\sigma(1))$ and to the left of $(m_{j-1},\sigma(m_{j-1}))$, so $v_{j-1}$ counts all the $p_{j-1}$ points. We then have $u_{j-1}+v_{j-1}\geq p_{j-1}+1$, which is a contradiction. Our last case is when $\sigma(1)$ is below the last RL maximum. Because $\sigma$ avoids 213 we must then have that $\sigma(1)=1$ so $\sigma=12[1,\tau]$. Note that in $\tau$ the $u_i$'s and $v_i$'s are the same as they were in $\sigma$ except $u_1$ decreases by one and $v_l$ decreases by one where $l$ indicates the number of RL maximums. We also have that $p_i$ decreases by 1 for all $i$. Because we assumed that $u_i+v_i\leq p_i$ for all $i\neq 1,l$ and $u_i+v_i\leq p_i+1$ for $i= 1,l$ in $\sigma$ we now have that $u^{\tau}_i+v^{\tau}_i\leq p_i^{\tau}+1$ for all $i$ for the associated values in $\tau$. This means $\tau$ is $\star$-splittable so $\tau=z\star 1$ for some $z$. It follows that $\sigma=12[1,z\star 1]=12[1,z]\star 1$ and $\sigma$ is $\star$-splittable.
Lastly, we will show in this second case that if $\sigma$ avoids 213 and $\sigma = w\star 1$ then $w$ avoids 213 by induction on $|\sigma|$. Again the base case is straightforward so assume $|\sigma|>2$. Because $\sigma$ avoids 213 and $\sigma=12[1,\tau]$ we must have that $\tau$ avoids 213. By induction if $\tau$ avoids 213 and $\tau=z\star 1$ then $z$ avoids 213, which further implies that $12[1,z]$ avoids 213. This completes our argument in this case because $\sigma=12[1,z]\star 1$.
\end{proof}
Now that we have proven the conditions for being $*$-splittable and $\star$-splittable, we can prove that $\theta$ is a bijection.
\begin{thm}
The map $\theta:{\mathfrak S}_n(132)\rightarrow{\mathfrak S}_n(213)$ is a well-defined bijection that commutes with the map $r_1$ and has $\Asc(\sigma)=\Des(\theta(\sigma))$.
\label{thm:theta}
\end{thm}
\begin{proof}
From Lemma~\ref{lem:almostTheta} we know that $\theta$ is well-defined, commutes with $r_1$ and has $\Asc(\sigma)=\Des(\theta(\sigma))$. The last thing we need to show is that $\theta$ is indeed a bijection. Because $|{\mathfrak S}_n(132)|=|{\mathfrak S}_n(213)|=C_n$ for all $n$ it suffices to show that $\theta$ is surjective. We will prove this by induction on $n$. The base case of $n=1$ is true so assume that $n>1$ and $\theta:{\mathfrak S}_k(132)\rightarrow{\mathfrak S}_k(213)$ is bijective for all $k<n$.
Let $\sigma\in {\mathfrak S}_n(213)$ and $u_i$, $v_i$ and $p_i$ be defined as in Lemmas~\ref{lem:*split} and \ref{lem:starsplit}. If there exists a $j$ such that $u_j+v_j\geq p_j+2$ then by Lemma~\ref{lem:*split} we know that $\sigma$ is $*$-splittable and $\sigma=x*y$ where $x$ and $y$ avoid 213. By induction there exists $\alpha$ and $\beta$ avoiding 132 such that $\theta(\alpha)=x$ and $\theta(\beta)=y$, which implies $\theta(21[\alpha,\beta])=\theta(\alpha)*\theta(\beta)=\sigma$.
The alternative is that $u_i+v_i\leq p_i+1$ for all $i$. By Lemma~\ref{lem:starsplit} we know that $\sigma$ is $\star$-splittable so $\sigma=z\star 1$ for some $z$ that avoids 213. By induction there exists a $\gamma$ avoiding 132 such that $\theta(\gamma)=z$. Further $\theta(12[\gamma,1])=\theta(\gamma)\star 1 = \sigma$. Hence $\theta$ is surjective for all $n$ and thus a bijection.
\end{proof}
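Theorem~\ref{thm:theta} has a consequence that is easy to test by brute force: since $\theta$ is a bijection with $\Asc(\sigma)=\Des(\theta(\sigma))$, the multiset of ascent sets over ${\mathfrak S}_n(132)$ must coincide with the multiset of descent sets over ${\mathfrak S}_n(213)$. A short Python sketch of this check for small $n$ (the helper names are ours):

```python
from collections import Counter
from itertools import combinations, permutations

def contains(sigma, pat):
    # brute-force check for an occurrence of the classical pattern pat
    m = len(pat)
    return any(
        all((sigma[idx[a]] < sigma[idx[b]]) == (pat[a] < pat[b])
            for a in range(m) for b in range(a + 1, m))
        for idx in combinations(range(len(sigma)), m))

def avoiders(n, pat):
    return [s for s in permutations(range(1, n + 1)) if not contains(s, pat)]

def asc(s):
    return frozenset(i + 1 for i in range(len(s) - 1) if s[i] < s[i + 1])

def des(s):
    return frozenset(i + 1 for i in range(len(s) - 1) if s[i] > s[i + 1])

for n in range(1, 7):
    A, B = avoiders(n, (1, 3, 2)), avoiders(n, (2, 1, 3))
    assert len(A) == len(B)  # both classes are counted by the Catalan numbers
    assert Counter(map(asc, A)) == Counter(map(des, B))
print("ascent/descent distributions agree for n <= 6")
```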
The map in Theorem~\ref{thm:theta} is sufficient to prove the symmetry $M_n(132)=q^{\binom{n}{2}}M_n(213;q^{-1}).$
\begin{cor}
For $n\geq 0$ we have the symmetry
$$M_n(132)=q^{\binom{n}{2}}M_n(213;q^{-1}).$$
\vspace{-1.2cm}
\hfill \qed
\end{cor}
Because $\theta$ commutes with $r_1$ we can conclude that $\theta$ restricts to involutions implying that $\theta$ is also a bijection ${\mathcal I}_n(132)\rightarrow{\mathcal I}_n(213)$. This reproves Theorem~\ref{thm:132symm213}, a result we had shown in Section~\ref{maj213}.
\begin{cor}
For $n\geq 0$ we have the symmetry
$$M{\mathcal I}_n(132)=q^{\binom{n}{2}}M{\mathcal I}_n(213;q^{-1}).$$
\vspace{-1.2cm}
\hfill \qed
\end{cor}
This symmetry does not seem to be limited to the pairs of patterns 123 and 321 as well as 132 and 213. It appears we have the symmetries $M_n(\pi_1)=q^{\binom{n}{2}}M_n(\pi_2;q^{-1})$ and $M{\mathcal I}_n(\pi_1)=q^{\binom{n}{2}}M{\mathcal I}_n(\pi_2;q^{-1})$ for any pair of patterns of the form $\pi_1=12[{\mathfrak i}_k,{\mathfrak d}_{m-k}]$ and $\pi_2=12[{\mathfrak d}_{k+1},{\mathfrak i}_{m-k-1}]$ for any $n$, $m$ and $k\in\{0,\dots,m\}$.
\begin{conj}
For the pair of patterns $\pi_1=12[{\mathfrak i}_k,{\mathfrak d}_{m-k}]$ and $\pi_2=12[{\mathfrak d}_{k+1},{\mathfrak i}_{m-k-1}]$ we have for $n\geq 0$ the symmetry
$$M_n(\pi_1)=q^{\binom{n}{2}}M_n(\pi_2;q^{-1}).$$
\label{conj}
\end{conj}
\begin{conj}
For the pair of patterns $\pi_1=12[{\mathfrak i}_k,{\mathfrak d}_{m-k}]$ and $\pi_2=12[{\mathfrak d}_{k+1},{\mathfrak i}_{m-k-1}]$ we have for $n\geq 0$ the symmetry
$$M{\mathcal I}_n(\pi_1)=q^{\binom{n}{2}}M{\mathcal I}_n(\pi_2;q^{-1}).$$
\label{conj:invo}
\end{conj}
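Conjecture~\ref{conj} can be tested by brute force for small cases beyond the pair $(132,213)$. Taking $m=4$ and $k=2$ gives the pair $\pi_1=12[{\mathfrak i}_2,{\mathfrak d}_2]=1243$ and $\pi_2=12[{\mathfrak d}_3,{\mathfrak i}_1]=3214$. The Python sketch below (assuming, as before, that $M_n(\pi;q)=\sum_{\sigma\in{\mathfrak S}_n(\pi)}q^{{\rm maj}(\sigma)}$; the helper names are ours) confirms the symmetry at $n=6$:

```python
from collections import Counter
from itertools import combinations, permutations

def contains(sigma, pat):
    m = len(pat)
    return any(
        all((sigma[idx[a]] < sigma[idx[b]]) == (pat[a] < pat[b])
            for a in range(m) for b in range(a + 1, m))
        for idx in combinations(range(len(sigma)), m))

def maj_poly(n, pat):
    # coefficients of M_n(pat; q): maps maj value -> number of avoiders
    cnt = Counter()
    for s in permutations(range(1, n + 1)):
        if not contains(s, pat):
            cnt[sum(i + 1 for i in range(n - 1) if s[i] > s[i + 1])] += 1
    return cnt

# m = 4, k = 2: pi1 = 12[i_2, d_2] = 1243 and pi2 = 12[d_3, i_1] = 3214
n = 6
lhs = maj_poly(n, (1, 2, 4, 3))
rhs = maj_poly(n, (3, 2, 1, 4))
binom = n * (n - 1) // 2
assert lhs == Counter({binom - k: v for k, v in rhs.items()})
print("symmetry holds for (1243, 3214) at n =", n)
```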
This has been confirmed for all $m,n\leq 9$ and $k\in\{0,\dots,m\}$, and this symmetry for involutions does not seem to occur for any other pairs of patterns. However, the symmetry for permutations does appear to hold for many more pairs. These additional pairs seem to arise from the pairs $\pi$ and $r_0(\pi)$ and from equalities coming from the $M$-Wilf equivalence classes. This is because $r_0:{\mathfrak S}_n(\pi)\rightarrow {\mathfrak S}_n(r_0(\pi))$ is a bijection with $\Asc(\sigma)=\Des(r_0(\sigma))$. However, the only pairs whose symmetry further restricts to involutions do seem to be just the pairs mentioned above.
This suggests that there exist maps ${\mathfrak S}_n(\pi_1)\rightarrow{\mathfrak S}_n(\pi_2)$ that commute with $r_1$ and satisfy $\Asc(\sigma)=\Des(\theta(\sigma))$.
One implication of Conjecture~\ref{conj} is another conjecture by Dokos et al. Yan, Ge and Zhang~\cite{YGZ15} proved Conjecture~\ref{conj:dokos} in the case $k = 1$ (their Theorem 1.3).
\begin{conj}[Dokos et al.~\cite{DDJSS12} Conjecture 2.7] The following pairs are $M$-Wilf equivalent.
\begin{enumerate}[(i)]
\item $132[{\mathfrak i}_m,1,{\mathfrak d}_k]$ and $231[{\mathfrak i}_m,1,{\mathfrak d}_k]$.
\item $213[{\mathfrak d}_m,1,{\mathfrak i}_k]$ and $312[{\mathfrak d}_m,1,{\mathfrak i}_k]$.
\end{enumerate}
\label{conj:dokos}
\end{conj}
Conjecture~\ref{conj} implies Conjecture~\ref{conj:dokos} because of the following. Certainly the pair $\pi_1=132[{\mathfrak i}_m,1,{\mathfrak d}_k]$ and $\pi_2=312[{\mathfrak d}_m,1,{\mathfrak i}_k]$ has the symmetry $M_n(\pi_1)=q^{\binom{n}{2}}M_n(\pi_2;q^{-1})$ because $r_0(132[{\mathfrak i}_m,1,{\mathfrak d}_k])=312[{\mathfrak d}_m,1,{\mathfrak i}_k]$. For the same reason the pair $\pi_3=231[{\mathfrak i}_m,1,{\mathfrak d}_k]$ and $\pi_4=213[{\mathfrak d}_m,1,{\mathfrak i}_k]$ displays the same symmetry. If Conjecture~\ref{conj} were true, we would then have the equalities $M_n(\pi_1)=M_n(\pi_3)$ and $M_n(\pi_2)=M_n(\pi_4)$ that Conjecture~\ref{conj:dokos} asserts.
\section*{Acknowledgements}\label{sec:acknow} The author would like to thank Bruce Sagan, Stephanie van Willigenburg and Vasu Tewari for the mentorship and conversations that motivated this research.
\bibliographystyle{plain}
\section{Introduction}\label{sec:Introduction}
Primordial black holes (PBHs) were proposed as a macroscopic dark matter (DM) candidate a few decades ago~\cite{Carr:1974nx}. They can be formed in simple inflationary models and do not require new physics below the inflationary scale (see, {\it e.g.}, Refs.~\cite{Carr:2016drx,Sasaki:2018dmp} for reviews). Because of their simplicity as a DM candidate, it is necessary to search for PBHs with all possible masses. Although there are many theoretical and experimental efforts to search for PBHs, there is still a mass window from around $10^{-16}$ to $10^{-11}~M_\odot$ within which PBHs can still compose all of dark matter. It is the purpose of this paper to identify a search method to find or constrain PBHs in this mass window.
In order to be stable on cosmological time scales and evade extragalactic gamma ray bounds from evaporation, PBHs must have mass $M \gtrsim 10^{17}~\text{g}$ or $10^{-16}~M_\odot$~\cite{Carr:2009jm}.
Bounds from evaporation into cosmic rays can set stronger limits for a subdominant PBH DM fraction, though these bounds are slightly weaker than gamma ray bounds when PBHs comprise all of DM \cite{Boudaud:2018hqb}.
However, ``small'' PBH masses remain relatively unconstrained for many orders of magnitude in mass above these bounds. Previous searches for small-mass PBHs include microlensing \cite{Paczynski:1985jf} of stars in M31 using the Subaru/HSC telescope \cite{Subaru} and femtolensing of gamma ray bursts (GRBs) \cite{1992ApJ...386L...5G} using the Fermi GBM detectors \cite{Barnacka:2012bm}. The Subaru/HSC study was limited by wave effects~\cite{1992ApJ...386L...5G} and finite source size effects~\cite{1994ApJ...430..505W} and can only probe PBH masses $M \gtrsim 10^{22}~\text{g}$ or $10^{-11}~M_\odot$. Regarding the study of Fermi GBM data, it was pointed out in Ref.~\cite{Katz:2018zrn} that GRBs cannot at present set bounds on PBHs because the size of the GRB gamma-ray emitting region is too large compared to the Einstein radius of the lens.
Future observations may eventually probe approximately $M \in \left[10^{17},10^{19}\right]~\text{g}$ if GRBs with small enough source size are observed. Other potential constraints in this regime come from neutron star capture \cite{Capela:2013yf} and white dwarf destruction \cite{Graham:2015apa}, both of which face astrophysical uncertainties including the DM abundance in globular clusters \cite{Conroy:2010bs,Ibata:2012eq,Naoz:2014bqa,Popa:2015lkr}.
Other microlensing studies at larger masses above $10^{24}~\text{g}$ include MACHO \cite{Allsman:2000kg}, EROS \cite{Tisserand:2006zx}, OGLE \cite{Wyrzykowski:2011tr}, Kepler \cite{Griest:2013aaa}, caustic crossing \cite{Oguri:2017ock}, and quasar microlensing \cite{Mediavilla:2017bok}.
Thus, a potential window exists for PBHs to be all of DM with mass in the approximate range of $M \in \left[10^{17},10^{22}\right]~\text{g}$ or $\left[10^{-16},10^{-11}\right]\,M_\odot$.
In this paper, we investigate whether any astrophysical object could make a suitable source to search for gravitational lensing due to PBHs in this mass window. A few criteria for a source to serve as a good (micro-)lensing object include:
{\it i}) a large photon energy with sufficient photon counts to reduce the wave effects of lensing; {\it ii}) a small geometric size compared to the Einstein radius such that the finite source size effects are small; {\it iii}) a long distance from the telescopes around the Earth to increase the optical depth or the number of possible lensing events; {\it iv}) a large steady photon flux such that a sudden brightness magnification can be easily identified.
For the first condition {\it i}), the wave effects become important when $4G_N M E_\gamma \lesssim 1$~\cite{1992ApJ...386L...5G} or $E_\gamma \lesssim 1/(4 G_N M) = 0.66\,\mbox{keV}\,\times\,\left(10^{20}\,\mbox{g}/M\right) \,$, where $G_N$ is Newton's gravitational constant and $E_\gamma$ is the lensed photon energy.
This leads us to consider sources emitting in the X-ray spectrum with energy above 1 keV, where we may ignore wave effects for $M \gtrsim \text{few} \times 10^{20}$~g, but not for smaller masses. In our full analysis, we will take the wave effects into account to determine the minimum mass that can be probed.
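As a numerical illustration of this crossover, the critical energy $E_\gamma = \hbar c^3/(4 G_N M)$ (restoring $\hbar$ and $c$) can be evaluated directly. A short Python sketch with SI constants (the function name is ours):

```python
hbar = 1.0546e-34   # J s
c    = 2.9979e8     # m / s
G    = 6.674e-11    # m^3 kg^-1 s^-2
keV  = 1.6022e-16   # J

def E_crit_keV(M_grams):
    # E = hbar c^3 / (4 G M): below this energy wave effects suppress lensing
    M = M_grams * 1e-3          # grams -> kg
    return hbar * c**3 / (4 * G * M) / keV

print(round(E_crit_keV(1e20), 2))   # 0.66 keV, matching the text
```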
The second condition {\it ii}) points towards using highly compact sources. To have a rough understanding of the finite source size effects, we can compare the source size and the Einstein radius when both are projected on the lens plane. Defining $x = D_{\rm OL}/D_{\rm OS}$ as the ratio of the observer-lens angular diameter distance, $D_{\rm OL}$, over the observer-source angular diameter distance, $D_{\rm OS}$, the source radius $R_{\rm S}$ is reduced to $x R_{\rm S}$ after projection to the lens plane. The Einstein radius is given by~\footnote{Because we are working on galactic scales, we assume $D_{\rm OL}+D_{\rm LS}=D_{\rm OS}$, with $D_{\rm LS}$ the lens-source angular diameter distance.}
\beqa \label{eq:Einstein-radius}
r_{_{\rm E}}(x) = \sqrt{4 \, G_N\, M\,x\,(1-x)\,D_{\rm OS}} =
(107\,\mbox{km}) \times \left(\frac{\sqrt{x(1-x)}}{1/2} \right) \, \left(\frac{D_{\rm OS}}{50\,\mbox{kpc}}\right)^{1/2} \left(\frac{M}{10^{19}\,{\rm g}}\right)^{1/2}\,.
\eeqa
The ratio of the source and Einstein radii is given by
\beqa \label{eq:as}
a_{\rm S}(x) = \frac{x R_{\rm S}}{r_{_{\rm E}}(x)} \approx \left(0.1\right) \times \left(\frac{x}{\sqrt{x(1-x)}} \right)\left(\frac{R_{\rm S}}{20\,\mbox{km}}\right)\left(\frac{50\,\mbox{kpc}}{D_{\rm OS}}\right)^{1/2} \left(\frac{10^{19}\,{\rm g}}{M}\right)^{1/2} \,,
\eeqa
which suggests a very compact source object like a neutron star or stellar mass black hole in order to have $a_{\rm S}(x) \ll 1$ for $x=\mathcal{O}(1)$.
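The two benchmark numbers above can be reproduced directly from Eqs.~(\ref{eq:Einstein-radius}) and (\ref{eq:as}). A Python sketch in SI units (function names are ours; factors of $c$ are restored):

```python
import math

G, c, kpc = 6.674e-11, 2.9979e8, 3.086e19   # SI units

def r_E_km(M_grams, D_os_kpc, x=0.5):
    # Einstein radius: sqrt(4 G M x (1-x) D_OS / c^2), in km
    M, D = M_grams * 1e-3, D_os_kpc * kpc
    return math.sqrt(4 * G * M * x * (1 - x) * D) / c / 1e3

def a_S(M_grams, D_os_kpc, R_s_km, x=0.5):
    # ratio of the projected source size to the Einstein radius
    return x * R_s_km / r_E_km(M_grams, D_os_kpc, x)

print(round(r_E_km(1e19, 50.0)))          # 107 km, as in the text
print(round(a_S(1e19, 50.0, 20.0), 2))    # 0.09, i.e. the quoted ~0.1
```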
The third and fourth conditions are somewhat at odds---a large distance puts more lenses between the source and the telescope, but it also decreases the source apparent brightness. Balancing these turns out to favor sources towards the outer reaches of the Milky Way halo, {\it e.g.}, in Milky Way satellite galaxies.
In the next section, we motivate why X-ray binary pulsars satisfy these conditions and determine the best candidate source pulsars. The following three sections detail calculations of the lensing event rate and magnification, including wave and finite source size effects. Section \ref{sec:experiments} presents current and prospective experimental bounds. We conclude in Section \ref{sec:conclusion}.
\section{X-ray pulsars as lensing sources}\label{sec:source}
Among the X-ray sources with emitted photon energy around 1-100 keV, X-ray binaries are potentially good candidates for lensing because the X-ray emission region can be relatively small. Most X-ray binaries consist of a compact stellar remnant and a nearby relatively normal donor star. Typically, the compact objects are either a neutron star ($\sim$ 1-2 $M_\odot$) or a black hole ($\sim$ 5-15 $M_\odot$)~\cite{Zhang:2010qr,Casares:2017jah}.
The matter from the donor gravitationally infalls into the compact object, forming an accretion disk. X-rays are emitted according to the accretion mechanism~\cite{Shakura:1972te,Rappaport:2003un}, with an X-ray emission region within a factor of a few times the neutron star radius or the black hole Schwarzschild radius. For an X-ray pulsar with a solar-mass neutron star as the accretor, the hard X-rays are mainly emitted from the accretion column with a polar cap radius of $0.1\,R_{\rm NS}$ and a cylinder height of $\lesssim R_{\rm NS}$, with $R_{\rm NS}\approx 10$~km denoting the neutron star radius \cite{Hickox:2004fy}. Since the emitting direction of the hard X-rays is approximately perpendicular to the column height, the source size is anticipated to be less than around the neutron-star radius, or $R_{\rm S} \lesssim R_{\rm NS}$, and is generically below 100~km. Given the uncertainty in the current understanding of the source size, we will include the finite source size effects for $R_{\rm S}$ up to 100 km and choose a fiducial value of $R_{\rm S}=20$~km for our later analysis. The brightest X-ray black hole binaries in general are more massive and thus have a larger emitting area and more important finite source size effects.
The observed X-ray spectrum for an X-ray pulsar is dominated by two features: direct emission from its accretion column as described above and reprocessing of column X-rays by its accretion disk~\cite{Hickox:2004fy}. The reprocessing dominates the soft energy spectrum below about 1 keV, while the accretion column dominates above about 2 keV for the pulsars in our study~\cite{Hickox:2005nd,Hung:2010cf}. While the source size of the reprocessed X-rays is potentially large, as discussed above the accretion column is smaller. Thus, it is important to limit any lensing search using these sources to energies greater than 2 keV, which by coincidence aligns nicely with the energy where wave effects become less important for PBH mass around $10^{20}$~g---the mass region we wish to probe.
Among all the X-ray pulsars, in order to satisfy the conditions $\it iii$) and $\it iv$) in Section~\ref{sec:Introduction}, we focus on the most distant bright sources. It is straightforward to identify the X-ray pulsars either in the Large or Small Magellanic Clouds (LMC or SMC) with a distance of $50$-$65$~kpc as potentially good sources~\cite{Casares:2017jah}. Furthermore, to have a large value of observed photon counts per second, we eventually identify SMC X-1 and LMC X-4 as the two ``good'' sources to search for lensing events by PBHs and concentrate on SMC X-1 for quantitative analysis.
\section{Estimation of optical depth and averaged time interval}\label{sec:optical-depth}
Before we introduce the formulas to calculate the event rate with both wave and finite source size effects, we first estimate the optical depth and average time interval between lensing events~\cite{Paczynski:1985jf}. We will use more precise formulas in Section~\ref{sec:event-rate} for our final sensitivity study. To estimate the optical depth for PBH DM lensing a source in SMC and LMC, we use the isotropic Einasto profile~\cite{Graham:2006ae} as the dark matter density in our galaxy
\beqa
\rho_{\rm DM}(r) = \rho_{\odot} \, e^{ -\frac{2}{\beta} \left[(r/r_s)^\beta- (r_\odot/r_s)^\beta \right] } \,,
\eeqa
with $\rho_\odot = 0.4\,\mbox{GeV}/\mbox{cm}^3$, $r_s = 20$~kpc, $r_\odot = 8.5$~kpc and $\beta = 0.17$. Other dark matter profiles would introduce only a small perturbation to the later results. In our analysis, we will also conservatively ignore the dark matter contributions from SMC and LMC, which only increase the optical depth by around 10\% in the point-like source case and by even less in the finite source size case.
For a point-like source and ignoring wave effects, the optical depth, or the probability for a source to be within $y_T$ Einstein radii of a foreground PBH lens, is simply
\beqa
\label{eq:optical-depth}
\tau = f_{\rm PBH}\, \int_0^{1} dx\,D_{\rm OS}\,\frac{\rho_{\rm DM}(x\,\vec{r}_{\rm S})}{M}\, \pi\,r_{\rm E}^2(x) \, y_T^2 ~~.
\eeqa
Here, $f_{\rm PBH}$ is the fraction of PBH contributions to the total DM energy density and $y_T$ is the threshold PBH distance from the source line of sight in units of $r_{_{\rm E}}$---its value depends on the required magnification factor. The integrand of Eq.~(\ref{eq:optical-depth}) is independent of the lens mass, but has a quadratic dependence on the source distance [see Eq.~\eqref{eq:Einstein-radius}]. For the source SMC X-1, with galactic coordinates $(\ell, b)=(300.41^\circ, -43.56^\circ)$ and at a distance of $D_{\rm OS}=d_{\rm SMC-X1}\approx 65$~kpc~\cite{Hilditch:2004pz,Keller:2006ek} from Earth, the optical depth is $8.4\times 10^{-7}$. For LMC X-4, with $(\ell, b)=(276.33^\circ, -32.53^\circ)$~\cite{LMC-X-4} and $D_{\rm OS}=d_{\rm LMC}=50$~kpc, the optical depth is $5.5\times 10^{-7}$. The optical depths to other X-ray pulsars within our galaxy~\cite{x-ray-pulsars} are only a few percent (or less) of those for the SMC and LMC sources, so we do not include them in our analysis.
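The integral in Eq.~(\ref{eq:optical-depth}) is simple enough to check numerically. The sketch below uses our own illustrative assumptions: the standard Einstein radius $r_{\rm E}(x)=\sqrt{4 G M D_{\rm OS}\,x(1-x)}/c$, $y_T=1$, $f_{\rm PBH}=1$, and the quoted angles treated as galactic longitude and latitude. With these choices it comes out close to the quoted $8.4\times 10^{-7}$, and the lens mass cancels as stated in the text:

```python
import math

# Rough numerical cross-check of the quoted optical depth for SMC X-1.
G, C, KPC = 6.674e-11, 2.998e8, 3.0857e19          # SI units
D_OS = 65.0 * KPC                                  # source distance, m
GEV_CM3 = 1.783e-27 / 1.0e-6                       # kg/m^3 per GeV/cm^3
RHO_SUN, R_S, R_SUN, BETA = 0.4, 20.0, 8.5, 0.17   # Einasto parameters

def rho_los(x):
    """DM density (kg/m^3) a fraction x along the sight line to SMC X-1,
    treating the quoted angles as galactic (l, b) -- our assumption."""
    d = x * 65.0                                   # kpc from Earth
    cosfac = math.cos(math.radians(300.41)) * math.cos(math.radians(-43.56))
    r = math.sqrt(R_SUN**2 + d*d - 2.0*R_SUN*d*cosfac)  # galactocentric, kpc
    return RHO_SUN * GEV_CM3 * math.exp(
        -(2.0/BETA) * ((r/R_S)**BETA - (R_SUN/R_S)**BETA))

def optical_depth(m_grams, n=2000):
    """tau for a point-like source with y_T = 1 and f_PBH = 1."""
    m = m_grams * 1.0e-3                           # kg
    tau, dx = 0.0, 1.0/n
    for i in range(n):
        x = (i + 0.5) * dx
        re2 = 4.0 * G * m * D_OS * x * (1.0 - x) / C**2  # Einstein radius^2
        tau += D_OS * (rho_los(x) / m) * math.pi * re2 * dx
    return tau
```

The mass dependence cancels between $\rho_{\rm DM}/M$ and $r_{\rm E}^2 \propto M$, so the returned value is the same for any lens mass.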
To have a rough estimate of the lensing event rate or the averaged time interval between two events, we adopt the approximate formula in Ref.~\cite{Paczynski:1985jf}
\beqa
\langle \Delta t \rangle &=& \Gamma^{-1} \approx \frac{\pi}{2}\,\frac{t_{\rm E}}{\tau}\,f_{\rm PBH}^{-1}\,y_T\, \nonumber \\
&\approx& (11\,\mbox{days})\,\times f_{\rm PBH}^{-1}\, y_T^{-1}\, \left(\frac{\sqrt{x(1-x)}}{1/2} \right) \, \left(\frac{D_{\rm OS}}{65\,\mbox{kpc}}\right)^{1/2} \left(\frac{M}{10^{19}\,{\rm g}}\right)^{1/2} \,.
\label{eq:Deltat-opticaldepth}
\eeqa
Here, we have used $\tau=8.4\times 10^{-7}$ for SMC X-1. The Einstein radius crossing time is $t_{\rm E} \approx r_{_{\rm E}}(x=1/2)/v_\perp \approx 0.50\,\mbox{s}$ for $D_{\rm OS} = 65$~kpc, $M=10^{19}$~g and a PBH perpendicular speed of $v_\perp \approx 240$~km/s~\cite{Nesti:2013uwa}. In the situation with negligible background events, an observation of this X-ray source with a length of $\mathcal{O}(10\,\mbox{days})$ could constrain PBHs as 100\% of DM. In the following section, we will include both the wave and finite source size effects and make a more realistic estimate of the event rate.
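Plugging numbers into Eq.~(\ref{eq:Deltat-opticaldepth}) takes only a few lines. The sketch below uses our illustrative choices $f_{\rm PBH}=1$, $y_T=1$, and $x=1/2$, and recovers both $t_{\rm E}\approx 0.5$~s and an interval of about 11 days:

```python
import math

# Back-of-envelope check of the averaged time interval for SMC X-1.
G, C, KPC = 6.674e-11, 2.998e8, 3.0857e19   # SI units
D_OS = 65.0 * KPC                           # source distance, m
M = 1.0e19 * 1.0e-3                         # 10^19 g in kg
V_PERP = 2.4e5                              # PBH perpendicular speed, m/s
TAU = 8.4e-7                                # quoted optical depth

r_e_half = math.sqrt(4.0 * G * M * D_OS * 0.5 * 0.5) / C  # r_E at x = 1/2
t_e = r_e_half / V_PERP                     # Einstein crossing time, s
dt_days = (math.pi / 2.0) * t_e / TAU / 86400.0  # average interval, days
```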
\section{Wave optical lensing for a finite source size}\label{sec:wave}
For a source emitting primarily with X-ray energy of $\mathcal{O}(1$-$10\,\mbox{keV})$, wave effects must be taken into account in order to probe a lower PBH mass range $\lesssim 10^{19}$\,g. For a point-like source, the magnification factor $\mu$ is given by~\cite{Matsunaga:2006uc}
\beqa
\mu(w, y) = \frac{\pi\, w}{1 - e^{-\pi\, w}}\,\left| _1 F_1\left( \frac{i}{2}\,w, 1; \frac{i}{2}\,w \, y^2 \right) \right|^2 \,,
\eeqa
with $w \equiv 4 G_N M E_\gamma$~\footnote{For sources near or in our galaxy, we have ignored the redshift factor for the lens distance.} and $y(x)\equiv d_s(x)/r_{_{\rm E}}(x)$, where $d_s(x)$ is the tangential distance between the source and lens. Note that the mass dependence in $w$ comes from the black hole Einstein radius. This formula is valid for any lens of mass $M$ so long as its radius is less than the Einstein radius. In the limit of $y=0$, the hypergeometric function $_1 F_1$ approaches 1 and the maximal magnification is simply the prefactor, $\mu^{\rm max} = \pi \, w/(1 - e^{-\pi w})$. For a general $y$, we can also calculate the two limits of $\mu$ in terms of $w$, which are
\beqa \label{eq:mu-two-limits}
\mu(w, y) =
\begin{cases}
1 + \frac{\pi\,w}{2} + \frac{w^2}{12} (\pi^2 - 6 y^2) & \mbox{for} ~~~ w \ll 1 \\
\frac{1}{y\sqrt{4+y^2}}\left\{ 2+y^2 + 2 \sin{\left[ w \left( \frac{1}{2} y \sqrt{4+y^2} + \log{\left|\frac{\sqrt{4+y^2}+y}{\sqrt{4+y^2}-y} \right|} \right) \right]} \right\} & \mbox{for} ~~~ w \gtrsim y^{-1}
\end{cases} \,.
\eeqa
So, when the wave effect is important with $w \rightarrow 0$, $\mu \rightarrow 1$ and there is no magnification.
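The magnification formula above can be evaluated directly from the power series of $_1F_1$. The sketch below is a minimal implementation using only complex arithmetic (no special-function library), which is adequate for moderate $w$; the series definition of $_1F_1$ is our choice of evaluation method:

```python
import math

def hyp1f1(a, b, z, terms=200):
    """Kummer confluent hypergeometric 1F1(a; b; z), direct power series."""
    s, t = 1.0 + 0.0j, 1.0 + 0.0j
    for n in range(terms):
        t *= (a + n) * z / ((b + n) * (n + 1))
        s += t
    return s

def mu_point(w, y):
    """Wave-optics magnification for a point lens and point-like source."""
    pref = math.pi * w / (1.0 - math.exp(-math.pi * w))
    amp = hyp1f1(0.5j * w, 1.0, 0.5j * w * y * y)
    return pref * abs(amp) ** 2
```

For $y=0$ the hypergeometric factor is exactly 1, so the function reduces to the prefactor $\pi w/(1-e^{-\pi w})$, and for $w\rightarrow 0$ it approaches 1, matching the limits discussed in the text.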
\begin{figure}[thb!]
\begin{center}
\includegraphics[width=0.55\textwidth]{muSMC.pdf}
\caption{\label{fig:muSMC}
The averaged magnification factor $\overline{\mu}$ for a range of photon energies as a function of $y$, defined as the ratio of the tangential source-lens separation in the lens plane to the Einstein radius. The source is assumed to be point-like for this plot. The black dashed line shows the infinite-mass (geometric optics) limit.
}
\end{center}
\end{figure}
For a specific source, one can calculate the averaged magnification factor by integrating over a range of energies. For a source energy spectrum $F(E_\gamma)$, we define
\beqa \label{eq:mu-energy-ave}
\overline{\mu}(y) \equiv \frac{\int^{E_{\rm max}}_{E_{\rm min}}\,d E_\gamma\,F(E_\gamma)\,\mu\left[w(E_\gamma), y\right] }{\int^{E_{\rm max}}_{E_{\rm min}}\,d E_\gamma\,F(E_\gamma)} ~~.
\eeqa
When analyzing the data for a specific telescope, one should also include the energy-dependent effective acceptance area of the telescope, $A(E_\gamma)$, by making the replacement $F(E_\gamma) \rightarrow F(E_\gamma)\,A(E_\gamma)$. The hard energy spectrum of an X-ray pulsar usually follows a power law with an exponential cutoff. For SMC X-1, we take $F(E_\gamma) = E_\gamma^{-0.93}$ for $E_\gamma \leq 6$~keV and $E_\gamma^{-0.93}\,e^{-(E_\gamma - 6~\text{keV})/7.9~\text{keV}}$ for $E_\gamma > 6$~keV~\cite{Neilsen:2004eb}. The average energy for the range from 2 to 60 keV is $\langle E_\gamma \rangle = 6.8$~keV. Integrating over this energy range, we show the magnification factors for different PBH masses in Fig.~\ref{fig:muSMC}. It is clear from this figure that the magnification factor decreases as the mass decreases and the wave effect becomes more important. However, this decrease is not monotonic: for instance, the value of $y$ corresponding to $\overline{\mu} = 1.8$ is larger for $M=10^{19}$~g than for $10^{20}$~g. For $M=10^{18}$~g, the maximum magnification factor is slightly below 1.2. So, there may exist a threshold PBH mass below which lensing is undetectable. To get around this, one may consider increasing $E_{\rm min}$ to reduce the wave effect, at the cost of reducing the total photon counts and increasing statistical errors. We will come back to this point when we analyze the real data.
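The quoted average photon energy can be checked directly from the spectrum above. A short sketch (midpoint integration; the discretization choices are ours):

```python
import math

def flux(e_kev):
    """SMC X-1 spectrum: power law with an exponential cutoff above 6 keV."""
    f = e_kev ** -0.93
    if e_kev > 6.0:
        f *= math.exp(-(e_kev - 6.0) / 7.9)
    return f

def mean_energy(e_min=2.0, e_max=60.0, n=5000):
    """Flux-weighted mean photon energy over [e_min, e_max] keV."""
    de = (e_max - e_min) / n
    num = den = 0.0
    for i in range(n):
        e = e_min + (i + 0.5) * de
        w = flux(e)
        num += e * w
        den += w
    return num / den
```

This reproduces the quoted $\langle E_\gamma \rangle \approx 6.8$~keV to within a few percent; folding in a telescope's effective area $A(E_\gamma)$ would simply multiply the weight inside both integrals.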
Having discussed the wave effects, we now include the finite source size effect. Given our limited understanding of the source spatial properties, we simply assume a two-dimensional Gaussian distribution with the source size of $R_{\rm S}$ in each direction. The source intensity is $W(\vec{\chi}) \propto \mbox{exp}\left( - |\vec{\chi}|^2/2R_S^2\right)$, where $\vec{\chi}$ is the two-dimensional vector with respect to the source center. After integrating out the angular variable, one rewrites the magnification factor for a fixed energy~\cite{Goodman}
\beqa
\mu\left[ w(E_\gamma), a_{\rm S}(x), y(x) \right] = a_{\rm S}^{-2} \,e^{- y^2/(2 a_{\rm S}^2)}\, \int^\infty_0 dz\,z\,e^{-z^2/(2 a_{\rm S}^2)}\,I_0\left(y\,z/a_{\rm S}^2\right) \,\mu(w, z) \,.
\label{eq:mu-finitesource}
\eeqa
Here, the dimensionless parameter, $a_{\rm S}(x)$, is defined in Eq.~\eqref{eq:as} and proportional to the source size, $R_{\rm S}$. The function $I_0(z)$ is the zeroth-order modified Bessel function. Similarly to Eq.~\eqref{eq:mu-energy-ave}, one can also calculate the energy-averaged $\overline{\mu}$ by integrating over the relevant energy range.
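The convolution in Eq.~\eqref{eq:mu-finitesource} can be sketched with elementary functions only. Here we stand in for $\mu(w,z)$ with its geometric-optics (large-$w$) envelope and implement $I_0$ in an exponentially scaled, overflow-safe form; both of these choices are ours, for illustration:

```python
import math

def i0e(x):
    """exp(-x) * I0(x): scaled zeroth-order modified Bessel function."""
    if x < 15.0:
        s, t = 1.0, 1.0
        for k in range(1, 80):
            t *= (x * x / 4.0) / (k * k)
            s += t
        return s * math.exp(-x)
    # leading asymptotic terms for large argument
    return (1.0 + 1.0/(8.0*x) + 9.0/(128.0*x*x)) / math.sqrt(2.0*math.pi*x)

def mu_geo(z):
    """Point-source geometric-optics magnification at impact parameter z."""
    return (2.0 + z*z) / (z * math.sqrt(4.0 + z*z))

def mu_finite(y, a_s, mu=mu_geo, zmax=25.0, n=8000):
    """Source-averaged magnification for a Gaussian source of size a_s
    (in Einstein-radius units), via the convolution in the text."""
    dz, total = zmax / n, 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        # exponentials combined with the scaled Bessel to avoid overflow
        total += dz * z * math.exp(-(y - z)**2 / (2.0 * a_s**2)) \
                 * i0e(y * z / a_s**2) * mu(z)
    return total / a_s**2
```

As $a_{\rm S}\rightarrow 0$ the point-source magnification is recovered, while a larger source washes out the magnification peak at small $y$, consistent with the behavior described in the text.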
Requiring threshold values of $\overline{\mu}_T = 2.0$ or 1.3, we show the allowed parameter space in the $x$-$y$ plane in Fig.~\ref{fig:y-x-muT}. For a larger value of the magnification factor (left panel), the finite source size effect is more dramatic. As the source size increases, the allowed range in $x$ shrinks, which results in a smaller optical depth and a longer observation time required to place a limit. For a small value of $x$, the finite source size effect is not important because $a_{\rm S}(x) \rightarrow 0$ as $x\rightarrow 0$. The allowed range in $y$ increases when the threshold magnification $\overline{\mu}_T$ decreases, and we have already seen from the optical depth that a larger value of $y_T$ increases the rate of lensing. So, the final sensitivity of a search for PBH microlensing events depends on the choice of $\overline{\mu}_T$ for which lensing can be distinguished from normal source fluctuations. We will determine $\overline{\mu}_T$ based on the variance in the count rate from telescope observations.
\begin{figure}[thb!]
\begin{center}
\includegraphics[width=0.47\textwidth]{muT20.pdf} \hspace{6mm}
\includegraphics[width=0.47\textwidth]{muT13.pdf}
\end{center}
\caption{\label{fig:y-x-muT}
The allowed parameter space (below the curves) in $x$-$y$ after including both wave and finite source size effects for two different energy-averaged magnification factors $\overline{\mu}_T=2.0$ (left panel) and 1.3 (right panel).
}
\end{figure}
\section{Event rate}\label{sec:event-rate}
To calculate the event rate, we take into account the dark matter velocity distribution in our galaxy. Ignoring the small effects of the source motion~\cite{1991ApJ...372L..79G}, the differential event rate is given by~\cite{1991ApJ...372L..79G,Subaru}
\beqa
\frac{d\Gamma}{d\hat{t}} = f_{\rm PBH}\times 2\, \int^{x_{\rm max}}_0 dx\,D_{\rm OS}\,\frac{\rho_{\rm DM}(x\,\vec{r}_{\rm S})}{M}\, \int^{y_T(x)}_0 \, \frac{dy}{\sqrt{y_T(x)^2 - y^2}}\, \frac{v_r^4}{v_c^2} \,e^{- v_r^2/v_c^2 } ~~.
\eeqa
Here, $\hat{t}$ is the timescale of the microlensing event; $v_r$ is the velocity of the PBH in the lens plane and is related to $\hat{t}$ by $v_r = 2\,r_{_{\rm E}}(x) \sqrt{y_T(x)^2 - y^2}/\hat{t}$; $y_T(x)$ is the threshold source-lens distance to have $\overline{\mu} > \overline{\mu}_T$ as displayed in Fig.~\ref{fig:y-x-muT}; $x_{\rm max} \in \left[0,1\right]$ is the upper limit of $x$, which depends on the source size as in Fig.~\ref{fig:y-x-muT}. The velocity $v_c$ is the velocity dispersion in our galaxy, which is taken to be approximately the circular velocity. For our analysis, we simply take $v_c \approx 240\,\mbox{km}/\mbox{s}$, which holds for a wide range of locations away from the center of the galaxy~\cite{Nesti:2013uwa,doi:10.1093/mnras/stw2775}.
Depending on the experimental data, one could choose a minimum value for the lensing timescale, $t_{\rm min}$, which should be a factor of a few times the time binning $t_{\rm bin}$ in order to have magnified counts in a few bins. Then, the average time interval from one event to another is
\beqa
\langle \Delta t \rangle = \left(\int_{t_{\rm min}}^\infty d\hat{t}\, \frac{d\Gamma}{d\hat{t}}\right)^{-1} \,.
\label{eq:time-interval-avg}
\eeqa
We show this quantity as a function of $t_{\rm min}$ for different PBH masses and source sizes in Fig.~\ref{fig:dGammadt}. Again, for a smaller value of the magnification factor, the finite source size effects are smaller for a fixed PBH mass. For $\overline{\mu}_T =1.3$ and $t_{\rm min}=0.3$~s, the averaged time interval is around 7 days for $M=10^{19}$~g and 5 days for $M=5\times 10^{18}$~g.
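The double integral above, together with the cut at $t_{\rm min}$, can be sketched numerically. The toy choices below are ours: a point-like source, a constant threshold $y_T=1$ in place of the full $y_T(x)$ from Fig.~\ref{fig:y-x-muT}, $M=10^{19}$~g, and the SMC X-1 sight line with the quoted angles treated as galactic $(\ell, b)$; the substitution $y=y_T\sin\theta$ absorbs the integrable $1/\sqrt{y_T^2-y^2}$ factor:

```python
import math

G, C, KPC = 6.674e-11, 2.998e8, 3.0857e19   # SI units
D_OS = 65.0 * KPC                           # m
M = 1.0e19 * 1.0e-3                         # kg
VC = 2.4e5                                  # velocity dispersion, m/s
GEV_CM3 = 1.783e-27 / 1.0e-6                # kg/m^3 per GeV/cm^3
RHO_SUN, R_S, R_SUN, BETA = 0.4, 20.0, 8.5, 0.17

def rho_los(x):
    """DM density (kg/m^3) a fraction x along the SMC X-1 sight line."""
    d = x * 65.0
    cosfac = math.cos(math.radians(300.41)) * math.cos(math.radians(-43.56))
    r = math.sqrt(R_SUN**2 + d*d - 2.0*R_SUN*d*cosfac)
    return RHO_SUN * GEV_CM3 * math.exp(
        -(2.0/BETA) * ((r/R_S)**BETA - (R_SUN/R_S)**BETA))

def r_e(x):
    return math.sqrt(4.0 * G * M * D_OS * x * (1.0 - x)) / C

def rate(t_min, y_t=1.0, nx=50, nth=24, nt=120, t_max=30.0):
    """Gamma(> t_min): dGamma/dt-hat integrated over t-hat, x and theta."""
    gam, dx, dth = 0.0, 1.0/nx, (math.pi/2.0)/nth
    dlt = math.log(t_max / t_min) / nt      # logarithmic grid in t-hat
    for i in range(nx):
        x = (i + 0.5) * dx
        pre = 2.0 * D_OS * rho_los(x) / M
        for j in range(nth):
            a = 2.0 * r_e(x) * y_t * math.cos((j + 0.5) * dth)
            for k in range(nt):
                t = t_min * math.exp((k + 0.5) * dlt)
                vr = a / t                  # PBH speed in the lens plane
                gam += pre * (vr**4 / VC**2) * math.exp(-(vr/VC)**2) \
                       * dx * dth * (t * dlt)
    return gam
```

With these simplified choices, the averaged interval for $t_{\rm min}=0.3$~s comes out at the ten-day scale, in line with the numbers quoted above.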
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Gammamu20.pdf} \hspace{6mm}
\includegraphics[width=0.47\textwidth]{Gammamu13.pdf}
\end{center}
\caption{\label{fig:dGammadt}
The averaged time interval between two lensing events as a function of the minimum event timescale, $t_{\rm min}$.
Various source sizes are displayed with different line textures; some of these source-size curves overlap, indicating that the source size is not important for that particular $M$ and $\overline{\mu}_T$.
}
\end{figure}
\section{Existing and future experimental constraints}\label{sec:experiments}
Having outlined the calculations for optical depth, magnification, and event rate, we now determine the most promising lensing sources and demonstrate how effectively X-ray telescopes can search for PBH lensing events.
Of the two X-ray pulsars identified above, SMC X-1 has a larger luminosity, around $1.5\times 10^{39}\,\mbox{erg}/\mbox{s}$~\cite{doi:10.1093/mnras/197.2.247}, compared to LMC X-4's $4\times 10^{38}\,\mbox{erg}/\mbox{s}$~\cite{1991ApJ...381..101L}. In addition, it is at a greater distance, giving it a larger optical depth for lensing. It also has longer total archival observations by recent X-ray telescopes. We thus focus on it for the remainder of this section.
Given the SMC X-1 source flux of $\sim 0.1\,\mbox{cts}/\mbox{s}/\mbox{cm}^2$ for X-ray energies above a few keV, a telescope with an effective area of $\mathcal{O}(10^4\,\mbox{cm}^2)$ is necessary to have $\mathcal{O}(100)$ counts for the time bin size of 0.1~s to constrain the magnification factor. We now discuss telescopes that fit this criterion.
\subsection{Existing data from RXTE}\label{sec:existing}
Among the previous and current X-ray telescopes, the Rossi X-ray Timing Explorer Proportional Counter Array (RXTE PCA) and AstroSat~\cite{doi:10.1117/12.2062667} have large enough effective areas in the energy range of interest (the RXTE PCA has a total collecting area of 6500 cm$^2$). Their effective areas are approximately flat for energies above around 4 keV and drop quickly near 2 keV and above 10 keV for RXTE~\cite{RXTE-effective-area} (80 keV for AstroSat). In our data analysis, we will take the energy-dependent area into account when we calculate the magnification factor using Eq.~\eqref{eq:mu-energy-ave}. RXTE PCA also has more pointed exposure for SMC X-1, 12.65 days, than any other modern X-ray telescope. This exposure time is in the ballpark of the averaged time interval for lensing events shown in Fig.~\ref{fig:dGammadt}, which makes the RXTE PCA observations of SMC X-1 very interesting for a PBH dark matter search.
We use the RXTE-specific tools in {\tt HEASOFT 6.25}~\cite{RXTE_Heasoft,Heasoft}~\footnote{$\mathtt{SEEXTRCT}$ was used to extract events with Earth elevation angle greater than 10 degrees, pointing offset less than 0.02 degrees, and time since South Atlantic Anomaly greater than 10 minutes. All active Proportional Counter Units (PCUs) were included. Background was estimated using $\mathtt{RUNPCABACKEST}$ with the provided bright source background model, and the background lightcurve was subtracted from the observed lightcurve. Barycenter correction was performed with $\mathtt{FAXBARY}$.} to analyze the RXTE PCA data from the GoodXenon1 and GoodXenon2 modes. Since SMC X-1 has an intrinsic pulsation period of about 0.7\,s, we apply a Fourier transform to the extracted lightcurves and remove the peaks associated with the intrinsic frequencies. We then perform an inverse Fourier transform to obtain pulsation-free lightcurves. The resulting lightcurves are used to estimate the apparent brightness and variability of the persistent emission (no flares or eclipses). As an example, we show a portion of one observation period (observation ID P10139) in Fig.~\ref{fig:example}. For this observation with binning time $t_\text{bin}=0.1\,\text{s}$, the fiducial brightness is calculated to be $\mbox{B}_\text{fid}=496~\text{cts/s}$ with a standard deviation $\sigma_\text{B,fid}=123~\text{cts/s}$ in the persistent emission. Note that $\sigma_\text{B,fid} t_\text{bin} \simeq 12 > \sqrt{\mbox{B}_\text{fid} t_\text{bin}} \simeq 7$, indicating that there is a bit more intrinsic variation than just the source's Poisson noise and pulsation period. The additional variations likely come from the accretion mechanism or observational noise. If this additional variation includes correlations between flux bins, this may weaken our results.
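The comparison between the observed scatter and pure Poisson noise quoted above is a one-line check:

```python
import math

# Consistency check of the quoted variability numbers for observation P10139
B_FID, SIG_FID, T_BIN = 496.0, 123.0, 0.1    # cts/s, cts/s, s
poisson_counts = math.sqrt(B_FID * T_BIN)    # expected Poisson scatter, counts
observed_counts = SIG_FID * T_BIN            # observed scatter, counts
excess = observed_counts / poisson_counts    # > 1 implies extra variability
```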
\begin{figure}
\begin{center}
\includegraphics[width=0.65\textwidth]{example_data.pdf}
\end{center}
\caption{\label{fig:example}
Example lightcurve for a 600-second portion of one observation period (observation ID P10139 with all five PCUs added), binned in 0.1 s intervals after removing the intrinsic pulsation frequency. Red crosses indicate where two consecutive bins exceed the mean by at least $3\sigma$. There are no three consecutive bins with $3\sigma$ deviations, which is close to the requirement for a lensing event.
\end{figure}
Using this information, we create selection criteria to search for rare lensing events while keeping the statistical background minimal. We may look for some number of consecutive points $N_\text{consec}$ on the lightcurve for which the count rate is greater than some number of standard deviations $N_\sigma$ above the mean~\cite{Griest:2011av}. The number of standard deviations $N_\sigma$ is chosen such that the number of expected statistical background occurrences is much smaller than unity for the entire observation period. In other words, the probability for a particular set of $N_\text{consec}$ consecutive bins all to lie above a given threshold $N_\sigma$ ought to obey
\beq
p \ll \frac{t_\text{bin}}{t_\text{obs}} = 1.16 \times 10^{-7} \times \left(\frac{10~\text{days}}{t_\text{obs}}\right) \left(\frac{t_\text{bin}}{0.1~\text{s}}\right) ~,
\label{eq:pvalue}
\eeq
where $t_\text{obs}$ is the total observation time. Note that $p$ refers to a {\it particular} set of $N_\text{consec}$ bins, as opposed to any set of $N_\text{consec}$ bins in the entire dataset. For example, assuming a Gaussian distribution and uncorrelated points, $p=[1-\Phi(N_\sigma)]^{N_\text{consec}}$, where $\Phi$ is the cumulative distribution function for a Gaussian with mean zero and standard deviation one. Then, the probability of having three consecutive time bins with over $3\sigma$ fluctuations is $p=2.5 \times 10^{-9}$, while the probability of having two consecutive time bins with $4\sigma$ fluctuations is $p=1.0\times 10^{-9}$. If the points have additional correlations in time, one could either impose a more stringent statistical requirement or examine any candidate ``interesting'' events more closely through their light curves and energy spectra. In practice, we require $N_\sigma$ just large enough to saturate a factor of $1/20$ times the number on the right-hand side of Eq.~\eqref{eq:pvalue}. The actual bounds are not sensitive to this choice of factor.
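The background probabilities quoted above follow directly from the Gaussian tail function; a short sketch:

```python
import math

def tail(n_sigma):
    """One-sided Gaussian tail probability 1 - Phi(n_sigma)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

def p_consec(n_sigma, n_consec):
    """Probability that a particular set of n_consec uncorrelated bins
    all fluctuate above n_sigma."""
    return tail(n_sigma) ** n_consec
```

Both quoted values come out as stated and lie well below the bound in Eq.~\eqref{eq:pvalue} for the fiducial observation parameters.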
With the requirement on $N_\sigma$, we can then calculate the required energy-averaged magnification factor, $\overline{\mu}_T$. Say we are interested in the magnification of a particular time bin that is $n_\sigma$ standard deviations from the mean (positive or negative). We may also wish to vary the binning time $t_\text{bin}$ and apparent source brightness ${\rm B}$. Then, the required $\overline{\mu}_T$ is
\begin{equation}
\overline{\mu}_T = \left(1 \,+ \,N_\sigma \,\times\, \frac{\sigma_{{\rm B},\text{fid}}/{\rm B}_\text{fid} }{\sqrt{({\rm B}/{\rm B}_\text{fid}) (t_\text{bin}/t_\text{bin,fid})}}\right) \bigg/ \left(1 \,+ \,n_\sigma \,\times\, \frac{\sigma_{{\rm B},\text{fid}}/{\rm B}_\text{fid} }{\sqrt{({\rm B}/{\rm B}_\text{fid}) (t_\text{bin}/t_\text{bin,fid})}}\right) ~~,
\end{equation}
where ${\rm B}_\text{fid}$ is the fiducial apparent source brightness in cts/s with $\sigma_{{\rm B},\text{fid}}$ its standard deviation for a fiducial value of the binning time $t_\text{bin,fid}$. For the fiducial values given above for the RXTE PCA data, taking $N_\sigma=3$ and $n_\sigma=-1$, one has $\overline{\mu}_T=2.32$ for $t_\text{bin}=0.1~\text{s}$. Note that the magnification is required to exceed $\overline{\mu}_T$ for a time period of at least $N_\text{consec} t_\text{bin}$. Thus, the maximum magnification is generally greater than $\overline{\mu}_T$. Note that this assumes each flux bin is uncorrelated. We discuss this further in Section \ref{sec:conclusion}.
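The threshold formula is straightforward to evaluate with the fiducial RXTE PCA values; a minimal sketch:

```python
import math

# fiducial RXTE PCA values quoted in the text
B_FID, SIG_FID, TBIN_FID = 496.0, 123.0, 0.1   # cts/s, cts/s, s

def mu_threshold(n_sigma_req, n_sigma_bin, b=B_FID, t_bin=TBIN_FID):
    """Required energy-averaged magnification for a bin sitting
    n_sigma_bin from the mean to fluctuate up to n_sigma_req."""
    rel = (SIG_FID / B_FID) / math.sqrt((b / B_FID) * (t_bin / TBIN_FID))
    return (1.0 + n_sigma_req * rel) / (1.0 + n_sigma_bin * rel)
```

For $N_\sigma=3$, $n_\sigma=-1$, and $t_\text{bin}=0.1$~s it returns $\overline{\mu}_T\approx 2.32$, as quoted.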
With $\overline{\mu}_T$ determined, $y_T(x)$ can be computed using Eqs.~(\ref{eq:mu-energy-ave}) and (\ref{eq:mu-finitesource}). Then, the lensing event rate can be computed from Eq.~(\ref{eq:time-interval-avg}), which must be multiplied by the probability for there to be $N_\text{consec}$ consecutive bins above $n_\sigma$ from the mean in the underlying source signal (before lensing effects); this probability is estimated from the data distribution.\footnote{The light curve data is nearly Gaussian; it is slightly skewed right.} Having tried a few options, we take $N_\text{consec}=3$ and $n_\sigma=-1$ as fixed, which tends to yield slightly better results than other possibilities. For each mass, the optimal value of $t_\text{bin}$ is determined to maximize the lensing event rate. If no lensing candidates are found and background is assumed to be nearly zero, masses and PBH abundances for which the expected number of lensing events is $\geq 3$ can be excluded at 95\% CL.
As we will now demonstrate, the present RXTE data is not sufficient to constrain $f_{\rm PBH} \leq 1$. Therefore, we will not perform a full analysis of the RXTE data because the resulting bounds would not constrain any interesting portions of parameter space. Rather, we give an estimate using some simplifying assumptions for what the RXTE data can exclude. Then, in the next section, we will show how future telescopes can probe heretofore untested PBH masses.
For the total RXTE PCA 12.65-day exposure of SMC X-1, there are about 10 days of persistent emission. We take a constant persistent count rate of $\text{B}=170~\text{cts/s/pcu}$~\cite{Raichur:2009ej}, though in reality the persistent emission varies with the superorbital period; {\it e.g.}, the count rate in Fig.~\ref{fig:example} is a bit lower, while even higher rates have been observed~\cite{Rai:2018vkw}. Future observations should focus on high points in the superorbital period to obtain the best lensing sensitivity. We make a further simplifying assumption that all 5 PCUs are active for these observations, although for many observations some of the PCUs are not available. All of these assumptions are a bit optimistic compared to actual RXTE data, but they are more realistic for future observations, which are the main focus of this work. Since no microlensing-like event is observed in our data analysis, we set 95\% CL constraints on the PBH parameter space in $f_{\rm PBH}$ and $M_{\rm PBH}$ in the red shaded region of Fig.~\ref{fig:bounds}. We also show the gamma-ray constraints from PBH evaporation in the gray shaded region~\cite{Carr:2009jm} and the Subaru/HSC constraints from microlensing of stars in M31 in the brown shaded region~\cite{Subaru}.
For $M_{\rm PBH} = 10^{19}~\mbox{g}$, RXTE has the most stringent constraint of $f_{\rm PBH} \lesssim 8.4$, which requires three consecutive $3.7\sigma$ time bins with $t_\text{bin}=0.08~\text{s}$ and has $\overline{\mu}_T=2.2$. For small masses, the wave effect limits the maximum attainable magnification (see Fig.~\ref{fig:muSMC}), and so the optimization procedure prefers to increase $t_{\rm bin}$ and reduce $\overline{\mu}_T$.
Around the threshold mass of $\sim 2\times 10^{18}$~g, the finite source size effects would become important if $\overline{\mu}_T$ were fixed (see Fig.~\ref{fig:dGammadt}). However, a smaller $\overline{\mu}_T$ from the optimization means that finite source size effects are reduced.
On the other hand, for larger masses, the event passing time is long, which also leads to a smaller preferred $\overline{\mu}_T$. So, $\overline{\mu}_T$ as a function of mass has a peak value located around $10^{19}~\mbox{g}$.
The increased sensitivity around $M = 5 \times 10^{19}~\text{g}$ is the result of wave effects giving a relatively flat $\overline{\mu}(y)$ near the optimal $\overline{\mu}_T$, allowing $y_T$ to be larger near this particular mass. The precise location of this dip depends on the source energy spectrum and the range of energies that are integrated.
Regarding the lower energy cutoff of $E_\text{min}=2~\text{keV}$, we have tested and found that the exact choice for this value has small effects on the bounds. This is because any gain from removing the influence of wave effects at lower energy is offset by a loss in apparent source brightness, which goes as $E^{-0.93}$ before including the effective area dependence.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{projected_bounds_v2.pdf}
\end{center}
\caption{\label{fig:bounds}
Constraints on the PBH dark matter fraction $f_\text{PBH}$ and mass $M$ at 95\% CL using around 10 days observation of SMC X-1 by RXTE. Also shown are the projected limits from future observations of SMC X-1 by AstroSat, Athena and Lynx, and eXTP. The SMC X-1 source size with the X-ray energy above 2 keV is fixed to be 20 km for all thicker curves (the thinner green dot-dashed line for eXTP has a source size of 100 km for illustration purposes).
Finite source size effects are unimportant due to the optimization of $t_\text{bin}$ (see text for details).
The flux measurements are optimistically assumed to be uncorrelated during persistent emission.
Also shown are extragalactic gamma ray bounds from BH evaporation \cite{Carr:2009jm} and Subaru/HSC microlensing bounds \cite{Subaru} (truncated at $f_\text{PBH}=1$).
}
\end{figure}
One may consider other existing X-ray telescope data in addition to RXTE. For example, Chandra, XMM-Newton, Suzaku, and NuSTAR each have about 2 to 5 days of SMC X-1 exposure~\cite{Heasarc_browse}. Unfortunately, their combined exposure time is less than that of RXTE alone. More importantly for the present discussion, the effective area of these telescopes is smaller than that of RXTE with multiple PCUs on. With smaller count rates, a higher $\overline{\mu}_T$ and thus smaller $y_T(x)$ is necessary to pick out a lensing signal from the background fluctuations, reducing the rate of detectable lensing events.
\subsection{Projected sensitivity: AstroSat, Athena, Lynx, and eXTP}\label{sec:future}
Although the existing data from RXTE is not sufficient to constrain PBHs as 100\% of dark matter, a longer observation of this X-ray pulsar source will probe this interesting PBH mass range. RXTE ceased its science operations in 2012, but the ongoing satellite telescope AstroSat, launched in 2015, has a similar effective area to RXTE.
The orange dashed line of Fig.~\ref{fig:bounds} shows the projected 95\% confidence level (CL) limits for a 300-day observation of SMC X-1 by AstroSat, which would constrain a wide range of PBH masses from $\text{few} \times 10^{18}$\,g to $10^{20}$\,g. Notably, only $\mathcal{O}(100~\text{days})$ of AstroSat exposure to persistent emission are necessary to begin to constrain $f_\text{PBH} < 1$.
Among the future X-ray telescopes, Athena~\cite{Athena}, Lynx~\cite{LynxTeam:2018usc}, and eXTP~\cite{Zhang:2016ach} have larger effective areas than RXTE PCA for the interesting energy range of $2$-$10$~keV for the source SMC X-1. Both Athena and Lynx have an effective area as large as 2 m$^2$ at 1 keV, while eXTP has an even larger effective area of 3.4 m$^2$ peaked between 6 and 10 keV. Taking into account the energy-dependent effective area, we show the projected limits for 300-day observations for both Athena and Lynx in the blue dotted line in Fig.~\ref{fig:bounds}, which is similar to the limits from AstroSat but with the most sensitive point at a slightly higher mass. This is because Athena and Lynx have their effective areas peaked at a lower energy, and so cannot probe quite as low PBH masses due to wave effects. Finally, we show the ultimate sensitivity with a 300-day exposure for an X-ray telescope with a larger effective area like eXTP in the green dot-dashed line of Fig.~\ref{fig:bounds}. It is interesting to note that a year-long observation of SMC X-1 will almost cover the currently unconstrained mass gap for PBHs as an explanation for 100\% of dark matter. An even larger telescope like LOFT \cite{Feroci:2011jc, Vacchi:2018mnt} (a previous version of eXTP) could probe an even larger range of masses.
All of these bounds exhibit a bump in sensitivity similar to RXTE's a bit below $10^{20}~\text{g}$ due to wave effects. The limits for large masses are set by the increasing passing time, which results in a smaller rate that scales roughly as $\Gamma \propto M^{-1/2}$ [see Eq.~\eqref{eq:Deltat-opticaldepth}]. This effect is slightly offset by increasing $t_\text{bin}$, allowing a smaller $\overline{\mu}_T$ as mass increases.
For all the bounds in Fig.~\ref{fig:bounds}, the finite source size effect is not important, contributing less than a few percent correction for $R_{\rm S}=20$~km. The reason is that the value of $t_\text{bin}$ has been chosen at each point to maximize the sensitivity. As a result of this, the smallest masses where bounds are possible tend to prefer larger $t_\text{bin}$ and smaller $\overline{\mu}_T$. Smaller $\overline{\mu}_T$ delays the wave effect and finite source size effect from becoming relevant (see Figs.~\ref{fig:y-x-muT} and \ref{fig:dGammadt}), which can overcome the decrease in $\Gamma$ as $t_\text{min} \propto t_\text{bin}$ increases. Because this optimization can be specified {\it a priori}, there is no trial factor.
\section{Discussion and conclusions}
\label{sec:conclusion}
We note that the selection criteria as presented will pick out both gravitational lens events as well as source flares. The two can be easily distinguished. First, in the lightcurve, flares exhibit a sharp rise followed by an exponential decay, whereas lensing events are symmetric and have a distinct shape (that varies depending on how far one is into the wave regime). Furthermore, the effects of each on the energy spectrum differ, and they can be distinguished by, {\it e.g.}, the hardness ratio. For larger masses with $w \gg 1$, we are in the regime of microlensing where the magnification is uniform across all energies. In the other case, we are in the femtolensing regime~\cite{1992ApJ...386L...5G}, and the calculable energy-dependent magnification will manifest in the measured spectrum.
Aside from their pulse periods, X-ray binaries exhibit other periodic fluctuations. In the case of SMC X-1, it has an orbital period of 3.89 days. During part of this period, its emissions are eclipsed by its accretion disk. Further, it exhibits a superorbital variation with period varying in the range 40 to 65 days~\cite{Hu:2013wza,Trowbridge:2007kj}, during which it oscillates between high- and low-state emission. To maximize lensing bounds, future observations should focus on uneclipsed high-state emission.
The effects in the above two paragraphs describe many of the intrinsic source variabilities. However, even on top of these there are more sources of variability beyond ordinary Poisson statistics, as is evident by noting that in our fiducial values, $\sigma_\text{B,fid} t_\text{bin,fid} \simeq 12$ is a bit larger than $\sqrt{\text{B}_\text{fid} t_\text{bin,fid}} \simeq 7$. Importantly, correlations between consecutive time bins could lead to false positives in lensing searches, which could weaken the projections presented herein. As the purpose of this study is to identify X-ray pulsars as suitable lensing targets and provide estimated projections of their lensing sensitivity, we do not attempt a full accounting of all of the mechanisms in the accretion process that may account for this. We leave this as a topic of future study.
While we have chosen SMC X-1 as one of the most promising (and at present, most observed) cases, other X-ray binaries could contribute to future lensing bounds. We have already mentioned LMC X-4 as another promising X-ray pulsar, which is at a similar distance to SMC X-1, although it is a bit fainter. Other closer X-ray pulsars within the Milky Way disk could add further to lensing bounds, although the optical depth for lensing these sources is smaller. Finally, X-ray black holes could provide another avenue for setting bounds. The brightest and thus most promising X-ray black holes tend to be a bit heavier than X-ray pulsars (since pulsars are limited in mass by the requirement that they not gravitationally collapse). While these heavier black hole radii may be on the same order as the neutron star radii, the accretion and X-ray emission region may be larger owing to their larger mass. In addition, reprocessing dominates the black hole spectra to higher energies than for the pulsars~\cite{Nowak:2000kf}. As a result, a larger value for $E_\text{min}$ is necessary, which reduces the overall count rate. Even before this cut, the LMC and SMC black hole binaries are dimmer than SMC X-1. Nonetheless, they may prove especially useful for limiting larger-mass lenses where finite source size effects are unimportant. Better understanding and modeling of the source size and shape could improve the analysis in this paper.
It may also prove advantageous to have multiple X-ray telescopes observing the same source simultaneously. This can help to distinguish non-Gaussian noise that is not associated with the intrinsic source variability---for example, cosmic rays mimicking X-rays. It also potentially allows for parallax detection, giving another handle for distinguishing intrinsic source variation from microlensing signals \cite{Refsdal:1993kf,Gould:1992yv,Gould:1993yv}. If a lensing event is observed, parallax information could allow a determination of the distances to the lens and the source. Finally, it would allow a microlensing measurement to be confirmed across more than one observatory.
Another approach to set bounds at masses nearer to the edge of the Subaru/HSC bounds is to employ sources emitting at energies between X-ray and visible, namely in the ultraviolet (UV). For example, UV stars in M31 could be considered. However, finite source size effects must be taken into account for UV emitters of stellar size. Indeed, finite source size effects were an important limiting factor in the Subaru/HSC study. A more detailed analysis may be worth pursuing. One could also consider other UV sources like type Ia supernovae~\cite{Foley:2016obj} and hot white dwarfs~\cite{white}: the former also suffer from finite source size effects, and the latter can only be observed in our Milky Way galaxy and do not have enough optical depth.
In this paper, we have explored the potential for X-ray telescope observations of X-ray binary pulsars to probe lensing due to PBH DM with mass $M \in \left[10^{17},10^{22}\right]~\text{g}$ or $\left[10^{-16},10^{-11}\right]\,M_\odot$, between the present BH evaporation and Subaru-HSC bounds. We have identified SMC X-1 as one of the most promising candidate sources, which strikes a balance between a distant source with large optical depth and a bright source with good counting statistics. While present data are just shy of excluding PBH in this window, adding just $\mathcal{O}(100~\text{days})$ of exposure to persistent emission by the presently-operating AstroSat telescope to the existing RXTE data can already start to probe presently unbounded PBH masses. A future telescope with a larger effective area like eXTP could probe nearly all of the open mass range with about one year of exposure. The microlensing study for PBHs in this paper can also be applied to other macroscopic dark matter candidates like dark quark nuggets~\cite{Bai:2018dxf} or axion miniclusters or stars~\cite{Kolb:1993zz}, provided they have small enough radii.
\subsubsection*{Acknowledgements}
We thank Andrey Katz and Andrew Long for discussion.
The work is supported by the U. S. Department of Energy under the contract DE-SC0017647 and URA Visiting Scholars Program. This work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293. YB also thanks the hospitality of the particle theory group of the University of Chicago.
\bibliographystyle{JHEP}
\section{Introduction}
The application of unmanned aerial vehicles (UAVs) as base stations (BSs), which are called UxNBs by the third generation partnership project (3GPP) \cite{3GPP_uxnb}, has attracted substantial attention for 5G and beyond-5G due to their many advantages, such as high mobility, on-demand deployment, and a high probability of establishing a line-of-sight (LoS) link with users. UxNBs can provide connectivity for users at special events, such as those held in stadiums or theatres, or in areas impacted by natural disasters like floods or earthquakes. These scenarios can arise in served, under-served, or un-served areas.
Deploying UxNBs can also be very useful in cases where terrestrial infrastructure is unable to serve all users due to a temporary spike in demand. In such cases, some users can be offloaded to the aerial infrastructure.
While LoS connectivity between each UxNB and its users can provide very high data rates for the users within the cell, the severe intercell interference in aerial cellular networks \cite{comp_in_sky}, caused by the UxNBs of neighboring cells, is a significant problem.
Another challenge of utilizing UxNBs is their backhauling: since this cannot be performed through fiber links, it must be wireless \cite{Elham_conf}. The backhauling challenge of UxNBs is even greater in remote areas, where terrestrial infrastructure or fiber links may be lacking. In this paper, we address these two challenges of UxNBs, i.e., intercell interference and backhauling, with the aid of the cell-free scheme \cite{Elina} and a high altitude platform station (HAPS) \cite{survey_haps}, respectively.
Cell-free massive multiple-input multiple-output (MIMO) is a technology that has been proposed for beyond-5G \cite{Elina}. In this technology, each user is served by a massive number of access points (APs), and all of these APs are connected to a central processing unit (CPU) \cite{ngo}. In this method, the interference from other cells is utilized as a desired signal, and all of the received signals of the APs are combined at the CPU. In our work, we propose to apply this cell-free scheme to a set of aerial APs (UxNBs) in order to manage the severe intercell interference between the UxNBs of neighboring cells and terrestrial users in aerial cellular networks. There are many works in the literature that investigate the cell-free scheme for terrestrial networks \cite{Elina,ngo,Manijeh,Bashar_backhaul}. However, to the best of our knowledge, our work is the first to consider the cell-free scheme for UAV-BSs.
In the cell-free scheme, enormous bandwidth is required for the backhaul links between the APs and the CPU. For terrestrial cell-free APs, this bandwidth can be provided by fiber links \cite{ngo,Manijeh}. However, for our proposed aerial cell-free APs, the backhauling must be wireless. In order to satisfy the enormous bandwidth requirements of backhauling for UxNBs, we need to utilize the upper frequency bands for these wireless links \cite{Elham_conf}. In \cite{Elham_caching}, content caching was proposed to alleviate backhaul congestion, thereby addressing the limited wireless backhaul capacity of UxNBs and, consequently, decreasing the latency. The authors in \cite{backhaul_number} provided analytical expressions for the probability of successfully establishing a backhaul link in the millimeter-wave band between UAV-BSs and ground stations, and they showed that increasing the density of the ground station network improved the backhauling performance. In \cite{comp_in_sky}, the authors proposed utilizing the coordinated multipoint scheme for UAV-BSs in uplink communications. They assumed that the backhaul links between all UAVs and the CPU were perfect, so that the signal distortion induced by the backhaul transmission was ignored.
The problem of wireless backhauling of UxNBs stems largely from the dynamic blockages and shadowing between UxNBs and a terrestrial CPU, which make it difficult to utilize the upper frequency bands (such as the terahertz (THz) band) for these links \cite{mmwave_UAV_backhaul}. Higher frequency bands require a reliable LoS link, and the probabilistic LoS links between UAVs and a terrestrial CPU are not suitable for these bands. We propose to utilize a HAPS in the stratosphere to solve this problem.
The application of HAPSs in wireless networks has attracted a lot of attention recently \cite{survey_haps,3GPP_haps,Softbank}. HAPSs are typically deployed in the stratosphere at an altitude of around $20 ~\mathrm{km}$ with a quasi-stationary position relative to the earth \cite{grace2011broadband}. A HAPS can provide LoS communication and a wide
coverage radius of $50-500 ~\mathrm{km}$, and it can be equipped with powerful computing resources and batteries \cite{survey_haps}. In \cite{Sahabul_haps}, the authors envisioned a HAPS as a super macro base station to provide connectivity in a plethora of applications. Unlike a conventional HAPS, which targets broad coverage for remote areas or disaster recovery, they envisioned HAPSs for highly populated metropolitan areas. In \cite{ren2021caching}, HAPS computing was considered as a promising extension of edge computing. The authors in \cite{kurt2020communication} envisioned a HAPS as an enabling technology for communication, computing, caching, and sensing in next-generation aerial delivery networks. In \cite{Safwan}, the authors analyzed the link budget of aerial platforms equipped with reconfigurable smart surfaces, and compared their communication performance with that of terrestrial networks.
In our scheme, instead of a terrestrial CPU, we propose to utilize a HAPS as an aerial CPU to process all the received signals from all UxNBs. A HAPS is an ideal choice to serve as the CPU of our proposed cell-free scheme, since there is negligible blockage and shadowing on the backhaul links between a HAPS and the UxNBs, which means the LoS links will be reliable. Hence, we can easily use the upper frequency bands for these links to support the enormous bandwidth requirement for backhauling in the proposed cell-free scheme. In addition to the backhauling of UxNBs in urban and dense urban environments, another important scenario for using a HAPS as a CPU for backhauling of the aerial APs arises when these APs are deployed to serve users in remote areas, where terrestrial infrastructure and fiber links may be lacking or damaged.
In this paper, we propose to use the sub-THz frequency band for the backhaul links between UxNBs and HAPS. The THz band is generally defined as the region of the electromagnetic
spectrum in the range of $100 ~\mathrm{GHz}$ to $10 ~\mathrm{THz}$, and sub-THz band is defined as the frequencies in the range of $100 ~\mathrm{GHz}$ to $300 ~\mathrm{GHz}$ \cite{THz_loss_Mag,akyildiz2014terahertz}.
The D band ($110-170 ~\mathrm{GHz}$) is among the next interesting frequency ranges for beyond-5G systems \cite{D-band-juntti,Rappa}, and hence we consider this band as the carrier frequency in our paper. The authors in \cite{Petrov_SINR_THz} developed an analytical model for interference and signal-to-interference-plus-noise ratio (SINR) assessment in dense THz networks, obtaining the first two moments and the density functions of both metrics.
In \cite{dahrouj}, the authors investigated
a THz Ultra-Massive-MIMO-based aeronautical communication scheme for the space-air-ground integrated network. In \cite{UAV_THz}, the problem of UAV deployment, power allocation, and bandwidth allocation was investigated for a UAV-assisted wireless system operating at THz frequencies.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item A cell-free scheme for a set of aerial APs (UxNBs) is proposed to manage the severe intercell interference in aerial cellular networks between UxNBs of neighboring cells and terrestrial users.
\item We utilize a HAPS as a CPU for backhauling of UxNBs in the sub-THz band. In this paper, instead of a terrestrial CPU, we show how a HAPS can be used as an aerial CPU to process all received signals from all UxNBs. HAPS is an ideal choice to work as a CPU since there is negligible blockage and shadowing for backhaul links between it and the UxNBs which means a reliable LoS link. Hence, we can easily use the upper frequency bands for these links to support the huge bandwidth requirement for backhauling.
\item A transceiver scheme at the UxNBs is proposed. At the first time slot of the proposed cell-free scheme, users send their messages to UxNBs at the sub-6 GHz frequency band. Then each UxNB applies match-filtering to align the received signals from users, followed by power allocation among the aligned signals of all users. At the second time slot, at each UxNB, we allocate orthogonal resource blocks (RBs) for each user at the sub-THz band, and forward the filtered signals of all users to the HAPS after analog beamforming.
\item A receiver scheme at the HAPS is proposed. At the HAPS, in order to align the received signals for each user from different UxNBs, we perform analog beamforming. Then, we demodulate and decode the message of each user at its own unique RB. Finally, we derive the achievable rate of the users based on the proposed transceiver and receiver schemes.
\item We formulate an optimization problem that maximizes the minimum SINR of users. We find optimum values for two blocks of optimization variables (i.e., the allocated powers for users in each UxNB and the locations of UxNBs), which are solved by the bisection \cite{boyd2020disciplined} and successive convex approximation (SCA) \cite{boyd2004convex} methods, respectively. Finally, the whole optimization problem is solved by the block coordinate descent (BCD) method \cite{razaviyayn2013unified}.
\end{itemize}
Simulation results demonstrate the superiority of the proposed cell-free scheme compared with the aerial cellular and terrestrial cell-free baseline schemes in urban, suburban, and dense urban environments. Also, simulation results show that utilizing a HAPS as a CPU is useful when the considerable path loss in the sub-THz band between UxNBs and HAPS is compensated for by a high number of antenna elements at the HAPS.
The remainder of this paper is organized as follows. Section II presents the system model. Section III presents the proposed transceiver scheme and the corresponding achievable rate. Section IV provides the formulated optimization problem and its solution for powers and locations of UxNBs. Section V provides simulation
results to validate the performance of the proposed scheme. Finally, Section VI concludes the paper.
\section{System Model and Channel Model}
\subsection{System Model}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{model_system.eps}
\caption{System model for the proposed cell-free scheme with HAPS-assisted sub-THz backhauling. At the first time slot, users send their messages to the UxNBs, and each UxNB applies match filtering to its received signals. At the second time slot, the UxNBs forward each user's filtered signal to the HAPS on orthogonal RBs. Then, the HAPS, acting as a CPU, decodes the message of each user. }\label{system-model}
\end{figure}
The proposed aerial cell-free scheme with HAPS-assisted sub-THz backhauling is shown in Fig. \ref{system-model}.
As shown, $M$ UxNBs serve $K$ users in the cell-free mode. Each UxNB is assumed to be equipped with a uniform planar array (UPA) with $N$ receive antenna elements, positioned on the underside of the UAV and working in the sub-6 GHz frequency band, and a UPA with $G$ transmit antenna elements on the topside of the UAV, working in the sub-THz band. We propose to utilize a HAPS as a CPU to combine the received signals of all UxNBs and decode the messages of all $K$ users. The HAPS is equipped with a UPA with $S$ receive antenna elements working in the sub-THz band.
Due to the requirement of $K$ orthogonal RBs for retransmission of the received signals at the UxNBs to the HAPS, we propose to use the sub-THz frequency band for the backhaul links.\par
In the proposed scheme, the transmission is performed in two time slots. At the first time slot, users send their data to the UxNBs.
It should be noted that the severe intercell interference between the UAVs of neighboring cells and users is a significant problem in aerial networks. In order to solve this problem, we propose an aerial cell-free scheme in which each user is served by multiple UxNBs that establish strong links with it.
At each UxNB, the channel state information (CSI) of the user-to-UxNB links is estimated, and then utilized for match-filtering of the received signals. We also divide the total power of each UxNB among the users.
Since the instantaneous CSI only needs to be known locally at the UxNBs for the match-filtering scheme, this is a major advantage compared to other schemes, such as zero-forcing, which require the instantaneous CSI of all links at the CPU \cite{emil}.\par
At the second time slot, the UxNBs forward the power-allocated and match-filtered signals to the HAPS. Here, we utilize the sub-THz band for the UxNB-to-HAPS links, and we allocate orthogonal RBs for each user's signal on these backhaul links. This means that in the cell-free scheme, we need $K$ times more bandwidth for backhauling compared to the access network, which can be satisfied in the sub-THz band. In order to align the received signals for each user from the different UxNBs, we perform analog beamforming based on the steering vectors of the UPA at the HAPS for all UxNBs. Finally, we demodulate and decode the message of each user at its own unique RBs.
\subsection{Channel Model}
In our scheme, the channel between user $k$ and antenna element $n$ of UxNB $m$ is denoted by $h_{kmn}$, which includes both large-scale fading (i.e., path loss and shadowing) and multipath small-scale fading effects. We assume a UPA at the receiver of each UAV with $N=N_w\times N_l$ antenna elements, where $N_w$ and $N_l$ denote the number of antenna elements along the width and length of the array, respectively. Because there is a LoS link between the users and the UAVs, a Ricean distribution is considered for the channel between user $k$ and antenna element $n=(n_w,n_l)$ of UxNB $m$ as follows:
\begin{equation}
h_{kmn}=10^{-\dfrac{\mathsf{PL_{km}}}{20}}(\sqrt{P_{km}^{\mathsf{LoS}}}a_{kmn}+\sqrt{P_{km}^{\mathsf{NLoS}}}CN(0,1)),
\end{equation}
where $CN(0,1)$ denotes a circularly symmetric complex Gaussian random variable with zero mean and unit variance (power). Also,
\begin{equation}
\begin{split}
a_{kmn}=\exp(j2\pi(\frac{d_{km}}{\lambda_\mathsf{sub6}}))\times\exp(j2\pi(\frac{d_\mathsf{sub6,w}(n_w-1)\sin\theta_{km}\cos\phi_{km}}{\lambda_\mathsf{sub6}}))\\ \times\exp(j2\pi(\frac{d_\mathsf{sub6,l}(n_l-1)\sin\theta_{km}\sin\phi_{km}}{\lambda_\mathsf{sub6}}))
\end{split}
\end{equation}
indicates the phase shift of the LoS link's signal due to distance in which $d_{km}$ shows the distance between user $k$ and UxNB $m$; $d_\mathsf{sub6,w}=\frac{\lambda_\mathsf{sub6}}{2}$ ($d_\mathsf{sub6,l}=\frac{\lambda_\mathsf{sub6}}{2}$) is the element spacing along the width (length) of antenna array for each UxNB in sub-6 GHz frequency band $f_\mathsf{sub6}$; $\lambda_\mathsf{sub6}=\frac{C}{f_\mathsf{sub6}}$ is the wavelength, and $C=3\times10^8 ~\mathrm{m/s}$ is the speed of light.
The coordinates of each user $k$ and each UAV $m$ are denoted by $(x_{u,k}, y_{u,k},0)$ and $(x_{d,m}, y_{d,m}, h_{d,m})$, respectively. Hence the distance between user $k$ and UAV $m$ equals $d_{km}=\sqrt{(x_{u,k}-x_{d,m})^2+(y_{u,k}-y_{d,m})^2+(h_{d,m})^2}$.
$\theta_{km}$ and $\phi_{km}$ show the elevation and azimuth angles of arrival of the transmitted signal from user $k$ at the UxNB $m$, respectively.
It should be noted that $\bold{a}_{km}=[a_{kmn}]_{1\times N}$ creates the steering vector of the receive antenna array of UAV $m$ for user $k$.
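As a concrete illustration of the steering vector $\bold{a}_{km}$ above, the following Python sketch builds the phase terms of a half-wavelength-spaced UPA and verifies that match-filtering with the same vector recovers the full array gain of $N=N_wN_l$. The function name and the test angles are illustrative only, and the common distance phase $\exp(j2\pi d_{km}/\lambda_\mathsf{sub6})$ is omitted since it cancels in match-filtered combining.

```python
import numpy as np

def upa_steering_vector(theta, phi, n_w, n_l, d_over_lambda=0.5):
    """Steering vector of an n_w x n_l UPA with element spacing d = lambda/2.

    theta, phi: elevation and azimuth angles (radians), as in the text.
    Returns the flattened vector [a_kmn] of length n_w * n_l, omitting the
    common distance phase (it cancels in match-filtered combining).
    """
    pw = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(n_w)
                * np.sin(theta) * np.cos(phi))   # width-direction phases, (n_w - 1) terms
    pl = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(n_l)
                * np.sin(theta) * np.sin(phi))   # length-direction phases
    return np.outer(pw, pl).ravel()

# Coherently combining with the matched vector yields the array gain N = n_w * n_l:
a = upa_steering_vector(np.deg2rad(40.0), np.deg2rad(60.0), 4, 4)
array_gain = np.abs(np.vdot(a, a))               # equals 16 for a 4 x 4 array
```

The same construction applies to $\bold{b}_m$ and $\bold{c}_m$ of the sub-THz links, with the corresponding angles and spacings.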
Also, we have $\mathsf{PL_{km}}=P_{km}^{\mathsf{LoS}}\mathsf{PL_{km}^{\mathsf{LoS}}}+P_{km}^{\mathsf{NLoS}}\mathsf{PL_{km}^{\mathsf{NLoS}}}$, in which the LoS and non-LoS (NLoS) path losses of the link between UxNB $m$ and user $k$ are equal to $\mathsf{PL}_{km}^{\mathsf{LoS}}=\mathsf{FSPL}_{km}+\eta_{\mathsf{LoS}}^{\mathsf{dB}}$ and $\mathsf{PL}_{km}^{\mathsf{NLoS}}=\mathsf{FSPL}_{km}+\eta_{\mathsf{NLoS}}^{\mathsf{dB}}$, respectively \cite{Hourani}. In these equations, $\mathsf{FSPL}_{km}=10\log(\frac{4\pi f_\mathsf{sub6}d_{km}}{C})^2$ is the free-space path loss (FSPL), and $\eta_{\mathsf{LoS}}^{\mathsf{dB}}$ and $\eta_{\mathsf{NLoS}}^{\mathsf{dB}}$ indicate the excessive path losses (in dB) affecting the air-to-ground links for the LoS and NLoS cases, respectively \cite{Irem}. $P_{km}^{\mathsf{LoS}}=\frac{1}{1+A\exp({-B[(90-\theta_{km})-A]})}$ is the probability of establishing a LoS link between user $k$ and UxNB $m$, in which $\theta_{km}$ (in degrees) is the elevation angle between user $k$ and UxNB $m$, and $A$ and $B$ are environment-dependent parameters \cite{Hourani}. $P_{km}^{\mathsf{NLoS}}=1- P_{km}^{\mathsf{LoS}}$ is the probability of establishing a NLoS link between user $k$ and UxNB $m$.
The large-scale channel power gain for the user $k$ to UxNB $m$ link is equal to
\begin{equation}
\beta_{km}^2=E\{|h_{kmn}|^2\}=E\{h_{kmn}h_{kmn}^*\}=10^{-\frac{\mathsf{PL_{km}}}{10}}=10^{-\frac{P_{km}^{\mathsf{LoS}}\mathsf{PL_{km}^{\mathsf{LoS}}}+P_{km}^{\mathsf{NLoS}}\mathsf{PL_{km}^{\mathsf{NLoS}}}}{10}}.
\end{equation}
By considering $\beta_{0}=(\frac{4\pi f_\mathsf{sub6}}{C})^{-2}$ as the channel gain at the reference distance $d_{km}=1~\mathrm{m}$, the large-scale channel power gain can be rewritten as
$\beta_{km}^2=\eta_{km}\beta_{0}(d_{km})^{-2}$,
in which $\eta_{km}=10^{-\frac{P_{km}^{\mathsf{LoS}}\eta_{\mathsf{LoS}}^{\mathsf{dB}}+P_{km}^{\mathsf{NLoS}}\eta_{\mathsf{NLoS}}^{\mathsf{dB}}}{10}}$ shows the excessive path loss.
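The large-scale model above can be sketched numerically as follows. The environment parameters ($A$, $B$, and the excess losses $\eta_{\mathsf{LoS}}^{\mathsf{dB}}$, $\eta_{\mathsf{NLoS}}^{\mathsf{dB}}$) are illustrative urban-type placeholders, not values adopted by this paper.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def large_scale_gain(d, theta_elev_deg, f_sub6=2e9, A=9.61, B=0.16,
                     eta_los_db=1.0, eta_nlos_db=20.0):
    """Large-scale power gain beta_km^2 = eta_km * beta_0 * d^(-2).

    d: user-to-UxNB distance (m); theta_elev_deg: elevation angle (degrees).
    A, B and the excess losses are environment-dependent constants
    (placeholder urban-type values here).
    """
    p_los = 1.0 / (1.0 + A * np.exp(-B * (theta_elev_deg - A)))  # LoS probability
    eta_db = p_los * eta_los_db + (1.0 - p_los) * eta_nlos_db    # mean excess loss (dB)
    beta0 = (4.0 * np.pi * f_sub6 / C) ** -2                     # gain at the 1 m reference
    return 10.0 ** (-eta_db / 10.0) * beta0 * d ** -2.0

# The gain falls off as d^-2 and improves with elevation (LoS becomes more likely):
g_near = large_scale_gain(200.0, 70.0)
g_far = large_scale_gain(400.0, 70.0)
```

At a fixed elevation angle the excess loss is unchanged, so doubling the distance reduces the gain by exactly a factor of four.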
We consider independent additive white Gaussian noise (AWGN) with the distribution $CN(0,\sigma^{2})$ at all antenna elements of all UxNBs. We assume that all the antenna elements in this paper are omni-directional with an antenna gain of 1.
We assume a UPA for the transmitter of each UxNB with $G=G_w\times G_l$ antenna elements in which $G_w$ and $G_l$ show the number of antenna elements along the width and length of the array, respectively. We also assume a UPA at the receiver of the HAPS with a large number of $S=S_w\times S_l$ antenna elements in which $S_w$ and $S_l$ show the number of antenna elements along the width and length of the array, respectively. The channel between the transmit antenna element $g=(g_w,g_l)$ of UxNB $m$ and the receiver antenna element $s=(s_w,s_l)$ of the HAPS, which is in the sub-THz frequency band, is assumed to be LoS, and is equal to
\begin{equation}
g_{mgs}=\gamma_{m}b_{mg}^*c_{ms},
\end{equation}
where
\begin{equation}\label{bmg}
\begin{split}
b_{mg}=\exp(j2\pi(\frac{d_{m}}{\lambda_\mathsf{THz}}))\times \exp(j2\pi(\frac{d_\mathsf{THz,d,w}(g_w-1)\sin\Theta_{m}\cos\Phi_{m}}{\lambda_\mathsf{THz}}))\\ \times\exp(j2\pi(\frac{d_\mathsf{THz,d,l}(g_l-1)\sin\Theta_{m}\sin\Phi_{m}}{\lambda_\mathsf{THz}}))
\end{split}
\end{equation}
indicates the phase shift of the transmitted signal from antenna element $g$ of UxNB $m$, and where
\begin{equation}\label{cms}
c_{ms}=\exp(j2\pi(\frac{d_\mathsf{THz,h,w}(s_w-1)\sin\Theta_{m}\cos\Phi_{m}}{\lambda_\mathsf{THz}}))\times\exp(j2\pi(\frac{d_\mathsf{THz,h,l}(s_l-1)\sin\Theta_{m}\sin\Phi_{m}}{\lambda_\mathsf{THz}}))
\end{equation}
indicates the phase shift of the received signal from user $m$ at antenna element $s$ of the HAPS. In these equations, $d_{m}$ indicates the distance between the reference antenna element of UxNB $m$ and the reference antenna element of HAPS; $d_\mathsf{THz,d,w}=\frac{\lambda_\mathsf{THz}}{2}$ ($d_\mathsf{THz,d,l}=\frac{\lambda_\mathsf{THz}}{2}$) is the element spacing along the width (length) of the transmit antenna array at each UxNB in sub-THz frequency band $f_\mathsf{THz}$; $d_\mathsf{THz,h,w}=\frac{\lambda_\mathsf{THz}}{2}$ ($d_\mathsf{THz,h,l}=\frac{\lambda_\mathsf{THz}}{2}$) is the element spacing along the width (length) of the receiver antenna array at the HAPS in the sub-THz frequency band $f_\mathsf{THz}$; and $\lambda_\mathsf{THz}=\frac{C}{f_\mathsf{THz}}$ is the wavelength. Also, $\Theta_{m}$ and $\Phi_{m}$ show the elevation and azimuth angles of the transmitted signal from UxNB $m$ at the HAPS.
It should be noted that $\bold{b}_{m}=[b_{mg}]_{1\times G}$ creates the steering vector of the transmit antenna array of UxNB $m$, and $\bold{c}_{m}=[c_{ms}]_{1\times S}$ creates the steering vector of the receive antenna array of the HAPS transmitted from UxNB $m$.
$\gamma_m^2$ shows the path loss between UxNB $m$ and the HAPS. The path loss for sub-THz band is given by $\gamma_{m}^2=
\rho_m^2 \tau_m=\gamma_0 d_{m}^{-2} \tau_{m}$ in which $\rho_m^2$ is the free space path loss, $\gamma_0=(\frac{4\pi f_\mathsf{THz}}{C})^{-2}$
is the channel gain at the reference distance $d_{m}=1~\mathrm{m}$,
and $\tau_{m}=10^{-\kappa h_m^{e}/10}$ is the transmittance of the medium following the Beer-Lambert law, in which $\kappa$ (in $\mathrm{dB/km}$) is the absorption coefficient of the medium and is a function of frequency and altitude \cite{Petrov_SINR_THz}. Also, $h_m^{e}=\frac{h^{e}}{\sin \Theta_m}=\frac{h^{e}d_m}{h_{\mathrm{HAPS}}-h_{d,m}}$ is the effective height of the medium for UxNB $m$, in which $h^{e}$ indicates the effective height for a UxNB located at the nadir of the HAPS \cite{ITU_series2019attenuation}.
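A numerical sketch of the sub-THz backhaul gain combines the free-space term with the Beer-Lambert transmittance as follows. The absorption coefficient and effective medium height below are placeholders for illustration, not values taken from this paper.

```python
import numpy as np

def thz_backhaul_gain(d_m, h_haps, h_uav, f_thz=140e9,
                      k_abs_db_per_km=1.0, h_eff_m=2000.0):
    """Sub-THz channel power gain gamma_m^2 = gamma_0 * d_m^(-2) * tau_m.

    d_m: UxNB-to-HAPS distance (m); h_haps, h_uav: altitudes (m).
    k_abs_db_per_km: medium absorption coefficient (dB/km, placeholder);
    h_eff_m: effective height of the absorbing medium at nadir (m, placeholder).
    """
    C = 3e8
    gamma0 = (4.0 * np.pi * f_thz / C) ** -2                      # gain at the 1 m reference
    h_e_slant = h_eff_m * d_m / (h_haps - h_uav)                  # slant effective height (m)
    tau = 10.0 ** (-k_abs_db_per_km * (h_e_slant / 1e3) / 10.0)   # Beer-Lambert transmittance
    return gamma0 * d_m ** -2.0 * tau

# At nadir (d_m = h_haps - h_uav) the slant height reduces to h_eff_m;
# slanted links suffer both more spreading loss and more absorption:
g_nadir = thz_backhaul_gain(19.9e3, 20e3, 100.0)
g_slant = thz_backhaul_gain(25.0e3, 20e3, 100.0)
```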
It should be noted that the part of the sub-THz signal absorbed by the medium due to molecular absorption (i.e., a fraction $1-\tau_{m}$ of the transmitted signal from UxNB $m$) will be re-emitted by the molecules with some delay, and hence we treat this re-emitted signal as re-emission interference in our rate derivations \cite{Saad_thz}.
We assume that this delayed re-emitted signal has a random phase $\exp(j\omega)$, in which $\omega$ is uniformly distributed over $[0,2\pi)$, i.e., $\omega\sim U(0,2\pi)$.
We consider independent AWGN with the distribution $CN(0,\sigma_{H}^{2})$ at all antenna elements of the HAPS.
\section{Proposed Transceiver Scheme at UxNBs and HAPS}
\begin{figure*}[!t]
\hspace{-1cm}
\centering
\includegraphics[width=\textwidth]{Transceiver_signal.eps}
\caption{The proposed transceiver scheme at UxNB $m$.}\label{transc_signal}
\end{figure*}
We can see the proposed transceiver scheme at UxNB $m$ in Fig. \ref{transc_signal}. At the first time slot of the proposed scheme, each user transmits its message to the UxNBs. The transmitted signal by user $k$ is shown by $\sqrt{P_k}s_k$ in which $P_k$ (for $k\in\{1,2,...,K\}$) indicates the maximum transmit power at each user $k$, and $s_k$ ($E\{|s_{k}|^{2}\}=1$) is the transmitted symbol from user $k$. The received signal at the antenna element $n$ of UxNB $m$'s UPA equals $y_{mn}=\sum_{k=1}^{K}h_{kmn}\sqrt{P_k}s_k+z_{m}$ in which $z_{m}$ is the AWGN noise at the receiver of UxNB $m$.
After receiving $y_{mn}$ at the antenna element $n$ of UxNB $m$, the low noise amplifier (LNA) block amplifies this signal, and then the radio frequency (RF) chains downconvert the signal to a baseband one and convert the analog signal to a digital signal.
Next, at the digital baseband beamforming block for each user and according to the estimated CSI for the channel between user $k$ and antenna element $n$ of UxNB $m$ (i.e., $h_{kmn}$), we perform match-filtering such that $y_{kmn}^{\mathsf{MF}}=y_{mn}\times \frac{h_{kmn}^*}{|h_{kmn}|}$, and we combine these match-filtered signals to arrive at $y_{km}^{\mathsf{COMB}}=\sum_{n=1}^Ny_{kmn}^{\mathsf{MF}}$ \cite{Omid_inspired}. It should be noted that $x^*$ indicates the conjugate of $x$. Then, we allocate the power $P_{km}$ for each user so that we must have $\sum_{k=1}^KP_{km}\leq P_m$ for each UxNB $m$, in which $P_m$ shows the total power at UxNB $m$. We now need to normalize the signal for each user before power allocation such that $y_{km}^{\mathsf{NORM}}=\frac{y_{km}^{\mathsf{COMB}}}{|y_{km}^{\mathsf{COMB}}|}$ in which $|x|$ shows the absolute value of $x$. Therefore, the signal for each user $k$ at UxNB $m$ after power normalization and power allocation will be $y_{km}=\sqrt{P_{km}}y_{km}^{\mathsf{NORM}}$. In order to transmit these $K$ signals to the HAPS, we allocate orthogonal frequency RBs for each of them to avoid interference among the filtered signals of different users.
It should be noted that the same frequency RBs are allocated for each user at different UxNBs, which means that we need $K$ RBs at the second time slot of the proposed scheme in total. These RBs can be easily provided at the sub-THz frequency band. The digital signal for users is converted to an analog one and upconverted to the sub-THz frequency band utilizing an RF chain. In order to transmit signals from each UxNB to the HAPS, we utilize only one RF chain because we apply fully analog beamforming.
We perform the analog beamforming with phase shifters (PSs) to direct the transmitted signal from each UxNB toward the HAPS. This is done by multiplying the signal by $b_{mg}$ in (\ref{bmg}) for each transmit antenna element $g$ of each UxNB $m$. It should be noted that the transmitted signal for user $k$ from antenna element $g$ of UxNB $m$ equals $b_{mg}y_{km}$. \par
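The per-user processing at one UxNB (match filtering, combining over antennas, normalization, and power allocation) can be sketched as follows. The random channels and the equal power split across users are placeholders; the latter stands in for the optimized $P_{km}$ of Section IV.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 16                     # users and receive antenna elements at one UxNB
P_m = 1.0                        # total power budget of the UxNB

# Illustrative user-to-UxNB channels h_kmn and unit-power user symbols s_k
h = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2.0)
s = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=K))
noise = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = h.T @ s + noise              # received signal y_mn at the N antennas

# Match filter per user and combine: y_km^COMB = sum_n y_mn * h*_kmn / |h_kmn|
y_comb = np.array([np.sum(y * np.conj(h[k]) / np.abs(h[k])) for k in range(K)])

# Normalize each user's combined signal, then allocate powers with sum_k P_km <= P_m
P_km = np.full(K, P_m / K)       # placeholder equal split
y_km = np.sqrt(P_km) * y_comb / np.abs(y_comb)
```

Each $y_{km}$ is then carried on its own sub-THz RB after the analog beamforming $b_{mg}y_{km}$.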
\begin{figure*}[!t]
\hspace{-1cm}
\centering
\includegraphics[width=\textwidth]{Receiver.eps}
\caption{The proposed receiver scheme at HAPS for decoding of the message of users.}\label{receiver_signal}
\end{figure*}
The transmitted signal from each UxNB $m$ will be received at each antenna element $s$ of the HAPS after passing through the channel between them (i.e., $g_{mgs}$). Hence, the received signal at RB $k$ and antenna element $s$ of the HAPS is given by
\begin{equation}
\begin{split}
y_{s}^k&=\sum_{m=1}^M \sum_{g=1}^G g_{mgs}b_{mg}y_{km}+z_H\\&=\sum_{m=1}^M \sum_{g=1}^G c_{ms}\gamma_{m}b_{mg}b_{mg}^*y_{km}+z_H=G\sum_{m=1}^M c_{ms}\gamma_{m}y_{km}+z_H
\end{split}
\end{equation}
in which $z_H$ is the AWGN noise at each receive antenna element of the HAPS. Superscript $k$ shows the received signal at RB $k$.
We can see the proposed receiver scheme at the HAPS for decoding the messages of the users in Fig. \ref{receiver_signal}. As one can see, first the LNA blocks amplify the received signals.
In the proposed receiver scheme at the HAPS, we perform analog beamforming with PSs to align the received signals from each UxNB $m$ at the receive antenna elements of the HAPS. To do this, we multiply the signal $y_{s}^k$ by the conjugate of the steering vector of the receive antenna elements at the HAPS for each UxNB $m$, i.e., $\bold{c}_{m}^*$ in (\ref{cms}), and then combine these signals as follows:
\begin{equation}\label{yk}
y^k=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^* y_{s}^k.
\end{equation}
Next, an RF chain downconverts the signal to a baseband one, and converts the analog signal to a digital signal. In order to receive and decode signals of all UxNBs at the HAPS, we utilize only one RF chain since we apply fully analog beamforming.
As mentioned above, we allocate orthogonal RBs for each user on the UxNB-to-HAPS sub-THz links, and for this reason the received signal of each user can be obtained separately at the HAPS. Finally, utilizing the signal $y^k$, we demodulate and decode the symbol of each user $k$.\par
Now, in the following proposition, we derive the $\mathsf{SINR}$ of each user utilizing the signal $y^k$.
\begin{proposition}\label{propos_sinr}
The achievable rate of the user $k$ in the proposed aerial cell-free scheme with HAPS-assisted backhauling in the THz band is given by
\begin{equation}
R_k=\log_2(1+\mathsf{SINR}_k)
\end{equation}
in which
\small
\begin{equation} \label{SINR}
\mathsf{SINR}_k=\frac{MG^2NSP_{k}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})^2}{MG^2\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+MG^2\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+\sigma_{H}^2(\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2)}.
\end{equation}
\normalsize
\end{proposition}
\begin{proof}
Please see Appendix \ref{proof_sinr}.
\end{proof}
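A direct transcription of the SINR expression in (\ref{SINR}) is given below; the numerical inputs in the example are arbitrary illustrative values, not a calibrated scenario.

```python
import numpy as np

def sinr_user(k, P, P_u, beta, gamma, rho, G, N, S, sigma2, sigma2_H):
    """SINR_k of Proposition 1, eq. (10).

    P[k, m]: power allocated to user k at UxNB m; P_u[k]: transmit power of
    user k; beta[k, m]: large-scale amplitude gain of the access link;
    gamma[m], rho[m]: backhaul amplitude gains with and without absorption.
    """
    M = len(gamma)
    num = M * G**2 * N * S * P_u[k] * np.sum(gamma * np.sqrt(P[k]) * beta[k]) ** 2
    rx_pow = (beta ** 2).T @ P_u                    # sum_k' beta_{k'm}^2 P_k', one entry per m
    den = (M * G**2 * np.sum(rho**2 * P[k] * rx_pow)        # re-emission interference
           + M * G**2 * np.sum(rho**2 * P[k]) * sigma2      # forwarded UxNB noise
           + sigma2_H * (np.sum(rx_pow) + M * sigma2))      # HAPS receiver noise
    return num / den

# Example: two UxNBs, two users, illustrative gains
P = np.full((2, 2), 0.5)
P_u = np.array([0.1, 0.2])
beta = np.full((2, 2), 1e-3)
gamma = np.full(2, 1e-6)
rho = np.full(2, 1e-6)
sinr0 = sinr_user(0, P, P_u, beta, gamma, rho, G=4, N=8, S=64,
                  sigma2=1e-12, sigma2_H=1e-12)
```

Since $S$ appears only in the numerator, enlarging the HAPS array scales the SINR linearly, which is the mechanism by which the HAPS compensates for the sub-THz path loss.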
In terms of 5G terminology, the UxNB nodes in the system model in Fig. \ref{system-model} have the same functionality as distributed units (DUs), and the HAPS has the same functionality as a centralized unit (CU) \cite{ahmadi20195g}. In the study item for the new radio access technology, different functional splits between the CU and the DU have been studied \cite{cu-du-split}. Split option 7 itself has three variants, depending on which aspects of the physical layer processing are performed in the DU and the CU. Indeed, in option 7, the lower physical layer functions and RF circuits are located in the DU(s), and the upper protocol layers, including the upper physical layer functions, reside in the CU. Our proposed transceiver scheme at each UxNB in Fig. \ref{transc_signal} lies between the option 7-2 and option 7-3 functional splits between the CU and DU. This is because we do not demodulate and decode the received signals at the DU; we simply apply match-filtering and power allocation to the received signals. Then, we upconvert the signals to the sub-THz band and transmit them to the CU (HAPS) in orthogonal RBs. It should be noted that all of these functions happen in layer 1 (the physical layer). We should add that, in general, the links between the DUs and the CU are assumed to be perfect when connected with fiber links. However, in our proposed transceiver scheme for the uplink, each DU has a transmission part that prepares and beamforms the signals and forwards them to the CU wirelessly. This is one of the main differences between our work and the terrestrial cell-free schemes in the literature.
\section{Optimization problem}
In this section, we formulate an optimization problem that maximizes the minimum SINR of the users. We find the optimum allocated powers for the users at each UxNB (i.e., $\bold{P}=[P_{km}]_{K\times M}$)
and the optimum locations of the UxNBs (i.e., $\bold{x}=[x_{d,m}]_{1\times M}$ and $\bold{y}=[y_{d,m}]_{1\times M}$).
We can write the optimization problem as follows:
\begin{alignat}{2}
\text{(P):~~~~ }&\underset{\bold{P},\bold{x},\bold{y}}{\text{max}}~~~ && \underset{k}{\text{min}} ~~\mathsf{SINR_k}\label{eq:obj_fun}\\
&\text{s.t.} & & \sum_{k=1}^KP_{km}\leq P_m, ~~\forall m, \label{eq:constraint-sum}\\
&&& P_{km}>0, ~~\forall k, \forall m, \label{eq:constraint+}\\&&& x_{\text{min}}\leq x_{d,m}\leq x_{\text{max}},~y_{\text{min}}\leq y_{d,m}\leq y_{\text{max}},~\forall m,\label{eq:constraint-range-p}
\end{alignat}
where constraint (\ref{eq:constraint-sum}) imposes the maximum total power at each UxNB $m$, and constraint (\ref{eq:constraint-range-p}) indicates the horizontal range of the UxNBs' flight. Problem (P) is a non-convex optimization problem because its objective function is not concave with respect to the variables $\bold{P}$, $\bold{x}$ and $\bold{y}$. In order to solve this problem, based on the BCD method \cite{razaviyayn2013unified}, we first split the optimization problem into two sub-problems. In the first sub-problem, we solve problem (P) for the case where the locations of the UxNBs (i.e., $\bold{x}$ and $\bold{y}$) are given, and in the second sub-problem, the power allocation coefficients $\bold{P}$ are assumed to be given. It should be noted that the power allocation sub-problem also corresponds to the case where we have no control over the locations of the UxNBs, i.e., they are non-dedicated aerial users performing their own missions that we additionally utilize as APs of the cell-free scheme. Both the power allocation and deployment sub-problems are still non-convex, and we solve them by means of the bisection and SCA methods in the following two sub-sections, respectively.
\subsection{Power Allocation Sub-Problem}
If we fix the locations of the UxNBs in (P), we will have the power allocation sub-problem as follows:
\begin{alignat}{2}
\text{(P1):~~~~ }&\underset{\bold{P}}{\text{max}}~~~ && \underset{k}{\text{min}} ~~\mathsf{SINR_k}\label{eq:obj_fun_P}\\
&\text{s.t.} & & \sum_{k=1}^KP_{km}\leq P_m, ~~\forall m, \label{eq:constraint-sum_P}\\
&&& P_{km}>0, ~~\forall k, \forall m. \label{eq:constraint+_P}
\end{alignat}
This problem is still non-convex with respect to power allocation coefficients $\bold{P}$.
\begin{proposition}\label{propos_quasi}
The optimization problem (P1) is a quasi-concave optimization problem.
\end{proposition}
\begin{proof}
Please see Appendix \ref{proof_quasi}.
\end{proof}
Since problem (P1) is a quasi-concave optimization problem, its optimal solution can be found efficiently by the bisection
method \cite{boyd2020disciplined}. To do this, we rewrite problem (P1) by introducing a slack variable $\eta$ as follows:
\begin{alignat}{2}
\text{(P2):~~~~ }&\underset{\bold{P},\eta}{\text{max}}~~~ \eta && \label{eq:obj_fun_P2}\\
&\text{s.t.} & & \mathsf{SINR_k}\geq \eta, \forall k \\&&&\sum_{k=1}^KP_{km}\leq P_m, ~~\forall m, \label{eq:constraint-sum_P2}\\
&&& P_{km}>0, \eta >0, ~~\forall k, \forall m. \label{eq:constraint+_P2}
\end{alignat}
It can be easily proven that (P2) is equivalent to (P1). To see this, note that at the optimal solution of (P2) we must have $\eta=\min (\mathsf{SINR_1},...,\mathsf{SINR_K})=\mathsf{SINR_1}=...=\mathsf{SINR_K}$, which is the same as the optimal solution of (P1); hence we can continue with (P2).
By performing the variable change $\bold{T}=[P_{km}^2]_{K\times M}$, for any given value of $\eta$, problem (P2) will be a convex feasibility
problem that can be solved optimally by convex optimization techniques, such as the interior-point method.
This bisection method is summarized in Algorithm 1.
\begin{algorithm}
\caption{Bisection method for solving the power allocation problem (P1) with the given locations of UxNBs.}\label{alg1}
\begin{algorithmic}[1]
\State Initialize the values of $\eta_{\mathsf{min}}$ and $\eta_{\mathsf{max}}$,
where $\eta_{\mathsf{min}}$ and $\eta_{\mathsf{max}}$ show a range for the minimum $\mathsf{SINR}$ of users. Choose a tolerance $\epsilon>0$.
\Repeat
\State Set $\eta=\frac{\eta_{\mathsf{min}} +\eta_{\mathsf{max}}}{2}$ and solve the convex feasibility
problem (P2) by the interior-point method.
\State If the problem (P2) is feasible, set $\eta_{\mathsf{min}}=\eta$; else set $\eta_{\mathsf{max}}=\eta$.
\Until {$\eta_{\mathsf{max}} -\eta_{\mathsf{min}}<\epsilon$.}
\end{algorithmic}
\end{algorithm}
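The bisection loop of Algorithm 1 can be sketched as follows. This is a minimal illustration in which a toy closed-form feasibility oracle (a single AP, user gains $g_k$, and a total power budget) stands in for solving the convex feasibility problem (P2) with an interior-point method; the oracle, the gains, and the budget are illustrative assumptions, not part of the system model.

```python
def bisect_max_min(feasible, eta_min, eta_max, eps=1e-6):
    """Bisection on the min-SINR target eta (skeleton of Algorithm 1).

    `feasible(eta)` stands in for solving the convex feasibility
    problem (P2) with an interior-point method at the given eta."""
    while eta_max - eta_min > eps:
        eta = 0.5 * (eta_min + eta_max)
        if feasible(eta):
            eta_min = eta   # eta is achievable: raise the lower bracket
        else:
            eta_max = eta   # eta is infeasible: lower the upper bracket
    return eta_min

# Toy oracle: a target eta is feasible iff the required powers eta/g_k
# fit within the total budget P_total (hypothetical gains and budget).
g = [1.0, 2.0, 4.0]
P_total = 7.0
feasible = lambda eta: sum(eta / gk for gk in g) <= P_total
eta_star = bisect_max_min(feasible, 0.0, 100.0)
# Analytic max-min optimum for this toy: P_total / sum(1/g_k) = 4.0
```

The same loop applies unchanged to (P2); only the feasibility oracle changes.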
\subsection{UxNBs Placement Sub-Problem}
For given allocated powers in (P), we have the following placement sub-problem:
\begin{alignat}{2}
\text{(P3):~~~~ }&\underset{\bold{x},\bold{y}}{\text{max}}~~~ && \underset{k}{\text{min}} ~~\mathsf{SINR_k}\label{eq:obj_fun_P3}\\
&\text{s.t.} & & x_{\text{min}}\leq x_{d,m}\leq x_{\text{max}},~y_{\text{min}}\leq y_{d,m}\leq y_{\text{max}},~\forall m.\label{eq:constraint-range}
\end{alignat}
This problem is still non-convex with respect to variables $\bold{x}$ and $\bold{y}$. By substituting the SINR formula from (\ref{SINR}) and introducing three slack variables $\eta$, $\bold{t}=[t_{k}]_{1\times K}$, and $\boldsymbol{\beta}=[\beta_{km}]_{K\times M}$, (P3) can be rewritten as follows:
\begin{alignat}{2}
&&&\text{(P4):~~~~ }\underset{\bold{x},\bold{y},\eta,\bold{t},\boldsymbol{\beta}}{\text{max}}~~~ \eta \label{eq:obj_fun_P4}\\
&&&\text{s.t.} ~~~~ \frac{\sqrt{MG^2NSP_{k}}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})}{t_k}\geq \sqrt{\eta}, \forall k \label{sinr_constraint-p4}\\&&& t_k \geq \sqrt{MG^2\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+MG^2\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+\sigma_{H}^2(\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2)}, \forall k \label {t_con-p4} \\&&&
\beta_{km}^{-1} \geq 10^{\frac{P_{km}^{\mathsf{LoS}}\eta_{\mathsf{LoS}}^{\mathsf{dB}}+P_{km}^{\mathsf{NLoS}}\eta_{\mathsf{NLoS}}^{\mathsf{dB}}}{20}} \beta_0^{-\frac{1}{2}}\sqrt{(x_{u,k}-x_{d,m})^2+(y_{u,k}-y_{d,m})^2+(h_{d,m})^2}, \forall k, \forall m \label{beta_co-p4}
\\
&&& x_{\text{min}}\leq x_{d,m}\leq x_{\text{max}},~y_{\text{min}}\leq y_{d,m}\leq y_{\text{max}},~\forall m \label{eq:constraint-range-p4}\\&&&
\eta > 0, t_k > 0, \beta_{km} > 0, \forall k, \forall m.
\label{eq:constraint-pos-p4}
\end{alignat}
It can be easily proven that (P4) is equivalent to (P3). As we can see, constraint (\ref{t_con-p4}) requires a norm function of the variable $\boldsymbol{\beta}$ to be less than an affine function of the variable $\bold{t}$, and hence it defines a convex set. However, this problem is still non-convex due to constraints (\ref{sinr_constraint-p4}) and (\ref{beta_co-p4}). To address constraint (\ref{sinr_constraint-p4}), we perform the variable change $\eta=\zeta^4$. Hence, optimization problem (P4) can be rewritten as follows:
\begin{alignat}{2}
\text{(P5):~~~~ }&\underset{\bold{x},\bold{y},\zeta,\bold{t},\boldsymbol{\beta}}{\text{max}} ~~~~ && \zeta \label{eq:obj_fun_P5}\\
&\text{s.t.} && \frac{1}{t_k}\geq \frac{\zeta^2}{\sqrt{MG^2NSP_{k}}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})}, \forall k \label{sinr_constraint_zeta}\\&&& (\ref{t_con-p4}), (\ref{beta_co-p4}), (\ref{eq:constraint-range-p4}), (\ref{eq:constraint-pos-p4}).
\label{eq:constraints-p4}
\end{alignat}
One can see that the right-hand side of (\ref{sinr_constraint_zeta}) is a quadratic function of the variable $\zeta$ over an affine function of the variable $\boldsymbol{\beta}$, which is known to be a convex function when its denominator is positive \cite{boyd2004convex}. Also, the right-hand side of (\ref{beta_co-p4}) is a norm function of the variables $\bold{x}$ and $\bold{y}$, and so it is convex. However, the left-hand sides of constraints (\ref{beta_co-p4}) and (\ref{sinr_constraint_zeta}) (i.e., $\frac{1}{\beta_{km}}$ and $\frac{1}{t_{k}}$) are not concave functions. In order to manage these non-concave terms, we propose an iterative scheme based on the SCA method \cite{Traj_Omid}. In this method, the original non-convex problem is optimized by iteratively solving convex approximations of the original problem around an initial point until convergence. These approximations must lead to non-decreasing values of the objective function as the iteration number $l$ increases in order to guarantee the convergence of the SCA method. We know that the first-order Taylor series expansion of a convex function $f(z)$ provides a global lower-bound for that function, i.e., $f(z)\geq f(z_0)+\nabla f(z_0)^T(z-z_0)$.
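The global lower-bound property invoked here can be checked numerically for the convex function $f(t)=1/t$ on $t>0$, which is exactly the form of the left-hand-side terms being linearized (the expansion point and sample points below are arbitrary illustrative values):

```python
# First-order Taylor expansion of the convex function f(t) = 1/t
# around t0 lower-bounds f globally on t > 0:
#   1/t >= -(t - t0)/t0**2 + 1/t0.
t0 = 2.0
taylor = lambda t: -(t - t0) / t0**2 + 1.0 / t0

for t in [0.5, 1.0, 2.0, 3.0, 10.0]:
    assert 1.0 / t >= taylor(t) - 1e-12   # bound holds everywhere on t > 0

# The bound is tight at the expansion point t = t0, which is why the
# surrogate problem matches the original at the previous iterate.
assert abs(1.0 / t0 - taylor(t0)) < 1e-12
```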
Given that the left-hand side of the constraints (\ref{beta_co-p4}) and (\ref{sinr_constraint_zeta}) are in the form of a convex function, we can approximate them at each iteration $l+1$ by their first-order Taylor series expansion around the solution of the previous iteration $l$. Therefore, at the iteration ($l+1$), we replace the left-hand side of the constraints (\ref{beta_co-p4}) and (\ref{sinr_constraint_zeta}) with the expressions
$-\frac{1}{(\beta_{km}^l)^2}(\beta_{km}-\beta_{km}^{l})+\frac{1}{\beta_{km}^l}$ and $ -\frac{1}{(t_{k}^l)^2}(t_{k}-t_{k}^{l})+\frac{1}{t_{k}^l}$, respectively. It is clear that these functions are affine functions with respect to variables $\bold{t}$ and $\boldsymbol{\beta}$. Now, by replacing these approximations in (P5), the optimization problem at the iteration $l+1$ of the SCA method around the initial points $\bold{t}^{l}$ and $\boldsymbol{\beta^{l}}$ is given by
\small
\begin{alignat}{2}
&&&\text{(P6):~~~~ }\underset{\bold{x},\bold{y},\zeta,\bold{t},\boldsymbol{\beta}}{\text{max}}~~~ \zeta \label{eq:obj_fun_P6}\\
&&&\text{s.t.} ~~~~ -\frac{1}{(t_{k}^l)^2}(t_{k}-t_{k}^{l})+\frac{1}{t_{k}^l}\geq \frac{\zeta^2}{\sqrt{MG^2NSP_{k}}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})}, \forall k \label{sinr_constraint_zeta_2}\\&&&
-\frac{1}{(\beta_{km}^l)^2}(\beta_{km}-\beta_{km}^{l})+\frac{1}{\beta_{km}^l} \geq 10^{\frac{P_{km}^{\mathsf{LoS}}\eta_{\mathsf{LoS}}^{\mathsf{dB}}+P_{km}^{\mathsf{NLoS}}\eta_{\mathsf{NLoS}}^{\mathsf{dB}}}{20}} \beta_0^{-\frac{1}{2}}\sqrt{(x_{u,k}-x_{d,m})^2+(y_{u,k}-y_{d,m})^2+(h_{d,m})^2}, \forall k, \forall m \label{beta_co}
\\
&&& (\ref{t_con-p4}), (\ref{eq:constraint-range-p4}), (\ref{eq:constraint-pos-p4}).
\label{eq:constraint-pos}
\end{alignat}
\normalsize
This optimization problem is convex since all of the constraints are in the form of a convex function of the variables less than a concave function of the variables. This problem can be efficiently solved by convex optimization techniques, such as the interior-point method. The optimum value of the objective function of (P3) can then be recovered from $\eta=\zeta^{4}$. The proposed iterative method for solving (P3) is summarized in Algorithm 2. Now, since the objective function of (P6) is non-decreasing over the iterations and is globally upper-bounded by the optimal value of (P3), the proposed sub-optimal algorithm is guaranteed to converge. Also, since each iteration of Algorithm 2 only requires
solving a convex problem, the overall complexity
of Algorithm 2 is polynomial in the worst case.
\begin{algorithm}
\caption{Iterative SCA method for solving the placement optimization problem (P3) with the given power allocation.}\label{alg2}
\begin{algorithmic}[1]
\State Initialize the locations of UxNBs, i.e., $\mathbf{x}^{l}$ and $\mathbf{y}^{l}$, and let $l=0$.
\Repeat
\State Solve the convex problem (P6) by the interior-point method and find the optimum values for $\mathbf{x}$ and $\mathbf{y}$.
\State Update $l=l+1$, and set $\mathbf{x}^l=\mathbf{x}$ and $\mathbf{y}^l=\mathbf{y}$.
\Until {convergence or a maximum number of iterations is reached.}
\end{algorithmic}
\end{algorithm}
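The monotone behavior that underpins the convergence argument for Algorithm 2 can be illustrated on a one-dimensional toy instance of the same SCA construction: maximize $\eta$ subject to $1/x \geq \eta$ and $x \geq a$, with $1/x$ replaced at each iteration by its first-order Taylor lower bound around the previous iterate. The value of $a$, the initial point, and the closed-form surrogate optimum are illustrative assumptions; this is a sketch of the method, not the solver for (P6).

```python
def sca_max_inverse(a, x0, iters=20):
    """SCA sketch for:  max_{x >= a}  eta  s.t.  1/x >= eta.

    Each iteration replaces the non-concave 1/x by its affine Taylor
    lower bound around x_l; the resulting linear surrogate is maximized
    in closed form (the bound decreases in x, so its optimum is x = a)."""
    x_l, etas = x0, []
    for _ in range(iters):
        x_new = a  # closed-form optimum of the linearized surrogate
        eta = -(x_new - x_l) / x_l**2 + 1.0 / x_l  # surrogate objective
        etas.append(eta)
        x_l = x_new  # expand around the new point at the next iteration
    return etas

etas = sca_max_inverse(a=0.5, x0=2.0)
# The surrogate optimum is non-decreasing over iterations and converges
# to the true optimum 1/a = 2 of the original non-convex problem.
```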
\subsection{Iterative Algorithm for Joint Power Allocation and Placements of UxNBs}
In this subsection, we apply the BCD method \cite{razaviyayn2013unified} to solve the original problem (P), and find optimized values for power allocation and location variables. For this, we solve the placement of UxNBs and power allocation sub-problems alternately. At each iteration, after initialization of power and location, we optimize the location variables utilizing Algorithm 2. Then, using these locations, we find the optimum power allocation with the aid of Algorithm 1. These powers are used as initial values in the next iterations. The associated algorithm is summarized in Algorithm 3.
It should be noted that the optimal value of the original problem (P) is a global upper-bound for the optimal values of the sub-problems (P1) and (P3), and hence it is also an upper-bound for the optimal value of Algorithm 3. Also, since Algorithm 3 performs Algorithm 1 and Algorithm 2 alternately, the objective value of Algorithm 3 is non-decreasing with iteration number $l$. As a result, the proposed sub-optimal method at Algorithm 3 is guaranteed to converge.
It should also be noted that since each iteration of Algorithm 3 only requires
solving convex problems, the overall complexity
of Algorithm 3 is polynomial in the worst case.
\begin{algorithm}
\caption{Iterative algorithm based on BCD to jointly find the power allocation and placement of UxNBs in problem (P).}\label{alg3}
\begin{algorithmic}[1]
\State Initialize power allocation variables $\bold{P}^{l}$, and UxNB locations $\bold{x}^{l}$ and $\bold{y}^{l}$, and let $l=0$.
\Repeat
\State Solve the non-convex placement problem (P6) with the given powers $\bold{P}^{l}$ and initial locations $\bold{x}^{l}$ and $\bold{y}^{l}$ by the SCA method in Algorithm 2 and find the optimum values for $\mathbf{x}$ and $\mathbf{y}$.
\State Solve the quasi-concave problem (P2) with the given UxNB locations $\bold{x}$ and $\bold{y}$ by the bisection method in Algorithm 1, and find the optimal values for $\bold{P}$.
\State Update $l=l+1$; and set $\mathbf{x}^l=\mathbf{x}$, $\mathbf{y}^l=\mathbf{y}$, and $\mathbf{P}^l=\mathbf{P}$.
\Until {convergence or a maximum number of iterations is reached.}
\end{algorithmic}
\end{algorithm}
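The alternation in Algorithm 3 can be sketched with a smooth toy objective in which each block update is solved in closed form; the quadratic objective and the closed-form block maximizers below are illustrative stand-ins for Algorithms 1 and 2, chosen only to exhibit the non-decreasing objective values that drive the convergence argument.

```python
def bcd_toy(p0=0.0, x0=0.0, iters=50):
    """BCD sketch (skeleton of Algorithm 3): alternate exact maximization
    over a 'power' block p and a 'placement' block x of a concave toy
    objective f(p, x); each closed-form step stands in for one call to
    Algorithm 1 or Algorithm 2."""
    f = lambda p, x: -((p - x)**2 + (x - 3)**2 + (p - 1)**2)
    p, x, vals = p0, x0, []
    for _ in range(iters):
        x = (p + 3) / 2   # placement block: argmax_x f(p, x) in closed form
        p = (x + 1) / 2   # power block:     argmax_p f(p, x) in closed form
        vals.append(f(p, x))
    return p, x, vals

p, x, vals = bcd_toy()
# Objective values are non-decreasing per iteration; the iterates
# converge to the fixed point (p, x) = (5/3, 7/3) of the two updates.
```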
\section{Numerical Results}
In this section, numerical results are provided in order to show the performance gain of the proposed scheme. The following default parameters are applied in the simulations unless different values are explicitly specified. For the carrier frequencies of the first and second hops, we assume $f_\mathsf{sub6}=2~\mathrm{GHz}$ and $f_\mathsf{THz}=120~\mathrm{GHz}$, respectively. The communication bandwidth is assumed to be $\mathsf{BW}=1~\mathrm{MHz}$, and the noise power spectral density is $N_0=-174~\mathrm{dBm/Hz}$. We assume that all users are uniformly distributed over a square area with a side length of $1000$ meters. Also, we assume that the HAPS is deployed above the center of this square area. The default values for the number of users, the number of UxNBs, the number of antenna elements in the receive UPA of each UxNB, the number of antenna elements in the transmit UPA of each UxNB, and the number of antenna elements in the receive UPA of the HAPS are $K=16$, $M=16$, $N=4$, $G=9$, and $S=400$, respectively.
We set the maximum transmit power at each user and each UxNB as $P_k=0.2~\mathrm{W},~\forall k$ and $P_m=25~\mathrm{dBm},~\forall m$, respectively. We assume that all users send their signals with their maximum power. Considering an urban area, the excessive path loss affecting the air-to-ground links in the LoS and NLoS cases is assumed to be $\eta_{\mathsf{LoS}}^{\mathsf{dB}}=1~\mathsf{dB}$ and $\eta_{\mathsf{NLoS}}^{\mathsf{dB}}=20~\mathsf{dB}$, respectively \cite{Irem}. Also, for the urban area, we have $A=9.61$ and $B=0.16$. The absorption coefficient of the sub-THz medium for $f_\mathsf{THz}=120~\mathrm{GHz}$ is equal to $0.5~\mathrm{dB/km}$, and the effective height is given by $h^{e}=1.6~ \mathrm{km}$ \cite{grace2011broadband}. We assume an initial uniform square placement for the $M$ UAVs,
and a fixed flight height $h_{d,m}=120~\mathrm{m},~\forall m$ for all of the UAVs. In Algorithm 1, we set the initial values as $\eta_{\mathsf{min}}=0$ and $\eta_{\mathsf{max}}=1500$, and the tolerance as $\epsilon=0.01$. The HAPS altitude is set to $20~\mathrm{km}$.
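For reference, the noise power implied by these parameters follows from $\sigma^2~[\mathrm{dBm}] = N_0 + 10\log_{10}(\mathsf{BW})$; the short computation below only restates the listed values:

```python
import math

# Noise power implied by the stated simulation parameters
# (N0 = -174 dBm/Hz, BW = 1 MHz).
N0_dbm_per_hz = -174.0
bw_hz = 1e6

noise_dbm = N0_dbm_per_hz + 10.0 * math.log10(bw_hz)  # -114 dBm
noise_watt = 10.0 ** ((noise_dbm - 30.0) / 10.0)      # ~3.98e-15 W
```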
In the simulation figures, the proposed scheme, called the \textbf{aerial cell-free scheme}, is compared with two baseline schemes. \textbf{1) Aerial cellular scheme}, in which each terrestrial user is served by only one UxNB. In this baseline method, all of the other parameters and channel models are the same as in the proposed scheme. The backhauling of this baseline scheme, as in the proposed scheme, is performed through the HAPS in the sub-THz band. \textbf{2) Terrestrial cell-free scheme}, in which each terrestrial user is served by multiple terrestrial access points, and a perfect backhaul with fiber links is assumed to connect the APs and the CPU. In this baseline scheme, the Rayleigh fading model is considered for the channels between the users and the APs. Also, in order to have a fair comparison, the number of receive antenna elements at the terrestrial APs is assumed to be the same as the number of receive antenna elements at the UxNBs (i.e., $N$) in the aerial schemes. Matched filtering is applied at each AP in both baseline schemes to align the received signals.
Fig. \ref{R_vs_P_K16_M16_S400_100} shows the achievable minimum rate per user versus the total power of each UxNB ($P_1=...=P_M$) for the aerial and terrestrial BSs with cell-free and cellular schemes.
We can see that the proposed aerial cell-free scheme performs better than the aerial cellular scheme for both values of $S=100$ and $S=400$. Indeed, due to the severe intercell interference from the users in the neighboring cells in the aerial cellular scheme, our proposed scheme performs much better than this baseline scheme. Also in this figure, we can see that the rate of the terrestrial cell-free baseline scheme remains constant as the AP powers increase, since we considered a perfect backhaul for this scheme. Our proposed scheme performs much better than this baseline scheme as well. This is because in the terrestrial cell-free scheme, due to high path loss and shadowing, the links between a user and distant access points can be very weak, and hence operating in cell-free mode brings little benefit. However, in the proposed aerial cell-free scheme, since there is a strong LoS link between the users and the UxNBs, the signal of each user is received at multiple UxNBs, and so the cell-free scheme is beneficial for the proposed system model.
Fig. \ref{R_vs_M_K16_P25} shows the achievable minimum rate per user versus the number of UxNBs ($M$) for the aerial and terrestrial BSs with cell-free and cellular schemes.
We can see that the proposed aerial cell-free scheme outperforms the aerial cellular scheme for both values of $S=100$ and $S=400$.
Further, we can see that by increasing $M$, the performance improvement of the aerial schemes is much greater than that of the terrestrial scheme, owing to the higher probability of establishing LoS links between the UxNBs and the users for a larger $M$ in the aerial schemes. Finally, it is also shown that the superiority of the aerial cell-free scheme over the aerial cellular scheme increases with $M$, which is due to the higher intercell interference in the cellular scheme for a larger $M$.
\begin{figure}[t]
\centering
\begin{minipage}{.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_P_K16_M16_S400_100.eps}
\captionof{figure}{The achievable minimum rate per user versus the total power of each UxNB ($P_1=...=P_M$) for the aerial and terrestrial BSs with cell-free and cellular schemes. We set $K=16$, $M=16$, $N=4$, and $G=9$.}
\label{R_vs_P_K16_M16_S400_100}
\end{minipage}
\hspace{.01\linewidth}
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_M_K16_P25.eps}
\captionof{figure}{The achievable minimum rate per user versus the number of UxNBs ($M$) for the aerial and terrestrial BSs with cell-free and cellular schemes. We set $P_m=25~\mathrm{dBm},~\forall m$, $K=16$, $N=4$, and $G=9$.}
\label{R_vs_M_K16_P25}
\end{minipage}
\end{figure}
Fig. \ref{R_VS_k_M16_P25_S400} indicates the achievable minimum rate per user versus the number of users ($K$) for the aerial and terrestrial BSs with cell-free and cellular schemes.
We set the parameter values as $P_m=25~\mathrm{dBm},~\forall m$, $M=16$, $N=4$, and $G=9$.
As we can see, by increasing $K$, the performance of all schemes decreases. Also, we can see that the proposed aerial cell-free scheme performs better than both the aerial cellular and terrestrial cell-free baseline schemes due to the LoS links between the users and the UxNBs. Further, by increasing $S$, the performance of the aerial schemes improves, which means that with a larger $S$ we can serve more users for a given minimum rate per user.
Fig. \ref{R_vs_S_P25_M16_K16} indicates the achievable minimum rate per user versus the number of HAPS antenna elements ($S$). One can see that when the number of HAPS antenna elements is low, the performances of the aerial and terrestrial schemes are close. For example, when $S=16$, the terrestrial cell-free scheme performs better than the aerial cellular scheme, and its performance is comparable to that of the proposed aerial cell-free scheme. However, when the number of HAPS antenna elements is high, both aerial schemes have a significant performance gain over the terrestrial cell-free scheme. This figure shows that utilizing a HAPS as a CPU is useful when the enormous path loss between the UxNBs and the HAPS in the sub-THz band is compensated for by a high number of antenna elements at the HAPS.
\begin{figure}[t]
\centering
\begin{minipage}{.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_VS_k_M16_P25_S400.eps}
\captionof{figure}{The achievable minimum rate per user versus number of users ($K$) for the aerial and terrestrial BSs with cell-free and cellular schemes. We set $P_m=25~\mathrm{dBm},~\forall m$, $M=16$, $N=4$, and $G=9$.}
\label{R_VS_k_M16_P25_S400}
\end{minipage}
\hspace{.01\linewidth}
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_S_P25_M16_K16.eps}
\captionof{figure}{The achievable minimum rate per user versus number of HAPS antenna elements ($S$) for the cell-free and cellular schemes. We set $P_m=25~\mathrm{dBm},~\forall m$, $M=16$, $K=16$, $N=4$, and $G=4,9$.}
\label{R_vs_S_P25_M16_K16}
\end{minipage}
\end{figure}
Fig. \ref{CDF_K16_M16_S400} shows the CDF of the minimum rate per user ($R_1=...=R_K$) for the aerial and terrestrial BSs with cell-free and cellular schemes.
We can see that the proposed aerial cell-free scheme performs better than both the aerial cellular and terrestrial cell-free baseline schemes for both values of $M=8$ and $M=16$. We can also see that the aerial cellular scheme outperforms the terrestrial cell-free scheme, which is due to the LoS links established between the users and the UxNBs in the aerial networks. Further, we can see that the variance of the minimum achievable rate per user for the cell-free scheme is less than that of the cellular one, and we can observe that increasing the number of UxNBs reduces the variance of the minimum achievable rate per user for both the aerial cell-free and aerial cellular schemes.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{CDF_K16_M16_S400.eps}
\caption{The CDF of the minimum rate per user ($R_1=...=R_K$) for the aerial and terrestrial BSs with cell-free and cellular schemes. We set $P_m=25~\mathrm{dBm},~\forall m$, $K=16$, $S=400$, $N=4$, and $G=9$.}\label{CDF_K16_M16_S400}
\end{figure}
Fig. \ref{R_vs_M_PXY_K_16_P25_S400} indicates the achievable minimum rate per user in the proposed cell-free scheme versus the number of UxNBs ($M$) for the cases where the locations of UxNBs and/or allocated power for each user at each UxNB are optimized.
In this figure, we consider heuristic solutions for the power allocation and the locations of the UxNBs, and compare them with the optimized values. In the heuristic solution for power allocation, we assume that the power of each UxNB is divided equally among the users. In the heuristic solution for the locations of the UxNBs, we deploy them uniformly over a square area, equally spaced from each other.\footnote{Note that for the simulations, we utilize these heuristic solutions as the initial points in Algorithm 2 and Algorithm 3 for the powers and locations of the UxNBs.} We can see that when we use heuristic solutions for both the power allocation and the locations, the worst performance is achieved. Also, we can see that optimizing the power with Algorithm 1 improves the performance of the proposed scheme. This figure indicates that the performance of the case where both the power and the locations are jointly optimized with Algorithm 3 is the same as that of the case where the locations are optimized with Algorithm 2 and the heuristic solution is utilized for the power allocation. This means that when the locations of the UxNBs are optimized, equal power allocation is the optimal solution for the power. This is because by optimizing the locations of the UxNBs, all users reach equal rates, and hence the optimal policy for power allocation is to divide the total power equally among the users. It should be noted that at the optimal point of our max-min optimization problem, all users must have the same rates.
Fig. \ref{R_vs_P_PXY_M16_K16_S400} shows the achievable minimum rate per user in the proposed cell-free scheme versus the total power of each UxNB ($P_1=...=P_M$) for the cases where the locations of the UxNBs and/or the allocated power for each user at each UxNB are optimized. We set the parameter values as $K=16$, $M=16$, $S=400$, $N=4$, and $G=9$. We consider the same heuristic solutions as in Fig. \ref{R_vs_M_PXY_K_16_P25_S400} for the power allocation and the locations of the UxNBs. We can see that optimizing the power is more useful at lower transmit powers, because at higher transmit powers there are enough power resources to manage fairness among the users. Also, as in Fig. \ref{R_vs_M_PXY_K_16_P25_S400}, we can see that when the locations of the UxNBs are optimized, equal power allocation is the optimal solution for the power.
\begin{figure}[t]
\centering
\begin{minipage}{.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_M_PXY_K_16_P25_S400.eps}
\captionof{figure}{The achievable minimum rate per user in the proposed cell-free scheme versus number of UxNBs ($M$) for the cases where the locations of UxNBs and/or allocated power for each user at each UxNB are optimized. We assumed that $K=16$, $P_m=25~\mathrm{dBm},~\forall m$, $S=400$, $N=4$, and $G=9$.}
\label{R_vs_M_PXY_K_16_P25_S400}
\end{minipage}
\hspace{.01\linewidth}
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_P_PXY_M16_K16_S400.eps}
\captionof{figure}{The achievable minimum rate per user in the proposed cell-free scheme versus total power of each UxNB ($P_1=...=P_M$), for the cases where the locations of UxNBs and/or allocated power for each user at each UxNB are optimized. We assumed that $K=16$, $M=16$, $S=400$, $N=4$, and $G=9$.}
\label{R_vs_P_PXY_M16_K16_S400}
\end{minipage}
\end{figure}
Fig. \ref{R_vs_P_env_K16_M16_S400} shows the achievable minimum rate per user versus the total power of each UxNB ($P_1=...=P_M$) in the aerial schemes for the urban, suburban, and dense urban environments. According to \cite{Irem}, the excessive path losses affecting the air-to-ground links in the LoS and NLoS cases, i.e., $(\eta_{\mathsf{LoS}}^{\mathsf{dB}},\eta_{\mathsf{NLoS}}^{\mathsf{dB}})$, are equal to $(0.1,21)$, $(1,20)$, and $(1.6,23)$ for the suburban, urban, and dense urban environments, respectively. Also, the parameters $(A,B)$ are equal to $(4.88,0.43)$, $(9.61,0.16)$, and $(12.8,0.11)$ for the suburban, urban, and dense urban environments, respectively.
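The ordering of the three environments can be anticipated directly from the $(A,B)$ parameters. Assuming the widely used sigmoid LoS-probability model $P^{\mathsf{LoS}}=1/\left(1+A\exp(-B(\theta-A))\right)$ with the elevation angle $\theta$ in degrees (this specific functional form and the sample angle below are assumptions for illustration, in the spirit of the model of \cite{Irem}), the suburban parameters yield the highest LoS probability:

```python
import math

# Sigmoid air-to-ground LoS-probability model (assumed form):
#   P_LoS(theta) = 1 / (1 + A * exp(-B * (theta - A))), theta in degrees.
def p_los(theta_deg, A, B):
    return 1.0 / (1.0 + A * math.exp(-B * (theta_deg - A)))

envs = {
    "suburban":    (4.88, 0.43),
    "urban":       (9.61, 0.16),
    "dense urban": (12.8, 0.11),
}
theta = 60.0  # illustrative elevation angle in degrees
probs = {name: p_los(theta, A, B) for name, (A, B) in envs.items()}
# At this angle: suburban > urban > dense urban, matching the observed
# performance ordering of the three environments.
```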
As we can see, in both the cellular and cell-free schemes, the suburban environment outperforms the urban and dense urban environments, and the dense urban environment performs the worst. This is due to the higher probability of establishing a LoS link in the suburban environment. Also, one can see that the proposed cell-free scheme performs better than the baseline cellular scheme in all environments, thanks to the utilization of the received signals from all users at each UxNB in the cell-free mode.
Fig. \ref{R_vs_M_env_K16_S400_P25} shows the achievable minimum rate per user versus the number of UxNBs ($M$) in aerial schemes for three urban, suburban, and dense urban environments.
As we can see, the proposed aerial cell-free scheme outperforms the aerial cellular scheme for all environments.
We can also see that by increasing $M$, the performance of both the cell-free and cellular schemes improves in all environments, which is due to the establishment of more LoS links between the UxNBs and the users for a larger $M$ in the aerial schemes. Further, it is shown that the superiority of the proposed cell-free scheme over the aerial cellular scheme in all environments increases with $M$, which is due to the higher intercell interference in the cellular scheme for a larger $M$.
\begin{figure}[t]
\centering
\begin{minipage}{.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_P_env_K16_M16_S400.eps}
\captionof{figure}{The achievable minimum rate per user versus total power of each UxNB ($P_1=...=P_M$) in aerial schemes for three urban, suburban, and dense urban environments. We set $K=16$, $M=16$, $N=4$, $S=400$, and $G=9$.}
\label{R_vs_P_env_K16_M16_S400}
\end{minipage}
\hspace{.01\linewidth}
\begin{minipage}{0.48\linewidth}
\includegraphics[width=\linewidth,height=5cm]{R_vs_M_env_K16_S400_P25.eps}
\captionof{figure}{The achievable minimum rate per user versus the number of UxNBs ($M$) in aerial schemes for three urban, suburban, and dense urban environments. We set $P_m=25~\mathrm{dBm},~\forall m$, $K=16$, $N=4$, $S=400$, and $G=9$.}
\label{R_vs_M_env_K16_S400_P25}
\end{minipage}
\end{figure}
Simulation results showed that the aerial cell-free scheme performs much better than the terrestrial cell-free scheme when the HAPS is equipped with a very large antenna array.
This performance increase comes at the cost of deploying a HAPS and dedicated UAVs. In order to reduce the UAV deployment costs, we can utilize other non-dedicated UAVs as UxNBs by equipping them with the proposed transceiver scheme. With regard to deploying a HAPS, it should be noted that a HAPS can be deployed in the stratosphere for many other use cases, such as a super macro BS, RSS, computing, sensing, and localization, and in this paper it is utilized as a CPU as well. Finally, it should be noted that despite the costs of deploying UAVs and a HAPS, there are some application scenarios where the proposed scheme may nevertheless be advantageous, such as providing a dedicated service for premium users, offloading from saturated terrestrial networks, and special events involving a massive number of users.
\section{Conclusion}
In this paper, we proposed a cell-free scheme for a set of UxNBs to manage the severe interference between terrestrial users and the UxNBs of neighboring cells in aerial cellular networks. We also proposed to use a HAPS as a CPU to combine all the received signals from all UxNBs in the sub-THz band. This involved proposing a transceiver scheme at the UxNBs and a receiver scheme at the HAPS, and formulating an optimization problem to maximize the minimum SINR of the users. Simulation results demonstrated the superiority of the proposed scheme over the aerial cellular and terrestrial cell-free baseline schemes in urban, suburban, and dense urban environments, which is due to the existence of LoS links between the users and the UxNBs. Simulation results also showed that utilizing a HAPS as a CPU is useful when the considerable path loss in the sub-THz band between the UxNBs and the HAPS is compensated for by a high number of antenna elements at the HAPS.
\appendices
\section{PROOF OF Proposition \ref{propos_sinr}}\label{proof_sinr}
In order to derive the $\mathsf{SINR}$ of each user, we rewrite the filtered and combined signal at RB $k$ in the HAPS, i.e., $y^k$ in (\ref{yk}), as
\footnotesize
\begin{equation}
\begin{split}
y^k&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^* y_{s}^k=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}y_{km'}+z_H)
=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}y_{km'}^{\mathsf{NORM}}+z_H)\\&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{y_{km'}^{\mathsf{COMB}}}{|y_{km'}^{\mathsf{COMB}}|}+z_H)=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{\sum_{n=1}^Ny_{km'n}^{\mathsf{MF}}}{|\sum_{n=1}^Ny_{km'n}^{\mathsf{MF}}|}+z_H)\\&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{\sum_{n=1}^Ny_{m'n}\times \frac{h_{km'n}^*}{|h_{km'n}|}}{|\sum_{n=1}^Ny_{m'n}\times \frac{h_{km'n}^*}{|h_{km'n}|}|}+z_H)\\&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*(G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}+z_{m'})\times \frac{h_{km'n}^*}{|h_{km'n}|}}{|\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}+z_{m'})\times \frac{h_{km'n}^*}{|h_{km'n}|}|}+z_H).
\end{split}\label{yk_appen}
\end{equation}
\normalsize
Then, with the use-and-then-forget bound \cite{marzetta2016fundamentals}, we derive the achievable SINR. From the last equation of (\ref{yk_appen}), we write the desired signal (DS) for user $k$ as
\footnotesize
\begin{equation}\label{ds}
\begin{split}
\mathsf{DS}_k&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{\sum_{n=1}^N h_{km'n}\sqrt{P_{k}}s_{k}\times \frac{h_{km'n}^*}{|h_{km'n}|}}{|\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}+z_{m'})\times \frac{h_{km'n}^*}{|h_{km'n}|}|}\\&=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\frac{\sum_{n=1}^N |h_{km'n}|\sqrt{P_{k}}s_{k}}{|\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}+z_{m'})\times \frac{h_{km'n}^*}{|h_{km'n}|}|}.
\end{split}
\end{equation}
\normalsize
The standard deviation of the power normalization factor, i.e., $|\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}+z_{m'})\times \frac{h_{km'n}^*}{|h_{km'n}|}|$, in the denominator of the desired signal in (\ref{ds}) is given by
\footnotesize
\begin{equation}
\begin{split}
&F_{\mathsf{NORM}}=\sqrt{E[(\frac{1}{M}\sum_{m=1}^M\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'mn}\sqrt{P_{k'}}s_{k'}+z_{m})\times \frac{h_{kmn}^*}{|h_{kmn}|})(\frac{1}{M}\sum_{m=1}^M\sum_{n=1}^N(\sum_{k'=1}^{K}h_{k'mn}\sqrt{P_{k'}}s_{k'}+z_{m})\times \frac{h_{kmn}^*}{|h_{kmn}|})^*]}\\&
=\frac{1}{M}\sqrt{E[\sum_{m=1}^M\sum_{n=1}^N(\sum_{k'=1}^{K}|h_{k'mn}|^2P_{k'}|s_{k'}|^2+|z_{m}|^2)]}
=\frac{1}{M}\sqrt{\sum_{m=1}^M\sum_{n=1}^N(\sum_{k'=1}^{K}E[|h_{k'mn}|^2]P_{k'}E[|s_{k'}|^2]+E[|z_{m}|^2])}\\&=\frac{1}{M}\sqrt{\sum_{m=1}^M\sum_{n=1}^N(\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+\sigma^2)}
=\frac{\sqrt{N}}{M}\sqrt{\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2}.
\end{split}
\end{equation}
\normalsize
The expectation of the desired signal for user $k$ can then be written as
\footnotesize
\begin{equation}
\begin{split}
E[\mathsf{DS}_k]&=E[\frac{1}{F_{\mathsf{NORM}}}\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\sum_{n=1}^N |h_{km'n}|\sqrt{P_{k}}]=\frac{G\sqrt{P_{k}}}{F_{\mathsf{NORM}}}\sum_{m=1}^M\sum_{s=1}^S |c_{ms}|^2 \gamma_{m}\sqrt{P_{km}}\sum_{n=1}^N E[|h_{kmn}|]\\&=\frac{G\sqrt{P_{k}}}{F_{\mathsf{NORM}}}\sum_{m=1}^M\sum_{s=1}^S \gamma_{m}\sqrt{P_{km}}\sum_{n=1}^N \beta_{km}=\frac{GNS\sqrt{P_{k}}}{F_{\mathsf{NORM}}}\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km}.
\end{split}
\end{equation}
\normalsize
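Two facts from the system model are used implicitly in the last chain of equalities; we record them here for clarity. Both are assumptions consistent with the derivation above (which replaces $\sum_{s}|c_{ms}|^2$ by $S$ and $E[|h_{kmn}|]$ by $\beta_{km}$): the combining coefficients are unit-modulus, and the LoS channel magnitude is deterministic, with $\phi_{kmn}$ denoting the LoS phase:

```latex
% Implicit steps in the derivation of E[DS_k]:
% (i) unit-modulus combining coefficients, (ii) deterministic LoS magnitudes.
\begin{equation*}
\sum_{s=1}^S |c_{ms}|^2 = S \ \ \big(|c_{ms}|=1\big), \qquad
E\big[|h_{kmn}|\big] = \beta_{km} \ \ \big(h_{kmn}=\beta_{km}e^{j\phi_{kmn}}\big).
\end{equation*}
```

The second fact is also what yields $E[|h_{k'mn}|^2]=\beta_{k'm}^2$ in the computation of $F_{\mathsf{NORM}}$ above.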
Next, we derive the variance of the interference terms in (\ref{yk_appen}). There are two types of interference. The first type, denoted by $I_{k,U}$, is caused by interference from the other users. The second type, denoted by $I_{k,N}$, is due to the amplified noise at the UxNBs. For $I_{k,U}$, we can write
\footnotesize
\begin{equation}
\begin{split}\label{var_IU}
E[I_{k,U}\times I_{k,U}^*]=&E[(\frac{1}{F_{\mathsf{NORM}}}\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\sum_{n=1}^N\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}\times \frac{h_{km'n}^*}{|h_{km'n}|})\\&\times(\frac{1}{F_{\mathsf{NORM}}}\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\sum_{n=1}^N\sum_{k'=1}^{K}h_{k'm'n}\sqrt{P_{k'}}s_{k'}\times \frac{h_{km'n}^*}{|h_{km'n}|})^*]\\&
=\frac{G^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M\sum_{s=1}^S c_{ms}^* c_{ms}c_{ms} c_{ms}^*\gamma_{m}^2P_{km}\sum_{n=1}^N\sum_{k'=1}^{K}E[h_{k'mn}h_{k'mn}^*]P_{k'}E[|s_{k'}|^2]\times \frac{h_{kmn}^*}{|h_{kmn}|}\frac{h_{kmn}}{|h_{kmn}|}\\&=\frac{SG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M\gamma_{m}^2P_{km}\sum_{n=1}^N\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}=\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M\gamma_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}.
\end{split}
\end{equation}
\normalsize
Also, for the variance of $I_{k,N}$, we can write
\footnotesize
\begin{equation}
\begin{split}\label{var_IN}
E[I_{k,N}I_{k,N}^*]&=\frac{1}{F_{\mathsf{NORM}}^2}E[(\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\sum_{n=1}^Nz_{m'}\times \frac{h_{km'n}^*}{|h_{km'n}|})\\&
\times(\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*G\sum_{m'=1}^M c_{m's}\gamma_{m'}\sqrt{P_{km'}}\sum_{n=1}^Nz_{m'}\times \frac{h_{km'n}^*}{|h_{km'n}|})^*]\\&=\frac{G^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M\sum_{s=1}^S c_{ms}^* c_{ms}c_{ms} c_{ms}^* \gamma_{m}^2P_{km}\sum_{n=1}^NE[|z_{m}|^2]\times \frac{h_{kmn}^*}{|h_{kmn}|}\frac{h_{kmn}}{|h_{kmn}|}=\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M \gamma_{m}^2P_{km}\sigma^2.
\end{split}
\end{equation}
\normalsize
Finally, we denote the noise at the HAPS by $N_{\mathsf{HAPS}}$, whose variance is given by
\footnotesize
\begin{equation}
\begin{split}
E[N_{\mathsf{HAPS}}N_{\mathsf{HAPS}}^*]=&E[\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*z_H\times(\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*z_H)^*]
=\sum_{m=1}^M\sum_{s=1}^S c_{ms}^*c_{ms}E[|z_H|^2]=MS\sigma_{H}^2.
\end{split}
\end{equation}
\normalsize
As we mentioned in Section II, in the sub-THz band, the parts of the signals absorbed by the medium are re-emitted with a random phase shift \cite{Petrov_SINR_THz,Saad_thz}. Due to its random phase, this re-emission interference signal, denoted by $I_{k,R}$ for user $k$, is uncorrelated with $y^k$ in (\ref{yk_appen}), and its mean value is equal to $0$. To obtain the variance of the re-emission interference for user $k$, we just need to replace the terms $\gamma_m^2= \tau_m\rho_m^2 $ with $(1-\tau_m)\rho_m^2 $ in the variance expressions of the interference formulas in (\ref{yk_appen}), i.e., $E[I_{k,U}\times I_{k,U}^*]$ in (\ref{var_IU}) and $E[I_{k,N}\times I_{k,N}^*]$ in (\ref{var_IN}), and combine them as follows:
\footnotesize
\begin{equation}
E[I_{k,R}I_{k,R}^*]=\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M(1-\tau_m)\rho_m^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M (1-\tau_m)\rho_m^2P_{km}\sigma^2.
\end{equation}
\normalsize
Now, according to the derived formulas for $E[\mathsf{DS}_k]$, $E[I_{k,U}\times I_{k,U}^*]$, $E[I_{k,N}\times I_{k,N}^*]$, $ E[N_{\mathsf{HAPS}}N_{\mathsf{HAPS}}^*]$, and $E[I_{k,R}I_{k,R}^*]$, we derive the $\mathsf{SINR}$ of user $k$ for the proposed scheme as follows:
\footnotesize
\begin{equation}
\begin{split}
&\mathsf{SINR}_k=\frac{E[\mathsf{DS}_k]^2}{E[I_{k,U} I_{k,U}^*]+E[I_{k,N} I_{k,N}^*]+E[I_{k,R}I_{k,R}^*]+E[N_{\mathsf{HAPS}}N_{\mathsf{HAPS}}^*]}\\&=\frac{(\frac{GNS\sqrt{P_{k}}}{F_{\mathsf{NORM}}}\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})^2}{\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+\frac{NSG^2}{F_{\mathsf{NORM}}^2}\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+MS\sigma_{H}^2}\\&=\frac{G^2N^2SP_{k}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})^2}{NG^2\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+NG^2\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+M\sigma_{H}^2F_{\mathsf{NORM}}^2}\\&=\frac{G^2N^2SP_{k}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})^2}{NG^2\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+NG^2\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+M\sigma_{H}^2(\frac{\sqrt{N}}{M}\sqrt{\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2})^2}\\&=\frac{MG^2NSP_{k}(\sum_{m=1}^M \gamma_{m}\sqrt{P_{km}} \beta_{km})^2}{MG^2\sum_{m=1}^M\rho_{m}^2P_{km}\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+MG^2\sum_{m=1}^M \rho_{m}^2P_{km}\sigma^2+\sigma_{H}^2(\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2)}.
\end{split}
\end{equation}
\normalsize
Utilizing these SINRs, we can derive the achievable rate of user $k$ as $R_k=\log_2(1+\mathsf{SINR}_k)$, and the proof is completed.
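As a sanity check of the final expression (not part of the proof), consider the special case of a single UxNB ($M=1$) with no molecular re-emission ($\tau_1=1$, so $\gamma_1^2=\rho_1^2$) and negligible HAPS noise ($\sigma_H^2\to 0$). The SINR then reduces to

```latex
\begin{equation*}
\mathsf{SINR}_k\Big|_{M=1,\ \tau_1=1,\ \sigma_H^2\to 0}
=\frac{NS\,P_k\,\beta_{k1}^2}{\sum_{k'=1}^{K}\beta_{k'1}^2P_{k'}+\sigma^2},
\end{equation*}
```

which exhibits the expected combined matched-filtering and combining gain $NS$ of the desired signal over the interference-plus-noise floor.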
\section{PROOF OF Proposition \ref{propos_quasi}}\label{proof_quasi}
In order to prove the quasi-concavity of (P1), we need to show that the objective function is quasi-concave and that the constraint set is convex. To prove the quasi-concavity of the objective function, it suffices to show that its upper-level sets are convex \cite{boyd2020disciplined}. For this, we first perform the variable change $\bold{T}=[\sqrt{P_{km}}]_{K\times M}$ in (P1) and denote the objective function by $f(\bold{T})$. Hence, for any $t\in \mathbb{R}_+$, the upper-level set (ULS) of the objective function will be
\footnotesize
\begin{equation}
\begin{split}
& \mathsf{ULS}(f,t)=\{T:f(T)>t\}\\&=\bigg\{T:\frac{MG^2NSP_{k}(\sum_{m=1}^M \gamma_{m}T_{km} \beta_{km})^2}{MG^2\sum_{m=1}^M\rho_{m}^2T_{km}^2\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+MG^2\sum_{m=1}^M \rho_{m}^2T_{km}^2\sigma^2+\sigma_{H}^2(\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2)}>t,\forall k\bigg\}\\&=\bigg\{T:\sqrt{MG^2\sum_{m=1}^M\rho_{m}^2T_{km}^2\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+MG^2\sum_{m=1}^M \rho_{m}^2T_{km}^2\sigma^2+\sigma_{H}^2(\sum_{m=1}^M\sum_{k'=1}^{K}\beta_{k'm}^2P_{k'}+M\sigma^2)}\\&<\frac{\sqrt{MG^2NSP_{k}}\sum_{m=1}^M \gamma_{m}T_{km} \beta_{km}}{\sqrt{t}},\forall k\bigg\}.
\end{split}
\end{equation}
\normalsize
We can see that, for each $k$, this set is defined by a norm function being less than an affine (here linear) function of the variable $T$; hence it is a convex set, and so is their intersection over $k$. With the new variable $T$, the constraint (\ref{eq:constraint-sum_P}) becomes $\sum_{k=1}^KT_{km}^2\leq P_m, ~\forall m$, which defines a convex set, and the proof is completed.
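The convexity of each set in the last step can be made fully explicit: collecting the square root into a Euclidean norm of an affine map of $T$ (the matrices $A_k$ and vectors $b_k$, $c_k$ below are shorthand introduced only for this remark), the condition takes the second-order-cone form

```latex
\begin{equation*}
\Big\{T:\ \big\|A_k T + b_k\big\|_2 \le c_k^T T\Big\},
\end{equation*}
```

which is a convex set for each $k$; the upper-level set is the intersection of these sets over all $k$ and is therefore convex as well.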
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\def\Acknow#1{\subsection*{Acknowledgment} #1}
\def\floatpagefraction{.95}
\def\topfraction{.95}
\def\bottomfraction{.95}
\def\textfraction{.05}
\def\dblfloatpagefraction{.95}
\def\dbltopfraction{.95}
\newcommand{\EFigure}[2]{\begin{figure} \centering
\framebox[85mm]{\epsfxsize=80mm\epsfbox{#1}}
\caption{\protect\small #2}\medskip\hrule
\end{figure}}
\newcommand{\REFigure}[2]{\begin{figure} \centering
\framebox[85mm]{\epsfysize=80mm\rotate[r]{\epsfbox{#1}}}
\caption{\protect\small #2}\medskip\hrule
\end{figure}}
\newcommand{\WEFigure}[2]{\begin{figure*} \centering
\framebox[178mm]{\epsfxsize=170mm\epsfbox{#1}}
\caption{\protect\small #2}\medskip\hrule
\end{figure*}}
\def\Jl#1#2{#1 {\bf #2},\ }
\def\ApJ#1 {\Jl{Astroph. J.}{#1}}
\def\CQG#1 {\Jl{Class. Quantum Grav.}{#1}}
\def\DAN#1 {\Jl{Dokl. AN SSSR}{#1}}
\def\GC#1 {\Jl{Grav. Cosmol.}{#1}}
\def\GRG#1 {\Jl{Gen. Rel. Grav.}{#1}}
\def\JETF#1 {\Jl{Zh. Eksp. Teor. Fiz.}{#1}}
\def\JETP#1 {\Jl{Sov. Phys. JETP}{#1}}
\def\JHEP#1 {\Jl{JHEP}{#1}}
\def\JMP#1 {\Jl{J. Math. Phys.}{#1}}
\def\NPB#1 {\Jl{Nucl. Phys. B}{#1}}
\def\NP#1 {\Jl{Nucl. Phys.}{#1}}
\def\PLA#1 {\Jl{Phys. Lett. A}{#1}}
\def\PLB#1 {\Jl{Phys. Lett. B}{#1}}
\def\PRD#1 {\Jl{Phys. Rev. D}{#1}}
\def\PRL#1 {\Jl{Phys. Rev. Lett.}{#1}}
\def\al{&\nhq}
\def\lal{&&\nqq {}}
\def\eq{Eq.\,}
\def\eqs{Eqs.\,}
\def\beq{\begin{equation}}
\def\eeq{\end{equation}}
\def\bear{\begin{eqnarray}}
\def\bearr{\begin{eqnarray} \lal}
\def\ear{\end{eqnarray}}
\def\earn{\nonumber \end{eqnarray}}
\def\nn{\nonumber\\ {}}
\def\nnv{\nonumber\\[5pt] {}}
\def\nnn{\nonumber\\ \lal }
\def\nnnv{\nonumber\\[5pt] \lal }
\def\yy{\\[5pt] {}}
\def\yyy{\\[5pt] \lal }
\def\eql{\al =\al}
\def\eqv{\al \equiv \al}
\def\sequ#1{\setcounter{equation}{#1}}
\def\dst{\displaystyle}
\def\tst{\textstyle}
\def\fracd#1#2{{\dst\frac{#1}{#2}}}
\def\fract#1#2{{\tst\frac{#1}{#2}}}
\def\Half{{\fracd{1}{2}}}
\def\half{{\fract{1}{2}}}
\def\e{{\,\rm e}}
\def\d{\partial}
\def\re{\mathop{\rm Re}\nolimits}
\def\im{\mathop{\rm Im}\nolimits}
\def\arg{\mathop{\rm arg}\nolimits}
\def\tr{\mathop{\rm tr}\nolimits}
\def\sign{\mathop{\rm sign}\nolimits}
\def\diag{\mathop{\rm diag}\nolimits}
\def\dim{\mathop{\rm dim}\nolimits}
\def\const{{\rm const}}
\def\eps{\varepsilon}
\def\ep{\epsilon}
\def\then{\ \Rightarrow\ }
\newcommand{\toas}{\mathop {\ \longrightarrow\ }\limits }
\newcommand{\aver}[1]{\langle \, #1 \, \rangle \mathstrut}
\newcommand{\vars}[1]{\left\{\begin{array}{ll}#1\end{array}\right.}
\def\suml{\sum\limits}
\def\intl{\int\limits}
\bls{1.025}
\begin{document}
\twocolumn[
\jnumber{2}{2011}
\Title{Rotating thin-shell wormhole from glued Kerr spacetimes}
\Aunames{P. E. Kashargin$^a$ and S. V. Sushkov$^{a,b,1}$}
\Addresses{
\addr a {Department of General Relativity and Gravitation,
Kazan State University, Kremlevskaya str. 18, Kazan 420008,
Russia}
\addr b {Department of Mathematics, Tatar State University of Humanities and
Education, Tatarstan str. 2, Kazan 420021, Russia}
}
\Rec{January 13, 2011}
\Abstract
{We construct a model of a rotating wormhole made by cutting and pasting
two Kerr spacetimes. As a result, we obtain a rotating thin-shell wormhole
with exotic matter at the throat. Two candidates for the exotic matter are
considered: (i) a perfect fluid; (ii) an anisotropic fluid. We show that
a perfect fluid is unable to support a rotating thin-shell wormhole.
On the contrary, the anisotropic fluid with the negative energy density
can be a source for such a geometry.
}
\bigskip
]
\email 1 {sergey\[email protected]; [email protected]}
{
\newcommand{\Ref}[1]{(\ref{#1})}
\newcommand{\hi}{{\hat\imath}}
\newcommand{\hj}{{\hat\jmath}}
\newcommand{\cE}{{\cal E}}
\newcommand{\cP}{{\cal P}}
\def\ee{{\mathbf{e}}}
\section{Introduction}
Wormholes are usually defined as topological handles in spacetime linking
widely separated regions of a single universe, or ``bridges'' joining two
different spacetimes [1]. Their history traces back to the works of
Einstein and Rosen [2], and Misner and Wheeler [3]. The modern interest in
wormholes dates back to 1988, when Morris and Thorne [4] discussed the
possibility of using wormholes for interstellar travels. As is well-known
[4, 5], traversable wormholes can exist only if their throats contain
exotic matter which possesses a negative pressure and violates the null
energy condition. The search for realistic physical models providing the
wormhole existence represents an important direction in wormhole physics.
In general relativity there are models of wormholes supported by matter
with exotic equations of state such as phantom energy [6, 7], a Chaplygin
gas [8], tachyon matter [9]. Numerous examples of wormhole solutions have
been found in various modifications of general relativity such as
scalar-tensor theories of gravity, brane theories, semiclassical gravity,
theories with non-minimal coupling [10, 11]. It is worth noting
that most investigations deal with static spherically symmetric
wormholes because of their simplicity and high symmetry. At the same
time, it would be important and interesting from a physical point of view
to study wider classes of wormholes including non-static and rotating ones.
Rotating wormholes were first considered by Teo [12] who discussed some
general geometrical properties of the stationary rotating wormhole
spacetime. Other investigations in this field include studies of general
requirements to the stress-energy tensor necessary to generate a rotating
wormhole [13], energy conditions in a rotating wormhole spacetime and its
traversability [14], and scalar perturbations in the rotating wormhole
background [15]. Arguments in favor of the possibility of existence
of semiclassical rotating wormholes were given in [16]. Solutions
describing slowly rotating wormholes have been found and analyzed in [17,
18]. A number of new axially symmetric stationary exact solutions in
general relativity with phantom and Maxwell fields have recently been
obtained in [19, 20]; among them are solutions which represent rotating
and magnetized wormholes.
The first examples of thin-shell wormholes have been given by Visser [21,
22]. In particular, he considered a spherically symmetric thin-shell
wormhole constructed by joining two Schwarzschild geometries [22].
Generally, thin-shell wormholes are made by cutting and pasting two
manifolds to form a geodesically complete new one with a throat located on
the joining shell. In this case, the exotic matter needed to build the
wormhole is concentrated on the shell, and the junction-condition
formalism is used for its study. Due to elegancy and relative simplicity,
the cut-and-paste approach has become widely used for constructing new
models of thin-shell wormholes such as charged wormholes [23], those with
a cosmological constant [24], cylindrical [25] and plane-symmetric [26]
wormholes, those in dilaton [27], Einstein--Gauss--Bonnet [28, 29], and
Brans-Dicke [30] gravity, wormholes with a generalized Chaplygin gas [31],
wormholes associated with global cosmic strings [32] and global monopoles
[33]. Worth mentioning is also the paper by Bronnikov and Starobinsky [34]
who considered static, spherically symmetric thin-shell wormholes in any
non-ghost scalar-tensor theory of gravity and showed that the shell
surface energy density is negative in all such cases.
In this paper we will apply the cut-and-paste method in order to construct
and study a rotating thin-shell wormhole made by cutting and pasting two
Kerr spacetimes.
\section{Kerr surgery}
The Kerr metric in the Boyer-Lindquist coordinates reads [35]
\bearr
ds^2=\left(1-\frac{2mr}{\rho^2}\right)dt^2
-\frac{\rho^2}{\Delta}dr^2-\rho^2d\theta^2
\nnn \cm
-\left(r^2+a^2+\frac{2ma^2r}{\rho^2}\sin^2\theta\right)\sin^2\theta
d\phi^2
\nnn \cm
+\frac{4mar}{\rho^2}\sin^2\theta d\phi dt, \label{kash-metric}
\ear
where $\rho^2 = r^2+a^2\cos^{2}\theta$ and $\Delta = r^2-2mr +a^2$.
The parameters $m$ and $J=ma$ correspond to the mass and angular momentum
of a Kerr black hole measured by a distant observer. The metric (1)
has two fictitious singularities. The first one occurs at the {\em event
horizon} $r=r_{+}$ where $\Delta=0$, and hence $g_{rr}$ is infinite:
\beq \label{kash-r+}
r_+ = m + \sqrt{m^2-a^2}.
\eeq
The second singularity occurs on the boundary of the {\em ergosphere}
$r=r_0$ where $g_{tt}=0$:
\beq \label{kash-r_0}
r_0 = m+\sqrt{m^2-a^2\cos^2\theta}.
\eeq
Consider two copies ${\cal M}_1$ and ${\cal M}_2$ of the region
$r\ge b$ of the Kerr spacetime (\ref{kash-metric}):
\beq
{\cal M}_{1,2}=\{ (t,r,\theta,\phi) \, |\, r\ge b \}.
\eeq
As a result, we get two geodesically incomplete manifolds with boundaries
given by the timelike hypersurfaces
\beq
\Sigma_{1,2}=\{ (t,r,\theta,\phi) \, |\, F(r)=r-b=0 \}.
\eeq
Identifying these hypersurfaces (i.e., $\Sigma = \Sigma_{1} \equiv
\Sigma_{2}$), we obtain a new manifold ${\cal M}={\cal M}_{1}\cup{\cal
M}_{2}$, which is geodesically complete and possesses two asymptotically
flat regions connected by a wormhole with the throat $\Sigma$. Note that
the two-dimen\-sional surface $t=\const$, $r=b$ in Kerr spacetime is
actually an oblate ellipsoid of revolution whose minor and major semi-axes are
equal to $b$ and $(a^2+b^2)^{1/2}$, respectively. Nevertheless, for brevity we
will call $b$ the wormhole throat radius. To avoid the presence of
horizons in the resulting manifold $\cal M$, we will suppose $ b > r_{+}$.
Since $\cal M$ is piecewise Kerr, the stress-energy tensor is everywhere
zero, except for the throat itself. At $\Sigma$ one may expect a
stress-energy tensor proportional to the delta function. This means that
the throat $\Sigma$ is a thin shell.
To analyze such a thin-shell configuration, we will follow the
Darmois-Israel standard formalism [36], also known as the junction
condition formalism. The wormhole throat $\Sigma$ is a synchronous
timelike hypersurface, where we define the intrinsic coordinates
$\xi^{i} = (\tau,\vartheta,\varphi)$ as follows: $\tau=t_1\equiv t_2$,
$\vartheta=\theta_1\equiv \pi-\theta_2$, and $\varphi=\phi_1\equiv\phi_2$.
The coordinate $\tau$ is the proper time on the shell. Generally, the
throat radius can be a function of proper time. However, we will assume
$b(\tau)\equiv b=\const$. Note that the metric (the first fundamental
form) is continuous on $\Sigma$:
\beq
g_{ij}^{1}|_\Sigma=g_{ij}^{2}|_\Sigma,\label{kash-g_on_sigma}
\eeq
while its first derivatives can be discontinuous. To describe this
discontinuity, one should consider the extrinsic curvature. The
extrinsic curvatures (second fundamental forms) associated with the two
sides of the shell $\Sigma$ are
\beq
K^{\pm}_{ij}= \left. -n^\pm_{\gamma}\left( \frac{\d^2
x^{\gamma}}{\d\xi^{i}\d\xi^{j}}+\Gamma^\gamma_{\alpha\beta}
\frac{\d x^{\alpha}}{\d\xi^{i}} \frac{\d x^{\beta}}
{\d\xi^{j}}\right)\right|_{\Sigma},
\label{kash-second_form}
\eeq
where $n^\pm_\gamma$ are the unit normals ($n^\gamma n_\gamma=1$)
to $\Sigma$:
\beq
n^\pm_\gamma=\pm\left|g^{\alpha\beta}\frac{\d F}{\d x^\alpha}
\frac{\d F}{\d x^\beta}\right|^{-1/2}\frac{\d F}{\d x^\gamma}.
\eeq
Generally, $K_{ij}^{+}\neq K_{ij}^{-}$. With the definitions
$k_{ij} = K^{+}_{ij}-K^{-}_{ij}$ and $k=k^i_i$ we have the Einstein
equations on the shell (also called the Lanczos equations)
\bear \label{kash-Lanczos}
-k_{ij} + kg_{ij} = 8\pi S_{ij},
\ear
where $S_{ij}$ is the surface stress-energy tensor.
Let us adopt the orthonormal basis
$\{\ee_{\hat\tau},\ee_{\hat\vartheta},\ee_{\hat\varphi}\}$
for the metric \Ref{kash-metric} on $\Sigma$:
\bearr \label{kash-orthobasis}
\ee_{\hat\tau} = \frac{\ee_{\tau}-\frac{g_{\tau\varphi}}
{g_{\varphi\varphi}} \ee_{\varphi}}
{\sqrt{g_{\tau\tau}-\frac{g_{\tau\varphi}^2}{g_{\varphi\varphi}}}},
\nnn
\ee_{\hat\vartheta} = \frac{\ee_{\vartheta}}
{\sqrt{-g_{\vartheta\vartheta}}},
\nnn
\ee_{\hat\varphi} = \frac{\ee_{\varphi}}{\sqrt{-g_{\varphi\varphi}}}.
\ear
In this basis, the surface stress-energy tensor $S_{ij}$ has the
following algebraic structure:
\beq \label{kash-Sgen}
S_{\hat\imath\hat\jmath} = \left[
\begin{array}{ccc}
\sigma & 0 & \zeta \\
0 & p_{\vartheta} & 0 \\
\zeta & 0 & p_{\varphi}
\end{array} \right],
\eeq
where $\sigma$ is the surface energy density, $p_{\vartheta}$ and
$p_{\varphi}$ are the principal surface pressures, and $\zeta$ is the
surface angular momentum density. The Lanczos equations \Ref{kash-Lanczos}
in the basis \Ref{kash-orthobasis} take the following form:
\begin{subequations} \label{kash-alleqs}
\bearr
4\pi\sigma = -\frac{\Delta_\beta^{1/2}}{m \rho_\beta
\Phi}\Big[2\beta^3+\alpha^2\beta+\alpha^2
\nnn \inch
+\alpha^2(\beta-1)\cos^2\vartheta\Big] , \label{kash-sigma}
\yyy
4\pi p_{\vartheta} = \frac{\beta-1}{m\rho_\beta\Delta_\beta^{1/2}},
\yyy
4\pi p_{\varphi} = \frac 1 {m \rho_\beta^3\Delta_\beta^{1/2}\Phi}
\Big[\beta^2 \big(\beta^5-\beta^4+2\alpha^2\beta^3
\nnn \cm
+2\alpha^2\beta^2 +\alpha^2\beta(\alpha^2-8)+3\alpha^4\big)
\nnn \cm
+\alpha^2\cos^2\vartheta
\big(\beta^5-5\beta^4+2\beta^3(\alpha^2+4)
\nnn \cm
-6\alpha^2\beta^2 +\alpha^4\beta-\alpha^4\big)\Big] ,\label{kash-pphi}
\yyy
4\pi \zeta = -\frac1{m\rho_\beta^3 \Phi}
\Big[\alpha \sin\vartheta \big(3\beta^4+\alpha^2\beta^2
\nnn \cm\cm
+ \alpha^2(\beta^2-\alpha^2)\cos^2\vartheta \big)\Big],
\label{kash-zeta}
\ear
\end{subequations}
where we have introduced the convenient dimensionless quantities
$$
\beta = bm^{-1},\qquad \alpha = am^{-1},
$$ $$
\Delta_\beta = \beta^2-2\beta +\alpha^2, \qquad
\rho_\beta^2 = \beta^2+\alpha^2\cos^{2}\theta,
$$ $$
\Phi = \beta^4+\alpha^2\beta^2+2\alpha^2\beta
+\alpha^2\Delta_\beta\cos^2\vartheta.
$$
{ Later on we will also use the dimensionless notation for the event
horizon $\beta_+=r_+m^{-1}=1+\sqrt{1-\alpha^2}$ and the boundary
of the ergosphere
$\beta_0=r_0m^{-1}=1+\sqrt{1-\alpha^2\cos^2\theta}$.}
\section{Matter on the shell}
It is necessary to emphasize that the quantities $\sigma$,
$p_{\vartheta}$, $p_{\varphi}$, and $\zeta$ given by \eqs \Ref{kash-alleqs} are not yet
related to any physical model of matter filling the shell $\Sigma$. Their
values are of purely geometric nature and depend on the metric parameters
$m$ and $a$ and the throat radius $b$. To impart a physical sense to these
quantities one should specify the kind of matter which can support the
rotating thin-shell wormhole.
\subsection{Perfect fluid}
As a simple model of matter located on the shell $\Sigma$, we will first
consider a perfect fluid. In the orthonormal basis \Ref{kash-orthobasis}
the surface stress-energy tensor of a perfect fluid is
\beq \label{kash-Sperfluid}
S_{\hi\hj} = (\cE+\cP) u_\hi u_\hj - \eta_{\hi\hj}\cP,
\eeq
where $\eta_{\hi\hj} = \diag(+1,-1,-1)$, $u_\hi$ is the fluid velocity
which is supposed to be timelike, i.e. $u^\hi u_\hi = 1$, ${\cE}$ is the
fluid energy density measured in the comoving frame, and $\cP$ is the
pressure isotropic in all directions tangent to the shell $\Sigma$. For
the rotating fluid it is natural to choose $u_\hi= (u_\tau,0,u_\varphi)$.
Comparing \Ref{kash-Sgen} and \Ref{kash-Sperfluid}, we find
\begin{subequations} \label{kash-alleqs0}
\bearr
\sigma = ({\cE}+\cP)u_\tau^2-\cP,
\\ \lal
p_\vartheta = \cP,
\\ \lal
p_\varphi = ({\cE}+\cP)u_{\varphi}^2+\cP,
\\ \lal
\zeta = ({\cE}+\cP)u_\tau u_\varphi.
\ear
\end{subequations}
Combining these equations, one can easily obtain the following relation:
\beq \label{kash-relperfluid}
(\sigma+p_\vartheta)(p_\varphi-p_\vartheta)-\zeta^2\equiv 0.
\eeq
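Indeed, combining \eqs \Ref{kash-alleqs0} pairwise gives

```latex
\beq
\sigma+p_\vartheta = (\cE+\cP)u_\tau^2, \quad
p_\varphi-p_\vartheta = (\cE+\cP)u_\varphi^2, \quad
\zeta = (\cE+\cP)u_\tau u_\varphi,
\eeq
```

so that $(\sigma+p_\vartheta)(p_\varphi-p_\vartheta) = (\cE+\cP)^2u_\tau^2u_\varphi^2 = \zeta^2$ for any equation of state of the perfect fluid, which is exactly \Ref{kash-relperfluid}.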
Substitution of \eqs \Ref{kash-alleqs} into the last relation gives
\beq
4\alpha^2\beta^2\rho_\beta^{-6}\sin^2\theta\equiv 0.
\eeq
This identity is fulfilled only if $\alpha = am^{-1} = 0$, i.e.,
$a = 0$. Therefore, a perfect fluid cannot be a source for a rotating
thin-shell wormhole with $a \ne 0$.\footnote
{Nevertheless, a perfect fluid can support a spherically symmetric
thin-shell wormhole made of two surgically modified Schwarzschild
spacetimes [22].}
\subsection{Anisotropic fluid}
Now consider an anisotropic fluid with the surface stress-energy tensor
\beq \label{kash-Sfluid}
S_{\hi\hj} = \cE u_\hi u_\hj+\cP_1 v_\hi v_\hj+\cP_2 \Pi_{\hi\hj}.
\eeq
Here $u_\hi = (u_\tau,0,u_\varphi)$ is the fluid timelike velocity
($u^\hi u_\hi = 1$), and $v_\hi$ and $\Pi_{\hi\hj}$ satisfy the following
orthogonality conditions:
\beq
u^\hi v_\hi = 0, \quad u^\hi\Pi_{\hi\hj} = 0, \quad
v^\hi\Pi_{\hi\hj} = 0.
\eeq
$\cE$ is the energy density, $\cP_1$ and $\cP_2$ are the fluid pressures
in two orthogonal directions tangent to the shell $\Sigma$ (generally,
$\cP_1\not = \cP_2$). For the rotating fluid it is natural to choose
$u_\hi = (u_\tau,0,u_\varphi)$ with
\beq \label{kash-norm}
u_\tau^2-u_\varphi^2 = 1,
\eeq
and $v_\hi = (0,1,0)$; the tensor $\Pi_{\hi\hj}$ can be constructed
as follows: $\Pi_{\hi\hj} = u_\hi u_\hj-v_\hi v_\hj-\eta_{\hi\hj}$.
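It is straightforward to check that this $\Pi_{\hi\hj}$ satisfies the required orthogonality conditions:

```latex
\bearr
\Pi_{\hi\hj}u^{\hj} = u_\hi\,(u_\hj u^{\hj}) - v_\hi\,(v_\hj u^{\hj}) - u_\hi
= u_\hi - 0 - u_\hi = 0,
\nnn
\Pi_{\hi\hj}v^{\hj} = u_\hi\,(u_\hj v^{\hj}) - v_\hi\,(v_\hj v^{\hj}) - v_\hi
= 0 + v_\hi - v_\hi = 0,
\ear
```

where we have used $u^\hi u_\hi = 1$, $v^\hi v_\hi = -1$, and $u^\hi v_\hi = 0$. Moreover, the trace is $\Pi^{\hi}_{\ \hi} = -1$, so $\Pi_{\hi\hj}$ projects onto the single direction tangent to $\Sigma$ and orthogonal to both $u^\hi$ and $v^\hi$.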
Comparing \Ref{kash-Sgen} and \Ref{kash-Sfluid}, we find
\begin{subequations} \label{kash-alleqs2}
\bearr
\sigma = ({\cE}+\cP_2)u_\tau^2-\cP_2 ,
\\ \lal
p_\vartheta = \cP_1,
\\ \lal
p_\varphi = ({\cE}+\cP_2)u_{\varphi}^2+\cP_2,
\\ \lal
\zeta = ({\cE}+\cP_2)u_\tau u_\varphi.
\ear
\end{subequations}
The latter equations, together with the normalizing condition
\Ref{kash-norm}, form a set of five algebraic equations for five unknowns
$\cE$, $\cP_1$, $\cP_2$, $u_\tau$ and $u_\varphi$. Resolving the system
yields $\cP_1 = p_\vartheta$, and
\begin{subequations} \label{kash-sol}
\bear
\cE^\pm \eql \frac12\left[ \sigma-p_\varphi\pm\sqrt{D}\right],
\label{kash-rho_pm}
\\
\cP_2^\pm \eql \frac12\left[-\sigma+p_\varphi\pm\sqrt{D}\right],
\label{kash-P_p_pm}\\
(u^\pm_\tau)^2 \eql \pm\frac{\sigma+p_\varphi}{2\sqrt{D}}+\frac12,
\label{kash-ut_pm}
\\
(u^\pm_\varphi)^2 \eql
\pm\frac{\sigma+p_\varphi}{2\sqrt{D}}-\frac12, \label{kash-up_pm}
\ear
\end{subequations}
with $D = (\sigma+p_\varphi)^2-4\zeta^2$. It is worth noting that we have
got two classes of solutions which depend on a choice of the plus or minus
sign in the obtained expressions.
Finally, \eqs \Ref{kash-sol} represent expressions for the surface energy
density $\cE$, pressures $\cP_1$ and $\cP_2$, and velocity components
$u_\tau$ and $u_\varphi$ of the anisotropic fluid on the shell $\Sigma$.
\section{Analysis}
In this section we will analyze the model of a rotating thin-shell
wormhole constructed above. First of all, let us consider the particular
case of a non-rotating thin-shell wormhole with $a = 0$ (no angular
momentum). In this case the metric \Ref{kash-metric} reduces to the
Schwarzschild one, and Eqs. \Ref{kash-alleqs} reduce to those obtained by
Visser [22]:
\bearr
\sigma = -\frac{1}{2\pi b}\sqrt{1-2m/b},
\nnn
p_\vartheta = p_\varphi = \frac{1}{4\pi b}\frac{1-m/b}{\sqrt{1-2m/b}},
\qquad \zeta = 0.
\ear
Note that the surface energy density $\sigma$ tends to zero and the
pressures $p_\vartheta$ and $p_\varphi$ to infinity if the throat radius
$b$ tends to that of the event horizon $r_g = 2m$.
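The same limit can be taken directly in \eqs \Ref{kash-alleqs} as a consistency check: setting $\alpha = 0$, one has $\rho_\beta = \beta$, $\Phi = \beta^4$, and $\Delta_\beta = \beta^2-2\beta$, so that, e.g.,

```latex
\beq
4\pi\sigma = -\frac{\Delta_\beta^{1/2}}{m\beta^5}\,2\beta^3
= -\frac{2}{m\beta}\sqrt{1-\frac{2}{\beta}}
= -\frac{2}{b}\sqrt{1-\frac{2m}{b}},
\eeq
```

in agreement with the first of the above formulas after restoring $b = m\beta$; the pressures are recovered in the same way, and $\zeta$ vanishes identically.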
In the general case of a rotating thin-shell wormhole with $a\not = 0$
we have $\sigma \sim \Delta_\beta^{1/2}$ and $p_\vartheta,\
p_\varphi\sim \Delta_\beta^{-1/2}$ (see \Ref{kash-alleqs}). Since
$\Delta_\beta = 0$ if $\beta = \beta_+ = 1+\sqrt{1-\alpha^2}$, we can
see that $\sigma\to 0$ and $p_\vartheta,\ p_\varphi \to\infty$
as $\beta\to \beta_+$.
Now let us discuss the properties of the anisotropic fluid located on
the shell $\Sigma$. Given the expressions \Ref{kash-alleqs} for $\sigma$,
$p_\vartheta$, $p_\varphi$, and $\zeta$, we can find the values $\cE$,
$\cP_1$, $\cP_2$, $u_\tau$, and $u_\varphi$ as explicit functions of the
dimensionless throat radius $\beta$. In particular, we have
\bearr
D = \frac1{(4\pi m)^2\rho_\beta^{6}\Delta_\beta}
\Big[\beta^3(\beta(\beta-3)^2-4\alpha^2)
\nnn
+2\alpha^2\beta\cos^2\vartheta
(\beta^3-3\beta+2\alpha^2)+\alpha^4\cos^4\vartheta(\beta-1)^2\Big].
\nnn
\ear
Note that $D$ should necessarily be positive, i.e., $D > 0$. As is shown
in the Appendix, this is possible if and only if $\beta\in{I}_1\cup{I}_2$,
where ${I}_1 = (\beta_+,\beta_2)$, ${I}_2 = (\beta_3,\infty)$, and
\beq \label{kash-beta_n}
\beta_n = 2+2\cos\left(\frac{\chi-2\pi(3-n)}{3}\right), \quad
n = 1,2,3,
\eeq
with $\chi$ defined by $\cos\chi = 2\alpha^2-1$. Additionally, one should
check whether or not the values of $(u_\tau^\pm)^2$ and
$(u_\varphi^\pm)^2$ given by \eqs \Ref{kash-ut_pm} and \Ref{kash-up_pm}
are non-negative.\footnote
{In principle, one may discard this requirement and consider also
negative values of $u_\tau^2$ and $u_\varphi^2$. In this case the
components $u_\tau$ and $u_\varphi$ will be pure imaginary, and
as a consequence $u^\hi$ will be spacelike, i.e. $u^\hi u_\hi = -1$.
In turn, this means that the fluid velocity exceeds the
velocity of light.}
From Fig.\,\ref{kash-fig1} one may see that $(u_\tau^+)^2$ and $(u_\varphi^+)^2$ are
positive if $\beta<\beta_2$, while $(u_\tau^-)^2$ and $(u_\varphi^-)^2$
are positive if $\beta>\beta_3$. This means that one should take the plus
sign in \eqs \Ref{kash-rho_pm}--\Ref{kash-up_pm} in the case $\beta\in
I_1$ and the minus sign if $\beta\in I_2$. Let us repeat that the domain
$\beta \le \beta_+$ is forbidden by definition since we consider only
wormholes whose throat radius is greater than that of the event horizon
$\beta_+$. In addition, it turns out that the domain
$\beta\in[\beta_2,\beta_3]$ is also forbidden for rotating thin-shell
wormholes. Thus we have two classes of wormhole solutions depending on
the throat radius $\beta$: (i) $\beta_+ < \beta < \beta_2$; (ii)
$\beta > \beta_3$.
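For orientation, the boundaries of these two classes are easy to evaluate numerically from Eq.\,\Ref{kash-beta_n}. The following sketch is our own illustration (not part of the derivation) and reproduces the values underlying Figs.\,1 and 2, where $\alpha=0.5$:

```python
from math import acos, cos, pi, sqrt

def beta_n(alpha, n):
    # Roots of the cubic beta*(beta - 3)**2 - 4*alpha**2 = 0, Eq. (kash-beta_n)
    chi = acos(2 * alpha**2 - 1)
    return 2 + 2 * cos((chi - 2 * pi * (3 - n)) / 3)

alpha = 0.5                            # value used in Figs. 1 and 2
beta_plus = 1 + sqrt(1 - alpha**2)     # event horizon radius
b2, b3 = beta_n(alpha, 2), beta_n(alpha, 3)

# Admissible throat radii: I1 = (beta_plus, b2) and I2 = (b3, infinity)
print(round(beta_plus, 3), round(b2, 3), round(b3, 3))  # 1.866 2.347 3.532
```

For $\alpha=0.5$ this gives $I_1\approx(1.87,\,2.35)$ and $I_2\approx(3.53,\,\infty)$, consistent with the unshaded regions of Fig.\,1.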
The energy density $\cE$ and the pressures $\cP_1$ and $\cP_2$ as
functions of $\beta$ are shown in Fig.\,2. Note that $\cE$ is negative,
while $\cP_1$ and $\cP_2$ are positive for all values of $\beta$.
\begin{figure}[ht]
\centerline{\includegraphics[scale = 0.4]{kashargin1.eps}}
\caption{Plots of $(u_\tau^\pm)^2$ and $(u_\varphi^\pm)^2$ vs. $\beta$ with
given $\alpha = 0.5$, $m = (4\pi)^{-1}$. The solid and dashed curves
are used for the plus- and minus-sign solutions, respectively; thick
lines show $(u_\tau^\pm)^2$, and thin lines show
$(u_\varphi^\pm)^2$. The shaded areas indicate forbidden regions
$\beta \le \beta_+$ and $\beta\in[\beta_2,\beta_3]$.
\label{kash-fig1}}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[scale = 0.4]{kashargin2.eps}}
\caption{Plots of $\cE$, $\cP_1$, and $\cP_2$ vs. $\beta$ with given $\alpha
= 0.5$, $m = (4\pi)^{-1}$. Solid, dotted, and thick lines
show $\cE$, $\cP_1$, and $\cP_2$, respectively. The shaded areas
indicate forbidden regions $\beta \le \beta_+$ and
$\beta\in[\beta_2,\beta_3]$. \label{kash-fig2}}
\end{figure}
\section{Conclusion}
We have constructed a rotating wormhole model by cutting and pasting
two Kerr spacetimes. As is usual for the cut-and-paste approach, the
resulting wormhole spacetime has a thin shell joining two regions of Kerr
spacetimes. This shell represents the wormhole throat and contains the exotic
matter needed to support the wormhole. We have discussed two possible
candidates for the role of the exotic matter: (i) a perfect fluid, and (ii)
an anisotropic fluid. It has been shown that a perfect fluid is unable
to support a rotating thin-shell wormhole, while an anisotropic fluid
localized on the shell can be a source of such geometry. The corresponding
fluid energy density $\cE$ and anisotropic pressures $\cP_1$ and $\cP_2$
are given by \eqs \Ref{kash-sol} which express $\cE$, $\cP_1$, and
$\cP_2$ as functions of the dimensionless throat radius $\beta$.
Admissible values of $\beta$ belong to two nonintersecting
intervals $I_1=(\beta_+,\beta_2)$ and $I_2=(\beta_3,\infty)$,
where $\beta_+=1+\sqrt{1-\alpha^2}$ is the event horizon radius and
$\beta_n$ ($n=2,3$) are given by Eq. \Ref{kash-beta_n}. Since
$\beta_2<\beta_3$, the intervals $I_1$ and $I_2$ do not
intersect. Therefore,
there are two classes of wormhole solutions: (i) with
``small'' throat radii $\beta_+<\beta<\beta_2$, and (ii) with
``large'' radii $\beta>\beta_3$. In both cases the energy
density $\cE$ of the anisotropic fluid turns out to be negative.
This means that matter supporting the rotating wormhole violates
the weak energy condition.
It is interesting that the throat radius $\beta$ of the rotating
thin-shell wormhole can be less than the maximal size of the
ergosphere, $\beta_0^{max}=2$ ($\vartheta=\pi/2$). This is possible
for wormholes of class (i) with small throat radii
$\beta_+<\beta<\beta_2$ (see the Appendix). Moreover, for
wormholes with large angular momentum, $\alpha>2^{-1/2}$, all values
of $\beta$ in the interval $(\beta_+,\beta_2)$ are less than
$\beta_0^{max}$. Thus there are wormholes of class (i) whose
throat lies inside the ergosphere. Such a feature may, in
principle, lead to interesting consequences due to processes
similar to the Penrose process in the ergosphere of a Kerr black
hole.
An important issue in wormhole physics is the stability of wormhole
configurations. The stability of spherically symmetric thin-shell
wormholes has been intensively considered in the literature [37--44].
We intend to study this problem for rotating thin-shell wormholes in our
forthcoming paper.
\section*{Appendix}
\def\theequation{A.\arabic{equation}}
\sequ 0
Rearranging \eqs \Ref{kash-alleqs2} yields
\beq
\zeta^2 = (\sigma+\cP_2)(p_\varphi-\cP_2) . \label{kash-Pphi}
\eeq
It is a quadratic equation for $\cP_2$ with the discriminant $D =
(\sigma+p_\varphi)^2-4\zeta^2$ which should be necessarily positive,
$D > 0$. Using the relations \Ref{kash-sigma}, \Ref{kash-pphi}, and
\Ref{kash-zeta}, we find
\bearr \label{kash-D}
D = (4\pi m)^{-1}\rho_0^{-6}\Delta_0^{-1}
\Big[\beta^3(\beta(\beta-3)^2-4\alpha^2)
\nnn \cm
+2\alpha^2\beta\cos^2\vartheta (\beta^3-3\beta+2\alpha^2)
\nnn \cm
+\alpha^4\cos^4\vartheta (\beta-1)^2\Big].
\ear
Since $b > r_{+}$ is assumed, we have $\beta>\beta_+ = 1+\sqrt{1-\alpha^2}$,
and one may check in a straightforward manner that the cosine terms in
\Ref{kash-D} are positive. Therefore the sign of $D$ is determined by the
first term in the square brackets. In particular, on the equator
$\vartheta = \pi/2$ the condition $D>0$ reduces to
\beq \label{kash-ineqq}
f_\alpha(\beta) = \beta(\beta-3)^2-4\alpha^2 > 0.
\eeq
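Both steps above, the discriminant identity $D=(\sigma+p_\varphi)^2-4\zeta^2$ and the reduction of the square bracket in \Ref{kash-D} to $\beta^3 f_\alpha(\beta)$ on the equator, are easily verified numerically. The following sketch is our own check, not part of the derivation:

```python
import random

random.seed(2)

def bracket(beta, alpha, c):   # c = cos(vartheta); square bracket in Eq. (kash-D)
    return (beta**3 * (beta * (beta - 3)**2 - 4 * alpha**2)
            + 2 * alpha**2 * beta * c**2 * (beta**3 - 3 * beta + 2 * alpha**2)
            + alpha**4 * c**4 * (beta - 1)**2)

for _ in range(200):
    # discriminant of P2**2 + (s - p)*P2 + (z**2 - s*p) = 0 from Eq. (kash-Pphi)
    s, p, z = (random.uniform(-2, 2) for _ in range(3))
    assert abs(((s - p)**2 - 4 * (z**2 - s * p)) - ((s + p)**2 - 4 * z**2)) < 1e-12

    # on the equator (cos(vartheta) = 0) the bracket reduces to beta**3 * f_alpha(beta)
    beta, alpha = random.uniform(1, 4), random.uniform(0, 1)
    assert bracket(beta, alpha, 0.0) == beta**3 * (beta * (beta - 3)**2 - 4 * alpha**2)
```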
The cubic parabola $f_\alpha(\beta)$ has three roots $\beta_n$
($n = 1,2,3$) given by Cardano's formulas:
\beq
\beta_n = 2+2\cos\left(\frac{\chi-2\pi(3-n)}{3}\right),
\eeq
with $\chi$ defined by
\[
\cos\chi = 2\alpha^2-1.
\]
In the case $0 < \alpha < 1$ all roots are real and distinct, such
that $\beta_1 < \beta_2 < \beta_3$; if $\alpha = 0$, then $\beta_1 = 0$
and $\beta_2 = \beta_3 = 3$; if $\alpha = 1$, then $\beta_1 = \beta_2 = 1$
and $\beta_3 = 4$ (see Fig.\,\ref{fig3}).
Formally, one can also consider $\alpha > 1$ (i.e.,
$a > m$); in this case $\beta_1$ and $\beta_2$ become complex, and
$\beta_3$ is the only real root. In general, the solution of the inequality
\Ref{kash-ineqq} reads
\[
\beta\in(\beta_1,\beta_2)\cup(\beta_3,\infty).
\]
In addition, let us recall that it is assumed $b > r_{+}$, hence
$\beta > \beta_{+} = 1 + \sqrt{1-\alpha^2}$. One can check that
$\beta_1 < \beta_+ < \beta_2$, and so we finally have
\beq\label{kash-interval}
\beta \in (\beta_+, \beta_2) \cup (\beta_3,\infty).
\eeq
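The chain of inequalities $\beta_1<\beta_+<\beta_2<\beta_3$, as well as the fact (used in the discussion of the ergosphere below) that $\beta_2<2$ precisely for $\alpha>2^{-1/2}$, can be confirmed by a quick numerical scan; the sketch below is our own check:

```python
from math import acos, cos, pi, sqrt

def roots(alpha):
    # beta_1 < beta_2 < beta_3, roots of beta*(beta - 3)**2 = 4*alpha**2
    chi = acos(2 * alpha**2 - 1)
    return [2 + 2 * cos((chi - 2 * pi * (3 - n)) / 3) for n in (1, 2, 3)]

for k in range(1, 100):                    # scan 0 < alpha < 1
    alpha = k / 100.0
    b1, b2, b3 = roots(alpha)
    beta_plus = 1 + sqrt(1 - alpha**2)
    assert b1 < beta_plus < b2 < b3        # ordering behind Eq. (kash-interval)
    assert (b2 < 2) == (alpha > 2**-0.5)   # beta_2 drops below the ergosphere size at alpha = 1/sqrt(2)
```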
{
\begin{figure}[ht]
\centerline{\includegraphics[scale=0.35]{kashargin3.eps}}
\caption{Graphs of roots $\beta_n$ vs. $\alpha$. Thick, middle,
and thin lines denote $\beta_1$, $\beta_2$, and $\beta_3$,
respectively. The dot-dashed line indicates the event horizon
$\beta_+=1+\sqrt{1-\alpha^2}$. The dashed line shows the maximal
size of the ergosphere $\beta_0^{max}=2$ ($\vartheta=\pi/2$). The lines
for $\beta_2$ and $\beta_0^{max}$ intersect at
$\alpha=2^{-1/2}$. \label{fig3}}
\end{figure}
Thus, admissible values of $\beta$ belong to two nonintersecting
intervals $I_1=(\beta_+,\beta_2)$ and $I_2=(\beta_3,\infty)$. Note
that they could touch only if $\alpha=0$ (no rotation),
when $\beta_2=\beta_3=3$. In this case one obtains a static,
spherically symmetric thin-shell wormhole with throat radius
$\beta=3$, or $b=3m$, lying on the boundary between
$I_1$ and $I_2$ \cite{kash-Vis89b}.
It is also worth emphasizing that an admissible value of $\beta$
can be less than the maximal size of the ergosphere, $\beta_0^{max}=2$
($\vartheta=\pi/2$). Indeed, in the case $\beta\in(\beta_+,\beta_2)$ one
may always choose $\beta_+<\beta<\min(\beta_0^{max},\beta_2)$ (see
Fig.\,\ref{fig3}). Moreover, for $\alpha>2^{-1/2}$ one has
$\beta_2<2$; hence all values of $\beta$ in the interval
$(\beta_+,\beta_2)$ are less than $\beta_0^{max}$.
}
\subsection*{Acknowledgments}
The authors are deeply grateful to Kirill Bronnikov for a valuable
discussion.
The work was partially supported by the Russian Foundation for Basic
Research grants No 08-02-00325, 08-02-91307.
}
\section{Introduction}
Supersymmetric gauge theories with matter fields generally have a large
degeneracy of inequivalent vacua. The space of vacua, or `moduli
space', can be readily determined at the classical level. After
quantization the problem of determining the moduli space is more
difficult because asymptotically free theories can be strongly
coupled. Seiberg has studied the phase structure of SUSY QCD with
$N_c$ colors and $N_f$ flavors\cite{seiberg1}. It was known that the
theory has runaway vacua\cite{ADS1} for \(N_{f} < N_{c}\). Seiberg
argued that for \(N_{f} = N_{c}\) the moduli space (of vacua) is
modified by quantum effects while for \(N_{f} = N_{c}+1\) the theory
displays confinement without chiral symmetry breaking. For \(N_{f} >
N_{c}+1\) he found dual descriptions in which the magnetic dual
coupling is weak when the electric one is strong and vice versa.
Following Seiberg many have studied a number of specific SUSY gauge
theories. Intriligator and Pouliot repeated Seiberg's analysis for
$Sp(N_c)$ gauge groups with $2N_f$ flavors (matter fields in the
fundamental representation)\cite{keni1}. Pouliot\cite{pouliot} and
Trivedi and Poppitz\cite{trivedi} studied $SU(N)$ theories with an
antisymmetric tensor field and Pouliot and Strassler\cite{pouliot2}
with a symmetric tensor. Many other examples and references can be
found in reviews\cite{reviews}.
In those investigations the emphasis was on finding the behavior of a
particular theory, or class of theories. Csaki, Schmaltz and Skiba
took a different approach\cite{CSS}. They attempted to find all
theories that display a particular effect. To this end they define
``s-confinement'' as a generalization of confinement without chiral
symmetry breaking as obtained by Seiberg for SUSY QCD with
$N_f=N_c+1$. They proceed to find all SUSY gauge theories based on a
simple gauge group that display s-confinement.
In this paper we take a similar track. We begin a study of all SUSY
gauge theories with a quantum modified moduli space. We determine all
theories based on a simple $SU(N)$ or $Sp(N)$ gauge group with a
quantum modified moduli space. We have not attempted to study
exceptional or orthogonal gauge groups. Theories with a modified
moduli space are of interest per se. The quantum modification is
poorly understood, inferred only from consistency conditions. These
theories can be used to fabricate models of dynamical SUSY breaking
\cite{IT}.
One may describe the classical moduli space in terms
of gauge invariant composite operators. The moduli is the space of
values these operators may take, modulo algebraic constraints. At
the quantum level the description of the moduli space is still in
terms of these operators. The modification of the moduli space is to
be found in a modified algebraic constraint. It is therefore useful to
know, a priori, how many constraints one must have (given a choice of
composite operators to describe the moduli). We derive a simple
formula for the dimension of the moduli space which then gives us the
number of required constraints.
The constraint specifying the moduli may be either invariant or
covariant under the non-anomalous global symmetries of the theory. In
the former case the quantum moduli space differs from the classical in
that the origin has been smoothly excised. But when the
constraint is covariant the origin remains in the quantum moduli
space. In order for the 't~Hooft anomaly condition to be satisfied at
the origin, one mode must be excluded, and this can be implemented in
two distinct ways. The constraint can be used to express one mode in
terms of the rest, and therefore this mode does not contribute to the
anomaly. Alternatively one can implement the constraint with a
Lagrange multiplier in a superpotential. In this way we find that one
of the constrained modes, classically massless, becomes
massive. Integrating out this mode leaves unconstrained the rest of
the modes. The only quantum effect has been to pick which mode to
eliminate by the constraint, save for an interesting subtlety. Going
to infinity in moduli space along a particular direction we find a new
branch of moduli space. On this branch the global $U(1)_R$ symmetry is
spontaneously broken. A similar situation has been found for theories
with branches in a Coulomb phase\cite{SW1,SW2,IS}, but with the
obvious distinction that in the theories we consider there is no local
symmetry on the branch.
The methods we use are similar to those of Csaki {\it et al}. They
used a condition on a certain sum of indices of the representations
for the particle and gauge contents. This condition significantly
reduced the number of all possible theories. The number of theories
was further reduced by studying the flow to other theories with a
phase structure incompatible with s-confinement. The remaining theories
were checked one by one to be in the s-confining phase.
An index condition can also be used to classify theories with a
quantum modified moduli space. With the help of a generalized flow, we
not only check the phase structure of our potentially interesting
theories, but also use it to determine how the gauge invariant
operators and the constraints flow from one theory to another. In
this way we can determine the quantum modified constraints explicitly.
One could also use our generalized flow to determine explicitly the
precise form of the constraints in the s-confining theories considered
by Csaki {\it et al}.
The paper is organized as follows. In Sect.~\ref{sec:QQMs} we classify
theories according to whether the algebraic constraint specifying the
moduli is invariant or covariant under global symmetries and discuss
the correspondingly different structure of the moduli space. In
Sect.~\ref{index} we review the index condition for the s-confining
theories and for theories with quantum modified moduli. We also
explain there the additional conditions from the flow of the theories
and give some examples of gauge invariant operator flow. Our formula
for the number of constraints is explained in Sect.~\ref{dim-moduli}.
The methods introduced are then put to work in an explicit example in
Sect.~\ref{example}. The results for the $SU(N)$ and $Sp(N)$ theories
are presented in Sect.~\ref{all-Qs}. We list all the theories obeying
the index condition along with their phase structure. For the theories
not yet discussed in the literature, we write down the gauge invariant
operators and the exact constraint. We come to a conclusion in
Sect.~\ref{conclusions}.
In the appendix we list all the gauge invariant operators with their
precise index structure. This is important because there is no unique
choice of operators. Another choice will generally change the precise
form of the constraint.
It may appear that the qualitative results obtained here do not
require a precise determination of the form of the
constraints. However, care must be exercised in not choosing redundant
operators. That is, some of the operators used to describe the
classical moduli space may not be independent even though it may appear
so a priori. We have found that deriving quantitatively precise
constraints guards against such errors. Moreover, we believe it will
be of general use to both model builders and field theorists to have a
complete tabulation of the precise constraints. We have undertaken the
task here.
\section{Theories with Quantum Modified Moduli}\label{sec:QQMs}
The theories with quantum modified moduli (QMM) generalize Seiberg's
SUSY QCD with \( N_{c} = N_{f} \). QMM theories are confining. The
moduli space is described by a set of composite gauge invariant
operators. A generic feature of QMM theories is that the dimension of
the vacuum is smaller than the number of independent gauge invariant
`composite' operators. Both classically and quantum mechanically the
moduli space is specified by algebraic constraints among the composite
operators. In theories with QMM the quantum and classical constraints
differ.
Returning to our prototype, supersymmetric QCD with \( N_{c} =
N_{f}\equiv N \), we recall that the moduli is described by a matrix
valued composite $M_{ij}$ transforming as $(N,\bar N)$ under the
global symmetry group $SU(N)\times SU(N)$, and two composites $B$ and
$\tilde B$ transforming as singlets. The classical moduli is the space
of these composites, modulo the constraint \(\det(M)-B\tilde B=0\). At
the quantum level the origin is excised from the moduli space; the QMM
is described by the modified constraint \(\det(M)-B\tilde
B=\Lambda^{2N}\). Notice that the constraints remain invariant under
the global symmetry group.
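Classically this constraint is an algebraic identity of the composites: writing the squark expectation values as $N\times N$ matrices $Q^a{}_i$ and $\bar Q_a{}^j$ (color $\times$ flavor), one has $M=Q^T\bar Q$, $B=\det Q$, $\tilde B=\det\bar Q$, and hence $\det M = B\tilde B$. The following sketch (our own illustration; the matrix notation is ours) verifies this numerically:

```python
import random

def det(m):
    """Determinant by Laplace expansion (fine for the small matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

random.seed(0)
N = 3                                   # N_c = N_f = N
Q  = [[random.randint(-3, 3) for _ in range(N)] for _ in range(N)]   # Q^a_i
Qb = [[random.randint(-3, 3) for _ in range(N)] for _ in range(N)]   # Qbar_a^j

# mesons M_ij = sum_a Q^a_i Qbar_a^j; baryons B = det Q, Bbar = det Qbar
M = [[sum(Q[a][i] * Qb[a][j] for a in range(N)) for j in range(N)] for i in range(N)]
assert det(M) == det(Q) * det(Qb)       # classical constraint det M - B*Bbar = 0
```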
In this example the origin of moduli space was taken out. This is not
generic for quantum modified theories. When the classical constraint
$F(\phi_i)=0$ for composites $\phi_i$ is covariant but not invariant
under the global symmetries, the quantum modification cannot be simply
replacing the right hand side by a non-vanishing constant. This would
break the global symmetries. Instead, the right hand side will turn
out to be of the form $\Lambda^p\phi_k$, where $\phi_k$ is a composite
with the right transformation properties under the global symmetry
group and the power $p$ is governed by dimensional analysis.
Theories in which the constraint is covariant (c-QMM's) have different
physics than those with invariant constraints (i-QMM's). In c-QMM's
the particle corresponding to the composite $\phi_k$ that appears in
the quantum modification becomes massive. The description of the
moduli space should not include $\phi_k$. In contrast, in i-QMM's one
is free to solve the constraint for any one composite in terms of the
others.
To see this, introduce a Lagrange multiplier chiral superfield
$\lambda$ and use it to enforce the constraint by means of a
superpotential
\begin{equation}
\label{eq:Wi-QMM}
W=\lambda(F(\phi_i)-\Lambda^p)
\end{equation}
for i-QMM's and
\begin{equation}
\label{eq:Wc-QMM}
W=\lambda(F(\phi_i)-\Lambda^p\phi_k)
\end{equation}
for c-QMM's. In Eq.~(\ref{eq:Wi-QMM}) the Lagrange multiplier simply
enforces the constraint $F(\phi_i)=\Lambda^p$. But in Eq.~(\ref{eq:Wc-QMM}) the
Lagrange multiplier plays a dynamical physical role: it pairs up with
the composite $\phi_k$ into massive states. Integrating out these
massive states leaves a theory with $\phi_k$ (and $\lambda$) excluded
and a vanishing superpotential.
We believe this to be the correct realization of the constraint for
the case of theories with c-QMM. The obvious alternative is to apply
the constraint $F(\phi_i)=\Lambda^p\phi_k$ directly to the description
of the moduli. There are physical distinctions between these two
approaches, as seen in the paragraph below. Our belief in the
Lagrange multiplier method is supported by the following
argument. As discussed below, s-confining theories flow into theories
with i-QMM's. These have constraints implemented by lagrange
multipliers which can be identified with modes of the parent
s-confining theories.
Thus the field $\lambda$ must be considered a dynamical field. And this
leads to a surprising modification of the c-QMM. There is a branch
parametrized by $\lambda$ itself, with the other fields determined
from
\begin{eqnarray}
\lambda\frac{\partial F}{\partial\phi_{i}} &=& 0\qquad (i\ne k)\label{eq:dw1}\\
\lambda\left(\frac{\partial F}{\partial\phi_{ k}}-
\Lambda^p\right)&=&0\label{eq:dw2}\\
F(\phi_i)-\Lambda^p\phi_k&=&0\label{eq:dw3}
\end{eqnarray}
To solve these for arbitrary $\lambda$ generally requires that one of
the $\phi_i$ tend to infinity as some of the others approach the
origin. This may seem bizarre, but we know of no reason why such
solutions should be excluded. On this moduli subspace the $U(1)_R$
symmetry is broken. In addition, if $\phi_k$ carries any other
non-anomalous global symmetry then $\lambda$ must carry the opposite
charge and this symmetry is also broken on this branch. The two real
scalar components of $\lambda$ can be understood as the corresponding
goldstone bosons.
\section{Index Conditions and Flows}\label{index}
\subsection{The Index Condition}\label{sub-index}
\subsubsection{The Index Condition for s-confining theories}\label{index-s}
Csaki, Schmaltz and Skiba introduced ``smooth confinement without
chiral symmetry breaking and with a non-vanishing confining
superpotential'', or ``s-confinement'' for short, as a generalization
of SUSY QCD with \(N_{f} = N_{c} + 1\). It is defined as follows. An
s-confining theory must admit a description in terms of gauge
invariant composite operators everywhere on the moduli space. The
infrared effective theory must have a smooth superpotential, ie,
polynomial in the gauge invariant operators. The origin of the
classical moduli space must also be a vacuum of the quantum moduli
space. The definition excludes theories which admit a Coulomb phase
somewhere on the moduli space and theories which have boundaries in
the moduli space between distinct Higgs and confinement phases.
Consequently the 't~Hooft anomalies should match between the short and
long distance descriptions everywhere in the moduli space, and this
was found to be true by explicit computation.
To explain the index condition for s-confining theories we need to
introduce some notation. Consider a supersymmetric theory with gauge
group $G$ and $N$ chiral matter multiplets, $Q_1,\ldots,Q_N$. In the
absence of a superpotential there are $N$ global $U(1)$ symmetries,
one for each matter field, corresponding to the separate flavor numbers.
There is also a $U(1)$ R-symmetry. All these symmetries are broken at
the quantum level by anomalies, but one may combine the $U(1)$
R-symmetry with each of the global flavor numbers to form $N$
conserved R-symmetries, $U(1)_{R_1},\ldots,U(1)_{R_N}$ with the
following charge assignments:
\begin{displaymath}
\begin{array}{c|c|c|c|c}
&U(1)_{R_{1}} & U(1)_{R_{2}} & \cdots&U(1)_{R_{N}} \\ \hline
Q_{1} & a_{1} & 0 & \cdots& 0 \\
Q_{2} & 0 & a_{2} & \cdots& 0 \\
. & . & . & \cdots& . \\
. & . & . & \cdots& . \\
. & . & . & \cdots& . \\
Q_{N} & 0 & 0 & \cdots& a_{N} . \\
\end{array}
\end{displaymath}
The $R$-charges $a_i$ are fixed by requiring the vanishing of the
gauge anomaly. Denoting by $\mu_G$ and $\mu_i$ the indices of
the adjoint and of the representation of $Q_i$, normalized to unity
for the fundamental representation, one finds:
\[a_{i} = ( \sum_{j=1}^{N} \mu_{j} - \mu_{G})/\mu_{i} .\]
Now, s-confining theories must admit a smooth superpotential. It must
carry 2 units of every one of the $R$ charges. Since only the $i$-th
field carries $R_i$ charge, it must enter the superpotential as
$Q_i^{2/a_i}$. The superpotential must be a combination of terms
of the form
\[ \Lambda^3 \prod_{i=1}^{N} (Q_{i}/\Lambda)^{2\mu_{i}/ (
\sum_{j=1}^{N} \mu_{j} - \mu_{G})} .\]
$\Lambda$, a dynamical mass scale, is introduced by dimensional
analysis. If there is at least one chiral superfield transforming as
the fundamental (or antifundamental) of the gauge group, which is
always the case in Csaki {\it et al}, then the smoothness of the
superpotential requires \( \sum_{j=1}^{N} \mu_{j} - \mu_{G} = 1~~{\rm
or}~~2 \). Csaki {\it et al\/} argue that, in fact, only the second solution
is available. For $Sp(N)$ theories this can be seen from Witten's
anomaly, which requires an even number of fundamentals. The index
condition for s-confinement for theories with at least one fundamental
is therefore\cite{CSS}
\[\sum_{j=1}^{N} \mu_{j} - \mu_{G} = 2 .\]
If the theory has no matter fields transforming as the fundamental
(or antifundamental) representation the index condition is relaxed:
\( \sum_{j=1}^{N} \mu_{j} - \mu_{G} \) or
\( (\sum_{j=1}^{N} \mu_{j} - \mu_{G})/2 \)
must be a common divisor of all the $\mu_i$.
\subsubsection{The Index Condition for Theories with Quantum Modified Moduli}
The classical constraint between the composites \(\phi_{i} \) is a non
trivial polynomial,
\[ \sum_{n=1}^{m} (\prod_{i=1}^{k_{n}} \phi_{i})_{n} = 0 .\]
The quantum modification generically is of the form
\equ{\label{Qconst}
\sum_{n=1}^{m} (\prod_{i=1}^{k_{n}} \phi_{i} )_{n}
= \prod_{i} \phi_{i} \Lambda^{p} .}
Notice that we have allowed for a product of composites on the right
hand side. In all the cases we study we find, however, at most one
composite on the right hand side. The exact form of the left hand
side, \(\sum_{n=1}^{m} (\prod_{i=1}^{k_{n}} \phi_{i})_{n} \), is
determined by the classical limit.
The index condition now follows from requiring that the constraint be
covariant under global $U(1)$ R-symmetries. As in our review of
s-confining theories we introduce an anomaly free $U(1)$ R-symmetry
for each chiral superfield. Because the left and right sides of the
constraint in Eq.~(\ref{Qconst}) have different numbers of composites,
at least one of the R-charges must vanish. For this
we must have the index condition\cite{CSS}
\equ{\label{index-const}\sum_{i=1}^{n} \mu_{i} - \mu_{G} = 0 .}
In an alternative derivation of the index condition we adopt the
point of view that \(\Lambda^{b_{0}}\) is a background chiral
superfield. Now consider the $R$ symmetry with all the $R$ charges of
the chiral superfields set to vanish. The assigned R-charge of
\(\Lambda^{b_{0}}\) is given by the anomaly,
\[Q_{R}(\Lambda^{b_{0}})=\sum_{i=1}^{n} \mu_{i} - \mu_{G} .\]
The left side of our constraint, however, has an R-charge of
zero. Therefore, \(\Lambda^{b_{0}}\) has an R-charge of zero and we
have again Eq.~(\ref{index-const}).
To find all QMM theories one must begin by classifying all theories
that satisfy Eq.~(\ref{index-const}). Since the fundamental
representation has $\mu_{\rm fund}=1$, adding a pair of chiral
superfields, one in the fundamental and one in the antifundamental
representations, to a QMM theory gives a theory with \(\sum_{i=1}^{n}
\mu_{i} - \mu_{G} = 2 \). These are candidates for s-confinement and
were classified by Csaki {\it et al}. Therefore all theories
satisfying the index condition (\ref{index-const}) can be obtained from
the list of s-confinement candidates of Csaki {\it et al} by removing
a fundamental and an antifundamental. Clearly removing a pair
ensures that all the gauge anomalies remain
absent. Section~\ref{all-Qs} contains tables listing the complete set
of QMM candidates based on $SU$ and $Sp$ gauge groups.
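The index bookkeeping behind these conditions is elementary. With indices normalized so that $\mu_{\rm fund}=1$, the standard values are $\mu_{\rm adj}=2N$ and $\mu_{\Yasymm}=N-2$ for $SU(N)$, and $\mu_{\rm adj}=2N+2$ and $\mu_{\Yasymm}=2N-2$ for $Sp(2N)$ (fundamental of dimension $2N$). A small sketch (our own notation) checking a few of the candidates against the two conditions:

```python
# Dynkin indices normalized to mu(fundamental) = 1
def su(N):
    return {"adj": 2 * N, "fund": 1, "antifund": 1, "asym": N - 2, "sym": N + 2}

def sp(dimF):   # Sp group whose fundamental has (even) dimension dimF
    return {"adj": dimF + 2, "fund": 1, "asym": dimF - 2}

def index_sum(group, matter):
    # sum_i mu_i - mu_G: equals 0 for QMM candidates, 2 for s-confinement candidates
    return sum(n * group[rep] for rep, n in matter) - group["adj"]

for N in range(3, 10):
    # SUSY QCD with N_f = N_c has a quantum modified moduli space
    assert index_sum(su(N), [("fund", N), ("antifund", N)]) == 0
    # SUSY QCD with N_f = N_c + 1 is s-confining
    assert index_sum(su(N), [("fund", N + 1), ("antifund", N + 1)]) == 2
    # SU(N) with an antisymmetric tensor, (N-1) antifundamentals and 3 fundamentals
    assert index_sum(su(N), [("asym", 1), ("antifund", N - 1), ("fund", 3)]) == 0
    # Sp(2N) with a traceless antisymmetric tensor and 4 fundamentals
    assert index_sum(sp(2 * N), [("asym", 1), ("fund", 4)]) == 0
```

Removing a fundamental and an antifundamental lowers the sum by 2, which is the statement used above to obtain the QMM candidates from the s-confinement list.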
\subsection{The Flow}
\subsubsection{The Flow of the Theories}
The index condition is only a necessary condition. To find out whether a
candidate theory actually has a QMM, further checks are
needed. As the next step to sort out all QMM theories we
consider points in the classical moduli space where the gauge group of
our candidate theory is broken. The gauge fields which correspond to
the broken generators acquire a mass proportional to the vacuum
expectation value of the Higgs field. These massive gauge superfields
pair up with chiral superfields which become massive through the Higgs
mechanism as well. Together they form a massive supermultiplet. We
integrate out these heavy degrees of freedom. The new theory, which is
an effective theory of the original `UV' theory, should be in a phase
consistent with the UV theory being in a quantum modified phase. We
refer to this as `the flow' of the UV theory to an effective
theory. If the theory flows to a theory in a Coulomb phase we say that
the theory has a Coulomb branch, not a QMM. By studying the flow we
can, therefore, rule out quite a few theories which fulfill the index
condition.
It is useful to tabulate the manner in which theories may flow. Below
we list the gauge groups together with their particle content. The
latter is contained in square brackets and is represented by the Young
tableaux of the corresponding representation, with a possible
multiplier when there are more than one field for that
representation. We don't list any gauge singlets that may remain in
the effective theory. These are not all the possible flow
diagrams. They were, however, sufficient for our classification work.
\begin{eqnarray}
\label{flow1}
SU(N) [\; N (\Yfund + \overline{\Yfund}) \;] \longrightarrow SU(N-1) [\; (N-1)
(\Yfund + \overline{\Yfund}) \;]
\end{eqnarray}
\begin{eqnarray}
\label{flow2}
SU(N) [\;\Yasymm + (N-1)\, \overline{\Yfund} + 3\, \Yfund \;] &
\longrightarrow & SU(N-1) [\; \Yasymm + (N-2)\, \overline{\Yfund} +
3\, \Yfund \;]\nonumber
\\ \downarrow\qquad\qquad & & \\
Sp(N) [\; (N+2)\, \Yfund \;] &
\longrightarrow & Sp(N-2) [\; (N)\, \Yfund \;]\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{flow3}
SU(N) [\;\Yasymm + \overline{\Yasymm} + 2 (\Yfund +
\overline{\Yfund}) \;]
& \longrightarrow &
SU(N-1) [\;\Yasymm +
\overline{\Yasymm} + 2 (\Yfund + \overline{\Yfund}) \;] \nonumber\\
\downarrow\qquad\qquad & &
\\ Sp(2N)[\; \Yasymm +4\, \Yfund\;] &&\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{flow4}
SU(N) [\; {\rm Adj} \;] \longrightarrow \mbox{Coulomb branch}
\end{eqnarray}
\begin{eqnarray}
\label{flow5}
SU(6) [\; \Ythreea + 3 (\Yfund + \overline{\Yfund}) \;] &
\longrightarrow & SU(3) \times SU(3) [\; 3( (1,\overline{\Yfund}) +
(\overline{\Yfund} ,1) +(1,\Yfund) + (\Yfund{},1) ) \;] \nonumber\\
\downarrow\qquad\qquad & &\nonumber \\
SU(5) [\; 2\, \Yasymm + 1\, \Yfund + 3\, \overline{\Yfund} \;] &
\longrightarrow & Sp(4) [\; (\Yasymm + 4 \Yfund ) \;] \\
\downarrow\qquad\qquad & &\nonumber \\
SU(4) [\; 2\, \Yasymm + 2 (\Yfund + \overline{\Yfund}) \;] &&
(\mbox{This is a special case of } SU(N) [\; \Yasymm +
\overline{\Yasymm} + 2 (\Yfund + \overline{\Yfund}) \;].)\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{flow6}
SU(4) [\; 3\, \Yasymm + 1 (\Yfund + \overline{\Yfund}) \;] &
\longrightarrow & Sp(4) [\; 2\, \Yasymm +2\, \Yfund \;]
\longrightarrow (SU(2) \times SU(2)) [\; (\Yfund,1) + (1,\Yfund) +
(\Yfund,\Yfund) \;]\nonumber \\
\downarrow\qquad\qquad & & \\
SU(3) [\; 3 (\Yfund +
\overline{\Yfund}) \;] & &\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{flow7}
SU(4) [\; 4\, \Yasymm \;] \longrightarrow Sp(4) [\; 3\, \Yasymm \;]
\longrightarrow \mbox{Coulomb branch}
\end{eqnarray}
\begin{eqnarray}
\label{flow8}
SU(5) [\; 2\, \Yasymm + \overline{\Yasymm} + 1\, \overline{\Yfund} \;]
\longrightarrow Sp(4) [\; 2\, \Yasymm +2\, \Yfund \;] \longrightarrow
(SU(2) \times SU(2)) [\; (\Yfund,1) + (1,\Yfund) + (\Yfund,\Yfund) \;]
\end{eqnarray}
\begin{eqnarray}
\label{flow9}
SU(6) [\; 2\, \Ythreea \;] \longrightarrow SU(3) \times SU(3)[\;
((\Yfund,\overline{\Yfund}) + (\overline{\Yfund},\Yfund)) \;] \longrightarrow
\mbox{Coulomb branch}
\end{eqnarray}
\begin{eqnarray}
\label{flow10}
SU(7) [\; \Ythreea + 4\, \overline{\Yfund} + 2\, \Yfund \;] &
\longrightarrow & SU(3) \times SU(3) [\; (3 (1,\overline{\Yfund}) +
(\overline{\Yfund},1)) + (\Yfund,\Yfund) \;]\nonumber \\
\downarrow\qquad\qquad & & \nonumber\\
SU(6) [\; \Ythreea + \Yasymm + 2\, \overline{\Yfund} \;] &
\longrightarrow & SU(3) \times SU(3) [\; (3 (1,\overline{\Yfund}) +
(\overline{\Yfund},1)) + (\Yfund,\Yfund) \;]\nonumber \\
\downarrow\qquad\qquad & & \\
Sp(6) [\; \Ythreea +3\,\Yfund \;] & \longrightarrow & Sp(4) [\; 2\,
\Yasymm +2\, \Yfund \;] \longrightarrow (SU(2) \times SU(2))[\;
(\Yfund,1) + (1,\Yfund) + (\Yfund,\Yfund) \;]\nonumber \\
\downarrow\qquad\qquad & &\nonumber \\
SU(3) [\; 3 (\Yfund + \overline{\Yfund}) \;] & &\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{flow11}
Sp(2N) [\; \Ysymm ={\rm Adj} \;] \longrightarrow \mbox{Coulomb branch}.
\end{eqnarray}
\subsubsection{Flow of Operators}
The flow is useful in determining the quantum modified constraints
precisely. For example given a classical constraint, one can
immediately write a putative quantum modified constraints as in
Eqs.~(\ref{eq:Wi-QMM}) or~(\ref{eq:Wc-QMM}). The precise coefficient is
then determined by flowing to a theory with exactly known
constraint, such as SUSY QCD with $N_f=N_c$.
This works because the flow maps not just theories but also specific
operators between the UV and effective theories. This is very useful
in the determination of the classical constraints too. Given a
classical constraint in a UV theory one can generally determine the
constraint in any of its effective theories by following the flow. It
is not so obvious that one may infer the constraint of a UV theory if
the constraints of its effective theories are known. In fact, in
practice one finds that for many cases one needs only the constraints
of one of the effective theories. We found this reverse flow procedure
of central importance in our investigations of the more complicated
theories.
For example, one can start from the known theory \(SU(3) [\;3 (\Yfund
+ \overline{\Yfund})\;]\) and map the gauge invariant operators up to
\(SU(4) [\;3\, \Yasymm + 1 (\Yfund + \overline{\Yfund})\;] \). Then mapping
down to \(Sp(4) [\;2\, \Yasymm +2\, \Yfund\;]\) is possible. One can
then determine how the operators are mapped from one theory to the
next. For example,
\[ SU(4) [Q\bar Q] \rightarrow Sp(4) [Q_{1}Q_{2}] \]
where $Q\bar Q$ and $Q_{1}Q_{2}$ represent composites of the $SU(4)$
and $Sp(4)$ theories, respectively (if the notation is not
self-evident, it will be clarified in Section~\ref{all-Qs}). Thus one
can find all the constraints of these $SU(4)$ and $Sp(4)$
theories. The constraints are determined explicitly, that is, all the
numerical coefficients are fixed. One can obtain all the results for
the remaining theories with similar sequences of reverse and forward
flows.
Mapping the gauge invariant operators from a smaller theory to a
bigger theory may be problematic. The Higgs mechanism may map some of
the gauge invariant operators to singlets in the effective theory, or
may even render some of the gauge invariant operators of the UV theory
equal to zero. This happens when the theory is broken to a subspace of
the moduli space whose dimension is smaller by more than one. The
vanishing operators and the singlet operators then drop out, and all
the terms in which they appear are lost when the constraint is mapped
from the effective theory to the UV theory. This happened only twice
in our analysis, and in both cases we could still determine the
constraint by requiring covariance under the global symmetries.
Other useful examples of operator mapping are presented in the
sequences below, in which $Q$ and $\bar Q$ always stand for a
fundamental and anti-fundamental ($\Yfund$ and $\overline{\Yfund}$), and
$A$ and $B$ for a two and three index antisymmetric tensors
(\,$\Yasymm$ and $\Ythreea$\,), respectively. The composite operators
are denoted by their components, and a subscript ``anti'' is included
when only the antisymmetric part is included (the precise
description of the operators can be found in the appendix). The
mappings are
\[ SU(7) [B^{3}{\bar Q}^{3}Q_{anti} ]
\rightarrow SU(6) [(B^{2}A{\bar Q}^{2})_{anti} ]
\rightarrow Sp(6)
[BQ^{3}] \rightarrow Sp(4) [Q_{1}Q_{2}] \]
for the theory flow in (\ref{flow10}), and
\[ SU(5) [A^{2}{\bar A}{\bar Q}^{2}] \rightarrow Sp(4) [Q_{1}Q_{2}] \]
for the flow in (\ref{flow8}).
\section{Dimension of the moduli space}
\label{dim-moduli}
We derive formulas for the dimension of the classical moduli
space. These formulas give a relation between the number of gauge
invariant operators and the number of constraints.
Because the SUSY lagrangian is invariant under the complexified gauge
group $G_{c}$, the moduli space $M_{0}$ is equal to
\[ M_{0} = F \| G_{c} .\]
Here $F$ is the space of all constant field configurations if there is
no superpotential, and the space of all extrema of the superpotential
otherwise. The equivalence relation between two
elements $\Phi$ and $\Phi_{0}$ of the same $G_{c}$ orbit is of the
generalized form $\lim_{i} g_{i} \Phi = \Phi_{0}$ with $g_{i} \in G_{c}$.
$F \| G_{c}$ can be described as an algebraic variety of all
gauge invariant holomorphic polynomials\cite{LT}. Therefore the dimension of
the vacuum is
\begin{equation}
\label{dimvacops}
\dim {\rm vacuum} =N_{\rm Ops} - N_{\rm Con}
\end{equation}
where $N_{\rm Ops}$ and $N_{\rm Con}$ are the number of independent
gauge invariant operators and constraints, respectively. There is
also a natural map $\pi$ between $F$ and $M_{0}$, which induces a map
between the tangent space of $F$ at a generic point $\phi \in F$ and
the tangent space at the point $\pi(\phi)$. The induced map is a
surjective homomorphism if $\phi$ is a point on the moduli space which
breaks the gauge group completely; its kernel is $G_{c}\phi$~\cite{GA}.
(That $\phi$ is on the moduli space implies, of course, that the
D-flatness condition is fulfilled.) It follows directly that:
\begin{equation}
\label{dimvac}
\dim {\rm vacuum} = \dim F - \dim G_c .
\end{equation}
Both s-confining theories and quantum modified theories have no
superpotential, and they always have points on the moduli space where
the gauge group is completely broken.
The two formulas for the dimension of the moduli space allow us to
calculate easily the difference between the number of gauge invariant
operators and constraints.
As an example consider $SU(5)$ with $(2\Yasymm + \Yfund +
3\overline{\Yfund})$ (example 4.1.3 in Ref.~\cite{CSS}). There are 18
gauge invariant operators, and the dimension of the moduli space, as
given by Eq.~(\ref{dimvac}), is 16, so that there must be {\it two}
constraints.
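This counting is simple enough to check mechanically. The snippet below is an
illustrative check of ours (not part of the original analysis); the operator
count of 18 is taken from Ref.~\cite{CSS}:

```python
# Illustrative check of the dimension counting: SU(5) with
# 2 antisymmetric tensors, 1 fundamental and 3 antifundamentals.
def dim_SU(N):          # dimension of the (complexified) gauge group SU(N)
    return N**2 - 1

def dim_fund(N):        # fundamental or antifundamental
    return N

def dim_asym2(N):       # two-index antisymmetric tensor
    return N * (N - 1) // 2

N = 5
dim_F = 2 * dim_asym2(N) + dim_fund(N) + 3 * dim_fund(N)   # = 40
dim_vacuum = dim_F - dim_SU(N)                             # Eq. (dimvac)

N_ops = 18                       # independent invariants, from Ref. [CSS]
N_con = N_ops - dim_vacuum       # Eq. (dimvacops) rearranged
print(dim_vacuum, N_con)         # 16 2
```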
The constraints are easily obtained by integrating out
$(Q_{2}\overline{Q}_{4})$ and
$(A^2Q_{1}Q_{2}\overline{Q}_{4})$ from the superpotential of the
corresponding s-confining theory, $ SU(5)$ with $(2\Yasymm + 2\Yfund +
4\overline{\Yfund})$ (here we are using the notation of
Ref.~\cite{CSS}). Alternatively one can use the operator flow between
$ SU(5)$ with $ (2\Yasymm + \Yfund + 3\overline{\Yfund})$ and $ SU(4)$
with $ (2\Yasymm + 2\Yfund + 2\overline{\Yfund})$ to map the two
constraints for the SU(4) theory to the constraints of the SU(5)
theory. One obtains two constraints, one is quantum modified and the
other is not. The corresponding superpotential is:
\[ W = \lambda
[(A^3\overline{Q})^2 (Q\overline{Q}) +
(A^3\overline{Q})(A^2Q)(A\overline{Q}^2) - \Lambda^{10}] + \mu
[(A^3\overline{Q}_{i})^{a}(A\overline{Q}^2_{jk})^{b} \epsilon^{ijk}
\epsilon_{ab}].\]
\section{An Example}\label{example}
Before giving our results we present an example in which we apply all
the tools presented above. Consider~\cite{CSS} an $SU(4)$ gauge theory with 3
antisymmetric tensors $A_{\alpha\beta}^i$, a fundamental $Q_\alpha$
and an antifundamental $\bar Q^\alpha$. Since $\mu_G=8$, $\mu_A=2$ and
$\mu_Q=\mu_{\bar Q}=1$ we see that the index condition,
\(\sum_{i=1}^{n} \mu_{i} - \mu_{G} = 0\), is satisfied. Adding an
additional $Q$, $\bar Q$, gives a theory with
\(\sum_{i=1}^{n} \mu_{i} - \mu_{G} = 2\) which is not s-confining.
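The index bookkeeping can likewise be automated. The following sketch is our
illustration, using Dynkin indices in the normalization where a fundamental
has index 1 (so $\mu_G = 2N$ for $SU(N)$); it reproduces the index excess for
the theories just discussed and for one more entry of our results:

```python
from math import comb

# Dynkin indices for SU(N) representations, normalized to mu(fund) = 1.
def mu_SU(N, rep):
    return {"fund": 1, "antifund": 1, "asym2": N - 2,
            "asym3": comb(N - 2, 2), "adj": 2 * N}[rep]

def index_excess_SU(N, reps):
    return sum(mu_SU(N, r) for r in reps) - mu_SU(N, "adj")

# SU(4) with 3 antisym + (Q + Qbar): excess 0, a quantum-modified candidate
print(index_excess_SU(4, ["asym2"] * 3 + ["fund", "antifund"]))      # 0
# adding one more (Q + Qbar) flavor raises the excess to 2
print(index_excess_SU(4, ["asym2"] * 3 + ["fund", "antifund"] * 2))  # 2
# SU(7) with a 3-index antisym + Q + 3 Qbar: excess 0 as well
print(index_excess_SU(7, ["asym3", "fund"] + ["antifund"] * 3))      # 0
```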
According to Eq.~(\ref{dimvac}) the dimension of the classical moduli
space is $3\times6+2\times4-15=11$. To determine the number of
constraints we need a choice of composites. Consider the obvious
choice
\begin{eqnarray}
(AA_{\rm sym})^{ij} & =& A_{\alpha \beta}^{i} A_{\gamma \delta}^{j}
\epsilon^{\alpha \beta \gamma \delta}\\
(AAQ{\bar Q})^{ij}& =& A_{\alpha \beta}^{i} A_{\gamma \delta}^{j}
Q_{\eta}{\bar Q}^{\alpha} \epsilon^{\beta \gamma \delta \eta}
\end{eqnarray}
It would seem that these 15 operators are sufficient to characterize
the 11 dimensional moduli space if four constraints are
imposed. However, subspaces of the moduli characterized by $A=0$ with
arbitrary $Q=\bar Q^\dagger$ are not properly parametrized by these
composites. We see that we need in addition
\begin{equation}
(Q{\bar Q}) = Q_{\alpha}{\bar Q}^{\alpha}
\end{equation}
This set of operators is not independent. One can verify that the part
of $(AAQ{\bar Q})$ symmetric under $i\leftrightarrow j$ is proportional
to $(Q{\bar Q})(AA_{\rm sym})$. We do not consider this a
constraint, for the relation involves $(AAQ{\bar Q}_{\rm sym})$
linearly: one should simply exclude this operator.
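Relations of this kind are easy to confirm numerically with random field
values. The sketch below is our illustration (not code from the paper); it
builds the composites by explicit $\epsilon$ contractions following the
definitions above, and asserts only normalization-independent statements,
namely the symmetry of $(AA_{\rm sym})$ and the proportionality just noted:

```python
from itertools import permutations
import numpy as np

# Levi-Civita symbol with four indices
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    eps4[p] = s

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4, 4))
A = A - A.transpose(0, 2, 1)                 # three antisymmetric tensors A^i
Q, Qb = rng.normal(size=4), rng.normal(size=4)

AA = np.einsum('iab,jcd,abcd->ij', A, A, eps4)                # (AA_sym)^{ij}
AAQQb = np.einsum('iab,jcd,e,a,bcde->ij', A, A, Q, Qb, eps4)  # (AAQQbar)^{ij}

# (AA_sym) is automatically symmetric, and the part of (AAQQbar) symmetric
# in i,j is proportional to (Q Qbar)(AA_sym), so it carries no new invariant:
sym = 0.5 * (AAQQb + AAQQb.T)
c = sym[0, 0] / ((Q @ Qb) * AA[0, 0])        # universal proportionality constant
print(np.allclose(AA, AA.T), np.allclose(sym, c * (Q @ Qb) * AA))  # True True
```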
What operators might we need, in addition to $(Q{\bar Q})$,
$(AA_{\rm sym})$ and
$(AAQ{\bar Q}_{\rm anti})$, to describe the moduli? To answer this we
flow to $SU(3)$, along directions of non-vanishing $Q=\bar Q^\dagger$,
as in Eq.~(\ref{flow6}). This theory is the familiar example analyzed by
Seiberg and has a classical constraint \(\det(M)-B\tilde B=0\)
involving baryons. However none of the operators above flow to these
baryons. To remedy this we include in our list
\begin{eqnarray*}
(AAA{\bar Q}{\bar Q}) &=&1/6 ({\bar Q}^{\alpha} A_{\alpha \beta}^{i}
{\bar Q}^{\gamma} A_{\gamma \delta}^{j} A_{\eta \iota}^{k}
\epsilon^{\beta \delta \eta \iota} \epsilon_{i j k}) \\
(AAAQQ) &=& 1/6 (A_{\alpha \beta}^{i} A_{\gamma \delta}^{j} A_{\eta \iota}^{k}
Q_{\kappa} Q_{\lambda} \epsilon^{\kappa \delta \eta \iota}
\epsilon^{\alpha \beta \gamma\lambda }\epsilon_{i j k} )
\end{eqnarray*}
which flow to $B$ and $\tilde B$.
The set of operators
$(Q{\bar Q})$, $(AA_{\rm sym})$, $(AAQ{\bar Q}_{\rm anti})$, $(AAA{\bar
Q}{\bar Q})$ and $ (AAAQQ)$ is what we list in
Sect.~\ref{su4}. With $N_{\rm Ops}=12$ Eq.~(\ref{dimvacops}) implies we
need one constraint. The constraint must flow to \(\det(M)-B\tilde
B=0\) in $SU(3)$. Now, $(Q{\bar Q})(AA_{\rm sym})+(AAQ{\bar Q}_{\rm
anti})$ flows to $M$ and $(AAA{\bar Q}{\bar Q})$ and $( AAAQQ)$ flow to $B$
and $\tilde B$. It follows that the classical constraint must be of
the form \( \det[(Q{\bar Q})(AA_{\rm sym})+(AAQ{\bar Q}_{\rm anti})]-
(AAA{\bar Q}{\bar Q})( AAAQQ)=0\), or by expanding the determinant and
keeping track of numerical constants
\begin{equation}
\label{su4-class-const}
1/6 (AA_{sym})^3 (Q{\bar Q})^2 + 4 (AA_{sym})(AAQ{\bar Q}_{anti})^2 +
64 (AAA{\bar Q}{\bar Q}) (AAAQQ) =0
\end{equation}
where
\begin{eqnarray*}
(AA_{sym})^3 & = & (AA_{sym})^{ij} (AA_{sym})^{kl} (AA_{sym})^{mn} \epsilon_{ikm}
\epsilon_{jln} \\
(AAQ{\bar Q}_{anti})^2 (AA_{sym}) & = & (AAQ{\bar Q})^{[ij]} (AAQ{\bar
Q})^{[kl]} (AA)^{mn} \epsilon_{i j m} \epsilon_{k l n}.
\end{eqnarray*}
It is now a simple exercise to verify this constraint (with the help
of symbolic manipulator programs).
To explore the quantum moduli we note, as above, that the 't~Hooft
anomaly matching conditions are satisfied everywhere except at the
origin which must therefore be excluded by modifying the classical
constraint. The theory has a non-anomalous global $U(1)$ symmetry
under which the fields $A$, $Q$ and $\bar Q$ transform with charges 1,
$-3$ and $-3$, respectively. The left hand side of the constraint in
Eq.~(\ref{su4-class-const}) transforms non-trivially, with charge
$-6$. This is an example of a c-QMM. The composite $(Q\bar Q)$ has charge
$-6$. It is straightforward to check that the constraint
\begin{equation}
\label{su4-quant-const}
1/6 (AA_{sym})^3 (Q{\bar Q})^2 + 4 (AA_{sym})(AAQ{\bar Q}_{anti})^2 +
64 (AAA{\bar Q}{\bar Q}) (AAAQQ) =\Lambda^{8}(Q{\bar Q})
\end{equation}
flows to the corresponding $SU(3)$ constraint.
This c-QMM constraint does not exclude the origin of the moduli space
where the 't~Hooft anomaly conditions are not satisfied. However, if the
constraint is implemented by a Lagrange multiplier, $\lambda$, via a
superpotential
\[
W=\lambda[
1/6 (AA_{sym})^3 (Q{\bar Q})^2 + 4 (AA_{sym})(AAQ{\bar Q}_{anti})^2 +
64 (AAA{\bar Q}{\bar Q}) (AAAQQ) - \Lambda^{8}(Q{\bar Q})],
\]
and $\lambda$ is interpreted as a dynamical field, then both
$\lambda$ and $Q{\bar Q}$ become massive. This removes one composite
from the spectrum and leaves the others unconstrained, and the
't~Hooft anomaly matching conditions are satisfied.
This superpotential exhibits a new, purely quantum mechanical, branch
of the moduli space. Consider directions on the moduli given by the
scalings $(AA_{sym})\sim\epsilon^{-1}$, $(Q{\bar Q})\sim\epsilon^3$,
$(AAQ{\bar Q}_{anti})\sim\epsilon^{1+x}$ (any $x>0$) and $(AAA{\bar
Q}{\bar Q})= (AAAQQ)=0$. These are in the moduli only if $\lambda=0$,
but in the limit $\epsilon\to0$ the moduli includes the branch
$\lambda\neq0$. Since $\lambda$ carries 2 units of $U(1)_R$, the
symmetry is spontaneously broken on this branch. Although
$(AA_{sym})\to\infty$, there remains at least an unbroken $SU(2)$
gauge group, which is strongly coupled in the neighborhood of this
branch. This suggests the interpretation of $\lambda$ as a
glueball superfield, and $\lambda\neq0$ as gaugino condensation.
\section{All Quantum Modified Theories}\label{all-Qs}
This section contains our results. In tables~\ref{SUtable}
and~\ref{SPtable} we list all gauge and Witten anomaly free theories
that satisfy the index constraint~(\ref{index-const}) for $SU$ and $Sp$
gauge groups, respectively. We give the gauge
group in the first column, the matter content in the second and state
whether the theory has a QMM or a Coulomb branch in the last column.
For all theories which are derived from an s-confining theory by
taking out a fundamental and antifundamental, the superpotential is
easily determined by integrating out a fundamental and an
antifundamental. This was done by Csaki {\it et al.}, and we do not
reproduce their results here.\footnote{However,
we do not agree with some of their results. For details see section
\ref{dim-moduli}. }
For the rest of the theories, those which only follow from theories
with \(\sum_{j=1}^{N} \mu_{j} - \mu_{G} = 2 \) that are not
s-confining, we use the flow to determine the classical constraint.
Next, in separate sub-sections, we give the precise results for those
theories which cannot be obtained from an s-confining
theory by integrating out a fundamental-antifundamental pair.
In each case we give a table. The upper part of each table lists the
chiral superfields, the representation they belong to under the gauge
group and finally their global symmetry properties. The second part of
the table shows the analogous information for the composite
operators. The composite operators are labeled by their component
fields. There is often more than one way to construct an invariant
operator from the given component fields. The precise construction
used is specified in the appendix. In some tables there is a third
part which introduces shorthand notation convenient for giving the
constraint. The explicit constraint is then given.
\subsection{The Quantum Modified $SU(N)$ Theories}
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
$SU(N)$ & $ N (\Yfund + \overline{\Yfund})$ & i-quantum modified \\
$SU(N)$ & $\Yasymm + (N-1)\, \overline{\Yfund} + 3\, \Yfund $
& i-quantum modified \\
$SU(N)$ & $\Yasymm + \overline{\Yasymm} + 2 (\Yfund +
\overline{\Yfund})$ & i-quantum modified \\
$SU(N)$ & $Adj $ & Coulomb branch \\ \hline
$SU(4)$ & $3\, \Yasymm + 1 (\Yfund + \overline{\Yfund})$ & c-quantum modified \\
$SU(4)$ & $ 4\, \Yasymm $ & Coulomb branch \\ \hline
$SU(5)$ & $ 2\, \Yasymm + 1\, \Yfund + 3\, \overline{\Yfund}$
& i-quantum modified \\
$SU(5)$ & $2\, \Yasymm + \overline{\Yasymm} + 1\,
\overline{\Yfund}$ & c-quantum modified \\ \hline
$SU(6)$ & $2\, \Yasymm + 4\, \overline{\Yfund}$ & i-quantum modified \\
$SU(6)$ & $\Ythreea + 3 (\Yfund + \overline{\Yfund})$ & i-quantum modified \\
$SU(6)$ & $\Ythreea + \Yasymm + 2\, \overline{\Yfund}$
& c-quantum modified \\
$SU(6)$ & $ 2\, \Ythreea $ & Coulomb branch \\ \hline
$SU(7)$ & $ \Ythreea + 3\, \overline{\Yfund} + 1\,
\Yfund$ & c-quantum modified \\ \hline
\end{tabular}
\end{center}
\caption{These are all $SU$ theories satisfying $\sum_j \mu_j -\mu_G =
0$ and free of gauge anomalies. We list the gauge group and
the field content of the theories in the first and second column. In
the third column, we indicate whether the theory has a quantum
modified moduli space or a Coulomb branch. The prefix ``i'' indicates
an invariant quantum modification and the prefix ``c'' a covariant
quantum modification.}
\label{SUtable}
\end{table}
\subsubsection{$SU(4)$ with $ 3 \protect\Yasymm +(\protect\Yfund +
\overline{\protect\Yfund})$}\label{su4}
\begin{displaymath}
\begin{array}{|l|c|cccc|}
\hline
& SU(4) & SU(3) & U(1)_{A} & U(1)_{B} & U(1)_{R} \\ \hline
A & \Yasymm & \Yfund & 0 & 1 & 0 \\
Q & \Yfund & 1 & 1 & -3 & 0 \\
{\bar Q} & \overline{\Yfund} & 1 & -1 & -3 & 0 \\ \hline
Q{\bar Q} & 1 & 1 & 0& -6 & 0 \\
AA_{sym} & 1 & \Ysymm & 0 & 2 & 0 \\
AAQ{\bar Q}_{anti}& 1 & \Yasymm & 0 & -4 & 0 \\
AAA{\bar Q}{\bar Q}& 1 & 1 & -2 & -6 & 0 \\
AAAQQ & 1 & 1 & 2 & -6 & 0 \\
\hline
\end{array}
\end{displaymath}
The constraint is:
\[1/6 (AA_{sym})^3 (Q{\bar Q})^2 + 4 (AA_{sym})(AAQ{\bar Q}_{anti})^2 +
64 (AAA{\bar Q}{\bar Q}) (AAAQQ) =\Lambda^{8}(Q{\bar Q}) \]
\subsubsection{$SU(5)$ with $2\, \protect\Yasymm +\overline{\protect\Yasymm} +
\overline{\protect\Yfund}$}\label{su5}
\begin{displaymath}
\begin{array}{|l|c|cccc|} \hline
& SU(5) & SU(2) & U(1)_{A} & U(1)_{B} & U(1)_{R} \\ \hline
A & \Yasymm & \Yfund & 1 & 0 & 0 \\
{\bar A} & \overline{\Yasymm} & 1 & -2 & 1 & 0 \\
{\bar Q} & \overline{\Yfund} & 1 & 0 & -3 & 0 \\ \hline
A{\bar A} & 1 & \Yfund & -1 & 1 & 0 \\
A^{2}{\bar A}^{2} & 1 & \Ysymm & -2 & 2 & 0 \\
{\bar A}^{2}{\bar Q} & 1 & 1 & -4 & -1 & 0 \\
A^{3}{\bar Q} & 1 & \Yfund & 3 & -3 & 0 \\
A^{4}{\bar A}{\bar Q} & 1 & \Ysymm & 2 &-2 & 0 \\
A^{2}{\bar A}{\bar Q}^{2} & 1 & 1 & 0 & -5 & 0 \\ \hline
f_{1} = [(A^{2}{\bar A}^{2}) (A^{4}{\bar A}{\bar Q})]_{flavorsym}& 1& & & &\\
f_{2} = [(A^{4}{\bar A}{\bar Q})^2]_{flavorsym}& 1& & & &\\
f_{3} = [(A^{4}{\bar A}{\bar Q})^2 (A{\bar A})^2]_{flavorsym}& 1& & & &\\
f_{4} = [(A^{2}{\bar A}^{2}) (A{\bar A}) (A^{3}{\bar Q})]_{flavorsym}& 1& & & &\\
f_{5} = [(A{\bar A}) (A^{3}{\bar Q})]_{flavorsym}& 1& & & &\\
f_{6} = [(A^{2}{\bar A}^{2}) (A^{3}{\bar Q})^2]_{flavorsym}& 1& & & &\\
f_{7} = [(A^{4}{\bar A}{\bar Q}) (A{\bar A}) (A^{3}{\bar Q})]_{flavorsym}& 1& & & &\\
\hline
\end{array}
\end{displaymath}
The constraint is: \[(2^{10} f_{1} +2^9 f_{3} +2^7 f_{4}) A^{2}{\bar A}{\bar Q}^{2} + (5 f_{5}^{2} + 2^2 f_{6} - 2^{7} f_{2} -2^6 f_{7}) {\bar A}^{2}{\bar Q} = \Lambda^{8}(A^{2}{\bar A}{\bar Q}^{2})\]
\subsubsection{$ SU(6)$ with $ \protect\Ythreea +\protect\Yasymm + 2\, \overline{\protect\Yfund}$}\label{su6}
\begin{displaymath}
\begin{array}{|l|c|cccc|} \hline
& SU(6) & SU(2) & U(1)_{A} & U(1)_{B} & U(1)_{R} \\ \hline
B & \Ythreea & 1 & 1 & 0 &0 \\
A &\Yasymm & 1 & 0 &1& 0 \\
{\bar Q}&\overline{\Yfund} & \Yfund & -3 & -2 & 0 \\ \hline
S_{1}=A{\bar Q}^{2} & 1 & 1 & -6 & -3 & 0 \\
S_{2}=A^{3}& 1 & 1 & 0& 3 & 0 \\
S_{3}=B^{4}& 1 & 1 & 4 & 0& 0 \\
S_{4}=(B^{4}A^{3})& 1 & 1 & 4 & 3 & 0 \\
(BA^{2}{\bar Q})& 1 &\Yfund & -2 & 0 & 0 \\
(B^{2}A{\bar Q}^{2})_{sym}& 1 &\Ysymm & -4 & -3 & 0 \\
S_{5}=(B^{2}A{\bar Q}^{2})_{anti}& 1 & 1 & -4 & -3 & 0 \\
(B^{3}A^{2}{\bar Q})& 1 &\Yfund & 0 & 0 & 0 \\
S_{6}=(B^{4}A{\bar Q}^{2})_{anti}& 1 & 1& -2 & -3 & 0 \\ \hline
f_{1}=[(BA^{2}{\bar Q})(B^{3}A^{2}{\bar Q})]_{flavorsym}&1&1& & & \\
f_{2}=[(B^{2}A{\bar Q}^{2})_{sym} (B^{3}A^{2}{\bar Q})^{2}]_{flavorsym}&1&1& & & \\
f_{3}=[(B^{2}A{\bar Q}^{2})_{sym} (BA^{2}{\bar Q})^{2}]_{flavorsym}&1&1& & & \\
f_{4}=[(B^{2}A{\bar Q}^{2})_{sym}^{2}]_{flavorsym}&1&1& & & \\
\hline
\end{array}
\end{displaymath}
The constraint is:
\begin{eqnarray*}
-12 (6 S_{6} +S_{1} S_{3}) f_{1} + 18 f_{2} -27 S_{3} f_{3} - 648 S_{4} f_{4} - 16 (18 S_{4} +S_{2} S_{3}) S_{5}^{2} + \\
48 (12 S_{6}- S_{1} S_{3})S_{4} S_{1} + 96 S_{2} S_{6}^{2} = \Lambda^{12} S_{5}
\end{eqnarray*}
\subsubsection{$ SU(7)$ with $ \protect\Ythreea +\protect\Yfund + 3\, \overline{\protect\Yfund}$}\label{su7}
\begin{displaymath}
\begin{array}{|l|c|cccc|} \hline
& SU(7) & SU(3) & U(1)_{A} & U(1)_{B} & U(1)_{R} \\ \hline
B & \Ythreea & 1 & 0 & 1 & 0 \\
Q &\Yfund & 1 & -3 & -10 & 0 \\
{\bar Q}&\overline{\Yfund} & \Yfund & 1 & 0& 0 \\ \hline
Q{\bar Q} &1 & \Yfund & -2 & -10 & 0 \\
B{\bar Q}^{3} & 1 & 1 & 3 & 1 & 0 \\
B^{3}{\bar Q}^{2} & 1 & \Ysymm & 2 & 3 & 0 \\
B^{3}{\bar Q}^{3}Q_{anti} & 1 & 1 & 0 & -7 & 0 \\
B^{4}Q^{2} & 1 & 1 & -6 & -16 & 0 \\
B^{5}{\bar Q}^{2}Q & 1 & \overline{\Yfund} & -1 & -5 & 0 \\
B^{7} & 1 & 1 & 0 & 7 & 0 \\ \hline
f_{1}=[(B^{3}{\bar Q}^{2})^{3}]_{flavorsym}&1&1& & & \\
f_{2}=[(B^{3}{\bar Q}^{2})^{2} (Q{\bar Q})^{2}]_{flavorsym}&1&1& & & \\
f_{3}=[(B^{5}{\bar Q}^{2}Q) (Q{\bar Q})]_{flavorsym}&1&1& & & \\
f_{4}=[(B^{5}{\bar Q}^{2}Q)^{2} (B^{3}{\bar Q}^{2})]_{flavorsym}&1&1& & & \\
\hline
\end{array}
\end{displaymath}
The constraint is:
\begin{eqnarray*}
7 f_{1} (B^{4}Q^{2}) +6 f_{2} (B^{7}) -288 f_{3} (B^{7}) (B{\bar Q}^{3})
+ 1008 f_{4} + 12 (B^{7}) (B^{3}{\bar Q}^{3}Q_{anti})^{2} - \\
72 (B^{4}Q^{2}) (B^{7}) (B{\bar Q}^{3})^2 =
\Lambda^{14}(B^{3}{\bar Q}^{3}Q_{anti})
\end{eqnarray*}
\subsection{The Quantum Modified $Sp(N)$ Theories}
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
$Sp(2N)$ & $(2N+2)\, \Yfund$ & i-quantum modified \\
$Sp(2N)$ & $\Yasymm +4\, \Yfund $ & i-quantum modified \\
$Sp(2N)$ & $\Ysymm =Adj $ & Coulomb branch \\ \hline
$Sp(4)$ & $2\, \Yasymm +2\, \Yfund $ & c-quantum modified \\
$Sp(4)$ & $3\, \Yasymm $ & Coulomb branch \\ \hline
$Sp(6)$ & $2\, \Yasymm $ & Coulomb branch \\
$Sp(6)$ & $\Ythreea +3\,\Yfund $ & c-quantum modified \\
\hline
\end{tabular}
\end{center}
\caption{These are all $Sp$ theories satisfying $\sum_j \mu_j -\mu_G =
0$ and the Witten anomaly condition. We list the gauge group and the
field content of the theories in the first and second column. In the
third column, we indicate which theories are quantum modified.
The prefix ``i'' indicates an invariant quantum modification
and the prefix ``c'' a covariant quantum modification. }
\label{SPtable}
\end{table}
\subsubsection{$Sp(4)$ with $2\, (\protect\Yasymm +\protect\Yfund)$}
\label{sp4}
\begin{displaymath}
\begin{array}{|l|c|cccc|} \hline
& Sp(4) & SU(2)_{A} & SU(2)_{Q} & U(1)_{A} & U(1)_{R} \\\hline
A & \Yasymm & \Yfund & 1& 1 & 0\\
Q & \Yfund & 1 & \Yfund & -2 & 0 \\ \hline
Q_{1}Q_{2} & 1 & 1 & 1& -4 & 0 \\
AA_{sym} & 1 & \Ysymm & 1& 2 & 0 \\
AQ_{1}Q_{2} & 1 &\Yfund & 1& -3 & 0 \\
AAQ_{i}Q_{j} & 1 & 1 & \Ysymm & -2 & 0 \\
\hline
\end{array}
\end{displaymath}
The constraint is:
\[ (AA_{sym})^2 (Q_{1}Q_{2})^2 -4 ((AA_{sym})
(AQ_{1}Q_{2})^2)-16 (AAQ_{i}Q_{j})^2 = \Lambda^{6} (Q_{1}Q_{2})\]
\subsubsection{$Sp(6)$ with $ \protect\Ythreea +3\,\protect\Yfund $}
\label{sp6}
\begin{displaymath}
\begin{array}{|l|c|ccc|} \hline
& Sp(6) & SU(3) & U(1)_{A} & U(1)_{R} \\ \hline
B &\Ythreea & 1 & 3 & 0 \\
Q & \Yfund &\Yfund & -5 & 0 \\ \hline
QQ & 1 & \Yasymm & -10& 0 \\
BQ^{3}& 1 & 1 & -12 & 0 \\
B^{2}Q^{2}_{sym}& 1 & \Ysymm & -4 & 0 \\
B^{4} & 1 & 1 & 12 & 0 \\
B^{3}Q^{3} & 1 & 1 & -6 & 0 \\
\hline
\end{array}
\end{displaymath}
The constraint is:
\begin{eqnarray*}
1728 (B^{3}Q^{3})^{2}-8 (BQ^{3})^{2} (B^{4}) + 12
(B^{2}Q^{2}_{sym})^{3} + 3 (B^{4}) (B^{2}Q^{2}_{sym}) (QQ)^{2} =
\Lambda^{8} (BQ^{3})
\end{eqnarray*}
\section{Conclusions}\label{conclusions}
Adding a fundamental and an antifundamental matter multiplet to a
theory with a quantum modified moduli space one obtains a theory
satisfying the index condition $\sum_{i=1}^{n} \mu_{i} - \mu_{G} =
2$. If this theory is s-confining, the algebraic constraint defining
the moduli is invariant under all global symmetries, and the invariant
quantum modified moduli (i-QMM) is given by
$F(\phi_i)=\Lambda^p$. However, if the resulting theory is not
s-confining the constraint of the original theory is only covariant
under global symmetries. This gives a covariant quantum modified
moduli (c-QMM), characterized by $F(\phi_i)=\Lambda^p\phi_k$.
Theories with i-QMM are by now commonplace. Less familiar are theories
with c-QMM. In these theories we believe one must take seriously the
Lagrange multiplier, which enforces the constraint via a superpotential,
as a dynamical degree of freedom. It then follows immediately that the
c-QMM has branches, absent at the classical level, for which global
$U(1)_R$ symmetry is broken.
\vskip1.2cm
{\it Acknowledgments}
\hfil\break
We are grateful to Ken Intriligator, Erich Poppitz, Witold Skiba
and Martin Schmaltz for many
helpful discussions. This work is supported by the Department of
Energy under contract DOE-FG03-97ER40506.
\section{Introduction}
Solar eruptions often show rotational motion, especially in filament eruptions, which are commonly believed to be the manifestation of erupting magnetic flux ropes (MFRs). For example, in some filament eruptions a bundle of helical threads is observed to wind around a central axis with rotational motion, reminiscent of an MFR. The rotational motion plays an important role in reconfiguring the erupting magnetic field. On the one hand, it may lead to magnetic reconnection of the erupting MFR with the surrounding field, ruining the coherence of the MFR and even resulting in a failed eruption \citep{Zhou2019}. On the other hand, it can persistently modulate the axis direction of the subsequent coronal mass ejection (CME) and change the southward component of the interplanetary magnetic field, which therefore makes the prediction of potential geoeffectiveness more challenging \citep{Yurchyshyn2009}.
The twisting and rotating features indicate that the underlying magnetic fields carry currents and possess magnetic helicity, and the direction of rotation is closely related to the sign of the helicity. Indeed, observations of filament eruptions show a one-to-one correlation between the rotation direction and the filament chirality (or the sign of helicity of the corresponding magnetic field): sinistral (dextral) filaments with positive (negative) helicity rotate clockwise (counterclockwise) when viewed from above \citep{Green2007}. Furthermore, the pre-eruptive morphology of filaments shows a moderate hemispheric preference, namely, filaments of forward (reverse) S shape are usually located in the southern (northern) hemisphere and have positive (negative) helicity \citep{Rust1996,Zhou2020}. In addition, S-shaped coronal loops known as sigmoids are observed in extreme ultraviolet (EUV) and soft X-ray (SXR) passbands \citep{Cheng2017}, and in many events the sigmoid is found to be roughly co-spatial with the pre-eruptive filament.
To explain the relationships between the rotation direction of the MFR as it erupts, the field chirality, and the associated filament (sigmoid) morphology,
\citet{Green2007} invoked the theory based on the Titov and D{\'e}moulin (T\&D) model \citep{Titov1999}, which assumes an arched MFR existing prior to the eruption,
and argued that the observations agree well with the T\&D model. In the T\&D model, the observed sigmoid is considered to be a thin current layer formed in the bald patch separatrix layer (BPSS) or hyperbolic flux tube (HFT) in the wake of the rising flux rope,
and \citet{Green2007} suggested that the observed relationship between the filament chirality and its rotation direction is a manifestation of an ideal MHD instability of the MFR, during which magnetic twist is converted into writhe of the axis (and thus rotation of the MFR's axis). However, \citet{Torok2010} found that for the T\&D MFRs the relation between writhe and the projected S shape is not unique, since the writhe depends largely on the height of the MFR and on the presence or absence of dips in its middle, rather than on the transformation of twist helicity into writhe helicity as is often assumed.
In this Letter, we propose an alternative explanation for the rotation during eruption, one that is more uniformly consistent with the observations, using a reconnection-initiated eruption model in which an MFR need not exist before the eruption but is formed during it through reconnection within a sheared arcade configuration. Our explanation is developed based on a recent fully 3D magnetohydrodynamic (MHD) simulation \citep{Jiang2021}, which demonstrates for the first time that runaway tether-cutting reconnection alone can initiate a solar eruption within a single arcade sheared by photospheric motion. In the simulation, the MFR is formed during the eruption through reconnection of the sheared arcade. Here we analyze the morphology, chirality, and rotation direction of the erupting MFR in the simulation, and compare them with the observations of a typical filament eruption. Our results show that at the onset of the eruption the MFR is built up with a reverse S shape, and that the top of the MFR shows a significant counterclockwise rotation immediately after the initiation of the eruption, entirely consistent with the observations. Furthermore, by a quantitative measurement of the writhe and twist of the MFR in the simulation, we find that there is a transfer of writhe to twist during the eruption, which is distinct from the previous theory based on the kink instability.
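The twist of a discretized rope can be measured with the standard ribbon formula $Tw=\frac{1}{2\pi}\int (\hat T\times \hat V)\cdot \frac{d\hat V}{ds}\, ds$, where $\hat T$ is the unit tangent of the axis and $\hat V$ the unit vector from the axis to the wrapping field line (e.g., Berger \& Prior 2006). The sketch below is a generic illustration of such a measurement, not the analysis code used in this work; for a uniformly twisted test curve it recovers the winding number:

```python
import numpy as np

# Discretized twist of a curve about an axis:
#   Tw = (1/2pi) \int (T x V) . dV/ds ds,
# with T the unit tangent of the axis and V the unit vector from the axis
# to the wrapping curve (cf. Berger & Prior 2006).
def twist(axis_pts, line_pts):
    dr = np.gradient(axis_pts, axis=0)
    ds = np.linalg.norm(dr, axis=1)
    T = dr / ds[:, None]
    V = line_pts - axis_pts
    V = V / np.linalg.norm(V, axis=1)[:, None]
    dV = np.gradient(V, axis=0) / ds[:, None]
    integrand = np.einsum('ij,ij->i', np.cross(T, V), dV)
    return np.sum(integrand * ds) / (2.0 * np.pi)

# test curve: three full turns about a straight vertical axis
s = np.linspace(0.0, 1.0, 2001)
zero = np.zeros_like(s)
axis_pts = np.stack([zero, zero, s], axis=1)
line_pts = axis_pts + 0.1 * np.stack(
    [np.cos(6 * np.pi * s), np.sin(6 * np.pi * s), zero], axis=1)
print(round(twist(axis_pts, line_pts), 2))   # 3.0
```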
\section{Observation of a typical filament eruption} \label{obs}
We first take a typical filament eruption, which occurred in NOAA active region (AR) 11475 on May 10, 2012, as an example to illustrate the relationship between the orientation of the S-shaped morphology of the filament (and its associated sigmoid), the filament chirality, and the rotation direction during the eruption. The filament is well observed in the $\mathrm{H}\upalpha$ 6563~{\AA} images from the Global Oscillation Network Group \citep[GONG;][]{Hill1994}, showing an inverse S shape near the solar disk center (S15W15) as seen from the Earth view (Figure~\ref{f1}(a)). It has a wider appearance in He II 304~{\AA} in the dual-perspective imaging observations from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} and the Solar TErrestrial RElations Observatory (STEREO)-A/Extreme UltraViolet Imager \citep[EUVI;][]{Wuelser2004}. A sigmoid co-spatial with the filament is observed with the X-ray telescope \citep[XRT;][]{Golub2007} onboard Hinode (Figure~\ref{f1}(b)).
The chirality of the filament can be determined with the help of the radial magnetogram provided by the Heliospheric and Magnetic Imager \citep[HMI;][]{Schou2012} onboard SDO. Figure~\ref{f1}(c) shows that overall this AR has a bipolar configuration where the two opposite polarities are aligned northeast$-$southwest, and the filament observed in the SDO/AIA 304 {\AA} (Figure~\ref{f2}(d)) is outlined by the green plus symbols as overplotted on the magnetogram.
The footpoints of the magnetic field supporting the filament can be located by where the filament plasma drains down to the solar surface. The drainage sites are right-skewed relative to the polarity inversion line (PIL), which implies that the filament chirality is dextral \citep{Chen2014, Zhou2020}.
Meanwhile, during its eruption, the apex of the filament displays counterclockwise (CCW) rotation, as seen in both the SDO and STEREO-A observations (Figure~\ref{f2}(b),(d), and the accompanying animation). This event is a well-observed example of the strong one-to-one relationship found by \citet{Zhou2020}: sinistral (dextral) filaments rotate clockwise (counterclockwise) when viewed from above, and the morphology of the filament and the related sigmoid both exhibit a forward (reverse) S shape.
\section{MHD Simulation of an erupting MFR} \label{Mod}
\citet{Jiang2021} performed a high-accuracy, fully 3D MHD simulation and established a fundamental mechanism for solar eruption initiation: a bipolar field driven by quasi-static shearing motion at the photosphere can form an internal current sheet, followed by fast magnetic reconnection that triggers and drives the eruption. In this mechanism, an MFR is built up during the eruption, and here we focus on the evolution of the erupting MFR by using a simulation run similar to that in \citet{Jiang2021}, but with a lower resolution than the original ones. The simulation solves the full set of MHD equations with both solar gravity and plasma pressure included, and starts from a bipolar potential magnetic field and a hydrostatic plasma stratified by solar gravity with a typical coronal temperature. Then shearing flows along the PIL, implemented by rotating the two magnetic polarities at the photosphere in the same CCW direction, are applied at the bottom boundary to energize the coronal field until an eruption is triggered, after which the surface flow is stopped. During the quasi-static evolution phase driven by the shearing motion, a current sheet is gradually built up. Since no explicit resistivity is used in the MHD model, magnetic reconnection is triggered when the current sheet becomes so thin that its width approaches the grid resolution, owing to the implicit, grid-dependent numerical resistivity. For more details of the simulation settings, the reader is referred to \citet{Jiang2021}. In that paper, the simulation was run at very high resolution, with a Lundquist number reaching $\sim 10^5$ for a length unit (approximately 10 Mm). As a result, the plasmoid instability is triggered in the current sheet and the magnetic topology becomes extremely complicated on small scales along with the formation of the large-scale MFR.
Such complexity substantially complicates our analysis of the large-scale evolution associated with the erupting MFR; thus in this paper we use a lower-resolution run (corresponding to a Lundquist number of $\sim 10^3$). In the lower-resolution run, the amount of shearing time before the eruption onset is somewhat less than that needed in the high-resolution run, because the current sheet required for triggering reconnection is thicker and thus needs less shear (as shown in \citet{Jiang2021} with four different resolutions). That said, the basic evolution of the MFR during the eruption is unchanged compared to the high-resolution run, except that the small-scale complex structures do not arise. Moreover, with the lower resolution, we can run the simulation longer and thus follow a longer evolution of the MFR.
\section{Comparison of Simulation and Observation} \label{sec:com}
Figure~\ref{f2} compares the process of the filament eruption from 23:00 UT on May 9 to 00:36 UT on May 10, observed by STEREO-A/EUVI and SDO/AIA, with the magnetic field evolution seen from two different views in the MHD simulation. At the onset of the eruption (note that for the simulation, $t=0$ is reset to the onset time of the simulated eruption), the core magnetic field of the newly formed MFR, which is built up through reconnection of the two sets of sheared arcades, presents a continuous reverse S-shaped sigmoid from the top view and subsequently exhibits a significant CCW rotation during the eruption (Figure~\ref{f2}(c)). From the side (limb) view, the low-lying flux rope rises up quickly, yielding a nearly circular shape, and with the continuing CCW rotation the shape is transformed into an oval (Figure~\ref{f2}(a)). Therefore, the evolving morphology of the erupting flux rope in the simulation agrees well with that of the erupting filament in the dual-perspective observations from STEREO-A and SDO.
To compare the thermal morphology of the observed sigmoid with that of the simulation,
we deduce the thermal evolution of this eruption based on imaging data from six AIA EUV passbands, including 131{\AA} (Fe XXI, $\sim$11 MK; Fe VIII, $\sim$0.4 MK), 94{\AA} (Fe XVIII, $\sim$7.1 MK; Fe X, $\sim$1.1 MK), 335{\AA} (Fe XVI, $\sim$2.5 MK), 211{\AA} (Fe XIV, $\sim$2.0 MK), 193{\AA} (Fe XII, $\sim$1.6 MK; Fe XXIV, $\sim$17.8 MK), and 171{\AA} (Fe IX, $\sim$0.6 MK) \citep{ODwyer2010}.
We use a sparse inversion code \citep{Cheung2015,Su2018} to calculate the emission measure (EM) as a function of temperature from AIA imaging data.
EM is the integral of the electron density squared over the emitting volume; it gives the amount of plasma emission at a given temperature.
Because of the limited HINODE/XRT observations, the evolution of the sigmoid cannot be followed in SXR passbands, so we use the EM maps as a substitute.
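The EM definition above translates directly into a one-line numerical integral. The following minimal sketch is our own illustration (not the sparse inversion code of \citet{Cheung2015}; function and variable names are ours), shown for a uniform slab where the answer is known analytically:

```python
import numpy as np

def emission_measure(n_e, dl):
    """Column emission measure EM = integral of n_e^2 along the line of sight.

    n_e : electron densities sampled along the line of sight (cm^-3)
    dl  : path-length element per sample (cm)
    """
    n_e = np.asarray(n_e, dtype=float)
    return np.sum(n_e ** 2) * dl

# uniform slab: n_e = 1e9 cm^-3 over a depth of 1e9 cm gives EM = 1e27 cm^-5
em = emission_measure(np.full(100, 1.0e9), 1.0e9 / 100)
```

In practice the density along the line of sight is not known directly, which is why the EM as a function of temperature must be inferred from the multi-passband AIA data by an inversion, as done in the text.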
The EM maps over the temperature range of 5--8 MK show a clear sigmoid-to-arcade transformation during the eruption (Figure~\ref{f3}(a)-(d)). Initially, a coaxial, bright feature appears in the wake of this rising filament, broadens into a sigmoidal shape, and finally evolves into an arcade shape.
This sigmoidal emission pattern is expected to be due to the heating in the current-carrying magnetic fields \citep{Kliem2004,Gibson2006}.
To visualize current-carrying field lines in the simulation for comparison, a method for the synthesis of mock coronal images is utilized. It calculates line-of-sight integrals of the proxy emissivity
from the value of $j^2$ (square of the current density) averaged along magnetic field lines \citep{Cheung2012}.
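To make this synthesis step concrete, the following minimal sketch is our own illustration (not the code of \citet{Cheung2012}; all names are ours). It integrates a $j^2$-based proxy emissivity along the line of sight, assuming the averaging of $j^2$ along field lines has already been performed on the grid:

```python
import numpy as np

def synthetic_emission(j2_avg, axis=2, dx=1.0):
    """Mock coronal image from field-line-averaged j^2.

    j2_avg : 3-D array holding <j^2> (current density squared, averaged
             along the field line threading each cell).
    axis   : line-of-sight axis to integrate over.
    dx     : grid spacing along the line of sight.
    """
    # the proxy emissivity is taken proportional to <j^2>;
    # the image is its line-of-sight integral
    return j2_avg.sum(axis=axis) * dx

# toy "current sheet": intense currents confined to a thin vertical slab
j2 = np.zeros((32, 32, 32))
j2[15:17, :, :] = 1.0
image = synthetic_emission(j2, axis=2, dx=0.5)
```

On simulation output, `j2_avg` would be built by tracing the field line through every cell and averaging $j^2$ along it; here the slab of intense current stands in for a current sheet and produces a bright ridge in the mock image.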
From the synthetic images (Figure~\ref{f3}(e)-(h) and the accompanying movie), an inverse sigmoidal shape forms before the eruption and then broadens with the expansion of the erupting field. In the end, the two elbows of the sigmoid fade away and become indistinguishable from the ambient field, as shown in Figure~\ref{f3}(g) and (h).
Compared to heating only along the current sheets at the interface between the helical core field (e.g., the MFR) and the ambient field \citep{Kliem2004,Gibson2006},
this result suggests an alternative scenario in which the sigmoidal emission pattern arises from line-of-sight integrals of $j^2$ contributed by both the current sheet and the nearby regions with intense currents.
\section{Evolution of twist and writhe} \label{sec:evo}
To understand the variation of the MFR's morphology during its rise, we investigate the evolution of two parameters, namely, the writhe number ($\mathcal{W}_r$) and the twist number ($\mathcal{T}_w$), which quantitatively characterize the helical deformation of the MFR axis and how much the field lines wind about the MFR axis, respectively. Based on the simulation data, these two parameters are computed using the following methods. \par
For an open curve like the axis of an MFR with both endpoints on a bottom plane (e.g., the photosphere), the temporal evolution of its writhe is difficult to quantify \citep{Linton1998}. \citet{Berger2006} proposed a modified writhe expression, termed the polar writhe, to distinguish it from the pre-existing closed-curve definition. The bottom plane has normal $\hat{\mathbf{z}}$. Along the $z$-direction, the open curve is split into several pieces at turning points (extrema in the $z$-direction). The coiling of each individual piece is measured by the local polar writhe ($\mathcal{W}_{pl}$), while the global geometric relations between the pieces are described by the nonlocal polar writhe ($\mathcal{W}_{pnl}$). The polar writhe ($\mathcal{W}_p$) is the sum of the local and nonlocal components. The nonlocal component is useful in interpreting the presence of an S shape in the corona \citep{Torok2010}. It can be calculated as \citep{Berger2006}:
\begin{equation}
\mathcal{W}_{pnl}(\textbf{r})=\mathop{\sum_{i=1}^{n+1}\sum_{j=1}^{n+1}}_{i\neq j} \frac{\sigma_{i}\sigma_{j}}{2\pi}\int_{z^{min}_{ij}}^{z^{max}_{ij}}\frac{d\Theta_{ij}}{dz}dz, \label{eqwr}
\end{equation}
where $i,j$ label two different pieces, $\sigma_{i}=+1$ if piece $i$ is moving upward, $\sigma_{i}=-1$ if it is moving downward, and $n$ is the number of turning points. In the integration,
let the relative position vector at height $z$ be $\mathbf{r}_{ij}(z) = \mathbf{x}_j(z) - \mathbf{x}_i(z)$; note that $\mathbf{r}_{ij}(z)$ is parallel to the $xy$ plane.
$\Theta_{ij}$ is the orientation of this vector with respect to the $x$ axis, and $z^{min}_{ij}$ and $z^{max}_{ij}$ are the minimum and maximum heights that both pieces reach.
The $\mathcal{W}_{pnl}$ of the open curve then can be computed based on this equation using the \citet*{Prior2016} code available online.\footnote{\url{https://www.maths.dur.ac.uk/~ktch24/code.html}} This code can also compute the $\mathcal{W}_{pl}$ and $\mathcal{W}_{r}$, which are used to calculate the linking number ($\mathcal{L}_{k} =\mathcal{W}_{r} + \mathcal{T}_{w}$, see the inset panel of Figure~\ref{f5}(a)).
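To make the computation concrete, the following simplified Python sketch is our own illustrative re-implementation of the equation above (the analysis in the text uses the publicly available code referenced above instead). It splits a discretized curve at its turning points in $z$ and sums the mutual winding of each pair of pieces:

```python
import numpy as np

def nonlocal_polar_writhe(curve, n_z=400):
    """Nonlocal polar writhe W_pnl of an open curve (Berger & Prior 2006).

    curve : (N, 3) array of points along the curve; the last column is z.
    """
    z = curve[:, 2]
    sign = np.sign(np.diff(z))
    # turning points: indices where the direction of travel in z reverses
    turns = np.where(sign[1:] * sign[:-1] < 0)[0] + 1
    bounds = np.concatenate(([0], turns, [len(z) - 1]))
    pieces = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = curve[a:b + 1]
        sigma = 1.0 if seg[-1, 2] > seg[0, 2] else -1.0   # +1 rising, -1 falling
        pieces.append((seg, sigma))

    def xy_at(piece, zq):
        # each piece is monotonic in z, so x(z) and y(z) are single valued
        order = np.argsort(piece[:, 2])
        return (np.interp(zq, piece[order, 2], piece[order, 0]),
                np.interp(zq, piece[order, 2], piece[order, 1]))

    total = 0.0
    for i, (pi, si) in enumerate(pieces):
        for j, (pj, sj) in enumerate(pieces):
            if i == j:
                continue
            lo = max(pi[:, 2].min(), pj[:, 2].min())
            hi = min(pi[:, 2].max(), pj[:, 2].max())
            if hi <= lo:
                continue
            # open sampling avoids the shared apex where adjacent pieces touch
            zz = np.linspace(lo, hi, n_z + 2)[1:-1]
            xi, yi = xy_at(pi, zz)
            xj, yj = xy_at(pj, zz)
            theta = np.unwrap(np.arctan2(yj - yi, xj - xi))
            total += si * sj / (2.0 * np.pi) * (theta[-1] - theta[0])
    return total
```

As a check, an arch whose descending leg makes half a turn about its ascending leg gives $\mathcal{W}_{pnl} \approx -1$, while a curve that is monotonic in $z$ (a single piece) has $\mathcal{W}_{pnl} = 0$ by construction.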
The other parameter, the twist number ($\mathcal{T}_w$), is defined as follows: let a smooth curve $\mathbf{y}(s)$ wrap around the central axis $\mathbf{x}(s)$, where $s$ is the arc length from a reference point on the axis. $\mathbf{T}(s)$ is the unit tangent vector to the axis curve $\mathbf{x}(s)$, and $\mathbf{V}(s)$ denotes a unit vector normal to $\mathbf{T}(s)$ that points to $\mathbf{y}$ at the point
$\mathbf{y}(s) = \mathbf{x}(s) + \epsilon \mathbf{V}(s)$. Then the $\mathcal{T}_w$ density can be calculated following
the formula \citep{Berger2006,Guo2013}:
\begin{equation}
\dfrac{d\mathcal{T}_{w}}{ds} = \frac{1}{2\pi }\mathbf{\textit{T}}\cdot \mathbf{\textit{V}}\times \dfrac{d\mathbf{\textit{V}}}{ds}. \label{eqtw}
\end{equation}
The total twist is the integral of Equation~\eqref{eqtw} along the axis curve $\mathbf{x}$.
If the field line is in the vicinity of the MFR's axis and the MFR is approximately cylindrically symmetric,
it is more convenient to use the twist number of an individual magnetic field line defined by
\begin{equation}
\mathcal{T}_{w}' = \int_{L}\frac{\mu_{0}J_{\parallel }}{4\pi B}dl=\int_{L}\frac{\curl{\mathbold{B}}\cdot\mathbold{B}}{4\pi B^2}dl. \label{eqtw1}
\end{equation}
$\mathcal{T}_w'$ measures how much two neighboring field lines twist about each other. It is a reliable approximation of the twist number
with respect to the axis, $\mathcal{T}_w$, as computed by integration of Equation~\eqref{eqtw} \citep[][Appendix C]{Liu2016}.
The total twist is the line integral of the twist intensity ($\curl{\mathbold{B}}\cdot\mathbold{B}/4\pi B^2$) along each individual field line.
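As an illustration, this line integral can be evaluated numerically along a discretized field line. The following sketch is our own (with analytic $\mathbf{B}$ and $\nabla\times\mathbf{B}$ supplied as callables, rather than interpolated from simulation data as in the actual analysis), and it recovers the expected twist of a uniformly twisted test field:

```python
import numpy as np

def field_line_twist(points, B, curlB):
    """Twist T_w' of a single field line: the integral of
    (curl B . B) / (4 pi B^2) along the line.

    points    : (N, 3) polyline sampling the field line.
    B, curlB  : callables returning the field and its curl at a point.
    """
    total = 0.0
    for p0, p1 in zip(points[:-1], points[1:]):
        mid = 0.5 * (p0 + p1)
        dl = np.linalg.norm(p1 - p0)
        b = B(mid)
        total += np.dot(curlB(mid), b) / (4.0 * np.pi * np.dot(b, b)) * dl
    return total

# uniform-twist test field: B = (-q y, q x, 1), so curl B = (0, 0, 2 q);
# near the axis, a field line at small radius r winds q L / (2 pi) turns
# over a height L
q, r = 1.0, 0.01
zline = np.linspace(0.0, 2.0 * np.pi, 2000)
line = np.stack([r * np.cos(q * zline), r * np.sin(q * zline), zline], axis=1)
Tw = field_line_twist(line,
                      B=lambda p: np.array([-q * p[1], q * p[0], 1.0]),
                      curlB=lambda p: np.array([0.0, 0.0, 2.0 * q]))
```

For this field, the helix at $r = 0.01$ makes one full turn over $L = 2\pi$, and the computed $\mathcal{T}_w'$ is close to $1$; at larger radii $\mathcal{T}_w'$ deviates from the winding number about the axis, which is why the approximation is restricted to field lines near the axis.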
To calculate the two parameters, the first step is to determine the MFR's axis.
Owing to the symmetry of the modeled MFR's geometry, the streamlines of its transverse magnetic field form a series of concentric rings at the central cross-section (Figure~\ref{f4}(a)), and the axis of the MFR can be identified clearly as the field line passing through the center of these concentric rings.
Furthermore, the bottom surface of the MHD model is fixed without any motion during the eruption; thus, for any field line without reconnection, its two footpoints will not change with time owing to this line-tied boundary condition. Therefore, once the axis is located initially, its subsequent evolution can be followed by tracing the field line from one fixed footpoint of the axis (red line in Figure~\ref{f4}(b)-(d)). To compute the twist number, we traced eight sample field lines around the axis, starting from eight points in the neighborhood of the axis footpoint (Figure~\ref{f4}(b)-(d) show one of the eight field lines as an example). During the eruption, the location of the other footpoint of the axis and of the wrapping field lines moves by less than 1.8 grid points, i.e., less than 5.8\% of the separation between the two footpoints.
Accompanying the MFR's rise and rotation (Figure~\ref{f2}(a) and (c)), the temporal evolutions of $\mathcal{W}_{pnl}$ and of $\mathcal{T}_{w}$ of the neighboring field lines are shown in Figure~\ref{f5}. Initially, the reverse S-shaped MFR possesses a positive $\mathcal{W}_{pnl}$ of $0.33$,
and as the CCW rotation of the apex of the MFR about its rise direction sets in,
this value decreases monotonically to $0$, indicating that the initial strong reverse S-shaped bending is completely straightened out. Moreover, as the rotation continues, $\mathcal{W}_{pnl}$ even reverses its sign, reaching a negative value of $-0.07$. On the other hand, the neighboring field lines winding around the axis initially have an average left-handed twist of $\mathcal{T}_{w} \approx -1.70$, and as the eruption proceeds, the twist is enhanced, with its absolute value growing gradually to $2.09$, which indicates that the CCW rotation twists up these spiral field lines.
Based on the extension of the C\u{a}lug\u{a}reanu theorem to open field lines \citep{Berger2006}, the total linking number ($\mathcal{L}_{k} = \mathcal{T}_{w} + \mathcal{W}_{r}$) is mathematically proven to be invariant under all motions (as long as no reconnection happens between these field lines). This is consistent with our result (the change of $\mathcal{L}_{k}$ is less than 8\%; see the inset panel of Figure~\ref{f5}(a)), which shows a negative correlation between $\mathcal{W}_r$ and $\mathcal{T}_{w}$ and indicates that writhe is transferred to twist during the eruption.
We have also checked the energy and helicity evolution during the simulated eruption. As can be seen in the bottom panel of Figure~\ref{f5}, the free magnetic energy in the volume is rapidly released, decreasing by 50\% through the eruption, while the magnetic helicity in the volume is well preserved, with only a small variation of less than 2\%.
\section{Discussion and Conclusion} \label{sec:dis}
In this Letter, the relationship between the direction of filament rotation during eruption, the orientation of S-shaped morphology of filament (and co-spatial sigmoid), and the chirality of the filament is studied using a fully 3D MHD simulation of solar eruption initiation.
The simulated flux rope eruption reproduces the initial morphology and rotation of the erupting filament. The emission image synthesized from the electric current density in the model shows an inverse sigmoidal pattern in the wake of the eruption as well as the sigmoid-to-arcade transformation. Furthermore, $\mathcal{W}_{pnl}$ and $\mathcal{T}_{w}$ of the simulated MFR, the quantitative parameters describing the deformation of the axis and of its wrapping field lines, are calculated, which shows clearly an accumulation of $\mathcal{T}_{w}$ and a reduction of $\mathcal{W}_{pnl}$ during the eruption. Such a transfer from writhe to twist is at variance with the existing explanation for MFR rotation invoking the helical kink instability, in which the twist of the MFR is converted to writhe.
Many attempts have been made to explain this observed rotation--chirality relationship. For example,
\citet{Green2007} comprehensively
reviewed various models of sigmoid formation and considered this observed property
as a consequence of the conversion of twist into writhe under the ideal-MHD constraint of helicity conservation.
But this leaves a mystery:
through the rotation, the original inverse S-shaped filament spine is straightened and even over-rotated to become forward S-shaped, which contradicts the expectation of the kink instability. Some observations and simulations suggest that the eruption of a low-lying MFR with a downward-bent axis also accommodates this scenario \citep[e.g.,][]{Torok2010,Zhou2017}. But for the studied filament eruption, no obvious dip (i.e., with a concave-upward motion) is present in the middle, and the initial reverse S-shape of the filament is formed largely by the two curved ends rather than by downward-bent or flat portions in its main body.
Our analysis here resolves this mystery: the fact that a forward (reverse) S-shaped filament eruption shows CW (CCW) rotation is consistent with the eruption scenario demonstrated by the simulation of \citet{Jiang2021}, namely, that the erupting MFR is formed during the eruption by tether-cutting reconnection.
The physics behind the key difference in behavior between an MFR formed during the eruption and one formed prior to the eruption will be investigated in future work.
\section{Acknowledgements} \label{sec:ack}
The authors wish to express their special thanks to the referee for suggestions
and comments which led to the improvement of the paper.
The authors appreciate discussions with Guo Yang, Xin Cheng, and Xudong Sun.
We acknowledge the \emph{SECCHI}, \emph{AIA}, \emph{GONG}, \emph{XRT}, and \emph{HMI} consortia for providing excellent observations.
This work is supported by the B-type Strategic Priority Program XDB41000000 funded by the Chinese
Academy of Sciences. The authors also acknowledge support from the National Natural Science Foundation of China (NSFC 42004142, 41822404, 41731067, 11925302, 42188101, 41574170, and 41531073), the Open Research Program of the CAS Key
Laboratory of Geospace Environment, the Fundamental Research Funds for the Central Universities (grant No. HIT.BRETIV.201901), and Shenzhen Technology Project JCYJ20190806142609035.
\section{Introduction}
\begin{figure}
\includegraphics[width=\columnwidth,trim=0.2in 3in 3.5in 0.3in, clip]{fig/teaser}
\caption{
Learning from policy sketches. The figure shows simplified versions of
two tasks (\textit{make planks} and \textit{make sticks}), each
associated with its own
policy ($\Pi_1$ and $\Pi_2$, respectively).
These policies share an initial high-level action $b_1$: both require the
agent to \textit{get wood} before taking it to an appropriate crafting
station. Even without prior information about how the associated behavior $\pi_1$
should be implemented, knowing that the agent should initially follow the same
subpolicy in both tasks is enough to learn a reusable representation of their
shared structure.
}
\label{fig:teaser}
\vspace{-1em}
\end{figure}
\added{
This paper describes a framework for learning composable deep subpolicies in a
multitask setting, guided only by abstract sketches of high-level behavior.
General reinforcement learning algorithms allow agents to solve
tasks in
complex environments. But tasks featuring extremely delayed rewards or other
long-term structure are often difficult to solve with flat, monolithic policies,
and a long line of prior work has studied methods for learning hierarchical
policy representations
\citep{Sutton99Options,Dietterich00MaxQ,Konidaris07Skills,Hauser08Primitives}.
While unsupervised discovery of these hierarchies is possible
\citep{Daniel12HREPS,Bacon15OptionCritic}, practical approaches often require
detailed supervision in the form of explicitly specified high-level actions,
subgoals, or behavioral primitives \cite{Precup00Options}. These
depend on state representations simple or structured enough that
suitable reward signals can be effectively engineered by hand.
But is such fine-grained supervision actually necessary to
achieve the full benefits of
hierarchy? Specifically, is it necessary to explicitly ground high-level
actions into the representation of the environment? Or is it sufficient to simply inform the
learner about the abstract \emph{structure} of policies, without ever
specifying how high-level behaviors should make use of primitive percepts or actions?
To answer these questions, we explore a multitask reinforcement learning setting where the
learner is presented with \emph{policy sketches}.
Policy sketches are short, ungrounded, symbolic representations of a
task that describe its component parts, as illustrated in \autoref{fig:teaser}. While
symbols might be shared across tasks (\emph{get wood} appears in sketches for
both the \emph{make planks} and \emph{make sticks} tasks), the learner is told nothing about
what these symbols \emph{mean}, in terms of either observations or intermediate rewards.
We present an agent architecture that learns from policy sketches by
associating each high-level action with a parameterization of a low-level subpolicy, and jointly optimizes
over concatenated task-specific policies by tying parameters across shared subpolicies.
We find that this architecture can use the high-level guidance provided by
sketches, without any grounding or concrete definition, to dramatically accelerate learning
of complex multi-stage behaviors. Our experiments indicate that many of the benefits to
learning that come from highly detailed low-level supervision (e.g.\ from subgoal rewards) can
also be obtained from fairly coarse high-level supervision (i.e.\ from policy sketches).
Crucially, sketches are much easier to produce: they require no
modifications to the environment dynamics or reward function, and can be easily provided by
non-experts. This makes it possible to extend the
benefits of hierarchical RL to challenging environments where it may not be possible to specify
by hand the details of relevant subtasks.
We show that our approach substantially
outperforms purely unsupervised methods that do not provide the learner with any task-specific
guidance about how hierarchies should be deployed, and further that the specific use of sketches to
parameterize modular subpolicies makes better use of sketches than conditioning
on them directly.
}
The present work may be viewed as an extension of recent approaches for
learning compositional deep architectures from structured program descriptors
\citep{Andreas16DNMN,Reed15NPI}. Here we focus on learning in interactive
environments. This extension presents a variety of technical challenges, requiring
analogues of these methods that can be trained from sparse,
non-differentiable reward signals without demonstrations of desired system behavior.
Our contributions are:
\begin{itemize}
\item A general paradigm for multitask, hierarchical, deep reinforcement
learning guided by abstract sketches of task-specific policies. \item A concrete recipe for learning from these sketches, built
on a general family of modular deep policy representations
and a multitask actor--critic training objective. \end{itemize}
The modular structure of our approach, which associates every high-level
action symbol with a discrete subpolicy, naturally induces a library of
interpretable policy fragments that are easily recombined.
This makes it possible to evaluate our approach under a variety of different
data conditions: (1) learning the full
collection of tasks jointly via reinforcement, (2) in a zero-shot setting where
a policy sketch is available for a held-out task, and (3) in an adaptation
setting, where sketches are hidden and the agent must learn to adapt a pretrained
policy to reuse high-level actions in a new task. In all cases, our approach
substantially outperforms previous approaches based on explicit decomposition of
the Q function along subtasks \cite{Parr98HAM,Vogel10SARSA}, unsupervised option
discovery \cite{Bacon15OptionCritic}, and several standard policy gradient
baselines.
We consider three families of tasks: a \mbox{2-D} Minecraft-inspired crafting
game (\autoref{fig:tasks}a), in which the agent must acquire particular
resources by finding raw ingredients, combining them together in the proper
order, and in some cases building intermediate tools that enable the agent to
alter the environment itself; a 2-D maze navigation task that requires the agent
to collect keys and open doors; and a 3-D locomotion task (\autoref{fig:tasks}b)
in which a quadrupedal robot must actuate its joints to traverse a narrow winding
cliff.
In all tasks, the agent receives a reward only after the final goal is
accomplished. For the most challenging tasks, involving sequences of four or
five high-level actions, a task-specific agent initially following a random
policy essentially never discovers the reward signal, so these tasks cannot be
solved without considering their hierarchical structure.
We have released code at \url{http://github.com/jacobandreas/psketch}.
\section{Related Work}
The agent representation we describe in this paper belongs to the broader
family of hierarchical reinforcement learners. As
detailed in \autoref{sec:learning},
our approach may be viewed as an instantiation of the \emph{options} framework
first described by \citet{Sutton99Options}. A large body of work describes
techniques for learning options and related abstract actions, in both single-
and multitask settings. Most
techniques for learning options rely on intermediate supervisory signals, e.g.\ to encourage
exploration \citep{Kearns02Exploration} or completion of pre-defined subtasks
\citep{Kulkarni16DeepHierarchical}. An alternative family of approaches employs
post-hoc analysis of demonstrations or pretrained policies to extract reusable
sub-components \citep{Stolle02LearningOptions, Konidaris11SkillTrees, Niekum15Demonstrations}.
Techniques for learning options with less guidance than the present work include
\citet{Bacon15OptionCritic} and \citet{Vezhnevets16STRAW}, and other general
hierarchical policy learners include \citet{Daniel12HREPS},
\citet{Bakker04Hierarchical} and \citet{Menache02QCut}. \added{We will see that
the minimal supervision provided by policy sketches results in (sometimes
dramatic) improvements over fully unsupervised approaches, while being
substantially less onerous for humans to provide compared to the grounded
supervision (such as explicit subgoals or feature abstraction hierarchies) used
in previous work.}
Once a collection of high-level actions exists, agents are faced with the problem
of learning meta-level (typically semi-Markov) policies that invoke appropriate
high-level actions in sequence \citep{Precup00Options}. The learning problem we
describe in this paper is in some sense the direct dual to the problem of
learning these meta-level policies: there, the agent begins with an inventory
of complex primitives and must learn to model their behavior and select among
them; here we begin knowing the names of appropriate high-level actions but
nothing about how they are implemented, and must infer implementations (but not,
initially, abstract plans) from context.
\added{Our model can be
combined with these approaches to support a ``mixed'' supervision condition
where sketches are available for some tasks but not others (\autoref{ssec:generalization}).}
Another closely related line of work is the Hierarchical Abstract Machines (HAM)
framework introduced by \citet{Parr98HAM}. Like our approach, HAMs begin with a
representation of a high-level policy as an automaton (or a more general
computer program; \citeauthor{Andre01ALISP}, \citeyear{Andre01ALISP}; \citeauthor{Marthi04ALISP}, \citeyear{Marthi04ALISP}) and use reinforcement learning to fill in
low-level details.
Because these approaches attempt to
learn a single representation of the Q function for all subtasks and contexts,
they require extremely strong formal assumptions about the form of the reward
function and state representation \citep{Andre02ALISPAbstraction} that the
present work avoids by decoupling the policy representation from the value
function. \added{They perform less effectively when applied to arbitrary state representations
where these assumptions do not hold (\autoref{ssec:multitask}). We are additionally
unaware of past work showing that HAM automata can be automatically inferred
for new tasks
given a pre-trained model, while here we show that it is easy to solve the corresponding
problem for sketch followers (\autoref{ssec:generalization}).}
Our approach is also inspired by a number of recent efforts toward compositional
reasoning and interaction with structured deep models. Such models have been
previously used for tasks involving question answering
\citep{Iyyer14Factoid,Andreas16DNMN} and relational reasoning
\citep{Socher12Semantic}, and more recently for multi-task, multi-robot transfer
problems \citep{Devin16NMN}. In the present work---as in existing approaches employing
dynamically assembled modular networks---task-specific training signals are
propagated through a collection of composed discrete structures with tied
weights. Here the composed structures specify time-varying
policies rather than feedforward computations, and their parameters must be
learned via interaction rather than direct supervision. Another closely related
family of models includes neural programmers \citep{Neelakantan15NP} and
programmer--interpreters \citep{Reed15NPI}, which generate discrete
computational structures but require supervision in the form of output actions
or full execution traces.
\added{
We view the problem of learning from policy sketches as complementary to the
instruction following problem studied in the natural language processing literature.
Existing work on instruction following focuses on mapping from natural language strings
to symbolic action sequences that are then executed by a hard-coded interpreter
\citep{Branavan09PG,Chen11Navigation,Artzi13Navigation,Tellex11Commands}.
Here, by contrast, we focus on learning to execute complex actions given symbolic
representations as a starting point.
Instruction following models may be viewed as joint policies over instructions
and environment observations (so their behavior is not defined in the absence of
instructions), while the model described in this paper naturally supports adaptation to
tasks where no sketches are available. We expect that future work might combine the
two lines of research, bootstrapping policy learning directly from
natural language hints rather than the semi-structured
sketches used here.
}
\section{Learning Modular Policies from Sketches}
\label{sec:learning}
We consider a multitask reinforcement learning problem arising from a family of
infinite-horizon discounted Markov decision processes in a shared environment.
This environment is specified by a tuple $(\mathcal{S}, \mathcal{A}, P,
\gamma)$, with $\mathcal{S}$ a set of states, $\mathcal{A}$ a set of low-level actions,
$P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ a transition
probability distribution, and $\gamma$ a discount factor. Each task $\tau \in
\mathcal{T}$ is then specified by a pair $(R_\tau, \rho_\tau)$, with $R_\tau : \mathcal{S} \to
\mathbb{R}$ a task-specific reward function and $\rho_\tau : \mathcal{S} \to \mathbb{R}$ an
initial distribution over states. For a fixed sequence $\{(s_i, a_i)\}$ of
states and actions obtained from a rollout of a given policy, we will denote
the empirical return starting in state $s_i$ as $q_i := \sum_{j=i+1}^\infty
\gamma^{j-i-1} R(s_j)$. In addition to the components of a standard multitask RL
problem, we assume that tasks are annotated with \emph{sketches} $K_\tau$, each
consisting of a sequence $(b_{\tau 1}, b_{\tau 2}, \ldots)$ of high-level
symbolic labels drawn from a fixed vocabulary $\mathcal{B}$.
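For concreteness, the empirical return $q_i$ defined above can be computed from a finite rollout by a simple backward recursion, since $q_i = R(s_{i+1}) + \gamma\, q_{i+1}$. The helper below is a hypothetical illustration (a truncated rollout stands in for the infinite horizon):

```python
def empirical_returns(rewards, gamma):
    """q_i = sum_{j > i} gamma^(j-i-1) * R(s_j) for each step of a rollout.

    rewards : list of R(s_j) for j = 0..T-1.
    """
    q = [0.0] * len(rewards)
    acc = 0.0
    # walk backwards: q_i = R(s_{i+1}) + gamma * q_{i+1}, with q_{T-1} = 0
    for i in range(len(rewards) - 2, -1, -1):
        acc = rewards[i + 1] + gamma * acc
        q[i] = acc
    return q
```

Note that the sum starts at $j = i+1$, so the reward at the current state does not contribute to $q_i$; for a sparse task reward delivered only at the final state, every earlier step receives a discounted copy of it.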
\subsection{Model}
\label{ssec:model}
We exploit the structural information provided by sketches by constructing for
each symbol $b$ a corresponding \emph{subpolicy} $\pi_b$.
By sharing
each subpolicy across all tasks annotated with the corresponding symbol, our
approach naturally learns the shared abstraction for the corresponding subtask,
without requiring any information about the grounding of that task to be
explicitly specified by annotation.
At each timestep, a
subpolicy may select either a low-level action $a \in \mathcal{A}$ or a
special $\textsc{stop}$ action. We denote the augmented state space $\mathcal{A}^+ :=
\mathcal{A} \cup \{\textsc{stop}\}$. At a high level, this framework is agnostic to the
implementation of subpolicies: any function that takes a representation of the
current state onto a distribution over $\mathcal{A}^+$ will do.
In this paper, we focus on the case where each $\pi_b$ is represented as a neural network.\footnote{
For ease of presentation, this section assumes that these
subpolicy networks are independently parameterized. As
described in \autoref{ssec:envs}, it is also possible to
share parameters between subpolicies, and introduce discrete
subtask structure by way of an \emph{embedding} of each
symbol $b$.
}
These subpolicies may be viewed as options of the kind described by
\citet{Sutton99Options}, with the key distinction that they have no initiation
semantics, but are instead invokable everywhere, and have no explicit
representation as a function from an initial state to a distribution over final
states (instead implicitly using the $\textsc{stop}$ action to terminate).
\begin{algorithm}[t]
\begin{algorithmic}[1]
\STATE $\mathcal{D} \gets \emptyset$
\WHILE{$|\mathcal{D}| < D$}
\mycomment{sample task $\tau$ from curriculum (\autoref{sec:curriculum})}
\STATE $\tau \sim \textrm{curriculum}(\cdot)$
\mycomment{do rollout}
\STATE $d = \{(s_i, a_i, (b_i=K_{\tau,i}), q_i, \tau), \ldots\} \sim \Pi_\tau$
\STATE $\mathcal{D} \gets \mathcal{D} \cup d$
\ENDWHILE
\mycomment{update parameters}
\FOR{$b \in \mathcal{B}, \tau \in \mathcal{T}$}
\STATE $d = \{(s_i, a_i, b', q_i, \tau') \in \mathcal{D} : b' = b, \tau' = \tau\}$
\mycomment{update subpolicy}
\STATE $\theta_b \gets \theta_b + \frac{\alpha}{D}
\sum_d \big(\nabla \log \pi_b(a_i|s_i)\big)\big(q_i - c_\tau(s_i)\big)$
\mycomment{update critic}
\STATE $\eta_\tau \gets \eta_\tau + \frac{\beta}{D}
\sum_d \big(\nabla c_\tau(s_i)\big)\big(q_i - c_\tau(s_i)\big)$
\ENDFOR
\end{algorithmic}
\caption{$\textsc{train-step}(\mathbf{\Pi}, \textrm{curriculum})$}
\label{alg:inner-loop}
\end{algorithm}
Given a fixed sketch $(b_1, b_2, \dots)$, a task-specific policy $\Pi_\tau$ is formed by
concatenating its associated subpolicies in sequence. In particular, the
high-level policy maintains a subpolicy index $i$ (initially $0$), and executes
actions from $\pi_{b_i}$ until the $\textsc{stop}$ symbol is emitted, at which point
control is passed to $\pi_{b_{i+1}}$. We may thus think of $\Pi_\tau$ as
inducing a Markov chain over the state space $\mathcal{S} \times \mathcal{B}$, with
transitions: \begin{align*}
(s, b_i) &\to (s', b_i) &\textrm{with pr.}\quad& {\textstyle \sum_{a \in
\mathcal{A}}} \pi_{b_i}(a | s) \cdot P(s' | s, a)\\
&\to (s, b_{i+1}) &\textrm{with pr.}\quad& \pi_{b_i}(\textsc{stop} | s)
\end{align*}
Note that $\Pi_\tau$ is semi-Markov with respect to the projection
of the augmented state space $\mathcal{S} \times \mathcal{B}$ onto the underlying state
space $\mathcal{S}$. We denote the complete family of task-specific policies
$\mathbf{\Pi} := \bigcup_\tau \{\Pi_\tau\}$, and let each $\pi_b$ be an arbitrary
function of the current environment state parameterized by some weight vector
$\theta_b$. The learning problem is to optimize over all $\theta_b$ to maximize
expected discounted reward \[J(\mathbf{\Pi}) := \sum_\tau
J(\Pi_\tau) := \sum_\tau \mathbb{E}_{s_i \sim \Pi_\tau} \big[ \sum_i \gamma^i
R_\tau(s_i) \big]\] across all tasks $\tau \in \mathcal{T}$.
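The control flow just described (run each subpolicy until it emits $\textsc{stop}$, then advance to the next symbol in the sketch) can be sketched as follows. This is a toy illustration with hypothetical interfaces, not the paper's implementation:

```python
STOP = "STOP"

def run_sketch(sketch, subpolicies, env, max_steps=100):
    """Execute a task policy Pi_tau: run each subpolicy in the sketch
    until it emits STOP, then hand control to the next one.

    sketch      : sequence of symbols (b_1, b_2, ...).
    subpolicies : dict mapping each symbol to a function state -> action,
                  where the action may be STOP.
    env         : object with .state and .step(action) (a stand-in interface).
    """
    trace = []
    for symbol in sketch:
        for _ in range(max_steps):
            action = subpolicies[symbol](env.state)
            if action == STOP:
                break                     # control passes to the next subpolicy
            env.step(action)
            trace.append((symbol, action))
    return trace

# toy environment: the state simply counts the actions taken so far
class Counter:
    def __init__(self):
        self.state = 0
    def step(self, action):
        self.state += 1

# each toy subpolicy acts twice from wherever it started, then stops
def acts_twice(tag):
    start = {}
    def policy(state):
        start.setdefault("s", state)
        return STOP if state - start["s"] >= 2 else tag
    return policy

env = Counter()
trace = run_sketch(["b1", "b2"],
                   {"b1": acts_twice("a"), "b2": acts_twice("c")}, env)
```

Because subpolicies are keyed by symbol, a symbol shared between two sketches automatically reuses the same subpolicy, which is the parameter tying exploited during training.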
\begin{algorithm}[t]
\caption{\textsc{train-loop}()}
\label{alg:outer-loop}
\begin{algorithmic}[1]
\mycomment{initialize subpolicies randomly}
\STATE $\mathbf{\Pi} = \textsc{init}()$
\STATE $\ell_\textrm{max} \gets 1$
\LOOP
\STATE $r_\textrm{min} \gets -\infty$
\mycomment{initialize $\ell_\textrm{max}$-step curriculum uniformly}
\STATE $\mathcal{T}' = \{ \tau \in \mathcal{T} : |K_\tau| \leq \ell_\textrm{max} \}$
\STATE $\textrm{curriculum}(\cdot) = \textrm{Unif}(\mathcal{T}')$
\WHILE{$r_\textrm{min} < r_\textrm{good}$}
\mycomment{update parameters (\autoref{alg:inner-loop})}
\STATE $\textsc{train-step}(\mathbf{\Pi}, \textrm{curriculum})$
\STATE $\displaystyle \textrm{curriculum}(\tau) \propto \mathbb{1}[\tau \in
\mathcal{T}'] (1
- \hat{\mathbb{E}} r_\tau) \quad \forall \tau \in \mathcal{T}$
\STATE $r_\textrm{min} \gets \min_{\tau \in \mathcal{T}'} \hat{\mathbb{E}}\, r_\tau$
\ENDWHILE
\STATE $\ell_\textrm{max} \gets \ell_\textrm{max} + 1$
\ENDLOOP
\end{algorithmic}
\end{algorithm}
\begin{figure}[b]
\centering
\includegraphics[width=0.75\columnwidth, trim=0.1in 5in 4.8in 0.2in, clip]{fig/model}
\caption{
Model overview. Each subpolicy $\pi$ is uniquely associated with a symbol
$b$ and implemented as a neural network that maps from a state $s_i$ to
distributions over $\mathcal{A}^+$, and chooses an action $a_i$ by sampling
from this distribution. Whenever the $\textsc{stop}$ action is sampled, control
advances to the next subpolicy in the sketch.
}
\label{fig:model}
\end{figure}
\subsection{Policy Optimization}
Here that optimization is accomplished via a simple decoupled actor--critic
method. In a standard policy gradient approach, with a single policy $\pi$
with parameters $\theta$, we compute gradient steps
of the form \citep{Williams92Reinforce}:
\begin{equation}
\label{eq:vanilla-pg}
\nabla_\theta J(\pi) = \sum_i \big(\nabla_{\theta} \log
\pi(a_i|s_i)\big)\big(q_i - c(s_i)\big),
\end{equation}
where the baseline or ``critic'' $c$ can be chosen independently of the future
without introducing bias into the gradient. Recalling our previous definition of
$q_i$ as the empirical return starting from $s_i$, this form of the gradient
corresponds to a generalized advantage estimator \citep{Schulman15GAE} with
$\lambda = 1$. Here $c$ achieves close to the optimal variance
\citep{Greensmith04PG} when it is set exactly equal to the state-value function
$V_\pi(s_i) = \mathbb{E}_\pi q_i$ for the target policy $\pi$ starting in state
$s_i$.
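For concreteness, a minimal NumPy sketch of the gradient step in \autoref{eq:vanilla-pg} for a tabular softmax policy. The episode format, baseline values, and learning rate here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pg_step(theta, episode, baseline, lr=0.1):
    """One REINFORCE update with a state-dependent baseline (Eq. 1).
    theta: (S, A) array of logits; episode: list of (s, a, q) with q
    the empirical return from s; baseline: per-state values c(s)."""
    grad = np.zeros_like(theta)
    for s, a, q in episode:
        pi = softmax(theta[s])
        # grad of log pi(a|s) w.r.t. softmax logits: one-hot(a) - pi
        g = -pi
        g[a] += 1.0
        grad[s] += g * (q - baseline[s])  # weight by the advantage
    return theta + lr * grad
```

With `baseline` set to an estimate of $V_\pi$, this reduces the variance of the update without changing its expectation.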
The situation becomes slightly more complicated when generalizing to modular
policies built by sequencing subpolicies. In this case, we will have one
subpolicy per symbol but one critic per \emph{task}. This is because subpolicies
$\pi_b$ might participate in a number of composed policies $\Pi_\tau$, each
associated with its own reward function $R_\tau$. Thus individual subpolicies
are not uniquely identified with value functions, and the aforementioned
subpolicy-specific state-value estimator is no longer well-defined.
We extend the actor--critic method to incorporate the decoupling of policies
from value functions by allowing the critic to vary per-sample (that is,
per-task-and-timestep) depending on the reward function with which the sample is
associated. Noting that
$\nabla_{\theta_b} J(\mathbf{\Pi}) =
\sum_{\tau: b \in K_\tau} \nabla_{\theta_b} J(\Pi_\tau)$, i.e.\ the sum of
gradients of expected rewards across all tasks in
which $\pi_b$ participates, we have:
\begin{align}
\label{eq:decoupled-pg}
\nabla_{\theta_b} &J(\mathbf{\Pi}) = \sum_\tau \nabla_{\theta_b} J(\Pi_\tau) \nonumber \\
&=
\sum_\tau \sum_i \big(\nabla_{\theta_b} \log
\pi_b(a_{\tau i}|s_{\tau i})\big)\big(q_i - c_\tau(s_{\tau i})\big),
\end{align}
where each state-action pair $(s_{\tau i}, a_{\tau i})$ was selected by the
subpolicy $\pi_b$ in the context of the task $\tau$.
Now minimization of the gradient variance requires that each $c_\tau$ actually
depend on the task identity. (This follows immediately by applying the
corresponding argument in \citet{Greensmith04PG} individually to each term in the
sum over $\tau$ in \autoref{eq:decoupled-pg}.) Because the value function is
itself unknown, an approximation must be estimated from data. Here we allow
these $c_\tau$ to be implemented with an arbitrary function approximator
with parameters $\eta_\tau$.
This is trained to minimize a squared
error criterion, with gradients given by
\begin{align}
\nabla_{\eta_\tau} \bigg[ -\frac{1}{2} \sum_i (&q_i - c_\tau(s_i))^2 \bigg] \nonumber \\
&= \sum_i \big( \nabla_{\eta_\tau} c_\tau(s_i) \big)\big(q_i - c_\tau(s_i)\big).
\end{align}
Alternative forms of the advantage estimator (e.g.\ the TD residual $R_\tau(s_i) +
\gamma V_\tau(s_{i+1}) - V_\tau(s_i)$ or any other member of the generalized
advantage estimator family)
can be easily substituted by simply maintaining one such estimator per task.
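As a sketch, the per-task critic update above is gradient ascent on the negative squared error; for a linear critic $c_\tau(s) = \eta_\tau \cdot \phi(s)$, as used in our experiments, one step looks as follows (the feature map and step size below are placeholders):

```python
import numpy as np

def critic_update(eta, batch, lr=0.1):
    """One gradient step on -1/2 * sum_i (q_i - c_tau(s_i))^2 for a
    linear per-task critic c_tau(s) = eta . phi(s).
    `batch` holds (phi(s_i), q_i) pairs collected for one task tau."""
    grad = np.zeros_like(eta)
    for phi, q in batch:
        err = q - eta @ phi   # q_i - c_tau(s_i)
        grad += phi * err     # (grad_eta c_tau(s_i)) * error
    return eta + lr * grad
```

Repeating this step drives $c_\tau$ toward the empirical returns for task $\tau$; one such estimator is maintained per task.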
Experiments (\autoref{ssec:ablations}) show that conditioning on both the state
and the task identity results in noticeable performance improvements, suggesting
that the variance reduction provided by this objective is important for
efficient joint learning of modular policies.
The complete procedure for computing a \emph{single} gradient step is given in
\autoref{alg:inner-loop}. (The outer training loop over these steps, which is
driven by a curriculum learning procedure, is
specified in \autoref{alg:outer-loop}.) This is an on-policy algorithm. In
each step, the agent samples tasks from a task distribution provided
by a curriculum (described in the following subsection). The current family of
policies $\mathbf{\Pi}$ is used to perform rollouts in each sampled task,
accumulating the resulting tuples of (states, low-level actions, high-level
symbols, rewards, and task identities) into a dataset $\mathcal{D}$. Once $\mathcal{D}$
reaches a maximum size $D$, it is used to compute gradients w.r.t.\ both
policy and critic parameters, and the parameter vectors are updated accordingly.
The step sizes $\alpha$ and $\beta$ in \autoref{alg:inner-loop} can be chosen
adaptively using any first-order method.
\subsection{Curriculum Learning}
\label{sec:curriculum}
For complex tasks, like the one depicted in \autoref{fig:tasks}b, it is
difficult for the agent to discover any states with positive reward until many
subpolicy behaviors have already been learned. It is thus a better use of the
learner's time to focus on ``easy'' tasks, where many rollouts will result
in high reward from which appropriate subpolicy behavior can be inferred. But
there is a fundamental tradeoff involved here: if the learner spends too much
time on easy tasks before being made aware of the existence of harder ones, it
may overfit and learn subpolicies that no longer generalize or exhibit the
desired structural properties.
To avoid both of these problems, we use a curriculum learning scheme
\citep{Bengio09Curriculum} that allows
the model to smoothly scale up from easy tasks to more difficult ones while
avoiding overfitting. Initially the model is presented with tasks associated
with short sketches. Once average reward on all these tasks reaches a certain
threshold, the length limit is incremented. We assume that rewards across tasks
are normalized with maximum achievable reward $0 < q_i < 1$. Let
$\hat{\mathbb{E}}r_\tau$ denote the empirical estimate of the expected reward for
the current policy on task $\tau$. Then at each timestep, tasks are sampled in
proportion to $1 - \hat{\mathbb{E}}r_\tau$, which by assumption must be positive.
Intuitively, the tasks that provide the strongest learning signal are those in
which (1) the agent does not on average achieve reward close to the upper bound,
but (2) many episodes result in high reward. The expected reward
component of the curriculum addresses condition (1) by ensuring that time is not
spent on nearly solved tasks, while the length bound component of the curriculum
addresses condition (2) by ensuring that tasks are not attempted until
high-reward episodes are likely to be encountered.
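A minimal sketch of this sampling rule — weight each task by $1 - \hat{\mathbb{E}}r_\tau$ and zero out tasks whose sketch exceeds the current length bound. The task names and reward estimates in the example are invented for illustration:

```python
def curriculum_weights(sketch_len, reward_est, ell_max):
    """Return a normalized sampling distribution over tasks:
    proportional to 1 - E[r_tau], restricted to tasks whose sketch
    length is at most ell_max."""
    w = {t: (1.0 - reward_est[t]) if sketch_len[t] <= ell_max else 0.0
         for t in sketch_len}
    z = sum(w.values())
    return {t: wi / z for t, wi in w.items()}
```

Because rewards are normalized to lie in $(0, 1)$, every admissible task keeps a strictly positive weight, and nearly solved tasks are sampled rarely.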
Experiments show that both components of this curriculum learning scheme improve
the rate at which the model converges to a good policy
(\autoref{ssec:ablations}).
The complete curriculum-based training procedure is specified in
\autoref{alg:outer-loop}. Initially, the maximum sketch length
$\ell_\textrm{max}$ is set to 1, and the curriculum initialized to sample
length-1 tasks uniformly. (Neither of the environments we consider in this paper
features any length-1 tasks; in this case, observe that \autoref{alg:outer-loop}
will simply advance to length-2 tasks without any parameter updates.) For each
setting of $\ell_\textrm{max}$, the algorithm uses the current collection of
task policies $\mathbf{\Pi}$ to compute and apply the gradient step described in
\autoref{alg:inner-loop}. The rollouts obtained from the call to
$\textsc{train-step}$ can also be used to compute reward estimates $\hat{\mathbb{E}}
r_\tau$; these estimates determine a new task distribution for
the curriculum. The inner loop is repeated until the reward threshold
$r_\textrm{good}$ is exceeded, at which point $\ell_\textrm{max}$ is incremented
and the process repeated over a (now-expanded) collection of tasks.
\section{Experiments}
\label{sec:experiments}
We evaluate the performance of our approach
in three environments: a crafting environment, a maze navigation environment,
and a cliff traversal environment. These environments involve various kinds of
challenging low-level control: agents must learn to avoid
obstacles, interact with various kinds of objects, and relate fine-grained joint
activation to high-level locomotion goals. They also
feature hierarchical structure: most rewards are provided only after the agent
has completed two to five high-level actions in the appropriate sequence, without any intermediate goals to indicate progress towards completion.
\subsection{Implementation}
In all our experiments, we implement each subpolicy as a feedforward neural network
with ReLU nonlinearities and a hidden layer with 128 hidden units,
and each critic as a linear
function of the current state. Each subpolicy network receives as input a set of
features describing the current state of the environment, and outputs a
distribution over actions. The agent acts at every timestep by sampling
from this distribution.
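A sketch of the subpolicy architecture just described — a single 128-unit ReLU layer feeding a softmax over the action set. The weight initialization and the input/output sizes in the example are placeholders, not values from our experiments:

```python
import numpy as np

class Subpolicy:
    """Feedforward subpolicy: state features -> ReLU(128) -> softmax
    over actions (the augmented action set includes STOP)."""
    def __init__(self, n_features, n_actions, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def action_probs(self, s):
        h = np.maximum(0.0, s @ self.W1 + self.b1)  # ReLU hidden layer
        z = h @ self.W2 + self.b2
        e = np.exp(z - z.max())                     # stable softmax
        return e / e.sum()
```

At each timestep the agent samples an action from `action_probs(s)`; in our experiments the parameters of one such network are tied across every task whose sketch contains the associated symbol.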
The gradient steps given in lines 8 and 9 of \autoref{alg:inner-loop} are
implemented using \textsc{RMSProp} \citep{Tieleman12RMSProp} with a step size of 0.001 and gradient
clipping to a unit norm. We take the batch size $D$ in
\autoref{alg:inner-loop} to be 2000, and set $\gamma=0.9$ in both environments.
For curriculum learning, the improvement threshold $r_\textrm{good}$ is
0.8.
\begin{figure}
\centering
\raisebox{4.5em}{(a)} \hspace{1em} \includegraphics[width=2.2in, trim=0.1in
4.2in 4.1in 0.2in, clip]{fig/craft} \\[1em]
\raisebox{4.5em}{(b)} \hspace{1em} \includegraphics[width=2.2in, trim=0.1in 4.2in 4.1in 0.2in, clip]{fig/spider.pdf}
\caption{
Examples from the crafting and cliff environments used in this paper. An
additional maze environment is also investigated.
(a)
In the crafting environment, an agent seeking to pick up the gold nugget in
the top corner must first collect wood (1) and iron (2), use a workbench to
turn them into a bridge (3), and use the bridge to cross the water (4).
(b)
In the cliff environment, the agent must reach a goal position by traversing
a winding sequence of tiles without falling off. Control takes place at the level
of individual joint angles; high-level behaviors like ``move north'' must be learned.
}
\label{fig:tasks}
\vspace{-1em}
\end{figure}
\subsection{Environments}
\label{ssec:envs}
\begin{figure*}
{
\centering
\includegraphics[width=0.32\textwidth]{fig/craft_all}
\includegraphics[width=0.32\textwidth]{fig/maze_all}
\includegraphics[width=0.32\textwidth]{fig/path_all} \\
}
\hspace{7.4em} (a) \hspace{14.3em} (b) \hspace{14.4em} (c)
\caption{
Comparing modular learning from sketches with standard RL baselines.
\textbf{Modular} is the approach described in this paper, while
\textbf{Independent} learns a separate policy for each task, \textbf{Joint}
learns a shared policy that conditions on the task identity, \textbf{Q
automaton} learns a single network to map from states and action symbols to Q
values, and \textbf{Opt--Crit} is an unsupervised option learner.
Performance for the best iteration of the (off-policy) Q automaton is plotted.
Performance is shown in (a) the crafting environment, (b) the maze environment,
and (c) the cliff environment.
The modular approach is
eventually able to achieve high reward on all tasks, while the baseline
models perform considerably worse on average.
}
\label{fig:multitask}
\vspace{-.5em}
\end{figure*}
\textbf{The crafting environment} (\autoref{fig:tasks}a) is
inspired by the popular game Minecraft, but is implemented in a discrete \mbox{2-D} world.
The agent may interact with objects in the world by facing them and
executing a special \textsc{use} action. Interacting with raw materials
initially scattered around the environment causes them to be added to an
inventory. Interacting with different crafting stations causes objects in the
agent's inventory to be combined or transformed. Each task in
this game corresponds to some crafted object the agent must produce; the most
complicated goals require the agent to also craft intermediate ingredients, and
in some cases build tools (like a pickaxe and a bridge) to reach ingredients
located in initially inaccessible regions of the environment.
\textbf{The maze environment} (not pictured)
corresponds closely to the ``light world'' described by
\citet{Konidaris07Skills}. The agent is placed in a discrete world consisting of
a series of rooms, some of which are connected by doors. Some doors require that
the agent first pick up a key to open them. For our experiments, each task
corresponds to a goal room (always at the same position relative to the agent's
starting position) that the agent must reach by navigating through a sequence of
intermediate rooms. The agent has one sensor on each side of its body, which
reports the distance to keys, closed doors, and open doors in the corresponding
direction. Sketches specify a particular sequence of directions for the agent
to traverse between rooms to reach the goal.
The sketch always corresponds to a viable traversal from the
start to the goal position, but other (possibly shorter) traversals may also exist.
\textbf{The cliff environment} (\autoref{fig:tasks}b) is
intended to demonstrate the applicability of our approach
to problems involving high-dimensional continuous control. In this
environment, a quadrupedal robot \cite{Schulman15TRPO}
is placed on a variable-length winding
path, and must navigate to the end without falling off.
This task is designed to provide a substantially more challenging
RL problem, due to the fact that the walker must learn the low-level
walking skill before it can make any progress, but has simpler
hierarchical structure than the crafting environment. The
agent receives a small reward for making progress toward the
goal, and a large positive reward for reaching the goal square,
with a negative reward for falling off the path.
A
listing of tasks and sketches is given in \autoref{app:tasks}.
\subsection{Multitask Learning}
\label{ssec:multitask}
The primary experimental question in this paper is whether the extra structure
provided by policy sketches alone
is enough to enable fast learning of coupled
policies across tasks.
We aim to explore the differences between the approach described in
\autoref{sec:learning} and relevant prior work that performs
either unsupervised or weakly supervised multitask learning of hierarchical policy structure. Specifically, we compare our \textbf{modular} approach to: \\[-1.5em]
\begin{enumerate}
\item Structured hierarchical reinforcement learners:
\begin{enumerate}
\item[(a)] the fully unsupervised \textbf{option--critic} algorithm of \citet{Bacon15OptionCritic}
\item[(b)] a \textbf{Q automaton} that attempts to explicitly represent the Q function for each task / subtask combination (essentially a HAM \citep{Andre02ALISPAbstraction} with a deep state abstraction function)
\end{enumerate}
\item Alternative ways of incorporating sketch data into standard policy
gradient methods:
\begin{enumerate}
\item[(c)] learning an \textbf{independent} policy for each task
\item[(d)] learning a \textbf{joint} policy across all tasks, conditioning
directly on both environment features and a representation of the complete
sketch\\[-1.5em]
\end{enumerate}
\end{enumerate}
The joint and independent models performed best when trained
with the same curriculum described in \autoref{sec:curriculum}, while the
option--critic model performed best with a length-weighted curriculum that
has access to all tasks from the beginning of training.
Learning curves for baselines and the modular model are shown in
\autoref{fig:multitask}. It can be seen that in all environments, our approach
substantially outperforms the baselines: it induces policies with substantially
higher average reward and converges more quickly than the policy gradient
baselines. It can further be seen in \autoref{fig:multitask}c that after
policies have been learned on simple tasks, the model is able to rapidly adapt
to more complex ones, even when the longer tasks involve high-level actions not
required for any of the short tasks
(\autoref{app:tasks}).
Having demonstrated the overall effectiveness of our approach, our remaining
experiments explore (1) the importance of various components of the training
procedure, and (2) the learned models' ability to generalize or adapt to
held-out tasks. For compactness, we restrict our consideration to the crafting
domain, which features a larger and more diverse range of tasks and high-level
actions.
\begin{figure}[t]
\vspace{-.5em}
\strut
\footnotesize
\hspace{-10pt}
\includegraphics[height=3.3cm,trim=25pt 0pt 10pt 0pt,clip]{fig/critics}~~~~ \includegraphics[height=3.3cm,trim=50pt 0pt 25pt 0pt,clip]{fig/curricula}
\\[-0.5em]\strut\hspace{5.3em}(a)\hspace{13em}(b)
\\[-1.5em]
\begin{center}
\includegraphics[height=3.3cm,trim=0pt 0pt 1cm 0pt,clip]{fig/craft_bytask}
\\[-0.5em]
(c)
\end{center}
\vspace{-1em}
\caption{
Training details in the crafting domain. (a) Critics: lines labeled ``task'' include a
baseline that varies with task identity, while lines labeled ``state''
include a baseline that varies with state identity. Estimating a
baseline that depends on both the representation of the current state and
the identity of the current task is better than either alone or a constant
baseline. (b) Curricula: lines labeled ``len'' use a curriculum
with iteratively increasing sketch lengths, while lines labeled ``wgt'' sample
tasks in inverse proportion to their current reward. Adjusting the sampling
distribution based on both task length and performance return improves convergence.
(c) Individual task performance. Colors correspond to task length.
Sharp steps in the learning curve correspond to
increases of $\ell_\textrm{max}$ in the curriculum.
}
\label{fig:ablations}
\vspace{-1em}
\end{figure}
\subsection{Ablations}
\label{ssec:ablations}
In addition to the overall modular parameter-tying structure induced by our
sketches, the key components of our training procedure are the decoupled critic
and the curriculum. Our next experiments investigate the extent to which these
are necessary for good performance.
To evaluate the critic, we consider three ablations: (1) removing the
dependence of the model on the environment state, in which case the baseline is
a single scalar per task; (2) removing the dependence of the model on the task,
in which case the baseline is a conventional generalized advantage estimator;
and (3) removing both, in which case the baseline is a single scalar, as in a
vanilla policy gradient approach. Results are shown in \autoref{fig:ablations}a.
Introducing both state and task dependence into the baseline leads to faster
convergence of the model: the approach with a constant baseline achieves less
than half the overall performance of the full critic after 3 million episodes.
Introducing task and state dependence independently improves this performance;
combining them gives the best result.
We also investigate two aspects of our curriculum learning scheme: starting with
short examples and moving to long ones, and sampling tasks in inverse proportion
to their accumulated reward. Experiments are shown in \autoref{fig:ablations}b.
Both components help;
prioritization by both length and weight gives the best
results.
\subsection{Zero-shot and Adaptation Learning}
\label{ssec:generalization}
In our final experiments, we consider the model's ability to generalize
beyond the standard training condition.
We first consider two tests of generalization: a
\textbf{zero-shot} setting, in which the model is provided a sketch for the new
task and must immediately achieve good performance, and an \textbf{adaptation}
setting, in which no sketch is provided and the model must learn the form of a
suitable sketch via interaction in the new task.
We hold out two length-four tasks from the full inventory used in
\autoref{ssec:multitask}, and train on the remaining tasks. For zero-shot
experiments, we simply form the concatenated policy described by the sketches of
the held-out tasks, and repeatedly execute this policy (without learning) in
order to obtain an estimate of its effectiveness. For adaptation experiments,
we consider ordinary RL over high-level actions $\mathcal{B}$ rather than
low-level actions $\mathcal{A}$, implementing the high-level learner with the same agent architecture
as described in \autoref{ssec:model}. Note that the Independent and Option--Critic models cannot
be applied to the zero-shot evaluation, while the Joint model cannot be
applied to the adaptation baseline (because it depends on pre-specified sketch
features). Results are shown in \autoref{tab:generalization}. The held-out
tasks are sufficiently challenging that the baselines are unable to obtain more
than negligible reward: in particular, the joint model overfits to the training
tasks and cannot generalize to new sketches, while the independent model cannot
discover enough of a reward signal to learn in the adaptation setting.
The modular model does comparatively well: individual subpolicies succeed
in novel zero-shot configurations (suggesting that they have in fact discovered
the behavior suggested by the semantics of the sketch) and provide a suitable
basis for adaptive discovery of new high-level policies.
\begin{table}[t]
\centering
{\footnotesize
\begin{tabular}{lccc}
\toprule
Model & Multitask & 0-shot & Adaptation \\
\midrule
Joint & .49 & .01 & -- \\
Independent & .44 & -- & .01 \\
Option--Critic & .47 & -- & .42 \\
Modular (ours) & \bf .89 & \bf .77 & \bf .76 \\
\bottomrule
\end{tabular}
}
\caption{
Accuracy and generalization of learned models in the crafting domain. The table
shows the task completion rate for each approach after convergence under
various training conditions.
\textbf{Multitask} is the multitask training condition described in
\autoref{ssec:multitask}, while \textbf{0-Shot} and \textbf{Adaptation} are the
generalization experiments described in \autoref{ssec:generalization}.
Our modular approach consistently achieves the best performance.
}
\label{tab:generalization}
\vspace{-1em}
\end{table}
\section{Conclusions}
We have described an approach for multitask learning of deep multitask policies
guided by symbolic policy sketches. By associating each symbol appearing in a
sketch with a modular neural subpolicy, we have shown that it is possible to
build agents that share behavior across tasks in order to achieve success in
tasks with sparse and delayed rewards. This process induces an inventory of
reusable and interpretable subpolicies which can be employed for zero-shot
generalization when further sketches are available, and hierarchical
reinforcement learning when they are not. Our work suggests that these sketches,
which are easy to produce and require no grounding in the environment, provide
an effective scaffold for learning hierarchical policies from minimal
supervision.
\section*{Acknowledgments}
JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei
Fellowship.
In the last few decades, observations from ground- and space-based telescopes coupled to theoretical models have built the Lambda cold dark matter ($\Lambda$CDM) model. In this framework galaxies form within dark matter haloes that grow hierarchically \citep[e.g.][]{Gunn72, White78, Perlmutter99}.
Several, and often competing, processes then lead to the growth, evolution and final fate of galaxies inside these haloes, giving rise to the diverse morphology, colours, and structural properties of present-day galaxies. One of the major tasks of modern theories of galaxy formation is thus to describe in detail the physical processes that underpin this observed evolution.
Galaxies grow both in mass and size \citep{Muzzin13,van-der-Wel14} by acquiring gas either through cooling of a hot gas halo, or via cold gas streams fed by cosmic filaments \citep[e.g.][]{Keres05,Dekel06,van-de-Voort11}. The process of star formation converts the gas into stars and it is tightly related to the net balance of gas accretion and ejection \citep{Bouche10, Dave12, Lilly13, Sharma19}. Feedback processes related to stellar winds, supernova explosions, and active galactic nuclei eject gas back into the haloes surrounding galaxies: the circumgalactic medium (CGM). While some of this gas falls back on the galaxy in a fountain-like mode \citep{Fraternali08}, a fraction of it can leave the halo of galaxies contributing to the metal enrichment of the intergalactic medium \citep[IGM, e.g.][]{Dekel86,Schaye03,Springel05,Oppenheimer10}. Within this framework, it becomes crucial for an effective theory of galaxy evolution to understand how inflows and outflows interact and coexist within the CGM \citep[e.g.][]{Steidel10,Tumlinson17}, that is the gaseous component surrounding galaxies.
At the same time, galaxies do not evolve in isolation, but are accreted onto more massive structures, comprising groups or clusters of galaxies. Indeed, it has long been known that galaxies in dense environments have different properties than those in less dense environments, both in terms of gas content and structural properties \citep[e.g.][]{Oemler74,Dressler80,Giovanelli85,Balogh04,Peng10,Fossati17}. Environmental processes, triggered by the interactions of galaxies with one another, with the hot medium in massive haloes, and with the IGM, further regulate the gas supply of galaxies \citep[e.g.][]{Boselli06}, leading to a different evolution compared to more isolated galaxies.
Significant progress has been made in understanding the gas-galaxy co-evolution since the advent of large spectroscopic surveys of galaxies at $z\lesssim 1$ (e.g. SDSS; \citealt{York00} or GAMA; \citealt{Driver11}). These galaxy surveys have been complemented by spectroscopic observations of quasars allowing for detailed studies of the CGM in absorption as a function of galaxy properties (including mass, star-formation rates), and their environment \citep[e.g.][]{Prochaska11,Tumlinson13,Tumlinson17,Stocke13,Bordoloi14,Tejos14,Finn16,Kauffmann17}.
These studies reveal the ubiquitous presence of a multiphase, enriched, and kinematically-complex CGM surrounding every galaxy \citep{Werk14, Werk16}.
This picture of a multiphase CGM has been extended to $z>1$ thanks to extensive observational campaigns with multi-object spectrographs on 8-10m telescopes \citep[e.g.][]{Steidel10,Rubin10,Crighton11,Rudie12,Turner14,Tummuangpak14,Turner17}. Despite these advancements, however, our view of the CGM at early cosmic epochs has been mostly limited to star-forming galaxies at the bright end of the UV luminosity function, and to scales of $\approx 0.1-1~\rm Mpc$ around galaxies, due to the difficulty in obtaining spectroscopy of samples of objects at small projected separations from the quasars using traditional multi-objects spectrographs.
These limitations have recently been lifted by integral field spectrographs which have been deployed at the largest observing facilities, including the Multi Unit Spectroscopic Explorer \citep[MUSE;][]{Bacon10} at the ESO Very Large Telescope, and the Keck Cosmic Web Imager \citep[KCWI;][]{Morrissey18} at the Keck Observatory. In particular, thanks to its large 1 arcmin$^2$ field of view, its extended wavelength coverage in the optical, and its exquisite sensitivity, MUSE has become the ideal instrument for studies of the CGM of galaxies in quasar fields \citep[e.g.][]{Schroetter16,Fumagalli16, Fumagalli17a, Peroux17, Bielby17a, Klitsch18, Chen19a}.
Leveraging the unique features of this instrument, we have designed an observational campaign to acquire very deep MUSE observations in the field centred at 21$^{h}$:42$^{m}$:24$^{s}$,$-$44$^{\circ}$:19$^{m}$:48$^{s}$
(hereafter the MUSE Ultra Deep Field or MUDF). This field stands out for its rare property of hosting two bright quasars at $z\approx 3.22$ that probe the IGM and CGM of intervening galaxies with two sightlines $\approx 60$ arcsec apart. Another quasar lies at the same redshift at $\approx 8$ arcmin separation, making this system a quasar triplet \citep{Francis93}. Upon its completion, this programme will acquire $\approx 200~$hours of MUSE data (corresponding to $\approx 150$ hours on source) in a $1.5 \times 1.2$ arcmin$^2$ region around the two quasars (ESO PID 1100.A$-$0528). This programme is complemented by deep high-resolution spectroscopy of the quasars using the UV and Visual Echelle Spectrograph \citep[UVES;][]{Dekker00} at the VLT (ESO PIDs 65.O$-$0299, 68.A$-$0216, 69.A$-$0204, 102.A$-$0194),
and by the deepest spectroscopic survey (90 orbits in a single field) in the near infrared using the Wide Field Camera 3 instrument on board the {\it Hubble Space Telescope}, together with deep 8-orbit near UV imaging ({\it HST} PIDs 15637 and 15968).
These combined datasets will enable us to achieve several goals. First and foremost, we will connect the presence of gas in the CGM of galaxies with their properties and their environment from $z\approx 3$ to the present day. Without a pre-selection for UV and optically bright sources, we will have a unique vantage point on the low mass galaxy population up to $z\sim3$. Furthermore, the MUDF hosts notable structures as a function of redshift. For instance, a correlated strong $\rm{H\,{\sevensize I}}$\ absorber detected in both sightlines at $z\approx 3$ hints at an extended structure running across the field of view \citep{DOdorico02}. Moreover, the presence of a quasar pair is suggestive of an overdense region at $z\approx 3.22$, which is predicted to lie at the intersection of filaments in the cosmic web. Indeed, in the first paper of this series, \citet{Lusso19} studied the morphology of the giant Ly$\alpha$\ nebulae surrounding the quasars, finding an elongation of the ionized gas along the line connecting the two quasars. In the future, once we have the full dataset, we will search for the presence of ionized gas in this putative filament. The depth of the observations will also provide spectra of exquisite quality for a few hundred objects. Thanks to our multiwavelength dataset from the near-UV to the near-IR, we will study the properties and structure of low-mass galaxies across a large fraction of cosmic time.
In this paper, we present the survey design and the details of the MUSE observations and data reduction.
As a first application we focus on the connection of enriched cool gas ($T\sim 10^4$ K), as traced by the \mbox{Mg\,{\sc ii}}\ $\lambda \lambda\ 2796,2803 \AA$ absorption doublet in the quasar spectra, with the galaxy population and its environment at $0.5<z<1.5$.
Even though the full MUSE dataset is still being collected, the observations available to date already provide an excellent dataset for an accurate reconstruction of the local galaxy environment, and of the physical properties of galaxies at $z \lesssim 1.5$. Several studies have used the \mbox{Mg\,{\sc ii}}\ doublet to trace gas with similar column densities to that detected through 21-cm atomic hydrogen observations \citep[e.g.][]{Bergeron86,Kacprzak08,Steidel94,Chen08,Chen10,Gauthier13,Bordoloi14a,Nielsen15,Schroetter16,Nielsen18,Rubin18,Rubin18a}. These studies have found that \mbox{Mg\,{\sc ii}}\ absorbers trace the CGM of galaxies and possibly outflowing gas up to a distance of $\sim 100$ kpc \citep{Kacprzak08, Chen10}. This transition is therefore ideal to study the CGM of galaxies in groups and in isolation in the MUDF, for the first time in a very deep and complete dataset.
The paper is structured as follows: first we present the MUDF survey strategy, the data reduction procedures, and the quality validation of the MUSE data (Section \ref{obs_datared}), and of the high-resolution UVES spectroscopy (Section \ref{obs_hires}). We then describe the procedures adopted to extract the sources and their properties (Section \ref{sec_galaxyprop}), and the reconstruction of the local environment by searching for groups of galaxies in the field (Section \ref{sec_groups}). In Section \ref{sec_mgiiabs}, we describe a novel method to fit the high resolution quasar spectra to extract metal absorption profiles and we present our results on the correlation of absorbers and galaxies in groups and in isolation. We conclude with a discussion of these results (Section \ref{sec_discussion}) and with a summary of our findings (Section \ref{sec_conclusions}).
Throughout this paper, we assume a flat $\Lambda$CDM cosmology with $H_0 = 67.7~{\rm km~s^{-1}~Mpc^{-1}}$ and $\Omega_m = 0.307$ \citep{Planck16}. All magnitudes are expressed in the AB system, distances are in proper units, and we assume a \citet{Chabrier03} stellar initial mass function.
\section{MUSE Observations} \label{obs_datared}
\subsection{Survey strategy and current status}
The science goals of the MUDF programme include the study of the galaxy population around and along the line of sight to the quasar pair, and a deep search for Ly$\alpha$\ emission from the putative filaments that are expected to connect the two quasars at $z\approx 3.22$.
Because the projected distance of these quasars is $\approx 62$ arcsec on the sky, a single Wide Field MUSE pointing of 60 arcsec on a side (the exact shape is trapezoidal) would not allow a full mapping of the area of interest. For these reasons, we designed an observational strategy that includes two heavily-overlapping pointings: { North-West and South-East (hereafter named simply North and South)}, the centres of which are shown as black crosses in Figure \ref{fig:MUDFexpmap}. Throughout the entire survey, we plan to collect $\sim 200$ frames dithered around each of these centers. The nominal exposure time of each frame is 1450s and different frames include small on-sky dithers ($\approx 3-4~$ arcsec), as well as 10 deg rotations of the instrument to reduce systematic errors arising from the different response of the 24 spectrographs and detectors of MUSE. While multiple exposures will be taken at the same rotation angle, the sequence of dithers has been designed to ensure that those exposures will not have the same centre and orientation.
The final survey footprint will be elliptical, with a position angle of $\approx -45$ deg (North through East), and major and minor axes of $\approx 110$ and 90 arcsec, respectively. At the time of writing, we have reached $\sim 35\%$ completion, and Figure \ref{fig:MUDFexpmap} shows the exposure map generated with these data overlaid on a combination of white-light images from the MUSE data itself and from the Dark Energy Survey (DES) Data Release 1 \citep{Abbott18} outside the MUSE footprint. The survey design prioritises the collection of deeper data in the area between the two quasars, where our science goals warrant maximum sensitivity. The outer and inner black dashed contours mark the regions with at least 15 and 30 hours of exposure, respectively.
The instrument setup makes use of the MUSE Wide Field Mode with extended wavelength coverage in the blue (4650$-$9300$\AA$), in order to search for Ly$\alpha$\ emitting galaxies down to $z\approx 3$. We also take advantage of the Ground Layer Adaptive Optics module (GALACSI) which uses artificial laser guide stars to improve the image quality by partially correcting for atmospheric turbulence. In this way, the observations can be performed under a wider range of natural seeing conditions without compromising the image quality of the final mosaics. However, this setup implies that we cannot use data between 5760$\AA$ and 6010$\AA$, a range affected by the sodium line generated by the laser beam. Moreover, laser induced Raman scattering of molecules in the atmosphere creates emission lines outside this range. These lines are removed as part of the sky subtraction data reduction steps with no impact on the data quality.
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{musefov_expmap.pdf}
\caption{Exposure map (colour scale) of the MUSE observations in the MUDF overlaid on an optical white light image from the MUSE data where available or from the Dark Energy Survey { combined} $g,r,i$ images elsewhere { (shown in grey on the same surface brightness scale). North is up and East is left.} The outer and inner black dashed contours mark the regions with at least 15 hours and 30 hours of exposure, respectively in this partial dataset. { The black and the white crosses mark the pointing centres and the position of the two quasars, respectively}.}
\label{fig:MUDFexpmap}
\end{figure}
In this paper, we present results obtained with the first 44.1 hours of observations. The data are collected as part of ESO programme 1100.A-0528. The first run (A) was executed in visitor mode on 16th August 2017, where we acquired 19 exposures of 1270s each in dark-time. The conditions were excellent with clear skies and an image quality of 0.4-0.6 arcsec in the final reconstructed frames. After this successful run, further dark-time observations have been executed in service mode (runs B and C) under similarly good observing conditions. As of June 2019, we have acquired 93 frames with an exposure time of 1450s each. In total we have acquired 159ks of data, corresponding to 44.1h. The image quality on the final coadd has a FWHM=0.57 arcsec as measured by Moffat profile fits to point sources.
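The FWHM quoted above follows from the best-fitting Moffat parameters through a standard closed-form relation; a minimal sketch (the standard Moffat parametrisation, not code from the actual pipeline):

```python
import numpy as np

def moffat_fwhm(alpha, beta):
    """FWHM of a Moffat profile I(r) = I0 * (1 + (r/alpha)**2)**(-beta).

    Setting I(r) = I0/2 and solving for r gives the half width below;
    alpha sets the core width and beta controls the wings.
    """
    return 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)
```

For large beta the profile tends to a Gaussian, while small beta produces the extended wings typical of seeing-limited point sources.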
\subsection{Data reduction}
The data reduction procedure follows the methodology described in previous works by our team \citep[][]{Fumagalli16,Fumagalli17,Lofthouse19}. In brief, we use the ESO pipeline \citep[v2.4.1,][]{Weilbacher14} to reduce the calibrations (bias, flat, arcs, and standard stars) and apply them to the individual exposures. We also use this software to resample the detector values into cubes that are then sky subtracted using models of the sky continuum and sky lines that are matched to the data extracted from the darkest pixels in each frame. The individual exposures are then aligned by using the position of point sources in the field, and we generate a first coadd of the frames. We register this initial coadd to match the WCS coordinates of the two quasars from the Data Release 2 of the {\it GAIA} survey \citep{Gaia18}. The combination of relative offsets (arising from dithers) and this absolute WCS offset (arising from the pointing accuracy of MUSE) is then propagated into the pixel-tables of each exposure. We then reconstruct the cubes again for each exposure, this time on a pre-defined spatial grid of 540$\times$540 pixels, with each voxel (volumetric pixel) measuring 0.2 arcsec in the spatial direction and 1.25\AA\ in the spectral direction. The size of the grid has been chosen such that all the spatially aligned exposures of the programme will fall onto it, which eliminates the need for re-projecting the data when new observations are acquired.
The cubes produced by the ESO pipeline are not yet fully science-grade, as they are still affected by an uneven spatial illumination and by residuals from sky lines subtraction \citep{Bacon17}. To correct for these effects, we post-process the individually aligned, reconstructed and un-skysubtracted exposures using routines from the {\sc CubExtractor} package (v1.8, {\sc CubEx} hereafter, Cantalupo in prep.; see \citealt{Cantalupo19} for a description of the algorithms). The adopted procedure follows earlier work using MUSE data \citep{Borisova16, Fumagalli16, Fumagalli17}.
In brief, we use the {\sc CubeFix} tool to correct residual differences in the relative response of the 24 MUSE IFUs and of individual slices, which are not fully corrected by flat-field calibration frames. {\sc CubeFix} improves the illumination uniformity by measuring the average illumination in each stack
(the MUSE FoV is composed of 24 IFUs which are further made of 4 stacks of 12 slices), as a function of wavelength on white-light images generated on-the-fly from the cube. We then use the {\sc CubeSharp} tool for sky subtraction. The algorithm performs a local sky subtraction, including empirical corrections of the sky line spread function (LSF). The combination of these two routines is applied twice, by using the first illumination-corrected and sky-subtracted cube to mask the sources in the field for the second iteration. The masking step is critical to measure the true instrumental illumination in each stack and to achieve a high-quality sky subtraction.
After this first double pass of the {\sc CubEx} tools on the individual frames, we coadd them with mean statistics and 3$\sigma$ clipping to generate a deep white light cube. This cube is then used to detect and mask the sources and the deep mask is then fed into {\sc CubeFix} and {\sc CubeSharp} for a final run. After this step we measure the FWHM of point sources, their average flux and their position with respect to the {\it GAIA} astrometry using Moffat fits to the white light images of each exposure. The distributions of these values across all the exposures are then inspected to ensure that all the frames are correctly aligned and flux calibrated. The photometry is consistent at a 4\% r.m.s. level across frames and the astrometric precision is within 0.05 and 0.03 arcsec in R.A. and Dec. respectively, i.e. $<10\%$ of the spatial resolution. Moreover, no frame is a $>3 \sigma$ outlier in any of these metrics, and we therefore include all frames in the final combination.
Lastly, we combine the individual cubes with a $3\sigma$ clipping rejection of outliers and both with mean and median statistics. We also generate mean combines obtained with two independent halves of the exposures. These products are useful to correctly identify weak emission-line sources in the cube from residual artefacts (e.g. residuals of cosmic ray hits), which are likely to appear only in one of the two independent combines. { The {\sc CubEx} reduction pipeline significantly improves the quality of the illumination uniformity across the entire observed area. We quantified the flatness of the illumination by comparing the standard deviation of the flux in sky pixels of the white-light image from the {\sc CubEx} processing relative to the ESO reduction, finding a ratio of 0.23. This means that the products used in this work are four times deeper than what would have been possible to achieve with the standard pipeline.}
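The sigma-clipped combination can be sketched as follows (an illustrative implementation; the production run operates on full cubes with propagated variances and source masks):

```python
import numpy as np

def sigma_clipped_combine(frames, nsigma=3.0, niter=3, statistic="mean"):
    """Combine a stack of frames (n_exp, ny, nx) with iterative sigma clipping.

    An illustrative stand-in for the cube-combination step: at each
    iteration, voxels deviating from the stack mean by more than
    `nsigma` standard deviations are masked, then the mean or median
    of the surviving values is returned.
    """
    data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(niter):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        outlier = (np.abs(data - mean) > nsigma * std).filled(False)
        data = np.ma.masked_where(outlier, data)
    if statistic == "mean":
        combined = data.mean(axis=0)
    else:
        combined = np.ma.median(data, axis=0)
    return np.ma.filled(combined, np.nan)
```

Voxels deviant in only a few exposures (e.g. cosmic-ray residuals) are rejected before the combine, which is why artefacts surviving the clipping are expected to appear in at most one of the two independent half-exposure combines.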
\subsection{Noise characterisation}
During each step of the reduction process, the Poisson noise from detector counts is propagated and then combined into a cube that contains the variance of the resampled flux values. However, during several steps, including the drizzle interpolation onto the final grid, the propagated variance departs from accurately reproducing the effective standard deviation of individual voxels in the final data cube \citep[see][]{Lofthouse19}. In Fig.~\ref{fig:rmshisto} we show the flux distribution of voxels ($f_{\rm vox}$) normalised by the pipeline error ($\sigma_1$) in each pixel within three wavelength intervals that are increasingly affected by atmospheric sky lines (namely $4900-5500~\rm \AA$, $6400-7000~\rm \AA$, and $7800-8400~\rm \AA$, in blue, orange, and green, respectively). Once sources are masked, the distribution is expected to approximate a Gaussian (the black lines are Gaussian fits to the distributions), with standard deviation of unity. Instead, Figure \ref{fig:rmshisto} shows that the pipeline error underestimates the true standard deviation in regions free from sky lines (blue histogram, with $\sigma=1.19$), while it overestimates it in regions more contaminated by skylines (green histogram, with $\sigma=0.83$). { Moreover, the distribution of $f_{\rm vox}/\sigma_1$ in the wavelength interval $7800-8400~\rm \AA$ shows the largest departure from a Gaussian distribution, an effect that could be attributed to the second-order contamination of the spectra due to our use of the MUSE extended wavelength mode.}
We overcome this issue by bootstrapping the combination of individual exposures for each pixel to accurately reconstruct the noise in the final mean, median, and half-exposure cubes. For this, we use 10,000 realisations of the bootstrap procedure to produce a variance cube. We then inspect the flux distribution of pixel values divided by the standard deviation derived from this new variance cube, finding a distribution much closer to Gaussian, although still offset by a few percent from unity. We attribute this offset to a non-Gaussian distribution of the $f_{\rm vox}$ values. Due to this small offset, we further rescale the bootstrap variance cube to obtain a distribution of $f_{\rm vox}/\sigma_1$ with a standard deviation of unity, as shown by the red line in Figure \ref{fig:rmshisto}. We also note that by adopting this improved variance cube, the trend with wavelength which affects the propagated variance is also removed.
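A per-pixel bootstrap of the exposure stack can be sketched as follows (an illustrative implementation; the production run uses 10,000 realisations over full cubes and also produces median and half-exposure variances):

```python
import numpy as np

def bootstrap_variance(exposures, n_boot=2000, seed=0):
    """Bootstrap estimate of the variance of the mean-combined stack.

    `exposures` has shape (n_exp, ...). Each realisation resamples the
    exposures with replacement and recomputes the per-pixel mean; the
    variance of these realised means approximates the true noise of
    the combined data, including effects not captured by the
    propagated Poisson variance.
    """
    rng = np.random.default_rng(seed)
    exposures = np.asarray(exposures, dtype=float)
    n_exp = exposures.shape[0]
    means = np.empty((n_boot,) + exposures.shape[1:])
    for i in range(n_boot):
        idx = rng.integers(0, n_exp, size=n_exp)
        means[i] = exposures[idx].mean(axis=0)
    return means.var(axis=0)
```

For independent Gaussian exposures of unit variance, the bootstrap recovers the familiar $\sigma^2/n_{\rm exp}$ scaling of the mean.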
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{cubermshist.pdf}
\caption{Histograms of flux in voxels values, normalised by the voxel standard deviation from the ESO+{\sc CubEx} pipeline in three wavelength ranges (blue, orange, and green), and in the full wavelength range after reconstructing the variance cube using a bootstrap resampling technique (red). Shown with black lines are Gaussian fits to these distributions, with the resulting standard deviation listed in the legend. Only spatial pixels free from continuum sources are included in this analysis. }
\label{fig:rmshisto}
\end{figure}
Lastly, we derive a model for the correlated noise arising from the resampling of the pixel tables onto a final grid, as described in \citet{Lofthouse19}. This model represents the correction that needs to be applied to the propagated error for a source in a { square} aperture of $N$ spatial pixels on a side, $\sigma_{\rm N}$, to recover the effective noise, $\sigma_{\rm eff}$. In the spectral direction we use an aperture of 4 pixels ($\approx 5 \AA$) which is generally appropriate for narrow emission lines in galaxies. Figure \ref{fig:covariance2dmap} shows this correction in bins of wavelength and aperture size. While the dependence on aperture size is a smooth and monotonic function, the wavelength trend is not trivial to understand. It does not correlate with the brightness of sky lines, and we hypothesize it might be driven by the opto-mechanical design of the instrument, by our observing strategy, or more likely by a combination of the two. Regardless of the physical origin, when using these data to search for line emitters, we will use a second-order polynomial fit that describes $\sigma_{\rm eff}/\sigma_{\rm N}$ as a function of the aperture size, for the wavelength bin where each source is found.
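The per-wavelength-bin correction can be captured by a simple polynomial model; a sketch with illustrative numbers (placeholders, not the measured MUDF values):

```python
import numpy as np

# Illustrative sigma_eff / sigma_N measurements versus aperture size
# (pixels on a side); these numbers are placeholders, not MUDF values.
sizes = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
ratios = np.array([1.05, 1.20, 1.45, 1.80, 2.30])

# Second-order polynomial model of the noise correction for one
# wavelength bin, as described in the text.
coeffs = np.polyfit(sizes, ratios, deg=2)
noise_correction = np.poly1d(coeffs)

def effective_sigma(sigma_n, aperture_size):
    """Rescale the propagated error to the effective noise."""
    return sigma_n * noise_correction(aperture_size)
```

In practice one such fit would be stored per wavelength bin, and the correction evaluated at the aperture size of each candidate line emitter.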
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{sigmanscale_2D.pdf}
\caption{Ratio between the flux dispersion in apertures of varying size ($\sigma_{\rm eff}$) and the error derived propagating the variance ($\sigma_{\rm N}$), as computed across the MUDF datacube in a 5\AA\ window (4 pixels) as a function of wavelength and aperture size. The dark stripe just below 6000\AA\ is the region without valid data due to the scattered light from the AO laser. }
\label{fig:covariance2dmap}
\end{figure}
\section{High-resolution Quasar spectroscopy} \label{obs_hires}
As part of the MUDF survey, we are collecting a set of ancillary data to complement the MUSE observations. In this work, we make use of { UVES} high-resolution spectroscopy of the two quasars, J214225.78-442018.3 { (also known as Q2139$-$4434, and hereafter QSO-SE)} at $z = 3.221 \pm 0.004$ and J214222.17-441929.8 { (Q2139$-$4433, hereafter QSO-NW)} at $z = 3.229 \pm 0.003$ \citep{Lusso19}, for which we provide a description of the data acquisition and data reduction.
Some UVES data for the two quasars already exists in the ESO archive \citep[PIDs 65.O-0299, 68.A-0216, 69.A-0204,][]{DOdorico02}.
These data cover the wavelength range $\approx 4100-9000~$\AA, with a gap between $\approx 7400-7500~$\AA, { due to the gap between the two CCDs in the red arm of UVES}. The S/N of the bright quasar is $\approx 25$ per pixel across most of the wavelength range, while the fainter quasar has a S/N $\approx 8$ per pixel. To increase the wavelength coverage (down to $\approx 3600~$\AA\ and to fill the current gaps) and to increase the S/N of the fainter quasar, we have been awarded a total of 23 hours of new UVES observations (PID 102.A$-$0914). At the time of writing, observations for QSO-SE\ have been completed for a total of 7h on-source, while for QSO-NW\ only 15.5h were obtained, and 17h are still to be observed. In this paper, we therefore make use of the full dataset for the brighter quasar, relying only on a partial dataset for the fainter one. We will describe the spectra obtained from the complete observations in a forthcoming paper.
All data were reduced with the current version of the UVES pipeline (v. 5.10.4), using default parameters and procedures. At the end of the standard reduction process, the non-merged, non-rebinned spectra were reformatted with
a custom script and input to the ESPRESSO Data Analysis Software \citep[DAS,][]{Cupani16} for the final operations of coaddition and continuum fitting. This step avoids multiple rebinning of the spectra, which would introduce correlations in the error array.
Spectra have been normalized to the continuum estimated by the ESPRESSO DAS. This software fits a cubic spline to the spectrum redward of the Ly$\alpha$\ emission. In the Ly$\alpha$\ forest, the fit is iteratively improved by the simultaneous fit of the Ly$\alpha$\ absorption lines (for more details see \citealt{Cupani16}).
The final continuum-normalised spectra were then rebinned to a constant velocity step of 2.5 $\rm km~s^{-1}$.
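A constant velocity step corresponds to a logarithmic wavelength grid; a minimal sketch of such a grid (illustrative, not the DAS implementation):

```python
import numpy as np

C_KMS = 299792.458  # speed of light (km/s)

def velocity_grid(lam_min, lam_max, dv=2.5):
    """Wavelength grid with a constant velocity step dv in km/s.

    Consecutive bin centres satisfy lam[i+1] / lam[i] = 1 + dv/c, so
    every pixel spans the same velocity width regardless of wavelength.
    """
    ratio = 1.0 + dv / C_KMS
    n = int(np.floor(np.log(lam_max / lam_min) / np.log(ratio))) + 1
    return lam_min * ratio ** np.arange(n)

grid = velocity_grid(3600.0, 9000.0, dv=2.5)
```

Such a grid is convenient for absorption-line fitting, since a line profile of fixed velocity width occupies the same number of pixels at any redshift.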
\section{The galaxy population in the MUDF} \label{sec_galaxyprop}
In this Section we characterize the galaxy population detected in the MUDF and describe the procedures used for the identification of continuum sources, the extraction of their spectroscopic redshifts, and the derivation of their physical properties through stellar population synthesis fits.
\subsection{Source detection}
We identify continuum sources using the {\sc SExtractor} \citep{Bertin96} software on the white light image reconstructed from the MUSE datacubes. We input a variance image and we use a conservative threshold of 3$\sigma$ above the local { noise}, and a minimum area of 10 pixels for detection. The minimum deblending parameter {\sc DEBLEND\_CONT} is set to 0.0001, chosen to enable the detection of sources in crowded regions of the mosaic. We restrict source extraction to the area where we collected more than 10 exposures, corresponding to an observing time of $\approx 4$ hours, to avoid spurious detections at the noisy edges of the field of view. In future publications we will use deep {\it HST} imaging for source detection in the field.
This procedure identified 250 sources. For each of them, we extract the magnitude in the detection image ($m_{\rm MUSE}$) in an elliptical aperture with size equal to 2.5 times the \citet{Kron80} size from {\sc SExtractor}. We also reconstruct a 1D spectrum by summing the spectra of pixels within the Kron aperture, transforming the wavelengths to vacuum. In both these procedures, we mask nearby sources whose segmentation map falls in the extraction mask to minimize the effects of blending.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{musefov.pdf}
\caption{Reconstructed optical white-light image from the MUSE data (where available) or from the DES $g,r,i$ images (outside the MUSE FoV). { North is up and East is left}. The two quasars are marked with white crosses. Continuum-detected sources identified within the MUSE field are marked with apertures (red for sources with spectroscopic redshifts with confidence classes 4 and 3) with size equal to 6 times the Kron radius. The dotted orange contour encloses the region where the exposure time is $\ge 4$h.}
\label{fig:musefov_sex}
\end{figure*}
We measure redshifts using the {\sc Marz} software \citep{Hinton16}, which we customize\footnote{This version is available at \url{https://matteofox.github.io/Marz}} with the inclusion of high-resolution synthetic templates for passive and star-forming galaxies at $z<2$ derived from \citet{Bruzual03} stellar population synthesis models. Following automatic template fitting, individual sources are inspected and classified by two authors (MFo and EKL) in four classes (4, secure redshift with multiple absorption or emission features; 3, good redshift with a single but unambiguous feature; 2, possible redshift, based on a single feature; 1, unknown redshift). Typical redshift uncertainties are $\delta z \approx 0.0002 \times (1+z)$, corresponding to $\delta v \approx 60$ $\rm km~s^{-1}$.
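The quoted velocity precision follows directly from the redshift uncertainty via the standard non-relativistic conversion; a quick check:

```python
C_KMS = 299792.458  # speed of light (km/s)

def dz_to_dv(dz, z):
    """Velocity uncertainty corresponding to a redshift uncertainty
    dz for a source at redshift z: dv = c * dz / (1 + z)."""
    return C_KMS * dz / (1.0 + z)

# With dz = 0.0002 * (1 + z), the (1 + z) factors cancel and
# dv = 0.0002 * c at any redshift.
dv = dz_to_dv(0.0002 * (1.0 + 1.0), z=1.0)
```

The cancellation of the $(1+z)$ factors is why a single $\delta v \approx 60$ $\rm km~s^{-1}$ applies across the full redshift range of the sample.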
The final redshift classification is presented in Table~\ref{tab:sourcesample}. Figure \ref{fig:musefov_sex} shows the position within the MUDF footprint of the continuum-detected sources and their {\sc SExtractor} IDs. Hereafter we make use only of the objects with a reliable spectroscopic redshift (classes 3 and 4) which are marked with red apertures. The redshift distribution of these { 117 sources (47\% of the detected sample)} is shown in Figure \ref{fig:zdistr}. The range $1.5<z<2.5$, known as the ``redshift desert'', does not have significant detections due to the absence of strong emission lines that would fall within MUSE coverage.
Only two galaxies are identified there thanks to strong \mbox{Al\,{\sc iii}}\ absorption lines in their spectra. This range, however, will be filled in by the ultra-deep near-infrared observations which we are collecting with {\it HST}/WFC3.
\begin{figure}
\centering
\includegraphics[scale=0.44]{zcont_distr.pdf}
\caption{Distribution of spectroscopic redshifts with confidence classes 4 and 3 (see text for details) for continuum sources in the MUDF. }
\label{fig:zdistr}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{ccccccccc}
\hline
ID & Name & R.A. (J2000) & Dec. (J2000)& $m_{\rm MUSE}$ & $\sigma_{m_{\rm MUSE}}$ & Redshift & Confidence & $\log(M_*/{\rm M_\odot})$ \\
\hline
1 & MUDFJ214226.11-442031.3 & 325.60878 &$-$44.34204 &24.3 & 0.2 & 4.0441 & 4 & -\\
2 & MUDFJ214223.99-442029.5 & 325.59996 &$-$44.34154 &26.0 & 0.3 & - & 1 & -\\
3 & MUDFJ214224.73-442025.4 & 325.60304 &$-$44.34039 &25.7 & 0.3 & - & 1 & -\\
4 & MUDFJ214223.34-442029.4 & 325.59725 &$-$44.34151 &24.3 & 0.1 & - & 1 & -\\
5 & MUDFJ214225.93-442026.3 & 325.60805 &$-$44.34064 &23.6 & 0.1 & 1.0530 & 4 & 10.24\\
\hline
\end{tabular}
\caption{The first five continuum sources in the MUDF extracted by {\sc SExtractor} with $S/N>3$. Column 1 shows the source ID, column 2 shows the source name. Columns 3 and 4 list the right ascension and declination of the sources, followed by the MUSE white-light magnitude of the source in column 5 with its associated error (column 6). The redshifts obtained using {\sc Marz} are shown in column 7 followed by their confidence (column 8). A confidence flag of 3 or 4 indicates reliable redshifts, while flags 1 or 2 are for unknown or highly uncertain redshifts, respectively. Column 9 shows the stellar mass from stellar population fitting for sources with $z<1.5$. The full table is included as online only material.}
\label{tab:sourcesample}
\end{table*}
\subsection{Completeness}
To assess the completeness of our source extraction procedure, we inject mock sources of known magnitude into the detection { (white-light)} image and we assess their detection rate.
First, we inject two dimensional Gaussian templates with a FWHM of 0.6~arcsec to simulate unresolved point sources at the resolution limit of the final MUSE mosaic. Then, we repeat the experiment using circular exponential profiles which are more appropriate for real disk-like galaxies. In this case we use an exponential scale length of 0.26~arcsec which corresponds to an effective radius of 5~kpc at $z\sim1$, which is typical for star-forming galaxies \citep{van-der-Wel14}. The intrinsic exponential profiles are convolved with a Gaussian kernel with a FWHM of 0.6~arcsec to account for the effects of the observational PSF, { and do not include the effects of the disk inclination}. In each iteration, we inject 80 mock sources (to avoid confusion and blending issues) in blank background regions and we repeat the detection procedure 10,000 times.
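The injection-recovery loop can be sketched with a toy detector (a simple peak-above-threshold criterion standing in for the full {\sc SExtractor} threshold-plus-minimum-area detection):

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_psf(size, fwhm):
    """Normalised circular Gaussian template on a size x size grid."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

def detection_fraction(flux, sky_sigma, fwhm_pix=3.0, n_trials=200, nsig=3.0):
    """Fraction of trials in which a PSF-shaped source of total `flux`,
    injected into pure Gaussian noise, has a peak above nsig * sky_sigma.

    A toy stand-in for the mock-injection completeness experiment; the
    real procedure injects 80 sources per iteration into blank regions
    of the actual image and reruns the full detection pipeline.
    """
    size = 15
    template = flux * gaussian_psf(size, fwhm_pix)
    detected = 0
    for _ in range(n_trials):
        img = rng.normal(0.0, sky_sigma, (size, size)) + template
        if img.max() > nsig * sky_sigma:
            detected += 1
    return detected / n_trials
```

Repeating this over a grid of input fluxes (magnitudes) traces out the completeness curve, which falls from unity for bright sources towards the noise-dominated regime at the faint end.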
\begin{figure}
\centering
\includegraphics[scale=0.45]{PSF_EXP_depth_withdet.pdf}
\caption{{ Top Panel: Number of galaxies extracted as a function of the magnitude in the detection image. The grey dashed line shows the expected number of galaxies determined from a linear fit to the region between $m_{\rm{MUSE}}>24$~mag and $m_{\rm{MUSE}}<26.5$~mag.}
Bottom Panel: Fraction of mock sources detected by {\sc Sextractor}. The solid lines are for the injection of Gaussian sources with a FWHM of 0.6~arcsec. The dashed lines are for sources with an exponential profile with scale-length equal to 0.26~arcsec and PSF convolved. Lines of different colours are for different exposure times in the MUDF map. The arrows (filled for point sources and empty for extended ones) mark the faintest magnitude at which we are 90\% complete in each bin of exposure time. { The black points (with errorbars) show the empirical detection fraction determined as the ratio of the detected galaxies relative to the expected number from the linear fit in the top panel. }}
\label{fig:simdepth}
\end{figure}
{ The bottom panel of} Figure \ref{fig:simdepth} shows the fraction of detected mock sources as a function of magnitude for sources injected at locations of the image with different exposure times. For point sources (solid lines) we reach fainter limits than for extended sources (dashed lines), due to their compactness. The arrows mark the faintest magnitude at which we are 90\% complete in each bin of exposure time. In the deepest bin, where we have so far collected between 30 and 45 hours of data, we reach $m_{\rm MUSE} \simeq 27.8~\rm mag$ and 26.6 mag for point-like and extended sources, respectively. { The top panel of Figure \ref{fig:simdepth} shows the number of detected galaxies as a function of the white-light magnitude, $m_{\rm MUSE}$. We fit a linear relation (dashed line) to the logarithm of the number counts in the region $ 24~\rm{mag} < m_{\rm MUSE} < 26.5~\rm{mag}$, where our survey is complete. We then show in the bottom panel the empirical detection fraction (black points) determined as the ratio of the detected galaxies relative to the expected number from the linear fit. Despite the low number statistics, we find that this empirical detection fraction is in between the limits defined by the two
mock experiments (point sources and extended sources), complementing and corroborating the approach based on mocks.
Lastly we note that, to reach maximum depth, the detection image is obtained from the full MUSE wavelength range, which does not correspond to the most commonly used broad-band filters. To facilitate the comparison to other imaging surveys we have computed the following color corrections for a star-forming (passive) galaxy template at $z=1$: $m_{\rm MUSE}-r_{\rm SDSS} = -0.31 (-0.44)$, and $m_{\rm MUSE}-i_{\rm SDSS} = 0.04 (0.05)$.}
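The empirical completeness check can be sketched as a linear fit to the logarithmic number counts (an illustrative implementation of the procedure described above):

```python
import numpy as np

def empirical_detection_fraction(mags, counts, fit_range=(24.0, 26.5)):
    """Ratio of observed to expected galaxy counts per magnitude bin.

    The expected counts come from a linear fit to log10(N) over the
    magnitude range where the survey is complete, extrapolated to
    fainter bins; the ratio drops below unity where incompleteness
    sets in.
    """
    mags = np.asarray(mags, dtype=float)
    counts = np.asarray(counts, dtype=float)
    sel = (mags >= fit_range[0]) & (mags <= fit_range[1]) & (counts > 0)
    slope, intercept = np.polyfit(mags[sel], np.log10(counts[sel]), 1)
    expected = 10.0 ** (slope * mags + intercept)
    return counts / expected
```

On synthetic power-law counts with the faint end artificially suppressed, the function recovers unity in the complete range and the suppression factor beyond it.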
\subsection{Source photometry and ancillary data}
Taking advantage of the wide wavelength coverage of MUSE, we extract source photometry in four pseudo-filters with the goal of characterising the physical properties of the galaxy population. The width and the number of filters have been carefully selected to maximise our sensitivity to strong breaks in the galaxy spectra, while keeping a good S/N ratio in the measurements, and avoiding the gap in the spectra due to the AO laser filter. We convolve the MUSE datacube with the top-hat filters, whose ranges are given in Table \ref{tab:filterspec}, to generate an image and the associated variance. We then run a forced photometry algorithm using { apertures with radius 2.5$\times r_{\rm Kron}$} as defined above for the detection image.
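The pseudo-filter images can be produced by collapsing the cube through each top-hat bandpass; a minimal sketch (mean flux and propagated variance, ignoring masked voxels and flux-calibration details):

```python
import numpy as np

def tophat_image(cube, var_cube, wave, lam_min, lam_max):
    """Collapse a (n_wave, ny, nx) cube through a top-hat filter.

    Returns the mean flux image over [lam_min, lam_max] (e.g. the
    MUSE Blue band, 4750-5700 A) and its propagated variance.
    """
    sel = (wave >= lam_min) & (wave <= lam_max)
    n = sel.sum()
    image = cube[sel].sum(axis=0) / n
    variance = var_cube[sel].sum(axis=0) / n**2
    return image, variance
```

Forced photometry is then run on each of these images, together with its variance, using the apertures defined on the detection image.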
\begin{table}
\centering
\begin{tabular}{lcc}
\hline
Filter Name & $\lambda_{\rm min}$ (\AA) & $\lambda_{\rm max}$ (\AA) \\
\hline
MUSE Blue & 4750 & 5700 \\
MUSE Green & 6100 & 7100 \\
MUSE Red & 7100 & 8200 \\
MUSE NIR & 8200 & 9300 \\
\hline
\end{tabular}
\caption{Name and wavelength range of the top-hat filters defined for the photometric extraction.}
\label{tab:filterspec}
\end{table}
The MUSE data probe the rest-frame near-UV to optical region of the galaxy spectra for sources at $z \lesssim 1.5$. While providing good sensitivity to the recent star-formation activity and stellar content of the galaxies, these blue wavelengths are affected by dust extinction, which could bias our reconstruction of the galaxy parameters.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{id35_mudf_summary_small.pdf}
\caption{Example of the stellar population fitting results for source ID 35 in the MUDF. Rest-frame MUSE spectrum (black line) and best fit model (red line). The fit residuals { (Data-Model)} are given below the main panel, and the shaded area shows the noise array. The inset shows the spectral energy distributions extracted from the MC-SPF posterior (red lines) with the observed photometry overplotted in black. The blue lines show the contribution of young stars (Age $< 10$~Myr) to the total template.}
\label{fig:sps_example}
\end{figure*}
To mitigate this issue, we include in our analysis data taken with the IRAC \citep{Fazio04} instrument onboard the {\it Spitzer} \citep{Werner04} Space Telescope. IRAC imaging of an area of the sky including the MUDF has been taken during the cryogenic mission \citep[PID: GO-3699,][]{Colbert11} using the four channels at 3.6, 4.5, 5.8, and 8.0$\mu$m. However, due to the field of view offset of the IRAC camera, only the channels at 3.6 and 5.8$\mu$m cover the MUDF in its full extent, and we use them in this work. The total integration time per pixel of these observations is 1800s, and the image FWHM is $\approx 1.8$~arcsec.
To extract IRAC photometry from these images we use the {\sc t-phot} software \citep{Merlin15}, which optimally deals with the lower resolution of IRAC data in crowded fields by means of source priors from higher resolution imaging. We use the MUSE white light image, PSF model and source catalogue as the high resolution dataset. We then derive a Gaussian convolution kernel for each IRAC channel, to transform the MUSE PSF into the IRAC one. Lastly we resample the IRAC data onto the MUSE pixel grid and run {\sc t-phot}. Due to the shallowness of the IRAC data, only the relatively bright sources can be detected robustly, especially in crowded areas of the MUDF. In the end, we visually inspected the {\sc t-phot} results for each source to assess whether they: i) can be used, with a detection of at least 3$\sigma$ significance; ii) can be used as an upper limit; or iii) cannot be used because the source is faint and highly blended with a nearby bright source. In the first two cases (95\% of the sources we fit), we include the IRAC data in the estimates of the stellar populations as described in the following section.
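Under a Gaussian approximation for both PSFs, the width of the matching kernel follows from quadrature subtraction; a sketch (the actual kernels are derived from the measured MUSE and IRAC PSFs, so these Gaussian widths are only indicative):

```python
import numpy as np

def matching_kernel_fwhm(fwhm_target, fwhm_input):
    """FWHM of the Gaussian kernel that convolves a Gaussian PSF of
    fwhm_input into one of fwhm_target (Gaussian widths add in
    quadrature under convolution)."""
    if fwhm_target <= fwhm_input:
        raise ValueError("target PSF must be broader than input PSF")
    return np.sqrt(fwhm_target**2 - fwhm_input**2)

# e.g. degrading the MUSE PSF (~0.57 arcsec) to the IRAC one (~1.8 arcsec)
k = matching_kernel_fwhm(1.8, 0.57)
```

Because the IRAC PSF is much broader than the MUSE one, the kernel width is close to the IRAC FWHM itself.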
\subsection{Stellar population parameters}
We characterize the physical properties of the galaxy population in the MUDF by jointly fitting the MUSE spectra and the photometric data derived from MUSE and IRAC as described above. These combined data sets allow us to derive reliable estimates for the star-formation history, stellar mass, current star-formation rate, and dust content of the galaxies. This multi-wavelength approach is instrumental in breaking the degeneracies that often arise between these parameters, most notably between stellar mass and dust extinction, or star-formation rate and galaxy age \citep{Wuyts11a, Fossati18}. In this work, we primarily use the derived stellar masses, and we postpone the discussion of results involving the other quantities to a follow-up paper, where we will re-assess the quality of these measurements by adding into the fitting procedure the near-UV and near-IR data that will be collected with {\it HST} as part of programmes PID 15637 and 15968.
To infer the source properties, we fit the observed data to synthetic models using the Monte Carlo Spectro-Photometric Fitter (MC-SPF). A more complete description of the code is given in \citet{Fossati18}; here we briefly describe the procedure and the model set. First, we build a grid of synthetic stellar spectra from the \citet{Bruzual03} high-resolution models at solar metallicity. Following similar work that studied the properties of galaxies in deep fields \citep{Wuyts11a, Momcheva16}, we use exponentially-declining star-formation histories with $e$-folding times varying between 300 Myr and 15 Gyr, and galaxy ages varying between 50 Myr and the age of the Universe at the redshift of each galaxy. With the imposed minimum $e$-folding time, \citet{Wuyts11a} found that the SFR derived with this spectral energy distribution technique matches the one obtained from UV+far-IR photometry for systems { with low-to-intermediate star-formation rates ($SFR \lesssim 100~{\rm M_\odot~yr^{-1}}$), which are likely
to dominate the sources in our small field.}
We further add nebular emission lines to the stellar template, by using line ratios from \citet{Byler18} scaled to the line luminosity using the number of Lyman continuum photons from the stellar spectra. The star-formation history (SFH) and nebular emission grids are interpolated on-the-fly during the likelihood sampling. We assume a double \citet{calzetti00} attenuation law to include the effects of dust attenuation. Stars older than 10 Myr and emission lines arising from them are attenuated by a free parameter, $A_{\rm old}$, while younger stars are attenuated by $A_{\rm young} = 2.27\times A_{\rm old}$ to include the extra extinction occurring within star-forming regions.
We jointly fit the photometric data points and the MUSE spectra at their native resolution. A third-degree multiplicative polynomial is used to normalize the spectra to the photometric data and to remove large-scale shape differences between the models and the spectra, especially at the edges of the wavelength range where the MUSE response function is more uncertain. The multidimensional likelihood space is sampled by using {\sc PyMultiNest} \citep{Buchner14}, a python wrapper for the {\sc MultiNest} code \citep{Feroz08,Feroz13}. Figure \ref{fig:sps_example} shows an example of the results obtained with this fitting procedure for a galaxy in the MUDF. With the current wavelength coverage, some degeneracy remains between dust extinction and star-formation rate. Rest-UV data from {\it HST} will break this degeneracy; in this work, however, we primarily make use of the stellar mass estimates from the fits, which are found to be well converged and free from degeneracies with other parameters.
\section{Group Identification} \label{sec_groups}
The detection of a large number of sources with accurate spectroscopic redshifts in narrow redshift bins (see Figure~\ref{fig:zdistr}) is highly suggestive of the presence of overdense structures in the MUDF footprint. Indeed, over-densities spanning from compact groups to clusters and super-clusters of galaxies have been detected in all the fields which have been targeted by deep and extensive spectroscopic campaigns \citep[e.g.][]{Yang07, Scoville07, Kovac10, Diener13, Balogh14, Fossati17, Galametz18}. We therefore proceed to systematically identify galaxy groups in the MUDF.
There is a rich collection of literature on finding groups in spectroscopic redshift surveys, with most methods based on a Friends-Of-Friends approach \citep{Huchra82, Berlind06, Knobel09, Knobel12, Diener13}. These methods link galaxies into structures by finding all objects connected within linking lengths $\Delta r$ (a physical distance) and $\Delta v$ in redshift space. The choice of these parameters is driven by the need to balance the competing requirements of identifying all group members without over-merging different groups, avoiding interlopers, and taking into account redshift uncertainties.
In this work we search for galaxy groups at $0.5<z<1.5$, where the lower limit is dictated by the small volume probed at lower redshift and the upper limit is given by the redshift desert. In this range, we have highly accurate spectroscopic redshifts for all the galaxies that we aim to connect into structures. We use $\Delta r = 400$ kpc and $\Delta v = 400$ $\rm km~s^{-1}$, following \citet{Knobel09} in the same redshift range. Similar to previous works, we define a galaxy group to be an association of three or more galaxies \citep{Knobel12, Diener13}, { independently of stellar mass or observed magnitude}.
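The linking step described above can be sketched with a minimal friends-of-friends pass over projected separations and velocity offsets. This is a simplified stand-in for the published algorithms cited above, with a hypothetical input structure (projected coordinates in kpc and velocities in km/s), not the code used in this work:

```python
import math

def fof_groups(galaxies, dr_kpc=400.0, dv_kms=400.0):
    """Friends-of-friends linking: galaxies are dicts with projected
    coordinates x, y (kpc) and velocity v (km/s). Two galaxies are
    friends if their projected separation is below dr_kpc and their
    velocity offset below dv_kms; groups are the connected components,
    keeping associations of three or more members as in the text."""
    n = len(galaxies)
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            a, b = galaxies[i], galaxies[j]
            sep = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
            if sep < dr_kpc and abs(a["v"] - b["v"]) < dv_kms:
                parent[find(i)] = find(j)   # link the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= 3]
```

A production group finder would additionally convert angular separations and redshifts into these linking coordinates and handle redshift uncertainties.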
\begin{table*}
\centering
\begin{tabular}{ccccccccc}
\hline
ID & <R.A.> & <Dec.> & <z> & $\rm{N_{gal}}$ & $\log({M_{\rm halo}}/{\rm{M_\odot}})$ & $r_{\rm vir}$ & $W_{2796}$ & $W_{2796}$\\
& J2000 & J2000 & & & & kpc & (QSO-SE) $\AA$ &(QSO-NW) $\AA$\\
\hline
1 & 325.602433 & -44.332283 & 0.67837 & 11 & 12.0 & 160 & $0.85^{+0.03}_{-0.04}$ & $<0.08$ \\
2 & 325.598267 & -44.332865 & 0.68531 & 5 & 11.2 & 87 & $<0.02$ & $<0.07$ \\
3 & 325.593548 & -44.330041 & 0.78491 & 3 & 10.8 & 61 & $<0.02$ & $<0.06$ \\
4 & 325.597824 & -44.330187 & 0.88205 & 6 & 12.9 & 293 & $1.34^{+0.02}_{-0.01}$ &$0.43^{+0.01}_{-0.01}$ \\
5 & 325.604431 & -44.331396 & 1.05259 & 15 & 13.4 & 419 & $1.67^{+0.02}_{-0.02}$ & $<0.06$ \\
6 & 325.605063 & -44.331111 & 1.15524 & 3 & 12.1 & 138 & $1.16^{+0.06}_{-0.13}$ & $<0.06$ \\
7 & 325.602279 & -44.328677 & 1.22849 & 4 & 11.6 & 96 & $<0.02$ & $<0.06$ \\
\end{tabular}
\caption{Properties of the groups identified in the MUDF. The table lists: the group ID; the average R.A., Dec. and redshift of the group members; the number of members in each group; the halo mass derived from the stellar-to-halo mass relation of \citet{Moster10}; the virial radius (R200) for that halo mass; the equivalent width of the \mbox{Mg\,{\sc ii}}\ $\lambda$2796\AA\ absorption line associated with the group (or 2$\sigma$ upper limit) { along the QSO-SE\, and QSO-NW\ sightlines.}}
\label{tab:groups}
\end{table*}
Following this procedure, we find seven groups in the MUDF footprint, and their properties are listed in Table~\ref{tab:groups}. We estimate the group halo mass by summing the stellar mass of the group galaxies \citep[see][for a validation of the method]{Yang07} and using the stellar-to-halo mass relation from \citet{Moster10} at the redshift of the group. We also report in the table the geometric centre of the group, the average redshift of its members { (not weighted by other galaxy parameters)}, and the virial radius calculated from the halo mass.
Assuming a typical uncertainty on the total stellar mass of 0.15 dex \citep{Conroy09, Gallazzi09, Mendel14}, this turns into an error on the virial mass of 0.20-0.25 dex depending on the local gradient of the stellar-to-halo mass relation. This uncertainty corresponds to an error of $\sim 50-60$ kpc on the virial radius. The virial radius is further affected by the implicit assumption of virialization of the group halo. With only a few members per group, this cannot be guaranteed as we might be observing groups in formation. Given these caveats, the estimated virial radii should only be taken as indicative values, especially for structures with fewer than 10 members.
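For reference, the conversion from halo mass to virial radius used here ($R_{200}$, the radius enclosing a mean density of 200 times the critical density at the group redshift) can be sketched as follows. The cosmological parameters below are assumed illustrative values, since this section does not state the adopted cosmology:

```python
import math

def r200_kpc(m_halo_msun, z, h0=70.0, om=0.3):
    """Virial radius R200 in kpc for a halo of mass m_halo_msun at
    redshift z, in a flat LCDM cosmology. H0 (km/s/Mpc) and Omega_m
    are assumed placeholder values, not taken from the paper."""
    G = 4.301e-6                                   # kpc (km/s)^2 / Msun
    hz = h0 * math.sqrt(om * (1 + z)**3 + (1 - om)) / 1000.0   # km/s/kpc
    rho_c = 3.0 * hz**2 / (8.0 * math.pi * G)      # critical density, Msun/kpc^3
    # M = (4/3) pi R^3 * 200 rho_c  =>  solve for R
    return (3.0 * m_halo_msun / (4.0 * math.pi * 200.0 * rho_c)) ** (1.0 / 3.0)
```

For group 5 ($\log(M_{\rm halo}/{\rm M_\odot}) = 13.4$ at $z \approx 1.05$) this returns $\approx 406$ kpc, close to the 419 kpc listed in Table \ref{tab:groups}; the small difference reflects the assumed cosmology.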
At the redshift of the groups, we search for strong line emitters which are too faint in continuum to be detected. We run {\sc CubEx} on the continuum-subtracted cube and we use the detection parameters defined in \citet{Lofthouse19}, i.e. voxel $S/N > 3$, minimum number of voxels of 27 and minimum number of channels in wavelength of 3. We find no line emitters that are not associated with a detected continuum source. However, it remains possible that deeper exposures would lead to a secure detection of more redshifts from continuum features, possibly increasing the number of groups or including fainter members.
Figure \ref{fig:groups_gallery} shows the location of the detected galaxies in groups within the MUDF footprint.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{MUDFgroups_gallery.pdf}
\caption{Gallery of the location within the MUDF of the galaxies in groups, marked with ellipses as in Figure \ref{fig:musefov_sex}. The white crosses mark the position of the background quasars, while the red crosses in each panel indicate the geometrical centre of the group. The cyan cross in the top left panel marks the position of a serendipitously detected quasar that shows absorption at the redshift of this group (see Section \ref{sec_connectingprop}). The number next to each galaxy shows the velocity offset with respect to the redshift of the group. The red segment in the bottom right corner of each panel marks a 100 kpc scale at the redshift of each group.}
\label{fig:groups_gallery}
\end{figure*}
\subsection{Going beyond the survey edges}
The unprecedented depth of the MUSE observations in the MUDF comes at the price of a relatively small survey area. Therefore, it is possible that at least some of the groups identified in the previous section are part of a larger scale structure, or that some of the group members are missed by our footprint. Unfortunately, this area of the sky is not covered by wide-area archival spectroscopy, thus we can only attempt to characterise the large scale environment around the MUDF with photometric redshifts (photo-z).
For this purpose, we downloaded source and magnitude catalogues from the DES DR1 \citep{Abbott18}, and we computed photo-zs from the $g,r,i,z$ fluxes using the {\sc EAZY} code \citep{Brammer08}. Following \citet{Hoyle18} we do not include $Y-$band magnitudes, because the observations are too shallow to improve the photo-z estimate. These authors also studied the photo-z uncertainty on a smaller region of the DES footprint, finding that they range from $\sigma_z = 0.1$ at $z=0.5$ to $\sigma_z = 0.4$ at $z=1.5$. For each group identified in the MUDF, we compute the galaxy density in an aperture with radius 1 Mpc centred on the geometrical centre of the group. The density is defined as the sum over all the galaxies within this aperture of the fraction of the DES photo-z probability density function that falls within $\Delta v = \pm 5000$$\rm km~s^{-1}$\ of the redshift of the group. This approach optimally takes into account the variable photo-z accuracy as a function of redshift and galaxy magnitude \citep{Kovac10, Cucciati14}. We then define two control apertures of the same size and velocity depth bracketing in redshift the previous aperture, but not overlapping with it.
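The density estimator described above can be sketched as follows. This is a simplified stand-in for the actual pipeline: the gridded-PDF input format is hypothetical, and the conversion of the $\pm 5000$ $\rm km~s^{-1}$\ window into a redshift interval assumes the standard $\Delta z = (\Delta v / c)(1 + z)$ relation:

```python
def overdensity(pdfs, z_grid, z_group, dv_kms=5000.0, c_kms=299792.458):
    """Sum over galaxies of the fraction of each photo-z probability
    density function that falls within +/- dv_kms of the group
    redshift. Each entry of pdfs is a normalized P(z) sampled on
    z_grid; fractions are accumulated with the trapezoid rule over
    grid intervals fully contained in the velocity window."""
    dz = dv_kms / c_kms * (1.0 + z_group)     # velocity window in redshift
    zlo, zhi = z_group - dz, z_group + dz
    total = 0.0
    for pdf in pdfs:
        frac = 0.0
        for i in range(len(z_grid) - 1):
            if zlo <= z_grid[i] and z_grid[i + 1] <= zhi:
                frac += 0.5 * (pdf[i] + pdf[i + 1]) * (z_grid[i + 1] - z_grid[i])
        total += frac
    return total

# Toy example: one galaxy with a flat P(z) over 0 < z < 2
z_grid = [i / 1000.0 for i in range(2001)]
flat_pdf = [0.5] * len(z_grid)          # normalized over [0, 2]
density = overdensity([flat_pdf], z_grid, z_group=1.0)
```

The same function, evaluated in the two bracketing control apertures, gives the background against which the group aperture is compared.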
Despite the limitations of this approach, we do not find a significant over-density of bright galaxies over a scale larger than the MUDF footprint for any of the groups studied. We therefore conclude that our inferred group properties are broadly representative of the underlying population. Nevertheless, wider-area multi-object spectroscopic observations are required to investigate with better precision the large scale environment of the MUDF.
\section{The CGM of group galaxies} \label{sec_mgiiabs}
Having completed a deep census of galaxy groups including members that are as faint as $m_{\rm MUSE} \approx 28.5$ mag (or stellar masses down to $\approx 10^{8}~\rm M_\odot$ at $z=1.5$), we now take advantage of the presence of the two bright quasars in the field to probe the gas content of galaxies within these structures in absorption.
In particular, we focus on the \mbox{Mg\,{\sc ii}}\ absorbers detected in the quasar spectra at $z<1.5$.
\subsection{\mbox{Mg\,{\sc ii}}\ fitting in the high-resolution quasar spectra}
To identify and fit the \mbox{Mg\,{\sc ii}}\ absorbers in the quasar spectra we employed a two step procedure. First, strong absorption lines were searched for by visually inspecting the spectra { across the entire wavelength range}, looking first for the \mbox{Mg\,{\sc ii}}\ doublets and then for other transitions which are commonly detected in quasar spectra, e.g. \mbox{Mg\,{\sc i}}, \mbox{Fe\,{\sc ii}}, \mbox{Mn\,{\sc ii}}, \mbox{Cr\,{\sc ii}}\ and \mbox{Ca\,{\sc ii}}. This procedure has been repeated by two authors independently (MFo, VDO) and for both quasars. We reliably identified five absorbers in the bright quasar at $z\approx 0.67,0.88,0.98,1.05,1.15$, while in the fainter quasar we identified only one absorber, at $z\sim0.88$.
For each identified absorber we then fit the \mbox{Mg\,{\sc ii}}\ doublet profile with a novel method that uses Bayesian statistics to identify the minimum number of Voigt components required to model the data. Our method has two desirable properties: first, it requires minimal input from the user as no initial guesses are required; second, the final result is an optimal statistical description of the data, as the fit does not depend on any particular choice of initial guesses nor on a user-defined prior on the number of components.
The details of the algorithm used will be presented in a forthcoming publication, and are only briefly summarised here.
Once the atomic transition (or transitions in case of multiplets to be fit jointly) has been identified, the user need only specify the wavelength range (or disjoint ranges) over which to fit the normalized spectrum. The code then performs a fit of the data with a progressively larger number of Voigt components (between a minimum and a maximum number), where each component is defined by a column density ($N$), { a Doppler parameter ($b$), and a redshift ($z$)}. For our fitting of \mbox{Mg\,{\sc ii}}\ absorbers we define uniform priors on the first two quantities in the range $11.5 < \log(N/{\rm cm^{-2}}) < 16.0$ and $1<b/{\rm km~s^{-1}}<30$, chosen to avoid modelling features within the noise or large-scale variations in the continuum level. The prior range on redshift is instead internally defined such that the absorption components can cover the selected wavelength range. A constant continuum level is also left as a free parameter in the fit to account for slight imperfections in the normalisation of the spectra. If required following a visual assessment of the doublet profiles, we allow for { a user-defined number of} ``filler'' Voigt profiles designed to describe absorption lines arising from blends of different ions at different redshifts. One example is the rightmost line in the profile of the $z\approx0.98$ absorber in Figure \ref{fig:fits_briqso}. The model spectrum is then convolved to match the line spread function of the observed data using a Gaussian kernel.
At each iteration, having defined a model with $n$ components, we sample the multidimensional likelihood space using {\sc PolyChord} \citep{Handley15}, a nested sampling algorithm that has better performance than {\sc MultiNest} for high-dimensional parameter spaces with multiple degeneracies between parameters. This is indeed our case, where complex models can reach a number of free parameters in excess of 50. After the sampling, we use the posterior samples to derive the Akaike Information Criterion \citep[AIC,][]{Akaike74} of each fit, defined as AIC $= 2\times n_{\rm par} - 2\times {\rm ln}(L)$, where $n_{\rm par}$ is the number of free parameters and $L$ is the likelihood of the model. We select as optimal fits those with a likelihood ratio within 1:150 compared to the best {(lowest)} AIC. If the selection includes only one model this is used as our best fit, otherwise we choose (from the selected ones) the model with the minimum number of components.
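The model-selection rule can be made concrete with a short sketch (not the actual fitting code). Note that an AIC difference $\Delta$ between two models corresponds to a likelihood ratio of $e^{-\Delta/2}$ at fixed parameter count, so the 1:150 cut translates into $\Delta \le 2\ln 150 \approx 10$:

```python
import math

def select_model(fits, ratio=150.0):
    """AIC-based choice among fitted models. Each fit is a tuple
    (n_components, n_par, lnL). Models whose AIC lies within
    2*ln(ratio) of the lowest AIC are retained; among those, the
    model with the fewest components wins, as described in the text."""
    aic = [2.0 * npar - 2.0 * lnl for (_, npar, lnl) in fits]
    best = min(aic)
    cut = 2.0 * math.log(ratio)          # 1:ratio likelihood-ratio cut
    kept = [f for f, a in zip(fits, aic) if a - best <= cut]
    return min(kept, key=lambda f: f[0]) # fewest components among kept

# Hypothetical fits: (components, free parameters, log-likelihood)
fits = [(2, 6, -100.0), (3, 9, -95.0), (4, 12, -94.5)]
chosen = select_model(fits)   # the 2-component model survives the cut
```

In the real pipeline the log-likelihood comes from the {\sc PolyChord} posterior samples, but the selection logic is as above.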
Figure \ref{fig:fits_briqso} shows the fits of the \mbox{Mg\,{\sc ii}}\ absorption systems identified in the quasar spectra. While the fits are jointly obtained from the 2796~\AA\ and 2803~\AA\ transitions, we show only the stronger 2796~\AA\ transition in this figure. The single absorber found in the fainter quasar, which we discuss in more detail in Section \ref{sec_correlatedabs}, is shown in the bottom-right panel, while in the other panels we show the absorbers found in the bright quasar sightline. In all panels we find complex kinematic profiles requiring from 3 to 15 Voigt components. Moreover, we note that, with the exception of only one system ($z\sim0.98$), the absorbers are found at the redshift of a galaxy group identified in the previous section. Therefore, we set the zero of the velocity axis at the redshift of the galaxy group, except for one absorber where it is set to the redshift of the closest galaxy (ID 14) in velocity space. We stress that the choice of the zero point on the velocity axes is for reference only and does not affect our results.
\begin{figure*}
\centering
\includegraphics[width=1.00\textwidth]{MUDFpc_all_MgII_grpv_with2803.pdf}
\caption{UVES spectra of \mbox{Mg\,{\sc ii}}\ absorbers identified in the brighter QSO-SE\ (all panels except last two rows in the rightmost column), and in the fainter QSO-NW\ (last two rows in the rightmost column). The zero velocity corresponds to the average redshift of the galaxy group associated with the absorber, or to the redshift of the closest galaxy for the $z\approx 0.98$ absorber. The observed spectrum is shown as a black histogram with the $1\sigma$ error array in grey. Bayesian Voigt-profile fitting gives the red models, where individual lines are sampled from the posterior distribution. We include the rest-frame total equivalent width measurements in each panel. The red ticks above the continuum level show the position of individual \mbox{Mg\,{\sc ii}}\ absorption components, which are shown as cyan dotted lines. Pink dotted lines are for filler components arising from blends of different transitions. The blue stars show the spectroscopic redshift of MUDF galaxies.}
\label{fig:fits_briqso}
\end{figure*}
\subsection{Connecting the absorption features and the galaxy population} \label{sec_connectingprop}
The fact that \mbox{Mg\,{\sc ii}}\ absorbers are preferentially found at the redshift of a galaxy group ($5/6$ cases) suggests a direct connection between overdensities of galaxies and large cross sections of cold gas. Leveraging the deep spectroscopic survey in the MUDF, which is complete down to faint magnitudes and low stellar masses, we systematically explore this connection, in order to understand the physical origin of the gas probed in absorption.
We show in Figure \ref{fig:fits_briqso} the spectroscopic redshift of the galaxies in the MUDF which are within the plotted range for each absorber (blue stars). We immediately note two features: first, each absorber (with the exception of the one at $z\approx0.98$) has one or more galaxies which are overlapping in velocity space with the absorption profile.
Second, the absorbers with more complex kinematics ($z\sim0.88$, and $z\sim1.05$) have the largest number of galaxies at a small velocity offset. The case of the $z\sim0.98$ absorber, instead, stands out for having a complex profile, yet a single galaxy which is offset in velocity. However, given that the bright quasar is close to the South-East edge of the observed field, we cannot rule out the presence of other objects, just outside the MUDF footprint but at smaller velocity separation compared to the galaxy we found. Another issue is that the detection of continuum sources is more difficult at small projected separation from the quasars due to their brightness. We can detect sources as faint as $m_{\rm MUSE} \simeq 27~\rm mag$ at $\approx 25$ kpc and $\approx 18$ kpc in projection from QSO-SE\ and QSO-NW, respectively. However, brighter or highly star forming galaxies would be detected even at smaller projected distances.
Thanks to the integral-field spectroscopic coverage of the MUDF, we next investigate trends in the spatial location of the galaxies which are found in the velocity window defined by the absorbers. Starting with the $z\sim0.67$ absorber as a first example, we note how the \mbox{Mg\,{\sc ii}}\ profile spans a velocity range $\Delta v \sim 100-200$ $\rm km~s^{-1}$\ with respect to the redshift of the group. We recall that the latter quantity is the average of the redshift of the group galaxies, and so we should expect a roughly equal number of them above and below the average value. However, not all of them need to be distributed isotropically with respect to the quasars.
In fact, by looking at Figure \ref{fig:groups_gallery}, we find that the four galaxies with velocities overlapping with the \mbox{Mg\,{\sc ii}}\ absorber are typically closer to the bright quasar than the remaining galaxies in the group. Indeed, their average projected distance from QSO-SE\ is $\approx 130$~kpc, while for the other galaxies it is twice as large, at $\approx 250$~kpc.
Despite the large uncertainties driven by small number statistics, a similar trend emerges from the analysis of the other absorbers associated with the groups, as summarised in Table \ref{tab:dist_grp_gal}. This table reports the typical distance from the sightline QSO-SE\ of galaxies that overlap in velocity with the absorption profile ($d_{\rm in,win}$), compared to the distance for the remaining galaxies in the group. In all cases, galaxies overlapping in velocity with the absorption profile lie at closer projected separation (by a factor of $\approx 1.5-2$) than galaxies that are not overlapping with the \mbox{Mg\,{\sc ii}}\ absorbers.
Considering instead the groups with no \mbox{Mg\,{\sc ii}}\ association (ID 2, 3, and 7), Figure~\ref{fig:groups_gallery} reveals a lack of galaxies in the immediate surroundings of the QSO-SE\ line-of-sight, in line with the trend above. These results therefore suggest that the cold gas absorbers traced by \mbox{Mg\,{\sc ii}}\ arise preferentially from the CGM of one or multiple galaxies which are closer than $\approx 100$ kpc from the quasar line of sight. It should however be noted that the probability of group galaxies giving rise to \mbox{Mg\,{\sc ii}}\ is unlikely to be exclusively modulated by proximity to the line of sight, with covering factors playing a role. Indeed, Figure~\ref{fig:groups_gallery} shows also the presence of galaxies in close proximity to the QSO-NW\ line of sight (mostly from groups 3, 4, and 5) that do not necessarily give rise to absorption (see below).
\begin{table}
\centering
\begin{tabular}{cccc}
\hline
$z_{\rm abs}$ & Group ID & $d_{\rm in,win}$ $\rm{(kpc)}$ & $d_{\rm out,win}$ $\rm{(kpc)}$ \\
\hline
0.67 & 1 & $130\pm56$ & $251 \pm 25$ \\
0.68 & 2 & - & $257 \pm 94$ \\
0.78 & 3 & - & $363 \pm 92$ \\
0.88 & 4 & $274\pm87$ & $522$ \\
1.05 & 5 & $162\pm41$ & $467\pm29$\\
1.15 & 6 & $195$& $321\pm101$\\
1.22 & 7 & - & $322 \pm 83$ \\
\end{tabular}
\caption{Average projected distance of the galaxies which reside in groups at the same redshift as a \mbox{Mg\,{\sc ii}}\ absorber, for galaxies within the velocity window defined by the absorption profile, and outside it. { For groups without an \mbox{Mg\,{\sc ii}}\ detection the average projected distance of all galaxies is given in the last column.} Values without errors have only one object that satisfies the selection threshold. }
\label{tab:dist_grp_gal}
\end{table}
The relation between the strength of \mbox{Mg\,{\sc ii}}\ absorbers and the galaxy distance has been studied extensively in the literature. For instance, \citet{Chen10} report a large statistical sample of galaxy-quasar pairs, finding a strong anti-correlation between the equivalent width of the absorption profile and the distance of the closest galaxy.
We can therefore compare directly in this parameter space the properties of MUDF galaxies, both within groups and in isolation, with the results present in the literature.
In Figure \ref{fig:mgii_w_dist_mass}, we show the rest-frame equivalent width ($W$) for the \mbox{Mg\,{\sc ii}}\ $\lambda2796\AA$ absorption line, obtained by integrating the { best-fit} models derived above, as a function of the projected distance of each galaxy from the QSO-SE\ sightline (left panel) and of its stellar mass (right panel). Galaxies belonging to a group that is associated with an \mbox{Mg\,{\sc ii}}\ absorber are shown as red circles and they have been assigned the total $W$ of the absorption profile, while the galaxy which we associate with the $z\sim0.98$ absorption system is shown as a blue circle. For the other sources, we plot $2\sigma$ upper limits measured on the UVES spectrum in a 10 $\rm km~s^{-1}$\ velocity window centred at the redshift of the galaxy.
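For reference, the rest-frame equivalent width quoted throughout follows the standard definition $W = \int (1 - F/F_{\rm c})\,{\rm d}\lambda \,/\, (1+z)$, where $F$ is the observed (or model) flux and $F_{\rm c}$ the continuum. A minimal numerical sketch (not the code used for the measurements) is:

```python
def rest_ew(wave, flux, cont, z):
    """Rest-frame equivalent width (in the units of wave) of an
    absorption profile: integrate the fractional depth 1 - F/Fc
    with the trapezoid rule, then divide by (1 + z) to move the
    observed-frame width to the absorber rest frame."""
    w_obs = 0.0
    for i in range(len(wave) - 1):
        d1 = 1.0 - flux[i] / cont[i]
        d2 = 1.0 - flux[i + 1] / cont[i + 1]
        w_obs += 0.5 * (d1 + d2) * (wave[i + 1] - wave[i])
    return w_obs / (1.0 + z)

# Toy saturated trough on a unit continuum: observed-frame EW of
# this profile is 2.0 wavelength units, hence 1.0 rest-frame at z = 1.
wave = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
flux = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0]
cont = [1.0] * 6
w_rest = rest_ew(wave, flux, cont, 1.0)
```

In practice the integral is evaluated over the fitted window using the best-fit Voigt-profile model rather than the noisy data.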
From this analysis, it appears that the four out of seven groups that are associated with an absorber have high equivalent widths and, as noted above, at least one galaxy within 150~kpc of the quasar line of sight. Moreover, these groups tend to host galaxies that are relatively massive ($\log(M_*/\rm{M_\odot})>10$) and therefore, given that their CGM is likely to scale with their size \citep[e.g.][]{Chen10}, they are more likely to have a larger cross section of \mbox{Mg\,{\sc ii}}\ that can give rise to the absorption.
An apparent exception is the $z\approx0.67$ group, which seems to lack particularly massive galaxies. However, we argue that in this case the \mbox{Mg\,{\sc ii}}\ absorption is mostly driven by the fact that one galaxy, despite having a lower stellar mass ($\log(M_*/\rm{M_\odot}) = 8.92$), is very close in projection (29~kpc) and its redshift places it at the centre of the absorption profile. On the other hand, for the three groups without a detection, the member galaxies are further from the quasar sightline and have low stellar masses ($\log(M_*/\rm{M_\odot})\sim 9$), overall reducing the chances that the CGM of group members intercepts the line of sight.
This picture is further reinforced when we repeat the same analysis for QSO-NW. We find that the galaxies in groups are on average further away from this line of sight compared to that of the brighter quasar. Only three galaxies are within 100 kpc, two of which are in the $z\approx0.88$ group which we associate with the only \mbox{Mg\,{\sc ii}}\ detection in this sightline (see Section \ref{sec_correlatedabs}).
Moreover, a third lower-redshift quasar ($z\approx 1.285$; cyan cross in Fig. \ref{fig:groups_gallery}), lying close to the edge of the FoV (where no sources have been extracted) is in close spatial proximity to the $z\approx$0.67 group. It does in fact exhibit strong \mbox{Mg\,{\sc ii}}\ absorption at this redshift as seen from the MUSE spectrum, but { we did not find} other absorbers associated with groups or individual galaxies at larger separations from this third sightline.
We therefore conclude that a combination of proximity to the quasar line of sight and presence of massive galaxies hosting large cross sections of cool gas are the main factors that control the high incidence of \mbox{Mg\,{\sc ii}}\ absorbers in these groups.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{MgII_W_vs_r_mass.pdf}
\caption{Rest-frame equivalent width ($W$) for the \mbox{Mg\,{\sc ii}}\ $\lambda2796\AA$ absorption line as a function of impact parameter { (projected distance of each galaxy from the QSO-SE\ sightline for MUDF galaxies, left panel)} and of the galaxy stellar mass (right panel). Galaxies in the MUDF footprint are shown as red circles if they reside in a group and as blue circles if they are isolated. The arrows (red for galaxies in groups and blue for isolated galaxies) show $2\sigma$ upper limits measured on the UVES spectrum in a 10 $\rm km~s^{-1}$\ velocity window centred at the redshift of each galaxy. The black empty stars show galaxies which have a redshift within the \mbox{Mg\,{\sc ii}}\ absorption profile for the strong absorbers shown in Figure \ref{fig:fits_briqso}. The redshift of the groups are shown in the left panel at the y-axis levels of the corresponding values of $W$. The black line is the best fit relation (with the 1-$\sigma$ confidence area shaded in grey) between $W$ and impact parameter from \citet{Chen10}. The orange stars { and black circles are obtained from stacking galaxy-galaxy pairs from the sample of isolated galaxies of \citet{Rubin18} and the sample of group galaxies of \citet{Bordoloi11}.}}
\label{fig:mgii_w_dist_mass}
\end{figure*}
Having established what drives the incidence of \mbox{Mg\,{\sc ii}}\ in these groups, we now compare the absorption properties to samples of galaxies from the literature. To this end, in Figure \ref{fig:mgii_w_dist_mass}, we show the results of the analysis by \citet{Chen10} (black solid line) and the average properties of the CGM of galaxies at $0.35 < z < 0.8$ from foreground and background galaxy pairs in the PRIMUS survey \citep{Rubin18}. These galaxies are not explicitly selected to be within groups. These authors find that the strength of \mbox{Mg\,{\sc ii}}\ absorption declines as a function of impact parameter ($d$, i.e. the projected separation of a galaxy from the probing sightline), with the average gas content of star forming galaxies versus $d$ in the PRIMUS survey being consistent with the fitting function of \citet{Chen10}, albeit with a large intrinsic scatter for individual galaxies.
However, the total equivalent width of \mbox{Mg\,{\sc ii}}\ in galaxies within the MUDF groups is higher than the $1\sigma$ scatter of the data from \citet{Rubin18} when shown as a function of $d$, but is consistent with these data as a function of stellar mass. Likewise, compared to the scaling relation of \citet{Chen10}, the MUDF group galaxies lie consistently above the relation at fixed impact parameter.
These results imply that the group environment has an effect on the cross section of \mbox{Mg\,{\sc ii}}, leading to enhanced equivalent widths at larger distances compared to galaxies not explicitly selected within groups. { \citet{Bordoloi11} presented the radial \mbox{Mg\,{\sc ii}}\ absorption profile from a sample of group galaxies from the zCOSMOS survey, finding that groups have more extended absorption profiles compared to a sample of isolated galaxies from the same survey. However, the radial profile for group galaxies is largely consistent with the one from the \citet{Chen10} and \citet{Rubin18} samples. This could be an intrinsic feature of the sample or an effect of contamination from more isolated galaxies in the group sample. }\citet{Nielsen18} studied the \mbox{Mg\,{\sc ii}}\ absorption in a sample of 29 group-like environments (defined to have at least two galaxies within 200 kpc of a background quasar and with $\Delta v < 500$ $\rm km~s^{-1}$). These authors found that the \mbox{Mg\,{\sc ii}}\ median equivalent width in this sample is $W=0.65\AA$, and it is indeed enhanced by a factor of 2.2 compared to an isolated galaxy sample. This is also in line with the result of \citet{Gauthier13}, who reported on the \mbox{Mg\,{\sc ii}}\ equivalent width for three galaxy groups associated with ultra-strong \mbox{Mg\,{\sc ii}}\ absorbers ($W>3.5\AA$), which are atypical of the general galaxy population.
Finally, we comment on the properties of the only absorber associated with an isolated galaxy. The complete isolation of this source (ID 14) is unlikely, given its stellar mass ($\log(M_*/M_\odot) = 10.51$). It possibly resides in a massive halo, which in turn is expected to host satellite galaxies (see below). While this source is the closest to the quasar line of sight among individual galaxies in the MUDF, its equivalent width is still elevated for its impact parameter. Moreover, the several narrow kinematic components found in the absorption spectrum suggest, in analogy with the other \mbox{Mg\,{\sc ii}}\ absorbers arising from groups, that this absorber could be associated with a group that we do not fully cover in the MUSE footprint.
\subsection{Detection of cold gas absorption in the quasar pair} \label{sec_correlatedabs}
Up to this point, our analysis reveals that groups hosting \mbox{Mg\,{\sc ii}}\ absorbers show galaxies in closer proximity to the line of sight and host more massive galaxies, two factors that are likely to boost the incidence of \mbox{Mg\,{\sc ii}}\ absorption. Moreover, the equivalent width of \mbox{Mg\,{\sc ii}}\ appears to be elevated compared to samples not explicitly selected in groups. We now fully exploit the ability to conduct tomography in the MUDF to assess whether the observed absorbers can be attributed to a widespread intra-group medium, or whether they are more likely to be associated with individual galaxies.
The groups at $z\approx 0.88$ and $z\approx 1.05$ appear the most suited for this analysis, as they both contain a significant number of members distributed across the two quasar sightlines.
For the $z\approx 1.05$ absorber, despite a strong ($W=1.67 \AA$) and complex kinematic profile seen against the QSO-SE, we derive only a 2$\sigma$ upper limit of 0.06\AA\ against the QSO-NW. For the group at $z\approx 0.88$, instead, we also detect \mbox{Mg\,{\sc ii}}\ absorption in the UVES spectrum of the fainter QSO-NW. At this redshift, we find a \mbox{Mg\,{\sc ii}}\ absorber also in the spectrum of the bright QSO-SE, which enables us to study whether the kinematics of the absorption is correlated over a scale of 500 kpc. The right panels in Figure \ref{fig:fits_briqso} show the data and Voigt profile models of the absorption systems found at $z\approx 0.88$ in both sightlines (top panel for the bright quasar and bottom panel for the fainter one). Along the line of sight of the faint quasar we find only three Voigt components, as opposed to 15 in the other sightline. These components span a much smaller velocity range ($\sim 100$ $\rm km~s^{-1}$) compared to the absorption system found in the brighter quasar ($\sim 350$ $\rm km~s^{-1}$). Furthermore, we { do not find a significant correlation between the kinematics of the components in the two sightlines. A marginal degree of correlation could be present for the first and last (in velocity space) components seen in the QSO-NW\ sightline, for which components exist at similar velocities in the other sightline, albeit with significantly different strengths.} This result suggests that the cold absorbing gas is not necessarily related to coherent structures in the group as a whole, which is also corroborated by the non-detection of correlated \mbox{Mg\,{\sc ii}}\ in the $z\approx 1.05$ group.
The lack of evidence in support of a dense and homogeneous intra-group medium leaves open the hypothesis that the enhanced absorption in the groups arises from the CGM of individual galaxies,
which has been processed by the environment. Indeed, considering again the
$z\approx 0.88$ group, we find near the QSO-NW\ line of sight a clustering of galaxies with negative velocities compared to the group average redshift, which broadly corresponds to the velocities of the absorption components in this sightline. More specifically, from Figure \ref{fig:groups_gallery}, it appears that the galaxies lie in two sub-groups that are mostly aligned along the SE-NW direction. The fact that both quasar sightlines go through the direction of this alignment is a possible explanation for why this group is the only one detected in both sightlines.
Thanks to the very deep spectroscopy in the MUDF we can further test for the presence of a diffuse cool intra-group medium by stacking the spectra of background galaxies that lie behind the $z\approx 0.88$ group. { We analyse four stacks: A) 18 galaxies selected to have a $S/N>3$ per wavelength channel in two spectral windows bracketing the wavelength of the $z\approx 0.88$ absorption (red ellipses in Figure \ref{fig:musefovstack}); B) seven galaxies that are roughly co-spatial with the distribution of the group galaxies (blue ellipses); C) three galaxies which show \mbox{Mg\,{\sc ii}}\ absorption at $z\approx 0.88$ in their individual spectra (green ellipses); and D) four galaxies lying between the two QSO sightlines and roughly aligned with the geometrical center of the group (orange ellipses).}
For comparison, the position of the group galaxies is shown by black dashed ellipses.
The stacked spectra are shown in Figure \ref{fig:stack7} as black lines, with the 1$\sigma$ uncertainty from bootstrap resampling shown in grey.
{ In stacks A and D we find non-detections, with 2$\sigma$ upper limits of $W_{2796}<0.28\AA$ and $W_{2796}<0.47\AA$, respectively. In stack B, which is restricted to the galaxies aligned with the two foreground sub-groups, we find a $\sim3.5\sigma$ combined detection of the \mbox{Mg\,{\sc ii}}\ doublet with $W_{2796}=0.61^{+0.22}_{-0.23}\AA$, although with a $\approx 50$ $\rm km~s^{-1}$\ offset with respect to the absorber seen in the bright quasar sightline (red solid line). By comparing the MUSE and UVES spectra of the quasars, we verified that this offset is real and does not arise from mismatches in the wavelength calibration. Lastly, in stack C, which is composed of galaxies in which the absorption signal can be detected in individual spectra, the detection becomes even stronger, with $W_{2796}=1.20^{+0.24}_{-0.21}\AA$.
Again, this result reinforces the idea that there exists a large cross section of cool gas in close proximity to the group galaxies (stacks B and C), but that there is no dense and widespread cool gas giving rise to a strong absorption signal on larger scales beyond the regions traced by the galaxies themselves (stack A), or in the region between the two sub-groups (stack D).}
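The stacking and bootstrap error estimate described above can be sketched as follows. This is a simplified illustration on synthetic continuum-normalized spectra (the actual analysis stacks MUSE spectra shifted to the group rest frame, and all names here are our own):

```python
import numpy as np

rng = np.random.default_rng(42)

def stack_with_bootstrap(spectra, n_boot=500):
    """Mean-stack continuum-normalized spectra (rows = galaxies) and
    estimate the 1-sigma uncertainty per pixel by bootstrap resampling."""
    spectra = np.asarray(spectra)
    n = spectra.shape[0]
    stack = spectra.mean(axis=0)
    boots = np.empty((n_boot, spectra.shape[1]))
    for i in range(n_boot):
        idx = rng.integers(0, n, n)        # resample galaxies with replacement
        boots[i] = spectra[idx].mean(axis=0)
    return stack, boots.std(axis=0)

# Synthetic example: 18 flat spectra with per-pixel noise sigma = 0.3
n_gal, n_pix, sigma = 18, 200, 0.3
spectra = 1.0 + rng.normal(0.0, sigma, size=(n_gal, n_pix))
stack, err = stack_with_bootstrap(spectra)
# err should approach sigma / sqrt(n_gal) per pixel for Gaussian noise
```

For purely Gaussian noise the bootstrap error converges to the familiar $\sigma/\sqrt{N}$ scaling, which is why stacking 18 background sightlines pushes the detection limit well below that of any individual spectrum.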
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{musefovstack.pdf}
\caption{ Location within the MUDF of the background galaxies (red solid ellipses) that we include in the spectral stack A used to search for extended \mbox{Mg\,{\sc ii}}\ absorption around the galaxies (black dashed ellipses) in the $z\sim0.88$ group. Three more stacks (B, C, and D) include the galaxies marked with blue, green, and orange ellipses respectively.}
\label{fig:musefovstack}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{MgII_grp088_stack_paper.pdf}
\caption{Stacked spectra (black) of MUDF background sightlines probing galaxies of the $z\sim0.88$ group around the \mbox{Mg\,{\sc ii}}\ doublet. { The panels from top to bottom refer to stacks A to D as described in the main text.} The gray filled region shows the 1$\sigma$ uncertainty from bootstrap resampling. The zero velocity is defined by the average redshift of the group ($z=0.88205$), and the rest velocity of each transition in the \mbox{Mg\,{\sc ii}}\ doublet is shown by red vertical dashed lines. The red solid line shows the best fit model of the UVES spectrum of the QSO-SE convolved to match the MUSE spectral resolution, { while the red dashed line in the top panel shows the best fit model for the QSO-NW spectrum.}}
\label{fig:stack7}
\end{figure}
\section{Discussion} \label{sec_discussion}
\subsection{A high fraction of strong \mbox{Mg\,{\sc ii}}\ absorbers in galaxy groups}
We performed a search for galaxy groups at $0.5<z<1.5$ in the MUDF,
finding seven groups of at least three galaxies. From abundance matching we obtain a virial mass for these groups in the range $\log(M_h/\rm{M_\odot}) \approx 11 - 13.5$, with virial radii in the range of $\approx 100-200~\rm kpc$. Our most massive group includes 15 galaxies and could evolve into a Virgo-like cluster of galaxies by $z=0$ \citep{Chiang13}.
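The virial radii quoted above follow from the halo masses under a spherical-overdensity definition. A minimal sketch, assuming an $R_{200c}$ definition (mean density 200 times the critical density at the group redshift) and illustrative cosmological parameters ($H_0=70$ km/s/Mpc, $\Omega_m=0.3$, flat), which may differ in detail from the definition adopted in our analysis:

```python
import math

G = 6.674e-11                          # m^3 kg^-1 s^-2
MSUN = 1.989e30                        # kg
KPC = 3.086e19                         # m
H0 = 70.0 * 1000.0 / (KPC * 1000.0)    # 70 km/s/Mpc in s^-1
OM, OL = 0.3, 0.7

def r200c_kpc(log_mh, z):
    """Radius enclosing 200x the critical density for a halo of mass M_h."""
    Hz2 = H0 ** 2 * (OM * (1.0 + z) ** 3 + OL)
    rho_c = 3.0 * Hz2 / (8.0 * math.pi * G)                  # kg m^-3
    M = 10.0 ** log_mh * MSUN
    R = (3.0 * M / (4.0 * math.pi * 200.0 * rho_c)) ** (1.0 / 3.0)
    return R / KPC

# A log(Mh) = 12 group at z = 1 has R200c of order 140 kpc,
# consistent with the ~100-200 kpc range quoted in the text.
```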
An independent search for cool gas absorbers, as traced by the \mbox{Mg\,{\sc ii}}\ doublet in the same redshift range, revealed five absorption systems in the high resolution spectrum of the brighter QSO-SE, and one system in the spectrum of the fainter QSO-NW. In total, of these six absorbers, five are at the same redshift as a galaxy group, a fraction of 83$\%$. For the only absorber not associated with a group, we found a galaxy within 300 $\rm km~s^{-1}$\ in velocity space. This galaxy is massive ($\log(M_*/M_\odot) = 10.51$), which corresponds to a dark matter halo mass of $\log(M_h/M_\odot) \approx 12.25$, following \citet{Moster10}. At this mass, it is very likely that the halo hosts several satellite galaxies which could be outside our survey footprint, so with spectroscopic data covering a larger area we might find this galaxy to be part of a group as well. Indeed, by looking at the absorption profiles in Figure \ref{fig:fits_briqso}, this absorber appears kinematically complex, with several strong and narrow absorption components, as found for the absorbers associated with galaxy groups.
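The stellar-to-halo mass conversion used above can be sketched by numerically inverting the \citet{Moster10} relation. The parameters below are the published $z=0$ best-fit values, whereas the value quoted in the text uses the redshift-dependent form of the relation, so the output here is indicative only:

```python
import math

def shmr(log_mh):
    """Moster et al. (2010) z=0 stellar-to-halo mass relation:
    m*/Mh = 2N [(Mh/M1)^-beta + (Mh/M1)^gamma]^-1 (monotonic in Mh)."""
    N, log_m1, beta, gamma = 0.02820, 11.884, 1.057, 0.556
    x = 10.0 ** (log_mh - log_m1)
    ratio = 2.0 * N / (x ** -beta + x ** gamma)
    return log_mh + math.log10(ratio)          # log10 stellar mass

def invert_shmr(log_mstar, lo=10.0, hi=15.0):
    """Bisection on the monotonic relation to recover log halo mass."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shmr(mid) < log_mstar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the $z=0$ parameters, $\log(M_*/M_\odot)=10.51$ maps to $\log(M_h/M_\odot)\approx 12$; the slightly larger value quoted in the text reflects the evolution of the relation to the galaxy redshift.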
Despite the alignment in redshift space between the group galaxies and the absorption profiles, it is difficult to unambiguously conclude whether we are observing gas in the intra-group medium, or whether
the absorption ultimately arises from the CGM of individual group members.
Throughout our analysis, however, we have uncovered pieces of evidence that more closely support the latter scenario.
We have found that all the \mbox{Mg\,{\sc ii}}\ detections arise in groups with at least one galaxy closer than 150 kpc from the bright quasar line of sight, and the same is true for the single absorber in the faint quasar.
Moreover, the group galaxies which are closer in redshift space to the absorbers are also closer in projection,
and groups with a \mbox{Mg\,{\sc ii}}\ detection appear to host relatively massive galaxies that are expected to have intrinsically larger cross sections of cool gas in their CGM. Similarly, all the non-detections are associated with galaxies at larger distances from the sightline, or with galaxies of lower mass (and hence likely smaller cross sections), compared to the members of groups with a \mbox{Mg\,{\sc ii}}\ detection.
We have not found compelling evidence for the presence of a widespread intra-group medium { producing strong absorption}, for two reasons. First, out of four \mbox{Mg\,{\sc ii}}\ systems associated with groups, only one system (at $z\approx 0.88$) is detected in the two sightlines, and the profiles are not very well correlated in velocity space. Second, when stacking galaxies in the background of this $z\approx 0.88$ group, a detection of \mbox{Mg\,{\sc ii}}\ in absorption emerges only when considering background galaxies at small projected separations from the foreground ones. Altogether, these pieces of evidence suggest that the { strong absorbers} are associated mostly with the CGM of group galaxies. { However, we do not rule out the presence of a homogeneous low-density intra-group medium which can give rise to weak absorption components.}
This conclusion, however, does not rule out environmental effects shaping the properties of the CGM of group members. Indeed, in terms of equivalent width, group galaxies appear to host stronger \mbox{Mg\,{\sc ii}}\ absorbers than the general galaxy population. Several studies that have focused on samples of (relatively massive) galaxies probed by quasar sightlines, not selected according to their environment, have uncovered a clear anticorrelation between $W$ and impact parameter $d$ \citep{Steidel94, Chen10}. A similar trend seems to hold also in samples where background galaxies are used to probe foreground ones \citep{Rubin18}.
In these samples, strong absorbers with equivalent width $W>1\AA$ are typically found at $d<30$ kpc. Conversely, in our groups, strong systems are found at distances $\approx 50-100~\rm kpc$, despite the fact that we do not necessarily probe more massive galaxies than these previous studies (see Figure \ref{fig:mgii_w_dist_mass}).
While the most massive galaxies ($M_* \approx 10^{11}~\rm M_\odot$) in samples from the literature are themselves likely members of groups, generally these samples are thought to be more representative of the field population. Indeed, using a light cone from the \citet{Henriques15} semi-analytic model of galaxy formation, we find that for galaxies with a stellar mass in the range $10^{9.75}-10^{10.25}{\rm M_\odot}$ and redshift $0.4<z<0.6$, a fraction of only 42\% live in groups as defined by our method, with this fraction reducing even further at lower stellar mass. Thus, the samples of \citet{Chen10} and \citet{Rubin18}, which in some cases extend down to $10^9~\rm M_\odot$, are representative of a more isolated population of galaxies compared to the sample of group galaxies studied in this work.
Therefore, the difference in absorption strength at fixed impact parameter, combined with the complex kinematic profiles, implies that the group environment plays an active role in boosting the cross section of cool gas present in the CGM of member galaxies.
This result, which emerges from a complete and systematic search for groups in the MUDF, is in line with previous findings from serendipitous discoveries of groups associated with \mbox{Mg\,{\sc ii}}\ absorbers.
For instance, \citet{Whiting06} found at least five galaxies at the redshift of a $W=2.5\AA$ \mbox{Mg\,{\sc ii}}\ absorber. Similarly, \citet{Kacprzak10} found five low-mass galaxies associated with a $W=1.8\AA$ absorber. \citet{Nestor11} found ultra-strong \mbox{Mg\,{\sc ii}}\ absorbers with $W=3.6, 5.6 \AA$ in two systems associated with massive galaxy pairs, and lastly \citet{Gauthier13} found another very strong absorber with $W=4.2 \AA$ in a group of three galaxies. These works found that the closest galaxy is at $d=30-70$ kpc. In the work by \citet{Kacprzak10}, a galaxy is found at a smaller impact parameter (17.4 kpc). However, from an analysis of the metallicity of the absorber and of the group galaxies, the authors concluded that the closest galaxy does not host the absorber, which is more likely to arise from tidal debris in the group environment. { \citet{Bordoloi11} found marginally more extended \mbox{Mg\,{\sc ii}}\ absorption profiles compared to a sample of more isolated galaxies, concluding that the absorbing gas is more likely to be associated with the individual galaxies rather than with the intragroup medium}. \citet{Nielsen18} similarly found an enhanced equivalent width of \mbox{Mg\,{\sc ii}}\ absorbers in a sample of 29 galaxy groups with respect to an isolated galaxy sample. These authors complemented these results with a kinematical analysis of the absorption profiles and concluded that the absorbers originate from an intragroup medium rather than from individual galaxies. However, it remains unclear whether these sparse groups (having on average 2-3 members) are part of a virialized halo that is able to host a diffuse intragroup medium. { Lastly, \citet{Bielby17a} studied the \mbox{Mg\,{\sc ii}}\ absorption profile associated with a low-mass group at $z\approx0.28$, finding that the absorbing gas likely arises from multiple gas clouds orbiting in the group halo and giving rise to the intragroup medium.}
\subsection{The origin of enhanced cold gas in galaxy groups}
We have uncovered evidence supporting a scenario in which, at fixed stellar mass, group galaxies have a larger cross section of cool gas that gives rise to frequent and strong absorption systems. This raises the question of which mechanisms are responsible for the enhancement.
Several physical mechanisms might contribute to a larger cross section of \mbox{Mg\,{\sc ii}}\ in group galaxies, including higher gas fractions for satellite galaxies in dense environments \citep{Noble17} or stronger outflows from stellar winds due to enhanced star-formation \citep{Mcgee14}. There seems, however, to be no clear consensus on the relevance of these mechanisms in groups at $z\approx1$ \citep{Wetzel12, Rudnick17, Fossati17}. Moreover, we find it difficult to assess the contribution of these mechanisms with our own data. As we argue below, however, a prominent mechanism at play is likely to be gravitational interaction among group members.
It is well known that galaxy groups are the ideal environment to trigger gravitational interactions among members. Indeed, in more massive haloes (i.e. galaxy clusters), the higher velocity dispersion leads to shorter interaction times, with a reduced effect of tidal interactions on the gaseous and stellar structure of galaxies \citep[see][]{Boselli06}.
Massive groups therefore represent a sweet spot, in terms of richness and velocity dispersion, for gravitational encounters to occur.
For this reason, we argue that the evidence described above suggests that absorbers arise from gas once bound to the group galaxies (or their CGM), which has been stripped by tidal forces. This material, displaced from its original site, naturally boosts the cross section of cool gas, leading to enhanced absorption at larger impact parameters compared to the field galaxies \citep{Morris94}.
These tails, gaseous bridges, and plumes have indeed been imaged in atomic and ionized gas within groups in the local Universe \citep{Mihos12, Rasmussen12, Taylor14, Fossati19}, where the denser components can stretch up to scales of hundreds of kpc. In this picture, the several kinematic components seen in the absorbers could be related to the chaotic orbits of the group galaxies during the stripping process, as seen for instance in the complex kinematic maps of galaxy encounters traced in emission with MUSE \citep{Fossati19}.
This process would explain why a correlated detection of cool gas in both the MUDF quasars is a rare event. Given the distance of $\sim 500$ kpc between the two sightlines, a double detection requires either a very massive group that spans this distance with enough galaxies, a number of which are subject to some degree of gravitational perturbation,
or a special alignment of galaxies, as in the $z\sim 0.88$ group, which is composed of two sub-groups that fortuitously align with the orientation of the two quasars.
The connection between enhanced \mbox{Mg\,{\sc ii}}\ absorption and a tidal stripping scenario is further reinforced by other works. For instance, \citet{Kacprzak10}, using high-resolution {\it HST} imaging data, found perturbed morphologies for the three brightest group galaxies, with tidal tails extending up to $\sim25$ kpc. They concluded that these morphological features are suggestive of merger events or tidal stripping, and that dense and cool stripped gas can host the observed \mbox{Mg\,{\sc ii}}\ absorber.
More recently, \citet{Chen19a} studied the same group with deep MUSE observations. These data corroborated the results of \citet{Kacprzak10} revealing a giant nebula of ionized gas contributing to the total mass and metal content of the intra-group medium. An accurate kinematical analysis showed that the \mbox{Mg\,{\sc ii}}\ absorber is indeed located in the stripped gas passing in front of the background quasar. These results point towards the presence of a multi-phase medium { \citep[see also][]{Bielby17a}}, and it is possible that the cool and ionized gas is gradually heated to the group virial temperature, contributing to the warm-hot intra-group medium.
\section{Summary and Conclusions} \label{sec_conclusions}
In this work, we have presented the design, observations, and data reduction methodology for the MUSE Ultra Deep Field (MUDF) survey, together with results from the cold gas content of galaxy groups at $0.5<z<1.5$. The MUDF survey is a 150-hour (on source) large programme on the MUSE instrument at the VLT that is observing a $1.5\times1.2$ arcmin$^2$ region of the sky characterised by the presence of two quasars at $z\approx 3.22$ separated by $\approx$ 60 arcsec. The MUSE data are also complemented by {\it HST} imaging programmes in the near-UV and in the near-IR, and by the deepest {\it HST} near-IR spectroscopic campaign in a single field. Deep high-resolution spectroscopy of the quasars is also being collected with the UVES instrument at the VLT. These rich datasets will enable us to reach several goals, including: an investigation into the connection between gas and galaxies reaching the low-mass regime at $z\sim 2-3$; a search for gas filaments of the cosmic web in emission; and a study of the build-up of the Hubble sequence with cosmic time with accurate information on the morphology, kinematics, gas budget, and star-formation history of galaxies.
In this paper, we have discussed in detail the survey design, the data reduction procedure, the extraction and validation of source catalogues, and the derivation of the physical properties of galaxies from the partial dataset observed to date. As a first application, we investigated the galaxy environments in the MUDF, finding seven groups of three or more members at $0.5<z<1.5$ covering a large range in inferred dark matter halo mass ($\log(M_h/\rm{M_\odot}) \approx 11 - 13.5$).
We explored the correlation between galaxies and galaxy groups with cold gas detected via \mbox{Mg\,{\sc ii}}\ absorption in the quasar sightlines. We found five absorption systems in the spectrum of the bright quasar and only one in the faint quasar spectrum. All but one of these systems are at the redshift of a galaxy group, while for the last one we find a massive galaxy at a small velocity separation, which we speculate could itself be a member of a group falling outside the MUDF footprint.
The absorbers have a complex velocity profile which we decompose into several Voigt components using a novel Bayesian technique. We find that, within a given group of galaxies, members that are close in velocity space to the absorber are also closer to the sightline in projection compared to the other group galaxies. Furthermore, through the analysis of correlated absorption in both quasar sightlines and via a tomographic map of one of the groups using background galaxies, we find no significant evidence of a widespread and homogeneous intra-group medium giving rise to { a strong} absorption signal.
Altogether, these results suggest that the absorbers reside in (or are stripped from) the CGM of one or more galaxies within $\approx 100$ kpc from the quasar sightline.
The strength of the absorption seen in groups is higher than what is typically found for more isolated galaxies at comparable impact parameters. This evidence, combined with previous examples in the literature of strong \mbox{Mg\,{\sc ii}}\ absorbers in group-like environments, suggests that the absorbers reside in gas once bound to individual galaxies (or their CGM) that has been stripped at larger radii, boosting the cross section of \mbox{Mg\,{\sc ii}}. Gravitational interactions and tidal forces, in analogy with what is seen in nearby groups, are indeed effective in galaxy groups and obvious mechanisms to strip gas and stars from the galaxy disks, leading to the observed phenomenology.
So far, our analysis relied on deep MUSE data in a double quasar field. These data will soon be complemented by deep spectroscopic and imaging data from {\it HST}, extending the wavelength of our observations from the near-UV to the near-IR. These data will provide us with accurate redshifts for galaxies in the redshift desert ($1.5 < z < 3.0$), as well as strong constraints on the stellar mass and recent star-formation of galaxies down to low stellar mass. The approach of combining multiple quasar sightlines with complete galaxy surveys and an accurate reconstruction of the galaxy environment is critical for understanding the role of strong absorbers in the framework of how galaxies are fed with fresh gas.
In forthcoming papers, we will study the impact of the group environment on the star-formation activity of galaxies, while extending this work to higher redshift.
\section*{Acknowledgements}
We thank J.T. Mendel for the development of the MC-SPF code used in this work and the anonymous referee for their comments, which improved the quality of the manuscript.
M. Fumagalli acknowledges support by the Science and Technology Facilities Council [grant number ST/P000541/1]. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 757535). SC acknowledges support from Swiss National Science Foundation grant PP00P2\_163824. RC was supported by a Royal Society University Research Fellowship. CP thanks the Alexander von Humboldt Foundation for the granting of a Bessel Research Award held at MPA. MR acknowledges support by HST Program GO-15637 provided by NASA through grants from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
This work is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme IDs 65.O-0299(A), 68.A-0216(A), 69.A-0204(A), 1100.A-0528(A), 1100.A-0528(B), 1100.A-0528(C), 0102.A-0194(A), 0102.A-0194(B).
This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. This research made use of Astropy \citep{Astropy-Collaboration13}. For codes and data products used in this work, please contact the authors or visit \url{http://www.michelefumagalli.com/codes.html}. Raw data are available via the ESO Science Archive Facility.
\bibliographystyle{mnras}
\section{Introduction}
Hot carrier solar cells (HCSCs) have been proposed as devices that can increase the conversion efficiency of a single-junction solar cell above the Shockley-Queisser limit \cite{1,2,3}. Since thermalization of photogenerated carriers is a major loss mechanism in conventional solar cells, HCSCs have the potential to produce higher-efficiency devices using simple single-gap semiconductor architectures by eliminating the thermal losses associated with electron-phonon interactions \cite{4,5}.
However, before their practical implementation can be realized, HCSCs must overcome two main challenges \cite{3,4,6}: 1. finding an absorber material in which hot phonons are longer lived than hot carriers, so as to promote reabsorption of these hot phonons (a phonon bottleneck), which significantly reduces hot carrier relaxation through phonon channels \cite{7}; 2. implementing energy selective contacts \cite{8,9,10} in which only a narrow range of energies (within the hot carrier distribution) is extracted, restricting the energy distributed through carrier cooling and therefore minimizing the entropy heat transfer loss \cite{3,4,5}.
Here, InAs/AlAs$_{0.16}$Sb$_{0.84}$ quantum wells (QWs) are investigated as a candidate hot carrier absorber. The use of quantum wells also offers the potential to facilitate the development of energy-selective contacts and fast carrier extraction via resonant tunneling from the QWs, making them an attractive system for HCSCs.
\section{EXPERIMENTAL RESULTS AND ANALYSIS}
A schematic of the sample used in this investigation is shown in Fig. \ref{figure1}(a). The InAs/AlAs$_{0.16}$Sb$_{0.84}$ multi-quantum-well heterostructure was grown by molecular beam epitaxy (MBE) at a substrate temperature of 465\degree{C}. A 2000 nm InAs buffer layer was grown on a nominally undoped GaAs substrate to reduce the density of crystalline defects arising from the lattice mismatch of the active region (MQWs) and the substrate. The thickness of the InAs QWs is 2.4 nm and the AlAs$_{0.16}$Sb$_{0.84}$ barriers are 10 nm.
As shown in Fig. \ref{figure1}(b), the quantum confinement is weak in the valence band (VB) and much stronger in the conduction band (CB). The low barrier in the VB results in the rapid transfer of holes, absorbed directly in the InAs QWs, to the AlAsSb barriers, and in their enhanced mobility with increasing temperature. Conversely, due to the large confinement in the CB, electrons remain strongly confined at all temperatures \cite{11,12,13}. The type-II band alignment shown in Fig. \ref{figure1}(b) (magnified in 1(d) for clarity) and the thermal diffusion of holes result in an excess of electrons (with respect to holes) in the QWs due to the reduced radiative recombination rate.
In addition, the large energy band offset between the QW and barrier facilitates absorption of a large proportion of the solar spectrum directly in the InAs QWs, without significant losses in the barriers. Finally, the narrow QWs enable a design in which the separation of the energy levels in the CB is large ($\sim$ 0.7 eV), resulting in a hot carrier distribution that predominantly occupies the ground-state subband, without significant broadening due to the occupation of higher-order (or barrier) subbands (Fig. \ref{figure1}(c), (d)).
Photoluminescence (PL) spectra for a range of power densities at 10 K are shown in Fig.\ref{figure2}(a). At lower powers, a shift in the PL peak energy is evident, which reflects alloy fluctuations that have a significant effect at low power and temperature \cite{14,15}. At intermediate powers, the peak energy stabilizes and a broadening of the high-energy tail becomes evident. Such high-energy broadening is indicative of the presence of hot carriers \cite{16,17,18} generated by non-equilibrium photogenerated carriers in the CB.
The observation of the shift in peak energy at low power is also evident in Fig.\ref{figure2}(b), which shows the dependence of the peak PL energy (at increasing temperature) versus absorbed power $(P_{abs})$. At powers below 1-2 W/cm$^{2}$ a large increase in the peak PL is observed. However, at higher $P_{abs}$ the peak PL energy saturates, particularly at higher temperatures. This behavior has been shown to be due to the presence of alloy fluctuations at the InAs-AlAsSb interface and the resulting spatial localization of carriers, which is quenched or saturated, with increasing temperature and/or excited carrier density. \cite{19}
\begin{figure}
\includegraphics[scale=0.43]{figure1.pdf}
\caption{\label{figure1}(a) Schematic representation of the InAs/AlAs$_{0.16}$Sb$_{0.84}$ quantum well sample investigated. (b) Simulated energy profiles showing the relative energy of the confinement potential at the $\Gamma$ point in the conduction (high) and valence band (low). (c) 2D electron density of states as a function of energy for this structure. (d) Magnification of the band offsets displaying the type-II band alignment with large separation of the energy subbands.}
\end{figure}
The effect of the alloy fluctuations is also illustrated in Fig.\ref{figure2}(c), which shows the power-dependent behavior of the temperature difference between the carrier and lattice temperatures, $\Delta T=T_{e}-T_{L}$. The carrier temperature, and therefore $\Delta T$, can be quantified by fitting the high-energy tail of the PL spectrum using the generalized Planck relation \cite{12,16,17,18,20,21,22,23}
\small
\begin{equation}
I(E)\propto\varepsilon(E)\exp\left(-\frac{E}{k_{B}T_{e}}\right)
\end{equation}
\normalsize
where $I$ is the energy-dependent PL intensity, $\varepsilon$ is the effective emissivity, which is related to the absorption profile, $k_{B}$ is Boltzmann's constant, and $T_e$ is the carrier temperature extracted from the slope of the PL at energies greater than the band gap. Although hot carriers have been predominantly investigated using ultrafast time-resolved spectroscopy \cite{21,22,24}, Equation (1) describes a technique to study the behavior of hot carriers in continuous-wave operation, the mode of operation of solar cells, and therefore presents a more realistic method to interpret the hot carrier dynamics in practical photovoltaic systems.
At low powers, a large shift of the carrier temperature is observed (below 1 W/cm$^{2}$). The validity of Equation (1) for the extraction of $T_e$ assumes that the effective emissivity $(\varepsilon)$, therefore absorption, is constant at a fixed energy. That is, it is independent of the excitation power. Since the PL energy changes rapidly in the low power regime, the initial increase in $T_e$ is attributed to an artifact of the increasing absorption rather than the real carrier temperature. However (as described above) at higher powers ($P_{abs} >$ 2 W/cm$^{2}$) the energy shift stabilizes (Fig.\ref{figure2}(b)) and as such, reflects the (true) carrier temperature; independent of fluctuation effects, which are saturated under these excitation conditions.
It is important to emphasize, once more, that since there is a large energy difference between the ground-state transition and the higher-order states (Fig.\ref{figure1}(d)), the high energy tail represents hot carrier effects related solely to the ground-state of the QW, unperturbed by state-filling effects. It must be noted, however, that although band-to-band recombination in the AlAsSb barriers has little effect on the high-energy tail of QW luminescence, the effects of impurities in the QW \cite{25} and/or localized states at the QW/barrier interface cannot be totally dismissed as contributing to the high-energy tail; the latter of which is discussed in more detail below (see Fig.\ref{figure6}).
\begin{figure}
\includegraphics[scale=0.83]{figure2.pdf}
\caption{\label{figure2}(a) Power dependent PL spectrum at 10 K. (b) Peak PL energy at selected various temperatures. (c) Temperature difference ($\Delta T$) versus absorbed power ($P_{abs}$) at 10 K.}
\end{figure}
The number of carriers generated $(N_c)$ with increasing intensity is indicated with respect to the absorbed power on the upper axis of Fig.\ref{figure2}(b) and (c). The absorbed densities are of the order of, or less than, 3$\times$10$^{9}$ cm$^{-2}$ for the excitation levels used. If this is compared to the total 2D density of states calculated for the ground state of the InAs QWs, shown in Fig.\ref{figure1}(c) (1$\times$10$^{13}$ cm$^{-2}$), then it becomes clear that the increased contribution of the high energy tail is not the result of significant state-filling, or the saturation of the ground-state, and therefore likely has its origin in inhibited hot carrier relaxation via the creation of a phonon bottleneck \cite{7}. Similar effects have been observed recently in InAs QDs, where the spatial separation of carriers in impurity states leads to the observation of inhibited carrier relaxation as a result of reduced carrier-carrier scattering \cite{26}. The type-II nature of InAs/AlAsSb is expected to lead to similar results here.
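The order-of-magnitude comparison can be sketched as a steady-state estimate $n = G\tau = (P_{abs}/E_{ph})\,\tau$; the photon energy and carrier lifetime used below are illustrative assumptions (they are not specified above), chosen only to show that the photogenerated sheet density stays orders of magnitude below the ground-state DOS:

```python
# Steady-state photogenerated sheet density: n = G*tau = (P_abs/E_ph)*tau
e = 1.602e-19        # J per eV
P_abs = 2.0          # absorbed power density from the text, W/cm^2
E_ph = 2.33          # assumed excitation photon energy, eV
tau = 1e-9           # assumed carrier lifetime, s
n_sheet = P_abs / (E_ph * e) * tau   # photogenerated density, cm^-2
dos_2d = 1e13        # ground-state 2D DOS from Fig. 1(c), cm^-2
print(f"n = {n_sheet:.1e} cm^-2, filling fraction = {n_sheet / dos_2d:.1e}")
```

With these assumptions the density comes out near the few-10$^{9}$ cm$^{-2}$ level quoted above, a filling fraction below 10$^{-3}$, i.e. far from ground-state saturation.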
Equation (1) shows that the PL spectrum can also give information about the absorption and the effective band gap of the QWs. The pre-exponential term of the Planck distribution describes an effective emissivity term, which is an energy dependent parameter. Fig.\ref{figure3}(a) shows the natural logarithm of this effective emissivity $(\ln \varepsilon)$ (closed squares) as a function of photon energy $(E)$ at 10 K for low excitation power, prior to significant hot carrier generation. These data are shown with respect to the power dependent PL. As the energy increases towards the peak PL energy, and therefore band gap, the effective emissivity increases rapidly. Once the energy gap is reached, the effective emissivity increases much more slowly, reflecting (somewhat) the lower rate of change of the absorption at higher energy \cite{27}.
\begin{figure}
\includegraphics[scale=0.25]{figure3.pdf}
\caption{\label{figure3}(a) Natural logarithm of effective emissivity (blue squares) and power dependent PL spectrum as a function of energy. (b) The comparison of the behavior of the highest power PL spectrum and natural logarithm of effective emissivity for several intensities.}
\end{figure}
Fig.\ref{figure3}(b) illustrates the effect of the emissivity term with increasing excitation power obtained by plotting $\ln \varepsilon$ vs. $E$ for the PL spectra of Fig.\ref{figure3}(a). These data are compared to the highest-power PL (shown in black). The behavior of the effective emissivity data is constant across the spectra, with the shift in absolute value related to the increasing carrier temperature, as extracted from the slope of the PL, which is inserted into Eq. (1) rearranged for the natural logarithm of the effective emissivity. The consistency of $\ln \varepsilon$ confirms that the pre-exponential term in Equation (1) is indeed independent of power (for the conditions used to extract $T_e$) and demonstrates further that a large separation exists between the ground and the first excited state in the QWs. Therefore, since the carrier temperature is extracted in a region with constant effective emissivity over a large energy range, $T_e$ is determined with relatively low uncertainty.
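The consistency check described here can be sketched directly: rearranging Equation (1) gives $\ln\varepsilon(E) = \ln I(E) + E/(k_{B}T_{e})$, so spectra taken at different powers should collapse onto a single $\ln\varepsilon$ curve once each is corrected with its own $T_e$. The step-like emissivity model below is an assumption for illustration only:

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant, eV/K

def ln_emissivity(energy, intensity, T_e):
    """ln(eps(E)) = ln(I(E)) + E/(kB*T_e), i.e. Eq. (1) rearranged."""
    return np.log(intensity) + energy / (kB * T_e)

# Two synthetic spectra sharing the same emissivity but different T_e
E = np.linspace(0.9, 1.1, 100)
eps = 1.0 / (1.0 + np.exp(-(E - 0.95) / 0.01))  # step-like absorption edge
I_300 = eps * np.exp(-E / (kB * 300.0))
I_400 = eps * np.exp(-E / (kB * 400.0))

# With the correct T_e for each spectrum, both collapse onto ln(eps)
print(np.allclose(ln_emissivity(E, I_300, 300.0),
                  ln_emissivity(E, I_400, 400.0)))  # True
```

A power-independent $\ln\varepsilon$ obtained this way is exactly the signature used above to validate the extraction of $T_e$.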
Fig.\ref{figure4}(a) and (c) display the dependence of the temperature difference $(\Delta T)$ for temperatures between 10 K and 90 K, and 90 K and 130 K, respectively. The inset to Fig.\ref{figure4}(c) shows the same data at 225 K and 295 K. In Fig.\ref{figure4}(a), the carrier temperature, and therefore $\Delta T$, tends to increase with increasing excitation power. The dependence of the hot carriers and their thermalization rate can be evaluated by studying the rate of the thermalized energy (which is the same as absorbed power in the $V_{oc}$ condition) per degree of temperature change \cite{16,17,18} as described by:
\small
\begin{equation}
P_{th}=\frac{ntE_{LO}}{\tau_{th}}\exp\left(-\frac{E_{LO}}{k_{B}T_{e}}\right)=Q\Delta T \exp\left(-\frac{E_{LO}}{k_{B}T_{e}}\right)
\end{equation}
\normalsize
where $P_{th}$ is the thermalized (absorbed) power, $n$ is the carrier density, $t$ is the thickness, $\tau_{th}$ is the thermalization time, $E_{LO}$ is the LO phonon energy for InAs, $k_{B}$ is Boltzmann's constant, and $T_e$ is the carrier temperature. $\Delta T$ is the difference in temperature between the carriers and the lattice, and $Q$ is the thermalization coefficient \cite{16}.
Equation (2) can be used to extract $Q$, an empirical parameter used to assess the contribution of phonon mediated carrier relaxation in QWs \cite{16,17,18}. A high $Q$ is indicative of efficient phonon-mediated relaxation of hot carriers; therefore, systems with lower $Q$ are desired for practical HCSCs \cite{18}. In Fig.\ref{figure4}(b) and (d), the slope of each data set is used to extract $Q$ for that particular temperature. In Fig.\ref{figure4}(b) it can be seen that increasing the temperature to 90 K results in a $Q$ that is increasing, consistent with the increasing contribution of LO phonon scattering at elevated temperature \cite{16}. As the temperature is increased further (90 K - 130 K), as shown in Fig.\ref{figure4}(d), $Q$ starts to become less dependent on $T_{L}$; stabilizing between 90 K and 130 K despite increasing phonon densities at elevated temperature. To reveal the mechanism for this unusual behavior the effect of $T_e$ with increasing excitation power at higher temperatures needs to be considered.
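Operationally, extracting $Q$ from Equation (2) reduces to a linear fit: plotting $P_{abs}\exp(+E_{LO}/k_{B}T_{e})$ against $\Delta T$ and taking the slope, as in the caption of Fig.\ref{figure4}. The sketch below demonstrates this on synthetic data generated with a known $Q$; the LO-phonon energy value is a nominal assumption:

```python
import numpy as np

kB = 8.617333e-5   # Boltzmann constant, eV/K
E_LO = 0.030       # nominal InAs LO-phonon energy, eV (assumed value)

def thermalization_coefficient(P_abs, T_e, T_L):
    """Q from Eq. (2): slope of P_abs*exp(+E_LO/(kB*T_e)) vs. dT."""
    T_e = np.asarray(T_e, float)
    dT = T_e - T_L
    y = np.asarray(P_abs, float) * np.exp(E_LO / (kB * T_e))
    slope, _ = np.polyfit(dT, y, 1)
    return slope   # W K^-1 cm^-2

# Synthetic data generated from Eq. (2) with Q = 2 W K^-1 cm^-2 at T_L = 90 K
T_L = 90.0
dT = np.array([5.0, 10.0, 20.0, 40.0])
T_e = T_L + dT
P = 2.0 * dT * np.exp(-E_LO / (kB * T_e))
print(thermalization_coefficient(P, T_e, T_L))  # recovers ~2
```

The same fit applied at each lattice temperature yields the $Q(T_L)$ trend discussed next.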
Fig.\ref{figure4}(a) shows $T_e$ versus power between 10 K and 90 K. As excitation power is increased the carrier temperature also increases, as the ratio of excited carriers to phonon density becomes larger. As the lattice temperature is increased to 90 K, the increase in $T_e$ with power begins to slow. This behavior is expected since the phonon density is larger at elevated temperatures (see Fig.\ref{figure6}(b)), increasing the prevalence of carrier thermalization, and it consequently leads to the increased $Q$ observed in Fig.\ref{figure4}(b). However, at higher temperatures ($>$ 130 K) the dependence of $T_e$ on absorbed power is weaker than at lower temperature (Fig.\ref{figure4}(c)). Indeed, although $T_e$ decreases with lattice temperature up to 90 K, and stabilizes somewhat through 130 K, above 130 K $T_e$ actually increases rather than settling into the expected equilibrium carrier distribution via strong LO phonon-relaxation; again, despite an increasing phonon density at elevated lattice temperatures.
In addition to this apparent decoupling of the phonon-relaxation channels above 130 K, the effect of excitation power also becomes less pronounced at higher temperature. The inset to Fig.\ref{figure4}(c) shows the power dependence at 225 K (solid squares) and 295 K (solid circles), respectively. What is evident is that the absolute $T_{e}$ increases relative to $T_L$ above 130 K, and from 225 K to 295 K the carriers are ``hot", even at lower excitation levels.
This behavior presents an interesting question with respect to the validity of using analysis of $Q$ in type-II systems. The empirical parameter $Q$ has been used previously to assess, or qualify the contribution of phonon-relaxation channels in type-I QWs and evaluate their potential for applications as the absorber in HCSCs \cite{18,28}. Indeed recently, this analysis has also been extended to determine the absolute efficiency that may be produced if such systems were applied to HCSCs under concentrated illumination \cite{18}.
This analysis, however, is based on two principles: 1) that at high temperatures the dominant relaxation channels are related to LO phonon scattering, and 2) that a constant carrier temperature with respect to excitation power occurs at (and represents) the equilibrium condition; i.e., the carriers are thermalized at $T_L$. In the case of the type-II QWs investigated here, the behavior of the system is not consistent with these assumptions, particularly at $T > 130$ K. The high (and increasing) $T_{e}$ at $T > 130$ K, along with the relative insensitivity to excitation power, suggests the relaxation of hot carriers in this system is \textit{not dominated by electron-phonon interaction.}
\begin{figure}
\includegraphics[scale=0.83]{figure4.pdf}
\caption{\label{figure4}(a), (c) $\Delta T$ versus power density for several lattice temperatures. (b), (d) Gradients of $P_{abs}$/exp(-E$_{LO}$/(k$_B$T$_{e}$ )) against $\Delta T$ give the thermalization coefficient $(Q)$. The inset graph in (c) displays the independency of $\Delta T$ from power densities at temperatures above 200 K.}
\end{figure}
This is further illustrated in the inset to Fig.\ref{figure5}, which shows the $Q$ analysis at 225 K (closed circles) and 295 K (closed squares). Here, the difficulty in interpreting a thermalization coefficient becomes clear since the independence of $T_e$ with power at these temperatures results in a $Q$ that is large, sometimes infinite, but can also (dependent upon fitting methodology) produce a negative value!
To understand the apparent anomalies in the system under investigation with respect to previous systems presented in the literature \cite{16,17,18}, the nature of the band alignment should be considered. The type-II nature of the InAs/AlAs$_{x}$Sb$_{1-x}$ QWs introduces important differences in the behavior of the samples at high temperature and under intense illumination. At low excitation and at temperatures below 90 K, the PL measured is dominated by a quasi-type-I transition. This is related to recombination of electrons confined in the QWs and holes localized at the InAs/AlAsSb interface \cite{11,12,13}.
At T $>$ 90 K the holes localized at the QW/barrier interface are thermally activated and redistribute into the lower energy AlAsSb barrier region. This delocalization of trapped charges reveals the true type-II band alignment of this system, and consequently the excitons will be spatially separated. It should be noted, if the alloy fluctuations were eliminated, or the materials properties improved, the type-II behavior would be observed at all temperatures.
A consequence of the separation of the electrons and holes is a reduced radiative recombination efficiency, and therefore a longer radiative lifetime. This behavior will result in an excess of electrons in the QWs, reduced carrier-carrier scattering \cite{24}, and (once more) the development of a phonon bottleneck \cite{7,22,23}. As such, the dominant relaxation process in the type-II QWs presented appears related predominantly to the radiative recombination lifetime, rather than phonon mediated processes.
\begin{figure}
\includegraphics[scale=0.83]{figure5.pdf}
\caption{\label{figure5} Relation between $Q$ (blue triangles) and lattice temperature, displayed up to $T_{L} = 130$ K. The inset shows that $Q$ cannot be determined for these data sets. Also shown (closed stars) is the temperature dependent hot carrier temperature difference, $\Delta T$.}
\end{figure}
This behavior further illustrates that the analysis of a thermalization coefficient $(Q)$ used for type-I systems \cite{16} appears invalid here. Indeed, since the \textit{rapid} spatial separation of carriers absorbed directly in the QWs is a general feature of type-II systems, the decoupling of LO-phonons via inhibited radiative recombination should be a general feature across other type-II QWs investigated this way.
Fig.\ref{figure5} illustrates further the unique difference between the dominant hot carrier relaxation processes in type-I and type-II systems. Specifically, Fig.\ref{figure5} shows a comparison of the change of $Q$ (open triangles) and $\Delta T$ (closed stars) versus lattice temperature, $T_{L}$. These data are extracted in a similar manner to those in Fig.\ref{figure4}. At $T < 90$ K, where the sample displays type-I behavior, $\Delta T$ decreases with increasing lattice temperature, i.e., the hot carriers are being thermalized by conventional LO-phonon interaction. In this temperature regime (10 K -- 90 K) $Q$ is shown to increase with temperature from 0.2 WK$^{-1}$cm$^{-2}$ to 2 WK$^{-1}$cm$^{-2}$, supporting the idea that $Q$-analysis is valid in this regime, when the system behaves as a (quasi)-type-I QW \cite{29}.
It must be noted, however, that the $Q$ determined here should be considered an upper limit since the diffusion (and therefore mobility) of the carriers absorbed is temperature dependent. Practically, this will result in a change of the absorbed power density as lateral carrier diffusion (and luminescence density area) increases at higher temperatures, before radiative recombination occurs.
At T $>$ 90 K, the nature of the system changes: as the holes delocalize from alloy fluctuations, the system transitions from (quasi)-type-I to type-II. As such, $T_e$ begins to increase, increasing linearly with lattice temperature up to 300 K. At temperatures between 130 K and 150 K the behavior, or interpretation, of $Q$ becomes ambiguous as the dependence of the hot carriers on excitation power becomes less pronounced (see Fig.\ref{figure4}(c)). In this regime, $T_e$ is dominated by the efficiency of the radiative recombination, which in type-II systems has been shown to extend for hundreds of nanoseconds \cite{30}. Therefore, the analysis of $Q$ at T $>$ 130 K, or more generally in type-II systems, becomes moot.
To investigate this hypothesis further, first-principles density-functional-theory (DFT) calculations using the VASP package \cite{31} were used to explore the electronic structure of an analogous InAs/AlSb heterostructure in which the InAs layer is 2.4 nm thick and AlSb is about 9.4 nm thick. A PBE-GGA exchange-correlation functional \cite{32} was used for the structural relaxation; when calculating the density of states, a hybrid functional was used \cite{33,34}. The heterostructure is very similar to that studied experimentally and allows a qualitative picture of its behavior to be determined. In this theoretical system, AlSb, rather than AlAsSb, is used to simplify the interpretation.
Fig.\ref{figure6}(a) shows the 3D density of states (DOS) calculated for the structure, which is magnified about the energy gap. The valence band edges of InAs and AlSb in this heterostructure are almost degenerate, while the conduction bands are well separated between InAs and AlSb. The calculations thus support the type-II band alignment. It should be noted that the band gap is normally underestimated in DFT-PBE calculations and also in hybrid-functional calculations \cite{35}.
The first peak above the Fermi level at 0.3 eV is an interfacial state mainly located at the InAs/AlSb interface, which may account for the carrier localization and the aforementioned transition from type-I to type-II in these QWs. This heterostructure displays two distinct interfaces, i.e., an AlSb-InAs (interface \textit{i}) and an Sb-Al/As-In (interface \textit{ii}), with the difference arising from the varied stacking sequence. A close inspection of the (3D) DOS by projecting it onto each atom suggests that the interfacial state is more pronounced at interface \textit{i}, particularly on the interfacial In, As, and Sb atoms. The origin of these interfacial states is under further investigation but may originate from the interfacial strain effect (Fig.\ref{figure6}(a)).
The results indicate that reducing the amount of Sb (that is, depositing more Al and As) at the interface between the QWs and the barriers may help to reduce these charge trapping levels. The available (3D) DOS of this interfacial state is on the order of 10$^{20}$ cm$^{-3}$ (or 10$^{13}$ cm$^{-2}$ assuming one-nm-thick 2D-interfaces), similar to the InAs conduction band edge. On the other hand, the (3D) phonon density (Fig.\ref{figure6}(b)), that is, the overall phonon density without distinguishing different types of phonon, is much higher than the electron density and increases rapidly as a function of temperature. The increased phonon density at higher temperature and the experimentally observed reduced thermalization suggest that phonon-mediated carrier relaxation does not dominate at high temperatures. This is consistent with the hypothesis that a phonon bottleneck forms in the type-II QWs presented, supporting the conclusion that the relaxation of hot carriers is dominated by the reduced radiative efficiency in these systems.
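The rapid growth of the phonon population with temperature can be illustrated with the Bose-Einstein occupation factor for a single LO-phonon mode; this is a simple sketch of the trend in Fig.\ref{figure6}(b), not the full phonon-density calculation performed there, and the LO-phonon energy is again a nominal assumed value:

```python
import numpy as np

kB = 8.617333e-5   # Boltzmann constant, eV/K
E_LO = 0.030       # nominal InAs LO-phonon energy, eV (assumed value)

# Bose-Einstein occupation of the LO mode at the lattice temperatures
# discussed in the text: negligible at 10 K, order unity near 300 K.
for T in (10, 90, 130, 300):
    n_BE = 1.0 / (np.exp(E_LO / (kB * T)) - 1.0)
    print(f"T = {T:3d} K: LO-phonon occupation = {n_BE:.3g}")
```

The many-orders-of-magnitude rise between 10 K and 300 K underlines why the persistence of hot carriers at high $T_L$ cannot be reconciled with phonon-dominated relaxation.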
The demonstration of robust hot carriers at elevated temperatures (and at reasonable excitation densities), coupled with the relaxation of phonon loss channels, indicates that type-II systems offer a viable route to practical hot-carrier solar cells.
\begin{figure}
\includegraphics[scale=0.12]{figure6.pdf}
\caption{\label{figure6} (a) Calculated 3D electron density of states (DOS) for an InAs/AlSb heterostructure, shown inset (upper). The DOS of InAs and AlSb are plotted by projecting the total DOS onto each component. (b) Calculated 3D phonon density of InAs as a function of temperature. The atomic stacking is schematically illustrated in (a) to show the two distinct interfaces. The size of the atoms is shown based on their covalent radius.}
\end{figure}
The InAs/AlAsSb system, specifically, has several attractive features making it a leading candidate: 1) The large QW to barrier energy separation, which is tunable across the solar spectrum, facilitates efficient absorption of the sun's energy; 2) The degeneracy of the valence band enables efficient hole extraction, while resonant tunneling structures are a reasonable route for fast, energy-selective hot electron extraction in these systems; and 3) Since the photogenerated carriers absorbed directly in the InAs QW are rapidly separated by the type-II band alignment, the loss of photogenerated carriers to photoluminescence is minimized in the QWs. Work is now underway to develop device architectures to further evaluate these systems in practical solar cell devices.
\section*{Acknowledgments}
The authors would like to acknowledge the contribution of James Dimmock of Sharp Laboratories of Europe Ltd for useful discussions and the critical reading of this manuscript, and Professor Rui Yang (University of Oklahoma) for his insight into the early sample design. The computation was performed at the Extreme Science and Engineering Discovery Environment (XSEDE) and the OU Supercomputing Center for Education \& Research (OSCER) at the University of Oklahoma.
\section*{References}
1,108,101,565,293 | arxiv | \section{Introduction}
Possible relativities, as described by relativity symmetries beyond the
Lorentz or Poincar\'e group of Einstein relativity, have attracted
considerable interest recently (see for example Refs.\cite{dsr,G}).
Since the pioneering work of Snyder\cite{S}, symmetry deformation, mostly
considered as required to implement an invariant quantum scale, has
been a main key to the direction of theoretical pursuit. That gives rise
to the idea of a quantum relativity. On the other hand, if one does believe that
the entity we used to know as space-time does have a different structure
at the true microscopic/quantum level that can plausibly be described
directly, such a `quantum space-time' will have its own relativity.
The relativity symmetry deformations could be nicely formulated as
Lie algebra stabilizations \cite{CO}. Following this line of thinking,
we implemented in Ref.\cite{023} a linear realization perspective to arrive
at the `quantum space-time' description with the quantum relativity
symmetry as the starting point. Lorentz or Poincar\'e symmetry (of Einstein
relativity) can be considered exactly a result of the stabilization of the
Galilean relativity symmetry. The linear realization scheme in that setting is
nothing other than the Minkowski space-time picture. Such a mathematically
conservative approach, however, leads to a very radical physics perspective
that at the quantum level space-time is to be described as part of something
bigger \cite{023}. The latter as the arena for the description of the new
fundamental physics is called the quantum world
in Ref.\cite{030}. It is to be identified, mathematically, as the coset
space $SO(2,4)/SO(2,3)$ \cite{031}, or the hypersurface
$\eta_{\ssc \mathcal M\mathcal N} X^{\!\ssc \mathcal M} X^{\!\ssc \mathcal N} = 1$
[$\eta_{\ssc \mathcal M\mathcal N} =( -1, 1, 1, 1, 1, -1)$],
\footnote{Note that we have flipped the metric sign convention adopted
in our earlier publications \cite{023,030,031}; from now on, the
time-like (space-like) geometric signature is $-1$ ($+1$).}
within the 6D classical geometry with $X^\mu$ ($\mu=0$ to 3) being space-time
coordinates while $X^4$ and $X^5$ being {\it non-space-time} ones. The `time' of
Minkowski space-time is not just an extra spatial dimension. Its nature is
dictated, from the symmetry stabilization perspective, by the physics of having
the invariant speed of light $c$. The other two new coordinates in our `quantum
space-time' picture are likewise dictated to be neither space nor time\cite{023,030}.
We reproduce in table~1 the suggested physics of the stabilizations/deformations
involved in our sequence of stabilizations and extensions by translations (of the
corresponding arenas for the linear realizations) arriving at the $SO(2,4)$ quantum
relativity \cite{030}, as illustrated by
{\boldmath \beqa \nonumber
&& ISO(3) \; \rightarrow \;\; SO(1,3)\;\;\hookrightarrow\;\; ISO(1,3) \\
&&\;\; \rightarrow \; SO(1,4)
\;\; \hookrightarrow \;\; ISO(1,4) \; \rightarrow \;\; SO(2,4) \;.
\nonumber \eeqa}
Like $X^0=ct$, we have $X^4=\kappa c \sigma$ and $X^5=\ell\rho$ with,
however, $\sigma$ having the physics dimension of time/mass (and a space-like
geometric signature) and $\rho$ a pure number. Understanding the physics role of the
two extra coordinates $\sigma$ and $\rho$ of the quantum world is considered crucial
for any attempt to formulate the dynamics. Here in this letter, we report
a way to approach the challenge --- analyzing the physics of what
we called the Poincar\'e-Snyder relativity.
We will explain first, in the next section, the structure of the Poincar\'e-Snyder
relativity, with symmetry denoted by $G(1,3)$. In short, it is mathematically sort
of a `Galilei group' for 4D space-time. The analog of time $t$ as an external
evolution parameter for Galilean dynamics is here given by $\sigma$.
Recall that $\sigma$ has a space-like geometric signature but the physics
dimension of time/mass\cite{023,030}. We are inspired to consider the new
relativity by our studies on the quantum relativity. The ultimate goal is to
analyze and formulate physics directly for the intrinsically quantum arena ---
the quantum world (see discussions in Refs.\cite{030,031}). To better
prepare ourselves for the formidable challenge, we want to take a step backward
and study the relativity(ies) with symmetry between the Einstein and our
quantum case. From the latter perspective, the Poincar\'e-Snyder
relativity is the first step beyond Einstein relativity. Its physics setting
should be not much different from the latter. It has, however, a
mathematical structure very similar to the $G(3)$ Galilean case. The latter
suggests similar mathematics in the formulation of the admissible dynamics for
the two cases, both at the classical and quantum level. The Poincar\'e-Snyder
relativity mechanics may hence be a much more familiar object. We must warn
the readers that the physics interpretations of the similar mathematics
are expected to be quite nontrivial and unconventional though.
The Poincar\'e-Snyder relativity is still a relativity on 4D Minkowski
space-time, only with an extra kind of momentum
dependent reference frame transformations admitted. These momentum boosts
are independent of the usual velocity boosts, but reduce to the latter
when $\sigma=\tau/m$, the Einstein proper time over the standard (fixed)
particle rest mass \cite{023}. Just as Galilean velocity boosts are
transformations on 3D space dependent on an external parameter time,
the momentum boosts enforce the independent $\sigma$-coordinate external
to 4D space-time. The `dynamic' formulation naturally suggests taking
$\sigma$ as a sort of `evolution' parameter. We called that $\sigma$-dynamics
or $\sigma$-evolution, withholding the exact physics interpretation.
Within the Einstein framework, $\sigma$-evolution looks like
proper time evolution, and as such has actually been used quite a lot in
the literature to describe Einstein relativistic dynamics, both classical
and quantum\cite{HP,cS}. This letter is the first step to take a second
look at the kind of studies, focusing on the difference and superiority
of the new Poincar\'e-Snyder perspective. In particular, we will present
in section 3 results of the picture of quantization as $U(1)$ central
extension \cite{book}. Notice that unlike the quantum relativity, and
possibly the Snyder relativity obtained from the stabilization of the
Poincar\'e-Snyder relativity, the construction of the $G(1,3)$ symmetry
involves none of the quantum physics inspired deformations with quantum scale(s)
as deformation parameters. Hence, there is no reason at all to expect the
relativity to be in any sense quantum. It looks only like a different
perspective to look at classical physics on Minkowski space-time; and as such
should be liable to quantization. Results of section 3 actually lend further
justification to that {\em a posteriori}.
Of course the ultimate justification for the $G(1,3)$
approach from the theoretical side has to come from the relativities
beyond. Or there is the possibility of seeing experimental evidence for
Poincar\'e-Snyder relativity or Snyder relativity through careful studies of
the $\sigma$-dynamics and its physics interpretations.
The kind of physics picture we have in mind
behind our work and the earlier papers \cite{023,030} is not quite like any
of the familiar old pictures, and admittedly not yet fully conceptualized.
The research program is a very ambitious one, aiming at dynamics beyond any
existing framework. We find the need to take the most conservative strategy,
trying to commit to the minimal conceptual physics picture on any particular
new aspects arising from the formulation before one can be sure that it is the right
way to look at it. We try to learn from the mathematics and logic of the basic
formulation what it can offer. One will see that such a conservative strategy
can still bring out quite some interesting results presented in this letter and in
another accompanying long paper \cite{037}.
{\it Readers be very cautious.}
This letter can be read without detailed reading of the earlier papers, but what
motivates a particular definition or approach would then be difficult to appreciate.
With or without reference to Refs.\cite{023,030}, it is
important for readers to read first what we presented as it is, without assuming
a perspective from any other theory standard or less conventional. This is
especially true with the very similar looking structure from the line of
work on covariant formulation of Einstein relativistic dynamics\cite{HP,cS}.
We highlight here a few crucial points to bear in mind.
The momentum boosts are newly introduced transformations with physical meaning
still to be clarified (see \cite{023} for some discussions). That comes with a
modified or sort of generalized definition of energy-momentum, as $\sigma$
derivatives. The $\sigma$ parameter is of central importance, with physics
content still to be fully understood. It is definitely not a measure of time.
We will introduce the dynamics of the Poincar\'e-Snyder relativity as a formal
$\sigma$-evolution, more or
less duplicating mathematically the time dynamics of the Galilean case. In fact,
a main aim of the studies is to learn about how to think about the physics
of the $\sigma$ parameter. Comparing any expression here with similar ones
having essentially a time evolution perspective from the physics point of
view leads only to confusion. In fact, in the long paper to follow, we will
illustrate the right way to really look at the time evolution results our
$\sigma$-evolution formulation provides in a more fundamental setting --- that
of canonical classical mechanics. For instance, we have derived directly from
the formulation an interesting solution of particle-antiparticle annihilation,
which is considered to be a nontrivial success of our approach \cite{037}.
\section{Poincar\'e-Snyder Relativity}
The Poincar\'e-Snyder relativity is a relativity on 4D Minkowski space-time,
with $\sigma$ as an external parameter. It has otherwise a structure
mimicking that of the Galilean case on 3D space. The complete Galilei
group has rotations, translations, as well as velocity boosts as symmetry
transformations on 3D space together with an external time parameter.
Comparing the first two columns of table~1, we can see that the
implementation of Poincar\'e symmetry stabilization through the
invariant (quantum) energy-momentum scale requires considering
a new kind of momentum boosts, as independent from the velocity boosts.
The usual relation between momentum and velocity has to be relaxed
to hold only at the Einstein limit\cite{023}.
The Poincar\'e-Snyder relativity is then just the relativity with
Poincar\'e symmetry extended by such momentum boosts before the deformation,
{\it i.e.} at the unconstrained commuting limit. The relativity may hence
provide a window for us to understand the $\sigma$ parameter in the most
familiar context. Interestingly enough, we came to realize that
parameter(s) of quite close a nature to that of $\sigma$ had been used quite
a lot, to different extents, in the regime of (Einstein) relativistic
quantum mechanics in somewhat ambiguous ways \cite{HP,cS}.
Adopting the perspective of the Poincar\'e-Snyder relativity we
propose here actually helps to put many of such earlier attempts
on solid theoretical footings. Adopting the perspective, though, comes
at a great cost --- a new definition of energy-momentum as $\sigma$
derivatives, with a physics picture still to be fully understood \cite{023}.
Let us start with a clear illustration
of the structure of our Poincar\'e-Snyder relativity.
Following Ref.\cite{Gilmore} (see Fig.~10.6 for an illustration), we
can describe the Poincar\'e group and the Galilei group as sequential
contractions from $SO(1,4)$
\[ SO(1,4) \longrightarrow ISO(1,3) \longrightarrow G(3) \;. \]
The first step is the well known In\"on\"u-Wigner contraction, a reverse
of the symmetry stabilization. A further, similar, contraction gives the
Galilei group with commuting translations as well as commuting velocity boosts.
We are more interested in the other contraction sequence, from the symmetry
of our quantum relativity. That is the sequence
\[ SO(2,4) \longrightarrow ISO(1,4) \longrightarrow G(1,3) \;, \]
giving the newly named Snyder and Poincar\'e-Snyder relativities. In table~2, we
list the full set of generators for the symmetry groups of all the five relativities.
\footnote{
In a forthcoming paper, we will present more details of the mathematics and
plausible physics pictures of relativity symmetries from the contraction schemes
based on this kind of perspective \cite{041}.}
Wherever there is a change of notation from the
$J_{\ssc \mathcal M\mathcal N}$ generator(s) moving across a row,
a contraction is involved. $J_{\ssc \mathcal M\mathcal N}$ here denotes
the 15 generators of the {\boldmath\small $so(2,4)$} algebra, satisfying
\beq \label{so}
[J_{\ssc \!\mathcal R\mathcal S}, J_{\ssc\! \mathcal
M\mathcal N}] = -
( \eta_{\ssc \mathcal S\mathcal M} J_{\ssc
\mathcal R\mathcal N} - \eta_{\ssc \mathcal R\mathcal M} J_{\ssc
\mathcal S\mathcal N} + \eta_{\ssc \mathcal R\mathcal N} J_{\ssc
\mathcal S\mathcal M} -\eta_{\ssc \mathcal S\mathcal N} J_{\ssc
\mathcal R\mathcal M}) \;,
\eeq
where again $\eta_{\ssc \mathcal M\mathcal N} =( -1, +1,
+1, +1, +1, -1)$ with the indices going from $0$ to $5$; we use also
$\eta_{\ssc A\!B}$ to denote the $0$ to $4$ part; other
indices follow the common notation. Note that the
$J_{\mu\nu}$'s are the (subset of) Lorentz transformation generators, etc.
All $P_{\ssc A}$'s denote (generators for) the coordinate translations on
the corresponding arena for the linear realizations
--- $M^5$, the Minkowski space-time $M^4$, and the 3D space. The $K_i$'s are
Galilean velocity boosts, and $K_\mu^{\prime}$'s the new commuting momentum boosts.
We have the standard structure
\beq
[J_{\ssc \!A\!B}, P_{\ssc C}] =
(\eta_{\ssc B\!C} P_{\ssc A}
- \eta_{\ssc A\!C} P_{\ssc B}) \;,
\eeq
and
\beq
[J_{\mu\nu}, K_\lambda^{(\prime)}] =
(\eta_{\nu\lambda} K_\mu^{(\prime)}
- \eta_{\mu\lambda} K_\nu^{(\prime)} ) \;,
\eeq
where the latter applies to the two different types of boosts $K_\mu^{\prime}$
and $K_i$'s. Translations and the boosts (not including the so-called Lorentz
boosts which are space-time rotations) always form a commuting set. The external
time evolution $H$ commutes with all others with the only exception given by
\beq
[K_i, H] =
P_i \;.
\eeq
The latter is actually the same commutation relation as that of the generators
$J_{0i}$ and $P_{\ssc 0}$ before the {$ISO(1,3) \longrightarrow G(3)$}
symmetry contraction. In fact, no commutation relation between time translation
and the other generators is changed in the contraction. We choose to use
$H$ instead of $P_{\ssc 0}$ to denote time translation for the Galilean case
only to highlight time being a parameter external to the geometric realization arena
of 3D space. Similarly the external $\sigma$-translation $H^{\prime}$ has the only
non-zero commutators
\beq
[K_\mu^{\prime}, H^{\prime}] =
P_\mu \;,
\eeq
equal to that of the corresponding ones between $J_{{\ssc 4}\mu}$ and $P_{\ssc 4}$.
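As a concrete check (ours, not part of the original derivation), the commutation relation (\ref{so}) can be verified numerically in the defining $6\times 6$ representation; the overall sign of the matrix generators below is a convention chosen precisely so that the commutator matches Eq.(\ref{so}).

```python
import numpy as np
from itertools import product

# eta_{MN} = diag(-1,+1,+1,+1,+1,-1), indices running from 0 to 5
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0, -1.0])

def J(M, N):
    # defining 6x6 representation: (J_{MN})^A_B = eta_{MB} d^A_N - eta_{NB} d^A_M,
    # the sign fixed so that the commutator below reproduces Eq. (so)
    m = np.zeros((6, 6))
    m[N, :] += eta[M, :]
    m[M, :] -= eta[N, :]
    return m

# [J_{RS}, J_{MN}] = -(eta_{SM} J_{RN} - eta_{RM} J_{SN} + eta_{RN} J_{SM} - eta_{SN} J_{RM})
for R, S, M, N in product(range(6), repeat=4):
    lhs = J(R, S) @ J(M, N) - J(M, N) @ J(R, S)
    rhs = -(eta[S, M] * J(R, N) - eta[R, M] * J(S, N)
            + eta[R, N] * J(S, M) - eta[S, N] * J(R, M))
    assert np.allclose(lhs, rhs)
print("so(2,4) commutation relations of Eq. (so) verified")
```

The same loop, restricted to the appropriate index ranges, also exhibits the $so(1,4)$ and Lorentz subalgebras.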
Note that the translations for Einstein and Galilean relativities are listed
in the rows of the $J_{{\ssc 4}\mu}$'s and $K_\mu^{\prime}$'s for the other
three cases, rather than with the other $P_\mu$ translations. That is done to
emphasize that all the 10 generators of the Poincar\'e or Galilei group,
like the case of the $J_{\mu\nu}$ and $K_\mu^{\prime}$ subset for the $G(1,3)$
group of the Poincar\'e-Snyder relativity, can be obtained through proper contractions
of an $SO(1,4)$ symmetry. This is more in correspondence with the stabilization
sequence presentation in our earlier publications \cite{023,030}, in which we made
no clear distinction between, nor simultaneous treatment of, boosts and translations.
The $J_{\mu\nu}$ and $K_\mu^{\prime}$ set indeed gives an algebra isomorphic to
that of the $J_{\mu\nu}$ and $P_\mu$ set. However, one can also
keep all the $P_i$'s and $P_{\ssc 0}$ in the same role. That corresponds to
seeing the last two groups as from the equally valid contraction sequence
\cite{Gilmore}
\[ SO(2,3) \longrightarrow ISO(1,3) \longrightarrow G(3) \;, \]
perhaps more adapted to tracing back their relations to the full $SO(2,4)$
symmetry. We are interested here mostly in illustrating the structure of the
$G(1,3)$ symmetry for the Poincar\'e-Snyder relativity.
As suggested by the notation, the $G(1,3)$ symmetry has a very similar
structure to that of the Galilean $G(3)$, hence similar mathematical
properties. The latter may imply similar properties, at the level of
mathematical formulations, when applied to describe physics.
Two main features are considered specially interesting.
Firstly, taking away only $H$ from the set of generators of $G(3)$,
the rest generates a subgroup, likewise for taking away $H^{\prime}$
in the case of $G(1,3)$. This is not the case for the set of $ISO(1,3)$
generators with $P_{\ssc 0}$, for example. Particle dynamics under
Poincar\'e symmetry has a no-interaction theorem \cite{no-int,Ste}. The latter
can be interpreted as a consequence of different subgroup structure,
as compared to that of the $G(3)$ symmetry. All generators besides the Hamiltonian
$H$ for the symmetry stay as kinematical ones, which have to generate a subgroup.
The generators besides the Hamiltonian fail to do the same for Poincar\'e symmetry,
leaving rather the three admissible forms of Hamiltonian dynamics as noted by
Dirac\cite{D} (see also Ref.\cite{Ste}) as the alternatives. The generators besides
the $H^{\prime}$ Hamiltonian of the $G(1,3)$ symmetry can be taken as all being
kinematical. Using $G(1,3)$ symmetry to describe `relativistic dynamics', or rather
the dynamics of $\sigma$-evolution on Minkowski space-time, would admit direct
description of interactions as in the Galilean case. It will be interesting to see
if we can learn something about `relativistic dynamics' from such an
approach (see Ref.\cite{037}). Next, we turn to a feature that we want
to focus on here. The $G(1,3)$ group, like $G(3)$, admits a non-trivial
$U(1)$ central extension. Projective group representations required
to describe quantum mechanics are simply unitary representations
of the $U(1)$ central extension \cite{book}. Hence, the $G(1,3)$ may be
a better candidate than the $ISO(1,3)$ for the description of
`relativistic quantum mechanics' as a quantization of `relativistic
mechanics'.
\section{\boldmath Quantization as $U(1)$ central extension}
Aldaya and de~Azc\'arraga \cite{gq} presented a particularly nice approach to
geometric quantization in which the quantum dynamical description of the system
can be obtained with the symmetry group as the basic starting point (see also
Ref.\cite{book}). The approach looks especially relevant to our case in
which we have a new relativity symmetry in search of a clear understanding
of the physics involved. In fact, while the approach gives an elegant
presentation for the quantization of the Galilean system, its application
to the case of Einstein relativity is less than equally appealing. For
the former case, the group to be considered is a $U(1)$ central extension
of the symmetry for the corresponding classical system --- $G(3)$.
The essentially unique nontrivial central extension is
depicted by the modified commutator $[K_i, P_j]= m \delta_{ij} \Xi$, where
$\Xi$ is a central charge ($m$ the particle mass). The $ISO(1,3)$ symmetry,
however, admits no such nontrivial central extension (for an explicit
discussion on admissible central charges for both cases, see Ref.\cite{Ste}).
It is easy to see from our discussion above of the $G(1,3)$ symmetry for the
Poincar\'e-Snyder case that it has a structure mostly parallel to that
of the Galilean $G(3)$. Hence, we should have the same nontrivial
$U(1)$ central extension available for the implementation of such a
quantization scheme. Indeed, when we set out to perform the analysis, we
realized that the work, at least most of the mathematics, has actually
been done \cite{pkg} under a different premise. Confronted with the
difficulty on applying the elegant quantization scheme to (Einstein)
relativistic dynamics, Aldaya and de~Azc\'arraga chose to put the
Poincar\'e symmetry into a mathematical framework that makes the scheme
applicable --- essentially taking it to $G(1,3)$. They basically considered
promoting the proper time to an `absolute time' parameter for the formulation.
The latter was used more as a mathematical trick, with any independent
physics meaning not explicitly addressed. The physics results are discussed
only after a symmetry reduction back to the
Einstein setting has been implemented (see also Ref.\cite{ap}).
We choose here to follow mostly the approach of Ref.\cite{gq} and
present first the quantization results in the language of our Poincar\'e-Snyder
relativity formulation. After that we will discuss the very important difference
in physics premise and interpretation we introduce here. We discuss why the
Poincar\'e-Snyder relativity perspective is considered to provide a plausibly more
interesting framework for the bold attempt at the group quantization
formulation of the `quantum relativistic system'. Our approach may
also provide an interesting way to avoid the many `uncomfortable' features
well appreciated in the usual (Einstein) relativistic quantum mechanics, which
otherwise need to be resolved in the quantum field theory framework.
The standard action of $G(1,3)$ on the Minkowski spacetime ($x^\mu$)
with the extra, external, parameter $\sigma$ is given by
\beqa
&& x'^\mu= {\Lambda^{\mu}}_{\!\nu} x^\nu + p^\mu \sigma + A^\mu \;,
\nonumber \\
&& \sigma' = \sigma + b \;.
\eeqa
An element of our extended $G(1,3)$ group may be written as
$g=(b, A^{\mu}, p^{\mu}, {\Lambda^{\mu}}_{\!\nu}, e^{i\theta})$,
with group product rule given by
\beqa
&& A''^\mu= {\Lambda'^{\mu}}_{\!\nu} A^\nu + p'^\mu b + A'^\mu \;,
\qquad
b'' = b' + b \;,
\nonumber \\
&& p''^\mu = {\Lambda'^{\mu}}_{\!\nu} p^\nu + p'^\mu \;,
\qquad\qquad
{\Lambda''^{\mu}}_{\!\nu} ={\Lambda'^{\mu}}_{\!\rho}{\Lambda^{\rho}}_{\!\nu}\;,
\eeqa
and the nontrivial $U(1)$ extension given by
\beq
\theta''=\theta' +\theta + z \left[A'_\mu {{\Lambda'^{\mu}}_{\!\nu}} p^\nu
+ b(p'^\mu {{\Lambda'_{\mu}}^{\!\nu}} p_\nu
+\frac{1}{2} p'^\mu p'_\mu)\right] \;.
\eeq
The last term is the cocycle, the exact choice of which is
arbitrary up to a coboundary \cite{book}; $z$ corresponds to the value
of the central charge and is taken as an arbitrary constant at this point.
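As a sanity check (ours, not from the original analysis), one can verify numerically that the phase composition rule above satisfies the 2-cocycle (associativity) condition; for simplicity we restrict to the subgroup with ${\Lambda^{\mu}}_{\!\nu}=\delta^{\mu}_{\nu}$ and set $z=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda u, v: u @ eta @ v            # Minkowski contraction

def cocycle(g_left, g_right, z=1.0):
    # xi(g', g) = z[A'.p + b(p'.p + p'.p'/2)], the theta'' rule with Lambda = 1;
    # the left factor plays the role of the primed element in the text
    b1, A1, p1 = g_left
    b2, A2, p2 = g_right
    return z * (dot(A1, p2) + b2 * (dot(p1, p2) + 0.5 * dot(p1, p1)))

def mult(g_left, g_right):
    # group law with Lambda = 1: A'' = A + p' b + A', b'' = b' + b, p'' = p' + p
    b1, A1, p1 = g_left
    b2, A2, p2 = g_right
    return (b1 + b2, A2 + p1 * b2 + A1, p1 + p2)

def rand_g():
    return (rng.standard_normal(), rng.standard_normal(4), rng.standard_normal(4))

# 2-cocycle condition: xi(g1,g2) + xi(g1*g2, g3) = xi(g2,g3) + xi(g1, g2*g3)
for _ in range(100):
    g1, g2, g3 = rand_g(), rand_g(), rand_g()
    lhs = cocycle(g1, g2) + cocycle(mult(g1, g2), g3)
    rhs = cocycle(g2, g3) + cocycle(g1, mult(g2, g3))
    assert np.isclose(lhs, rhs)
print("cocycle condition verified")
```

The condition is exactly what guarantees that the extended product rule remains associative.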
The right-invariant vector fields are given by
\beqa
&& {\bf X}^{\!\!\ssc R}_b = \frac{\partial}{\partial b} \;,
\qquad \qquad
{\bf X}^{\!\!\ssc R}_{A^\mu} = \frac{\partial}{\partial A^\mu}
+ z p_ \mu \; \frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta} \;,
\nonumber \\
&& {\bf X}^{\!\!\ssc R}_{p^\mu} = b\frac{\partial}{\partial A^\mu}
+ \frac{\partial}{\partial p^\mu} + z b p_\mu \;
\frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta} \;,
\nonumber \\
&& {\bf X}^{\!\!\ssc R}_{\omega^{\mu\nu}}
= {\bf \widetilde{X}}^{\!\!\ssc R}_{\omega^{\mu\nu}}
+ A_\nu \frac{\partial}{\partial A^\mu}- A_\mu \frac{\partial}{\partial A^\nu}
+ p_\nu\frac{\partial}{\partial p^\mu}- p_\mu\frac{\partial}{\partial p^\nu}\;,
\nonumber \\
&& {\bf X}^{\!\!\ssc R}_\zeta = \frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta} \;,
\eeqa
where we skip the details of ${\bf \widetilde{X}}^{\!\!\ssc R}_{\omega^{\mu\nu}}$, the invariant
vector field for the $SO(1,3)$ subgroup
[with $\Lambda(\omega) =e^{\frac{-i}{2}\omega^{\mu\nu} J_{\mu\nu}}$],
leaving it to be given explicitly in the appendix.
Note that $\zeta=\exp({\frac{i}{\hbar}\theta})$ with
$\frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta}= \frac{\partial}{\partial \theta}$
locally. The left-invariant vector fields are given by
\beqa
&& {\bf X}^{\!\!\ssc L}_b = \frac{\partial}{\partial b}
+ p^\mu \frac{\partial}{\partial A^\mu}
+\frac{z}{2}p^{\mu}p_{\mu} \; \frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta}\;,
\nonumber \\
&&
{\bf X}^{\!\!\ssc L}_{A^\mu} =
\frac{\partial}{\partial A^\nu}{{\Lambda^{\nu}}_{\!\mu}}\;,
\qquad \qquad
{\bf X}^{\!\!\ssc L}_{p^\mu} = \frac{\partial}{\partial p^\nu} {{\Lambda^{\nu}}_{\!\mu}}
+ z A_\nu {{\Lambda^{\nu}}_{\!\mu}} \;\frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta}\;,
\nonumber \\
&& {\bf X}^{\!\!\ssc L}_{\omega^{\mu\nu}}
= {\bf \widetilde{X}}^{\!\!\ssc L}_{\omega^{\mu\nu}} \;,
\qquad \qquad
{\bf X}^{\!\!\ssc L}_\zeta = \frac{i\zeta}{\hbar} \frac{\partial}{\partial \zeta} \;;
\eeqa
again, the explicit expression for the $SO(1,3)$ vector field
${\bf \widetilde{X}}^{\!\!\ssc L}_{\omega^{\mu\nu}}$ is left to the appendix.
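The listed left-invariant fields can be checked symbolically against the group law. The sketch below (ours, restricted to $\Lambda=1$ for brevity) recovers the components of ${\bf X}^{\!\!\ssc L}_b$, ${\bf X}^{\!\!\ssc L}_{A^\mu}$, and ${\bf X}^{\!\!\ssc L}_{p^\mu}$ by differentiating the product $g\cdot h(\epsilon)$ at $\epsilon=0$, with $\partial/\partial\theta$ standing in for $\frac{i\zeta}{\hbar}\frac{\partial}{\partial \zeta}$.

```python
import sympy as sp

z, eps = sp.symbols('z epsilon', real=True)
b, theta = sp.symbols('b theta', real=True)
A = sp.Matrix(sp.symbols('A0:4', real=True))
p = sp.Matrix(sp.symbols('p0:4', real=True))
eta = sp.diag(-1, 1, 1, 1)
mdot = lambda u, v: (u.T * eta * v)[0, 0]     # Minkowski contraction

def mult(gl, gr):
    # extended G(1,3) group law restricted to Lambda = 1, including the cocycle
    bl, Al, pl, thl = gl
    br, Ar, pr, thr = gr
    th = thl + thr + z * (mdot(Al, pr) + br * (mdot(pl, pr) + mdot(pl, pl) / 2))
    return (bl + br, Ar + pl * br + Al, pl + pr, th)

g = (b, A, p, theta)
zero = sp.zeros(4, 1)

def left_field(direction):
    # X^L = d/d eps [ g . h(eps) ] at eps = 0
    flow = mult(g, direction)
    comps = [flow[0], *flow[1], *flow[2], flow[3]]
    return [sp.diff(c, eps).subs(eps, 0) for c in comps]

# X^L_b: (1, p^mu, 0, z p^mu p_mu / 2), as in the text
Xb = left_field((eps, zero, zero, 0))
assert Xb[0] == 1 and Xb[1:5] == list(p)
assert sp.simplify(Xb[9] - z * mdot(p, p) / 2) == 0

# X^L_{A^0}: a pure d/dA^0 at Lambda = 1
XA0 = left_field((0, sp.Matrix([eps, 0, 0, 0]), zero, 0))
assert XA0[1] == 1 and XA0[9] == 0

# X^L_{p^0}: d/dp^0 + z A_0 d/dtheta, with A_0 = eta_{00} A^0 = -A^0
Xp0 = left_field((0, zero, sp.Matrix([eps, 0, 0, 0]), 0))
assert Xp0[5] == 1 and sp.simplify(Xp0[9] - z * (-A[0])) == 0
print("left-invariant vector fields at Lambda = 1 verified")
```

The right-invariant fields are obtained the same way by differentiating $h(\epsilon)\cdot g$ instead.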
We have the quantization form given by the left-invariant 1-form conjugate
to ${\bf X}^{\!\!\ssc L}_\zeta$, the vertical 1-form; explicitly
\beq
\Theta
= -z A^\nu{{\Lambda_{\nu}}^{\!\mu}} dp_\mu - \frac{z}{2}p^{\mu}p_{\mu} db
+ \frac{\hbar d\zeta}{i\zeta} \;.
\eeq
The characteristic module is defined through the conditions
$i_{\bf X}\Theta=0$ and $i_{\bf X}d\Theta=0$, characterizing the
differential system given by the vector field ${\bf X}^{\!\!\ssc L}_b$.
We have the equations of motion given by
\beqa
&& \frac{db}{d\sigma}=1 \;,
\qquad \qquad
\frac{dA^\mu}{d\sigma}=p^\mu \;,
\nonumber \\
&&
\frac{dp^\mu}{d\sigma}=0 \;,
\qquad \qquad
\frac{d{{\Lambda^{\mu}}_{\!\nu}} }{d\sigma}=0
\quad \left(\;\frac{d{{\omega}^{\mu\nu}} }{d\sigma}=0\;\right) \;,
\nonumber \\
&& \frac{d\zeta}{d\sigma}= \frac{z}{2}p^{\mu}p_{\mu} \frac{i\zeta}{\hbar} \;.
\eeqa
Identifying the integration parameter as $\sigma$ gives
\beqa
&& b=\sigma \;,
\qquad \qquad
A^\mu = p^\mu \sigma + K'^\nu {\Lambda_{\nu}}^{\!\mu}\;,
\qquad \qquad
p^\mu=P^\mu \;,
\nonumber \\
&& \zeta= \zeta_{o} \exp(\frac{iz}{2\hbar}p^{\mu}p_{\mu}\sigma)\;.
\eeqa
Naturally, $A^\mu$ is to be identified as $x^\mu$ giving $p^\mu$
as $\frac{dx^\mu}{d\sigma}$, showing consistency with our original introduction
of the momentum boosts (see table 1) as the extra symmetry transformations
to supplement $SO(1,3)$ and hence getting to $G(1,3)$. We have constants
of motion, $K'^\nu {\Lambda_{\nu}}^{\!\mu}$, $P^\mu$, and
$\zeta_{o}$ which parametrize the quantum manifold. Passing to the latter,
$\Theta$ takes the form
\beq
\Theta_{\!P} = - z K^\mu dP_\mu + \frac{\hbar d\zeta_o}{i\zeta_o} \;.
\eeq
The symplectic 2-form is given by $\omega =d \Theta$. Taking $z=1$,
corresponding to $H'=\frac{1}{2}p^{\mu}p_{\mu}$, which gives the right form
for the classical $\sigma$-Hamiltonian \cite{037}, and
$K'^\mu=A^\nu{{\Lambda_{\nu}}^{\!\mu}}$, we have
\beq
\omega=d\Theta_{\!P} = - dK'^{\mu}\wedge dp_\mu
\;.
\eeq
The expression suggests the identification of $(K'^\mu,b)$ as
particle `position' variables $(x^\mu,\sigma)$ and $H'$ as the
$\sigma$-Hamiltonian generating `evolution' in the absolute parameter $\sigma$.
The prequantum operator associated with a real function $f$ on
the classical phase space acting on wavefunction $\psi$ is given by
\beq
\hat{f} \psi \equiv - i\hbar \tilde{X}f\cdot \psi
=-i\hbar X_f \cdot \psi + [f- \Theta(X_f)] \psi \;,
\eeq
where $i_{X_f} \omega= -df$. In particular,
\beqa
&&\hat{K'}^\mu = i\hbar \frac{\partial}{\partial P_\mu} \;,
\qquad\qquad
\hat{P}_\mu= -i\hbar \frac{\partial}{\partial K'^\mu} + P_\mu\;,
\nonumber \\
&&\hat{\sigma}= i \hbar\frac{\partial}{\partial H'} \sigma \;,
\qquad\qquad
\hat{H'}= i \hbar \frac{\partial}{\partial \sigma} \;,
\eeqa
where an extra negative sign is adopted in $\hat{H}'$.
The operators $K'^\mu$ and $P_\mu$ can also be obtained from
${\bf X}^{\!\!\ssc R}_{A^\mu}$ and ${\bf X}^{\!\!\ssc R}_{p^\mu}$.
The full polarization subalgebra can be taken as spanned by
$\{{\bf X}^{\!\!\ssc L}_b,
{\bf X}^{\!\!\ssc L}_{A^\mu}, {\bf X}^{\!\!\ssc L}_{\omega^{\mu\nu}}\}$,
giving the momentum space wavefunction $\phi(P_\mu)$, {\it i.e.} the wavefunction
as dependent only on $P_\mu$ but not $K'^\mu$.
We have then simply $\hat{P}_\mu \phi(P_\mu)= {P}_\mu \phi(P_\mu)$.
${\bf X}^{\!\!\ssc L}_b$ generates the Euler-Lagrange equation
\beq \label{covS}
i \hbar\partial_\sigma \psi + \frac{\hbar^2}{2}\partial_\mu \partial^\mu \psi = 0
\eeq
for the Fourier transform $\psi$ of `momentum' space wavefunction $\phi$.
Note that $\hat{H}'$ and $\hat{P}_\mu$ constitute a complete set of commuting
observables for the configuration space wavefunction.
The equation expresses an operator form of
$H'=\frac{1}{2}p^{\mu}p_{\mu}$ with $\hat{P}_\mu$ reduced to just
$-i\hbar \frac{\partial}{\partial K'^\mu}$, {\it i.e.}
$-i\hbar \frac{\partial}{\partial x'^\mu}$. Eq.(\ref{covS}) is of the same form
as the so-called (Lorentz) covariant Schr\"odinger equation studied in the
literature\cite{cS}, except with the parameter $\sigma$ replacing the proper time
$\tau$ (or rather $\tau/m$). The equation, with again essentially the proper time as
evolution parameter, is also what is obtained in Ref.\cite{pkg}. One can see that the
rest mass, or $m^2/2$ to be exact, of an Einstein particle is just the $\hat{H}'$
eigenvalue. Without considering the spin degree of freedom, the usual (Einstein)
relativistic quantum mechanics corresponds to the $\sigma$ independent eigenvalue equation
for $\hat{H}'$, obtainable from Eq.(\ref{covS}) by separation of the $\sigma$ variable
from the $x^\mu$ variables. The eigenvalue equation is the Klein-Gordon equation.
Recall that for an Einstein particle, {\it i.e.} taking the Poincar\'e-Snyder free
particle to the Einstein relativistic limit, we have $\sigma \to \tau/m$ \cite{023,030}.
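As an illustration, one can check symbolically that a free plane wave with $H'=\frac{1}{2}p^{\mu}p_{\mu}$ solves Eq.(\ref{covS}); the snippet below is a sketch of ours, using the $(-,+,+,+)$ signature.

```python
import sympy as sp

hbar = sp.symbols('hbar', positive=True)
sigma = sp.symbols('sigma', real=True)
x = sp.symbols('x0:4', real=True)                  # x^mu
p = sp.symbols('p0:4', real=True)                  # p_mu (lower index)
eta = sp.diag(-1, 1, 1, 1)                         # signature (-,+,+,+)

p_dot_x = sum(p[m] * x[m] for m in range(4))       # p_mu x^mu
p_sq = sum(eta[m, n] * p[m] * p[n]
           for m in range(4) for n in range(4))    # p^mu p_mu
H = p_sq / 2                                       # sigma-Hamiltonian eigenvalue

# free plane wave, an eigenstate of H' with eigenvalue p^mu p_mu / 2
psi = sp.exp(sp.I * (p_dot_x - H * sigma) / hbar)

# left-hand side of Eq. (covS): i hbar d_sigma psi + (hbar^2/2) box psi
box_psi = sum(eta[m, n] * sp.diff(psi, x[m], x[n])
              for m in range(4) for n in range(4))
lhs = sp.I * hbar * sp.diff(psi, sigma) + hbar**2 / 2 * box_psi
assert sp.simplify(lhs) == 0
print("plane wave solves the covariant Schrodinger equation")
```

Restricting $p^{\mu}p_{\mu}$ to a fixed value turns the same statement into the Klein-Gordon eigenvalue equation mentioned above.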
\section{Discussions}
One can see from the above that Poincar\'e-Snyder relativity provides a
very nice mathematical framework to formulate the covariant quantum
mechanics except with the Lorentz invariant evolution parameter $\tau$
replaced by the truly independent variable $\sigma$ as suggested from the
quantum relativity framework. The introduction of an extra evolution
parameter in the beautiful quantization scheme of Ref.\cite{pkg} and the various
early discussions of the covariant Schr\"odinger equation before the 50's~\cite{cS}
as sort of a mathematical tool is now dictated by the quantum relativity
picture. It remains a challenge to interpret the $\sigma$ dependent mechanics
at both the quantum and the classical level beyond the case of an Einstein particle.
We emphasize again that $\sigma$ is not a kind of time parameter.
The key lesson from our perspective is that one has to go beyond thinking about
the `evolution' parameter as essentially time, which confines all earlier literature.
In a somewhat different background
setting, a first discussion of the physics of the $\sigma$ coordinate has been
given in Ref.\cite{023}. The most important point to note is that the framework
actually redefines energy-momentum through $p^\mu=\frac{dx^\mu}{d\sigma}$,
making it not equal to the Einstein form of $m \frac{dx^\mu}{d\tau}$.
In general, for the quantum relativity or the Poincar\'e-Snyder relativity,
particle rest mass becomes a reference frame dependent quantity. A momentum
boost transformation changes the value of $m$ as the magnitude of the
energy-momentum four-vector. In Poincar\'e-Snyder relativity, the vector
transforms by simple addition like the Galilean velocity. This is the new
and most fundamental feature offered by our framework. A related aspects is the lost
of the Einstein rest mass as an intrinsic or fundamental character of a particle.
Here, it is only the magnitude of the particle energy-momentum four-vector
which can be modified by interaction. The feature is illustrated to be useful,
or even necessary, in describing some interesting physics scenarios like
particle-antiparticle annihilation\cite{037}.
In Galilean relativity, the kinetic energy of a particle ($\propto v^2$) is
both reference frame dependent and interaction dependent hence time dependent.
Similarly, the (expectation) value of the $\sigma$-Hamiltonian is, in general,
$\sigma$ dependent. To put it another way, the $\sigma$ dependent covariant
Schr\"odinger equation is to be given by
\beq
i \hbar\partial_\sigma \psi - \hat{H}' \psi = 0
\eeq
where $\sigma$-Hamiltonian $\hat{H}'$ operator should be given by
$\frac{1}{2}\hat{P}^\mu\hat{P}_\mu + \hat{V}'$ with $V'$ depicting an `interaction
potential' under the $\sigma$-evolution. The value for the magnitude of the
energy-momentum four-vector would hence change with $\sigma$. Such a picture
is fully corroborated by a classical canonical Hamiltonian picture\cite{037}.
It is interesting to note that in various studies of the essentially
$\tau$-parametrized covariant Schr\"odinger equation there had been discussions
of the notion of mass indefiniteness~\cite{HP,cS}. Naively, Eq.(\ref{covS}) admits
mixtures of eigenstates of different $m^2$ values. Some authors actually went
so far as to absorb $m$ into the evolution parameter and made it $\tau/m$,
which is indeed close to our $\sigma$. An explicit physics
consideration of a statistical nature, for example, was offered by Hostler\cite{H}.
In our opinion, Feynman was the one who went beyond everybody, in his works on
quantum electrodynamics. Not only did he rewrite the Klein-Gordon equation
in the form of Eq.(\ref{covS}) with $u\equiv\tau/m$ in the place of $\sigma$ \cite{F},
he actually considered the case of $dt/du<0$ hence taking the `evolution' somewhat
beyond a physical time variable. That was actually behind the antiparticle picture
in Feynman diagrams used in the standard quantum field theory \cite{St}. There was
no indication, however, that the Feynman $u$ parameter was taken to have any
independent physical meaning. Our framework discussed above certainly asks for the
$\sigma$ parameter to be taken as a truly important physics parameter beyond the
$\tau/m$ limit. And one should take special caution against thinking about it
too much as a quantity analogous to any `time' variable. For example, Ref.\cite{030}
illustrates that it has a close connection to a scaling parameter in the full quantum
relativity. The challenge to fully appreciate the $\sigma$ variable is beyond
us here, but is surely a main target of the research program.
In this letter, we introduce the Poincar\'e-Snyder relativity and Snyder
relativity as relativities in between the well known Galilean and Einstein
cases and the quantum relativity --- the relativity for `quantum space-time'.
We illustrate, using the symmetry group geometric quantization framework, how
the Poincar\'e-Snyder relativity may be providing a stronger framework for the
description of the usual relativistic quantum mechanics, from the perspective
of which the formulation under Einstein relativity is sort of an incomplete
description. The extra `evolution' parameter $\sigma$ has actually been used
in various limiting forms by earlier authors. Our Poincar\'e-Snyder relativity
provides a formulation for thinking about the $\sigma$ variable in a more serious
manner, on a similar footing as the space and time variables. We will report
further on the physics of $\sigma$-evolutions in future publications.
\section{appendix : Invariant Vector Fields on SO(1,3) Group Manifold}
We give in this appendix some details of our results on
invariant vector fields on the $SO(1,3)$ group manifold.
Starting with a generic group element
$\Lambda(\omega) =e^{\frac{-i}{2}\omega^{\mu\nu} J_{\mu\nu}}$ in terms of
the generators $J_{\mu\nu}$ in the standard form, we rewrite it in terms
of two commuting sets of generators for separate $SU(2)$ groups as
$\Lambda(\omega) = e^{-i({\omega_+}^iJ^+_i)} e^{-i({\omega_-}^iJ^-_i)}$,
where $J^\pm_i=\frac{1}{2}\left( \frac{1}{2}{\epsilon_i}^{jk} J_{jk} \pm i J_{0i} \right)$
respecting $[J^\pm_i,J^\pm_j]=i{\epsilon_{ij}}^{k}J^\pm_k$, and
${\omega_\pm}^i= \frac{1}{2}{\epsilon^i}_{jk}\omega^{jk}\mp i \omega^{0i}
\equiv \theta^i \mp i \eta^i$. The group
product may be written as
\beq
\Lambda(\omega'')=\Lambda(\omega')\Lambda(\omega)
=e^{-i({\omega'_+}^iJ^+_i)} e^{-i({\omega_+}^iJ^+_i)} \,
e^{-i({\omega'_-}^iJ^-_i)} e^{-i({\omega_-}^iJ^-_i)} \;.
\eeq
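The decomposition into two commuting $su(2)$'s can be verified numerically in the defining representation; the sketch below is ours, with the sign convention for $J_{\mu\nu}$ fixed so that $[J^\pm_i,J^\pm_j]=i{\epsilon_{ij}}^{k}J^\pm_k$ as stated.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def J(mu, nu):
    # defining representation of so(1,3); the factor of -i is a convention
    # chosen so that the rotations satisfy [J_i, J_j] = i eps_{ijk} J_k
    m = np.zeros((4, 4), dtype=complex)
    m[mu, :] -= 1j * eta[nu, :]
    m[nu, :] += 1j * eta[mu, :]
    return m

def eps(i, j, k):                     # Levi-Civita symbol on {1,2,3}
    return ((i - j) * (j - k) * (k - i)) // 2

rot = {i: 0.5 * sum(eps(i, j, k) * J(j, k)
                    for j in (1, 2, 3) for k in (1, 2, 3)) for i in (1, 2, 3)}
Jp = {i: 0.5 * (rot[i] + 1j * J(0, i)) for i in (1, 2, 3)}   # J^+_i
Jm = {i: 0.5 * (rot[i] - 1j * J(0, i)) for i in (1, 2, 3)}   # J^-_i

comm = lambda a, b: a @ b - b @ a
for i in (1, 2, 3):
    for j in (1, 2, 3):
        su2 = sum(1j * eps(i, j, k) * Jp[k] for k in (1, 2, 3))
        assert np.allclose(comm(Jp[i], Jp[j]), su2)   # [J+_i, J+_j] = i eps J+_k
        assert np.allclose(comm(Jp[i], Jm[j]), 0)     # the two su(2)'s commute
print("su(2) x su(2) structure verified")
```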
The left invariant vector fields for the $SU(2)$ group
${\bf {X}}^{\!\!\ssc L}_{\omega_\pm^{i}}$'s can be computed directly. We list the
result here, dropping the $+$ or $-$ sign:
\beq \label{su2}
{\bf {X}}^{\!\!\ssc L}_{\omega^i}=\frac{|\omega|\cot({|\omega|}{/2})}{2}
\frac{\partial}{\partial \omega^i}
+\frac{2-|\omega|\cot(|\omega|/2)}{2|\omega|^2}\eta_{ij}\omega^j
\omega^k\frac{\partial}{\partial\omega^k}-\frac{1}{2}\eta^{kh}{\epsilon_i}_{jk}
\omega^j\frac{\partial}{\partial\omega^h} \;.
\eeq
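Expression (\ref{su2}) can be tested numerically against a finite-difference computation of the left-invariant field, i.e. by differentiating $\omega''$ in $e^{-i\omega\cdot\tau}\,e^{-i\epsilon\tau_i}=e^{-i\omega''\cdot\tau}$. The sketch below is ours, using $\tau_i=\sigma_i/2$ in the defining $SU(2)$ representation.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def U(w):
    # U(w) = exp(-i w.tau), tau_i = sigma_i / 2, in closed form
    t = np.linalg.norm(w)
    n = w / t
    ns = n[0] * sig[0] + n[1] * sig[1] + n[2] * sig[2]
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * ns

def log_u(u):
    # invert U(w): u = cos(t/2) I - i sin(t/2) n.sigma
    c = u.trace().real / 2
    sn = np.array([(0.5j * (u @ sig[i]).trace()).real for i in range(3)])
    t = 2 * np.arctan2(np.linalg.norm(sn), c)
    return t * sn / np.linalg.norm(sn)

def X_numeric(w, i, h=1e-6):
    # finite-difference left-invariant field: d/d eps of w'' in U(w) U(eps e_i)
    e = np.zeros(3)
    e[i] = 1.0
    return (log_u(U(w) @ U(h * e)) - log_u(U(w) @ U(-h * e))) / (2 * h)

def X_formula(w, i):
    # Eq. (su2), with eta_{ij} the Euclidean metric on the su(2) indices
    t = np.linalg.norm(w)
    wcot = t / np.tan(t / 2)                      # |w| cot(|w|/2)
    e = np.zeros(3)
    e[i] = 1.0
    return (wcot / 2) * e + (2 - wcot) / (2 * t**2) * w[i] * w - 0.5 * np.cross(e, w)

w = np.array([0.3, -0.7, 0.5])
for i in range(3):
    assert np.allclose(X_numeric(w, i), X_formula(w, i), atol=1e-5)
print("Eq. (su2) matches the finite-difference computation")
```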
We have then from the relations
${\bf {\widetilde{X}}}^{\!\!\ssc L}_{\theta^i}={\bf {X}}^{\!\!\ssc L}_{\omega_+^i}+{\bf {X}}^{\!\!\ssc L}_{\omega_-^i}$ and
${\bf {\widetilde{X}}}^{\!\!\ssc L}_{\eta^i}=-i\left({\bf {X}}^{\!\!\ssc L}_{\omega_+^i}-{\bf {X}}^{\!\!\ssc L}_{\omega_-^i}\right)$
\beqa
{\bf {\widetilde{X}}}^{\!\!\ssc L}_{\theta^{i}} &=&
\frac{A}{2}\frac{\partial}{\partial\theta^{i}} + B\frac{\partial}{\partial\eta^{i}}
+\frac{1}{2} (C\theta_i-D\eta_i) \left( \theta^k \frac{\partial}{\partial\theta^k}
+ \eta^k \frac{\partial}{\partial\eta^k} \right)
+\frac{1}{2} (C\eta_i+D\theta_i) \left( \theta^k \frac{\partial}{\partial\eta^k}
- \eta^k \frac{\partial}{\partial\theta^k} \right)
\nonumber \\ &&
-\frac{1}{2} {\epsilon_{ij}}^k \left( \theta^j\frac{\partial}{\partial\theta^k}
+\eta^j\frac{\partial}{\partial\eta^k} \right) \;,
\nonumber \\
{\bf {\widetilde{X}}}^{\!\!\ssc L}_{\eta^{i}} &=&
\frac{A}{2}\frac{\partial}{\partial\eta^{i}} - B\frac{\partial}{\partial\theta^{i}}
-\frac{1}{2} (C\eta_i+D\theta_i) \left( \theta^k \frac{\partial}{\partial\theta^k}
+ \eta^k \frac{\partial}{\partial\eta^k} \right)
+\frac{1}{2} (C\theta_i-D\eta_i) \left( \theta^k \frac{\partial}{\partial\eta^k}
- \eta^k \frac{\partial}{\partial\theta^k} \right)
\nonumber \\ &&
-\frac{1}{2} {\epsilon_{ij}}^{k} \left( \theta^j\frac{\partial}{\partial\eta^k}
- \eta^j \frac{\partial}{\partial\theta^k} \right) \;,
\eeqa
[we mark $SO(1,3)$ vector fields with
${\bf {\widetilde{X}}}^{\!\cdot\cdot}_{\cdot\cdot}$ instead of
just ${\bf {{X}}}^{\!\cdot\cdot}_{\cdot\cdot}$, following the main text], where
\beqa
A(\omega)&=&
\frac{\alpha\sin{\alpha} +\beta\sinh\beta}{\cosh\beta-\cos\alpha}\;,
\qquad\qquad\qquad
B(\omega) \;=\;
\frac{1}{2}\frac{\beta\sin{\alpha} -\alpha\sinh\beta}{\cosh\beta-\cos\alpha}\;,
\nonumber \\
C(\omega)&=&
\frac{2(\alpha^2-\beta^2) ( \cos\alpha- \cosh\beta)
+ (\alpha^2+\beta^2) (\alpha\sin\alpha-\beta\sinh\beta)}
{ (\alpha^2+\beta^2)^2 (\cos\alpha-\cosh\beta)} \;,
\nonumber \\
D(\omega)&=&
-\frac{4\alpha\beta(\cos\alpha-\cosh\beta)+(\alpha^2+\beta^2)(\beta\sin\alpha
+\alpha\sinh\beta)}
{\left(\alpha^2+\beta^2\right)^2(\cos\alpha-\cosh\beta)} \;,
\nonumber \\ &&
\alpha=\frac{1}{\sqrt{2}}\, \sqrt{ \frac{1}{2}\omega_{\mu\nu}\omega^{\mu\nu}
+\sqrt{ \left( \frac{1}{2} \omega_{\mu\nu} \omega^{\mu\nu} \right)^2
+\left( \frac{1}{4} \epsilon_{\mu\nu\rho\sigma} \omega^{\mu\nu} \omega^{\rho\sigma}
\right)^2 }} \;,
\nonumber \\ &&
\beta=\frac{1}{\sqrt{2}}\, \sqrt{ - \frac{1}{2}\omega_{\mu\nu}\omega^{\mu\nu}
+\sqrt{ \left( \frac{1}{2} \omega_{\mu\nu} \omega^{\mu\nu} \right)^2
+\left( \frac{1}{4} \epsilon_{\mu\nu\rho\sigma} \omega^{\mu\nu} \omega^{\rho\sigma}
\right)^2 }} \;.
\eeqa
The above expressions can be written in the recombined form as
\beqa
{\bf \widetilde{X}}^{\!\!\ssc L}_{\omega^{\mu\nu}} &=&
\frac{1}{2}A(\omega)\, \frac{\partial}{\partial\omega^{\mu\nu}}
-\frac{1}{2}B(\omega)\,\epsilon_{\mu\nu\rho\sigma} \frac{\partial}{\partial\omega_{\rho\sigma}}
+\frac{1}{2}C_{\mu\nu}(\omega)\omega^{\rho\sigma}\frac{\partial}{\partial\omega^{\rho\sigma}}
\nonumber \\ &&
-\frac{1}{8}\epsilon_{\mu\nu\tau\lambda}C^{\tau\lambda}(\omega)
\epsilon^{\alpha\beta\rho\sigma}
\omega_{\alpha\beta} \frac{\partial}{\partial\omega^{\rho\sigma}}
+\frac{1}{2}\omega^{\rho\sigma}
\left( \eta_{\mu\rho}\frac{\partial\quad}{\partial\omega^{\nu\sigma}}
+ \eta_{\nu\sigma}\frac{\partial\quad}{\partial\omega^{\mu\rho}} \right) \;,
\eeqa
where
\beqa
{C}_{\mu\nu}(\omega)&=&
\frac{1}{2}\left[C(\omega)\omega_{\mu\nu}
-\frac{1}{2}D(\omega)\epsilon_{\mu\nu\rho\sigma}\omega^{\rho\sigma}\right]\;.
\eeqa
\iffalse
\beqa
{\bf \widetilde{X}}^{\!\!\ssc L}_{\omega^{\mu\nu}} &=&
A(\omega)\, \frac{\partial}{\partial\omega^{\mu\nu}}
-\frac{1}{2}B(\omega)\, \eta_{\mu\lambda}\eta_{\nu\tau}
\epsilon^{\lambda\tau\rho\sigma} \frac{\partial}{\partial\omega^{\rho\sigma}}
+\frac{1}{2}\omega^{\rho\sigma}
\left( \eta_{\mu\rho}\frac{\partial\quad}{\partial\omega^{\nu\sigma}}
+ \eta_{\nu\sigma}\frac{\partial\quad}{\partial\omega^{\mu\rho}} \right)
\nonumber \\ &&
+\frac{1}{4}C(\omega) \left[
3\,\omega_{\mu\nu}\omega^{\rho\sigma}\frac{\partial}{\partial\omega^{\rho\sigma}}
+2\omega^{\rho\sigma} \omega_{\rho\sigma}\frac{\partial}{\partial\omega^{\mu\nu}}
+4\,\eta_{\rho\lambda} \omega^{\rho\sigma}\omega^{\lambda\tau}
\left( \eta_{\mu\tau}\frac{\partial}{\partial\omega^{\nu\sigma}}
-\eta_{\nu\tau}\frac{\partial}{\partial\omega^{\mu\sigma}} \right)\right]
\nonumber \\ &&
-\frac{1}{8}D(\omega) \left( 2\,\omega_{\mu\nu} \eta^{\rho\lambda} \eta^{\sigma\tau}
\epsilon_{\lambda\tau\iota\kappa}
+\epsilon_{\mu\nu\iota\kappa} \omega^{\rho\sigma} \right)
\omega^{\iota\kappa} \frac{\partial}{\partial\omega^{\rho\sigma}} \;.
\eeqa
\fi
Similarly, we have
\beqa
{\bf \widetilde{X}}^{\!\!\ssc R}_{\omega^{\mu\nu}} &=&
\frac{1}{2}A(\omega)\, \frac{\partial}{\partial\omega^{\mu\nu}}
-\frac{1}{2}B(\omega)\,\epsilon_{\mu\nu\rho\sigma} \frac{\partial}{\partial\omega_{\rho\sigma}}
+\frac{1}{2}C_{\mu\nu}(\omega)\omega^{\rho\sigma}\frac{\partial}{\partial\omega^{\rho\sigma}}
\nonumber \\ &&
-\frac{1}{8}\epsilon_{\mu\nu\tau\lambda}C^{\tau\lambda}(\omega)
\epsilon^{\alpha\beta\rho\sigma}
\omega_{\alpha\beta} \frac{\partial}{\partial\omega^{\rho\sigma}}
-\frac{1}{2}\omega^{\rho\sigma}
\left( \eta_{\mu\rho}\frac{\partial\quad}{\partial\omega^{\nu\sigma}}
+ \eta_{\nu\sigma}\frac{\partial\quad}{\partial\omega^{\mu\rho}} \right) \;.
\eeqa
The left- and right-invariant vector fields for the Lorentz group are available
in the literature, for example in Ref.\cite{ap}, typically not in the form
directly with $\omega^{\mu\nu}$ as coordinate parameters. Indeed, we do not even find
expression (\ref{su2}) in the literature. For instance, vector fields
are given in Ref.\cite{ap} with a prior splitting between the Lorentz
boosts and rotations, each in terms of parameters that constitute a 3D vector.
\footnote{The actual vectorial parameter for the rotation part, for example,
is essentially a sine function of $|\omega/2|$. In particular, the explicit
form as given there has an extra factor of 2 coming up in the commutator, which
is a result of a $1/2$ factor when each of the parameters is matched to
$\omega^{ij}$ in the infinitesimal limit effectively rescaling the generators.
}
We find the form as presented here more appealing,
and hope that it can be generalized to $SO(p,q)$. The vector fields
actually have little role to play in the quantization procedure as shown above. As for
the expressions of the other $G(1,3)$ invariant vector fields, we use the Lorentz
transformation matrix $\Lambda^\mu_{\!\nu}$ instead. We consider those expressions
to illustrate the physics more transparently.
\bigskip
\bigskip
\noindent{\em Acknowledgements :-\ }
Otto Kong would like to thank the Korea Institute for Advanced Study for
hospitality when quite a part of an early version of the work was performed.
This work is partially supported by the research Grant No.
96-2112-M-008-007-MY3 from the NSC of Taiwan.
\section{Introduction}
For every positive integer $N$, define the unimodular random matrix $U_N$ as the random square matrix of dimension $N$ whose entries are i.i.d. complex random variables uniformly distributed on $\mathbb{T}$, the unitary circle in $\mathbb{C}$. Denote by $U_N^*$ the adjoint matrix of $U_N$ and define the squared unimodular random matrix by
$$\rho_N := \frac{1}{N^2} U_NU_N^*. $$
The moments of the mean empirical spectral distribution of $\rho_N$, i.e. the set of values of the form $\mathbb{E}[\mathrm{tr}(\rho_N^k)]$, with $k$ a positive integer and $\mathrm{tr}$ denoting the normalized trace, were studied by Lakshminarayan, Puchala and Zyczkowski in \cite{Arul} in relation to problems in quantum information theory. These moments can be written as $N^{-2k-1}Q_k(N)$, where $Q_k(x)$ is a polynomial with integer coefficients of degree $k+1$. Here, we will calculate the injective traffic distribution of the unimodular ensemble and use it to derive an explicit formula for the coefficients of $Q_k(x)$ in terms of certain combinatorial numbers described in Section 3. In particular, this result disproves the following conjecture.
\begin{conjecture}
\label{conj}
(\cite{Arul}) For $k$ and $N$ positive integers, it holds that
$$
\mathbb{E}[\mathrm{tr}(\rho_N^k)] = N^{-2k-1} \sum_{j=2}^{k+1} (-1)^{k-j+1}f_{k-1,k-j+1} N^{j},$$
where $f_{k,j} := \frac{1}{k+1} {{2k+2}\choose{k-j}} {{k+j}\choose {j}}$, are the elements of the Borel triangle.
\end{conjecture}
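As a quick numerical sanity check (ours, not part of the original argument), one can sample $\rho_N$ directly. Since every entry of $U_N$ has modulus one, $(U_NU_N^*)_{ii}=N$, hence $\mathrm{tr}(\rho_N)=1/N$ for every realization, i.e. $Q_1(N)=N^2$. The function name below is a hypothetical helper:

```python
import numpy as np

def sample_rho(N, rng):
    # U_N has i.i.d. entries uniform on the unit circle T
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
    U = np.exp(1j * theta)
    # squared unimodular matrix rho_N = U_N U_N^* / N^2
    return U @ U.conj().T / N**2

rng = np.random.default_rng(0)
N = 5
rho = sample_rho(N, rng)
# first moment of the ESD: the normalized trace equals 1/N deterministically
first_moment = np.trace(rho).real / N
```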
We acknowledge that this research was motivated by a suggestion of the first author of \cite{Arul}, who communicated that the conjectured formula presented systematic deviations in simulations for values of $k$ beyond 5.
\section{The traffic distribution method}
The theory of traffic-free probability, introduced by Camille Male in \cite{Maletraffics}, was developed in the context of free probability and was motivated by the problem of studying the asymptotic freeness of permutation-invariant families of matrices.
Formally, we will analyse the traffic distribution of $\rho_N$. This method can simply be thought of as a generalization of the moment method. First we will need the notion of a $K$-graph operation, defined by C\'ebron, Dahlqvist and Male in \cite{Cebron-Dahlqvist-Male}. This concept was originally introduced by Mingo and Speicher in \cite{Mingo-Speicher-Graphs} as the notion of a \emph{graph of matrices}.
\begin{definition}
Let $K$ be a nonnegative integer. A $K$-graph operation is a directed connected graph (which may have loops and multiple edges) with $K$ ordered edges, and two distinguished vertices, which are named \emph{input} and \emph{output}, and will be denoted by $v_{\mathrm{in}}$ and $v_{\mathrm{out}}$ respectively. The input and the output may be the same.
\end{definition}
Intuitively, each $K$-graph will work as a template to define a multilinear operation that takes $K$ square matrices of the same size and returns another square matrix of the same dimension. Given a $K$-graph $g=(V, E, v_{\mathrm{in}}, v_{\mathrm{out}})$ and random matrices $A_1, \dots, A_K$ of dimension $N$, we will denote by $Z_g(A_1\otimes \cdots \otimes A_K)$ the resulting random matrix of substituting, for $r=1, \dots, K$, each random matrix $A_r$ in the $r$-th edge of $g$, and define it as
\begin{equation}
\label{defKop}
Z_g(A_1\otimes \cdots \otimes A_K) (i,j) := \sum_{\substack{\kappa : V \to [N]\\ \kappa(v_{\mathrm{in}})=j, \kappa(v_{\mathrm{out}})=i}} \prod_{r=1}^K A_r(\kappa(w_r), \kappa(v_r)),
\end{equation}
where the $r$-th edge of $g$ goes from the vertex $v_r$ to the vertex $w_r$.
For example, if $g$ has vertices $v_0, v_1, \dots, v_K$, with $v_{\mathrm{in}}=v_0$ and $v_{\mathrm{out}}= v_K$, and edges $(v_0, v_1),$ $\dots, (v_{K-1}, v_K)$, then $Z_g(A_1\otimes \cdots \otimes A_K)$ is the usual product of matrices $A_KA_{K-1} \cdots A_1$. The set of all $K$-graph operations, with $K$ running over all nonnegative integers, is denoted by $\mathcal{G}$.
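As an illustration (ours, not from the original), definition \pref{defKop} can be evaluated by brute force; \texttt{Z\_g} and its edge encoding below are hypothetical names:

```python
import numpy as np
from itertools import product

def Z_g(edges, n_vertices, v_in, v_out, matrices):
    # edges[r] = (v_r, w_r): the r-th edge goes from v_r to w_r and carries
    # the matrix matrices[r]; vertices are labelled 0, ..., n_vertices - 1
    N = matrices[0].shape[0]
    Z = np.zeros((N, N), dtype=complex)
    # sum over all labellings kappa; the constraints kappa(v_in) = j and
    # kappa(v_out) = i are imposed by accumulating into Z[kappa[v_out], kappa[v_in]]
    for kappa in product(range(N), repeat=n_vertices):
        term = 1.0 + 0.0j
        for (v, w), A in zip(edges, matrices):
            term *= A[kappa[w], kappa[v]]
        Z[kappa[v_out], kappa[v_in]] += term
    return Z
```

Evaluating the path graph $v_0\to v_1\to v_2$ recovers the usual matrix product $A_2A_1$, as in the example above.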
The traffic distribution of a family of random matrices $\textbf{A}= (A_i)_{i\in I}$, with $A_i$ of fixed dimension $d$, is the set of values of the form
$$\mathbb{E}[\mathrm{tr}(Z_g(A_{i_1}^{\varepsilon_1} \otimes \cdots \otimes A_{i_K}^{\varepsilon_K}))],$$
where $g$ runs over all elements in $\mathcal{G}$, and for each $K$-graph, the indices $i_1, \dots, i_K$ are not necessarily distinct elements of $I$, while the $\varepsilon_i$ are elements of $\{1, \ast\}$. Note that in particular, the traffic distribution contains all the information of the mixed moments of the family $\textbf{A}$.
Given a $K$-graph operation $g=(V, E, v_{\mathrm{in}}, v_{\mathrm{out}})$, denote by $G=(V',E')$ the directed graph obtained by identifying the input and the output of $g$, and forgetting the \emph{input} and \emph{output} labels. Then, from \pref{defKop} we can see that
$$\mathrm{tr}(Z_g(A_1\otimes \cdots \otimes A_K)) = \frac{1}{N} \sum_{\kappa: V' \to [N]} \prod_{r=1}^K A_r(\kappa(w_r'), \kappa(v_r')), $$
where the $v_r'$ and $w_r'$ are now vertices of $G$, and $(v_r', w_r')$ are the directed edges. Note that with this modification the sum on the right of the latter equation runs over all possible functions on the set of vertices. This observation allows us to restrict our analysis to directed graphs with ordered edges. For $G$ a directed connected graph, possibly with loops and multiple edges, with set of vertices $V$ and with $K$ ordered edges, $(v_1, w_1), \dots, (v_r, w_r)$, we denote
$$\tau[G(A_1, \dots, A_K)] :=\frac{1}{N} \mathbb{E} \left[ \sum_{\kappa: V \to [N]} \prod_{r=1}^K A_r (\kappa(w_r), \kappa(v_r)) \right].$$
The function $\tau$ is called the traffic state in analogy with the theory of non-commutative probability. Note that the traffic distribution of a family of random matrices $\textbf{A}$ can also be thought of as the set of values that $\tau$ takes when evaluated on all possible directed graphs in which elements of $\textbf{A}$ or $\textbf{A}^{\ast}$ are put on their edges in all possible ways. The injective version of the traffic state, introduced in \cite{Maletraffics}, is denoted by $\tau^0[\cdot]$ and is defined by
$$\tau^{0}[G(A_1, \dots, A_K)]:=\frac{1}{N} \mathbb{E} \left[ \sum_{\substack{\kappa :V \to [N]\\ \kappa \text{ injective}}} \prod_{r=1}^K A_r(\kappa(w_r), \kappa(v_r)) \right].$$
A direct computation proves the following relation between the traffic state and the injective traffic state.
\begin{lemma}
If $G=(V, E)$ is a directed graph with $K$ ordered edges and $A_1, \dots, A_K$ are $N\times N$ random matrices, then
$$\tau [G(A_1, \dots, A_K)] = \sum_{\pi \in \mathcal{P}(V)} \tau^0[G^{\pi}(A_1, \dots, A_K)],$$
where $\mathcal{P}(V)$ denotes the set of partitions of $V$ and for each $\pi \in \mathcal{P}(V)$, $G^\pi$ denotes the quotient graph induced by $\pi$, that is, $G^\pi$ is the graph obtained by taking $G$ and identifying all vertices that are in a same block in $\pi$, without erasing any edges.
\end{lemma}
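The lemma can be verified by brute force on small graphs; since $\tau[G]=\sum_{\pi}\tau^0[G^\pi]$ is just a regrouping of the sum over labellings by the partition each $\kappa$ induces, it holds realization by realization, so a single deterministic matrix suffices. The sketch below (our illustration; function names are ours) checks it on a directed triangle:

```python
import numpy as np
from itertools import product

def set_partitions(items):
    # enumerate all set partitions of `items` as lists of blocks
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def tau(edges, n_vertices, mats, injective=False):
    # traffic state (or its injective version) for one deterministic input
    N = mats[0].shape[0]
    total = 0.0
    for kappa in product(range(N), repeat=n_vertices):
        if injective and len(set(kappa)) < n_vertices:
            continue
        term = 1.0
        for (v, w), A in zip(edges, mats):
            term *= A[kappa[w], kappa[v]]
        total += term
    return total / N

def quotient(edges, n_vertices, part):
    # quotient graph G^pi: relabel each vertex by the index of its block
    block = {v: b for b, blk in enumerate(part) for v in blk}
    return [(block[v], block[w]) for v, w in edges], len(part)
```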
In particular, note that if $A$ is a random matrix, $k$ is a positive integer, and $C_k$ is the directed cycle of $k$ vertices, we have that
\begin{equation}
\label{decoftrace}
\mathbb{E}[\mathrm{tr}(A^k)] = \tau[C_k(A)]= \sum_{\pi \in \mathcal{P}(k)} \tau^0[C_k^{\pi}(A)],
\end{equation}
where $C_k(A)$ is a shorthand notation for $C_k(A, \dots, A)$.
In Section 2.3 of \cite{Cebron-Dahlqvist-Male}, it is noted that, using the M\"obius inversion formula, the traffic state can be retrieved from the injective traffic state. Hence, these two functionals possess essentially the same information; nevertheless, when working with random matrices, it is usually easier to compute the values of the injective version of the traffic state, which is known as finding the injective traffic distribution of the given family of random matrices.
\section{The formula}
Once we have equation \pref{decoftrace}, to find the desired formula it will be enough to study, for every $k$ and every $\pi \in \mathcal{P}(k)$, the value of $\tau^0[C_k^{\pi}(\rho_N)]$. To do this we will study the injective traffic distribution of the unimodular ensemble $U_N$. The latter is possible due to the fact that the mixed moments of the uniform probability measure on $\mathbb{T}$ have a nice formula. Explicitly, for $k$ and $l$ nonnegative integers,
\begin{equation}
\label{moments}
\frac{1}{2\pi}\int_{\mathbb{T}} z^{k}\, \overline{z}^{\,l}\, |dz| = \delta_{kl}.
\end{equation}
Let $G=(V,E)$ be a directed connected graph. To every edge $e\in E$ we will assign either $U_N$ or $U_N^*$. Fix a labeling of this sort and denote it by $G(U_N, U_N^*)$. Given $G(U_N, U_N^*)$, let $c:E\to \{\text{red},\text{blue}\}$, be the coloring of $G$ such that $c(e)=\text{red}$ if $e$ is labeled with $U_N$ and $c(e)=\text{blue}$ if $e$ is labeled with $U_N^*$. Let $E_1=\{e\in E: c(e)=\text{red}\}$ and $E_2=\{e\in E: c(e)=\text{blue}\}$. Denote by $\hat{G}(U_N, U_N^*)$ the resulting colored graph.
\begin{figure}[htbp]
\begin{center}
\psset{xunit=1.2cm,yunit=1.2cm,algebraic=true,dimen=middle,dotstyle=o,dotsize=5pt 0,linewidth=1.6pt,arrowsize=3pt 2,arrowinset=0.25}
\begin{pspicture*}(-1.150069508506187,1.515490504884702)(10.55807888747329,3.6520927006487005)
\pscircle[linewidth=2.pt](-0.5754069700607644,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](1.4245930299392353,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](3.424593029939236,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](2.4245930299392353,3.1485782891127525){0.1}
\psline[linewidth=1.2pt]{->}(-0.5754069700607644,2.248578289112753)(1.3645876831703316,2.247219736684062)
\psline[linewidth=1.2pt]{->}(1.4245930299392353,2.048578289112752)(-0.5134963481544288,2.048407911796803)
\psline[linewidth=1.2pt]{->}(1.4921040854464376,2.2223499461654685)(2.33789951264774,3.0647350238863167)
\psline[linewidth=1.2pt]{->}(3.3245930299392357,2.1485782891127525)(1.547553265520604,2.1493770914428523)
\parametricplot[linewidth=1.2pt]{-2.8174164875305885}{2.8076043480745936}{1.*0.3067301387810374*cos(t)+0.*0.3067301387810374*sin(t)+3.719196759830717|0.*0.3067301387810374*cos(t)+1.*0.3067301387810374*sin(t)+2.146354561704604}
\pscircle[linewidth=2.pt](5.224593029939234,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](7.224593029939236,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](9.224593029939237,2.1485782891127525){0.1}
\pscircle[linewidth=2.pt](8.224593029939236,3.1485782891127525){0.1}
\psline[linewidth=1.2pt,linestyle=dashed,linestyle=dashed,dash=3pt 3pt,linecolor=blue]{->}(5.224593029939234,2.248578289112753)(7.164587683170332,2.247219736684062)
\psline[linewidth=1.2pt,linecolor=red]{->}(7.224593029939236,2.048578289112752)(5.286503651845571,2.048407911796803)
\psline[linewidth=1.2pt,linecolor=red]{->}(7.292104085446439,2.222349946165467)(8.137899512647742,3.0647350238863167)
\psline[linewidth=1.2pt,linestyle=dashed,linestyle=dashed,dash=3pt 3pt,linecolor=blue]{->}(9.124593029939238,2.1485782891127525)(7.347553265520601,2.1493770914428523)
\parametricplot[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt,linecolor=blue]{-2.8174164875305885}{2.8076043480745936}{1.*0.3067301387810374*cos(t)+0.*0.3067301387810374*sin(t)+9.519196759830718|0.*0.3067301387810374*cos(t)+1.*0.3067301387810374*sin(t)+2.146354561704604}
\rput[tl](0.37145023696211354,2.680909884392338){$U_N^*$}
\rput[tl](2.3080524327261102,2.5622097624054487){$U_N^*$}
\rput[tl](4.007990337833158,2.519046081682944){$U_N^*$}
\rput[tl](0.3174956360589823,1.866183542772674){$U_N$}
\rput[tl](1.3908242173728795,2.883055649449873){$U_N$}
\psline[linewidth=1.2pt]{->}(3.5345462863189576,2.3912781838167176)(3.4299511277512162,2.2484346398764243)
\psline[linewidth=1.2pt,linecolor=blue]{->}(9.338315969687383,2.3940752063706405)(9.229951127751216,2.2484346398764115)
\end{pspicture*}
\end{center}
\caption{ A labeled graph with its respective colored graph (blue arrows are displayed with a dotted style).}
\end{figure}
Denote by $\tau^0[G(U_N, U_N^*)]$ the value of the injective traffic state evaluated on $G$ with the fixed choice of assignments of $U_N$ and $U_N^*$ to the edges. We have
\begin{equation}
\label{traffdistU}
\tau^0[G(U_N, U_N^*)]= \frac{1}{N} \sum_{\substack{\kappa :V \to [N]\\ \kappa \text{ injective}}} \mathbb{E}\left[ \prod_{(u,v)\in E_1} U_N(\kappa(v), \kappa(u)) \prod_{(r, s)\in E_2} \overline{U_N(\kappa(r), \kappa(s))} \right].
\end{equation}
Note that since the expectation of the product of independent random variables factorizes, by \pref{moments} each term in the sum on the right-hand side of \pref{traffdistU} is either 0 or 1. Now we will characterize those terms that do not vanish in terms of the structure of $G$ and the coloring $c$; for this we need the following definition.
\begin{definition} A connected directed graph colored in red and blue is called a \textbf{double directed colored graph (d.d.c.g.)}, if for every $u, v \in V$, not necessarily distinct, the number of red edges going from $u$ to $v$ is equal to the number of blue edges going from $v$ to $u$. We will denote the family of d.d.c.g's by $\mathcal{D}$.
\end{definition}
\begin{figure}[htbp]
\begin{center}
\newrgbcolor{qqzzqq}{0. 0.6 0.}
\newrgbcolor{qqqqcc}{0. 0. 0.8}
\psset{xunit=1.2cm,yunit=1.2cm,algebraic=true,dimen=middle,dotstyle=o,dotsize=5pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25}
\begin{pspicture*}(2.3558370121937435,-1.3615470970400518)(16.82816463602887,2.791946032670507)
\pscircle[linewidth=2.pt](-2.,2.4641016151377557){0.09836262875318949}
\pscircle[linewidth=2.pt,linecolor=qqzzqq](0.,2.4641016151377553){0.09670872187124512}
\pscircle[linewidth=2.pt](1.,0.7320508075688774){0.09759296348517955}
\pscircle[linewidth=2.pt,linecolor=qqzzqq](0.,-1.){0.1}
\pscircle[linewidth=2.pt](-2.,-1.){0.1}
\pscircle[linewidth=2.pt,linecolor=qqzzqq](-3.,0.732050807568879){0.1003207474366654}
\pscircle[linewidth=2.pt](4.927781356839965,2.5042230835599977){0.09836262875318856}
\pscircle[linewidth=2.pt](7.927781356839964,0.7721722759911196){0.09759296348518111}
\pscircle[linewidth=2.pt](4.927781356839964,-0.9598785315777578){0.1}
\pscircle[linewidth=2.pt](6.,0.7320508075688774){0.09912084741145608}
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(-1.90249676408703,2.4511275748365398)(-0.09612428055020428,2.453485609548297)
\psline[linewidth=1.2pt,linecolor=red]{->}(0.06988202916291382,2.397250445469278)(0.9383275722856385,0.8076874268854257)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(0.964312907900916,0.6412167942295391)(0.054255243755362725,-0.915997806427176)
\psline[linewidth=1.2pt,linecolor=red]{->}(-0.099630188238299,-1.0085921820046546)(-1.90057671059922,-1.010724249378336)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(-2.093597736422374,-0.9647939815286102)(-2.9599481060278943,0.6400720156899986)
\psline[linewidth=1.2pt,linecolor=red]{->}(-2.9792010327377914,0.8301918047468004)(-2.0566766644061754,2.3837090629514526)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(5.019697494226831,2.53924636757387)(6.06723914849081,0.8217124337338908)
\psline[linewidth=1.2pt,linecolor=red]{->}(5.904183041215201,0.7574290029498949)(4.873209882867254,2.391212478368995)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(4.844919183283536,-0.9038976402530898)(5.868198257539013,0.6938199575398962)
\psline[linewidth=1.2pt,linecolor=red]{->}(6.053035634463368,0.6483121434486915)(5.056371421970577,-0.9661164771977976)
\psline[linewidth=1.2pt,linecolor=red]{->}(6.06723914849081,0.8217124337338907)(7.827467491983496,0.8570923937436039)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt, linecolor=blue]{->}(7.931475806877554,0.6746492655743997)(6.053035634463368,0.6483121434486915)
\pscircle[linewidth=2.pt](10.,0.7){0.1}
\pscircle[linewidth=2.pt](12.,0.7){0.1}
\pscircle[linewidth=2.pt](14.,0.7){0.1}
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt,linecolor=qqqqcc]{->}(14.,0.8)(12.070186659442916,0.7983671480442277)
\psline[linewidth=1.2pt, linestyle=dashed,dash=3pt 3pt, linecolor=qqqqcc]{->}(14.,0.6)(12.070186659442916,0.6012721613954357)
\parametricplot[linewidth=1.2pt,linecolor=red]{0.7853981633974483}{2.356194490192345}{1.*1.4142135623730951*cos(t)+0.*1.4142135623730951*sin(t)+13.|0.*1.4142135623730951*cos(t)+1.*1.4142135623730951*sin(t)+-0.2}
\parametricplot[linewidth=1.2pt,linecolor=red]{3.9269908169872414}{5.497787143782138}{1.*1.4142135623730951*cos(t)+0.*1.4142135623730951*sin(t)+13.|0.*1.4142135623730951*cos(t)+1.*1.4142135623730951*sin(t)+1.6}
\psline[linecolor=red]{->}(13.941136238280428,0.8555863683258547)(14.,0.8)
\psline[linecolor=red]{->}(13.950430688777374,0.5527743768269571)(14.,0.6)
\psline[linewidth=1.2pt,linecolor=red]{->}(10.,0.8)(11.9090383708087,0.8013567315269511)
\psline[linewidth=1.2pt,linestyle=dashed,dash=3pt 3pt,linecolor=qqqqcc]{->}(10.,0.6)(11.91357583949248,0.6017081094406034)
\psline[linecolor=red]{->}(13.83224735307488,0.9434003425309341)(14.,0.8)
\psline[linecolor=red]{->}(13.789707259154389,0.42681525545255083)(14.,0.6)
\rput[tl](9.819144979642351,1.0939794296359572){$u$}
\rput[tl](11.809360437628646,1.0939794296359572){$v$}
\end{pspicture*}
\caption{In the previous figure blue arrows are displayed with a dotted style. The graph on the left is a d.d.c.g. On the other hand, the graph on the right is not a d.d.c.g. because there is one red edge going from $u$ to $v$ but no blue edge going from $v$ to $u$.}
\end{center}
\end{figure}
\begin{lemma} Let $\kappa: V \to [N]$ be injective. Then
\begin{equation}
\label{eqexpectation}
\mathbb{E}\left[ \prod_{(u,v)\in E_1} U_N(\kappa(v), \kappa(u)) \prod_{(r, s)\in E_2} \overline{U_N(\kappa(r), \kappa(s))} \right]\neq 0,
\end{equation}
if and only if the resulting colored graph $\hat{G}(U_N, U_N^*)=(V, E, c)$ is in $\mathcal{D}.$
\end{lemma}
\begin{proof}
Since $\kappa$ is injective, for any $\{u, v\}$ and $\{r,s\}$ different sets of vertices, the sets $\{\kappa(u),\kappa(v)\}$ and $\{\kappa(r), \kappa(s)\}$ are different. In this case, given that the entries of $U_N$ are independent, $U_N^{\varepsilon_1}(\kappa(u),\kappa(v))$ and $U_N^{\varepsilon_2}(\kappa(r), \kappa(s))$ are independent random variables, for any $\varepsilon_1, \varepsilon_2 \in \{1, \ast\}$.
Hence, edges with different endpoints contribute with independent random variables in the product in \pref{eqexpectation}. Moreover, since $U_N(i,j)$ and $U_N^*(k,l)$ are only dependent if $i=l$ and $j=k$, we will have that edges with the same endpoints contribute with dependent random variables only if they go in the same direction and are of the same color, or if they go in opposite directions and are of different colors. Hence, the left side of \pref{eqexpectation} factorizes in the following way
$$\prod_{u , v\in V}\mathbb{E}\left[ U_N(\kappa(v), \kappa(u))^{d_1(u,v)} \overline{U_N}(\kappa(u), \kappa(v))^{d_2(v,u)}\right], $$
where $d_1(u,v)$ denotes the number of red edges going from $u$ to $v$ and $d_2(v,u)$ the number of blue edges going from $v$ to $u$. By \pref{moments}, the latter product is different from zero only if for every $u$ and $v$, not necessarily distinct, it is satisfied that $d_1(u,v) = d_2(v,u)$, in which case the product equals $1$. Now note that the condition $d_1(u,v) = d_2(v,u)$ for every $u, v \in V$ translates into the condition $\hat{G}(U_N, U_N^*) \in \mathcal{D}$.
\end{proof}
Combining \pref{traffdistU} with the previous lemma one sees that
\begin{equation*}
\tau^0[G(U_N, U_N^*)]= \frac{1}{N} \sum_{\substack{\kappa :V \to [N]\\ \kappa \text{ injective}}} \mathbbm{1}_{\{ \hat{G}(U_N, U_N^*) \in \mathcal{D}\}}= \begin{cases} \frac{(N)_{|V|}}{N} & \text{if } \hat{G}(U_N, U_N^*) \in \mathcal{D}, \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
where, for $j$ a positive integer, $(N)_j:= N(N-1)\cdots(N-j+1)$ denotes the Pochhammer symbol.
Now note that if $C_k$ is the directed cycle with $k$ edges and $\rho_N$ is the squared unimodular matrix, then $\tau[C_k(\rho_N)] = N^{-2k} \tau[C_{2k}(U_N, U_N^*)]$, where the labeling $C_{2k}(U_N, U_N^*)$ alternates between $U_N$ and $U_N^*$, i.e. the resulting colored graph $\widehat{C_{2k}}(U_N, U_N^*)$ alternates between red and blue. From the characterization of the injective traffic distribution obtained before we get that
$$\tau[C_{2k}(U_N, U_N^*)] = \frac{1}{N} \sum_{\pi \in \mathcal{P}(2k) } \mathbbm{1}_{\{\widehat{C_{2k}^{\pi}}(U_N, U_N^*)\in \mathcal{D}\}} (N)_{|\pi|}.$$
Denote by $\mathcal{F}(2k, j)$ the number of partitions $\pi \in \mathcal{P}(2k)$ with $j$ blocks and such that $\widehat{C_{2k}^{\pi}}\in \mathcal{D}$. Since $C_{2k}$ has $2k$ edges, if $|\pi| > k+1$, then $\widehat{C_{2k}^{\pi}}$ cannot be a d.d.c.g., so $\mathcal{F}(2k, j) =0$ for all $j > k+1$. From what we have developed so far it follows that
\begin{equation}
\label{formulapochhammer}
\mathbb{E}[\mathrm{tr}(\rho_N^k)] = \tau[C_{2k}(U_N, U_N^*)] = N^{-2k-1} \sum_{j=1}^{k+1} \mathcal{F}(2k, j)(N)_j.
\end{equation}
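For small $k$ the numbers $\mathcal{F}(2k,j)$ can be obtained by direct enumeration. The sketch below is our own illustration (the function names and the encoding of the alternating cycle are ours, chosen to match the construction above): it lists the set partitions of the $2k$ vertices of $C_{2k}$ and keeps those whose quotient satisfies the d.d.c.g. condition.

```python
from collections import Counter

def set_partitions(items):
    # enumerate all set partitions of `items` as lists of blocks
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def F(two_k, j):
    # edge r of the cycle goes r -> r+1 (mod 2k); even-numbered edges carry
    # U_N (red), odd-numbered edges carry U_N^* (blue)
    edges = [(r, (r + 1) % two_k, r % 2 == 0) for r in range(two_k)]
    count = 0
    for part in set_partitions(list(range(two_k))):
        if len(part) != j:
            continue
        block = {v: b for b, blk in enumerate(part) for v in blk}
        # d.d.c.g. condition: # red edges u -> v equals # blue edges v -> u
        red = Counter((block[u], block[v]) for u, v, is_red in edges if is_red)
        blue = Counter((block[v], block[u]) for u, v, is_red in edges if not is_red)
        if red == blue:
            count += 1
    return count
```

This reproduces, for instance, $\mathcal{F}(4,1)=1$, $\mathcal{F}(4,2)=5$ and $\mathcal{F}(4,3)=2$, so that $N^{5}\,\mathbb{E}[\mathrm{tr}(\rho_N^2)]=(N)_1+5(N)_2+2(N)_3$.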
\subsection{The Pochhammer symbols and the change of basis}
The formula that we have provided for $N^{2k+1}\mathbb{E}[\mathrm{tr}(\rho_N^k)]$ is clearly a polynomial in $N$; nevertheless, it is not clear what its coefficients are, which, for example, makes it difficult to compare with the conjecture made in \cite{Arul}.
In traffic-free probability, the Pochhammer symbols appear frequently, so it is convenient to be able to go from the Pochhammer symbols to the monomials of the form $N^k$ and vice versa. More precisely, the family of polynomials $\{(x)_{j}\}_{j=0}^{\infty}$, where $(x)_0:=1$, forms a basis for the space of polynomials. Below we provide the formulas to change from the Pochhammer basis to the usual basis $\{x^j\}_{j=1}^\infty$ and to do the reverse process as well.
We will denote the $k$-th symmetric elementary polynomial in $n$ variables by $e_k(x_1, \dots, x_n)$, i.e.
$$e_k(x_1, \dots, x_n) := \sum_{1\leq j_1 < j_2 < \cdots < j_k\leq n} x_{j_1}\cdots x_{j_k},$$
when $1\leq k\leq n$, and define $e_0(x_1, \dots, x_n):=1$ and $e_k(x_1, \dots, x_n) = 0$ for $k > n$. Using the Vieta relations, for $k>0$ we get that
$$(x)_k = \sum_{j=1}^k (-1)^{k-j} e_{k-j}(1, \dots, k-1)x^j,$$
which yields the following result.
\bigskip
\begin{lemma}
\label{cambioPoch-Usual}
If $\sum_{j=1}^n a_j x^j = \sum_{j=1}^n b_j(x)_j$ then $a_j = \sum_{k=j}^n (-1)^{k-j}e_{k-j}(1, \dots, k-1) b_k$ for every $j=1, \dots, n$.
\end{lemma}
To change from the usual basis to the Pochhammer basis we will use the following well known combinatorial identity
$$N^k = \sum_{j=1}^k \stirling{k}{j} (N)_j,$$
where the numbers $\stirling{k}{j}$ are the Stirling numbers of the second kind, which denote the number of partitions with $j$ blocks of a set with $k$ elements. This equality can easily be proven by a double counting argument for every positive integer $N$. Since it is valid for an infinite number of values it also holds as an equality of polynomials, yielding the following result.
\bigskip
\begin{lemma}
\label{cambioUsual-Poch}
If $\sum_{j=1}^n a_j x^j = \sum_{j=1}^n b_j(x)_j$ then $b_j = \sum_{k=j}^n \stirling{k}{j}a_k$, for every $j=1, \dots, n$.
\end{lemma}
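Both lemmas translate directly into code. The sketch below is illustrative (the helper names are ours); coefficient vectors are stored as dictionaries keyed by degree:

```python
def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recurrence
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def elem_sym(k, xs):
    # elementary symmetric polynomial e_k evaluated at the numbers xs
    e = [1] + [0] * k
    for x in xs:
        for i in range(k, 0, -1):
            e[i] += x * e[i - 1]
    return e[k]

def poch_to_monomial(b, n):
    # Lemma (cambioPoch-Usual): a_j = sum_{k=j}^n (-1)^{k-j} e_{k-j}(1,...,k-1) b_k
    return {j: sum((-1) ** (k - j) * elem_sym(k - j, range(1, k)) * b.get(k, 0)
                   for k in range(j, n + 1)) for j in range(1, n + 1)}

def monomial_to_poch(a, n):
    # Lemma (cambioUsual-Poch): b_j = sum_{k=j}^n S(k, j) a_k
    return {j: sum(stirling2(k, j) * a.get(k, 0) for k in range(j, n + 1))
            for j in range(1, n + 1)}
```

For instance, $(x)_3 = x^3 - 3x^2 + 2x$ is recovered from $b=\{3:1\}$, and the two maps are mutually inverse.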
\bigskip
If we assume that the Conjecture \ref{conj} is true, we get the following equality for every positive integers $k$ and $N$
$$\sum_{j=1}^{k+1} \mathcal{F}(2k, j)(N)_j = \sum_{j=2}^{k+1} (-1)^{k-j+1}f_{k-1,k-j+1} N^{j}, $$
using Lemma \ref{cambioUsual-Poch} we get that the latter is equivalent to
\begin{equation}
\label{conjforG}
\mathcal{F}(2k, j) = \sum_{r=j}^{k+1} (-1)^{k-r+1} \stirling{r}{j}f_{k-1, k-r+1}.
\end{equation}
The above formula turns out to be true for all $j$ when $k\leq 5$. Making use of a computer to do the calculations, for $k=6$ and $j=3$ we obtained that the right-hand side is 10988, while $\mathcal{F}(12, 3)$ is actually 11000. This disproves the conjecture.
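The failing case is cheap to reproduce in exact integer arithmetic. In the sketch below (our code; \texttt{rhs} and \texttt{f} are hypothetical helper names), the guard for $k<j$ is needed because the Borel-triangle entry vanishes there:

```python
from math import comb

def f(k, j):
    # Borel triangle entries f_{k,j} from Conjecture 1
    if k - j < 0:
        return 0
    return comb(2 * k + 2, k - j) * comb(k + j, j) // (k + 1)

def stirling2(n, k):
    # Stirling numbers of the second kind
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def rhs(k, j):
    # right-hand side of the displayed formula for F(2k, j)
    return sum((-1) ** (k - r + 1) * stirling2(r, j) * f(k - 1, k - r + 1)
               for r in range(j, k + 2))
```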
On the other hand, we can use Lemma \ref{cambioPoch-Usual} to obtain a formula in the usual basis, yielding the following theorem.
\begin{theorem}
\label{theorem}
For every $k$ and $N$ positive integers we have that $\mathbb{E}[\mathrm{tr}(\rho_N^k)] = N^{-2k-1} \sum_{j=1}^{k+1} a_j N^j$, where
\begin{equation}
\label{formulafinal}
a_j= \sum_{r=j}^{k+1} (-1)^{r-j}e_{r-j}(1, \dots, r-1) \mathcal{F}(2k, r).
\end{equation}
\end{theorem}
\section{Final remarks}
We used a computer to calculate the values of $\mathcal{F}(2k, j)$. Since the conjecture was based on the first four values of $k$, it is not a surprise that equation \pref{conjforG} holds for all $j$ when $k\leq 4$. However, it was surprising that it also turned out to be true when $k=5$, but not in general for greater values of $k$. It is also worth remarking how small the error of the proposed formula for $\mathcal{F}(2k, j)$ is when $k$ is small, and that in fact it is always exact for $\mathcal{F}(2k, 1), \mathcal{F}(2k, 2)$ and $\mathcal{F}(2k, k+1)$ \footnote{It is trivial that $\mathcal{F}(2k,1)=1$; on the other hand, using the series expansion definition of $f_{k,j}$ given in \cite{Francisco} (Definition 2.2), it can be shown directly that the right-hand side of \pref{conjforG} is 1 when $j=1$, showing that formula \pref{conjforG} holds for $j=1$. With less ease, but with elementary combinatorial methods, it can be shown that $\mathcal{F}(2k, k+1) = \frac{1}{k+1}{{2k}\choose{k}}$ and that $\mathcal{F}(2k, 2) = {{2k}\choose{k}}-1$. The latter equation, together with equation \pref{formulapochhammer}, yields that when $N=2$ the $k$-th moment of the mean ESD of the squared unimodular matrix is $2^{-2k}{{2k}\choose{k}}$, confirming that in this case the ESD is the arcsine distribution supported on $[0,1]$.}.
In this article we append two tables: one presents the values of $\mathcal{F}(2k, j)$ for all $1\leq k \leq 11$ and all $1\leq j \leq k+1$, while the other presents the respective values of the right-hand side of \pref{conjforG}. We believe that the right-hand side of \pref{conjforG} is actually counting a proper subfamily of the partitions whose associated quotient is a d.d.c.g.; this is supported by the fact that the proposed formula has always been observed to be less than or equal to $\mathcal{F}(2k, j)$. When there are few blocks, say one or two, or when there are many, say $k$ or $k+1$, the partitions whose associated quotient is a d.d.c.g. have a very ordered structure. On the other hand, when there are more than two blocks, but not too many, and there are enough vertices, i.e. $k$ is big, the partitions that satisfy the wanted property have no particular structure at all. We think that it is in this case that ``rare'' partitions appear which are not counted by the proposed formula.
Below we show the explicit coefficients in the formula \pref{formulafinal} when $k=6$ and $k=7$.
$$N^{13}\mathbb{E}[\mathrm{tr}(\rho_N^6)] = 132 N^{7}-495 N^{6}+772 N^5- 624 N^4 +262 N^3 - 46 N^2.$$
$$N^{15}\mathbb{E}[\mathrm{tr}(\rho_N^7)]= 429 N^8 - 2002 N^7 + 4039 N^6 -4550 N^5 + 3073 N^4 -1204 N^3 + 216 N^2.$$
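As a consistency check of these coefficients (our own, using Lemma \ref{cambioUsual-Poch}), converting the $k=6$ polynomial back to the Pochhammer basis must recover $\mathcal{F}(12,1)=1$, $\mathcal{F}(12,2)={{12}\choose{6}}-1=923$, $\mathcal{F}(12,3)=11000$ and $\mathcal{F}(12,7)=\frac{1}{7}{{12}\choose{6}}=132$:

```python
from math import comb

def stirling2(n, k):
    # Stirling numbers of the second kind
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# coefficients a_j of N^j in N^13 E[tr(rho_N^6)], read off the display above
a = {1: 0, 2: -46, 3: 262, 4: -624, 5: 772, 6: -495, 7: 132}
# Lemma (cambioUsual-Poch): F(12, j) = sum_{m=j}^{7} S(m, j) a_m
F12 = {j: sum(stirling2(m, j) * a[m] for m in range(j, 8)) for j in range(1, 8)}
```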
Finally, we warn the reader that several attempts were made, using the On-line Encyclopedia of Integer Sequences, to find explicit formulas for the numbers $\mathcal{F}(2k, j)$ with $j$ different from 1, 2, $k$ and $k+1$, and to find formulas for the coefficients given in Theorem \ref{theorem} for the monomials $N^j$ with $j$ different from 0, 1, $k$ and $k+1$. It was not even possible to arrive at a conjecture, making us believe that there is no ``nice'' general formula for the numbers $\mathcal{F}(2k,j)$ nor for the moments of the mean ESD of $\rho_N$.
\section*{Acknowledgements}
I thank Octavio Arizmendi for introducing me to traffic-free probability and for his valuable comments on this work. I also thank Jorge Fernandez for providing the code to do the computer calculations of the formulas above. Finally, I thank Daniel Perales for providing a nice and simple proof for the equality $\mathcal{F}(2k, 2) = {{2k}\choose{k}}-1$.
\section{Introduction}
Ground-based telescopes suffer from the degradation of image quality due to the turbulent nature of Earth's atmosphere. This phenomenon, frequently termed ``seeing'', prevents large-aperture telescopes from achieving their theoretical angular resolution. Even the best observing sites do not allow for observations in the visible with a resolution higher than the diffraction limit of a 20 cm telescope. This means that long exposures from both very small and extremely large telescopes show the same angular resolution.
The Fried parameter, $r_0$, is a quantity which describes the average atmospheric conditions: it is the distance across which the expected change of the wavefront phase is exactly 1/2$\pi$. It may also be understood as the size of the telescope aperture over which the theoretical diffraction limit may easily be achieved. Since at the best observational sites $r_0$ rarely reaches 20~cm in the visible ($\mathrm{\lambda=0.5~\mu m}$) \citep{SolarSites}, long exposures obtained from any telescope do not exhibit a resolution higher than the one reached by a 20 cm telescope. Importantly, the $D/r_0$ ratio is frequently utilized to determine how far the resolution of images obtained by a telescope of diameter $D$ is from its theoretical diffraction limit.
So far, numerous techniques, both hardware- and software-based, have been developed to enhance the resolution of astronomical observations. Probably the most prominent hardware-based approach is adaptive optics \citep[AO,][]{AO1}, in which the compensation of wavefront distortions is performed directly by a deformable mirror. The AO for solar imaging differs from that used in nighttime observations \citep{SolarAO1,SolarAO2}. There is no point-like object for wavefront sensing, which is usually performed with a Shack-Hartmann sensor. Instead, the cross-correlation between images observed in individual sub-apertures has to be calculated to estimate the shape of the wavefront \citep{SolarAO3}. Significantly poorer daytime atmospheric conditions put much higher demands on the update frequency of the deformable mirror. Fortunately, there is also more light available for the sensors, which enables such observations.
A popular example of a software-based approach to improve the quality of images is the method called Lucky Imaging \citep[LI,][]{Scharmer1989,LI1}. Due to the availability of very fast and low-noise cameras, this method has recently become a very popular high-resolution acquisition technique in the visible. However, it is dedicated to smaller telescopes ($<$ 2 m), since it relies on the fact that there is only a small chance to obtain a diffraction-limited image in a series of very short exposures (i.e., exposures shorter than the coherence time of the atmosphere -- usually up to several milliseconds). This chance, estimated by \cite{LI2}, is relatively high for smaller apertures (e.g., $D/r_0<2$), while it quickly becomes negligible for larger mirrors.
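To make the aperture limitation concrete, a commonly quoted approximation due to Fried (1978) (our addition; the exact estimates in \cite{LI2} may differ) puts the probability that a single short exposure is diffraction-limited at $P \approx 5.6\,\exp[-0.1557\,(D/r_0)^2]$ for $D/r_0 \gtrsim 3.5$:

```python
import math

def lucky_probability(D_over_r0):
    # Fried's (1978) approximation for the probability of a diffraction-limited
    # short exposure; valid for D/r0 >= 3.5
    return 5.6 * math.exp(-0.1557 * D_over_r0 ** 2)
```

At $D/r_0=5$ roughly one frame in nine is usable, while at $D/r_0=10$ the probability falls below $10^{-6}$, which illustrates why the method is restricted to moderate apertures.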
There are many combinations of software and hardware techniques of high-resolution imaging. As an example, \cite{AOLI1} or \cite{AOLI2} present the fusion of Lucky Imaging and adaptive optics (AOLI). In \citet{holo} the deconvolution of a series of short exposures is shown as another possible way to enhance the quality of astronomical images. A wide range of speckle-interferometry methods is also available \citep{saha,SI1}. They involve such approaches as the aperture masking \citep{AM1,AM2} or speckle bispectral reconstructions \citep{speckleMasking,bispectrum}. Some of them have been successfully used for a long time in solar imaging \citep{SolRecOld}.
Either with or without AO, the observer acquires a series of images and then applies several post-processing steps \citep[e.g.,][]{solar_postproc}. One of them is always the selection of the best exposures, i.e., the ones having the highest image quality. Such lucky imaging has been very popular for a long time in ground-based solar observatories due to its apparent simplicity and very low hardware requirements (only a fast camera is necessary) \citep{LIsolarOLD}. However, the most advanced imagers utilizing lucky exposures are far from being simple \citep{DKIST}. The rejection of less useful frames is possible due to the high intensity of the observed object, which allows for acquiring thousands of exposures per second at relatively low resolution, or tens of frames if large-format cameras are utilized. The selection is also essential for reaching a high quality of the final outcomes regardless of the complexity of the succeeding image reconstruction (e.g., simple stacking, deconvolution, or bi-spectral analysis).
The assessment of the temporal quality of registered solar images becomes challenging due to the complex character of the observed scene. It is impossible to utilize a basic quality metric frequently used in lucky imaging -- the intensity of the brightest pixel in a speckle pattern -- as it requires a well-isolated point-like object (star). Instead, a widely employed method for quality assessment of solar images is the root-mean-square contrast (rms-contrast), which assumes uniform isotropic properties of granulation \citep{rms1,solar_postproc,rms2}. Unfortunately, the method has several drawbacks, like a significant dependency of its effectiveness on the wavelength \citep{rms_bad1} and sensitivity to the structural contents of the image \citep{DengZHang2015}. This implies that tip-tilt effects, which shift the observed patch and slightly change its contents, will also introduce additional noise to the quality estimation, as image features like sunspots or pores move in and out of the analyzed region.
Motivated by a recent work by \cite{DengZHang2015} introducing an objective image quality metric for solar observations, we decided to explore which of the numerous quality metrics (QMs) available in the rich literature can be employed for the selection of solar frames. This is carried out by investigating the correlation between the outcomes of QMs and the known strength of simulated turbulence. Our review includes 36 QMs with many varying implementations. Implementations refer to different gradient operators, kernel sizes, thresholding techniques, etc., without implying conceptual changes. We utilize reference images from the Solar Optical Telescope (SOT) on board the Hinode satellite \citep{Tsuneta2008} and use an advanced method for modeling atmospheric turbulence, the Random Wave Vectors \citep[RWV,][]{RWV1} method. The scintillation noise is also included to faithfully reflect all the seeing characteristics. Moreover, we check the computational efficiency of the QMs to indicate which methods are more suitable for application in high-speed real-time image processing.
\section{Experiment}
\subsection{Turbulence simulation}
Since observations from space are not disturbed by the atmosphere, as the reference data for our experiments we used images obtained by the SOT. From the wide range of images registered in the Hinode database\footnote{The Hinode database query form is available at \url{http://darts.isas.jaxa.jp/solar/hinode/query.php} }, we selected one which contained several regions of enhanced magnetic activity (sunspots). The image was originally obtained at the green continuum wavelength, on 29 Nov. 2015 at 21:19:44 UT. In Fig. \ref{allSun} we show the selected reference and indicate the positions of six extracted $100\times100$-pixel patches (roughly $5'' \times 5''$ each, image scale of 0.054''~pixel$^{-1}$). Patches W1, W2, and W3 include sunspots, while W4, W5, and W6 contain granulation.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{annoted_wycinek.png}
\caption{A part of the solar photosphere imaged by the Hinode satellite. The six patches ($W1-W6$, $100\times100$ pixels) extracted for the experiment are annotated. Three of them ($W1,W2,W3$) contain parts of active regions, while the remaining ones ($W4,W5,W6$) include only granulation. The image was obtained in the green continuum on 29 Nov. 2015 at 21:19:44 UT.}
\label{allSun}
\end{figure}
To investigate the response of various QMs to a given strength of turbulence, we modeled the transfer function of the atmosphere utilizing the RWV method. The method allows for reliable modeling of amplitude and phase fluctuations of the incoming optical wavefront. Since a patch of the solar photosphere used for quality assessment can be arbitrarily small, we assumed that it is within the isoplanatic angle. Thus, anisoplanatic effects, which would significantly complicate the simulations, are neglected in this study. Using RWV we generated 1000 blurring kernels (speckle patterns) for ten distinctive scenarios of seeing conditions: $D/r_0$ = 1, 2, ..., 10. We assumed that such a range is representative for observations at the best observing sites. Each generated kernel consisted of 1024$\times$1024 pixels and the resolution was two times higher than required by the Nyquist limit, i.e., a single pixel corresponded to an angular size of $\lambda / 4D$ ($\lambda$ -- wavelength, $D$ -- telescope diameter). We opted for such oversampling to be able to combine simulated speckle kernels with reference Hinode images. Each blurring kernel was normalized so that the summed intensity over all pixels is unity. Exemplary kernels with the corresponding long-exposure seeing disks are presented in Fig. \ref{kernels1}. Evidently, the poorer the seeing conditions (higher $D/r_0$), the more complex the kernel shape. The simulated tip-tilt effect is also visible as a displacement of the kernel's centroid.
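The kernel normalization and the degradation of a reference patch described above can be sketched in a few lines. This is a minimal illustration only: a synthetic Gaussian kernel stands in for an RWV speckle pattern, and circular FFT convolution is used for brevity (a real pipeline would pad the patch to avoid wrap-around at the edges).

```python
import numpy as np

def normalize_kernel(kernel):
    """Scale a blurring kernel so its summed intensity is unity."""
    return kernel / kernel.sum()

def degrade(patch, kernel):
    """Blur a reference patch with a unit-sum kernel via circular FFT convolution."""
    k = normalize_kernel(kernel)
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * np.fft.fft2(k, s=patch.shape)))

# Toy example: a 100x100 patch and a Gaussian stand-in for a speckle kernel.
rng = np.random.default_rng(0)
patch = rng.normal(1000.0, 50.0, size=(100, 100))
y, x = np.mgrid[-16:17, -16:17]
kernel = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
blurred = degrade(patch, kernel)
```

Because the kernel sums to unity, the total flux of the patch is conserved by the convolution; only its spatial structure is smeared.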
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{kernels.jpg}
\caption{Exemplary blurring kernels (negatives) employed in the experiment. The rows correspond to various $D/r_0$ relative atmospheric conditions while the last column exposes the simulated long-exposure seeing disk (i.e., an ensemble average over all the kernels for a given $D/r_0$). Each kernel is presented in a box with a side length of $40\lambda/D$. The auxiliary gray lines indicate the box center highlighting the simulated tip-tilt effect.}
\label{kernels1}
\end{figure*}
Atmospheric scintillation results in a varying attenuation of the flux collected by a telescope. This type of noise was recently investigated by \cite{osborn}. The authors present the following formula for estimating the relative scintillation noise (the square root of the relative variance) of the total intensity of the observed object:
\begin{equation}
\sigma_{s} = \sqrt{10^{-5}~C^2_Y~D^{-4/3}~t_e^{-1}~(\cos \gamma)^{-3}~\exp(-2h_{\mathrm{obs}}/H)},
\end{equation}
where $D$ is the telescope diameter, $t_e$ is the exposure time, $\gamma$ is the zenith distance, $h_{\mathrm{obs}}$ is the altitude of the observatory, $H$ is the scale height of the atmospheric turbulence, generally assumed to be $\sim$ 8000 m, and $C_Y$ is a scaling coefficient which can be determined from turbulence profilers (SCIDARs) and was estimated between 1.30$-$1.67 for the best observing sites (see Tab. 1 in the original work of \citet{osborn}).
To include such fluctuations in our simulated images, we multiplied each kernel by a random variable with an expected value of unity and a standard deviation equal to $\sigma_{s}$. We assumed (1) a telescope size of $D=0.5$ m, according to the size of the Hinode/SOT instrument, (2) observations at a zenith distance of $\gamma=60^{\circ}$, (3) a low scintillation index, $C_Y=1.5$, as expected for (4) a high-altitude observatory, $h_{\mathrm{obs}}= 3000$ m. For such conditions the relative scintillation noise is $\sigma_s = 0.032$.
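The scintillation model above can be expressed as a short function. Equation (1) is implemented as given; note that the exposure time $t_e$ used in the call below is a hypothetical value, since the text does not fix it, so the resulting number is illustrative rather than a reproduction of the quoted $\sigma_s = 0.032$.

```python
import math
import random

def scintillation_sigma(D, t_e, gamma_deg, h_obs, C_Y=1.5, H=8000.0):
    """Relative scintillation noise sigma_s from Eq. (1):
    sqrt(1e-5 * C_Y^2 * D^(-4/3) * t_e^(-1) * cos(gamma)^(-3) * exp(-2*h_obs/H))."""
    cos_g = math.cos(math.radians(gamma_deg))
    var = 1e-5 * C_Y**2 * D**(-4.0 / 3.0) / t_e * cos_g**(-3) * math.exp(-2.0 * h_obs / H)
    return math.sqrt(var)

# Conditions from the text; t_e = 5 ms is an assumed (hypothetical) exposure time.
sigma_s = scintillation_sigma(D=0.5, t_e=0.005, gamma_deg=60.0, h_obs=3000.0)

# Multiplicative flux fluctuation applied to each kernel: mean 1, std sigma_s.
rng = random.Random(0)
factor = rng.gauss(1.0, sigma_s)
```

Each normalized kernel is then multiplied by such a factor before convolution, mimicking frame-to-frame flux variations.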
To properly convolve a blurring kernel with a reference patch, the scale of the kernel has to be the same as the image scale. For the assumptions given above, i.e., a $D = 0.5$ m telescope observing at $\lambda= 550$ nm, we obtain an image scale of 0.055'' pixel$^{-1}$. This means that the sampling of the blurring kernels ($\lambda/4D$) and the assumed telescope size allow for convolving kernels with the Hinode images without any prescaling. Several examples of solar images degraded by simulated blurring kernels are presented in Fig. \ref{patches1}.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{patches_examples.png}
\caption{Exemplary simulated images for all six patches (rows) and four selected degrees of atmospheric turbulence, $D/r_0=1,~4,~7,\textrm{~and~}10$ (columns).}
\label{patches1}
\end{figure}
The time-dependent quality, when observing stellar objects, can be determined from the relative intensity of the brightest pixel in the normalized kernel. This is a widely accepted approach for selecting the sharpest frames in nighttime lucky imaging \citep{LI1}. However, in our case the object is not point-like and shows complex structures. Thus, we decided to use a quality measure based on the amount of energy preserved from the original frequency spectrum. Since the original image is convolved with a blurring kernel and the original amplitudes in the frequency spectrum are multiplied by the amplitudes of the kernel, the value of the proposed quality measure can be calculated by summing the squared amplitudes in the 2-D Fourier transform of a kernel. According to Parseval's theorem, this also equals the sum of squared intensities of a kernel directly in the image plane.
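The preserved-energy quality measure can be computed directly from a normalized kernel, and the Parseval identity invoked above is easy to verify numerically. A minimal sketch with a random stand-in kernel (not an actual RWV speckle pattern):

```python
import numpy as np

def kernel_quality(kernel):
    """Quality of a blurring kernel: sum of squared intensities of the
    unit-sum kernel, equal (by Parseval's theorem) to the sum of its
    squared Fourier amplitudes. A delta-function kernel gives 1."""
    k = kernel / kernel.sum()
    return float(np.sum(k**2))

# Numerical Parseval check on a random stand-in kernel.
rng = np.random.default_rng(1)
k = rng.random((64, 64))
k /= k.sum()
q_image = float(np.sum(k**2))
q_fourier = float(np.sum(np.abs(np.fft.fft2(k))**2) / k.size)  # DFT normalization

# A perfectly sharp (delta) kernel has the maximal quality of 1.
delta = np.zeros((8, 8))
delta[0, 0] = 1.0
```

The broader the speckle pattern, the smaller the sum of squared intensities, so the measure decreases monotonically with blurring strength.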
Since turbulence is a random process, there is also a possibility that the temporal seeing conditions will become much better or much worse than the ones indicated by the average $D/r_0$ \citep{LI2}. This is in fact what we also observed in our data. For all ten sets of simulated kernels ($D/r_0=1,2,...,10$), the spread of temporal quality is shown in the histograms in Fig. \ref{qKernel}. In the upper part of Fig. \ref{qKernel} we plotted the amount of preserved energy of the original image over all 1000 kernels for three values of $D/r_0$. Clearly, the quality for the average $D/r_0=4$ can sometimes outperform the conditions registered at $D/r_0=1$. Therefore, $D/r_0$ expresses only the average blurring strength and cannot be taken as a reliable estimator of the quality of individual frames.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{qualitiesPlot.pdf}
\caption{Dependence of the temporal image quality, expressed by the amount of preserved image energy, on the average seeing conditions $D/r_0$. \emph{Left}: amount of preserved image energy for three 1000-kernel sets, $D/r_0=1,4$, and 10. \emph{Right}: histograms of the preserved image energy over all the simulated kernels. (Color figures are available in the online version.)}
\label{qKernel}
\end{figure}
\subsection{Methods}
A set of 35 state-of-the-art QMs was provided in the review of \citet{PertuzPuig2013}. The authors consider the most popular methods dedicated to the assessment of focus in complex scenes by means of contrast measurement. The methods can be categorized into six families of operators, based on: gradients (GRA), Laplacians (LAP), wavelets (WAV), intensity statistics (STA), the discrete-cosine transform (DCT), and miscellaneous methods (MIS), i.e., the ones that cannot be categorized into any other group. An overview of the methods with their abbreviations and references is given in Tab.~\ref{tab_par_ref}.
Within this set of techniques one can find the most popular method used in solar image processing, i.e., the rms-contrast measure, which was called Normalized Gray-level Variance and abbreviated as STA5 (we follow this nomenclature). The most recent QM proposed by \cite{DengZHang2015}, the Median Filter Gradient Similarity (GRA8), was included in our comparison as well.
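For reference, the rms-contrast used as the STA5 baseline can be written in a few lines. Here it is taken as the standard deviation of intensities over their mean; the exact normalization in the cited implementation may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def rms_contrast(patch):
    """Rms-contrast (normalized gray-level variance style metric):
    intensity standard deviation over the mean, assuming roughly
    uniform, isotropic granulation across the patch."""
    p = np.asarray(patch, dtype=float)
    return float(p.std() / p.mean())

# Blurring suppresses small-scale structure, lowering the contrast.
rng = np.random.default_rng(2)
sharp = 1000.0 + 50.0 * rng.standard_normal((100, 100))
blurred = sharp.reshape(50, 2, 50, 2).mean(axis=(1, 3))  # crude 2x2 box blur
```

The metric is cheap to evaluate, which explains its popularity, but, as discussed later, it reacts to the structural content of the scene and not only to the blurring strength.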
\input{tabela_parametry_ref.tex}
To improve the effectiveness of the QMs for solar observations, we had to adjust the parameters of many techniques. This was carried out experimentally by tuning the parameters considering the properties of the observed solar scene and analyzing the algorithms' details. Moreover, for some of the methods we proposed major, but still simple, modifications which allowed for enhancing their effectiveness. This resulted in the creation of various implementations of most of the methods (labeled as version $A$, $B$, etc.). The set of investigated parameter values and/or the details of applied modifications are given in Tab. \ref{method_params}. The distance parameters, like the radius or size of the local filtering window, are expressed in pixels (in our case 1 pixel = 0.055''). The thresholds are given in relative values, which means that before applying thresholding the intensities in a patch were normalized so that they cover the range from zero to unity. Several methods have two adjustable parameters; therefore, we evaluated various combinations of their values. Including all the modifications, we ended up with a total of 105 implementations of 36 techniques included in our comparison.
\input{tabela_parametry.tex}
\subsection{Data analysis}
To assess the performance of all methods, we investigated the correlation between the results of the QMs and the actual expected quality $Q$. As stated before, the quality was estimated by the total preserved energy with respect to the reference, undisturbed image. We observed that the relation between these two quantities is almost always nonlinear. Fortunately, it is sufficient that the dependency is monotonic, as this allows us to distinguish between better and worse atmospheric conditions. Therefore, to estimate the effectiveness of the methods, instead of Pearson's correlation coefficient we used the Spearman rank-order correlation $C_s$. Such a coefficient is insensitive to any nonlinearities in the observed dependencies.
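The rank-order correlation used here is simply the Pearson correlation of ranks. A minimal implementation (without tie handling, which suffices for continuous $Q$ and QM values) could look like:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank-order correlation C_s: Pearson correlation of the
    ranks of x and y. Invariant under any monotonic transformation, so a
    nonlinear but monotonic Q-QM relation still gives |C_s| = 1.
    (Ties are not handled in this minimal version.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

q = np.linspace(0.1, 1.0, 50)   # expected quality
metric = q**3                   # nonlinear, monotonic QM response
```

Here `spearman(q, metric)` evaluates to 1.0 despite the cubic nonlinearity, whereas the Pearson coefficient would fall below unity; this is exactly why the rank coefficient is preferred for comparing QMs with differently shaped response curves.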
As an example, in the upper panel of Fig. \ref{corrExamples} we show significantly different dependencies between the expected quality $Q$ and the outcomes of two QMs (\emph{LAP1} and \emph{MIS1B}, on the left and right side, respectively) calculated for image patch $W1$. Each set of $D/r_0$ is presented with a different marker/color. We decided to calculate the Spearman correlation coefficient in two ways: (1) within a set of observations for each $D/r_0$ and (2) across all the simulated conditions, $D/r_0=1-10$. We present the corresponding results of $C_s$ in each $D/r_0$ subset in the bar plots of Fig. \ref{corrExamples}. The correlation depends on $D/r_0$, so it becomes apparent which range of atmospheric conditions is most suitable for a given method. For example, the LAP1 method allows for effective estimation of image quality at $D/r_0=1$, while MIS1B is completely useless in this range. The proposed approach to calculating the correlation coefficient allowed us to estimate the performance of each method over a range of typical atmospheric conditions.
In the example presented in Fig. \ref{corrExamples}, the superiority of LAP1 over MIS1B is also visible for $C_s$ calculated over the whole set of $D/r_0$. However, the difference is significantly smaller than within the individual groups, as both methods achieve $C_s > 0.9$ (0.98 and 0.90 for LAP1 and MIS1B, respectively). This is due to the wide range of considered $Q$ values, which makes the correlation between the quantities more evident and leads to an elevated value of the $C_s$ coefficient. Although the differences are not so significant, the correlation over such a wide range of $Q$ values still allows for a comparison of the overall effectiveness of the QMs, however, within a much narrower range of $C_s$ values.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{example1.pdf}
\caption{Examples of the correlation between measured and expected image quality for two distinct methods: \emph{LAP1} -- left side and \emph{MIS1B} -- right side. The analysis was performed for image patch W1. The calculated correlation coefficients in each $D/r_0$ set and over the whole data set ($D/r_0=1-10$) are presented in the lower bar plots. Point colors correspond to the various $D/r_0$ conditions. (Color figures are available in the online version.)}
\label{corrExamples}
\end{figure*}
\subsection{Results and discussion}
The results of the comparison are presented in Tab. \ref{tab_wyniki} and Fig. \ref{families}. While in Tab.~\ref{tab_wyniki} we present only the best four methods for each $D/r_0$, the average performance obtained for entire QM families is summarized in Fig. \ref{families}. We divided the results into two categories, according to the contents of the observed image: W1, W2, and W3 -- active regions, and W4, W5, and W6 -- granulation.
\input{tabela_wyniki}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{families.pdf}
\caption{Average correlation coefficients obtained for QM families. The left panel corresponds to the results for active regions, while the right one shows outcomes for granulation patches. (Color figures are available in online version.)}
\label{families}
\end{figure}
According to the results presented in Tab. \ref{tab_wyniki}, there is no single winner covering all possible atmospheric conditions and scene characteristics. However, three techniques showed distinctively better performance and, therefore, are discussed below.
One of the methods with very impressive performance is Helmli and Scherer's Mean method (MIS5), proposed by \citet{HelmliScherer2001}. This technique provided very good results over the whole range of $D/r_0$ conditions and for both types of observed solar regions. For all atmospheric conditions it is always among the four best methods. As indicated by its distinctively higher correlation coefficients calculated over $D/r_0=1-10$, its usefulness should be considered when the seeing is highly varying or unknown. In summary, this method should be the first choice among all tested techniques.
The second noteworthy method is the Median Filter Gradient Similarity (GRA8), which was recently proposed by \citet{DengZHang2015}. It shows the best performance for very good atmospheric conditions ($D/r_0<4$). However, to achieve this high effectiveness we had to slightly modify the method by (1) combining both horizontal and vertical gradients and (2) changing the kernel size. The superiority of the method is evident especially for active regions (W1$-$W3), while it is slightly less efficient when assessing only granulation patches (W4$-$W6), especially for $D/r_0>2$. Interestingly, this technique was not included among the best 12 methods indicated by the wide-range correlations, $D/r_0=1-10$, which indicates that it should not be applied for observations with poor or unknown atmospheric conditions. The results of the GRA8 method recommend it for use in observations assisted by AO since, in this case, the image quality is significantly enhanced and the effective $D/r_0$ is small.
The last method which provides good results is the DCT Energy Ratio proposed by \cite{ShenChen2006}. Its performance is the highest for moderate and poor atmospheric conditions when observing granulation regions. It is the second-best method when the whole range of $D/r_0$ is taken into account for patches W4$-$W6. The parameter of this method -- the size of the sub-block -- should be selected according to the blurring strength, i.e., the larger the $D/r_0$, the larger the sub-block.
The method frequently used in solar observations, the rms-contrast or normalized gray-level variance (STA5), showed surprisingly poor performance. Since the rms-contrast measure was originally developed for granulation regions, with isotropic and uniform characteristics, we investigated how the method performed for this type of scene. The results presented in Fig. \ref{rms_test} indicate that the technique indeed achieves significantly higher correlation values for granulation. Still, the outcomes based on the rms-contrast were much less correlated with the expected image quality than the results achieved by the best techniques. Therefore, it did not appear in any ranking presented in Tab. \ref{tab_wyniki}. Our conclusions agree with the observations made by \cite{DengZHang2015}, who also indicate the relatively low efficiency of the common rms-contrast measure.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{rms_contrast.pdf}
\caption{Comparison of the effectiveness of the most popular image quality technique used for solar images, i.e., the rms-contrast. Significantly higher correlation values are achieved for granulation regions.}
\label{rms_test}
\end{figure}
Some interesting conclusions can be drawn from the average performance of each QM family presented in Fig. \ref{families}. The Laplacian-based methods (LAP) show very good average correlation with $D/r_0$ for both types of observed scenes. While the LAP-based methods appear only several times in the rankings presented in Tab. \ref{tab_wyniki}, they should still be considered as a base for new, better techniques. The biggest difference in efficiency between granulation and active regions can be observed for the statistical methods (STA). This agrees well with the results shown in Fig. \ref{rms_test} and arises from the assumption made in STA methods that regions should expose uniform properties across the observed field. The peculiar shape of the GRA dependency visible in the right plot of Fig. \ref{families} arises from the characteristics of the Median Filter Gradient Similarity (GRA8). As stated before, the performance of this method rises significantly as $D/r_0$ decreases. This behavior biases the average efficiency of the GRA family.
The possibility of using a method in real-time image selection can be especially important, as it allows for recording only the most useful data. Therefore, for completeness of our comparison, we measured the time required for processing a single image patch by each method. The evaluations were performed on a single core of an Intel Core i7-3930K operating at 3.2~GHz. The procedures were repeated $10^4$ times to obtain the average execution time. The results are presented in Tab.~\ref{tab_times}, wherein we list the methods alternately with their average computation times (single-image analysis) in seconds.
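The timing procedure amounts to averaging the wall-clock time of a single-patch evaluation over many repetitions; a simple sketch (with the repeat count reduced from the $10^4$ used in the paper, and STA5 approximated as std/mean):

```python
import time

import numpy as np

def mean_exec_time(qm, patch, repeats=1000):
    """Average wall-clock time of a single-patch quality evaluation, in seconds."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        qm(patch)
    return (time.perf_counter() - t0) / repeats

patch = np.random.default_rng(3).random((100, 100))
t_sta5 = mean_exec_time(lambda p: p.std() / p.mean(), patch)
```

Averaging over many repetitions smooths out scheduler jitter; `time.perf_counter()` is used because it is a monotonic, high-resolution clock.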
The results show that real-time frame selection for the assumed size of image patch is possible at frame rates above 1000 fps for most of the analyzed methods. This may not hold, however, if one wants to process large, full-resolution frames and/or if several steps of image calibration have to be applied. For such a case, the usage of graphics processing units (GPUs) and the accompanying code optimization should be considered. However, in our comparison we decided to perform the measurements on a single CPU core, so that the further reduction of execution time on a multi-core machine can be estimated. For most of the included methods, the analysis is performed independently in local sub-regions of the whole image, which makes them easily parallelizable.
We observed that the Median Filter Gradient Similarity (\emph{GRA8}) requires visibly more computation time ($>$0.01 s) than \emph{MIS5} ($<$0.002 s). This is mostly due to the median filtering, which requires sorting pixels in a sliding window. This indicates that \emph{GRA8}, compared to \emph{MIS5}, would require more computing resources and/or better code optimization to operate at the high frame rates frequently utilized in solar observations. Unfortunately, the Discrete Cosine Transform techniques (\emph{DCT1} and \emph{DCT2}) have significantly higher execution times, which makes them useless for real-time computation. These methods should be considered only in post-processing. The fastest method was the one most frequently utilized in solar observations, \emph{STA5}. Unfortunately, its mediocre performance is not compensated by its distinctively higher computational efficiency.
\input{tabela_czasy.tex}
\section{Conclusions}
The assessment of image quality is an integral part of observations of the solar photosphere and chromosphere. It is essential to precisely select only the images of the highest fidelity before performing the subsequent image reconstruction. Unfortunately, the complexity and variety of observed structures make the estimation of image quality a challenging task. There is no single point-like object available; therefore, the selection techniques utilized in nighttime lucky imaging are not applicable.
In this study we employed 36 techniques with various implementations and evaluated their precision in selecting the best exposures. The methods are based on various principles (gradients, intensity statistics, wavelets, Laplacians, Discrete Cosine Transforms) and were published over the last 40 years. Additionally, we enhanced their performance by applying simple modifications and by adjusting their tunable parameters.
In the comparison we employed reference images, containing both active regions and granulation areas, obtained by the Hinode satellite. The selected patches were degraded by convolution with blurring kernels generated by the RWV method, which faithfully reflects the seeing characteristics. We assumed a wide range of relative atmospheric conditions, starting from nearly undisturbed observations at $D/r_0=1$ to mediocre seeing at $D/r_0=10$. The reference quality of each simulated image was objectively estimated by measuring the amount of energy preserved in the Fourier spectrum of the original image. Eventually, to assess the efficiency of the compared techniques, we calculated Spearman's correlation coefficient between the outcomes of each method and the expected image quality.
The results of our comparison showed that the efficiency depends on the strength of atmospheric turbulence. For good seeing, $D/r_0<4$, the best method is the Median Filter Gradient Similarity \citep{DengZHang2015}, (GRA8), the most recent technique proposed for solar observations. Importantly, the original idea had to be slightly modified to enhance the method's performance. On the other hand, when the seeing conditions cover a wide range, the most efficient method is the Helmli and Scherer's Mean \citep{HelmliScherer2001}, (MIS5). This method should be considered when observing without AO or if the seeing conditions are unknown. The last distinctive method was the DCT Energy Ratio \citep{ShenChen2006} which showed high performance in poor atmospheric conditions when observing granulation regions. The measurements of execution time indicated that of the three mentioned techniques the Helmli and Scherer's Mean has significantly higher computational efficiency which recommends it for utilization with extremely high frame rates and/or larger image patches.
\begin{acks}
We would like to thank the anonymous Reviewer for numerous important comments, a thorough review, and great help in polishing the English. This research was supported by the Polish National Science Centre grants 2016/21/D/ST9/00656 (A. Popowicz) and 2015/17/N/ST7/03720 (K. Bernacki). We would also like to acknowledge the support from the Polish Ministry of Science and Higher Education funding for statutory activities.
\end{acks}
\bibliographystyle{spr-mp-sola}
|
\section{Introduction}
Effective temperature (\teff) and surface gravity (\logg) are among the most fundamental properties of a star. Determination of these parameters (or their proxy) has allowed for an understanding of how stars form and how they evolve over time.
In the early days of spectroscopy, direct measurement of these parameters was challenging due to a lack of precise stellar models. However, it was possible to assign stars a particular spectral classification that can be used as a proxy for \teff. Spectral types, ranging from O to M, were determined through examining the type and the strength of various absorption lines. Each type can further be subdivided into subtypes, usually 0 to 9. Similarly, through examining the widths of certain lines, stars were assigned to different luminosity classes that can be used as a proxy for the size of a star, ranging from supergiants (I) to dwarfs (V). Hundreds of thousands of stars were classified in such a manner \citep{gray2009}. This led to complex classifications such as, e.g., G3II, to which more can be added to further express features pertaining to the unique metallicity of a particular star, information on multiplicity, as well as ranges that would signify an uncertainty in classification. In the era predating computers, this allowed one to immediately grasp the precise nature of a particular star of interest. Nowadays, however, given the diversity and complexity of the encoded information, such a classification can be difficult and impractical to use when dealing with a large number of stars.
Over the years, the models of stellar atmospheres have improved, allowing the creation of grids of synthetic spectra with known \teff, \logg, and other parameters, to which the spectra of real stars can be compared \citep[e.g.,][]{kurucz1979,kurucz1993,coelho2005,husser2013}. However, not all synthetic spectra are created equal due to the inclusion of different physics, and they may have substantial systematic differences in the derived stellar parameters when matched against real data. Careful consideration of individual spectral features can help in fine-tuning the extracted parameters, but, as these features differ across spectral types, this is not always practical to do in a self-consistent manner when analyzing spectra from large surveys.
In particular, although a good performance by various survey pipelines for programs such as APOGEE has been previously achieved for low mass stars with \teff$<$7000 K, many such pipelines have lacked calibration to measure parameters of hotter stars due to a sparsity of their spectral features \citep[e.g.,][]{garcia-perez2016}.
Previously, we have developed a neural network, APOGEE Net, which self-consistently measured \teff, \logg, and [Fe/H] for cool (\teff$<$6500 K) stars, including more exotic and rare objects, such as pre-main sequence stars \citep{olney2020}. In this work, we extend its functionality to estimate these parameters for all stellar objects, ranging from early O to late M types.
\section{Data}
\subsection{Data products}
The APOGEE project \citep[S. Majewski et al. in prep]{blanton2017,majewski2017} uses two spectrographs mounted at two 2.5 meter telescopes -- one at the Apache Point Observatory (APO) and the other at the Las Campanas Observatory (LCO) \citep{bowen1973,gunn2006}. It is capable of observing up to 300 sources simultaneously in the H band (1.51--1.7 $\mu$m), with a resolution of $R\sim$22,500 and a field of view 3$^\circ$ in diameter at APO and 2$^\circ$ in diameter at LCO \citep{wilson2010,wilson2019}.
The latest public data release is DR17 \citep[J. Holtzman et al. in prep]{abdurrouf2021}. It consists of $\sim$660,000 unique stars, many with multiple visit spectra. Due to the targeting strategy of the survey \citep{zasowski2013,zasowski2017}, a large fraction of the observed sources are red giants. However, a number of other targets have also been observed, such as those provided by additional goal and ancillary programs \citep{beaton2021,santana2021} -- they include pre-main sequence stars in several star-forming regions, and main sequence and evolved stars of various spectral types. In particular, A \& F dwarfs are observed in every field due to their use as telluric calibrators.
\subsection{Existing Pipelines}
The primary pipeline for the reduction and extraction of stellar parameters from the APOGEE spectra is ASPCAP \citep{garcia-perez2016}. It performs a global simultaneous fit for eight stellar parameters, which include \teff, \logg, $v\sin i$, various metallicity metrics, and other parameters, through performing a least-squares fit between the data and a synthetic spectral grid. In older data releases, the full set of derived stellar parameters was made available only for red giants. For main sequence dwarfs or pre-main sequence stars, while some \teff\ were reported, neither metallicity nor \logg\ was made available. DR16 expanded the list of stars with fully derived parameters to all sources with \teff$<8,000$ K \citep{ahumada2020}. DR17 (SDSS-IV collaboration in prep.; J. Holtzman et al. in prep) added \teff\ and \logg\ for hotter stars, up to \teff$<20,000$ K. However, despite the increase in coverage and the improvement in parameters over successive data releases, some systematic features remain which make these parameters not optimal for some types of stars, particularly those that are still young (Figure \ref{fig:pipelines}).
The first alternative to the direct grid fitting performed by ASPCAP was a data-driven approach, The Cannon \citep{ness2015,casey2016}. It was trained on a population of select stars with parameters from ASPCAP DR10, in an attempt to improve on some of the systematic features that arose from the mismatch between synthetic spectra and the data. Its parameters are limited to red giants. The latest value-added catalog was released in DR14 \citep{abolfathi2018}; from DR16 onwards it has offered no improvements compared to ASPCAP\footnote{\url{https://www.sdss.org/dr16/irspec/the-cannon/}}. There was an effort to extend The Cannon to also report \teff\ and [Fe/H] for M dwarfs \citep{birky2020}, but there was no unified model that could perform on the full APOGEE sample.
The Payne \citep{ting2019} was another data-driven approach, trained on the Kurucz atmospheric models. Unlike The Cannon, it did not use ASPCAP parameters for label transfer, instead deriving its own set of labels. Prior to DR16, it was the first to produce a comprehensive set of stellar parameters (including \logg) for dwarfs with \teff$<8000$ K. Although robust across much of the parameter space, a degeneracy between \logg\ and metallicity among M dwarfs confused their parameters with those of pre-main sequence stars. As such, it did not report parameters for sources with \teff$<4000$ K and \logg$>3.5$ dex.
APOGEE Net I \citep{olney2020} attempted to build on the efforts of The Payne, supplementing the available parameters for intermediate mass dwarfs and red giants with \teff, \logg, and [Fe/H] derived from photometric relations and theoretical isochrones for the M dwarfs and pre-main sequence stars. This combination of parameters was used to train a neural network capable of deriving stellar properties from APOGEE spectra for all stars with \teff$<$6700 K in a homogeneous manner. However, the lack of a training sample of sources hotter than \teff$>$8000 K caused all spectra dominated by the H lines to clump at the edge of the grid, and, therefore, parameters for stars with \teff$>$6700 K were not reliable.
There have been numerous efforts to derive spectral types for OB stars towards a number of star forming regions through cross-matching APOGEE observations to the optical spectra with known types and deriving relations based on equivalent widths of various H lines in the near-infrared \citep{roman-lopes2018,roman-lopes2019,roman-lopes2020, borissova2019,ramirez-preciado2020}. For optimal performance, however, these efforts require initial separation of B \& F stars (as they can have a comparable strength of the H lines, requiring other lines to disentangle them). Furthermore, such a classification does not currently bridge the gap of A \& F stars to fully connect OB stars to the stellar parameters of all the other sources observed by APOGEE.
\begin{figure*}
\epsscale{1.1}
\plottwo{archive1.pdf}{archive2.pdf}
\plottwo{archive3.pdf}{archive4.pdf}
\caption{Stellar parameters derived from APOGEE spectra from various pipeline+dataset combinations (listed clockwise from upper left): Synspec version of ASPCAP, DR17 (J. Holtzman et al. in prep; [Fe/H] is not available for cool and hot dwarfs); The Cannon \citep{ness2015}, DR14; The Payne \citep{ting2019}, DR14; and APOGEE Net I \citep{olney2020}, DR17. \label{fig:pipelines}}
\end{figure*}
\begin{splitdeluxetable*}{cccccccBccccc}
\tablecaption{Stellar parameters for sources observed by APOGEE
\label{tab:params}}
\tabletypesize{\scriptsize}
\tablewidth{\linewidth}
\tablehead{
\colhead{APOGEE} &
\colhead{$\alpha$} &
\colhead{$\delta$} &
\colhead{\logteff\tablenotemark{$^a$}} &
\colhead{\logg\tablenotemark{$^a$}} &
\colhead{[Fe/H]\tablenotemark{$^a$}} &
\colhead{SNR} &
\colhead{\teff$_{,\mathrm{train}}$\tablenotemark{$^b$}} &
\colhead{\logg$_{,\mathrm{train}}$\tablenotemark{$^b$}} &
\colhead{[Fe/H]$_{,\mathrm{train}}$\tablenotemark{$^b$}} &
\colhead{Reference\tablenotemark{$^c$}} &
\colhead{Ref. Spec\tablenotemark{$^c$}} \\
\colhead{ID} &
\colhead{(J2000)} &
\colhead{(J2000)} &
\colhead{(dex)} &
\colhead{(dex)} &
\colhead{(dex)} &
\colhead{} &
\colhead{(dex)} &
\colhead{(dex)} &
\colhead{(dex)} &
\colhead{} &
\colhead{Type} }
\startdata
2M00000019-1924498 & 0.000832 & -19.413851 & 3.7351 $\pm$ 0.0027 & 4.238 $\pm$ 0.038 & -0.293 $\pm$ 0.013 & 229.8 & 3.734 & 4.224 & -0.311 & Paper I & \\
2M00004819-1939595 & 0.200805 & -19.666548 & 3.6667 $\pm$ 0.0024 & 1.674 $\pm$ 0.058 & -1.308 $\pm$ 0.031 & 311.7 & 3.670 & 0.88 & & Teff, 2013AJ....146..134K & \\
2M00011569+6314329 & 0.315407 & 63.242489 & 4.137 $\pm$ 0.020 & 3.684 $\pm$ 0.074 & & 238.7 & 4.306 & 4.04 & & SpT, 1972A\&A....17..253M & B2V\\
2M00032713+5533033 & 0.863082 & 55.550926 & 4.183 $\pm$ 0.028 & 3.81 $\pm$ 0.10 & & 263.2 & & & & & \\
\enddata
\tablenotetext{}{Only a portion shown here. Full table is available in electronic form.}
\tablenotetext{a}{Parameters predicted in this work}
\tablenotetext{b}{Parameters used to train the network}
\tablenotetext{c}{Reference for the parameters used in training; SpT shows that the spectral type and luminosity class were available, Teff shows that \teff, \logg, and occasionally [Fe/H] measurements were available.}
\end{splitdeluxetable*}
\section{The APOGEE Net Model}
\subsection{Training Labels}
To construct the training labels for the high mass stars, we used SIMBAD \citep{simbad} to search for sources with existing literature measurements of their spectral parameters. Unfortunately, direct and accurate measurements of \teff\ and \logg\ for high mass stars observed by APOGEE are rare; however, many more stars have a reported spectral type and luminosity class, which, if both are available, can be used as a proxy for \teff\ and \logg.
To perform this transformation, we compiled a list of high mass stars, independent of the APOGEE footprint, for which there exist independent measurements of both the spectral type and luminosity class, as well as separate measurements of \teff\ and \logg. In total, we have collated 851 measurements from \citet{lyubimkov2002,adelman2004,repolust2004,lyubimkov2005,crowther2006,kudritzki2008,martayan2008,fraser2010,lyubimkov2010,lyubimkov2012,nieva2013,bianchi2014,david2015,mahy2015,cazorla2017,martins2017,molina2018,heuser2018}.
We find that among OBAF stars, there is usually an unambiguous correlation between the two sets of parameters (Figure \ref{fig:grid}). We note that among the cooler stars ($<$6000 K), the luminosity class and \logg\ tend to be rather inconsistent \citep{morgan1937} -- e.g., it is relatively common for giants found in the red clump to have a luminosity class of III, IV, or V, despite having very similar \teff\ and \logg. However, as reliable \teff\ and \logg\ measurements are available for cool stars, such an exercise of converting luminosity classes to \logg\ is unnecessary in their case.
We have encoded all spectral types to a numeric value: O=00, B=10, A=20, F=30, such that a star with a class of B2.5 would have a numeric label of 12.5. Similarly, luminosity classes I-V were transformed to numeric labels 1-5. If a star had a range of luminosity classes listed, an average was taken -- e.g., IV-V would be labelled as 4.5. We then used a simple convolutional neural network to perform an interpolation between the two sets of parameters for the OBAF stars to construct a transformation grid.
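This encoding can be sketched as follows (a minimal illustration; the function names are ours, not from the pipeline's codebase):

```python
import re

# Base numeric values for spectral class letters (O=0, B=10, A=20, F=30).
SPT_BASE = {"O": 0.0, "B": 10.0, "A": 20.0, "F": 30.0}

# Roman numeral luminosity classes I-V mapped to 1-5.
LUM_CLASS = {"I": 1.0, "II": 2.0, "III": 3.0, "IV": 4.0, "V": 5.0}

def encode_spectral_type(spt):
    """Encode a spectral type string, e.g. 'B2.5' -> 12.5."""
    m = re.match(r"([OBAF])(\d+(?:\.\d+)?)", spt)
    if m is None:
        raise ValueError(f"Unsupported spectral type: {spt}")
    return SPT_BASE[m.group(1)] + float(m.group(2))

def encode_luminosity_class(lum):
    """Encode a luminosity class, averaging over ranges: 'IV-V' -> 4.5."""
    parts = lum.split("-")
    return sum(LUM_CLASS[p] for p in parts) / len(parts)
```

With this mapping, a B2.5 IV-V star receives the labels (12.5, 4.5), which can then be interpolated onto the \teff-\logg\ grid.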
In constructing the training sample, we retained all of the sources from APOGEE Net I if they had \teff$<$6700 K or if they had \logg$<11.3\times\log_{10}(T_\mathrm{eff})-40$ (Figure \ref{fig:train}), to potentially improve the parameters of the supergiants, as their previously estimated \teff\ and \logg\ might have been artificially compressed due to the relative rarity of such objects. These were supplemented with \teff\ and \logg\ either transformed from the spectral type, or taken from independently available measurements. They are listed in Table \ref{tab:params}.
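The retention criterion above amounts to a simple mask (a sketch assuming \teff\ in Kelvin; the function name is illustrative):

```python
import numpy as np

def retain_mask(teff, logg):
    """Keep sources with Teff < 6700 K, or with
    logg < 11.3 * log10(Teff) - 40 (the supergiant cut)."""
    teff = np.asarray(teff, dtype=float)
    logg = np.asarray(logg, dtype=float)
    return (teff < 6700.0) | (logg < 11.3 * np.log10(teff) - 40.0)
```

For example, at \teff=7000 K the threshold is $11.3\times\log_{10}(7000)-40\approx3.45$ dex, so only low-\logg\ (giant and supergiant) sources hotter than 6700 K are retained from APOGEE Net I.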
In addition to these parameters, APOGEE Net I also included [Fe/H] as a predicted parameter. To retain this functionality, we have preserved the [Fe/H] input for low mass stars, and included it if an independent measurement exists. However, as spectra of high mass stars do not have prominent metal lines, requiring this feature would substantially limit the training sample. Thus, we have forced [Fe/H]=0 for all the hot stars where it was unavailable, and we do not report the predicted metallicity of stars with \teff$>$6500 K, to compensate for the inclusion of these spurious values in training.
\begin{figure*}
\epsscale{1.1}
\plottwo{spt.pdf}{lum.pdf}
\caption{Distribution of \teff\ and \logg\, color-coded by luminosity class (left panel) and spectral type (right panel) for sources in the literature in which both sets of these parameters are measured independently. Diamonds are the data, crosses are the computed grid. \label{fig:grid}}
\end{figure*}
\begin{figure}
\epsscale{1.1}
\plotone{train.pdf}
\caption{Distribution of \teff\ and \logg\ in the training sample classes. Yellow dots are the sources from APOGEE Net I, red circles are sources with \teff\ and \logg\ from SIMBAD for hot stars in a parameter space inaccessible to APOGEE Net I. Blue squares show the sources for which \teff\ and \logg\ were transformed from the corresponding spectral type and luminosity class combination.\label{fig:train}}
\end{figure}
\subsection{Model Training}
\subsubsection{Initial experiment setup}
The original trained model from Paper I saw only \teff\ in the range of 3,000--6,700 K; including sources with \teff$>$50,000 K substantially skewed the weights within the network, and led both to decreased performance on cooler stars ($<$6000 K), and to a lack of generalization among high mass stars.
Therefore, we instead trained a new model from scratch, using a similar model architecture and the PyTorch library \citep{pytorch}. In our new model, we converted \teff\ into $\log$ space, and renormalized \logteff, \logg, and [Fe/H] through z-score standardization. That is, given a value $x$, it is standardized as $z = \frac{x - \bar{x}}{S}$, where $\bar{x}$ denotes the mean value of our training set, and $S$ the standard deviation. The actual mean and standard deviation values are reported in Table \ref{tab:ave}.
\begin{deluxetable}{ccc}
\tablecaption{Parameters for standardization
\label{tab:ave}}
\tabletypesize{\scriptsize}
\tablewidth{\linewidth}
\tablehead{
\colhead{Parameter} & \colhead{Average} & \colhead{Standard Deviation}}
\startdata
\logteff & 3.67057 & 0.09177 \\
\logg & 2.92578 & 1.18907 \\
Fe/H & -0.13088 & 0.25669 \\
\enddata
\end{deluxetable}
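Combining the standardization formula with the values in Table \ref{tab:ave}, the transformation can be written as (our sketch, not the pipeline's code):

```python
import numpy as np

# Training-set mean and standard deviation for each label (Table values).
STATS = {
    "logteff": (3.67057, 0.09177),
    "logg":    (2.92578, 1.18907),
    "feh":     (-0.13088, 0.25669),
}

def standardize(label, x):
    """z-score standardization: z = (x - mean) / std."""
    mean, std = STATS[label]
    return (np.asarray(x, dtype=float) - mean) / std

def destandardize(label, z):
    """Invert the standardization to recover physical units."""
    mean, std = STATS[label]
    return np.asarray(z, dtype=float) * std + mean
```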
\begin{deluxetable*}{cc}
\tablecaption{Classifier Hyperparameter Tuning Values
\label{tab:classparams}}
\tabletypesize{\scriptsize}
\tablewidth{\linewidth}
\tablehead{
\colhead{Hyperparameter} & \colhead{Values}}
\startdata
Optimizer & SGD, ASGD, Adam, AdamW, Adamax (Best: Adamax) \\
Learning Rate & 0.005--0.000005 (Best: 0.00075) \\
Dropout & 1\%--80\% (Best: 0.07646)\\
Minibatch Size & 128 (Best: 128)\\
Loss Weighter & KDE, Grid, Linear on \teff, Exponential on \teff\ (Best: KDE) \\
KDE Bandwidth & 0.08--1 (Best: 0.7671) \\
Kernel Size & 3, 5, 7, 11 (Best: 3) \\
Double Conv Channel & True, False (Best: False) \\
Disable Conv Layers & Combination from \{ 2, 4, 6, 8, 10, 12 \} or None (Best: None) \\
Color Model Depth & 0--10 $\in \mathbb{Z}$ (Best: 5) \\
Color Model Width & 8, 16, 32, 64, 128, 256, 512, Varied (Best: Varied) \\
\hline\hline
\enddata
\end{deluxetable*}
The limited number of OB stars in the training data (and even greater scarcity of blue giants and supergiants) poses challenges for training; specifically, there is a risk that the model will prioritize maximizing performance among stars in the dense regions of the \teff-\logg\ space, even if it means sacrificing performance on these relatively rare stars. To penalize the model for ignoring these rare stars, we apply a non-uniform weighting to each star in our training objective. We explored various weighting schemes for our objective function, including gridding the \teff-\logg\ parameter space and weighting stars in each grid cell inversely proportionally to the number of stars in the cell.
Ultimately, we settled upon a weighting scheme using Kernel Density Estimation (KDE) \citep{scott1992}. We used KDE to approximate the density of stars in the 2D standardized \teff-\logg\ space, and weight each star inversely proportionally to its density.
If $d_i$ is the density estimate for a star $i$, we weight the loss on star $i$ with $\min(\frac{c}{d_i}, 5)$. The $1/d_i$ term accomplishes the inversely proportional weighting; the $\min$ function puts a cap on how much we weight any given star; and $c$ is a constant chosen such that the average weight across stars is 1.
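A minimal sketch of this weighting scheme, implemented here with a plain Gaussian KDE in NumPy (the bandwidth and cap follow the values quoted in the text; the implementation details are our illustration, not the pipeline's code):

```python
import numpy as np

def gaussian_kde_density(points, bandwidth):
    """Simple 2-D Gaussian KDE evaluated at the sample points themselves."""
    diff = points[:, None, :] - points[None, :, :]          # (N, N, 2)
    sq = (diff ** 2).sum(axis=-1) / (2.0 * bandwidth ** 2)  # (N, N)
    norm = len(points) * 2.0 * np.pi * bandwidth ** 2
    return np.exp(-sq).sum(axis=1) / norm                   # d_i for each star

def kde_loss_weights(labels_2d, bandwidth=0.7671, cap=5.0):
    """Per-star loss weights inversely proportional to the local density
    of (standardized log Teff, logg), capped, with mean raw weight of 1."""
    density = gaussian_kde_density(np.asarray(labels_2d, dtype=float), bandwidth)
    raw = 1.0 / density
    c = 1.0 / raw.mean()  # scale so the (uncapped) average weight is 1
    return np.minimum(c * raw, cap)
```

Stars in sparse regions of the parameter space (e.g., blue supergiants) thus contribute up to five times more to the loss than an average star.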
Training a neural network relies on particular design decisions and hyperparameters (such as the learning rate, expression for loss, architecture of the model itself, etc.). To improve the performance we tuned our model by performing a hyperparameter sweep with Weights and Biases (Table \ref{tab:classparams}), a developer tool for machine learning \citep[wandb,][]{wandb}. Tuning involves a guided search of the hyperparameter space, repeatedly training models with different hyperparameters and evaluating their resulting performance on a held out validation set.
\subsubsection{Colors}\label{sec:color}
Although the model architecture from \citet{olney2020} that operated only on the spectra could perform well on the full data sample, here we explore adding metadata for each star to further improve the performance. This metadata consists of 2MASS and Gaia EDR3 broadband photometry (G, BP, RP, J, H, K), as well as parallax \citep{2mass,gaia-collaboration2021}.
Temperature-sensitive features can be robustly recovered directly from the spectra themselves, and, as the APOGEE spectra cover the H band, the effects of reddening are not as significant as in the optical. Nonetheless, the spectra cover only a narrow $\sim0.2\mu$m band. Providing colors to the network therefore allows it to infer the shape of the entire SED, and this additional information lets the network tune its predictions more finely.
We feed color information as a 7-dimensional vector into a fully connected deep neural network (DNN) with rectified linear unit (ReLU) element-wise non-linearities after each hidden layer as the activation function, to achieve a non-linear transformation of the data. Afterwards, the output of this DNN is concatenated with the output of the convolutional layers used for the spectra to form a single tensor. This combined tensor is subsequently passed through another DNN to generate the final output (Figure \ref{fig:model}).
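The two-branch architecture described above can be sketched in PyTorch as follows (layer widths and depths here are illustrative placeholders; the tuned model is listed in Appendix \ref{sec:appendix}):

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Spectra pass through convolutional layers; the 7-D metadata vector
    passes through a small fully connected DNN with ReLU activations; the
    two outputs are concatenated and passed through a final DNN head."""

    def __init__(self, n_pix=8575, n_meta=7, n_out=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),   # -> (batch, 16 * 32)
        )
        self.meta = nn.Sequential(
            nn.Linear(n_meta, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 * 32 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_out),                    # (logTeff, logg, [Fe/H])
        )

    def forward(self, spec, meta):
        # spec: (batch, 1, n_pix); meta: (batch, n_meta)
        x = torch.cat([self.conv(spec), self.meta(meta)], dim=1)
        return self.head(x)
```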
\begin{figure}
\epsscale{1.1}
\plotone{AP2_Diagram.pdf}
\caption{Architecture of the neural net model used in this paper. See also Appendix \ref{sec:appendix} for code.\label{fig:model}}
\end{figure}
When tuning the hyperparameters of this model, we also included the number of hidden layers in the DNN (from 0 to 10, where 0 meant there was no color feature branch) and the number of hidden units in each layer.
There are alternative mechanisms one could use to inject the color feature information; we leave an investigation of their relative merits to future work. The final model is presented in Appendix \ref{sec:appendix}.
\subsubsection{Uncertainties}
We generate the uncertainties in the parameters in a manner similar to that described in \citet{olney2020}. We used the uncertainties in the spectral flux that are reported in the DR17 apStar file, and created multiple realizations of the same spectrum. This was done by generating zero-mean, unit-variance random Gaussian noise for each wavelength, multiplying it by the spectral uncertainties, and adding the resulting noise to the spectrum. If the uncertainties in the spectral flux were larger than five times the median uncertainty across all wavelengths, they were clipped to that limit. This was done to prevent bad portions of a spectrum (such as particularly noisy regions, e.g., near the telluric lines) from skewing the weights of the model. Although apStar files contain a pixel mask to identify bad pixels, it was not used, to allow the network to recognize the noise on its own and down-weight it during training. The metadata (colors and parallaxes) were passed as-is without adding variance.
Different realizations of the same spectrum were each independently passed through the model, resulting in slightly different predictions of \teff, \logg, and [Fe/H]. The final reported parameters for each star are the median of 20 such predictions, a number that was found to be representative when compared against an arbitrarily large number of realizations for a few test stars. The uncertainties in these parameters are estimated as the standard deviation of the outputs across the different realizations.
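The procedure can be sketched as follows (a NumPy illustration; `model_predict` stands in for a forward pass through the trained network):

```python
import numpy as np

def predict_with_uncertainty(flux, flux_err, model_predict, n_draws=20, seed=0):
    """Clip flux errors at 5x their median, generate noisy realizations of
    the spectrum, and summarize predictions with a median and std. dev."""
    rng = np.random.default_rng(seed)
    err = np.minimum(flux_err, 5.0 * np.median(flux_err))  # clip bad pixels
    preds = []
    for _ in range(n_draws):
        noisy = flux + rng.standard_normal(flux.shape) * err
        preds.append(model_predict(noisy))
    preds = np.asarray(preds)
    # Median across realizations -> reported value; std -> uncertainty.
    return np.median(preds, axis=0), np.std(preds, axis=0)
```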
\section{Results}
\subsection{Low mass stars}
The training and initial testing on a withheld sample was performed on the APOGEE data reduced with the DR16 version of the pipeline. We then applied the trained model to the APOGEE DR17 data in full. The two sets of parameters between the DR16 and DR17 reductions are typically consistent with each other within $\sim$1.2--1.5$\sigma$.
The resulting distribution of \teff\ and \logg\ is shown in Figure \ref{fig:tefflogg}. Although [Fe/H] are predicted for all stars, we do not report it for sources with \teff$>$6500 K, as the training labels for them are unreliable.
The typical reported uncertainties are 0.03 dex in \logg, 0.002 dex in \logteff\ (30 K for a 6000 K star), and 0.01 dex in [Fe/H], which is largely consistent with the typical uncertainties in APOGEE Net I. On average the reported uncertainties are also comparable to those reported by ASPCAP DR17 for these parameters.
Overall, the parameters for cool stars show good agreement with APOGEE Net I (Figure \ref{fig:onetoone}). Examining the difference in the parameters between the two versions of the pipeline relative to the reported uncertainties, the scatter is typically within 1.5--2$\sigma$ (Figure \ref{fig:onetoone}, right panel). This excess scatter should be treated as a systematic uncertainty in the absolute calibration of the spectral parameters between the two models.
While the derived parameters with APOGEE Net I and II are largely self-consistent, there may be further systematic features that are not necessarily accounted for by the model. Different parts of the parameter space had different means of deriving their initial set of labels. Some of those labels were based on synthetic spectra matched to the data, some were based on theoretical isochrones, and some were based on empirical photometric relations. Such a stitching together of various sets of labels may have caused discontinuities in the parameter space that the model has learned to interpolate across. The physics included in computing the original set of synthetic spectra from which some of the labels were derived may be incomplete - this may introduce systematic offsets in the predicted parameters that are difficult to quantify. This is generally the case with most pipelines deriving parameters for data from large spectroscopic surveys. Nonetheless, the accuracy that can be inferred from the overall parameter distribution is sufficient for a wide variety of applications.
\begin{figure*}
\epsscale{1.1}
\plotone{tefflogg.pdf}
\caption{Derived \teff\ and \logg\ distribution for all $\sim$630,000 stars in the Milky Way in the APOGEE DR17 data. \label{fig:tefflogg}}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\gridline{\fig{logg1-1.pdf}{0.33\textwidth}{}
\fig{teff1-1.pdf}{0.33\textwidth}{}
\fig{distro.pdf}{0.33\textwidth}{}
} \vspace{-0.8cm}
\caption{Comparison of parameters between the training labels and the model predictions, using the reserved test sample that was not used in training the model. Left: Comparison of \logg, color coded by \teff. Middle: Comparison of \teff, color coded by \logg. Right: Difference between the training labels derived from APOGEE Net I for cool stars with \teff$<6500$K and the model predictions for the same stars in this work, divided by the uncertainties measured both by APOGEE Net I and II added in quadrature, to demonstrate the typical magnitude of residuals, in $\sigma$. \label{fig:onetoone}}
\end{figure*}
\subsection{High mass stars}
\begin{figure*}
\epsscale{1.1}
\plottwo{sptmatch.pdf}{lummatch.pdf}
\caption{Left: Comparison between spectral types in literature and the \teff\ measured for the stars in the sample, color coded by the luminosity class. Yellow circles show the sources with spectral types measured directly from the APOGEE spectra, from \citet{roman-lopes2019,roman-lopes2020,ramirez-preciado2020}. Spectral types are formatted to range as O=0-10, B=10-20, A=20-30, etc. Right: distribution of \teff\ and \logg\ color coded by the literature luminosity class. Note that the concentration of the supergiants at \teff$\sim$6000 K and \logg=3.5 is primarily from the Magellanic clouds (Section \ref{sec:mc}). \label{fig:sptmatch}}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\plotone{apogee_teff_comp.pdf}
\plotone{apogee_logg_comp.pdf}
\caption{Top: Example APOGEE spectra for sources with \logteff\ ranging from 4.5 (blue) to 3.6 (red), with a step of 0.1 dex. The spectra are arbitrarily scaled in flux. Some of the temperature sensitive lines that can be used to infer parameters of hot stars are shown in gray, such as the Brackett H lines and He II lines. The gaps in the spectra correspond to the chip gaps. Bottom: Spectra with similar \logteff$\sim$4.1, but with three different \logg.\label{fig:teffgrid}}
\end{figure*}
\begin{figure*}
\epsscale{1.1}
\plottwo{hrteff.pdf}{hrlogg.pdf}
\caption{HR diagram of the APOGEE sample of higher mass stars, color coded by the predicted \teff\ (left) and \logg\ (right). Black arrows show the reddening vector corresponding to 1 $A_V$. \label{fig:hr}}
\end{figure*}
In general, APOGEE Net II matches the performance of APOGEE Net I for low mass stars; the primary improvement in the pipeline is its new ability to generalize the parameters of high mass stars. Indeed, the derived \teff\ are strongly correlated with the previously measured spectral types for the same stars (Figure \ref{fig:sptmatch}), particularly among the main sequence stars. There is some scatter; for the high mass stars this scatter is in part driven by multiplicity. As almost all O stars tend to be multiples \citep{preibisch2001,duchene2013}, it is not uncommon for spectral features of both stars to be detected in the same spectrum. Although spectral types can contain information on both stars (e.g., O5+O9.5), such information is difficult to encode; as such, a comparison is done for only one star in the pair, but the derived \teff\ may favor the second star.
Different wavelength regimes may also be more or less sensitive to the presence of a companion; thus, optical spectra, from which most spectral types have been derived, may not offer a perfect match to the H band APOGEE spectra. Indeed, the correlation between spectral type and the measured \teff\ is particularly strong among B stars whose spectral types were derived directly from the APOGEE spectra \citep{ramirez-preciado2020}, rather than measured from other datasets.
The hot main sequence stars (\teff$>10^4$ K, \logg$>3.7$) are relatively numerous in the sample; they tend to have a well-defined \logg\ as a function of their \teff, and their \teff\ (and the spectral features that correspond to it) vary smoothly as a function of the mass of the star. However, this is not the case for blue giants and blue supergiants. Only a few hundred of these stars with luminosity classes from the literature have been observed. There is also a large gap in \logg\ between the blue giants and blue supergiants (e.g., Figure \ref{fig:grid}, difference between red and green lines); this gap is difficult for a CNN to fully reproduce, especially given a limited sample size owing to the rarity of such objects, resulting in the \logg\ of supergiants being overestimated, placing them closer to the \logg\ distribution of other stars. Finally, luminosity classes at \teff$<8,000$ K become less precisely defined relative to \logg\ (e.g., Figure \ref{fig:grid}). Combined, these effects make it difficult to achieve optimal performance in extracting the stellar parameters of these types of stars. The mismatch in \logg\ for the supergiants between the training labels and those predicted by the model is partially apparent in Figure \ref{fig:onetoone}, but such objects are extremely rare.
However, we note that, qualitatively, the distribution of hot stars in the right panel of Figure \ref{fig:sptmatch}, color coded by their luminosity classes, does show a preference for assigning more luminous classes to the sources with lower \logg\ at a given \teff. Thus, although their \logg\ may be overestimated, it can nonetheless be used to separate stars of different luminosity classes.
Examining the spectra directly, one can sort them into a sequence based on their \teff\ measurements (Figure \ref{fig:teffgrid}, top). While hot stars lack the multitude of spectral features present in cooler stars, the equivalent widths of the H lines that fall into the APOGEE spectrum, combined with various atomic features (for \teff$<$10,000 K), as well as He II absorption (for \teff$>25,000$ K), are effective at determining \teff. Similarly, surface gravity broadening is imprinted on the H lines: while \logg\ is more difficult to measure, dwarfs do appear to have somewhat wider lines than the giants (Figure \ref{fig:teffgrid}, bottom).
Another method of evaluation is via the HR diagram. Figure \ref{fig:hr} shows that the bluer stars are preferentially hotter, and that the more luminous stars tend to have lower \logg. It should be noted that nearby OB stars are too bright to be targeted for spectroscopic observations with the current targeting strategy; as such, hot stars tend to be more distant and more susceptible to reddening due to extinction. Indeed, the gradient of constant \teff\ or \logg\ color coded on the HR diagram does appear to follow the reddening law, and it is possible to deredden the hottest OB stars (\logteff$>4.2$) to the tip of the main sequence.
\begin{figure*}
\epsscale{1.1}
\plotone{obsky.pdf}
\plotone{sgsky.pdf}
\caption{Top: Distribution of sources in the APOGEE DR17 data in galactic coordinates, color coded by the maximum \teff\ of a source along a line of sight. Note that sources with \teff$>$10,000 K tend to be concentrated in the plane, and sources with \teff$>$20,000 K tend to primarily trace young star forming regions. Bottom: distribution of the sources, but limited to only sources with \teff$>$8,000 K, with the symbol size representative of \logg$<$3.5, to highlight the position of blue and yellow giants and supergiants. \label{fig:obsky}}
\end{figure*}
The final method of evaluating the accuracy of the \teff\ and \logg\ determinations for high mass stars relative to the other sources is through examining the distribution of the sources within the Galaxy. As high mass stars have short lifetimes, they are not expected to be located far from their regions of birth. Indeed, the hottest stars (\teff$>20,000$ K) have a very clumpy spatial distribution that almost exclusively traces known star forming regions. Similarly, somewhat cooler stars ($10,000<$\teff$<20,000$ K) are located primarily within the Galactic disk, with the disk scale height increasing with decreasing \teff\ (Figure \ref{fig:obsky}), as the disk scale height depends on the age of the stellar population tracer being employed.
A similar distribution of sources is observed among the blue and yellow supergiants. Although their \teff\ is lower than that of main sequence counterparts of the same mass, selecting sources with, e.g., \teff$>$8,000 K and \logg$<$3.5 also preferentially traces younger parts of the Galaxy.
\subsection{Magellanic clouds}\label{sec:mc}
The APOGEE-2 survey has primarily observed the stars of the Milky Way; as such, APOGEE Net has been trained almost exclusively on Milky Way stars. However, there have been several dedicated programs targeting stars in the Magellanic clouds \citep[MCs,][]{santana2021,zasowski2017,nidever2020}. These galaxies are significantly more metal-poor \citep{nidever2020,hasselquist2021}; as such, the unfamiliar chemistry and other features unique to the MCs may skew some of the weights within the APOGEE Net model, particularly as abundances also affect \teff\ and \logg\ due to various elements playing a significant role in the energy transport in the outer structure of a star. These extreme conditions therefore make the MCs a particularly informative region in which to evaluate the performance of the network.
We select the MC members based on parallax ($<$0.05 mas) and radial velocity ($>$100 km s$^{-1}$). Examining the distribution of \teff\ and \logg\ of these stars does show some surprising features: the red giant branch appears to be trifurcated, in a manner that is not seen in the APOGEE sample of the Milky Way (Figure \ref{fig:mc}). However, there are physical interpretations for this segmentation: the stars within each overdensity in the spectroscopic parameter space do trace different spatial regions in the MCs, and some of them have a particular targeting flag. Furthermore, the relative placement of these segments is mostly consistent with the parameters derived from ASPCAP. However, in ASPCAP the sequences are somewhat closer together, partially overlapping, producing a more continuous gradient. This could be because, in producing a fit, ASPCAP treats all stellar parameters (including \teff, \logg, and all of the elemental abundances) as independent variables, and \teff\ could be skewed by the influence of the opacity contributed by the C and O abundances.
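The membership cut at the start of this subsection can be expressed as (a sketch; the function and argument names are ours):

```python
import numpy as np

def magellanic_members(parallax_mas, rv_kms):
    """Select likely MC members: parallax < 0.05 mas (distant) and
    radial velocity > 100 km/s (consistent with the MCs' systemic motion)."""
    parallax_mas = np.asarray(parallax_mas, dtype=float)
    rv_kms = np.asarray(rv_kms, dtype=float)
    return (parallax_mas < 0.05) & (rv_kms > 100.0)
```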
Along the red giant branch (RGB), the middle of the three overdensities (Figure \ref{fig:mc}, yellow) consists of the common RGB stars and red supergiants (RSG); the location of this branch is consistent with what is observed in the Milky Way.
The hotter branch (Figure \ref{fig:mc}, dark blue) is somewhat more peculiar. Though there are stars in the Milky Way that do fall into that parameter space, they do so solely because they are regular RGB stars with low metallicity; there is no concentration of them at \teff$\sim$5000 K and \logg$\sim$1.8 as there is in the MCs. Their placement on 2MASS color-magnitude diagrams suggests that they are likely to be carbon-rich AGB-C stars \citep{boyer2011,nidever2020}.
The cooler branch (Figure \ref{fig:mc}, green) is particularly unusual, as there are no counterparts to these stars in the APOGEE sample of the Milky Way. However, all of these sources share the same targeting flag of APOGEE2\_WASH\_GIANT, a classification that pre-dates the release of Gaia DR2, performed using the Washington filter system to separate likely dwarfs and giants. Though sources with the WASH\_GIANT flag have been observed from both the northern and southern hemispheres, this peculiar concentration is seen only among the members of the Magellanic clouds. This clump also matches well the variable stars identified by OGLE, usually as long period variables with the subtype of small amplitude red giants \citep{soszynski2009,soszynski2011}. The placement of these stars along the color-magnitude diagram suggests they are oxygen-rich AGB-O stars \citep{boyer2011}.
Stars with \teff$>$6000 K and \logg$>2$ (Figure \ref{fig:mc}, red) appear to be young blue and yellow supergiants, and they are spatially concentrated in the regions of active star formation.
Sources with \logg$\sim$2 and \teff$\sim$6000 K (Figure \ref{fig:mc}, light blue) have a metallicity different from that of the supergiants, closer to solar. Most of these sources have been specifically targeted as Cepheid variables.
\begin{figure*}
\epsscale{1.1}
\gridline{\fig{mc_plain.pdf}{0.31\textwidth}{}
\fig{mc_metal.pdf}{0.34\textwidth}{}
\fig{mc_segmented2.pdf}{0.31\textwidth}{}
} \vspace{-0.8cm}
\gridline{\fig{mc_cmd.pdf}{0.33\textwidth}{}
\fig{mc_aspcap.pdf}{0.33\textwidth}{}
} \vspace{-0.8cm}
\gridline{\fig{mc1.pdf}{0.33\textwidth}{}
\fig{mc2.pdf}{0.33\textwidth}{}
\fig{mc3.pdf}{0.33\textwidth}{}
} \vspace{-0.8cm}
\gridline{
\fig{mc5.pdf}{0.33\textwidth}{}
\fig{mc4.pdf}{0.33\textwidth}{}
} \vspace{-0.8cm}
\caption{Stars observed by APOGEE towards the Magellanic clouds. Top left: Distribution of \teff\ and \logg\ of the likely members (red), superimposed on the Milky Way stars in grayscale background. Top middle: Same as before, only with stars color-coded by their estimated [Fe/H]. Top right: Same as before, but separating the stars into five categories based on the parameter space: red giant branch/red supergiant (RGB/RSG), carbon-rich AGB stars (C-AGB), oxygen-rich AGB stars (O-AGB), blue and yellow supergiants (BSG/YSG), and the Cepheid variables. Second row left: 2MASS color magnitude diagram of the stars that fall into this parameter space. Second row right: ASPCAP parameters for the stars in this parameter space. Although three sequences of RGB/AGB stars overlap, they are nonetheless distinct from one another in this dataset as well. Other panels show the spatial distribution of the stars in these categories. \label{fig:mc}}
\end{figure*}
\section{Conclusions}
We present stellar parameters from the APOGEE spectra derived via a neural network, APOGEE Net. This is the first pipeline for analyzing these data that is capable of extracting stellar parameters for all stars, regardless of their temperature (from $\sim$3,000 K to $\gtrsim$50,000 K, which surpasses the hot limit of 20,000 K reported by ASPCAP in DR17) or surface gravity (from $\sim$0 to $\sim$5 dex), in a self-consistent manner. This includes pre-main sequence stars, main sequence dwarfs, red giants, massive main sequence stars, and blue supergiants.
These parameters do have some dependence on the trained neural network model: although this is less of an issue for common objects, the rarer classes of objects may not fully reach the precise parameter space they are supposed to inhabit; e.g., blue supergiants may have their \logg\ somewhat overestimated (closer to the main sequence) relative to the parameters in the training set. Nonetheless, the network does appear to place stars along a representative sequence in both \teff\ and \logg; as such, these types of stars can still be identified and selected.
In addition to the stars observed in the Milky Way, APOGEE Net appears to achieve adequate performance in regions with less familiar (to the neural network) chemistry, such as the Magellanic clouds. Although it does produce some surprising features in the \teff\ \& \logg\ parameter space (which are also present in other pipelines, such as ASPCAP, but to a lesser degree, possibly due to allowing for independent parameters for C \& O in the fitting process), these features do appear to be physically motivated, identifying distinct classes of objects.
APOGEE DR17 is the final data release produced under SDSS-IV. The next generation of the survey, SDSS-V, intends to obtain spectra of $>6$ million stars, providing a much more homogeneous spectroscopic coverage of the Milky Way \citep{kollmeier2017}, including young stars along the Galactic disk, both low mass and high mass. APOGEE Net and its subsequent iterations provide the means to uniformly infer stellar parameters in these spectra, which in turn makes it possible to analyze the star formation history of the Galaxy.
As more and more spectra are observed, and the census of stars in rarer classes grows, it may be possible to retrain the network to achieve an even better generalization across the entirety of the parameter space. However, the catalog presented here is a substantial step forward in characterizing spectroscopic stellar properties for all stars.
\software{ TOPCAT \citep{topcat}, APOGEE Net II}
\begin{acknowledgments}
KPR acknowledges support from ANID FONDECYT Iniciaci\'on 11201161. AS gratefully acknowledges funding support through Fondecyt Regular
(project code 1180350) and from the ANID BASAL project FB210003. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\end{acknowledgments}
\section{Introduction}
Understanding how protostellar and protoplanetary disks form is of fundamental importance to theories of star and planet formation. Observations show their ubiquity around Class II objects \citep[e.g.,][]{AndrewsWilliams2005}. In recent years, doubt was cast on their accepted formation mechanism, when it was shown that under flux freezing \textit{magnetic braking} is so effective in removing angular momentum from the parent core that large-scale ($\approx 10^{2}~\mathrm{AU}$) disks are suppressed entirely \citep{AllenLiShu2003,MellonLi2008,HennebelleFromang2008}. This scenario held true even when a simplified version of ambipolar diffusion \citep{MellonLi2009} was included in the model, and has been referred to as the \textit{magnetic braking catastrophe}. Recently, \citet{HennebelleCiardi2009} demonstrated that inclination effects can modify the efficiency of magnetic braking, but a mass-to-flux ratio supercritical by a factor $>3-5$ (i.e., a weak magnetic field) was still required to form a large-scale disk. \citet{DuffinPudritz2009} performed three-dimensional simulations with ambipolar diffusion, but only resolved the first core, and did not find Keplerian motion.
Runaway collapse of a prestellar core can effectively trap the magnetic flux in the prestellar phase \citep[e.g.,][]{BasuMouschovias1994}. If the evolution continued to proceed under flux-freezing, a big magnetic flux problem would remain, since the emerging star would hold $10^{3}-10^{5}$ times more magnetic flux than observed in T Tauri stars. At densities $\lesssim 10^{12}~\mathrm{cm}^{-3}$, ambipolar diffusion causes flux leakage, while at even higher densities, matter decouples entirely from the magnetic field, and \textit{Ohmic dissipation} becomes dominant \citep[e.g.,][]{NakanoEtAl2002}. Both effects are revitalized after the formation of a central star \citep{LiMcKee1996, ContopoulosEtAl1998}. Recently, \citet{KrasnopolskyEtAl2010} have shown that for an isothermal core without self-gravity, only an `anomalous' resistivity---a factor of $100$ larger than the canonical level---allows disks of size $10^{2}~\mathrm{AU}$ to form during the Class 0 phase. However, their simulations are dominated by numerical reconnection events that make precise statements about the efficacy of magnetic braking difficult.
Currently, there is no evidence for the presence of centrifugal disks larger than $\approx 50~\mathrm{AU}$ around Class 0 or Class I objects \citep[e.g.,][]{MauryEtAl2010}.
However, there are outflows observed even at these early ages. It is therefore reasonable to assume that disks form at a small scale and only subsequently grow to the larger sizes observed in the Class II phase. We demonstrate the first part explicitly by using a canonical level of Ohmic dissipation alone, and speculate that the combined effects of ambipolar diffusion and Ohmic dissipation will allow for the second part. Additionally, an initially small disk could expand significantly if angular momentum transport is regulated by internal processes \citep[e.g.,][]{Basu1998,VorobyovBasu2007}.
\citet{MachidaEtAl2007} performed three-dimensional simulations of resistive MHD on a nested grid, following the evolution to stellar densities, but were only able to integrate until a few days after stellar core formation. We extend their calculations in a dimensionally-simplified model in order to simultaneously address the magnetic flux problem, integrate further in time, and study the formation of a centrifugal disk. We show that catastrophic magnetic braking can be avoided, and that a small disk forms in a very early phase of evolution.
\section{Method}\label{sec:Method}
We solve the normalized MHD equations in axisymmetric thin-disk geometry \citep[see][]{CiolekMouschovias1993,BasuMouschovias1994}, assuming vertical hydrostatic equilibrium in a vertical one-zone approximation. An integral method for calculating the self-gravity of an infinitesimally-thin disk is used \citep[detailed in][]{CiolekMouschovias1993}, with modifications for the finite extent and finite thickness of the flattened core.
In our model, the magnetic field points solely in the vertical direction inside the disk, but also possesses radial and azimuthal components ($B_{r}$ and $B_{\phi}$) at the disk surfaces and above. $B_{r}$ is determined from a potential field assuming force-free and current-free conditions in the external medium. We calculate $B_{\phi}$ and implement magnetic braking using a steady-state approximation to the transport of Alfv\'en waves in the external medium, as in \citet{BasuMouschovias1994}. Owing to numerical complexity, a calibration of this method with results of three-dimensional MHD wave propagation through a stratified compressible medium has not been done to date. We modify the ideal-MHD induction equation to include Ohmic dissipation:
\begin{equation}
\frac{\partial B_{z, \mathrm{eq}}}{\partial t} +
\frac{1}{r} \frac{\partial}{\partial r} \left( r B_{z, \mathrm{eq}} v _{r}\right)\\
=\frac{\eta}{r} \frac{\partial}{\partial r} \left( r \frac{\partial B_{z, \mathrm{eq}}}{\partial r} \right)
-\frac{\partial \eta}{\partial r} \frac{\partial B_{z, \mathrm{eq}}}{\partial r}.
\label{eq:induction}
\end{equation}
Here, $B _{z, \mathrm{eq}}$ denotes the $z$-component of the magnetic field at the midplane of the disk, and $v _{r}$ is the radial component of the neutral velocity.
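For illustration only, the diffusive right-hand side of Eq.~(\ref{eq:induction}) can be sketched with simple centered differences. The function name and discretization below are ours and are not the production scheme, which is the second-order TVD finite-volume method described below.

```python
import numpy as np

def ohmic_rhs(r, B, eta):
    """Diffusive right-hand side of the induction equation above
    (advection term omitted): (eta/r) d/dr(r dB/dr) - (d eta/dr)(dB/dr).
    Centered differences via np.gradient on a (possibly nonuniform)
    grid; illustrative only."""
    dBdr = np.gradient(B, r)             # dB/dr
    detadr = np.gradient(eta, r)         # d(eta)/dr
    d_rdBdr = np.gradient(r * dBdr, r)   # d/dr (r dB/dr)
    return (eta / r) * d_rdBdr - detadr * dBdr

# Sanity check: for B linear in r and constant eta, the expression
# reduces to eta * (dB/dr) / r.
r = np.linspace(1.0, 2.0, 11)
print(ohmic_rhs(r, 3.0 + 2.0 * r, 5.0 * np.ones_like(r)))  # ~10/r
```

Note that for $\eta$ linear in $r$ and $B$ linear in $r$, the two terms cancel identically, a useful check on any discretization of this operator.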
We use the parametrization of \citet{MachidaEtAl2007} for the resistivity calculated by \citet{NakanoEtAl2002}, with a dimensionless scaling parameter $\widetilde{\eta} _{0}$ whose standard value is unity. The resistivity is then
\begin{multline}
\eta = \widetilde{\eta} _{0}~1.3\times 10^{18}~\left(\frac{n}{10^{12}~\mathrm{cm}^{-3}}\right)\left({\frac{T}{10~\mathrm{K}}}\right)^{1/2}\\
\times\left[1-\tanh \left( \frac{n}{10^{15}~\mathrm{cm}^{-3}}\right)\right] ~\mathrm{cm}^{2}~\mathrm{s}^{-1},
\label{eq:eta}
\end{multline}
where $n$ is the volume number density, and the term in square brackets is a cutoff representing the restoration of flux freezing at high densities. The uncertainties in $\widetilde{\eta} _{0}$ hinge largely on the grain properties \citep[e.g.,][]{MachidaEtAl2007}. Unlike \citet{MachidaEtAl2007}, we do not (inconsistently) pull the resistivity outside all spatial derivatives.
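For concreteness, Eq.~(\ref{eq:eta}) can be evaluated directly. The short Python sketch below (function name ours) illustrates that the resistivity grows linearly with density above $\sim 10^{12}~\mathrm{cm}^{-3}$ and is quenched by the cutoff above $\sim 10^{15}~\mathrm{cm}^{-3}$:

```python
import math

def eta_ohmic(n, T, eta0=1.0):
    """Ohmic resistivity [cm^2 s^-1] from the parametrization quoted
    above: linear in n, proportional to T^(1/2), with a tanh cutoff
    restoring flux freezing above n ~ 1e15 cm^-3."""
    return (eta0 * 1.3e18
            * (n / 1e12)
            * math.sqrt(T / 10.0)
            * (1.0 - math.tanh(n / 1e15)))

# Resistivity becomes dynamically important near n ~ 1e12 cm^-3 ...
print(eta_ohmic(1e12, 10.0))   # ~1.3e18 cm^2/s
# ... and is strongly suppressed again beyond the cutoff:
print(eta_ohmic(1e16, 10.0))   # orders of magnitude below the peak
```

The peak of $\eta$ thus falls in the first-core density range, which is why Ohmic dissipation acts most strongly there.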
For simplicity, we replace the detailed energy equation by a barotropic relation. The temperature-density relation of \citet{MasunagaInutsuka2000} is transformed into a pressure-density relation using the ideal gas law $P=nk_{\mathrm{B}}T$, where $P$ is the pressure, $k_{\mathrm{B}}$ is Boltzmann's constant, and $T$ is the temperature. We calculate the midplane pressure self-consistently, including the effects of the weight of the gas column, constant external pressure ($P_{\mathrm{ext}} = 0.1~\pi G \Sigma _{\mathrm{0}} ^{2}/2$), magnetic pressure, and the extra squeezing added by a central star (once present).
The MHD equations are solved with the method of lines \citep[e.g.,][]{Schiesser1991} using a finite volume approach on an adaptive grid with up to $1024$ radial cells in logarithmic spacing. The smallest cell is initially $10^{-2}~\mathrm{AU}$ and as small as $0.02~\Rsun$ at the highest refinement. We use the second-order van-Leer TVD advection scheme \citep{vanLeer1977}, and calculate all derivatives to second-order accuracy on the nonuniform grid. The code will be described in detail in a forthcoming paper.
\section{Initial conditions and normalization}\label{sec:Initial Conditions}
We assume that our initial state was reached by core contraction preferentially along magnetic field lines \citep[e.g.,][]{FiedlerMouschovias1993} and rotational flattening, and start with initial profiles for the column density and angular velocity given by
\begin{align}
\Sigma \left( r \right) = \frac{\Sigma _{0}}{\sqrt{1+\left(r/R\right)^{2}}},&&
\Omega \left( r \right) = \frac{2 \Omega _{\mathrm{c}}}{\sqrt{1+\left(r/R\right)^{2}}+1}.
\label{eq:initial_conds}
\end{align}
Here, $R\approx 1,500~\mathrm{AU}$ approximately equals the Jeans length at the core's initial central density (see below). The column density profile is representative of the early stage of collapse \citep[e.g.,][]{Basu1997,DappBasu2009}, and the angular velocity profile reflects that the specific angular momentum of any parcel is proportional to the enclosed mass.
We assume an initial profile for $B_{z, \mathrm{eq}}$ such that the normalized mass-to-flux ratio $\mu = 2\pi\sqrt{G}\,\Sigma / B_{z, \mathrm{eq}} = 2$ everywhere, which is the approximate starting point of runaway collapse \citep[e.g.,][]{BasuMouschovias1994}. The radial velocity is initially zero. The initial state is not far from equilibrium, because the pressure gradient and the magnetic and centrifugal forces add up to $\approx 82\%$ of the gravitational force. Our results do not depend strongly on the choice of initial state as long as gravity remains dominant.
The initial central column density and number density are $\Sigma _{0} = 0.23~\mathrm{g}~\mathrm{cm}^{-2}$ and $n _{\mathrm{c}} = 4.4\times 10^{6}~\mathrm{cm}^{-3}$, respectively. The total mass and radius of the core are $2.5~\Msun$ and $1.2 \times 10^{4}~\mathrm{AU}$, respectively. The initial central magnetic field strength is $B_{z, \mathrm{eq}}\approx 200~\mu G$. We choose the external density such that $n _{\mathrm{c}} / n _{\mathrm{ext}} = 500$ (i.e., $n _{\mathrm{ext}} \approx 10^{3}~\mathrm{cm}^{-3}$), and the central angular velocity $\Omega _{\mathrm{c}}$ so that the cloud's edge rotates at a rate of $1~\mathrm{km}~\mathrm{s}^{-1}~\mathrm{pc}^{-1}$, consistent with observations of molecular cloud cores \citep{GoodmanEtAl1993,CaselliEtAl2002}.
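As a consistency check (not part of the simulation code), the quoted central field strength can be reproduced from the initial profiles and the condition $\mu = 2$. The Python sketch below uses our own variable names and assumes the standard normalization in which the critical mass-to-flux ratio is $1/(2\pi\sqrt{G})$:

```python
import math

# Values quoted in the text (cgs units).
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
Sigma0 = 0.23       # central column density [g cm^-2]
R = 1500.0          # core length scale [AU]

def sigma(r_au):
    """Initial column density profile from Eq. (3)."""
    return Sigma0 / math.sqrt(1.0 + (r_au / R)**2)

def omega(r_au, omega_c):
    """Initial angular velocity profile from Eq. (3)."""
    return 2.0 * omega_c / (math.sqrt(1.0 + (r_au / R)**2) + 1.0)

# Central field strength for mu = 2 pi sqrt(G) Sigma / B_z = 2:
B_c = 2.0 * math.pi * math.sqrt(G) * Sigma0 / 2.0
print(B_c * 1e6)    # ~187 (microgauss), consistent with the quoted ~200 muG
```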
\section{Results}\label{sec:Results}
\subsection{Prestellar phase and formation of the second core}\label{subsec:results_prestellar}
During the prestellar phase (for number densities $n< 10^{11}~\mathrm{cm}^{-3}$) the collapse proceeds in a nearly self-similar fashion. We find that---insensitive to initial conditions---the column density is approximately $\propto r^{-1}$ for three orders of magnitude of central enhancement, which corresponds to the volume density being $\propto r^{-2}$ for a central enhancement of $\approx 10^{6}$. This profile is characteristic of a collapsing prestellar core \citep[e.g.,][]{Larson1969}. The collapse proceeds dynamically, and to a good approximation under isothermality, flux-freezing, and without significant magnetic braking \citep{BasuMouschovias1994}.
Once the density reaches $\approx 10^{11}~\mathrm{cm}^{-3}$, the central region becomes opaque and traps the energy released by the collapse, which previously could escape freely as radiation. This region heats up \citep{Larson1969, MasunagaInutsuka2000} and its thermal pressure gradient temporarily stabilizes it against further collapse. This is the \textit{first core}. Its density and temperature increase with continued accretion, while its size stays almost constant at $\approx$ a few AU, bounded by an accretion shock. The external gravitational potential of this object closely resembles that of a point mass, and an expansion wave develops and moves outward at nearly the sound speed \citep{Shu1977}. Material within this region moves at near free-fall speed.
When the temperature in the first core reaches $\approx$ $2000~\mathrm{K}$, for $n \gtrsim 10^{15}~\mathrm{cm}^{-3}$, hydrogen molecules are collisionally dissociated. This process provides an energy sink, so that the temperature rise stagnates, and the collapse reinitiates. As the temperature rises yet further, hydrogen is ionized sufficiently that flux freezing is re-established. Collapse is then finally halted, and sufficiently high densities are reached that electron degeneracy becomes important \citep{MasunagaInutsuka2000}. A protostellar core (the \textit{second core}) forms with a radius $\approx$ a few $\Rsun$ \citep[e.g.,][]{Larson1969}. This Class 0 object initially only has a mass of a few $\times 10^{-3}~\Msun$. The gravitational potential resembles that of a point mass outside the second core, and an expansion wave once again moves outward from the accretion shock, eventually consuming the entire region of the previous first core.
Figure \ref{fig:sigma_omega_mu} shows the profiles of column density, mass-to-flux ratio and angular velocity shortly after the second core forms ($\approx 4.8 \times 10^{4}~\mathrm{yr}$ into the simulation). For $n \gtrsim 10^{12}~\mathrm{cm}^{-3}$, Ohmic dissipation becomes dynamically important \citep{NakanoEtAl2002}, because all charge carriers decouple from the magnetic field, and flux is dissipated. While the density in the first core increases, we find the magnetic field strength remains stagnant. A \textit{magnetic wall} \citep{LiMcKee1996,ContopoulosEtAl1998} forms at $\approx 10~\mathrm{AU}$, visible as a sharp transition in column density in the resistive model ($\widetilde{\eta} _{0} = 1$, top panel). Here, infalling neutrals within the expansion wave are temporarily slowed down by the relatively well-coupled magnetic field that is expelled from the first core with a radius $\approx 1~\mathrm{AU}$. Further inward, the neutrals resume near-free-fall motion, but with enhanced magnetic support and at a greater column density than for flux-freezing ($\widetilde{\eta} _{0} = 0$, dotted line). Under angular momentum conservation (no magnetic braking), the additional rotational support stabilizes the first core against further collapse (top panel, dash-dotted line), consistent with previous findings \citep[e.g.,][]{SaigoTomisaka2006}.
Because of magnetic flux dissipation, the mass-to-flux ratio increases by almost three orders of magnitude in the first core region for $\widetilde{\eta} _{0} = 1$, but by almost two orders of magnitude even for $\widetilde{\eta} _{0}$ as low as $0.01$ (Fig. \ref{fig:sigma_omega_mu}, middle panel). The torque on the cloud caused by magnetic braking scales linearly with the amount of enclosed flux \citep[][]{BasuMouschovias1994}. Ohmic dissipation therefore allows spin-up to proceed, even though the rotation rate is still reduced by a factor of a few outside the first core, compared with the case without magnetic braking (Fig. \ref{fig:sigma_omega_mu}, bottom panel, dash-dotted line). In the flux-freezing case, the comparatively slow evolution of the first core allows enough time for magnetic braking to spin down the first core region, and `catastrophically' brake it (Fig. \ref{fig:sigma_omega_mu}, bottom panel, dotted line).
\begin{figure}
\includegraphics[width=0.9\hsize]{15700fig1.eps}
\centering
\caption{
Spatial profiles of various quantities after the second collapse (after $\approx 4.8 \times 10^{4}~\mathrm{yr}$).
\textbf{Top:} The first and second core and their accretion shocks are at radii
$\approx 1~\mathrm{AU}$ and $\approx 5 \times 10^{-3}~\mathrm{AU}\approx 1~\Rsun$, respectively. Within the expansion wave outside the first core,
the column density profile assumes that of free-fall collapse in the flux-freezing case ($\widetilde{\eta} _{0}=0$), and shows a
magnetic wall in the resistive case. Beyond $\approx 20~\mathrm{AU}$, the prestellar infall profile remains unchanged. Without magnetic braking (dash-dotted line), the first core is larger and rotation prevents further collapse.
\textbf{Middle:} The mass-to-flux ratio is increased by (even weak) Ohmic dissipation by $\gtrsim 10^{2}$.
The influence is significant even well outside the boundary of the first core (at a few AU).
\textbf{Bottom:} For flux-freezing, catastrophic magnetic braking spins down the first core to nearly the background rotation rate. In the resistive case (solid line), the rotation rate outside the first core is reduced only slightly compared with the case without magnetic braking (dash-dotted line).
}
\label{fig:sigma_omega_mu}
\end{figure}
\subsection{Evolution after second core formation}\label{subsec:results_protostellar}
When the second core forms, the thin-disk formulation breaks down, because the object is now truly hydrostatic and spherical. Presumably, dynamo processes within the fully convective protostar will also take over, and the magnetic field will mostly decouple from that of its parent core \citep{MestelLandstreet2005}. Therefore, we switch off magnetic braking in the second core, and introduce a sink cell with a size of $3~\Rsun$, slightly larger than the second core. The processes within it are beyond the scope of our model, but are not expected to significantly influence the surroundings. This is not necessarily the case with a sink cell of size $\approx 10~\mathrm{AU}$, as is the more common approach \citep[e.g.,][]{VorobyovBasu2007,MellonLi2008,MellonLi2009}.
Figure \ref{fig:sigma_vr_acc_disk} shows the profiles of column density, infall velocity, and the ratio of centrifugal to gravitational acceleration about a year after the introduction of the sink cell. Centrifugal balance is achieved in a small region ($\approx 10~\Rsun$) close to the center (bottom panel) in the resistive model. This is a necessary and sufficient condition for the formation of a centrifugally-supported disk. At the same time all infall is halted there and the radial velocity plummets (middle panel). After a few years of evolution, a Toomre instability develops, and the rotationally-supported structure breaks up into a ring (top panel). At this point, we stop the simulation, because more physics would be required to follow the further evolution of the disk. Our model allows a clear distinction between a magnetic pseudo-disk, a flattened (disk-like) prestellar core, and a centrifugal (nearly Keplerian) disk. This distinction is not clear in profiles from three-dimensional simulations \citep{MachidaEtAl2007,DuffinPudritz2009}.
Figure \ref{fig:fieldlines} shows the magnetic field line topology above and below the disk on two scales ($10~\mathrm{AU}$ and $100~\mathrm{AU}$), for both flux-freezing and resistive models. They are calculated immediately after the formation of the second core, assuming force-free and current-free conditions above a thin disk \citep{MestelRay1985}. The split monopole of the $\widetilde{\eta} _{0} = 0$ model (dashed lines) is created as field lines are dragged in by the freely falling material within the expansion wave front at $\approx 20~\mathrm{AU}$. This is replaced by a much more relaxed field line structure in the resistive case (solid lines). The extreme flaring of field lines in the $\widetilde{\eta} _{0} = 0$ model is a fundamental cause of the magnetic braking catastrophe. \citet{GalliEtAl2009} presented similar field configurations resulting from a simplified model for resistive collapse.
\begin{figure}
\includegraphics[width=0.9\hsize]{15700fig2.eps}
\centering
\caption{
Spatial profiles of various quantities $\approx 1~\mathrm{yr}$ after the introduction of a sink cell
of size $\approx 3~\Rsun$.
\textbf{Top:} The Toomre-unstable centrifugally-supported disk breaks up into a ring.
\textbf{Middle:} Infall is halted after the formation of a centrifugal disk at
$\approx 5\times 10^{-2}~\mathrm{AU}\approx 10~\Rsun$ in the resistive case ($\widetilde{\eta} _{0}=1$),
while for flux-freezing ($\widetilde{\eta} _{0}=0$), infall continues.
\textbf{Bottom:} Ratio between centrifugal and gravitational accelerations. The dashed line indicates
rotational balance, achieved within $\approx 10~\Rsun$
with Ohmic dissipation. For flux-freezing, rotational support is negligible in the first core region
owing to the magnetic braking catastrophe.
}
\label{fig:sigma_vr_acc_disk}
\end{figure}
\begin{figure}
\begin{minipage}[b]{0.46\hsize}
\centering
\includegraphics[scale=0.57]{15700fig3a.eps}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.46\hsize}
\centering
\includegraphics[scale=0.45]{15700fig3b.eps}
\end{minipage}
\caption{Magnetic field lines. The box on the left has dimensions $10~\mathrm{AU}$ on each side, while the box on the right has dimensions $100~\mathrm{AU}$. The dashed lines represent the flux-freezing model ($\widetilde{\eta} _{0} = 0$), while the solid lines show the same field lines for the resistive model ($\widetilde{\eta} _{0} = 1$). The second core has just formed and is on the left axis midplane.}
\label{fig:fieldlines}
\end{figure}
\section{Discussion and conclusions}\label{sec:Discussion}
We demonstrate the formation of a centrifugally-supported disk despite the presence of magnetic braking. The magnetic braking catastrophe is averted by including the canonical level of Ohmic dissipation, which removes large amounts of magnetic flux from the high-density region of the first core. In the absence of Ohmic dissipation, this region would be spun down tremendously prior to the second collapse. We emphasize that disk formation happens very shortly after the second collapse in a region very close to the central object, while it is still very small ($<10^{-2}~\Msun$). This is consistent with the observational evidence of outflows at a very young age.
Our simulations yield $\approx 0.1-1~\mathrm{kG}$ magnetic fields, comparable to those observed in T Tauri stars \citep[e.g.,][]{JohnsKrull2007}, in a central object of mass $\approx 10^{-2}~\Msun$. This is achieved by non-ideal MHD effects reducing the field strength by $\approx 10^{3}$ compared to a flux-freezing model. Our model does not have the capability of including outflows or jets, even though those are launched very close to the stellar surface.
There is presently no evidence for centrifugal disks $\gtrsim 50~\mathrm{AU}$ around Class 0 objects \citep[e.g.,][]{AndreEtAl2002, MauryEtAl2010}. ALMA will allow observers to improve on this, and to probe for disks down to $\approx 10~\mathrm{AU}$. We anticipate that the centrifugal disk that forms in our simulations \textit{can} grow over time into disks of size $\approx 100~\mathrm{AU}$ observed around Class II objects.
Recent work \citep{MachidaEtAl2010} shows that magnetic braking can be cut off at late times as the envelope is accreted, and the existing disk can also grow by internal angular momentum redistribution processes \citep[e.g.,][]{VorobyovBasu2007}. Furthermore, we speculate that ambipolar diffusion \citep[e.g.,][]{KunzMouschovias2010} has the potential to dissipate enough flux outside the first core (an area not significantly affected by Ohmic dissipation) to reduce braking and to allow the disk to form there as well. We will present results of a study including both non-ideal MHD effects and grain physics in an upcoming paper.
\section*{Acknowledgments}
The authors thank the participants of the CC2YSO conference for engaging and illuminating discussions. W.B.D. was supported by an NSERC Alexander Graham Bell Canada Graduate Scholarship, and S.B. by an NSERC Discovery Grant.
\section{Introduction}
\label{s1}
Quantum field theory (QFT) on classical
Friedmann-LeMa\^{\i}tre-Robertson-Walker (FLRW) space-times is well
developed and has had remarkable success in accounting for structure
formation in inflationary cosmologies (see, e.g., \cite{mw}). In
this analysis one assumes that the background space-time is
adequately described by classical general relativity. During the
inflationary era, this assumption is reasonable because, e.g., in
the standard scenarios the matter density $\rho$ even at the onset of
inflation is less than $10^{-10}\, \rho_{\rm Pl}$, where $\rho_{\rm Pl}$ is the Planck
density. However, even in an eternal inflation, the underlying
classical space-time has a big bang singularity \cite{bgv}. The
theory is thus incomplete. In particular, the presence of this
singularity makes it awkward to introduce initial conditions, e.g.,
on the quantum state of matter.
To know what really happened in the Planck regime near the
singularity, we need a quantum theory of gravity. While a fully
satisfactory quantum gravity theory is still not available, over
the past 2-3 years, loop quantum cosmology (LQC) has provided a
number of concrete results on this Planck scale physics. (For recent
reviews, see, e.g., \cite{aa-badhonef,aa-saloniki}.) LQC is a
symmetry reduced version of loop quantum gravity (LQG)
\cite{alrev,crbook,ttbook}, a non-perturbative, background
independent approach to the unification of general relativity and
quantum physics. Here, space-time geometry is treated quantum
mechanically from the start. In the symmetry reduced cosmological
models these quantum geometry effects create a new repulsive force
when space-time curvature enters the Planck regime. The force is so
strong that the big bang is replaced by a specific type of quantum
bounce \cite{mb1,abl,aps1,aps2,aps3,acs,cs1,kl}. The force rises
very quickly once $\rho$ exceeds
$\sim 0.01\,\rho_{\rm Pl}$ to cause the bounce but also dies very quickly
after the bounce, once the density falls below this value.
Therefore, the quantum space-time of LQC is very well approximated
by the space-time continuum of general relativity once the curvature
falls below the Planck scale. This scenario is robust in the sense
that it is borne out for k=0 models with or without a cosmological
constant \cite{bp,ap}, k=1 closed models \cite{apsv,warsaw}, and
k=-1 open models%
\footnote{The current treatment of the k=-1 models is not entirely
satisfactory because it regards the extrinsic curvature $K_a^i$ as
a connection and relies on holonomies constructed from it.
However, a closer examination shows that this is not necessary
\cite{awe4}.}
\cite{kv}, as well as the Bianchi I model, which incorporates anisotropies
\cite{awe2}, and the k=0 model with an inflationary potential with
phenomenologically viable parameters \cite{aps4}.
In this paper, we will use the detailed quantum geometries that have
been constructed in LQC for the k=0, $\Lambda =0$, FLRW models with
a massless scalar field as a source. The full physical Hilbert space
of LQC is infinite dimensional. Every physical state undergoes a
quantum bounce in a precise sense \cite{acs}. However, for physical
applications of interest here, we will consider only those states
which are sharply peaked on a classical geometry at \emph{some} late
time and follow their evolution. Surprisingly, LQC predicts that
the dynamics of these states is well approximated by certain `effective
trajectories' \cite{jw,vt} in the gravitational phase space at
\emph{all} times, including the bounce point \cite{aps3,acs}. As one
would expect, this effective trajectory departs sharply from the
solution to Einstein's equation near the bounce. However, it still
defines a smooth space-time metric, whose coefficients now involve
$\hbar$. These quantum corrections are extremely large in the Planck
regime but, as indicated above, die off quickly and the effective
space-time is indistinguishable from
the classical FLRW solution in the low curvature region.%
\footnote{The availability of a singularity free effective
space-time can be extremely useful. For example, it has enabled one
to show that, although Bousso's covariant entropy bound \cite{rb} is
violated very near the singularity in classical general relativity,
it is respected in the quantum space-time of LQC.}
Thus, LQC provides specific, well-defined quantum geometries from
which FLRW space-times emerge away from the Planck scale. At a
fundamental level, one does not have a single classical metric but
rather a probability amplitude for various metrics. So, the
question naturally arises: \emph{How do quantum fields propagate
on these quantum geometries?}
Availability of a satisfactory quantum theory of fields on a quantum
geometry would provide new perspectives in a number of directions.
First, it could provide a coherent theory of structure formation
from first principles.
For example, one may be able to specify the initial conditions
either in the infinite past where quantum space-time is well
approximated by a flat classical geometry, or, at the bounce point
which now replaces the big-bang. Second, the theory is also of
considerable importance from a more general conceptual
perspective. For, it should provide a bridge between quantum
gravity and QFT in curved space-times. What precisely are the
implications of the quantum fluctuations of geometry on the
dynamics of other quantum fields? What, in particular, are the
consequences of light cone fluctuations? Finally, this theory
would lead to a rich variety of new avenues in mathematical
physics. How is the relational time of quantum gravity related to
the more familiar choices of time one makes in QFT in curved
space-times? How do the standard anomalies of QFT on classical
background geometries `lift' up to QFT on quantum geometries? What
precisely are the approximations that enable one to pass from
QFT on quantum geometries to that on classical
geometries?
The purpose of this paper is to provide the first steps to
addressing these important issues. More precisely, we will present
the basics of a framework to describe \emph{test} quantum fields on
the quantum FLRW geometries provided by LQC.
QFT in curved space-times has been developed in two directions. The
first is the more pragmatic approach that cosmologists have
developed to study structure formation, particle creation by given
gravitational fields, and their back reaction on the geometry (see,
e.g., \cite{mw}). Here, one uses the background geometry to make a
mode decomposition and regards the quantum field as an assembly of
oscillators. Typically, one focuses on one mode (or a finite number
of modes) at a time and ignores the difficult functional analytical
issues associated with the fact that the field in fact has an
infinite number of degrees of freedom. The second direction is the
more mathematical, algebraic approach that provides a conceptually
complete home for the subject (see, e.g., \cite{rmw,bsk}). Here the
focus is on the structure of operator algebras, constructed
`covariantly' using the background space-time geometry. States are
treated as suitably regular positive linear functionals on the
algebras. Not only is there no mode decomposition but one does not
tie oneself to any one Hilbert space. Our long range goal is to
generalize both sets of analyses to quantum space-times.
In this paper we will make a beginning by following the more
pragmatic approach: As in the literature on cosmology, we will use
mode decomposition. However, in this analysis our emphasis will be
on conceptual issues. First, in LQC one is led to a relational
dynamics because there is no background space-time. More precisely,
one `deparametrizes' the theory: the massless scalar field $T$
---the matter source in the background space-time--- is treated as
the `evolution parameter' with respect to which the physical degrees
of freedom ---the density, volume, anisotropies and other matter
fields, if any--- evolve. Therefore, in QFT on FLRW quantum
geometries, it is natural to continue to use $T$ as time. In QFT on
classical FLRW space-times, on the other hand, one generally uses
the conformal or proper time as the evolution parameter. We will
resolve this conceptual tension. Second, in the quantum gravity
perspective, dynamics is encoded in the quantum constraint equation.
In QFT on a classical FLRW geometry, on the other hand, dynamics of
the test quantum field is generated by a Hamiltonian. We will show
how this Hamiltonian naturally emerges from the quantum constraint
in a suitable approximation. The analysis is quite intricate because
it involves different notions of time (or, equivalently, lapse
fields) at different stages. Finally we will be able to pin-point
the implications of the quantum fluctuations of geometry on the
dynamics of the test quantum field. This discussion will, in turn,
enable us to spell out the approximations that are essential to pass
from the QFT on a quantum FLRW geometry to that on its classical
counterpart.
The paper is organized as follows. In section \ref{s2} we summarize
key properties of quantum \emph{space-time} geometries that emerge
from LQC and recall the relevant features of QFT on a classical FLRW
background. In section \ref{s3} we introduce the Hamiltonian set-up
to describe test fields on classical and quantum background
geometries and in section \ref{s4} we show how the two are related.
Section \ref{s5} contains a summary and presents the
outlook.\\
\emph{Remark:} Much of the detailed, recent work in LQC assumes that
the matter source is a massless scalar field $T$ which, as we saw,
plays the role of a global, relational time variable. The overall
strategy is flexible enough to allow \emph{additional} matter
fields. The new issues that arise are technical, such as whether the
relevant operators continue to be essentially self-adjoint. However
if, as in the simplest inflationary scenario, there is only a
massive scalar field ---and no massless ones--- one faces new
conceptual issues. In this case the scalar field serves as a good
time variable only `locally'. That is, one has to divide the evolution
into `epochs' or `patches' in each of which the scalar field is
monotonic along dynamical trajectories. The discussion of the
quantum bounce is not much more complicated because the bounce
occurs in a single patch \cite{aps4}. The problem of joining
together these `patches' on the other hand is more complicated.
Although it can be managed in principle (see e.g. \cite{cr-time}),
at present it seems difficult to handle in practice.
\section{Background quantum geometry}
\label{s2}
LQC provides a non-perturbative quantum theory of FLRW
cosmologies. Because it is based on a Hamiltonian treatment, the
relation to the classical FLRW models has been spelled out through
dynamical trajectories in the classical phase space \cite{aps3}.
In particular, the emphasis has been on the relational Dirac
observables, such as the matter density, anisotropies and
curvature at a given value of the scalar field. On the other hand,
quantum field theory on classical FLRW backgrounds is developed on
given classical space-times, rather than on dynamical trajectories
in the phase space of general relativity. Therefore, as a first
step we need to reformulate one of these descriptions using the
paradigm used in the other. In this section, we will recast the
LQC description, emphasizing space-times over phase space
trajectories. Relation to the cosmology literature will then
become more transparent.
We will focus on the k=0, $\Lambda=0$ FLRW models with a massless
scalar field as source. To avoid a discussion of boundary conditions
on test fields in section \ref{s3}, we will assume that the spatial
3-manifold is $\mathbb{T}^3$, a torus with coordinates $x^i \in (0,
\ell)$. It will be clear from our discussion that with appropriate
changes the analysis can be extended to include a cosmological
constant, or anisotropies, or closed k=1 universes.
\subsection{Space-time geometries and phase space trajectories}
\label{s2.1}
In this subsection we will clarify the relation between various
notions of time that feature in LQC and set up a dictionary between
the phase space and space-time descriptions.
Spatial homogeneity and isotropy imply that the space-time metric
has the form:
\begin{equation} \label{g-proper} g_{ab}\,{\rm d} x^a {\rm d} x^b\, = \, -{\rm d} t^2 +
q_{ij}\,{\rm d} x^i {\rm d} x^j \, \equiv \, -{\rm d} t^2 + a^2 {\rm d} {{\vec{x}}}^2 \end{equation}
where $q_{ij}$ is the physical spatial metric and $a$ is the scale
factor. Here the coordinate $t$ is the proper time along the world
lines of observers moving orthogonal to the homogeneous slices.
As explained in section \ref{s1}, in LQC one uses a relational time
defined by a massless scalar field which serves as a matter source.
Because of this and because we will also have a test scalar field
$\varphi$ in section \ref{s3}, we will denote the massless scalar
source by $T$. Since $T$ satisfies the wave equation with respect to
$g_{ab}$, in LQC it is most natural to consider the \emph{harmonic
time coordinate} $\tau$ satisfying $\Box \tau =0$. Then the
space-time metric assumes the form
\begin{equation} \label{g-harmonic} g_{ab}\,{\rm d} x^a {\rm d} x^b \,\, =\,\, -a^6\, {\rm d}
\tau^2 + q_{ij}\,{\rm d} x^i {\rm d} x^j \,\, \equiv\,\, -a^6\, {\rm d}\tau^2 +
a^2 {\rm d} {{\vec{x}}}^2 \end{equation}
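The form of the lapse in (\ref{g-harmonic}) follows from a one-line computation. Writing the metric as $-N^2\,{\rm d}\tau^2 + a^2\, {\rm d}{{\vec{x}}}^2$, harmonicity of $\tau$ gives

```latex
\begin{equation*}
0 \,=\, \Box \tau \,=\, \frac{1}{\sqrt{|g|}}\,\partial_a\!
\big(\sqrt{|g|}\, g^{ab}\partial_b \tau\big)
\,=\, -\frac{1}{N a^3}\,\frac{{\rm d}}{{\rm d}\tau}\Big(\frac{a^3}{N}\Big)\, ,
\end{equation*}
```

whence $N \propto a^3$; the normalization $N_\tau = a^3$ yields (\ref{g-harmonic}).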
Let us now spell out the relation of this space-time metric to the
phase space trajectories. In LQC, the gravitational part of the
phase space is conveniently coordinatized by a canonically
conjugate pair $(\nu,b)$ where $\nu$ is essentially the volume of
the universe and $b$, the Hubble parameter $\dot{a}/a$ (where, as
usual, the `dot' refers to derivative w.r.t. proper time $t$)
\cite{acs,cs1}. More precisely, the volume is given by
\begin{equation} \label{V1} V \equiv \ell^3a^3\, =\, 2\pi \gamma \ell_{\rm Pl}^2 |\nu|\end{equation}
and the Hubble parameter by $\dot{a}/a = b/\gamma$, where $\gamma$
is the so called Barbero-Immirzi parameter of LQG.%
\footnote{Following LQG, in LQC one uses orthonormal frames rather
than metrics. Since these frames can be regarded as `square-roots' of
metrics, the configuration space is doubled. $\nu, b \in {\mathbb{R}}^2$ are
constructed from the orthonormal frame and its time derivative,
and the sign of $\nu$ depends on the orientation of the frame. The
canonical commutation relations are: $[\hat{b}, \, \hat{\nu} ] = 2i$.}
(Its value, $\gamma \approx 0.24$, is fixed by black hole entropy
calculations.) \emph{Throughout this paper, we will pass freely
between $V, \nu$ and the scale factor $a$.}
The canonically conjugate pair for the scalar field is $(T, P_{(\T)})$.
Dynamics is generated by the Hamiltonian constraint, $NC$, where $N$
is the lapse function and $C$ the constraint function:
\begin{equation} \label{C1} C = \frac{P_{(\T)}^2}{2V} - \frac{3}{8\pi G}\,
\frac{b^2}{\gamma^2}\,V \, \approx 0\end{equation}
where, as usual, the weak equality holds on the constraint
hypersurface. If one uses the time coordinate $t$, then it follows
from (\ref{g-proper}) that the lapse is $N_t =1$, while if one uses
$\tau$ (\ref{g-harmonic}) implies that the lapse is $N_\tau = a^3$.
In the second case, the Hamiltonian constraint is:
\begin{equation} \label{hc1} C_\tau := N_\tau C \,\equiv\, \frac{P_{(\T)}^2}{2\ell^3} -
\frac{3}{8\pi G}\, \frac{b^2}{\gamma^2}\, \frac{V^2}{\ell^3}\, ,\end{equation}
whence the time evolution of the scalar field is given by
\begin{equation} \label{T} T = \frac{P_{(\T)}}{\ell^3}\,\, \tau\ . \end{equation}
(Here we have set the integration constant to zero for simplicity).
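Indeed, (\ref{T}) is just Hamilton's equation generated by (\ref{hc1}):

```latex
\begin{equation*}
\frac{{\rm d} T}{{\rm d}\tau} \,=\, \{T,\, C_\tau\} \,=\,
\frac{\partial C_\tau}{\partial P_{(\T)}} \,=\, \frac{P_{(\T)}}{\ell^3}\, ,
\qquad
\frac{{\rm d} P_{(\T)}}{{\rm d}\tau} \,=\, -\frac{\partial C_\tau}{\partial T}
\,=\, 0\, .
\end{equation*}
```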
$P_{(\T)}$ is a constant of motion which, for definiteness, will be
assumed to be positive. Then, as one would expect, in any solution
to the field equations the scalar field $T$ grows linearly in the
harmonic time $\tau$. Thus, although $T$ does not have the
physical dimensions of time, it is a good evolution parameter.
Therefore, following the LQC literature, we will refer to it as the
\emph{relational time} parameter. On any given solution, we can
freely pass from $\tau$ to $T$ and write the space-time metric as:
\begin{equation} \label{g-phi} g_{ab}\, {\rm d} x^a {\rm d} x^b\,\, =\,\, -
\frac{a^6\ell^6}{P_{(\T)}^2}\, \ddT^2 + q_{ij}\, {\rm d} x^i {\rm d} x^j
\,\,\equiv\,\, -\frac{a^6\ell^6}{P_{(\T)}^2}\, \ddT^2 + a^2 {\rm d} {{\vec{x}}}^2 \end{equation}
The only difference from (\ref{g-harmonic}) is that the lapse is
modified: $N_{T} = (\ell^3/{P_{(\T)}})\, N_{\tau}$ whence, in any
\emph{given} space-time, two lapse functions are related just by a
constant. However, in the \emph{phase space}, $P_{(\T)}$ varies from one
dynamical trajectory to another, whence the relation is much more
subtle. If we regard $T$ as a parameter, $\tau$ evolves
non-trivially on the full phase space, and vice versa. In quantum
gravity, we do not have a fixed space-time but a probability
amplitude for various geometries. Therefore, the situation in the
phase space is a better reflection of what happens in the quantum
theory. Indeed, as we will see in section \ref{s3}, the difference
between $\tau$ and $T$ plays a deep role there.
Since the relation between the phase space and space-time notions is
important for our subsequent discussion, we will conclude with a
useful dictionary:
\begin{itemize}
\item A point in the phase space $\, \leftrightarrow\,$ A
homogeneous slice in space-time (i.e., ${\mathbb T}^3\times
{\mathbb{R}}$) equipped with the initial data for the gravitational
and scalar field;
\item A curve in the phase space along which $T$ is monotonic
$\, \leftrightarrow\,$ A metric $g_{ab}$ and a scalar field
$T$ on space-time;
\item A curve in the phase space along which $T$ is monotonic
\emph{and} $P_{(\T)}$ is constant $\, \leftrightarrow\,$ A metric
$g_{ab}$ and a scalar field $T$ satisfying $\Box T =0$ on
space-time;\, and, finally,
\item A dynamical trajectory in the phase space $\,
\leftrightarrow\,$ A solution $(g_{ab}, T)$ to the
Einstein-Klein Gordon equation on space-time.
\end{itemize}
\subsection{Quantum FLRW space-times}
\label{s2.2}
In LQC one first constructs the quantum kinematics for the symmetry
reduced models by faithfully mimicking the unique kinematics of LQG,
selected by the requirement of background independence
\cite{lost,cf}. One then writes the quantum counterpart of the
Hamiltonian constraint (\ref{hc1}) as a self-adjoint operator on the
kinematical Hilbert space:
\begin{equation} \label{hc2} \hat{C}_\tau\, \Psi_o(\nu,T)\, =\, -
\frac{\hbar^2}{2\ell^3}\, \Big(\partial_{T}^2 +
\Theta\Big)\,\Psi_o(\nu,T)\, , \end{equation}
where $\Theta$ turns out to be a difference operator in $\nu$ given
by
\begin{equation} \label{Theta} \Theta \Psi_o (\nu, T) = -\frac{3\pi G}{4\lambda^2}\, \nu \,
\Big[ (\nu + 2\lambda)\Psi_o(\nu+4\lambda, T) - 2\nu \Psi_o(\nu, T) + (\nu
-2\lambda)\Psi_o(\nu-4\lambda, T)\Big]\, . \end{equation}
Here, $\lambda^2 = 4\sqrt{3}\pi\gamma\ell_{\rm Pl}^2 $ is the smallest
non-zero eigenvalue of the LQG area operator (on states relevant to
homogeneity and isotropy) \cite{awe1,awe2,aa-badhonef} and we use
subscript (or superscript) $o$ to emphasize that structures
developed in this section refer to what will serve as the
\emph{background} quantum geometry. Physical states must satisfy
\footnote{Recall from footnote 2 that $\nu \rightarrow -\nu$
corresponds just to change in the orientation of the orthonormal
frame which does not change the metric. Since the theory does not
involve any spinor fields, physics is insensitive to this
orientation. Therefore states must also satisfy $\Psi(\nu,T) =
\Psi(-\nu,T)$.}
\begin{equation} \label{hc3} \hat{C}_\tau \Psi_o(\nu,T) = 0 \, .\end{equation}
A standard `group averaging procedure', which is applicable to a
wide class of constrained systems, then provides the scalar product
enabling us to construct the physical Hilbert space $\mathcal{H}_{\rm phy}^o$. Since
the form of the constraint (\ref{hc2}) resembles the Klein Gordon
equation on a (fictitious) static space-time coordinatized by $\nu,
T$, as one might expect, $\mathcal{H}_{\rm phy}^o$ is built out of `positive frequency
solutions' to (\ref{hc2}). More precisely, $\mathcal{H}_{\rm phy}^o$ consists of
solutions to
\begin{equation} \label{hc4} -i\hbar \partial_T \Psi_o (\nu, T) = \hat{H}_o
\Psi_o(\nu,T)\quad {\rm where} \quad \hat{H}_o = \hbar\sqrt{\Theta}\,
.\end{equation}
with finite norm with respect to the scalar product
\begin{equation}\label{ip1} \langle \Psi_o,\,\Psi_o^\prime\rangle = \frac{\lambda}{\pi}\,
\sum_{\nu = 4n\lambda}\, \frac{1}{|\nu|}\,\bar{\Psi}_o(\nu, T_0)\,
\Psi_o^\prime (\nu, T_0)\, . \end{equation}
where the right side can be evaluated at any internal time $T_0$.
Note that in their $\nu$ dependence physical states have support on
the lattice $\nu = 4n\lambda$, where $n$ ranges over all integers
(except zero). We will generally work in the Schr\"odinger
representation. Then, the states can be regarded as functions
$\Psi_o(\nu)$ of $\nu$ which have finite norm (\ref{ip1}) and which
evolve via (\ref{hc4}). The Hilbert space spanned by these $\Psi_o(\nu)$ will
be denoted by $\mathcal{H}_{\rm geo}$. For later use we note that the
classical expression (\ref{V1}) of volume implies that the volume
operator $\hat{V}$ acts on $\mathcal{H}_{\rm geo}$ simply by multiplication:
\begin{equation} \label{V2} \hat{V}\Psi_o(\nu) = 2\pi \gamma \ell_{\rm Pl}^2 |\nu|
\Psi_o(\nu)\, .\end{equation}
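It is instructive to verify directly that the difference operator (\ref{Theta}) is symmetric with respect to the weighted sum in (\ref{ip1}) and positive, so that $\hat{H}_o = \hbar\sqrt{\Theta}$ in (\ref{hc4}) is well defined. A minimal numerical sketch on a truncated lattice ($\lambda$ and $G$ set to 1 in illustrative units; we use the sLQC form of $\Theta$ with prefactor $-3\pi G/4\lambda^2$ \cite{acs}):

```python
import numpy as np

lam, G = 1.0, 1.0                     # λ and G in illustrative Planck-like units
pref = -3.0*np.pi*G/(4.0*lam**2)      # sLQC prefactor of Θ
N = 60
nu = 4.0*lam*np.arange(1, N + 1)      # truncated lattice ν = 4nλ, n = 1..N

# the difference operator Θ as a matrix on the lattice
Theta = np.zeros((N, N))
for i, v in enumerate(nu):
    Theta[i, i] = pref*v*(-2.0*v)
    if i + 1 < N:
        Theta[i, i + 1] = pref*v*(v + 2.0*lam)
    if i > 0:
        Theta[i, i - 1] = pref*v*(v - 2.0*lam)

# symmetry w.r.t. the weighted sum in (ip1):  Θ_ij / ν_i  =  Θ_ji / ν_j
S = Theta / nu[:, None]
assert np.allclose(S, S.T)

# positivity: diag(ν^{-1/2}) Θ diag(ν^{1/2}) is symmetric with the same
# spectrum, and all its eigenvalues are positive, so ħ√Θ is well defined
Tsym = Theta * np.sqrt(nu[None, :] / nu[:, None])
assert np.linalg.eigvalsh(Tsym).min() > 0.0
```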
Every element $\Psi_o$ of $\mathcal{H}_{\rm phy}^o$ represents a 4-dimensional
quantum geometry. However, to make contact with QFT on classical
FLRW space-times, we are interested only in a subset of these
states which can be described as follows. Choose a classical,
expanding FLRW space-time in which $P_{(\T)} \gg \hbar$ (in the
classical units $G$=$c$=1) and a homogeneous slice at a late time
$T_o$, when the matter density and curvature are negligibly small
compared to the Planck scale. This defines a point $p$ in the
classical phase space. Then, one can introduce coherent states
$\Psi_o (\nu, T_o)$ in $\mathcal{H}_{\rm geo}$ which are sharply peaked at
$p$ \cite{aps2,aps3,acs}. Let us `evolve' them in the internal
time $T$ using (\ref{hc4}). One can show \cite{aps3,acs} that
these states remain sharply peaked on the classical trajectory
passing through $p$ for all $T > T_o$. In the backward
time-evolution, they remain so peaked until the density reaches approximately
$1\%$ of the Planck density. As explained in section \ref{s1},
even in the deep Planck regime the wave function remains sharply
peaked but the peak now follows an effective trajectory which
undergoes a quantum bounce. At the bounce point the matter
density attains a maximum, $\rho_{\rm max} \approx 0.41 \rho_{\rm Pl}$.%
\footnote{The existence of this maximum value does \emph{not} follow
simply from the fact that $|\nu|$ is bounded below by $4\lambda$. Its
origin is more subtle \cite{acs,klp}: $\hat\rho = (1/2)\, \hat{V}^{-1}
\hat{P}_{(T)}^2 \hat{V}^{-1}$ and the maximum value, $0.41\rho_{\rm Pl}$, of
$\langle \hat\rho \rangle$ is the same no matter how large $P_{(\T)} =
\langle\hat{P}_{(T)}\rangle$ is.}
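The number $0.41\,\rho_{\rm Pl}$ can be checked directly: in sLQC one finds $\rho_{\rm max} = 3/(8\pi G\gamma^2\lambda^2)$ \cite{acs} which, with $\lambda^2 = 4\sqrt{3}\pi\gamma\ell_{\rm Pl}^2$, gives $\rho_{\rm max} = (\sqrt{3}/32\pi^2\gamma^3)\,\rho_{\rm Pl}$. A quick numerical check, taking the black-hole-entropy value $\gamma \approx 0.2375$:

```python
import math

# sLQC bound rho_max = 3/(8 pi G gamma^2 lambda^2), with
# lambda^2 = 4 sqrt(3) pi gamma l_Pl^2, expressed in units of rho_Pl
gamma = 0.2375                        # black-hole-entropy value of γ
rho_max = math.sqrt(3.0)/(32.0*math.pi**2*gamma**3)

print(round(rho_max, 2))              # → 0.41
```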
After the bounce the density and the space-time curvature start
decreasing and once the density falls below about $1\%$ of the
Planck density, the effective trajectory becomes essentially
indistinguishable from a classical FLRW trajectory. Although the
effective trajectory cannot be approximated by any classical
solution in a neighborhood of the bounce point, $P_{(\T)}$ is constant
along the entire effective trajectory. The dictionary given at the
end of section \ref{s2.1} then implies that the effective space-time
has a contracting FLRW branch in the past and an expanding FLRW
branch in the future. The scalar field $T$ satisfies $\Box T =0$
everywhere but Einstein's equations break down completely in an
intermediate region. Thanks to the quantum evolution equation
(\ref{hc3}), the two branches are joined in a deterministic fashion
in this region. \emph{By a quantum background geometry, we will mean
a physical state $\Psi_o(\nu,T)$ with these properties.} There is a
large class of such states and our considerations will apply to all
of them.
Of particular interest to us are the volume operators $\hat{V}_{T_0}$
on $\mathcal{H}_{\rm phy}^o$ representing the volume of the universe at any fixed
instant $T_0$ of internal time:
\begin{equation} [\hat{V}_{T_0} \Psi_o](\nu,T)\, =\, e^{(i/\hbar)\hat{H}_o
(T-T_0)}\,\, (2\pi \gamma\ell_{\rm Pl}^2 |\nu|)\,\, e^{-(i/\hbar)\hat{H}_o
(T-T_0)}\,\, \Psi_o(\nu, T)\, . \end{equation}
Thus, the action of $\hat{V}_{T_0}$ on any physical state $\Psi_o(\nu,
T)$ is obtained by evolving that state to $T=T_0$, acting on it
by the volume operator and then evolving the resulting function of
$\nu$ using (\ref{hc4}). Each $\hat{V}_{T_0}$ is a positive definite
self-adjoint operator. Hence one can define any (measurable)
function of $\hat{V}_{T_0}$ ---such as the scale factor
$\hat{a}_{T_0}$--- via its spectral decomposition. Finally, the
matter density operator $\hat\rho_{T_0}$ at time $T=T_0$ is given
by
\begin{equation} \hat\rho_{T_0}\, =\, \frac{1}{2}\,\,\hat{V}^{-1}_{T_0}\,
\hat{P}_{(T)}^2\, \hat{V}^{-1}_{T_0}\, \equiv\,
\frac{\hbar^2}{2}\,\,\hat{V}^{-1}_{T_0}\, \Theta\, \hat{V}^{-1}_{T_0}\,
.\end{equation}
As explained above, in background quantum geometries $\Psi_o(\nu,T)$
considered in this paper, the expectation values of $\hat{\rho}_{T}$
attain their maximum value $\rho_{\rm max} \approx 0.41 \rho_{\rm Pl}$ at the
bounce point.
In the kinematical setting, $\hat{\nu}, \hat{T}, \hat{P}_{(T)}, \Theta$
are independent self-adjoint operators. However, in the passage to
the physical Hilbert space $\mathcal{H}_{\rm phy}^o$ a `de-parametrization' occurs as
in the quantum theory of a parameterized particle (see, e.g.,
\cite{at}). On the physical sector \emph{we no longer have an
operator $\hatT$ but just a parameter $T$} and the operator
$\hat{P}^2_{(T)}$ gets identified with $\hbar^2\Theta$. Consequently,
the space-time metric (\ref{g-phi}) can be represented as a
self-adjoint operator on $\mathcal{H}_{\rm phy}^o$ as follows \cite{klp}:
\begin{equation} \label{g-op} \hat{g}_{ab}\, {\rm d} x^a {\rm d} x^b\,\, =\,\, - :
{\hat{V}^2_{T}}{\hat{H}_o^{-2}}:\,\, \ddT^2 + \hat{q}_{ij}\, {\rm d} x^i {\rm d}
x^j \,\,\equiv\,\, -\, :{\hat{V}^2_{T}}{\hat{H}_o^{-2}}:\,\,\ddT^2 +
\ell^{-2}\,\hat{V}^{2/3}_{T} {\rm d} {{\vec{x}}}^2 \,.\end{equation}
Thus, the geometry is quantum because the metric coefficients
$\hat{g}_{TT}$ and $\hat{q}_{ij}$ are now quantum operators. In
(\ref{g-op}), a suitable factor ordering ---denoted by $:\,\,\, :$
--- has to be chosen because the volume operator $\hat{V}_{T}$ does
not commute with the Hamiltonian $\hat{H}_o$ of the background quantum
theory. The simplest choice would be to use an anti-commutator but
it would be more desirable if the ordering is determined by some
general principles. (Note that $\hat{H}_o^{-2}$ is well-defined
because $\hat{H}_o$ is a positive self-adjoint operator.)
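For instance, the anti-commutator ordering mentioned above would set

```latex
\begin{equation*}
:\hat{V}^2_{T}\,\hat{H}_o^{-2}: \,\,=\,\, \frac{1}{2}\,
\Big(\hat{V}^2_{T}\,\hat{H}_o^{-2} \,+\, \hat{H}_o^{-2}\,\hat{V}^2_{T}\Big)\, ,
\end{equation*}
```

which is manifestly symmetric; other choices differ from it by terms involving the commutator $[\hat{V}_{T},\, \hat{H}_o]$.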
\section{The test quantum field}
\label{s3}
This section is divided into two parts. In the first we summarize
the essential features of QFT on classical FLRW backgrounds in a
language that is well-suited for our generalization to quantum
backgrounds and in the second we carry out the generalization.
\subsection{QFT on classical FLRW backgrounds}
\label{s3.1}
As in section \ref{s2}, let us fix a 4-manifold $M = \mathbb{T}^3
\times {\mathbb{R}}$, equipped with coordinates $x^j \in (0, \ell)$ and $x_0
\in \mathbb R$. Consider on it a FLRW 4-metric $g_{ab}$ given by
\begin{equation} g_{ab} {\rm d} x^a {\rm d} x^b = -N_{x_0}^2(x_0)\, {\rm d} x_0^2\ +\ a^{2}(x_0)
{\rm d} {{\vec{x}}}^2 \, ,\end{equation}
where, as usual, the lapse function $N_{x_0}$ depends on the choice
of time coordinate $x_0$. Consider a real, massive, test Klein
Gordon field $\varphi$ satisfying $(\Box - m^2)\,\varphi =0$ on this
classical space-time $(M,g_{ab})$. Note that $\varphi$ is \emph{not}
required to be homogeneous. Quantum theory of this field can be
described with various degrees of rigor and generality. As explained
in section \ref{s1}, in this paper, we will consider the simplest
version in terms of mode decomposition.
The canonically conjugate pair for the test scalar field consists of
fields $(\varphi, \pi_{(\varphi)})$ on an $x_0 = {\rm const}$ slice. Let us perform
Fourier transforms:
\begin{equation} \varphi(x_j, x_0) = \frac{1}{(2\pi)^{3/2}} \sum_{{{\vec{k}}}\in \mathcal{L}} \,
\varphi_{{\vec{k}}} (x_0)\, e^{i k_jx^j}\quad {\rm and} \quad \pi_{(\varphi)}(x_j, x_0)
= \frac{1}{(2\pi)^{3/2}} \sum_{{\vec{k}}\in \mathcal{L}} \, \pi_{{\vec{k}}}(x_0)\, e^{i
k_jx^j}\, ,\end{equation}
where $\mathcal{L}$ is the 3-dimensional lattice spanned by $(k_1,k_2,k_3)\in
((2\pi/\ell)\,\, \mathbb{Z})^3$, ${\mathbb{Z}}$ being the set of integers.
The Fourier coefficients are canonically conjugate, $\{\varphi_{{\vec{k}}},\,
\pi_{\vec{k'}}\} = \delta_{{\vec{k}},\,-\vec{k'}}$ and, since
$\varphi({\vec{x}},x_0)$ is real, they satisfy the conditions: $\varphi_{{\vec{k}}} =
\bar{\varphi}_{-\vec{k}}$ and $\pi_{{\vec{k}}} = \bar{\pi}_{-\vec{k}}$. The
time dependent Hamiltonian (generating evolution in $x_0$) is given
by:
\begin{eqnarray} H_\varphi (x_0) &=& \frac{1}{2}\, \int \frac{N_{x_0}(x_0)}{a^3(x_0)}\,
\Big[\pi_{(\varphi)}^2 + a^4(x_0) (\partial_i\varphi )^2 + m^2 a^6(x_0)\,
\varphi^2 \Big]\, {\rm d}^3x \nonumber\\
&=& \frac{N_{x_0}(x_0)}{2a^3(x_0)}\, \sum_{{\vec{k}}\in \mathcal{L}}\, \Big[\bar{\pi}_{{\vec{k}}}
\pi_{{\vec{k}}} + ({{\vec{k}}}^2 a^4(x_0) + a^6(x_0) m^2)\,\bar{\varphi}_{{\vec{k}}}
\varphi_{{\vec{k}}}\Big]\, . \end{eqnarray}
In the literature, the test scalar field $\varphi$ is often regarded as
an assembly of harmonic oscillators, one for each mode. To pass to
this description, first note that because of the reality conditions,
the Fourier modes are inter-related. One can find an independent set
by, e.g., considering the sub-lattices $\mathcal{L}^\pm$ of $\mathcal{L}$ as follows:
\begin{eqnarray} \mathcal{L}^+ &=& \{ {{\vec{k}}}: k_3 >0\}\, \cup \{{{\vec{k}}}: k_3 =0, k_2>0\} \cup
\{{\vec{k}}: k_3 =0, k_2=0, k_1>0 \}\quad {\rm and}\nonumber\\
\mathcal{L}^- &=& \{{\vec{k}}: -\vec{k} \in \mathcal{L}^+ \}\, .\end{eqnarray}
Then, for each ${\vec{k}} \in \mathcal{L}^+$, we can introduce real variables
$q_{\pm{\vec{k}}}, p_{\pm{\vec{k}}}$,
\begin{equation} \varphi_{{\vec{k}}} =\frac{1}{\sqrt 2}( q_{{\vec{k}}} + i q_{-\vec{k}}), \quad{\rm
and}\quad
\pi_{{\vec{k}}} = \frac{1}{\sqrt 2} (p_{\vec{k}} + i p_{-\vec{k}}). \end{equation}
The pair $(q_{\pm{\vec{k}}},\, p_{\pm{\vec{k}}})$ is canonically conjugate for
each ${\vec{k}} \in \mathcal{L}^+$. In terms of these variables, the Hamiltonian
becomes
\begin{equation} H_\varphi(x_0) = \frac{N_{x_0}(x_0)}{2a^3(x_0)}\, \sum_{{\vec{k}} \in \mathcal{L} }\,
\Big[{p}^2_{{\vec{k}}} +( {{\vec{k}}}^2 a^4(x_0) + m^2 a^6(x_0))\, {q}^2_{{\vec{k}}}\Big]
\end{equation}
where we have set $q_0:= \varphi_{\vec{k}=0}$ and $p_0 :=
\pi_{{\vec{k}}=0}$. Thus, the Hamiltonian for the test field is the same
as that for an assembly of harmonic oscillators, one for each ${\vec{k}}
\in \mathcal{L}$.
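The passage from the complex modes to the real pairs can be checked on a toy lattice: once the reality conditions are imposed, the quadratic form $\sum_{{\vec{k}}}\bar{\varphi}_{{\vec{k}}}\varphi_{{\vec{k}}}$ equals $\sum_{{\vec{k}}} q_{{\vec{k}}}^2$. A minimal sketch in one dimension (the lattice size and random data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy one-dimensional "lattice" of wave numbers; L+ = {k > 0}, plus k = 0
K = 5
ks = np.arange(-K, K + 1)

# independent real variables q_k (one per lattice point, q_0 for the zero mode)
q = {int(k): rng.normal() for k in ks}

# complex modes: phi_k = (q_k + i q_{-k})/sqrt(2) for k in L+, and the
# reality condition phi_{-k} = conj(phi_k); the zero mode is already real
phi = {0: complex(q[0])}
for k in range(1, K + 1):
    phi[k] = (q[k] + 1j*q[-k]) / np.sqrt(2)
    phi[-k] = np.conj(phi[k])

# the quadratic form is the same in either set of variables
lhs = sum(abs(phi[int(k)])**2 for k in ks)   # Σ_k  conj(phi_k) phi_k
rhs = sum(q[int(k)]**2 for k in ks)          # Σ_k  q_k^2
assert np.isclose(lhs, rhs)
```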
To pass to the quantum theory, let us focus on just one mode
${{\vec{k}}}$. Then we have a single harmonic oscillator. So the Hilbert
space is given by $\mathcal{H}_{{\vec{k}}} = L^2(\mathbb{R})$, the operator
$\hat{q}_{{\vec{k}}}$ acts by multiplication, $\hat{q}_{{\vec{k}}} \psi({q}_{{\vec{k}}}) =
q_{{\vec{k}}} \psi(q_{{\vec{k}}})$ and $\hat{p}_{{\vec{k}}}$ acts by differentiation
$\hat{p}_{{\vec{k}}} \psi (q_{{\vec{k}}}) = -i\hbar {\rm d} \psi/{\rm d} q_{{\vec{k}}}$. The
time evolution is dictated by the time dependent Hamiltonian
operator $\hat{H}_{{\vec{k}}}(x_0)$:
\begin{equation} \label{sch1} i\hbar \partial_{x_0}\psi(q_{{\vec{k}}}, x_0)\, =\,
\hat{H}_{{\vec{k}}}(x_0) \psi(q_{{\vec{k}}}, x_0)\, \equiv\,
\frac{N_{x_0}(x_0)}{2a^3(x_0)}\, \Big[{\hat{p}}_{{\vec{k}}}^2 + ({{\vec{k}}}^2
a^{4}(x_0)+ m^2a^6(x_0)){\hat{q}}_{{\vec{k}}}^2 \Big]\, \psi(q_{{\vec{k}}}, x_0). \end{equation}
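Equation (\ref{sch1}) is a Schr\"odinger equation with a time-dependent frequency and can be integrated mode by mode. A minimal numerical sketch, in illustrative units with $\hbar = 1$ and $x_0$ the proper time $t$ (so $N_{x_0} = 1$ and $a(t) = t^{1/3}$, the classical expanding branch sourced by a massless scalar); Crank--Nicolson keeps the evolution unitary:

```python
import numpy as np

hbar, k2, m = 1.0, 1.0, 1.0            # illustrative units; k2 stands for |k|^2
nq, L = 200, 20.0                      # grid in q for a single mode
q = np.linspace(-L/2, L/2, nq)
dq = q[1] - q[0]

# p^2 = -hbar^2 d^2/dq^2 by central differences (Dirichlet ends)
D2 = (np.diag(-2.0*np.ones(nq)) + np.diag(np.ones(nq - 1), 1)
      + np.diag(np.ones(nq - 1), -1)) / dq**2

def H(t):
    """Mode Hamiltonian of eq. (sch1) in proper time, N = 1, a(t) = t^{1/3}."""
    a = t**(1.0/3.0)
    w2 = k2*a**4 + m**2*a**6
    return (-hbar**2*D2 + np.diag(w2*q**2)) / (2.0*a**3)

# Gaussian initial state, unit norm
psi = np.exp(-q**2/2).astype(complex)
psi /= np.linalg.norm(psi)

# Crank-Nicolson: exactly unitary for a Hermitian H, so the norm is conserved
I, t, dt = np.eye(nq), 1.0, 0.01
for _ in range(100):
    Ht = H(t + dt/2)
    psi = np.linalg.solve(I + 0.5j*dt*Ht/hbar, (I - 0.5j*dt*Ht/hbar) @ psi)
    t += dt

assert abs(np.linalg.norm(psi) - 1.0) < 1e-8
```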
In this theory, there is considerable freedom in choosing the time
coordinate $x_0$ (and hence the lapse function $N_{x_0}$). One
generally chooses $x_0$ to be either the conformal time $\eta$ or
the proper time $t$. However, as we saw in section \ref{s2.2}, in
quantum geometry only the relational time $T$ is a parameter;
$\eta, t$ and even the harmonic time $\tau$ become operators \cite{klp}.
Therefore, in QFT on a quantum geometry, while it is relatively
straightforward to analyze evolution with respect to $T$,
conceptually and technically it is more subtle to describe evolution
with respect to conformal, proper or harmonic time (as it requires
the introduction of conditional probabilities). In the standard QFT
on classical FLRW space-times, on the other hand, $T$ plays no
role; indeed, the source of the background geometry never enters the
discussion. This tension is conceptually significant and needs to be
resolved to relate QFT on classical and quantum FLRW geometries.
\subsection{QFT on quantum FLRW backgrounds}
\label{s3.2}
Recall first that in full general relativity dynamics is generated
by constraints. Our system of interest is general relativity coupled
to a massless scalar field $T$ and a massive scalar field $\varphi$,
where $T$ is spatially homogeneous and $\varphi$ is in general
inhomogeneous but regarded as a \emph{test} field propagating on the
homogeneous, isotropic geometry created by $T$. Therefore, we can
start with the constraint functions on the \emph{full} phase space
of the gravitational field, $T$ and $\varphi$, but impose isotropy and
homogeneity on the gravitational field and $T$ and retain terms
which are at most quadratic in $\varphi$ and $\pi_{(\varphi)}$. The fact that we
are ignoring the back reaction of $\varphi$ on the gravitational field
implies that, among the infinitely many constraints of this theory,
only the zero mode of the scalar constraint is relevant for us. That
is, we need to smear the scalar constraint \emph{only} with
homogeneous lapse functions (and can ignore the Gauss and the vector
constraints). For concreteness, as in section \ref{s2.1}, we will
choose the harmonic time coordinate $\tau$ and the corresponding
lapse function $N_\tau = a^3$. Then, in the truncated theory now
under consideration, the scalar constraint (\ref{C1}) is replaced
by:
\begin{equation} \label{C2} C_\tau \equiv N_\tau C \,=\, \frac{P_{(\T)}^2}{2\ell^3} -
\frac{3}{8\pi G}\, \frac{b^2}{\gamma^2}\,\frac{V^2}{\ell^3} \, + \frac{1}{2}\,\int
[\pi_{(\varphi)}^2+ a^4 (\partial_i \varphi)^2 + m^2 a^6 \varphi^2]\, {\rm d}^3x \approx\,
0 \end{equation}
(Recall that the volume and the scale factor are related by $V=
\ell^3a^3$.) If we focus just on the ${\vec{k}}$th mode, the constraint
simplifies further:
\begin{equation} \label{C3} C_{\tau, {\vec{k}}}\, = \, \frac{P_{(\T)}^2}{2\ell^3} - \frac{3}{8\pi
G}\, \frac{b^2}{\gamma^2}\,\frac{V^2}{\ell^3} \, + H_{\tau,{\vec{k}}}\end{equation}
where
\begin{equation} H_{\tau,{\vec{k}}} = \frac{1}{2}\, [\,{p}_{{\vec{k}}}^2 + ({{\vec{k}}}^2 a^4 + m^2
a^6) q_{{\vec{k}}}^2\,]\end{equation}
In quantum theory, then, physical states $\Psi(\nu, q_{\vec{k}},T)$ must
be annihilated by this constraint, i.e., must satisfy:
\begin{equation} \label{hc5} -\hbar^2\partial_T^2\, \Psi(\nu,q_{\vec{k}},T)\, =\, [\,
\hat{H}_o^2 - 2\ell^3\, \hat{H}_{\tau,{\vec{k}}}\,]\, \Psi(\nu,q_{\vec{k}},T)\, ,
\end{equation}
where as in section \ref{s2.2}, $\hat{H}_o^2 = \hbar^2\Theta$ is the
difference operator defined in (\ref{Theta}). (Although $\hat{a}$ is
an operator, it commutes with $\hat{q}_{{\vec{k}}}$ and $\hat{p}_{{\vec{k}}}$ on the
kinematical Hilbert space. So there are no factor ordering
subtleties in the definition of $\hat{H}_{\tau,{\vec{k}}}$.) As in section
\ref{s2.2}, the construction of the physical inner product requires
us to take the `positive-frequency' square-root of this equation.
More precisely, on the tensor product $\mathcal{H}_{\rm geo}\otimes L^2({\mathbb{R}})$
of the quantum geometry Hilbert space $\mathcal{H}_{\rm geo}$ and the
${\vec{k}}$-mode Hilbert space $L^2({\mathbb{R}})$, the operator $[\,\hat{H}_o^2 -
2\ell^3\, \hat{H}_{\tau,{\vec{k}}}\,]$ on the right hand side of (\ref{hc5})
is symmetric and we assume that it can be made self-adjoint on a
suitable domain. On the physical Hilbert space, this operator gets
identified with $\,\hat{P}^2_{(T)}$. Since classically $P_{(\T)}^2$ is a
positive Dirac observable, we are led to restrict ourselves to the
positive part of the spectrum of $[\,\hat{H}_o^2 - 2\ell^3\,
\hat{H}_{\tau,{\vec{k}}}\,]$ and then solve the evolution equation
\begin{equation} \label{hc6} -i\hbar\, \partial_T\, \Psi(\nu,q_{\vec{k}},T) =
[\hat{H}_o^2 - 2\ell^3\, \hat{H}_{\tau,{\vec{k}}}]^{\frac{1}{2}}\,
\Psi(\nu,q_{\vec{k}},T)\,=: \hat{H} \Psi(\nu, q_{\vec{k}}, T) .\end{equation}
The solutions are in the physical Hilbert space $\mathcal{H}_{\rm phy}$ of the
truncated theory provided they have a finite norm with respect to
the inner product:
\begin{equation}\label{ip2} \langle \Psi_1 |\Psi_2\rangle = \frac{\lambda}{\pi}\, \sum_{\nu
= 4n\lambda}\, \frac{1}{|\nu|}\,\int_{-\infty}^{\infty} {\rm d} q_{{\vec{k}}}\,\,
\bar{\Psi}_1(\nu,q_{\vec{k}}, T_0)\, \Psi_2 (\nu,q_{\vec{k}},T_0)\, \end{equation}
where the right side is evaluated at \emph{any} fixed instant of
internal time $T_0$. As one might expect, the physical observables
of this theory are the Dirac observables of the background geometry
---such as the time dependent density and volume operators
$\hat{\rho}(T)$ and $\hat{V}(T)$--- and observables associated with the
test field, such as the mode operators $\hat{q}_{{\vec{k}}}$ and
$\hat{p}_{{\vec{k}}}$.
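To see how the inner product (\ref{ip2}) is computed in practice, here is a small numerical sketch. It is our own illustration: the lattice truncation and the amplitudes below are arbitrary test data, not solutions of (\ref{hc6}).

```python
import numpy as np

# Toy evaluation of the norm induced by (ip2):
#   ||Psi||^2 = (lam/pi) * sum_{nu = 4 n lam} (1/|nu|) * int dq |Psi(nu, q)|^2
# for a product state Psi(nu, q) = f(nu) g(q); f and g are arbitrary test data.
lam = 1.0
nus = np.array([4.0, 8.0, 12.0])            # nu = 4 n lam, n = 1, 2, 3
f = np.array([1.0, 1.0, 1.0])               # geometry amplitudes

q = np.linspace(-8.0, 8.0, 2001)            # grid for the field mode q_k
dq = q[1] - q[0]
g = np.pi ** -0.25 * np.exp(-q ** 2 / 2.0)  # unit-norm Gaussian in q

q_norm = np.sum(np.abs(g) ** 2) * dq        # ~ 1 (Riemann sum)
norm_sq = (lam / np.pi) * np.sum(np.abs(f) ** 2 / np.abs(nus)) * q_norm
print(norm_sq)          # (1/pi)(1/4 + 1/8 + 1/12) ~ 0.1459
```

For a product state the geometry sum and the $q_{\vec{k}}$ integral factorize, as the computation shows; an entangled state would require the full double sum over the $\nu$-lattice and the $q_{\vec{k}}$ grid.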
Formally, this completes the specification of the quantum theory of
the test field $\hat{\phi}$ on a quantum FLRW background geometry. We
have presented this theory (as well as the QFT on a classical
background in section \ref{s3.1}) using the Schr\"odinger picture
because this is the description one is naturally led to when,
following Dirac, one imposes quantum constraints to select physical
states. However, at the end of the process it is straightforward to
re-express the theory in the Heisenberg picture.\\
\emph{Remark:} In this section we began with the constraint
(\ref{C2}) on the classical phase space spanned by $(\nu,b; T, P_{(\T)};
\varphi, \pi_{(\varphi)})$. Solutions to this theory do include a back reaction of
the field $\varphi$ but just on the homogeneous mode of the classical
geometry. In the final quantum theory, the Hamiltonian of the field
$\varphi$ features on the right side of (\ref{hc6}) whence, in the
Heisenberg picture, it affects the evolution of geometric operators.
As in the classical theory, this evolution incorporates back
reaction of the field $\hat\varphi$ but just on the homogeneous mode of
the quantum geometry. Mathematically, we have a closed system
involving $\hat\nu, \hat\varphi,T$ whence this inclusion of the back
reaction is consistent. However, physically it is not as meaningful
because we have ignored the back reaction at the same order that
would add inhomogeneities to the quantum geometry. So, from a
physical viewpoint, \emph{all} corrections to quantum geometry which
are quadratic in $\hat\varphi$ should be consistently ignored. We will
explicitly impose this restriction in section \ref{s4.3}. However,
the classical theory determined by (\ref{C2}) and the quantum theory
constructed in this section can be directly useful in some
applications where it is meaningful to ignore inhomogeneous metric
perturbations and study the homogeneous mode, including the back
reaction corrections.
\section{Comparison}
\label{s4}
In this section we will compare QFT on a classical background
discussed in section \ref{s3.1} and QFT on quantum FLRW geometries
discussed in section \ref{s3.2}. The discussion is divided into
three sub-sections which provide the successively stronger
simplifications of the dynamical equation (\ref{hc6}) that are
needed to arrive at the dynamical equation (\ref{sch1}) on a
classical FLRW space-time.
\subsection{Simplification of the evolution equation}
\label{s4.1}
Let us begin by using the test field approximation. Since the back
reaction of the scalar field $\varphi$ is neglected, the theory
constructed in section \ref{s3.2} can be physically trusted only on
the sector on which $\hat{H}_o^2$ dominates over ${2\ell^3}\,
\hat{H}_{\tau,{\vec{k}}}$. On this sector, one can expand out the
square-root on the right side of (\ref{hc6}) in a useful fashion.
First let us consider the regime in which the support of $\Psi
(\nu)$ is on $\nu \gg \lambda$. (For semi-classical states of
quantum geometry under consideration, this condition is not a real
restriction.) Furthermore, suppose for a moment that there is a
negative cosmological constant, i.e., $\hat{H}^2_o$ is replaced by
$\hat{H}_\Lambda^2 = \hat{H}_o^2 + C \Lambda \nu^2$, where $C$ is a
positive constant. Then, one can show that Eq. (\ref{hc6}) modified by the
presence of a negative $\Lambda$ can be approximated by:
\begin{equation} \label{expansion} -i\hbar\partial_T\, \Psi(\nu,q_{\vec{k}},T) =
\Big(\hat{H}_\Lambda \,-\, \big(\ell^{-3}
\hat{H}_\Lambda\big)^{-\frac{1}{2}}\,\, \hat{H}_{\tau,{\vec{k}}}\,\,
\big(\ell^{-3}\hat{H}_\Lambda\big)^{-\frac{1}{2}}\Big)\,
\Psi(\nu,q_{\vec{k}},T)\, .\end{equation}
We will assume that the same approximation holds in the
$\Lambda=0$ case,%
\footnote{Thus, introduction of $\Lambda$ at this intermediate stage
is like a `regularization'. Alternatively, one can restrict oneself
to the case where there is a negative cosmological constant from the
beginning.}
i.e., we will use the following simplification of (\ref{hc6}):
\begin{equation} \label{hc7} -i\hbar\partial_T\, \Psi(\nu,q_{\vec{k}},T) =
\Big(\hat{H}_o \,-\, \big(\ell^{-3} \hat{H}_o\big)^{-\frac{1}{2}}\,\,
\hat{H}_{\tau,{\vec{k}}}\,\, \big(\ell^{-3}\hat{H}_o\big)^{-\frac{1}{2}}\Big)\,
\Psi(\nu,q_{\vec{k}},T)\, .\end{equation}
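The content of this step is easiest to see when the operators are replaced by commuting c-numbers: then the right side of (\ref{hc7}) reduces to $H_o - \ell^3 H_{\tau,{\vec{k}}}/H_o$, and the error relative to the exact square root is second order in the test-field Hamiltonian. A quick numerical check (ours) of this scaling:

```python
import math

# For c-numbers, compare sqrt(H0^2 - 2c) with the test-field approximation
# H0 - c/H0, where c stands for l^3 * H_{tau,k}.  The error should scale
# like c^2 / (2 H0^3), i.e. second order in the test field.
def exact(H0, c):
    return math.sqrt(H0 ** 2 - 2.0 * c)

def approx(H0, c):
    return H0 - c / H0

H0 = 100.0
for c in (1.0, 10.0):
    err = approx(H0, c) - exact(H0, c)
    print(c, err, c ** 2 / (2.0 * H0 ** 3))   # err tracks c^2/(2 H0^3)
```

Increasing $c$ tenfold increases the error by roughly a factor of a hundred, which is why the simplification can be trusted only on the sector where $\hat{H}_o^2$ dominates over $2\ell^3 \hat{H}_{\tau,{\vec{k}}}$.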
We will now show that the second term on the right side of
(\ref{hc7}) has a direct interpretation. In the classical theory,
$H_{\tau, {\vec{k}}}$ is the Hamiltonian generating evolution in harmonic
time $\tau$. Since the corresponding lapse function $N_\tau$ is
related to the lapse function $N_T$ corresponding to the relational
time $T$ via $N_T = (\ell^{-3}P_T)^{-1}N_\tau$, the Hamiltonian
generating evolution in $T$ is given by $H_{T, {\vec{k}}} =
(\ell^{-3}P_T)^{-1} H_{\tau, {\vec{k}}} \approx (\ell^{-3}H_o)^{-1}
H_{\tau, {\vec{k}}}$, where in the last step we have again used the test
field approximation. The second term on the right side of
(\ref{hc7}) is \emph{precisely} a specific quantization of $H_{T,
{\vec{k}}}$. This is just as one would physically expect because the left
side of (\ref{hc7}) is the derivative of the quantum state with
respect to $T$. Thus, we can rewrite (\ref{hc7}) as:
\begin{equation} \label{hc8} -i\hbar\partial_T\, \Psi(\nu,q_{\vec{k}},T) =
\big(\hat{H}_o \,-\, \hat{H}_{T,{\vec{k}}}\big)\,\Psi(\nu,q_{\vec{k}},T) \, .\end{equation}
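For completeness, here is the standard one-line argument behind this lapse rescaling (it uses only the form of the constraint (\ref{C3})): the relational time evolves in harmonic time according to ${\rm d}T/{\rm d}\tau = \{T,\, C_{\tau,{\vec{k}}}\} = P_T/\ell^3$, and demanding $N_T\,{\rm d}T = N_\tau\,{\rm d}\tau$ then gives
\begin{equation*}
N_T \,=\, \frac{{\rm d}\tau}{{\rm d}T}\, N_\tau \,=\, \big(\ell^{-3}\,
P_T\big)^{-1}\, N_\tau\, .
\end{equation*}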
The non-triviality lies in the fact that this evolution equation
arose from a systematic quantization of the $(\nu, \varphi, T)$-system
where geometry is also quantum. As in LQC we began with the quantum
constraint operator associated with the harmonic time, and then used
the group averaging procedure to find the physical Hilbert space.
This naturally led us to take a square root of the quantum
constraint and then a simplification which is valid in the test
field approximation automatically provided the extra factor to
rescale the lapse operator just in the right manner to pass from the
harmonic to the relational time. Thus, there is coherence between
the constrained dynamics, various notions of time involved,
deparametrization of the full theory and the test field
approximation.
\subsection{Interaction picture}
\label{s4.2}
The simplified evolution equation (\ref{hc8}) is rather analogous to
the Schr\"odinger equation (\ref{sch1}) in QFT on a classical FLRW
background. However, there are two key differences. First, in
(\ref{hc8}) the background geometry appears through \emph{operators}
$\hat{V}$ and $\hat{H}_o$ while in (\ref{sch1}) it appears through the
\emph{classical} scale factor $a(x_0)$ and (if we set $x_0 =T$) the
constant $\ell^3/P_{(\T)} = N_{T}/a^3$ determined by the momentum of
background scalar field. The fact that there are operators on the
Hilbert space $\mathcal{H}_{\rm geo}$ of quantum geometry in one case and
classical fields on space-time $M$ in the second is not surprising.
But there is also a more subtle, second difference. The operators
$\hat{H}_o$ and $\hat{V}$ which feature on the right side of
(\ref{hc8}) do \emph{not} depend on time:%
\footnote{This also occurs in the classical theory. There, in place
of the Hamiltonian, we have the constraint function $C =
{P_{(\T)}^2}/{2V} - ({3}/{8\pi G})\, ({b^2V}/{\gamma^2})$ on the phase
space. $b,V$ which appear in the expression are determined just by
the point at which $C$ is evaluated; there is no time parameter on
which they could depend! This is in fact the origin of the fact that
$\hat{V}$ and $\hat{H}_o$ in (\ref{hc8}) do not depend on time.}
$\hat{V}\,\Psi(\nu,q_{\vec{k}},T) = 2\pi \gamma \ell_{\rm Pl}^2 |\nu|\,
\Psi(\nu,q_{\vec{k}},T)$ and $\hat{H}_o\, \Psi(\nu,q_{\vec{k}},T) = \hbar
\sqrt{\Theta}\, \Psi(\nu,q_{\vec{k}},T)$. The scale factor $a(x_0)$ that
appears in (\ref{sch1}) on the other hand is explicitly time
dependent. This is because while (\ref{hc7}) provides a quantum
evolution equation for the state $\Psi(\nu,q_{\vec{k}},T)$ that depends on
(the ${\vec{k}}$th mode of) the test field $\varphi$ \emph{and} the quantum
geometry (encoded in $\nu$), (\ref{sch1}) evolves the state
$\psi(q_{\vec{k}},T)$ just of the test scalar field on the given time
dependent background geometry (encoded in $a(x_0)$).
To make the two evolutions comparable, therefore, we need to recast
(\ref{hc8}) in such a way that the test field evolves on a
background, \emph{time-dependent} quantum geometry. This can be
readily achieved by working in the `interaction picture'. More
precisely, it is natural to regard $\hat{H}_o$ in (\ref{hc8}) as the
Hamiltonian of the heavy degree of freedom and $\hat{H}_{T,{\vec{k}}}$ as a
perturbation governing the light degree of freedom and, as in the
interaction picture, set
\begin{equation} \Psi_{\rm int} (\nu,q_{\vec{k}},T) := e^{-(i/\hbar) \hat{H}_o\,
(T-T_0)}\,\Psi (\nu,q_{\vec{k}},T)\, ,\end{equation}
where $T_0$ is any fixed instant of relational time. Then,
(\ref{hc8}) yields the following evolution equation for
$\Psi_{\rm int}$:
\begin{eqnarray} \label{hc9} i\hbar \partial_T\, \Psi_{\rm int}(\nu,q_{\vec{k}},T) &=&
\frac{1}{2}\, \big(\ell^{-3} \hat{H}_o\big)^{-\frac{1}{2}}\, \Big[ p^2_{\vec{k}} +
({{\vec{k}}}^2 \hat{a}^4(T)\, +\, m^2 \hat{a}^6(T)) q_{{\vec{k}}}^2 \Big]
\big(\ell^{-3}
\hat{H}_o\big)^{-\frac{1}{2}}\, \Psi_{\rm int}(\nu,q_{\vec{k}},T)\nonumber\\
&=:& \hat{H}^{\rm int}_{T,{\vec{k}}}\,\, \Psi_{\rm int}(\nu,q_{\vec{k}},T)\, .
\end{eqnarray}
Here the operators $\hat{a}(T)$ (and their powers) are defined on the
Hilbert space $\mathcal{H}_{\rm geo}$ of quantum geometry (now tied to the
internal time $T_0$):
\begin{equation} \hat{a}(T) = e^{-(i/\hbar)\, \hat{H}_o (T-T_0)}\,\, \hat{a}\,\,
e^{(i/\hbar)\, \hat{H}_o (T-T_0)}\, \quad{\rm with}\quad \hat{a}\ =\
\frac{1}{\ell}|\hat{V}|^{\frac{1}{3}} . \end{equation}
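The Heisenberg-picture character of $\hat{a}(T)$ can be checked in a finite-dimensional caricature. In the sketch below (our own toy matrices with $\hbar = 1$; they stand in for $\hat{H}_o$ and $\hat{a}$ only schematically) conjugation by $e^{-iHt}$ rotates each off-diagonal matrix element by the corresponding eigenvalue difference of $H$, while the state itself stays frozen:

```python
import numpy as np

# a(T) = exp(-i H (T - T0)) a exp(+i H (T - T0)), with hbar = 1:
# the state is frozen at T0 while the operator evolves.
H = np.diag([1.0, 2.0])                     # toy stand-in for H_o (Hermitian)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)   # toy stand-in for the scale factor

def evolve(A, H, t):
    w, V = np.linalg.eigh(H)                # H = V diag(w) V^dagger
    U = (V * np.exp(-1j * w * t)) @ V.conj().T
    return U @ A @ U.conj().T

# A(t)_{01} = e^{i (w1 - w0) t} A_{01}, so at t = pi the off-diagonal
# entries flip sign: A(pi) = -A for these matrices.
print(np.round(evolve(A, H, np.pi), 6))
```

Since the evolution is a unitary conjugation, the spectrum of the operator is untouched; this is why freezing the state at $T_0$ while evolving the operators carries the same information as the Schr\"odinger picture.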
Thus, in this `interaction picture' quantum geometry is in effect
described in the Heisenberg picture ---states of quantum geometry
are `frozen' at time $T= T_0$ but the scale factor operators
evolve--- while the test field is described using the Schr\"odinger
picture. Therefore, the quantum evolution equation (\ref{hc9}) is
now even more similar to the Schr\"odinger equation (\ref{sch1}) for
the test field on a classical background. However, the lapse
$\hat{N}_T$ and powers of the scale factor $\hat{a}$ are still
operators on $\mathcal{H}_{\rm geo}$. In the next subsection we will specify
the approximations necessary to reduce (\ref{hc9}) to (\ref{sch1}).
\subsection{Replacing geometric operators by their mean values}
\label{s4.3}
Let us now assume that the state $\Psi_{\rm int}(\nu,q_{\vec{k}},T)$
factorizes as $\Psi_{\rm int}(\nu,q_{\vec{k}},T) = \Psi_o(\nu,T_0) \otimes
\psi(q_{\vec{k}},T)$ where $\Psi_o(\nu,T_0)$ is a quantum geometry state
introduced in section \ref{s2.2}, peaked at an effective LQC
geometry of the $(\nu,\varphi)$-system. This assumption is justified
because $\varphi$ is a test field, i.e., its back reaction is ignored.
Then, (\ref{hc9}) further simplifies as follows
\begin{eqnarray} \label{hc10} \Psi_o(\nu,T_0) \otimes [i\hbar \partial_{T}\,
\psi(q_{\vec{k}}, T)] =\,\, \frac{1}{2}\,\big[ (\ell^{-3} \hat{H}_o)^{-1}\,
\Psi_o(\nu, T_0)\big] \, &\otimes&\, \big[\hat{p}_{{\vec{k}}}^2\,
\psi(q_{\vec{k}},T)\big]\nonumber \\
+\, \frac{1}{2}\,\big[ (\ell^{-3} \hat{H}_o)^{-\frac{1}{2}}\, ({{\vec{k}}}^2
\hat{a}^4(T) + m^2 \hat{a}^6(T))\, (\ell^{-3} \hat{H}_o)^{-\frac{1}{2}}\,
\Psi_o(\nu,T_0)\big]\,&\otimes&\, \big[\hat{q}_{{\vec{k}}}^2\, \psi(q_{\vec{k}},
T)\big]\end{eqnarray}
Let us now suppose that $\Psi_o(\nu, T_0)$ is normalized and take
the scalar product of (\ref{hc10}) with $\Psi_o(\nu,T_0)$. Then, we
obtain:
\begin{eqnarray} \label{sch2} i\hbar\partial_T\, \psi(q_{\vec{k}}, T)
&=&\frac{1}{2}\, \big{\langle}\, (\ell^{-3}\hat{H}_o)^{-1}\big{\rangle}
\,\, \hat{p}_{{\vec{k}}}^2\,\, \psi(q_{\vec{k}}, T)
\,+\, \frac{1}{2}\, \Big[\, {{\vec{k}}}^2\, \big{\langle}\,
(\ell^{-3}\hat{H}_o)^{-\frac{1}{2}} \hat{a}^4(T)
(\ell^{-3}\hat{H}_o)^{-\frac{1}{2}} \big{\rangle}\nonumber\\
&+& m^2 \, \big{\langle}\, (\ell^{-3}\hat{H}_o)^{-\frac{1}{2}}
\hat{a}^6(T)
(\ell^{-3}\hat{H}_o)^{-\frac{1}{2}} \big{\rangle}\Big]\, \hat{q}_{{\vec{k}}}^2\,
\psi(q_{\vec{k}}, T)\, \end{eqnarray}
where $\langle\hat{A}\rangle$ denotes the expectation value of the
operator $\hat{A}$ in the quantum geometry state $\Psi_o$. Thus, in
this equation all geometrical quantities are c-numbers. Nonetheless,
(\ref{sch2}) \emph{is in general different from} (\ref{sch1})
\emph{because expectation values of products of operators do not
equal products of expectation values of operators.} We discuss the
differences and analogies below.
Eq. (\ref{sch2}) tells us how the quantum state of the mode $q_{\vec{k}}$
`evolves', but the background geometry is neither classical nor
quantum in the sense of section \ref{s2.2}. The mode knows about the
background geometry only through the three expectation values that
feature on the right side of (\ref{sch2}). Therefore one is led to
ask if there is an \emph{effective} classical FLRW space-time such
that the Schr\"odinger equation (\ref{sch1}) on it is equivalent to
(\ref{sch2}).
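How good is such a replacement? A toy computation (ours; the lattice, the Gaussian profile and the commuting diagonal `operators' below are illustrative choices, not LQC data) shows the expected behaviour: for a sharply peaked profile the expectation value of a product is very nearly the product of expectation values, while for a broad profile the two differ appreciably.

```python
import numpy as np

# Toy model of the step from (sch2) to (sch3): when is
#   <a^4 h^-1>  well approximated by  <a>^4 / <h>  ?
# The "operators" a(nu) and h(nu) are diagonal on the lattice, so they
# commute and factor ordering plays no role in this illustration.
nu = np.arange(1.0, 4001.0)
a = nu ** (1.0 / 3.0)                       # scale-factor-like observable
h = nu                                      # H_o-like observable

def rel_error(center, width):
    p = np.exp(-0.5 * ((nu - center) / width) ** 2)
    p /= p.sum()                            # |Psi_o|^2 on the lattice
    exact = np.sum(p * a ** 4 / h)          # expectation of the product
    factored = np.sum(p * a) ** 4 / np.sum(p * h)
    return abs(exact - factored) / abs(exact)

print(rel_error(2000.0, 5.0))               # sharply peaked: tiny error
print(rel_error(2000.0, 600.0))             # broad profile: sizeable error
```

The relative error grows like the squared relative dispersion of the profile, which is the quantitative sense in which sharply peaked quantum geometries justify the replacement.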
To address this question, let us begin with the plausible assumption
that the quantum geometry state $\Psi_o$ is sharply peaked at the
expectation values $\bar{P}_{(T)}$ and $\bar{a}$ of $\hat{H}$ and
$\hat{a}$ respectively and, \emph{furthermore}, work in the
approximation in which quantum fluctuations of geometry can be
ignored. A priori this is a very strong simplification but, for
cosmological applications, this approximation can be justified
because the quantum geometries $\Psi_o(\nu,T)$ have incredibly small
dispersions along the entire effective trajectory \cite{cs2}. Then,
(\ref{sch2}) reduces to:
\begin{equation} \label{sch3} i\hbar\partial_T\, \psi(q_{\vec{k}}, T) =
\frac{\bar{N}_T}{2\bar{a}^3}\, \Big[\, {\hat{p}}_{{\vec{k}}}^2 + ({{\vec{k}}}^2
\bar{a}^{4}(T)+ m^2 \bar{a}^6(T)) {\hat{q}}_{{\vec{k}}}^2\, \Big]\,
\psi(q_{{\vec{k}}}, T)\, . \end{equation}
This is exactly the Schr\"odinger equation (\ref{sch1}) governing
the dynamics of the test quantum field on a classical space-time
with scale factor $\bar{a}$ containing a massless scalar field $T$
with momentum $\bar{P}_{(T)} = \bar{a}^3\ell^3/\bar{N}_T$. This is
the precise sense in which the dynamics of a test quantum field on a
classical background emerges from a more complete QFT on quantum
FLRW backgrounds. Note however that, even with this strong
simplification, the classical space-time is \emph{not} a FLRW
solution of the Einstein-Klein-Gordon equation. Rather, it is the
effective space-time $(M, \bar{g}_{ab})$ \`a la LQC on which the
quantum geometry $\Psi_o(\nu,T)$ is sharply peaked. But as discussed
in sections \ref{s1} and \ref{s2.1}, away from the Planck regime,
$(M, \bar{g}_{ab})$ is extremely well-approximated by a classical
FLRW space-time $(M, g^o_{ab})$. Thus, starting from quantum
geometry and making a series of well-motivated approximations, we
have arrived at a QFT of a test field $\varphi$ which is a non-trivial
extension of the QFT on a standard $(M, g^o_{ab})$. It has the same
structure as the standard theory but is defined on a much larger
space-time in which the big bang is replaced by a quantum bounce and
there is an infinite pre-big-bang branch. Therefore, although the theory
developed in this section describes a test quantum field $\hat{\varphi}$
on classical backgrounds and approximates the standard QFT on
classical FLRW geometries at late times, it also contains a lot of
new physics, particularly in the Planck regime around the bounce.
Next, it is interesting to return to the equation (\ref{sch2}) and
\emph{not} make additional simplifications. One can still ask if
there is a classical metric tensor
\begin{equation} g^\prime_{ab}dx^adx^b\ =\ -{N^\prime}^2(T)\, {\rm d}T^2 +
{a^\prime}^2(T)\, {\rm d}\vec{x}^2\end{equation}
such that (\ref{sch2}) agrees with the Schr\"odinger equation
(\ref{sch1}) on $(M,g^\prime_{ab})$. For this agreement to hold, the
scale factor $a^\prime(T)$ and the lapse function $N^\prime(T)$
should satisfy the following system of equations:
\begin{align} N^\prime(T)\ &=\ \ell^3 {a^\prime}^3(T)\big{\langle}\,
\hat{H}_o^{-1}\big{\rangle}\\
N^\prime(T)a^\prime(T)\ &=\ \ell^3 \big{\langle}\, \hat{H}_o^{-\frac{1}{2}}
\hat{a}^4(T) \hat{H}_o^{-\frac{1}{2}} \big{\rangle}\\
m^2\,N^\prime(T){a^\prime}^3(T)\ &=\ m^2\,\ell^3 \big{\langle}\,
\hat{H}_o^{-\frac{1}{2}} \hat{a}^6(T)\, \hat{H}_o^{-\frac{1}{2}}
\big{\rangle}.
\end{align}
In the case when the test field is massless, the third equation
disappears and there is clearly a solution
$(N^\prime(T),\,a^\prime(T))$. But note that the interpretation of
(\ref{sch2}) as the evolution equation for $\psi(q_{\vec{k}}, T)$ on the
classical space-time $(M, g^\prime_{ab})$ is not entirely
satisfactory because, if the quantum geometry state is sharply
peaked at $\langle\hat{a}\rangle = \bar{a}$ and $\langle
\hat{P}_{(T)}\rangle = \bar{P}_{(T)}$, then $a^\prime(T)\,\not=
\bar{a}(T)$ and $N^\prime(T)\, \not=\,
\ell^3\,{\bar{a}^3}/{\bar{P}_{(T)}}$. Thus, deductions about the
quantum geometry made from the dynamics of the test scalar field
would be different from those made by observing the geometry
directly, e.g., from the measurement of the Hubble parameter or of
the volume at the bounce point. Finally, when the test scalar field
$\varphi$ has mass and the quantum geometry fluctuations are not
negligible, the dynamics of the test field given by (\ref{sch2})
cannot be interpreted as dynamics of the test field on \emph{any}
classical FLRW background.
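For reference, eliminating $N^\prime(T)$ between the first two of these conditions gives the explicit massless-case solution
\begin{equation*}
{a^\prime}^4(T) \;=\; \frac{\big{\langle}\, \hat{H}_o^{-\frac{1}{2}}\,
\hat{a}^4(T)\, \hat{H}_o^{-\frac{1}{2}}\, \big{\rangle}}{\big{\langle}\,
\hat{H}_o^{-1}\, \big{\rangle}}\, , \qquad
N^\prime(T) \;=\; \ell^3\, {a^\prime}^3(T)\, \big{\langle}\,
\hat{H}_o^{-1}\, \big{\rangle}\, ,
\end{equation*}
so the mismatch with $(\bar{a},\, \bar{N}_T)$ noted above is governed entirely by the fluctuations of $\hat{H}_o$ and $\hat{a}$ in the state $\Psi_o$.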
\section{Discussion}
\label{s5}
Consider QFT of a massive, test, scalar field $\hat\varphi$ on a classical
FLRW space-time $(M, g_{ab}^o)$ with a massless scalar field $T$ as
its matter source. Our main goal was to derive this theory from that
of the scalar field $\hat\varphi$ on a quantum geometry $\Psi_o(\nu,T)$ that
replaces $(M, g_{ab}^o)$ in LQC. Conceptually the two theories are
quite distinct:
\begin{itemize}
\item They use very different notions of time. In particular,
the conformal time $\eta$ and the proper time $t$ used in
the first are non-trivial operators in the second
\cite{klp};
\item In the first, dynamics is generated by a Hamiltonian while
in the second, it has to be teased out of a constraint;
\item In the first, there is a fixed classical metric $g^o_{ab}$
in the background which is used repeatedly in the
construction of the QFT, while in the second there is only a
probability distribution for various metrics encoded in
$\Psi_o(\nu,T)$; and
\item While in the first theory the scale factor $a$
is a given function on $M$, in the second theory we are
confronted with quantum fluctuations of (different powers
of) the operator $\hat{a}$.
\end{itemize}
Our first task was to set up an appropriate framework to explore the
relation between the two theories in detail.
To construct the second of these theories, in section \ref{s3} we
began with the constrained quantum system for the gravitational
field coupled with the scalar fields $T$ and $\varphi$ but made
simplifications to encode the idea that the space-time geometry and
$T$ are homogeneous and $\varphi$ is (inhomogeneous but) a test field
whose back reaction is ignored. This theory was de-parameterized by
singling out $T$ as the relational time variable with respect to
which the gravitational field and $\varphi$ evolve. The states of the
coupled system are then functions $\Psi(\nu,\varphi,T)$ of volume $\nu$
(or, equivalently, the scale factor) of the universe, the massive
test field $\varphi$ and the massless scalar field $T$. We found that
their inner product is given by (\ref{ip2}) and their dynamics is
governed by the Schr\"odinger equation (\ref{hc6}). Thus, a quantum
theory of the test field $\varphi$ on quantum geometries could be
constructed although we do not have a fixed classical metric or a
fixed causal structure in the background.
In section \ref{s4} we made successive approximations to simplify
(\ref{hc6}), all of which are well-motivated by the set-up of the
problem:
\begin{itemize}
\item We regarded variables $(\nu,T)$ which provide the
background geometry as the heavy degree of freedom and the
test field $\varphi$ as the light degree to simplify the
Hamiltonian operator in (\ref{hc6});
\item We assumed that the state $\Psi(\nu,\varphi,T)$ can be
expanded as $\Psi(\nu,\varphi,T) = \Psi_o(\nu, T)\otimes
\psi(\varphi, T)$ where $\Psi_o(\nu,T)$ is the quantum geometry
that replaces the classical FLRW space-time in LQC, and took
the scalar product of the evolution equation (\ref{hc6})
w.r.t. the quantum geometry state $\Psi_o(\nu,T)$ to obtain
an evolution equation for $\psi(\varphi)$.
\item To simplify this equation for $\psi(\varphi)$, we ignored the
quantum fluctuations of geometry by replacing the
expectation values of products of geometrical operators by
products of their expectation values. The result was the
standard Schr\"odinger equation (\ref{sch3}) for a test
field $\varphi$ on a classical background.
\end{itemize}
However, equation (\ref{sch3}) has two non-standard features. First,
the classical background is \emph{not} a FLRW space-time
$(M,g^o_{ab})$ but rather an effective space-time $(M,
\bar{g}_{ab})$ on which the LQC state $\Psi_o(\nu,T)$ is sharply
peaked. Second, the Schr\"odinger equation naturally arises with
$T$ as the time variable. This is unusual from the perspective of
QFT on classical backgrounds because $T$ is the massless scalar
field that acts as the source of the gravitational field while QFT
on classical backgrounds, as normally formulated, has no knowledge
of the source. Rather, the time variables that are normally used
---the conformal time $\eta$ or the proper time $t$--- arise
directly from the metric $g^o_{ab}$. However, from the perspective
of quantum geometry, these are unnatural because while $T$ is a
parameter in that theory, as we noted above, $\eta$ and $t$ are not;
they get promoted to operators. Of course, once we have arrived at
the `lower' theory ---i.e., QFT on the classical space-time $(M,
\bar{g}_{ab})$--- it is straightforward to reformulate dynamics in
terms of either $\eta$ or $t$. But at a more fundamental level,
\emph{it is the relational time $T$ that appears to be the natural
time parameter.} Finally, let us return to the first difference. The
effective space-time $(M, \bar{g}_{ab})$ is a non-trivial extension
of the FLRW solution $(M, g^o_{ab})$ in which the big bang is
replaced by a bounce and there is an infinite pre-big-bang branch.
However, FLRW solutions $(M, {g}^o_{ab})$ are excellent
approximations to the effective space-times $(M,\bar{g}_{ab})$ in
the expanding, post-big-bang branch \emph{away from the Planck
regime.} Furthermore, our QFT on effective space-times does reduce
to the standard one on FLRW solutions when the space-time curvature
is smaller than the Planck scale. Moreover, it provides a physically
interesting extension near and to the past of the big bounce.
Because $(M,\bar{g}_{ab})$ is non-singular, this theory opens a new
window on the Planck scale physics which was inaccessible to QFT on
classical FLRW solutions.
Thus, in this paper we have laid down foundations for further work
with applications to cosmology as well as mathematical physics. We
will conclude by indicating directions that are being currently
pursued. First, we need to include the back reaction of $\varphi$ on
geometry, treating it as a perturbation. As far as the homogeneous
mode of the gravitational field is concerned, this is already
achieved in the evolution equation (\ref{hc6}) (see the remark at
the end of section \ref{s3.2}). Inclusion of inhomogeneous
gravitational perturbations remains an open issue. Second, we have
to analyze the quantum dynamics of the gauge invariant combinations
$\Phi$ of $\varphi$ and the scalar perturbations of the metric. Here the
important step is to construct the Mukhanov variable $\Phi$ starting
from the full quantum constraint. Existing literature on
cosmological perturbations in the LQG setting \cite{dt,ghtw} is
likely to be directly useful in this task. The mathematical theory
of propagation of $\Phi$ on the quantum background geometry
$\Psi_o(\nu,T)$ would be rather similar to that of $\varphi$ analyzed in
this paper. Third, we have to account for the origin of the massless
scalar field $T$ which plays the role of time for us. It seems most
natural to have a single scalar field $\Phi$, the homogeneous mode
of which would provide the relational time parameter $T$ and the
inhomogeneous modes, the physical perturbations that lead to
structure formation. This seems feasible. However, it is likely that
the resulting relational time will not be global. Thus, as remarked
at the end of section \ref{s1}, the analysis in quantum geometry may
have to be divided into `epochs' in each of which the homogeneous
part of $\Phi$ will serve as a relational time variable. If these
three steps can be carried out to completion, we will have a
coherent framework to analyze cosmological perturbations and
structure formation which is free from the limitations of a big bang
singularity. In particular, one will then be able to evolve
perturbations across the big bounce and study phenomenological
implications. Immediately after the big bounce, there is a short
epoch of super-inflation in LQC (see
\cite{mb2} and especially \cite{ps}). The possibility that
ramifications of this sudden and very rapid expansion may be
observable has drawn considerable attention of cosmologists
recently. A more complete QFT on quantum geometries will provide a
systematic avenue to analyze these issues.
The second direction for further work is motivated by mathematical
physics (although it too has some implications to cosmology). In
this paper we focused on a single mode of the scalar field $\varphi$.
Inclusion of a finite number of modes is completely straightforward.
Inclusion of all modes, on the other hand, involves functional
analytic subtleties. Recall, however, that in quantum geometry, the
volume operator has a non-zero minimum value, $2\pi \gamma \ell_{\rm Pl}^2
|\nu|_{\rm min} = 8\pi\gamma\lambda \ell_{\rm Pl}^2$. Therefore, in a certain
sense there is a built-in ultra-violet cut-off. A careful
examination may well reveal that this cut-off descends to the test
scalar field $\varphi$, in which case $\varphi$ would have only a finite
number of modes and the treatment presented here will suffice.
However, if this possibility is not realized, one would have to
resolve the functional analytical difficulties. Our first task is to
address these issues. Second, a number of ideas related to the
algebraic approach are being explored. This approach can be applied
directly to the effective space-times $(M, \bar{g}_{ab})$ that
emerge from LQC. What can one say about the (regularized)
stress-tensor of $\varphi$ and its back reaction on the geometry? Is
there a sense in which the Schr\"odinger equation (\ref{hc6})
already includes these effects? More importantly, can one extend the
algebraic approach systematically to cosmological \emph{quantum}
geometries? At first the extension seems very difficult, if not
impossible, because so many of the structures normally used in the
algebraic approach to QFT on classical space-times use the fact that
we have access to a \emph{fixed} space-time metric. However, in the
cosmological context, additional structures ---such as a preferred
foliation--- naturally become available and they enable one to
construct the required $\star$-algebras of field operators in the
canonical setting. Also, the background quantum geometries
$\Psi_o(\nu,T)$ are rather well-controlled and one may be able to
use the fact that they are extremely sharply peaked around effective
space-times \cite{cs2}. Can one exploit this setting to introduce
the analogs of Hadamard states? We believe that such generalizations
are now within reach.
\section*{Acknowledgments} We have profited from discussions with
Alejandro Corichi, Klaus Fredenhagen, Tomasz Pawlowski and Param
Singh. This work was supported in part by the NSF grants
PHY0456913 and PHY0854743, the Polish Ministerstwo Nauki i
Szkolnictwa Wyzszego grants 1 P03B 075 29, 182/N-QGG/2008/0 and
the 2007-2010 research project N202 n081 32/1844, the Foundation
for Polish Science grant ``Master'', The George A. and Margaret M.
Downsbrough Endowment and the Eberly research funds of Penn State.
\section{Introduction}
The development of high technologies contributes not only
to the growth of corporations and world economies but also
to the development of fraud, which leads to losses of billions
of dollars every year around the world.
In 2018, eight Indian banks incurred \textdollar1.3 billion in
losses in a fraud case involving Kingfisher Airlines founder
Vijay Mallya\footnote{\url{https://www.theguardian.com/world/2020/apr/20/kingfisher-airlinestycoon-vijay-mallya-loses-appeal-extradition-india}}. In another case, the Agricultural Bank of
China faced losses of
\begin{figure}[ht]
\centering
\vspace{4.5 ex}
\includegraphics[width=\linewidth]{fig_1.png}
\caption{Convolutional neural network architectures designed for fraud detection: a) 1D-CNN, b) F-DenseNet, c) SpiderNet}
\Description{Convolutional neural network architectures designed for fraud detection.}
\end{figure}
\noindent\textdollar497 million after being defrauded by employees of billionaire Guo Wengui\footnote{\url{https://www.reuters.com/article/us-china-corruption-tycoonidUSKBN1900DL}}.
Hacker attacks are another global problem. In 2019, the
FBI issued an official announcement that global losses from
fraudulent Business Email Compromise (BEC) reached \textdollar26
billion during the period from June 2016 to July 2019\footnote{\url{https://www.ic3.gov/Media/Y2019/PSA190910}}.
Another growing threat is social engineering, which has
hit Russian bank customers seriously. According to the
official data of the Bank of Russia, losses of Russian banks’
clients from card fraud reached \textdollar130 million in 2020, which
is 10 times higher than similar losses in 2017\footnote{\url{https://cbr.ru/analytics/ib/fincert/##a_119487}}.
Anti-fraud tools can be roughly divided into directive,
preventive, and detective. Directive tools such as instructions
and warnings work like a scarecrow and only affect
untrained fraudsters. Preventive tools help prevent fraud, but
over time, fraudsters adapt and find ways to get around them.
Detective tools are essential to detect fraud and minimize
losses if fraud has not been prevented. Statistical approaches
and machine learning methods are used to develop detective
tools. However, there are unresolved problems
in this area, such as instability and low generalizing ability
of anti-fraud models, as well as high privacy of domain
expertise\cite{Bolton06}.
On the other hand, in recent years, we have seen outstanding advances in deep learning and the successful application of neural networks to practical tasks such as computer vision \cite{Krizhevsky28, He16, He17, Wang57} and natural language processing \cite{Graves15, TVan52, Rajpurkar44, Paulus42, Vaswani53}. This gives us hope that innovative ideas proposed in deep learning will help to remove some of the issues in fraud detection modeling.
In this paper, we propose SpiderNet, a convolutional neural network architecture designed to solve fraud detection problems. We noticed that the principles behind convolutional and pooling layers are very similar to the way anti-fraud analysts manually process information during investigations. In addition, the skip connections used in convolutional networks \cite{He16} make it possible to use features of varying power, including fraud scores from external providers.
Our proposed technique allows us to increase the generalizing ability of anti-fraud models and is an important advantage of SpiderNet over classical machine learning methods and popular neural network architectures. We show that SpiderNet provides better quality compared to Random Forest and convolutional 1D-CNN and F-DenseNet network architectures adapted for fraud modeling (Figure 1). Moreover, comparing the SpiderNet results with the classic CNN and 1D-DenseNet architectures, we show that the feature locality property is lost while working with tabular data but remains crucial for images. Therefore, the technique of transferring CNN architectures from the CV domain, which is popular in applied tasks, does not give the best result. This confirms the thesis that the application of neural networks to applied tasks requires a deep understanding of the domain area, and it is important to adapt neural network architecture to the specifics of the task.
In addition to the SpiderNet architecture, in this paper, we propose new approaches for developing anti-fraud rules called B-tests and W-tests. Our feature engineering method is based on the idea of identifying manipulation in financial statements using Benford's law \cite{Benford04}. Our proposed approaches can be generalized to any data type, which allows developing B-tests and W-tests for any fraudulent schemes, and not just for identifying accounting manipulations, where Benford's law is applicable \cite{Lu37, Nigrini40, Lu38}. Our results showed that B-tests and W-tests give a significant increase to the quality of our fraud detection models.
To assess the quality of fraud detection models, we use the AUC ROC and Average Precision (AUC PR) metrics, as well as the PL (Prevented Losses) business metric developed by our team, which estimates the funds saved from internal fraud. Our proposed PL metric addresses an important industry issue: assessing the economic efficiency of models developed for industrial use.
We train, evaluate, and compare our SpiderNet with other algorithms on two datasets – private and public. The private dataset contains data on loan applications and internal fraud by POS partners of a large Russian bank, one of the top 50 Russian banks in terms of assets. The public dataset was obtained from an online payment fraud detection competition organized by Ant Financial Services Group\footnote{\url{https://dc.cloud.alipay.com/index##/topic/data?id=4}} \footnote{\url{https://www.kaggle.com/gmhost/atec-anti-fraud/version/2}}.
Testing SpiderNet on two datasets with different types of fraud (internal and transactional) gives us reason to believe that our proposed methods will work well for other types of fraud because SpiderNet is based on the general concepts of fraudulent behavior formulated by Edwin Sutherland and Donald Cressey in criminology \cite{Sutherland49} and Gary Becker in behavioral economics \cite{Becker03}.
The rest of the paper is structured as follows:
\begin{itemize}
\item In section 2, we describe a general theoretical model of fraudulent behavior, provide a classification of anti-fraud tools, describe the principles of developing fraud detection models, and outline current problems in this area;
\item In section 3, we provide a review of the current state of neural networks for image classification and fraud detection;
\item In section 4, we describe the general intuition of the SpiderNet architecture and present the schema of the Spider-Block;
\item In Section 5, we describe methods for the automated development of B-tests and W-tests anti-fraud rules;
\item In Section 6, we describe our experiment: provide characteristics of datasets, feature engineering methods, data preprocessing methods, tricks for training models, hyperparameters of algorithms, and metrics for models’ quality assessment;
\item In section 7, we demonstrate the results of the experiments;
\item And in section 8, we summarize, draw general conclusions and outline open questions for future research.
\end{itemize}
\section{Fraud Detection}
The foundations of modern criminology and the theory of white-collar fraud were laid nearly 100 years ago by Edwin Sutherland and his student Donald Cressey \cite{Sutherland49}, who are considered among the most influential criminologists of the 20th century. Sutherland and Cressey proposed to consider crime not from the position of criminal law, as was customary, but from the position of sociology, applying basic sociological concepts in criminology.
Four decades later, Nobel laureate Gary Becker, using the principle of economics imperialism, described the economic model of crime \cite{Becker03}, according to which crime can be viewed as an activity that some people choose rationally, comparing the expected benefits and expected costs:
\begin{equation}
\left( 1 - \pi\right) \ast U\left( W_C\right) - \pi \ast S > U\left( W_L\right)
\end{equation}
\begin{list}{}{}
\item where $\pi$ -- the probability of being caught (assessed by the criminal, i.e. subjectively);
\item $U\left( \cdot \right)$ -- individual utility function;
\item $S$ -- penalties incurred in the event of capture (for example, a fine or criminal punishment);
\item $W_C$ -- proceeds of crime;
\item $W_L$ -- income from legal activities.
\end{list}
The left side of inequality (1) characterizes crime-related elements; the right side is the utility from legal earnings. Logically, when inequality is satisfied, the individual, other things being equal, will prefer to break the law.
To minimize losses from fraudulent activities, companies need to develop and implement anti-fraud tools that affect the components of inequality (1). In particular, the total losses from fraud can be reduced by increasing the probability $\pi$ and decreasing the $W_C$ component by increasing fraudsters’ costs for bypassing anti-fraud protection.
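The decision rule in inequality (1) can be sketched in a few lines of Python; the log-utility function and all numeric values below are hypothetical illustrations, not estimates from the paper.

```python
import math

def prefers_crime(pi, w_crime, w_legal, penalty, utility=math.log1p):
    """Becker's inequality (1): crime is 'rational' when the expected
    utility of crime exceeds the utility of legal income.
    pi is the criminal's subjective probability of being caught."""
    return (1 - pi) * utility(w_crime) - pi * penalty > utility(w_legal)

# Raising the detection probability pi (the anti-fraud goal) flips the choice.
print(prefers_crime(pi=0.05, w_crime=100_000, w_legal=30_000, penalty=5.0))  # True
print(prefers_crime(pi=0.60, w_crime=100_000, w_legal=30_000, penalty=5.0))  # False
```

Raising $\pi$ or lowering the payoff $W_C$ shifts the balance toward legal behavior, which is exactly how the anti-fraud tools discussed below are intended to act.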
Anti-fraud actions can be divided into 3 types\footnote{The Information Security Standards (CISSP) use more granular typing: preventative, deterrent, detective, corrective, recovery, compensation, directive, administrative, logical/technical, physical.}:
\begin{itemize}
\item {\itshape Directive} -- instructions, regulations, training materials, contracts, etc.;
\item {\itshape Preventive} -- tools that are aimed at preventing fraudulent activities (locks, safes, passwords, etc.);
\item {\itshape Detective} -- tools used to detect fraud (anti-fraud rules and models, investigation techniques, etc.).
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{fig_2.png}
\caption{Arrangement of anti-fraud processes in the bank according to the principles of the scientific method.}
\Description{Anti-fraud processes in the bank.}
\end{figure}
Preventive tools are the most effective, but fraud is highly adaptive, which means that fraudsters constantly come up with new ways to get around the company's security. That is why the fight against fraud is compared to a ``confrontation of armor and projectiles''. To face this constant arms race, the company's anti-fraud function requires an integrated, systems approach. One of the main principles of the systems approach is the arrangement of anti-fraud processes in a cyclic scheme following the principles of the scientific method.
The scheme in Figure 2 shows how anti-fraud technologies are developed and improved. One of the main ideas of the scheme is that fraud and counteraction to fraud (anti-fraud) constantly influence each other \cite{Wallace54}. It means that, according to the scheme, the company should regularly review its anti-fraud processes, modifying and improving them. Therefore, detective tools that allow detecting new fraudulent schemes and vulnerabilities in processes are an important part of the anti-fraud system in a company.
The detective tools are based on strong rules developed by experts and anti-fraud models built on these strong rules. Strong rules development consists of three key stages (Figure 3):
\begin{enumerate}
\item At the first stage, simple features and rules are developed by experts;
\item At the second stage, complex rules are combined using arithmetic and logical operations over simple features and rules;
\item At the third stage, strong rules with high predictive power for detecting fraud are selected from complex rules.
\end{enumerate}
The general principle of developing strong rules is similar to the method of collecting evidence in forensic science when all kinds of evidence are first collected, and then, based on the combinations of the collected evidence, a crime is proved. A similar principle is at the heart of convolutional neural networks, in which sequential convolution operations form complex features, and pooling operations filter out noise, i.e. select strong features that are good at predicting the target variable. This is why convolutional networks can be considered a powerful and intuitive algorithm for developing fraud detection models.
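The three-stage scheme of Figure 3 can be sketched as follows; the simple rules, thresholds, and the precision-based selection criterion are hypothetical stand-ins for the expert-made rules the paper describes.

```python
def rule_high_amount(tx):          # stage 1: simple expert rules
    return tx["amount"] > 1000

def rule_night_time(tx):
    return tx["hour"] < 6

def rule_new_device(tx):
    return tx["device_age_days"] < 1

SIMPLE_RULES = [rule_high_amount, rule_night_time, rule_new_device]

def complex_rules(tx):
    """Stage 2: combine simple rules pairwise with logical AND."""
    fired = [rule(tx) for rule in SIMPLE_RULES]
    return {(i, j): fired[i] and fired[j]
            for i in range(len(fired)) for j in range(i + 1, len(fired))}

def select_strong(transactions, labels, min_precision=0.8):
    """Stage 3: keep combinations whose historical precision is high."""
    stats = {}
    for tx, y in zip(transactions, labels):
        for key, hit in complex_rules(tx).items():
            if hit:
                tp, n = stats.get(key, (0, 0))
                stats[key] = (tp + y, n + 1)
    return [key for key, (tp, n) in stats.items() if tp / n >= min_precision]
```

Real rule libraries also use arithmetic combinations and more than two rules per conjunction; the sketch only shows how weak evidence is combined and then filtered, mirroring the convolution-then-pooling analogy drawn above.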
\begin{figure}[ht]
\vspace{4.5ex}
\centering
\includegraphics[width=\linewidth]{fig_3.png}
\vspace{1.3ex}
\caption{Scheme of developing strong rules for fraud detection.}
\Description{Scheme of developing strong rules for fraud detection.}
\end{figure}
\section{Related work}
\subsection{CNN Architectures}
Over the past decade, convolutional neural networks (CNNs) have made a breakthrough in solving computer vision problems \cite{Paulus43}. The basic principles of CNNs were borrowed from the works of Hubel and Wiesel, who studied the visual cortex in the 1950s and 1960s \cite{Hubel24, Hubel25}. The first CNN architecture was proposed by Yann LeCun in the late 1980s \cite{LeCun32}, and in the late 1990s, LeCun's research group developed the LeNet-5 architecture \cite{LeCun31}, which consisted of convolutional and pooling layers that perform the function of implicit regularization of the neural network.
A breakthrough in computer vision came in 2012 when the deep convolutional neural network AlexNet won the ILSVRC computer vision competition in image classification problems \cite{Krizhevsky28}. The authors noticed that increasing CNN depth has an implicit regularization effect, which later became mainstream and was exploited in such architectures as VGG \cite{Simonyan48}, NiN \cite{Lin35}, Inception, GoogleNet \cite{Szegedy50}, ResNet \cite{He16}, and others.
In 2014, the Google team showed excellent results at ILSVRC, proposing the GoogLeNet architecture \cite{Szegedy50}, a feature of which was Inception blocks with bottlenecks and 1x1 convolutions, which allowed increasing the number of convolutional channels, i.e. the width of the neural network. This technique also became popular and found its application in the architectures WRN \cite{Zagoruyko61}, Xception \cite{Chollet09}, ResNeXt \cite{Xie60}, MobileNets \cite{Howard21}, NASNet \cite{Zoph63, Zoph64}, etc.
In 2015, the Microsoft Research team proposed another breakthrough architecture – ResNet \cite{He16} that used skip connections, which create an ensemble effect. This technique proved to be very effective and was later used in WRN \cite{Zagoruyko61}, Xception \cite{Chollet09}, ResNeXt \cite{Xie60}, FractalNet \cite{Larsson30}, DenseNet \cite{Huang22}, etc.
In 2019, the Google Research team published the EfficientNet approach \cite{Tan51} to scaling deep CNNs. The authors noticed that the key characteristics of convolutional architectures, such as depth, width, and resolution, depend on each other; therefore, for scaling, it is necessary to select the optimal combination of these parameters. Later, the EfficientNet approach allowed participants to take the top places in the Deepfake Detection Challenge on Kaggle, which was organized by Facebook in early 2020.
In addition to the developed architectures, effective tricks have been proposed for training CNN:
\begin{itemize}
\item Regularization method Weight Decay (L2 regularization) \cite{Krogh29, Jia26, He18};
\item The Dropout technique \cite{Hinton20}, which randomly drops connections in fully connected layers;
\item The Stochastic Depth technique \cite{Huang23}, used in ResNet networks to randomly disable blocks;
\item Augmentation methods: Cutout \cite{DeVries11}, Mixup \cite{Zhang62}, CutMix \cite{Yun46}, Valid/Diverse Noise, Flipping, etc. \cite{Xie59};
\item Learning-rate reduction strategies that improve the convergence of gradient descent methods \cite{Donoghue41, Loshchilov36, Fort13, Li34}.
\end{itemize}
These and other proposed ideas have made it possible to achieve high results in computer vision.
\subsection{Fraud Prediction Models}
Statistical methods and machine learning have been used for the development of fraud detection tools for many years. In 2002, Bolton and Hand \cite{Bolton06} highlighted the key problems of using statistical methods in fraud detection modeling. Here are some of these issues that remain relevant today:
\begin{enumerate}
\item The scope of fraud is increasing with the development of high technology;
\item Fraudsters bypass preventive technologies over time, so detective tools are needed;
\item On the other hand, detective algorithms also degrade over time, so they need to be regularly updated;
\item Databases and anti-fraud methods are closed from the scientific community. This makes it difficult to research and develop this area;
\item To develop anti-fraud models, unsupervised (search for anomalies) and supervised (search for known fraudulent patterns) algorithms are used. Unsupervised models make a lot of mistakes, because anomalies may be caused by operational errors, marketing promotions, etc. Supervised models are trained on historical data, so they are poor at catching new fraudulent schemes;
\item There is a global problem of class imbalance: there are far fewer fraudulent observations than non-fraudulent ones. This leads to a high false positive rate, which is why many false positives are sent for investigation and the use of models in anti-fraud processes becomes expensive.
\end{enumerate}
In the discussion of this paper, Provost and Breiman noted other important issues:
\begin{enumerate}[resume]
\item Models are customized for a specific fraudulent scheme, which makes them difficult to scale to other types of fraud;
\item Big data is needed to develop effective anti-fraud models, so this remains the lot of large companies;
\item Machine learning algorithms do not solve the anti-fraud problem, expert knowledge and understanding of fraudulent schemes are needed.
\end{enumerate}
The development of anti-fraud models is most often solved as a binary classification problem (fraud/non-fraud), where classical algorithms are used, such as SVM, Logistic Regression, Random Forest, and others \cite{Bhattacharyya05}.
With the onset of the deep learning boom, neural networks began to be used in fraud detection modeling and have almost completely replaced classical machine learning methods in the research literature \cite{Kanika27}.
Wiese and Omlin \cite{Wiese58} proposed using a recurrent LSTM network to detect fraudulent credit card transactions. The authors suggested that since recurrent networks were designed to process sequences, they should be good at detecting fraudulent patterns in sequences of card transactions. The authors were able to demonstrate the advantage of the LSTM over the SVM algorithm, but the LSTM network failed to beat a simple fully connected neural network (FFNN) due to the insufficient number of fraudulent transactions in the dataset.
Fu et al. \cite{Fu14} solved a similar problem in detecting card fraud by training the architecture of the LeNet-5 convolutional neural network developed by Yann LeCun for image processing. To train LeNet-5 on the card fraud detection problem, the authors presented transactions in the form of rectangular matrices of features, which were fed to the CNN input as two-dimensional pictures. The results obtained showed the superiority of LeNet-5 over the classical algorithms SVM, ANN, and Random Forest.
Heryadi and Warnars \cite{Heryadi19} continued to develop these ideas and tried to combine a CNN with a recurrent LSTM network. The authors' hypothesis was the following: CNNs, due to convolutions, should detect short-term fraudulent patterns, while an LSTM network, due to its long short-term memory, should work well with long sequences of fraudulent transactions. The authors developed a hybrid CNN-LSTM architecture, but experiments showed that a simple CNN detects fraud better than CNN-LSTM. The authors drew an important conclusion from their results: long-term fraudulent schemes that go undetected for a long time are extremely rare, in contrast to short-term, fast fraudulent schemes. This is confirmed by Becker's economic model of crime (eq. 1).
Attempts to apply the popular neural network architectures CNN and RNN for fraud detection tasks continued in subsequent research efforts.
Li et al. \cite{Li33} used the DenseNet architecture to detect electricity theft in China.
Chen and Liu \cite{Chen07} refined the DenseNet architecture by adding an Inception module to the beginning of the network and additional skip connections between the Inception layer and Dense blocks. Their new CNN architectures named LI and DI have improved the results of standard CNN and DenseNet for the task of transaction fraud detection.
Cheng et al. \cite{Cheng08} proposed a Spatio-temporal neural network STAN based on attention. The STAN architecture includes an Attention module and a simple CNN. For the task of detecting transaction fraud, STAN showed better quality compared to CNN, LSTM, etc.
Li and Liu \cite{Zhenchuan65} proposed to use a special loss function for transaction fraud detection, which solved the problem of intraclass variability. Their loss function FCL (full center loss), constructed as a combination of DCL (distance center loss) and ACL (angle center loss), worked like batch normalization.
For the tasks of organized fraud on Internet sites (fake reviews, bots, spam, etc.) detection, graph neural networks are gaining popularity today, allowing feature extraction for interconnected objects \cite{Shuhan47, Wang55, Dou12, Wang56}.
\begin{figure}[ht]
\vspace{5ex}
\centering
\includegraphics[width=0.985\linewidth]{fig4_newnew.png}
\vspace{1.3ex}
\caption{Examples of one-dimensional heatmaps of features for internal fraud detection: a) average feature values for the top-10 fraudulent POS-partners of the bank; b) average feature values for the top-10 non-fraudulent POS-partners of the bank.}
\Description{Examples of one-dimensional heatmaps of features.}
\end{figure}
\section{SpiderNet}
\subsection{Problem Formulation}
One of the main problems of using neural networks in fraud detection tasks is that many of the proposed architectures were migrated from other domains (mainly from popular CV and NLP) without a deep understanding of why these architectures should work on fraud detection tasks.
When designing a neural network architecture for fraud detection, our intuition is that anti-fraud rules developed by experts are a kind of digital evidence (by analogy with pixels in images, which are digital features processed by convolutions in CNN). At the same time, anti-fraud rules have different power like forensic evidence. Moreover, anti-fraud rules can work in conjunction with each other, strengthening the evidence base. This intuition tells us that CNN's convolution and pooling operations are the most appropriate tools for combining anti-fraud rules and selecting strong combinations of them.
In computer vision problems, it was shown that a sequence of several convolutional layers allows one to create hierarchical feature maps using the locality property in images \cite{Lee66}. On the other hand, the locality property is lost in tabular data, since the features can be arranged in a different order. Nevertheless, studies of CNN architectures have shown that convolutions learn well not only medium-frequency (local) features but also low-frequency ones (texture, background, color, etc.) \cite{Goodfellow67, Hermann68}. This allows us to assume that CNN architectures should perform well on tabular data, where 1D convolutions can be applied to the feature vector. In classification problems, such feature vectors will differ for observations of different classes in color and texture (Figure 4).
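As a minimal illustration of this assumption, a 1D convolution followed by max pooling over a tabular feature vector can be written directly in NumPy; the kernel and feature values below are made up for the example.

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1D cross-correlation over a tabular feature vector --
    the basic operation of the 1D-CNN baselines discussed later."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(0, len(x) - k + 1, stride)])

def maxpool1d(x, size=2):
    """Non-overlapping max pooling: keeps only the strongest responses."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

row = np.array([0.1, 0.9, 0.8, 0.0, 0.2, 0.7])   # one "row" of the table
fmap = conv1d(row, kernel=np.array([0.5, 0.5]))  # mixes neighboring features
print(maxpool1d(fmap))                           # strongest combined responses
```

Because the kernel slides over whatever order the columns happen to be in, the locality assumption is indeed lost on tabular data, which motivates the comparison of CNN variants later in the paper.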
\begin{figure}[ht]
\vspace{4.5ex}
\centering
\includegraphics[width=\linewidth]{Fig_new_5_big.png}
\vspace{1.3ex}
\caption{The dropout principle between neurons in CNN: a) connections between layers in a fully connected neural network (MLP); b) connections between layers in a CNN.}
\Description{The dropout principle between neurons in CNN.}
\end{figure}
The mathematical intuition of how convolutions work can be explained through CNN regularization by dropping connections between neurons. Moreover, convolutions drop connections in an orderly manner, as opposed to the random dropout used in fully connected networks (Figure 5). In addition, the receptive field of a convolution captures only a small number of features, so many small models are trained in a CNN on these subsets and then summed up, achieving an ensemble effect.
On the other hand, if the combination of anti-fraud rules obtained on the hidden layers has a strong predictive ability, then we want to use this combination without additional processing, forwarding it to the output layer of the neural network using skip connections.
This intuition well represents the best practices of anti-fraud investigations and can be implemented using a fully connected residual network, which we call SpiderNet (Figure 1).
\subsection{Spider Block}
SpiderNet consists of blocks that are connected using skip connections. Thus, each block receives features from all previous blocks; these features are processed by convolutional and pooling layers and are forwarded to all subsequent blocks. Since we want only the strongest features at the output, the blocks farthest from the network output contain several pooling layers, filtering out weak and medium features. Conversely, the closer a block is to the network output, the fewer pooling layers it contains, since the features that have reached this block have already passed through several convolutional and pooling layers. The general architecture of SpiderNet's block is shown in Figure 6.
SpiderNet is a convolutional feedforward network with skip connections between blocks. Formally, the kth Spider-block is defined by the recursive formula:
\begin{equation}
y_k = \mathcal{F}_k\left( y_{k-1}\oplus\ldots\oplus y_{1}\right) = \mathcal{F}_k\left(\sum_{i=1}^{k-1}\oplus y_i\right)
\end{equation}
\begin{list}{}{}
\item where $y_k$ is a vector of $n-k$ outputs for the kth block;
\item $\mathcal{F}_k\left( \cdot \right)$ is the kth block operator, which combines the dropout, convolution, batch normalization, ReLU, and Max-pooling functions;
\item $\oplus$ is the concatenation operator for incoming vectors;
\end{list}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{fig_4.png}
\caption{Scheme of the kth Spider-block with convolutional layer and $n-k$ pooling layers ($n$ is the total number of Spider-blocks).}
\Description{Scheme of the kth Spider-block.}
\end{figure}
\begin{list}{}{}
\item $\sum\oplus$ is a short notation for vector concatenation;
\item $y_i$ is the output vector of the ith block, which is fed to the input of the kth block $\left(1 \leq i < k\right)$.
\end{list}
In the last (nth) Spider-block, the convolutional layer and the BatchNorm+ReLU and MaxPooling layers are followed by global average pooling, which reduces the channel dimension. After these operations, the vector $y_n$ is fed to two fully connected layers with dropout and a SoftMax output for binary classification.
Expression (2) demonstrates the mathematical intuition behind the SpiderNet architecture. The output vector of the kth block is obtained by transforming the concatenated outputs of the previous blocks, each of which is a smaller neural network that, in turn, receives the concatenated vectors of its own predecessors, and so on. Thus, SpiderNet works as an ensemble of neural networks, which improves the convergence and generalization of the entire network.
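The recursion in eq. (2) can be sketched with toy block functions standing in for the real convolution-pooling operators $\mathcal{F}_k$; only the dense skip-connection wiring is illustrated, not the actual layers.

```python
import numpy as np

def spider_forward(x, blocks):
    """Dense wiring of eq. (2): block k receives the concatenation
    y_{k-1} (+) ... (+) y_1 of all previous block outputs, so late
    blocks see both lightly and heavily processed features."""
    ys = []
    for k, F in enumerate(blocks):
        inp = x if k == 0 else np.concatenate(ys[::-1])
        ys.append(F(inp))
    return ys[-1]

# Toy stand-ins for F_1..F_3 (the real blocks are conv + pooling stacks).
toy_blocks = [lambda v: v + 1, lambda v: v * 2, lambda v: v[:2]]
print(spider_forward(np.array([1.0, 2.0, 3.0]), toy_blocks))
```

Note how the last block's input grows with the number of predecessors, which is exactly why the earlier blocks carry more pooling layers to keep only strong features.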
\section{B-tests and W-tests}
The peculiarity of fraud modeling is that most of the fraud rules are developed manually by experts. This is because fraudulent schemes are diverse, especially internal fraud schemes, which are designed to exploit specific vulnerabilities of the organization.
We offer B-tests and W-tests approaches that automate the manual process and generalize feature engineering for various types of internal fraud.
\subsection{B-tests}
B-tests are based on Benford's law, discovered at the end of the 19th century by Simon Newcomb \cite{Newcomb39}, who noticed that the values of some numerical data start with one more often than two, two more often than three, and so on. Later, Frank Benford \cite{Benford04} empirically confirmed this law for various social and physical phenomena, and the first-digit law was named after him.
In the 1990s, Mark Nigrini used Benford's law to audit financial statements and managed to identify managerial embezzlement of \textdollar2 million in the office of the Arizona State Treasurer \cite{Nigrini40}.
Summarizing these ideas for all types of data (numerical and categorical), we propose a B-tests technique, which consists of comparing the distributions of data characterizing an object with the general population of all objects (for example, the activities of employees or partners of the company).
The general idea is that internal fraud is rare, i.e. it does not greatly affect the distribution of the general population, while the data on the activities of fraudsters are very different from the average (Figure 7).
B-tests can be customized according to various characteristics of the sample, such as the partner sphere of business, the region of the partner's location, the number of quantiles in distributions, the divergence cut-off threshold, etc. Thus, the setting of B-tests can be reduced to an optimization task of searching for anomalies \cite{Btest01}.
B-tests can be calculated using various metrics reflecting the divergence between two distributions, for example, Chi-squared test, K–S test, Anderson–Darling test, etc.
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_7_1_big1.png}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_7_2_big2.png}
\end{subfigure}\vspace{0pt}
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_7_3_big3.png}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_7_4_big4.png}
\end{subfigure}
\caption{Examples of B-tests for internal fraud detection.}
\Description{Examples of B-tests.}
\end{figure}
For our B-tests, we calculate the area difference between two discrete distributions using the following formula:
\begin{equation}
S = \dfrac{1}{2} \sum_{i=1}^{n}\left| a_i - b_i\right|
\end{equation}
\begin{list}{}{}
\item where $a_i$ and $b_i$ are the compared distributions;
\item $n$ is the number of quantiles in the distribution.
\end{list}
The number of quantiles $n$ and the threshold for $S$ depend on the number of objects in the samples and are tunable hyperparameters.
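A minimal NumPy implementation of formula (3), with hypothetical quantile frequencies:

```python
import numpy as np

def b_test(a, b):
    """Area difference S of eq. (3) between two discrete distributions
    given as frequencies over the same n quantiles. S lies in [0, 1]:
    0 for identical distributions, 1 for disjoint ones."""
    a = np.asarray(a, dtype=float) / np.sum(a)
    b = np.asarray(b, dtype=float) / np.sum(b)
    return 0.5 * np.abs(a - b).sum()

# A partner whose distribution is shifted toward the last quantile
# relative to the general population (cf. Figure 7):
population = [0.30, 0.25, 0.20, 0.15, 0.10]
partner    = [0.05, 0.05, 0.10, 0.20, 0.60]
print(b_test(population, partner))  # about 0.55
```

The divergence cut-off threshold applied to this score is one of the hyperparameters tuned per business segment, as described above.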
\subsection{W-tests}
W-tests solve the same problem as B-tests using the first Wasserstein metric, which is defined as:
\begin{equation}
W_1\left(\mu, \nu\right) = \inf_{\gamma \in \Gamma\left(\mu, \nu\right)} E_{\left(x, y\right) \sim \gamma} \left[ \left\| x - y\right\| \right]
\end{equation}
\begin{list}{}{}
\item where $E\left[Z\right]$ denotes the expected value of a random variable $Z$, and the infimum is taken over all joint distributions $\gamma$ of the random variables $X$ and $Y$ with marginals $\mu$ and $\nu$, respectively.
\end{list}
The Wasserstein metric alleviates the problem of an insufficient number of objects in samples, which is typical for B-tests. However, the Wasserstein metric can be calculated only for numeric data types, so W-tests cannot completely replace B-tests.
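For empirical numeric samples of equal size, the one-dimensional optimal transport plan simply matches sorted values, so a W-test statistic can be sketched without any optimization; the sample values below are hypothetical.

```python
import numpy as np

def w_test(x, y):
    """Empirical 1-Wasserstein distance between two equal-size numeric
    samples: with the optimal one-dimensional coupling, it reduces to
    the mean absolute difference of the sorted samples."""
    assert len(x) == len(y), "this sketch assumes equal sample sizes"
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

population_sample = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
partner_sample    = np.array([12.0, 22.0, 32.0, 42.0, 52.0])
print(w_test(population_sample, partner_sample))  # 2.0: each mass unit moved by 2
```

For samples of different sizes, one would instead compare the empirical CDFs (e.g. via interpolation); the general definition in eq. (4) covers that case.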
\section{Experiment Setup}
\subsection{Datasets}
We trained and compared SpiderNet with other algorithms on two datasets: private data and public data. The general characteristics of datasets are presented in Table 1.
{\itshape Private dataset}. Our private dataset contains data on the credit activity of POS partners of a large Russian bank, one of the top 50 Russian banks in terms of assets.
Before applying the models, the bank used a rule-based approach. When the rules were triggered, the top-worst POS partners were sent to the bank's security for investigation. Since the security staff is limited, the number of investigations is also limited and amounts to 40 investigations per week. Thus, the main goal of applying the model approach was to reduce the time it takes to identify fraudulent partners with an unchanged investigation budget.
The bank has a procedure for blocking unprofitable POS partners based on risk indicators, which are calculated approximately 90 days after the first fraudulent loan is issued. Therefore, to form the target variable, the loss for each POS partner was calculated within 90 days from the date of the fraud rules calculation. Unprofitable POS partners were marked as fraudulent, profitable ones as non-fraudulent. This approach to constructing the target variable makes it possible to detect problem partners.
Since fraud rules could have been triggered during the loss assessment period, the evaluation of the model approach will show the uplift to the existing rule-based process, and not the full economic effect of the model.
The records in the private dataset are presented as a vector of features for each POS partner as of the weekly slice date. Therefore, a single POS partner can be included in the sample several times with different slice dates. The features are calculated over windows ranging from 7 to 60 days before the slice date. A flag indicating an internal fraud event was assigned to each record as the target variable. The overall task comes down to predicting internal fraud by a POS partner based on data about its historical activity.
{\itshape Public dataset}. The public dataset comes from an online payment fraud detection competition hosted by Ant Financial Services Group. The dataset contains the values of features for payment transactions and a binary target variable that reflects whether the payment is fraudulent or not.
\begin{table}
\begin{wrapfigure}{r}{0.05\linewidth}
\vspace{6.5ex}
\hspace{-40ex}
\includegraphics[width=\linewidth]{spider.png}
\end{wrapfigure}
\begin{flushleft}
\caption{Characteristics of private and public datasets}
\label{tab:freq}
\begin{tabular}{lll}
\toprule
Attribute&Private Data&Public Data\\
\midrule
{\bfseries Source} & Russian Bank& \parbox[t]{2cm}{Ant Financial Services Group}\\
{\bfseries Type of data}& POS credits& Payments\\
{\bfseries Type of fraud}& Internal& Transaction\\
{\bfseries Period time}& 03.2014-10.2019& 09.2017-11.2017\\
{\bfseries All observations, \#}& 1 880 499& 990 006\\
{\bfseries Fraud observations, \#}& 5 327& 12 122\\
{\bfseries Fraud rate, \%}& 0.28& 1.22\\
{\bfseries All features, \#}& 509& 297\\
{\bfseries Selected features, \#}& 163& 128\\
\bottomrule
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Feature Selection}
To form the final samples, we performed data preprocessing. First, we checked the features for validity and removed features with a low fill rate. Second, we selected features using a cross-correlation matrix: from each pair of highly correlated features, we kept the one with the stronger correlation with the target variable.
As a result of data preprocessing, 163 features remained in the private dataset, and 128 features remained in the public dataset.
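A minimal sketch of this two-step selection, assuming pandas inputs; the fill-rate and correlation thresholds used here (0.5 and 0.9) are illustrative, since exact cut-offs are not reported:

```python
import pandas as pd

def select_features(X: pd.DataFrame, y: pd.Series,
                    min_fill: float = 0.5, max_corr: float = 0.9) -> list:
    """Two-step selection: drop sparsely filled features, then from each
    highly correlated pair keep the feature whose absolute correlation
    with the target is stronger."""
    X = X.loc[:, X.notna().mean() >= min_fill]
    target_corr = X.apply(lambda c: abs(c.corr(y)))
    pairwise = X.corr().abs()
    keep = list(X.columns)
    for i, a in enumerate(X.columns):
        for b in X.columns[i + 1:]:
            if a in keep and b in keep and pairwise.loc[a, b] > max_corr:
                keep.remove(a if target_corr[a] < target_corr[b] else b)
    return keep
```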
\subsection{Model Architectures and Tricks}
To assess the generalization ability and demonstrate the benefits of SpiderNet, we compared the results of several machine learning algorithms:
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{fig_6.png}
\caption{SpiderNet-6 (left) and SpiderNet-8 (right) neural network architectures for fraud detection tasks.}
\Description{SpiderNet-6 (left) and SpiderNet-8 (right) neural network architectures for fraud detection tasks.}
\end{figure*}
\begin{enumerate}
\item {\itshape Random Forest} is a baseline model that has been chosen as a strong algorithm and industry standard for modeling on tabular data. We used 5-fold cross-validation and the Optuna library to tune the Random Forest hyperparameters \cite{Akiba02}.
\item {\itshape 1D-CNN} is a one-dimensional convolutional network with alternating convolutional and pooling layers, followed by two fully connected layers and a SoftMax layer. We trained CNNs with 3, 6, and 8 convolutional layers;
\item {\itshape 1D-DenseNet} is a classic DenseNet architecture \cite{Huang22} with an implementation for one-dimensional vectors. We trained two-block architectures with 3 and 4 convolutions in each block;
\item {\itshape F-DenseNet} is a DenseNet architecture adapted for fraud prediction with two fully connected convolutional blocks containing convolutional and pooling layers. We trained architectures with 3 and 4 convolutional layers in each block;
\item {\itshape SpiderNet} is our fully connected residual convolutional network with convolutional-pooling blocks. We trained 6 and 8 block architectures (Figure 8).
\end{enumerate}
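For item 1, a hyperparameter search maximizing AUC PR with 5-fold cross-validation can be sketched as follows. We used the Optuna library in practice; scikit-learn's \verb|RandomizedSearchCV| serves as a stand-in here, and the search space, including reading the fractional \verb|class_weight| as the majority-class weight, is an assumption of this sketch:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

def tune_random_forest(X, y, n_iter=10, seed=0):
    """5-fold CV search maximizing AUC PR (average precision)."""
    space = {
        "max_depth": [3, 5, 7, 9],
        "n_estimators": [50, 90, 129, 200],
        # fractional class_weight interpreted as the majority-class weight
        "class_weight": [{0: w, 1: 1.0} for w in (0.0167, 0.0744, 1.0)],
    }
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=seed), space,
        n_iter=n_iter, scoring="average_precision", cv=5, random_state=seed,
    )
    return search.fit(X, y)
```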
We used weight decay (L2-regularization) in BatchNorm and fully connected layers, and dropout on fully connected layers as regularizers for all neural networks. In the 1D-CNN, F-DenseNet, and SpiderNet architectures we applied the BatchNorm and ReLU transformations after each convolutional layer. In the F-DenseNet and SpiderNet blocks, as an additional regularization for the skip connections, we used dropout after concatenating the input vectors for each block (Figure 4). We also used the fraud-rate leveling technique to address the lack of fraud samples in batches under severe class imbalance.
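The fraud-rate leveling technique is not spelled out in detail; one plausible implementation, assumed here, is a batch sampler that guarantees a fixed number of fraud examples per batch by oversampling the minority class:

```python
import numpy as np

def leveled_batches(y: np.ndarray, batch_size: int = 256,
                    fraud_per_batch: int = 8, seed: int = 0):
    """Yield index batches that each contain exactly `fraud_per_batch`
    fraud samples, oversampling the minority class with replacement."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = rng.permutation(np.flatnonzero(y == 0))
    neg_per_batch = batch_size - fraud_per_batch
    for b in range(len(neg) // neg_per_batch):
        neg_part = neg[b * neg_per_batch:(b + 1) * neg_per_batch]
        pos_part = rng.choice(pos, size=fraud_per_batch, replace=True)
        yield rng.permutation(np.concatenate([neg_part, pos_part]))
```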
To train and assess the quality of the models, we performed a stratified split of the private and public datasets into train $(80\%)$, validation $(10\%)$, and test $(10\%)$ samples. We tuned the networks on the validation sample using Grid-Search and early stopping.
\subsection{Performance Measures}
We used AUC ROC and AUC PR metrics to evaluate the quality of models. The AUC ROC metric and the linearly related Gini coefficient are industry standards in banking modeling. However, as shown by Saito and Rehmsmeier \cite{Saito45}, the AUC ROC accepts unreasonably high values and becomes uninformative on highly unbalanced samples, in contrast to the AUC PR, which adequately estimates the quality of models on unbalanced samples. Therefore, we tuned the hyperparameters of the models using the AUC PR.
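The effect Saito and Rehmsmeier describe is easy to reproduce on a heavily imbalanced synthetic sample: a moderately informative score yields a high AUC ROC while AUC PR stays far from 1.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y = np.zeros(10_000, dtype=int)
y[:20] = 1                      # 0.2% positives, close to our private fraud rate
scores = rng.normal(size=10_000)
scores[:20] += 2.5              # positives score higher on average

roc = roc_auc_score(y, scores)               # high despite many false alarms
pr = average_precision_score(y, scores)      # remains far from 1
```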
To evaluate the quality of fraud detection models on the private dataset, we developed a special metric, PL (Prevented Loss), which shows how much loss from internal fraud the model prevents. To calculate the PL, we estimate prevented loss for each partner:
\begin{equation}
PL^{\left(i\right)} = P_{\left(T_l - T_a\right)} \cdot \frac{DR - DR_0}{1 - DR_0}
\end{equation}
\begin{list}{}
\item where $T_l$ is the whole considered period of loss;
\item $T_a$ is the period on which the model works;
\item $\left(T_l - T_a\right)$ is the period after the model is triggered (for our sample, it is 90 days – the empirical period for which the bank's Security detects fraud without using the model);
\item $DR$ is the Default-Rate for loans issued by the partner for the period $\left(T_l - T_a\right)$;
\item $DR_0$ is a "zero target" for Default-Rate in which the loan portfolio has zero profit;
\item $P_{\left(T_l - T_a\right)}$ is the partner's loan portfolio for the period $\left(T_l - T_a\right).$
\end{list}
Total PL is calculated based on the company's total investigative resources, i.e. on the assumption that security costs do not increase. Total Prevented Loss shows the net profit from the model due to detecting fraud earlier than before the model was applied:
\begin{equation}
PL = \sum_{i=1}^{k} PL^{\left(i\right)} \cdot b_i
\end{equation}
\begin{list}{}
\item where $PL^{\left(i\right)}$ is the prevented loss for the $i$-th fraud partner;
\item $k$ is the number of the first $k$ partners with the highest model probability of fraud (for our bank $k = 40$);
\item $b_i$ is a binary variable showing the event for the $i$-th partner: 1 – fraud, 0 – no fraud.
\end{list}
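Both formulas combine into a short function. The data layout (a list of per-partner records) and the field names are illustrative, and the default $DR_0$ value is a placeholder rather than the bank's actual zero-profit rate:

```python
def prevented_loss(partners, k=40, dr0=0.1):
    """Total PL over the top-k partners by model score. Each partner is a
    dict with keys: score, portfolio (P over the period T_l - T_a),
    dr (Default-Rate over that period), is_fraud (the binary b_i)."""
    top = sorted(partners, key=lambda p: p["score"], reverse=True)[:k]
    return sum(
        p["portfolio"] * (p["dr"] - dr0) / (1 - dr0) * p["is_fraud"]
        for p in top
    )
```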
For the public dataset, the economic metric was not developed due to the lack of the necessary financial data in the dataset.
\begin{table*}
\caption{Quality of models for internal (private dataset) and transactional (public dataset) fraud detection: the best results are highlighted in bold; $95\%$ confidence intervals are shown in parentheses.}
\label{tab:result}
\begin{tabular}{ll|ll|ll}
\textbf{\#} & \textbf{Model} & \multicolumn{2}{c|}{\textbf{Private data (test sample)}} &
\multicolumn{2}{c}{\textbf{Public data (test sample)}} \\
\cline{3-6}
&& \multicolumn{1}{c}{\textbf{AUC PR}} & \multicolumn{1}{c|}{\textbf{AUC ROC}}
& \multicolumn{1}{c}{\textbf{AUC PR}}
& \multicolumn{1}{c}{\textbf{AUC ROC}} \cr
\midrule
1& Random Forest & 0.0650 $(\pm0.001116)$ & 0.9371 $(\pm0.009253)$ & 0.4881 $(\pm0.003114)$ & 0.9709 $(\pm0.003572)$\\
\hline
2& CNN-3 & 0.0527 $(\pm0.001012)$ & 0.9339 $(\pm0.012978)$ & 0.4462 $(\pm0.003096)$ & 0.9670 $(\pm0.004674)$\\
3 & CNN-6 & 0.0644 $(\pm0.001111)$ & 0.9385 $(\pm0.011432)$ & 0.4908 $(\pm0.003114)$ & 0.9711 $(\pm0.004780)$ \\
4 & CNN-8 & 0.0708 $(\pm0.001161)$ & 0.9288 $(\pm0.009605)$ & 0.5099 $(\pm0.003114)$ & 0.9718 $(\pm0.004511)$ \\
\hline
5 & DenseNet-6 [3; 3] & 0.0646 $(\pm0.001113)$ & 0.9315 $(\pm0.009091)$ & 0.4757 $(\pm0.003111)$ & 0.9669 $(\pm0.004935)$ \\
6& DenseNet-8 [4; 4] & 0.0691 $(\pm0.001148)$ & 0.9310 $(\pm0.010545)$ & 0.4854 $(\pm0.003113)$ & 0.9686 $(\pm0.004661)$ \\
\hline
7 & F-DenseNet-6 [3; 3] & 0.0732 $(\pm0.001179)$ & 0.9263 $(\pm0.014509)$ & 0.5092 $(\pm0.003114)$ & 0.9708 $(\pm0.005082)$ \\
8 & F-DenseNet-8 [4; 4] & 0.0575 $(\pm0.001054)$ & 0.9186 $(\pm0.015820)$ & 0.4968 $(\pm0.003114)$ & 0.9704 $(\pm0.004780)$ \\
\hline
9 & SpiderNet-6 & \textbf{0.0948} $(\pm0.001326)$ & \textbf{0.9484} $(\pm0.008004)$ & \textbf{0.5375} $(\pm0.003106)$ & \textbf{0.9721} $(\pm0.004763)$ \\
10 & SpiderNet-8 & 0.0680 $(\pm0.001139)$ & 0.9277 $(\pm0.009588)$ & 0.5160 $(\pm0.003113)$ & 0.9684 $(\pm0.004744)$
\end{tabular}
\end{table*}
\section{Experiment}
\subsection{Model Optimization and Tuning}
We tuned the models by AUC PR and selected the best hyperparameters.
\begin{description}[leftmargin=0pt, parsep = 0.7ex, topsep = 1ex]
\item[\underline{\normalfont Random Forest:}] \hspace{0pt} \\
{\itshape Private data}: class\_weight=0.0167, max\_depth=7, n\_estimators=129;
{\itshape Public data}: class\_weight=0.0744, max\_depth=7, n\_estimators=90.
\item[\underline{\normalfont CNN:}] \hspace{0pt} \\
{\itshape Private data}: layers=8, l2\_batch=0.0001, kernel\_size=3, dropout=0.25, weight\_decay=0.0002, filters=10, learning\_rate=0.003, hidden=100;
{\itshape Public data}: layers=8, l2\_batch=0.0001, kernel\_size=5, dropout=0.25, weight\_decay=0.0002, filters=10, learning\_rate=0.003, hidden=30.
\item[\underline{\normalfont DenseNet:}] \hspace{0pt} \\
{\itshape Private data}: block\_sizes = [4, 4], initial\_filters=5, initial\_stride=1, k=5, conv\_kernel\_width=3, bottleneck\_size=2, theta=0.5, transition\_pool\_stride=1, initial\_conv\_width=5, initial\_pool\_width=2, initial\_pool\_stride=2;\\
{\itshape Public data}: block\_sizes = [4, 4], initial\_filters=5, initial\_stride=1, k=10, conv\_kernel\_width=5, bottleneck\_size=2, theta=0.5, transition\_pool\_stride=1, initial\_conv\_width=5, initial\_pool\_width=2, initial\_pool\_stride=2.
\item[\underline{\normalfont F-DenseNet:}] \hspace{0pt} \\
{\itshape Private data}: blocks = 2, convolutions=3+3, weight\_decay=0.0002, l2\_batch=0.0002, kernel\_size=7, filters=15, hidden=60, dropout=0.25, learning\_rate=0.003;\\
{\itshape Public data}: blocks = 2, convolutions=3+3, weight\_decay=0.0002, l2\_batch=0.0002, kernel\_size=5, filters=15, hidden=100, dropout=0.25, learning\_rate=0.003.
\item[\underline{\normalfont SpiderNet:}] \hspace{0pt} \\
{\itshape Private data}: blocks=6, l2\_batch=0.0001, kernel\_size=3, filters=10, hidden = 100, weight\_decay=0, dropout=0.25, learn\_rate=0.005, dropout\_block\_k=$0.001*k^4$ (where k=$\{4, 5\}$ is a block number);\\
{\itshape Public data}: blocks=6, l2\_batch=0.0002, kernel\_size=7, filters=15, hidden=30, weight\_decay=0.0002, dropout=0.25, learn\_rate=0.005, dropout\_block\_k=$0.001*k^4$ (where k=$\{4, 5\}$ is a block number).
\end{description}
For all neural networks, we used the Adam optimizer.
\subsection{Results}
The results of the trained models showed that the SpiderNet-6 architecture had the best quality on both datasets by both AUC PR and AUC ROC (Table 2). Confidence intervals were calculated using the asymptotic method for AUC PR \cite{Boyd69} and AUC ROC \cite{Cortes70}. We see a clear trend: increasing the number of skip connections increases the quality of fraud detection models: {CNN-8: 0 skip connections} $\rightarrow$ {F-DenseNet-6: 2 skip connections} $\rightarrow$ {SpiderNet-6: 10 skip connections}. This confirms our hypothesis that, thanks to skip connections, strong features can pass from different layers of the neural network directly to its output. On the other hand, we see that the best classic DenseNet-8 performs worse than the best CNN-8, which has no skip connections. This indirectly confirms the hypothesis that, for tabular data, bottlenecks between blocks don't allow strong features to go directly to the output layers of the network. The F-DenseNet architecture, which is adapted for fraud detection tasks, does not contain a bottleneck between blocks, so the quality of the best F-DenseNet-6 surpasses that of CNN-8 and DenseNet-8. These results demonstrate the importance of tailoring a neural network architecture to the specifics of the task.
\begin{table}
\caption{Quality of models for internal fraud detection.}
\label{tab:result2}
\begin{tabular}{ll|cc}
\textbf{\#} & \textbf{Model} & \multicolumn{2}{c}{\textbf{Private data (test sample)}}\\
\cline{3-4}
&& \multicolumn{1}{c}{\textbf{Fraud, \#}} & \multicolumn{1}{c}{\textbf{PL}} \cr
\midrule
\mathstrut & Random classifier & 48 & \textdollar325 604 \\
\hline
1& Random Forest & 208 & \textdollar2 079 527 \\
\hline
2& CNN-3 & \textbf{312} & \textdollar2 235 707 \\
3 & CNN-6 & 280 & \textbf{\textdollar2 753 821} \\
4 & CNN-8 & 280 & \textdollar2 337 297 \\
\hline
5 & DenseNet-6 [3; 3] & 240 & \textdollar2 324 181 \\
6& DenseNet-8 [4; 4] & 288 & \textdollar2 433 914 \\
\hline
7 & F-DenseNet-6 [3; 3] & 240 & \textdollar2 297 848 \\
8 & F-DenseNet-8 [4; 4] & 272 & \textdollar2 402 470 \\
\hline
9 & SpiderNet-6 & 304 & \textdollar2 570 014 \\
10 & SpiderNet-8 & 264 & \textdollar2 379 977 \\
\hline
\mathstrut & Perfect classifier & 888 & \textdollar4 659 439
\end{tabular}
\end{table}
\begin{table}
\caption{Sign tests for two pairs of results (the PL and Fraud metrics are normalized according to the perfect classifier).}
\label{tab:result3}
\begin{tabular}[t]{p{0.21\linewidth}|p{0.15\linewidth}p{0.15\linewidth}|p{0.15\linewidth}p{0.15\linewidth}}
\mathstrut & \textbf{CNN-3} & \textbf{Spider \linebreak Net-6} & \textbf{CNN-6} & \textbf{Spider \linebreak Net-6}\\
\hline
\textbf{Private data:} & \mathstrut & \mathstrut & \mathstrut & \mathstrut \\
AUC PR& 0.0527 & \textbf{0.0948} & 0.0644 & \textbf{0.0948}\\
AUC ROC& 0.9339 & \textbf{0.9484} & 0.9385 & \textbf{0.9484} \\
PL (recall)& 0.4798 & \textbf{0.5516} & \textbf{0.5910} & 0.5516\\
Fraud (recall)& \textbf{0.3514} & 0.3423 & 0.3153 & \textbf{0.3423}\\
\textbf{Public data:} & \mathstrut & \mathstrut & \mathstrut & \mathstrut \\
AUC PR& 0.4462 & \textbf{0.5375} & 0.4908 & \textbf{0.5375}\\
AUC ROC& 0.9670 & \textbf{0.9721} & 0.9711 & \textbf{0.9721}\\
\hline
p-value& \multicolumn{2}{c|}{0.015625} & \multicolumn{2}{c}{0.015625}\\
\end{tabular}
\end{table}
We also confirm Saito's and Rehmsmeier's results \cite{Saito45} that the AUC ROC metric is not informative for problems with unbalanced data and shows unreasonably high values. For our datasets, the AUC PR metric is preferred. Although the results for the private dataset according to the AUC PR metric are low, we consider them satisfactory, since they don't capture the full effect of the models but demonstrate the uplift when switching from the rule-based approach to a model-based one. In addition, the evaluation of these models according to the PL metric showed a significant positive economic effect with an unchanged budget for investigations (Table 3). Since the PL metric is not normalized, Table 3 also shows the efficiency of the random and perfect classifiers on the private dataset. The results demonstrate that SpiderNet achieves an almost 8-fold increase in financial efficiency compared to a random classifier and falls short of the perfect model by less than a factor of two (as if the bank had identified all fraudulent POS partners before committing crimes – the so-called "Minority Report").
We also see that for the private dataset, SpiderNet-6 ranked second by the PL metric, losing to the simpler CNN-6. This may be because the hyperparameters in the Grid-Search were selected by the integral metric AUC PR, while the PL metric is threshold-based and covers only the small number of POS partners checked by the bank's security (40 partners per week out of 6786 on average).
On the other hand, as we can see from the results of Table 3, SpiderNet-6 outperforms CNN-6 by the number of identified fraudulent POS partners (out of the limited number verified by security) but loses to CNN-3 by this indicator. The number of detected fraudulent partners is also an important metric of model quality. Hand et al. \cite{Hand71} suggest that in mass fraud, it is better to tune the model to maximize the number of detected fraud cases than to maximize prevented losses. This approach can have a greater influence on fraudsters since, following Becker's concept, the number of identified fraud cases is directly proportional to the probability of being caught (eq. 1).
The results obtained show that on various datasets and various quality metrics, SpiderNet-6 loses to CNN-3 and CNN-6 only in one metric. At the same time, in 5 other cases, SpiderNet-6 wins over CNN-3 and CNN-6. To assess the statistical significance of the obtained SpiderNet-6 advantage, we tested the null hypothesis and alternative one-sided hypothesis using a sign test \cite{Conover72}:
\begin{enumerate}
\item For SpiderNet-6 and CNN-3
$H_0^{(1)}:$ models are of the same quality
\begin{equation*}
P(\text{Quality}_{\text{SpiderNet-6}} > \text{Quality}_{\text{CNN-3}}) = 0.5
\end{equation*}
$H_1^{(1)}:$ SpiderNet-6 has higher quality
\begin{equation*}
P(\text{Quality}_{\text{SpiderNet-6}} > \text{Quality}_{\text{CNN-3}}) > 0.5
\end{equation*}
\item For SpiderNet-6 and CNN-6
$H_0^{(2)}:$ models are of the same quality
\begin{equation*}
P(\text{Quality}_{\text{SpiderNet-6}} > \text{Quality}_{\text{CNN-6}}) = 0.5
\end{equation*}
$H_1^{(2)}:$ SpiderNet-6 has higher quality
\begin{equation*}
P(\text{Quality}_{\text{SpiderNet-6}} > \text{Quality}_{\text{CNN-6}}) > 0.5
\end{equation*}
\end{enumerate}
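Each sign-test p-value is a binomial tail probability, computable in one line with scipy. As a reference point, winning all six of six fair pairwise comparisons yields exactly $0.5^6 = 0.015625$:

```python
from scipy.stats import binomtest

def sign_test_pvalue(wins: int, comparisons: int) -> float:
    """One-sided sign test: probability of at least `wins` successes
    in `comparisons` tosses of a fair coin."""
    return binomtest(wins, comparisons, p=0.5, alternative="greater").pvalue
```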
\begin{figure}[ht]
\centering
\centering\includegraphics[width=\linewidth]{Fig_new_9_big.png}
\caption{Influence of B-tests and W-tests on model quality (negative impact means a decrease in the model quality when adding B/W-tests)}
\Description{Influence of B-tests and W-tests on model quality (negative impact means a decrease in the model quality when adding B/W-tests)}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_10_1_big.png}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_10_2_big.png}
\end{subfigure}\vspace{0pt}
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_10_3_big.png}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=\linewidth]{Fig_new_10_4_big.png}
\end{subfigure}
\caption{SpiderNet quality (AUC PR), depending on the dropout techniques in convolutional layers: 1) the value of zero corresponds to SpiderNet quality without a dropout; 2) the value of 0.25 corresponds to constant dropout=0.25 used in all Spider-blocks (vanilla technique); 3) the value of exp(k) corresponds to an exponential increase in the dropout value as the Spider-block number increases (our technique).}
\Description{SpiderNet quality.}
\end{figure}
The obtained p-values are presented in Table 4: in both pairwise comparisons the null hypothesis is rejected in favor of the alternative, i.e. at the 5\% significance level SpiderNet-6 shows better quality more often than CNN-3 or CNN-6.
We also tested the effectiveness of B-tests and W-tests by training the models on two private samples:
\begin{enumerate}
\item Full sample containing 24 B-tests, 15 W-tests, and 124 expert rules;
\item Truncated sample containing 124 expert rules.
\end{enumerate}
Our results showed the high efficiency of B-tests and W-tests by AUC PR (Figure 9). Adding B-tests and W-tests gave an AUC PR increase of $19.4\%$. At the same time, we see that B/W-tests have a greater impact on models with skip connections. This may be because B/W-tests are strong rules and, for the best impact, they must be forwarded immediately to the network output.
We also see that SpiderNet-6 trained on expert rules without B-tests shows the best quality by AUC PR metric, which once again proves the stability of the results obtained.
In addition, the results of experiments demonstrated that our exponential dropout technique in convolutional layers provides significant gains in quality over a constant or zero dropout (Figure 10). This trick allows adding more regularization in the last blocks, which get more information from the previous blocks.
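The schedule itself is the \verb|dropout_block_k| formula from our tuned hyperparameters; written as a function (the function name is ours):

```python
def block_dropout(k: int, base: float = 0.001, power: int = 4) -> float:
    """Per-block dropout rate: dropout_block_k = 0.001 * k**4.
    Later blocks, which aggregate more inputs, get stronger regularization."""
    return base * k ** power

# Rates for the blocks where dropout is applied in the tuned models (k in {4, 5})
schedule = {k: block_dropout(k) for k in (4, 5)}
```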
All program codes and detailed results are available at: \url{https://github.com/aasmirnova24/SpiderNet}
\section{Conclusions}
In this paper, we investigated deep learning methods for fraud prediction and proposed a novel neural network architecture for fraud detection problems. Taking inspiration from the skip connection concepts of the ResNet \cite{He16}, FractalNet \cite{Larsson30}, AdaNet \cite{Cortes10}, and DenseNet \cite{Huang22} convolutional networks for computer vision, we developed the SpiderNet architecture for fraud detection, guided by anti-fraud expert knowledge. Using convolutional layers, our SpiderNet creates hierarchical combinations of anti-fraud rules, and a fully connected structure of skip connections between blocks allows strong rules to be forwarded immediately to the network output. Our network can also select strong rules early on through the multi-layered structure of pooling layers in Spider-blocks. Moreover, the down-dropout technique we use between Spider-blocks is an additional regularizer against excessive network complexity.
In our opinion, SpiderNet should work well for heterogeneous input data, when there are clear leaders among the rules supplied to the network input and they must be forwarded to the output without additional transformation (for example, scores of other models or strong rules).
It can also be noted that our results confirm the hypothesis of ResNet authors, which they formulated in their paper \cite{He16}: "The residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems".
In this paper, we also proposed new methods for developing anti-fraud rules – B-tests and W-tests, which significantly affect the quality of the models. We hope that B/W-tests will contribute to the solution of the problem “The Blind Men and the Elephant”, which was formulated by Foster Provost \cite{Bolton06}.
We also hope that the metric of quality Prevented Losses developed by us allows us to solve an important issue for the industry: how to evaluate the economic efficiency of anti-fraud models developed for industrial use.
We understand that our SpiderNet does not solve all the anti-fraud modeling problems identified by Bolton, Hand, Provost, and Breiman \cite{Bolton06}. In particular, we still use expert rules designed for specific fraud types to train models. Our B-tests and W-tests partially solve this problem, but there are other strong methods for fraud feature engineering, such as graph methods \cite{Shuhan47, Wang55, Dou12, Wang56}, entropy changing methods \cite{Fu14}, and variance anomaly detection methods (V-tests, similar to B-tests). We plan to work on these topics in our future research.
An important component of SpiderNet is its skip connections, which help forward strong features directly to the output layers of the network. This partially solves the locality problem of convolutions: the order of features in the input vector matters, and rearranging them changes the quality of the model. However, the current implementation of SpiderNet does not completely solve the locality problem. Our future work will focus on this problem.
We also assume that SpiderNet might work well not only for fraud detection but also for other types of modeling tasks that use tabular data. Testing this hypothesis is also a topic for our future research.
\begin{acks}
We are grateful to Dmitry Efimov for his valuable advice.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
In the spiked covariance model proposed by \cite{johnstone2004sparse}, we are given data
${\mathbf x}_1,{\mathbf x}_2,\dots,{\mathbf x}_n$ with ${\mathbf x}_i\in \mathbb{R}^p$ of the form\footnote{Throughout the paper, we follow the convention of denoting
scalars by lowercase, vectors by lowercase boldface, and
matrices by uppercase boldface letters.}:
\begin{align}
{\mathbf x}_i &= \sum_{q=1}^{r}\sqrt{\beta_{q}}\, u_{q,i} \,{\mathbf v} _q+ {\mathbf z}_i\,, \label{eq:model}
\end{align}
Here ${\mathbf v}_1,\dots, {\mathbf v}_r \in\mathbb{R}^p$ is a set of orthonormal vectors, that we want to
estimate, while $u_{q,i} \sim{\sf N}(0, 1)$ and ${\mathbf z}_i \sim{\sf N}(0, {\rm I}_p)$
are independent and identically distributed. The quantity $\beta_q\in
\mathbb{R}_{>0}$ quantifies the signal-to-noise ratio.
We are interested in the high-dimensional limit $n, p\to\infty$ with
$\lim_{n\to\infty}p/n= \alpha\in(0,\infty)$. In the rest of this
introduction we will refer to the rank one case, in order to simplify
the exposition, and drop the subscript $q=\{1,2,\dots,r\}$. Our
results and proofs hold for general bounded rank.
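For concreteness, data from the rank-one model (dropping the subscript $q$) can be simulated as follows; this is a direct transcription of the display above, not part of any estimation procedure:

```python
import numpy as np

def spiked_sample(n: int, p: int, beta: float, v: np.ndarray,
                  rng: np.random.Generator) -> np.ndarray:
    """Draw n samples x_i = sqrt(beta) * u_i * v + z_i with
    u_i ~ N(0, 1) and z_i ~ N(0, I_p); rows are the samples."""
    u = rng.standard_normal(n)
    Z = rng.standard_normal((n, p))
    return np.sqrt(beta) * np.outer(u, v) + Z
```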
The standard method of principal component analysis
involves computing the
sample covariance matrix ${\mathbf G} = n^{-1}\sum_{i=1}^n{\mathbf x}_i{\mathbf x}_i^{{\sf T}}$ and estimates
${\mathbf v}={\mathbf v}_1$ by its principal eigenvector ${\mathbf v}_{\mbox{\tiny{\sc PC}}}({\mathbf G})$. It is a well-known
fact that, in the high dimensional asymptotic $p/n\to \alpha > 0$, this yields an inconsistent estimate
\cite{johnstone2009consistency}.
Namely $\|{\mathbf v}_{\mbox{\tiny{\sc PC}}}-{\mathbf v}\|_2\not\to 0$ in the high-dimensional asymptotic limit,
unless $\alpha\to 0$ (i.e. $p/n\to 0$).
Even worse, Baik, Ben-Arous and P\'ech\'e \cite{baik2005phase} and Paul \cite{paul2007asymptotics} demonstrate a phase transition
phenomenon: if $\beta< \sqrt{\alpha}$ the estimate is asymptotically
orthogonal to the signal $\<{\mathbf v}_{\mbox{\tiny{\sc PC}}},{\mathbf v}\>\to 0$. On the other hand, for $\beta>\sqrt{\alpha}$,
$\<{\mathbf v}_{\mbox{\tiny{\sc PC}}},{\mathbf v}\>$ remains strictly positive as $n,p\to\infty$.
This phase transition phenomenon has attracted considerable attention
recently within random matrix theory
\cite{feral2007largest,capitaine2009largest,benaych2011eigenvalues,knowles2013isotropic}.
These inconsistency results motivated several efforts to exploit
additional structural information on the signal ${\mathbf v}$.
In two influential papers, Johnstone and Lu
\cite{johnstone2004sparse,johnstone2009consistency} considered the
case of a signal ${\mathbf v}$ that is sparse in a suitable basis, e.g. in the
wavelet domain. Without loss of generality, we will assume here that ${\mathbf v}$
is sparse in the canonical basis ${\mathbf e}_1$, \dots ${\mathbf e}_p$.
In a nutshell, \cite{johnstone2009consistency} proposes the following:
\begin{enumerate}
\item Order the diagonal entries of the Gram matrix
${\mathbf G}_{i(1),i(1)}\ge {\mathbf G}_{i(2),i(2)}\ge\dots\ge{\mathbf G}_{i(p),i(p)}$, and let
$J\equiv \{i(1),i(2),\dots,i(k)\}$ be the set of indices
corresponding to the $k$ largest entries.
\item Set to zero all the entries ${\mathbf G}_{i,j}$ of ${\mathbf G}$ unless $i,j\in
J$, and estimate ${\mathbf v}$ with the principal eigenvector of the
resulting matrix.
\end{enumerate}
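A numpy sketch of these two steps, operating on the Gram matrix ${\mathbf G}$ and returning the principal eigenvector of the restricted matrix:

```python
import numpy as np

def diagonal_thresholding(X: np.ndarray, k: int) -> np.ndarray:
    """Johnstone--Lu estimator: keep the k coordinates with the largest
    diagonal entries of the Gram matrix, zero out all other rows and
    columns, and return the principal eigenvector of the result."""
    n, _ = X.shape
    G = X.T @ X / n
    J = np.argsort(np.diag(G))[-k:]          # indices of the k largest G_ii
    G_J = np.zeros_like(G)
    G_J[np.ix_(J, J)] = G[np.ix_(J, J)]
    return np.linalg.eigh(G_J)[1][:, -1]     # eigh sorts eigenvalues ascending
```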
Johnstone and Lu formalized the sparsity assumption by requiring that ${\mathbf v}$ belongs to a weak $\ell_q$-ball
with $q\in (0,1)$. Instead, here we consider a
strict sparsity constraint where ${\mathbf v}$
has exactly $k$ non-zero entries, with magnitudes bounded
below by $\theta/\sqrt{k}$ for some constant $\theta >0$.
Amini and Wainwright \cite{amini2009high} studied
the more restricted case when every entry of ${\mathbf v}$ has equal
magnitude of $1/\sqrt{k}$.
Within this model, it was proved that
diagonal thresholding successfully recovers the support of ${\mathbf v}$
provided ${\mathbf v}$ is sparse enough, namely $k\le C\sqrt{n/\log p}$ with
$C= C(\alpha, \beta)$ a constant \cite{amini2009high}. (Throughout the paper
we denote by $C$ constants that can change from point to point.)
This result is a striking improvement over vanilla
PCA. While the latter requires a number of samples scaling as the
number of parameters\footnote{Throughout the introduction, we write
$f(n)\gtrsim g(n)$ as a shorthand of \emph{`$f(n)\ge C\, g(n)$ for
some constant $C = C(\beta,\alpha)$'.} Further $C$
denotes a constant that may change from point to point.} $n\gtrsim p$, sparse PCA via diagonal thresholding achieves the
same objective with a number of samples scaling as the
number of \emph{non-zero} parameters, $n\gtrsim k^2\log p$.
At the same time, this result is not as strong as might have
been expected. By searching exhaustively over all possible supports
of size $k$ (a method that has complexity of order $p^k$) the correct
support can be identified with high probability as soon as $n\gtrsim
k\log p$. On the other hand, no method can succeed for much smaller
$n$, because of information theoretic obstructions \cite{amini2009high}.
Over the last ten years, a significant effort has been devoted to developing practical
algorithms that outperform diagonal thresholding, see e.g.
\cite{moghaddam2005spectral,zou2006sparse,d2007direct,d2008optimal,witten2009penalized}.
In particular, d'Aspremont et al. \cite{d2007direct} developed a
promising M-estimator based on a semidefinite programming (SDP)
relaxation. Amini and Wainwright \cite{amini2009high} carried out an
analysis of this method and proved that, if \emph{(i)} $k\le C(\beta)\, n/\log
p$, and \emph{(ii)} if the SDP solution has rank one, then the SDP
relaxation provides a
consistent estimator of the support of ${\mathbf v}$.
At first sight, this appears as a satisfactory solution of the
original problem.
No procedure can estimate the support of ${\mathbf v}$ from less than $k\log
p$ samples, and the SDP relaxation succeeds in doing it from --at most-- a constant
factor more samples.
This picture was upset by a recent, remarkable result by Krauthgamer,
Nadler and Vilenchik \cite{KrauthgamerSPCA} who showed that
the rank-one condition assumed by Amini and Wainwright
does not hold for $ \sqrt{n}\lesssim k\lesssim (n/\log p)$. This
result is consistent with recent work of Berthet and Rigollet
\cite{berthet2013computational} demonstrating that sparse PCA cannot be
performed in polynomial time in the regime $k\gtrsim \sqrt{n}$, under
a certain computational complexity conjecture for the so-called
planted clique problem.
In summary, the sparse PCA problem demonstrates a fascinating
interplay between computational and statistical barriers.
\begin{description}
\item[From a statistical perspective,] and disregarding computational
considerations, the support of ${\mathbf v}$ can be estimated consistently
if and only if $k\lesssim n/\log p$. This can be done, for instance,
by exhaustive search over all the $\binom{p}{k}$ possible supports of
${\mathbf v}$. (See \cite{vu2012minimax,cai2013sparse} for a minimax analysis.)
\item[From a computational perspective,] the problem appears to be
much more difficult. There is rigorous evidence
\cite{berthet2013computational, ma2013computational}
that no polynomial
algorithm can reconstruct the support unless $k\lesssim \sqrt{n}$.
On the positive side, a very simple algorithm (Johnstone and Lu's
diagonal thresholding) succeeds for $k\lesssim \sqrt{n/\log p}$.
\end{description}
Of course, several elements are still missing in this emerging
picture. In the present paper we address one of them, providing
an answer to the following question:
\begin{quote}
\emph{Is there a polynomial time algorithm that is guaranteed to solve
the sparse PCA problem with high probability for $\sqrt{n/\log
p}\lesssim k\lesssim \sqrt{n}$?}
\end{quote}
We answer this question positively by analyzing a covariance
thresholding algorithm that proceeds, briefly, as follows.
(A precise, general definition, with some technical changes is given
in the next section.)
\begin{enumerate}
\item Form the empirical covariance matrix ${\mathbf G}$ and set to zero all its entries that
are in modulus smaller than $\tau/\sqrt{n}$, for $\tau$ a suitably
chosen constant.
\item Compute the principal eigenvector $\mathbf{\widehat{v}}_1$ of this thresholded
matrix.
\item Let ${\sf B}\subseteq \{1,\dots,p\}$ be the set of indices
  corresponding to the $k$ largest entries of $\mathbf{\widehat{v}}_1$.
\item Estimate the support of ${\mathbf v}$ by `cleaning' the set ${\sf B}$.
(Briefly, ${\mathbf v}$ is estimated by thresholding ${\mathbf G}\mathbf{\widehat{v}}_{{\sf B}}$
with $\mathbf{\widehat{v}}_{{\sf B}}$ obtained by zeroing the entries outside ${\sf B}$.)
\end{enumerate}
Such a covariance thresholding approach was proposed in
\cite{KrauthgamerSPCA}, and is in turn related to earlier work by
Bickel and Levina \cite{bickel2008regularized}. The formulation
discussed in the next section presents some technical differences that
have been introduced to simplify the analysis. Notice that, to
simplify proofs, we assume $k$ to be known: This issue is discussed in
the next two sections.
The rest of the paper is organized as follows. In the next section we
provide a detailed description of the algorithm and state our main
results. Our theoretical results assume full knowledge of problem
parameters for ease of proof. In light of this, in Section \ref{sec:practical} we discuss a practical implementation
of the same idea that does not require prior knowledge of problem parameters, and is
entirely data-driven. We also illustrate the method through simulations.
The complete proofs are available in the accompanying supplement, in
Sections \ref{sec:prelim}, \ref{sec:proofmain} and \ref{sec:proofcorr} respectively.
\section{Algorithm and main result}
\begin{algorithm}
\caption{Covariance Thresholding}
\label{alg:ct}
\begin{algorithmic}[1]
\State {\bf Input:} Data $({\mathbf x}_i)_{1\le i\le 2n}$, parameters
$(k_q)_{q\le r}\subseteq {\mathbb N}$, $\tau,\rho\in \mathbb{R}_{\ge 0}$;
\State Compute the empirical covariance matrices ${\mathbf G}\equiv \sum_{i=1}^n{\mathbf x}_i{\mathbf x}_i^{{\sf T}}/n$, ${\mathbf G}' \equiv \sum_{i=n+1}^{2n}%
{\mathbf x}_i{\mathbf x}_i^{\sf T}/n$;
\State Compute ${\mathbf{\widehat{\Sigma}}} = {\mathbf G} - {\rm I}_p$ (resp. ${\mathbf{\widehat{\Sigma}}}' = {\mathbf G}'-{\rm I}_p$);
\State Compute the matrix $\eta({\mathbf{\widehat{\Sigma}}})$ by soft-thresholding
the entries of ${\mathbf{\widehat{\Sigma}}}$:
\begin{align*}
\eta({\mathbf{\widehat{\Sigma}}})_{ij}
&= \begin{cases}
{\mathbf{\widehat{\Sigma}}}_{ij}-\frac{\tau}{\sqrt{n}} & \mbox{if
${\mathbf{\widehat{\Sigma}}}_{ij}\ge \tau/\sqrt{n}$,}\\
0& \mbox{if $-\tau/\sqrt{n}<{\mathbf{\widehat{\Sigma}}}_{ij}< \tau/\sqrt{n}$,}\\
{\mathbf{\widehat{\Sigma}}}_{ij}+\frac{\tau}{\sqrt{n}} & \mbox{if
${\mathbf{\widehat{\Sigma}}}_{ij}\le -\tau/\sqrt{n}$,}
\end{cases}
\end{align*}
\State Let $(\mathbf{\widehat{v}}_{q})_{q\le r}$ be the first $r$ eigenvectors of $\eta({\mathbf{\widehat{\Sigma}}})$;
\State Define ${\mathbf s}_q\in\mathbb{R}^p$ by $s_{q,i} = \widehat{v}_{q,i}\mathbb{I}(\abs{\widehat{v}_{q, i}} \ge \theta/(2\sqrt{k_q}))$;
\State {\bf Output:} ${\widehat{\sf Q}} = \{i\in [p]: \;\exists\, q \text{ s.t. } |({\mathbf{\widehat{\Sigma}}}'{\mathbf s}_q)_i|\ge \rho \}$.
\end{algorithmic}
\end{algorithm}
For notational convenience, we shall assume hereafter that $2n$ sample
vectors are given (instead of $n$): $\{{\mathbf x}_i\}_{1\le i\le2n}$. These
are distributed according to the model (\ref{eq:model}).
The number of spikes $r$ will be treated as a
known parameter in the problem.
We will make the following
assumptions:
\begin{enumerate}
\item[{\sf A1}] The number of spikes $r$ and the signal strengths
$\beta_1,\dots,\beta_r$ are fixed as $n,p\to\infty$.
Further $\beta_1>\beta_2>\dots>\beta_r$ are all \emph{distinct}.
\item[{\sf A2}] Let ${\sf Q}_q$ and $k_q$ denote the support of ${\mathbf v}_q$ and its size respectively.
We let ${\sf Q} = \cup_q{\sf Q}_q$ and $k = \sum_q k_q$ throughout.
Then the non-zero
entries of the spikes satisfy $|v_{q,i}|\ge
\theta/\sqrt{k_q}$ for all $i\in {\sf Q}_q$ for some $\theta >0$. Further, for any $q, q'$
we assume $\abs{v_{q, i}/v_{q', i}} \le \gamma$ for every $i\in{\sf Q}_q\cap{\sf Q}_{q'}$, for
some constant $\gamma$.
\end{enumerate}
As before, we are interested in
the high-dimensional limit of $n, p\to\infty$ with $p/n \to
\alpha$. A more detailed description of the covariance
thresholding algorithm for the general model (\ref{eq:model}) is given in Table \ref{alg:ct}.
We describe the basic intuition for the simpler rank-one case
(omitting the subscript $q\in\{1,2,\dots,r\}$), while stating results
in generality.
We start by splitting the data into
two halves: $({\mathbf x}_i)_{1\le i\le n}$ and
$({\mathbf x}_{i})_{n< i\le 2n}$, and compute the corresponding sample covariance matrices ${\mathbf G}$ and
${\mathbf G}'$. As we will see, the matrix ${\mathbf G}$ is used to obtain a
good estimate for the spike ${\mathbf v}$. This estimate, along with the (independent) second part
${\mathbf G}'$, is then used to construct a consistent estimator for the supports of ${\mathbf v}$.
Let us focus on the first phase of the algorithm, which aims to obtain a good
estimate of ${\mathbf v}$. We first compute ${\mathbf{\widehat{\Sigma}}} = {\mathbf G} - {\rm I}$.
For $\beta>\sqrt{\alpha}$, the
principal eigenvector of ${\mathbf G}$, and hence of ${\mathbf{\widehat{\Sigma}}}$, is positively
correlated with ${\mathbf v}$, i.e. $\lim_{n\to\infty}\<\mathbf{\widehat{v}}_1({\mathbf{\widehat{\Sigma}}}),{\mathbf v}\> >0$.
However, for $\beta<\sqrt{\alpha}$, the noise component in ${\mathbf{\widehat{\Sigma}}}$ dominates
and the two vectors become asymptotically
orthogonal, i.e. $\lim_{n\to\infty}\<\mathbf{\widehat{v}}_1({\mathbf{\widehat{\Sigma}}}),{\mathbf v}\> =0$.
In order to reduce the noise level, we must exploit the sparsity of
the spike ${\mathbf v}$.
Denoting by ${\mathbf X}\in\mathbb{R}^{n\times p}$ the matrix with rows ${\mathbf x}_1$,
\dots ${\mathbf x}_n$, by ${\mathbf Z}\in\mathbb{R}^{n\times p}$ the matrix with rows ${\mathbf z}_1$,
\dots ${\mathbf z}_n$, and letting ${\mathbf u} = (u_1,u_2,\dots,u_n)$, the model
(\ref{eq:model}) can be rewritten as
\begin{align}\label{eq:model2}
{\mathbf X} &= \sqrt{\beta}\, {\mathbf u} \,{\mathbf v}^{{\sf T}} + {\mathbf Z}\, .
\end{align}
Hence, letting $\beta' \equiv \beta\|{\mathbf u}\|^2/n\approx\beta$ and ${\mathbf w}
\equiv \sqrt{\beta}{\mathbf Z}^{{\sf T}}{\mathbf u}/n$, we have
\begin{align}
{\mathbf{\widehat{\Sigma}}} &= \beta'\,{\mathbf v}\bv^{{\sf T}} + {\mathbf v}\,{\mathbf w}^{{\sf T}}+{\mathbf w} \, {\mathbf v}^{{\sf T}} + \frac{1}{n}{\mathbf Z}^{{\sf T}}{\mathbf Z}\;\;
- {\rm I}_p\, . \label{eq:SigmaDef}
\end{align}
For a moment, let us neglect the cross terms $({\mathbf v}{\mathbf w}^{{\sf T}}+{\mathbf w}
{\mathbf v}^{{\sf T}})$. The `signal' component $\beta'\,{\mathbf v}\bv^{{\sf T}}$ is sparse
with $k^2$ entries of magnitude $\beta/k$, which (in the regime of
interest, $k =\sqrt{n}/C$) equals $C\beta/\sqrt{n}$. The `noise' component
${\mathbf Z}^{{\sf T}}{\mathbf Z}/n -{\rm I}_p$ is dense with entries of order $1/\sqrt{n}$.
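These magnitudes are simple to verify on synthetic data drawn from the model (\ref{eq:model2}); the sizes and variable names below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, beta = 4000, 1000, 20, 2.0

v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)                 # sparse spike, equal-magnitude entries
u = rng.standard_normal(n)
Z = rng.standard_normal((n, p))
X = np.sqrt(beta) * np.outer(u, v) + Z   # X = sqrt(beta) u v^T + Z

Sigma_hat = X.T @ X / n - np.eye(p)
# off-diagonal entries of the support block concentrate around beta/k
mask = ~np.eye(k, dtype=bool)
print(Sigma_hat[:k, :k][mask].mean())
# entries outside the support are pure noise, averaging to ~0
print(Sigma_hat[k:, k:].mean())
```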
Assuming
$k/\sqrt{n}$ is a small enough constant, it should be possible to remove
most of the noise by thresholding the entries at level of order
$1/\sqrt{n}$. For technical reasons, we use the soft thresholding function
$\eta:\mathbb{R}\times\mathbb{R}_{\ge 0}\to \mathbb{R}, \, \eta(z; \tau) = {\operatorname{\rm{sgn}}}(z)(\abs{z}-\tau)_+$.
We will omit the second argument from $\eta(\cdot; \cdot)$ wherever it is clear from context. Classical denoising theory \cite{DJ94a,johnstone2013function} provides upper bounds on the
estimation error of such a procedure. Note however that these results fall short of our goal.
Classical theory measures estimation error by (element-wise)
$\ell_p$ norm, while here we are interested in the resulting
principal eigenvector. This would require bounding, for instance,
the error in operator norm.
Since the soft thresholding function $\eta(z; \tau/\sqrt{n})$ is affine
when $z \gg \tau/\sqrt{n}$, we would expect that the following decomposition
holds approximately (for instance, in operator norm):
\begin{align} \label{eq:heurDecom}
\eta({\mathbf{\widehat{\Sigma}}}) &\approx \eta\left( \beta'{\mathbf v}\bv^{\sf T} \right) + \eta\left( \frac{1}{n}{\mathbf Z}^{\sf T}{\mathbf Z} -{\rm I}_p\right).
\end{align}
The main technical challenge now is to control the operator norm of
the perturbation $\eta({\mathbf Z}^{\sf T}{\mathbf Z}/n - {\rm I}_p)$.
It is easy to see that $\eta({\mathbf Z}^{\sf T}{\mathbf Z}/n -{\rm I}_p)$ has entries of
variance $\delta(\tau)/n$, for $\delta(\tau)\to 0$ as
$\tau\to\infty$. If entries were independent with mild decay, this would imply --with high probability--
\begin{align}
\norm{\eta\left( \frac{1}{n}{\mathbf Z}^{\sf T}{\mathbf Z}-{\rm I}_p \right)}_2 \lesssim C\delta(\tau),\label{eq:kernRMnorm}
\end{align}
for some constant $C$. Further, the first component
in the decomposition (\ref{eq:heurDecom})
is still approximately rank
one with norm of the order of $\beta'\approx \beta$. Consequently, with standard linear algebra
results on the perturbation of eigenspaces \cite{davis1970sin}, we obtain an error bound $\norm{\mathbf{\widehat{v}} -{\mathbf v}}\lesssim \delta(\tau)/(C'\beta)$.
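The effect anticipated by Eq.~(\ref{eq:kernRMnorm}) is easy to observe numerically: soft thresholding the pure-noise matrix ${\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p$ at a few standard deviations collapses its spectral norm well below the Marchenko--Pastur edge. A small simulation (sizes chosen for speed, not matching any result in the paper):

```python
import numpy as np

def soft(M, t):
    # entrywise soft thresholding: eta(z; t) = sgn(z) (|z| - t)_+
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

rng = np.random.default_rng(0)
n = p = 800                              # alpha = p/n = 1
Z = rng.standard_normal((n, p))
N = Z.T @ Z / n - np.eye(p)              # pure-noise component

norm0 = np.linalg.norm(N, 2)             # approx. 2*sqrt(alpha) + alpha = 3
norms = [np.linalg.norm(soft(N, tau / np.sqrt(n)), 2) for tau in (3.0, 4.0, 5.0)]
print(norm0, norms)                      # thresholded norms shrink as tau grows
```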
Our first theorem formalizes this intuition and provides a bound
on the estimation error in the principal components of $\eta({\mathbf{\widehat{\Sigma}}})$.
\begin{theorem}\label{thm:corr}
Under the spiked covariance model \myeqref{eq:model} satisfying Assumption
{\sf A1}, let $\mathbf{\widehat{v}}_q$ denote the $q^\text{th}$
eigenvector of $\eta({\mathbf{\widehat{\Sigma}}})$
using threshold $\tau$.
For every $\alpha, (\beta_q)_{q=1}^r \in (0, \infty)$, integer $r$ and every ${\varepsilon} >0$
there exist constants, $\tau = \tau({\varepsilon},\alpha, (\beta_q)_{q=1}^r, r, \theta)$ and $ 0 < c_*=c_*({\varepsilon},\alpha, (\beta_q)_{q=1}^r, r, \theta)< \infty$
such that, if $\sum_q k_q = \sum_q|{\rm supp}({\mathbf v}_q)| \le c_*\sqrt{n}$, then
\begin{align}
{\mathbb P}\Big\{\min( \norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}, \norm{\mathbf{\widehat{v}}_q + {\mathbf v}_q}) \le
{\varepsilon}\;\; \forall q\in \{1,\dots,r\}\Big\}\ge 1-\frac{\alpha}{n^4}\, .
\end{align}
\end{theorem}
It is clear from the discussion above that the proof of Theorem \ref{thm:corr}
requires a formalization of \myeqref{eq:kernRMnorm}. Indeed, the spectral properties
of
random matrices of the type $f({\mathbf Z}^{\sf T}{\mathbf Z}/n - {\rm I}_p)$, called
inner-product kernel random matrices, have attracted
recent interest within probability theory \cite{el2010information,el2010spectrum,cheng2012spectrum}.
In this literature, the asymptotic
eigenvalue distribution of a matrix $f({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p)$ is the object of
study. Here $f:\mathbb{R}\to\mathbb{R}$ is a kernel function and is applied entry-wise
to the matrix ${\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p$, with ${\mathbf Z}$ a matrix
as above. Unfortunately, these results do not suffice to prove Theorem \ref{thm:corr} for the
following reasons:
\begin{itemize}
\item The results \cite{el2010information,el2010spectrum} are
perturbative and provide conditions under which the spectrum of $f({\mathbf Z}^{\sf T} {\mathbf Z}/n-{\rm I}_p)$
is close to a rescaling of the spectrum of $({\mathbf Z}^{\sf T} {\mathbf Z}/n-{\rm I}_p)$ (with
rescaling factors depending on the Taylor expansion of $f$ close to $0$).
We are interested instead in a non-perturbative regime in which the
spectrum of $f({\mathbf Z}^{\sf T} {\mathbf Z}/n-{\rm I}_p)$ is very different from the one of
$({\mathbf Z}^{\sf T} {\mathbf Z}/n-{\rm I}_p)$ (and the Taylor expansion is trivial).
\item The authors of \cite{cheng2012spectrum} consider $n$-dependent kernels, but their results are
asymptotic and concern the weak limit of the empirical spectral distribution of $f({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p)$.
This does not yield an upper bound on the spectral norm\footnote{Note that \cite{cheng2012spectrum}
also provide a non-asymptotic bound for the spectral norm of $f({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p)$ via the moment method, but
this bound diverges with $n$ and does not give a result of the type
of \myeqref{eq:kernRMnorm}.}
of $f({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p)$.
\end{itemize}
Our approach to prove Theorem \ref{thm:corr} follows instead the
so-called ${\varepsilon}$-net method: we develop
high probability bounds on the maximum Rayleigh quotient:
\begin{align*}
\max_{{\mathbf y}\in S^{p-1}} \<{\mathbf y}, \eta({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p){\mathbf y}\> &= \max_{{\mathbf y}\in S^{p-1}} \sum_{i, j }\eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_j\>}{n}; \frac{\tau}{\sqrt{n}} \right)y_i y_j,
\end{align*}
where $S^{p-1} = \{{\mathbf y}\in\mathbb{R}^p:\|{\mathbf y}\|=1\}$ is the unit sphere.
Since $\eta({\mathbf Z}^{\sf T}{\mathbf Z}/n -{\rm I}_p)$ is not Lipschitz continuous in the
underlying Gaussian variables ${\mathbf Z}$, concentration does not follow
immediately from Gaussian isoperimetry. We have to develop more
careful (non-uniform) bounds on the gradient of $\eta({\mathbf Z}^{\sf T}{\mathbf Z}/n-{\rm I}_p)$
and show that they imply concentration as required.
While Theorem \ref{thm:corr} guarantees that $\mathbf{\widehat{v}}$ is a reasonable
estimate of the spike ${\mathbf v}$ in $\ell_2$ distance (up to a sign flip),
it does not provide a consistent estimator of its support. This brings us
to the second phase of our algorithm. Although $\mathbf{\widehat{v}}$ is not even expected
to be sparse, it is easy to see that
the largest entries of $\mathbf{\widehat{v}}$ should have significant overlap
with ${\rm supp}({\mathbf v})$. Steps 6, 7 and 8 of the
algorithm exploit this property to construct a consistent estimator
${\widehat{\sf Q}}$ of the support of the spike ${\mathbf v}$. Our second theorem guarantees
that this estimator is indeed consistent.
\begin{theorem}\label{thm:main}
Consider the spiked covariance model of \myeqref{eq:model} satisfying
Assumptions {\sf A1}, {\sf A2}. For any
$\alpha, (\beta_q)_{q\le r} \in (0, \infty)$, $\theta, \gamma>0$ and integer $r$, there exist constants $c_*, \tau, \rho$ dependent on
$\alpha, (\beta_q)_{q\le r}, \gamma, \theta, r$, such that, if $\sum_q k_q = \sum_q|{\rm supp}({\mathbf v}_q)|\le c_*\sqrt{n}$, the Covariance
Thresholding algorithm of Table \ref{alg:ct} recovers the union of supports of ${\mathbf v}_q$
with high probability.
Explicitly, there exists a constant $C>0$ such that
\begin{align}
{\mathbb P}\Big\{{\widehat{\sf Q}} =\cup_q{\rm supp}({\mathbf v}_q) \Big\} \ge 1-\frac{C}{n^4}\, .
\end{align}
\end{theorem}
Before passing to the proofs of Theorem \ref{thm:corr} and Theorem
\ref{thm:main} (respectively in Sections \ref{sec:proofcorr} and
\ref{sec:proofmain} of the Supplementary Material), it is useful to pause for a few remarks.
\begin{remark}
We focus on consistent estimation of the union of the supports
$\cup_q{\rm supp}({\mathbf v}_q)$ of the spikes. In the rank-one case, this
obviously corresponds to the standard support recovery. In the general case,
once the union is correctly estimated,
estimating the individual supports poses no additional difficulty: indeed, since
$|\cup_q{\rm supp}({\mathbf v}_q)|=O(\sqrt{n})$, an extra step with $n$ fresh samples ${\mathbf x}_i$
restricted to ${\widehat{\sf Q}}$ yields consistent estimates for ${\mathbf v}_q$, hence for ${\rm supp}({{\mathbf v}_q})$.
\end{remark}
\begin{remark}
Recovering the signed supports ${\sf Q}_{q,+} = \{i\in[p] : v_{q, i} > 0\}$ and
${\sf Q}_{q,-} = \{i\in[p]: v_{q,i} <0\}$ is possible using the same technique
as recovering the supports ${\rm supp}({\mathbf v}_q)$ above, and poses no additional difficulty.
\end{remark}
\begin{remark}
Assumption {\sf A2} requires $|v_{q,i}|\ge
\theta/\sqrt{k_q}$ for all $i\in {\sf Q}_q$.
This is a standard requirement in the support recovery literature \cite{wainwright2009sharp, meinshausen2006high}.
The second part of assumption {\sf A2} guarantees that when the supports of two spikes overlap, their
entries are roughly of the same order. This is necessary for our proof technique
to go through. Avoiding such an assumption altogether remains an open
question.
\end{remark}
Our covariance thresholding algorithm assumes knowledge of the correct
support sizes $k_q$.
Notice that the same assumption is made in earlier theoretical work, e.g. in
the analysis of SDP-based reconstruction by Amini and Wainwright \cite{amini2009high}.
While this assumption is not realistic in applications, it helps to
focus our exposition on the most challenging aspects of the problem.
Further, a ballpark estimate of $k_q$ (indeed of $\sum_{q}k_q$) is actually sufficient.
Indeed, consider the algorithm obtained by replacing steps 7 and 8 as
follows.
\begin{itemize}
\item[{\sc 7:}] Define ${\mathbf s}'_q\in\mathbb{R}^p$ by
\begin{align} \label{eq:Sprime}
s'_{q, i} = \begin{cases}
\widehat{v}_{q, i} & \mbox{ if } |\widehat{v}_{q, i}| > \theta/(2\sqrt{k_{0}})\\
0 & \mbox{ otherwise.}
\end{cases}
\end{align}
\item[{\sc 8:}] Return
\begin{align} \label{eq:Cprime}
{\widehat{\sf Q}} = \cup_q\{i\in [p]: \; |({\mathbf{\widehat{\Sigma}}}'{\mathbf s}'_q)_i|\ge \rho\}\, .\end{align}
\end{itemize}
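In code, the modified steps read roughly as follows. This is a sketch: the function name is ours, and `Sigma2` stands for the second-half covariance estimate ${\mathbf{\widehat{\Sigma}}}'$:

```python
import numpy as np

def clean_support(v_hats, Sigma2, k0, theta, rho):
    """Steps 7-8 with a ballpark support-size estimate k0.
    v_hats: list of eigenvector estimates; Sigma2: covariance estimate
    built from the second half of the samples."""
    Q = set()
    for v in v_hats:
        # Eq. (Sprime): keep only entries above theta / (2 sqrt(k0))
        s = np.where(np.abs(v) > theta / (2.0 * np.sqrt(k0)), v, 0.0)
        # Eq. (Cprime): indices where |Sigma2 s| crosses the level rho
        scores = np.abs(Sigma2 @ s)
        Q |= set(np.flatnonzero(scores >= rho))
    return sorted(Q)
```

Note that $k_0$ only enters through the entry threshold, which is why an overestimate by a constant factor is tolerable.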
The next theorem shows that this procedure is effective even if $k_0$
overestimates $\sum_q k_q$ by an order of magnitude. Its proof is deferred to
Section \ref{sec:proofmain}.
\begin{theorem}\label{thm:main3}
Consider the spiked covariance model of \myeqref{eq:model}. For any
$\alpha, (\beta_q)_{q\le r} \in (0, \infty)$, let constants $c_*, \tau, \rho$
be given as in Theorem \ref{thm:main}. Further assume $k =
\sum_{q}|{\rm supp}({\mathbf v}_q)|\le c_*\sqrt{n}$, and $k\le k_0\le 20\, k$.
Then, the Covariance
Thresholding algorithm of Table \ref{alg:ct}, with the definitions
in Eqs.~(\ref{eq:Sprime}) and (\ref{eq:Cprime}),
recovers the union of supports of ${\mathbf v}_q$ successfully, i.e.
\begin{align}
{\mathbb P}\Big({\widehat{\sf Q}} = \cup_q{\rm supp}({\mathbf v}_q)\Big) \ge 1-\frac{C}{n^4}\, .
\end{align}
\end{theorem}
\section{Practical aspects and empirical results}\label{sec:practical}
Specializing to the rank one case, Theorems \ref{thm:corr} and \ref{thm:main} show that
Covariance Thresholding succeeds with high probability for a number of
samples $n\gtrsim k^2$, while Diagonal Thresholding requires $n\gtrsim
k^2\log p$. The reader might wonder whether eliminating the $\log p$ factor
has any practical relevance or is a purely conceptual improvement.
Figure \ref{fig:supportRecovery} presents simulations on synthetic data under the
strictly sparse model, and the Covariance Thresholding algorithm of
Table \ref{alg:ct}, used in the proof of Theorem \ref{thm:main}. The
objective is to check whether the $\log p$ factor has an impact at
moderate $p$. We compare this with Diagonal Thresholding.
\begin{figure}[t]
\includegraphics[width=0.33\linewidth]{supportRecoveryDT.pdf}
\includegraphics[width=0.33\linewidth]{supportRecoveryCT.pdf}
\includegraphics[width=0.33\linewidth]{supportRecoveryCTdata.pdf}
\caption{The support recovery phase transitions for Diagonal Thresholding (left) and
Covariance Thresholding (center) and the data-driven version
of Section \ref{sec:practical} (right). For Covariance Thresholding, the
fraction of support recovered correctly \emph{increases} monotonically with
$p$, as long as $k \le c\sqrt{n}$ with $c\approx 1.1$. Further, it
appears to converge to one throughout this region. For Diagonal
Thresholding, the fraction of support recovered correctly
\emph{decreases} monotonically with $p$ for all $k$ of order
$\sqrt{n}$.
This confirms that Covariance Thresholding (with or without knowledge
of the support size $k$) succeeds with high
probability for $k \le c\sqrt{n}$, while Diagonal Thresholding
requires a significantly sparser principal component.
\label{fig:supportRecovery}}
\end{figure}
We plot the empirical
success probability as a function of $k/\sqrt{n}$ for several values
of $p$, with $p=n$. The empirical success probability was computed by
using $100$ independent instances of the problem.
A few observations are of interest: $(i)$ Covariance
Thresholding appears to have a significantly larger success
probability in the `difficult' regime where Diagonal Thresholding
starts to fail; $(ii)$ The curves for Diagonal
Thresholding appear to decrease monotonically with $p$ indicating that
$k$ proportional to $\sqrt{n}$ is not the right scaling for this
algorithm (as is
known from theory); $(iii)$ In contrast, the curves for Covariance
Thresholding become steeper for larger $p$, and, in particular, the
success probability increases with $p$ for $k\le 1.1\sqrt{n}$. This
indicates a sharp
threshold for $k ={\rm const}\cdot\sqrt{n}$, as suggested by our theory.
In terms of practical applicability, our algorithm in Table \ref{alg:ct}
has the shortcoming of requiring knowledge of problem parameters
$\beta_q, r, k_q$. Furthermore, the thresholds $\rho, \tau$ suggested by theory need
not be optimal.
We next describe a
principled approach to estimating (where possible) the parameters
of interest and running the algorithm in a purely data-dependent manner.
Assume the following model, for $i\in [n]$
\begin{align*}
{\mathbf x}_i &= {\boldsymbol{\mu}} + \sum_q\sqrt{\beta_q}u_{q,i}{\mathbf v}_q + \sigma{\mathbf z}_i,
\end{align*}
where ${\boldsymbol{\mu}}\in\mathbb{R}^p$ is a fixed mean vector, $u_{q, i}$ have mean $0$
and variance $1$, and ${\mathbf z}_i$ have mean $0$ and covariance ${\rm I}_p$.
Note that our focus in this section is not on rigorous analysis, but instead to demonstrate
a principled approach to applying covariance thresholding in practice.
We proceed as follows:
\begin{description}
\item [Estimating ${\boldsymbol{\mu}}$, $\sigma$:]
We let $\widehat{\boldsymbol{\mu}} = \sum_{i=1}^n {\mathbf x}_i/n$ be the empirical mean estimate
for ${\boldsymbol{\mu}}$. Further, letting $\overline{\mathbf{X}}={\mathbf X}-\mathbf{1}\widehat{{\boldsymbol{\mu}}}^{\sf T}$, we
see that $pn-(\sum_q k_q)n \approx pn$ entries of $\overline{\mathbf{X}}$ have mean $0$ and variance $\sigma^2$.
We let $\widehat{\sigma} = {{\rm MAD}(\overline{\mathbf{X}})}/{\nu}$
where ${\rm MAD}(\,\cdot\,)$ denotes the median absolute
deviation of the entries of the matrix in the argument, and $\nu$
is a constant scale factor. Guided by the
Gaussian case, we take $\nu = \Phi^{-1}(3/4) \approx 0.6745$.
\item[Choosing $\tau$:]
Although in the statement of the theorem, our choice of $\tau$ depends on the SNR $\beta/\sigma^2$,
we believe this is an artifact of our analysis. Indeed it is reasonable to
threshold `at the noise level', as follows. The noise component of
entry $i,j$ of the sample covariance (ignoring lower order
terms) is given by $\sigma^2\<{\mathbf z}_i, {\mathbf z}_j\>/n$. By the central limit theorem,
$\<{\mathbf z}_i, {\mathbf z}_j\>/\sqrt{n} {\,\stackrel{\mathrm{d}}{\Rightarrow} \,} {\sf N}(0, 1)$. Consequently, $\sigma^2\<{\mathbf z}_i, {\mathbf z}_j\>/n \approx
{\sf N}(0, \sigma^4/n)$, and we need to choose the (rescaled) threshold proportional
to $\sqrt{\sigma^4} = \sigma^2$. Using previous estimates, we let
$\tau = \nu'\cdot \widehat{\sigma}^2$
for a constant $\nu'$. In simulations, a choice $3\lesssim \nu' \lesssim 4$ appears to work well.
\item[Estimating $r$:]
We define ${\mathbf{\widehat{\Sigma}}} = \overline{\mathbf{X}}^{\sf T}\overline{\mathbf{X}}/n-\sigma^2{\rm I}_p$ and soft threshold it
to get $\eta({\mathbf{\widehat{\Sigma}}})$ using $\tau$ as above.
Our proof of Theorem \ref{thm:corr} relies on the fact
that $\eta({\mathbf{\widehat{\Sigma}}})$ has $r$ eigenvalues that are separated from the
bulk of the spectrum. Hence, we estimate $r$ using $\widehat{r}$: the number of
eigenvalues separated from the bulk in $\eta({\mathbf{\widehat{\Sigma}}})$. The edge of the
spectrum can be computed numerically using the Stieltjes transform method as
in \cite{cheng2012spectrum}.
\item[Estimating ${\mathbf v}_q$:]
Let $\mathbf{\widehat{v}}_q$ denote the $q^{\text{th}}$
eigenvector of $\eta({\mathbf{\widehat{\Sigma}}})$. Our theoretical analysis indicates that $\mathbf{\widehat{v}}_q$ is
expected to be close to ${\mathbf v}_q$. In order to denoise $\mathbf{\widehat{v}}_q$, we assume
$\mathbf{\widehat{v}}_q\approx (1-\delta){\mathbf v}_q + {\boldsymbol{\eps}}_q$, where ${\boldsymbol{\eps}}_q$ is additive random noise.
We then threshold $\mathbf{\widehat{v}}_q$ `at the
noise level' to recover a better estimate of ${\mathbf v}_q$. To do this, we
estimate the standard deviation of the ``noise'' ${\boldsymbol{\eps}}_q$ by
$\widehat{\sigma}_{{\boldsymbol{\eps}}_q}
= {{\rm MAD}(\mathbf{\widehat{v}}_q)}/{\nu}$. Here we set --again guided by the Gaussian heuristic-- $\nu \approx 0.6745$. Since ${\mathbf v}_q$ is sparse,
this procedure returns a good estimate for the size of the noise
deviation. We let $\eta_{H}(\mathbf{\widehat{v}}_q)$ denote the vector obtained by hard
thresholding $\mathbf{\widehat{v}}_q$: set $(\eta_H(\mathbf{\widehat{v}}_q))_i = \widehat{v}_{q,i}$ if
$\abs{\widehat{v}_{q,i}} \ge \nu' \widehat{\sigma}_{{\boldsymbol{\eps}}_q}$
and $0$ otherwise.
We then let $\mathbf{\widehat{v}}^*_q = \eta_H(\mathbf{\widehat{v}}_q)/\norm{\eta_H(\mathbf{\widehat{v}}_q)}$ and return $\mathbf{\widehat{v}}^*_q$
as our estimate for ${\mathbf v}_q$.
as our estimate for ${\mathbf v}_q$.
\end{description}
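The estimators of ${\boldsymbol{\mu}}$, $\sigma$ and the threshold $\tau$ described above translate directly into a short NumPy sketch (the function names are ours; the default $\nu'=3.5$ is one choice within the suggested range $[3,4]$):

```python
import numpy as np

PHI_3_4 = 0.6744897501960817          # Phi^{-1}(3/4), Gaussian MAD constant nu

def mad(a):
    """Median absolute deviation of all entries of the array."""
    a = np.asarray(a).ravel()
    return np.median(np.abs(a - np.median(a)))

def data_driven_params(X, nu_prime=3.5):
    """Estimate the mean, the noise scale sigma, and the soft threshold tau
    from the data alone, as outlined in this section."""
    mu_hat = X.mean(axis=0)
    Xbar = X - mu_hat                      # centered data
    sigma_hat = mad(Xbar) / PHI_3_4        # MAD-based noise scale
    tau = nu_prime * sigma_hat ** 2        # rescaled threshold ~ sigma^2
    return mu_hat, sigma_hat, tau
```

Since the spikes touch only a vanishing fraction of the entries of $\overline{\mathbf{X}}$, the MAD-based scale estimate is essentially unaffected by the signal.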
Note that --while different in several respects-- this empirical approach
shares the same philosophy of the algorithm in Table
\ref{alg:ct}.
On the other hand, the data-driven algorithm presented in this section is less
straightforward to analyze, a task that we defer to future work.
Figure \ref{fig:supportRecovery} also shows results of a support recovery
experiment using the `data-driven' version of this section. Covariance thresholding
in this form also appears to work for supports of size $k \le \text{const}\sqrt{n}$.
Figure \ref{fig:threePeak} shows the performance of vanilla PCA, Diagonal Thresholding
and Covariance Thresholding on the ``Three Peak'' example of Johnstone and Lu \cite{johnstone2004sparse}.
This signal is sparse in the wavelet domain and the simulations employ the data-driven
version of covariance thresholding. A similar experiment with the ``box'' example of Johnstone
and Lu is provided in the supplement. These experiments demonstrate that, while for large values of $n$ both Diagonal
Thresholding and Covariance Thresholding perform well, the latter
appears superior for smaller values of $n$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{threePeak.pdf}
\caption{The results of Simple PCA, Diagonal Thresholding and Covariance Thresholding (respectively) for
the ``Three Peak'' example of Johnstone and Lu \cite{johnstone2009consistency} (see Figure 1
of the paper). The signal is sparse in the `Symmlet 8' basis. We use $\beta = 1.4, p=4096$, and the rows correspond to sample sizes $n=1024, 1625, 2580, 4096$
respectively. Parameters for Covariance Thresholding are chosen as in Section \ref{sec:practical},
with $\nu' = 4.5$. Parameters for Diagonal Thresholding are from \cite{johnstone2009consistency}. On each curve,
we superpose the clean signal (dotted). \label{fig:threePeak}}
\end{figure}
\section{Proof preliminaries}\label{sec:prelim}
In this section we review some notation and preliminary facts
that we will use throughout the paper.
\subsection{Notation}
We let $[m] = \{1,2,\dots,m\}$ denote the set of first $m$ integers.
We will represent vectors using boldface lower case letters,
e.g. ${\mathbf u}, {\mathbf v}, {\mathbf x}$. The entries of a vector ${\mathbf u}\in\mathbb{R}^n$ will be
represented by $u_i, i\in[n]$.
Matrices are represented using boldface upper
case letters e.g. ${\mathbf A}, {\mathbf X}$. The entries of a matrix ${\mathbf A} \in\mathbb{R}^{m\times n}$ are
represented by ${\mathbf A}_{ij}$ for $i\in[m], j\in[n]$.
Given a matrix ${\mathbf A}\in\mathbb{R}^{m\times n}$, we generically let ${\mathbf a}_1$,
${\mathbf a}_2, \dots, {\mathbf a}_m$ denote its rows, and ${\mathbf{\tilde{a}}}_1$,
${\mathbf{\tilde{a}}}_2, \dots, {\mathbf{\tilde{a}}}_n$ its columns.
For $E\subseteq [m]\times [n]$, we define the projector operator
${\cal P}_E:\mathbb{R}^{m\times n}\to \mathbb{R}^{m\times n}$ by letting
${\cal P}_E({\mathbf A})$ be the matrix with entries
\begin{align}
{\cal P}_{E}({\mathbf A})_{ij} = \begin{cases}
{\mathbf A}_{ij} & \mbox{if $(i,j)\in E$,}\\
0 & \mbox{otherwise.}
\end{cases}
\end{align}
If $E = E_1\times E_2$, we write ${\cal P}_{E_1, E_2}$ for
${\cal P}_{E_1\times E_2}$. In the case $E=E_1\times E_2$ we
also define a projection operator ${\widetilde{\cal P}}_{E_1, E_2}:\mathbb{R}^{m\times n}\to
\mathbb{R}^{|E_1|\times|E_2|}$ that returns the $E_1\times E_2$ submatrix.
If $m=n$, and $E$ is the diagonal,
we write ${\mathcal{P}_{\sf d}}$ for ${\cal P}_E$. If instead $E$ is the complement of
the diagonal, we write ${\mathcal{P}_{\sf nd}}$.
For a matrix ${\mathbf A}\in\mathbb{R}^{m\times n}$, and a set $E\subseteq[n]$, we
define its column restriction ${\mathbf A}_{E}\in\mathbb{R}^{m\times n}$ to be the
matrix obtained by setting to $0$ columns outside $E$:
\begin{align*}
({\mathbf A}_{E})_{ij} &= \begin{cases}
{\mathbf A}_{ij} &\text{ if }j\in E,\\
0 &\text{otherwise. }
\end{cases}
\end{align*}
Similarly ${\mathbf y}_E$ is obtained from ${\mathbf y}$ by setting to zero all
indices outside $E$.
The operator norm of a matrix ${\mathbf A}$ is denoted by $\norm{{\mathbf A}}$ (or
$\norm{{\mathbf A}}_2$)
and its Frobenius norm by $\norm{{\mathbf A}}_F$. We write $\norm{{\mathbf x}}$ for the standard $\ell_2$
norm of a vector ${\mathbf x}$.
We let ${\sf Q}_q$ denote the support of the $q^{\text{th}}$
spike ${\mathbf v}_q$. Also, we denote the union of the supports of ${\mathbf v}_q$ by ${\sf Q}=\cup_q{\sf Q}_q$.
The complement of a set $E\subseteq[n]$ is denoted by $E^c$.
We write $\eta(\cdot; \cdot)$ for the soft-thresholding function.
By $\partial\eta(\cdot ; \tau)$ we denote the derivative of $\eta(\cdot; \tau)$ with respect
to the \emph{first} argument, which exists Lebesgue almost everywhere.
In the statements of our results, we consider the limit of large $p$ and large
$n$ with $p/n\to\alpha$. This limit will be referred to either as ``$n$ large enough''
or ``$p$ large enough'', where the phrase ``large enough'' indicates dependence of $p$ (and
thereby $n$) on specific problem parameters.
\subsection{Preliminary facts}
Let $S^{n-1}$ denote the unit sphere in $n$ dimensions, i.e. $S^{n-1} = \{{\mathbf x} : \norm{{\mathbf x}} = 1\}$.
We use the following definition (see \cite{Vershynin-CS}) of
the ${\varepsilon}$-net of a set $X\subseteq\mathbb{R}^n$:
\begin{definition}[Nets, Covering numbers]\label{def:nets}
A subset $T^{\varepsilon}(X)\subseteq X$ is called an ${\varepsilon}$-net of $X$ if every point in
$X$ may be approximated by one in $T^{\varepsilon}(X)$ with error at most ${\varepsilon}$. More precisely:
\begin{align*}
\forall x\in X,\quad \inf_{y\in T^{\varepsilon}(X)} \norm{x - y} &\le {\varepsilon}.
\end{align*}
The minimum cardinality of an ${\varepsilon}$-net of $X$, if finite, is called its covering number.
\end{definition}
The following two facts are useful while using ${\varepsilon}$-nets to bound the
spectral norm of a matrix. For proofs, we refer the reader to \cite{Vershynin-CS}.
\begin{lemma}
Let $S^{n-1}$ be the unit sphere in $n$ dimensions. Then there exists an ${\varepsilon}$-net
of $S^{n-1}$, $T^{\varepsilon}(S^{n-1})$ satisfying:
\begin{align*}
|T^{\varepsilon}(S^{n-1})| \le \left( 1+ \frac{2}{{\varepsilon}} \right)^n.
\end{align*}
\label{lem:epsnetcard}
\end{lemma}
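A greedy construction makes Definition \ref{def:nets} concrete: any maximal ${\varepsilon}$-separated subset of a point set is automatically an ${\varepsilon}$-net for it, and the standard volume argument bounds its size by $(1+2/{\varepsilon})^n$, as in Lemma \ref{lem:epsnetcard}. A small numerical illustration on $S^2$ (the sampling scheme is our own):

```python
import numpy as np

def greedy_eps_net(points, eps):
    """Greedily keep points that are > eps away from all kept points.
    The result is eps-separated, hence every input point is within eps
    of some kept point (otherwise it would have been kept)."""
    net = []
    for x in points:
        if all(np.linalg.norm(x - y) > eps for y in net):
            net.append(x)
    return net

rng = np.random.default_rng(0)
# sample points on S^2, the unit sphere in R^3
P = rng.standard_normal((2000, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)

eps = 0.5
net = greedy_eps_net(P, eps)
# every sampled point is eps-close to the net, by construction
assert all(min(np.linalg.norm(x - y) for y in net) <= eps for x in P)
print(len(net), (1 + 2 / eps) ** 3)    # net size vs. the (1+2/eps)^n bound
```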
\begin{lemma}
Let ${\mathbf A}\in\mathbb{R}^{n\times n}$ be a symmetric matrix. Then:
\begin{align*}
\norm{{\mathbf A}}_2 = \sup_{{\mathbf x}\in S^{n-1}}|\<{\mathbf x}, {\mathbf A}{\mathbf x}\>| \le (1-2{\varepsilon})^{-1}\sup_{{\mathbf x}\in T^{\varepsilon}(S^{n-1})} |\<{\mathbf x}, {\mathbf A}{\mathbf x}\>|.
\end{align*}
In particular, if ${\mathbf A}$ is a random matrix, then for $\Delta>0$ we have:
\begin{align*}
{\mathbb P}\left\{ \norm{{\mathbf A}}_2 \ge \Delta \right\} &\le \left( 1+\frac{2}{{\varepsilon}} \right)^n \sup_{{\mathbf x}\in T^{\varepsilon}(S^{n-1}) }{\mathbb P}\left\{ \abs{\<{\mathbf x}, {\mathbf A}{\mathbf x}\>} \ge \Delta(1-2{\varepsilon}) \right\}.
\end{align*}
\label{lem:specnormbnd}
\end{lemma}
Throughout the paper we will denote by $T^{\varepsilon}_n$ the \emph{minimum cardinality} ${\varepsilon}$-net on the unit sphere
$S^{n-1}$, which naturally satisfies Lemma \ref{lem:epsnetcard}. Further, for a non-zero vector ${\mathbf y}\in\mathbb{R}^n$, we define
the set $S^{n-1}_{\mathbf y} = \{{\mathbf x} : \<{\mathbf x}, {\mathbf y}\>=0, \norm{{\mathbf x}}=1\}$ and let its minimum cardinality ${\varepsilon}$-net
be denoted by $T^{\varepsilon}_n({\mathbf y})$. Since $S^{n-1}_{\mathbf y}$ is isometric to $S^{n-2}$, Lemma \ref{lem:epsnetcard}
holds for $T^{\varepsilon}_{n}({\mathbf y})$ as well.
We now state some measure concentration results that we will use
at various points in the proofs of Theorems \ref{thm:corr} and \ref{thm:main}.
\begin{lemma}
Let ${\mathbf z}\sim{\sf N}(0, {\rm I}_N)$ be a vector of $N$ i.i.d. standard normal
random variables on a probability space $(\Omega,{\cal F}, {\mathbb P})$. Suppose
$F:\mathbb{R}^N\to\mathbb{R}$ is a continuous, a.e. differentiable function
and $G\in{\mathcal{B}}_{\mathbb{R}^N}$ is a closed convex set satisfying:
\begin{align*}
\norm{{\nabla} F({\mathbf z})}\mathbb{I}({\mathbf z}\in G)&\le L \quad {\mathbb P}\emph{-a.e.} \\
{\mathbb P}\left\{ G \right\} &\ge 1-q.
\end{align*}
Then, there exists a function $F_L:\mathbb{R}^N\to \mathbb{R}$ such that $F_L$ is
$L$-Lipschitz throughout and $F_L$ coincides with $F$ on the set $G$. Further
for each $\Delta>0$ we have that:
\begin{align*}
{\mathbb P}\left\{ |F({\mathbf z}) - \E F({\mathbf z})| \ge \Delta \right\} &\le q + 2\exp\left( -\frac{\widetilde{\Delta}^2}{2L^2} \right),
\end{align*}
where $\widetilde{\Delta} = \Delta - |\E F({\mathbf z}) - \E F_L({\mathbf z})|$.
\label{lem:basicConc}
\end{lemma}
\begin{proof}
For any ${\mathbf y}, {\mathbf y}' \in G$ we have that:
\begin{align*}
F({\mathbf y}') &= F({\mathbf y}) + \int_{0}^{1} \<{\nabla} F(t {\mathbf y}' + (1-t){\mathbf y}), {\mathbf y}' - {\mathbf y}\>
\mathrm{d} t .
\end{align*}
From this we obtain that $|F({\mathbf y}') - F({\mathbf y})| \le L\norm{{\mathbf y}' - {\mathbf y}}$ using the bound
on ${\nabla}{F}$ in $G$ and the convexity of $G$. By Kirszbraun's theorem, there
exists an $L$-Lipschitz extension $F_L$ of $F$ to $\mathbb{R}^N$. Indeed we may
take $F_L({\mathbf y}) = \inf_{{\mathbf y}'\in G} \left(F({\mathbf y}') + L\norm{{\mathbf y} - {\mathbf y}'}\right)$. Then:
\begin{align*}
{\mathbb P}\left\{ |F({\mathbf z}) - \E F({\mathbf z})| \ge \Delta \right\} &= %
{\mathbb P}\left\{ |F({\mathbf z}) - \E F({\mathbf z})| \ge \Delta ; {\mathbf z} \in G \right\} + %
{\mathbb P}\left\{ |F({\mathbf z}) - \E F({\mathbf z})| \ge \Delta ; {\mathbf z}\in G^c \right\} \\
&\le {\mathbb P}\{ |F_L({\mathbf z}) -\E F_L({\mathbf z})| \ge \widetilde{\Delta}\} + {\mathbb P}\{G^c\}
\end{align*}
Applying Gaussian concentration of measure \cite{Ledoux} to $F_L$ finishes the proof.
\end{proof}
For further reference, we define the following:
\begin{definition}
For a function $F:\mathbb{R}^N\to\mathbb{R}$, a constant $L>0$ and a measurable set $G$, we call
$F_L(\cdot)$ the \emph{$G, L$-Lipschitz extension} of $F(\cdot)$. It is given by:
\begin{align*}
F_{L}\left({\mathbf y} \right) &= \inf_{{\mathbf y}' \in G} \left( F({\mathbf y}') + L\norm {{\mathbf y} - {\mathbf y}'}\right).
\end{align*}
\end{definition}
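As a concrete illustration of the $G, L$-Lipschitz extension (an illustrative sketch assuming \texttt{numpy}; the grid-based infimum below is a numerical stand-in for the exact infimum), take $F(y)=y^2$ with $G=[-1,1]$, so that $L=2$ bounds $|F'|$ on $G$:

```python
import numpy as np

def F(y):
    return y ** 2            # |F'(y)| = 2|y| <= 2 on G = [-1, 1]

L = 2.0
G = np.linspace(-1.0, 1.0, 201)      # dense grid standing in for the set G

def F_L(y):
    # inf over y' in G of F(y') + L * |y - y'|
    return np.min(F(G) + L * np.abs(y - G))

# F_L agrees with F on G, but grows only linearly outside,
# so it is globally 2-Lipschitz while F itself is not.
print(F_L(0.5), F(0.5))     # 0.25 and 0.25: they agree on G
print(F_L(5.0), F(5.0))     # 9.0 vs 25.0: linear growth outside G
```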
\begin{lemma}
Let ${\mathbf A}\in\mathbb{R}^{M\times N}$ be a matrix with i.i.d. standard normal
entries, i.e. ${\mathbf A}_{ij}\sim{\sf N}(0, 1)$. Then, for every $t\ge 0$:
\begin{align*}
{\mathbb P}\left\{ \norm{{\mathbf A}}_2 \ge \sqrt{M} + \sqrt{N} + t \right\} &\le \exp\left( -\frac{t^2}{2} \right).
\end{align*}
\label{lem:gaussianmatnorm}
\end{lemma}
The proof of this result can be found in \cite{Vershynin-CS}.
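Lemma \ref{lem:gaussianmatnorm} is easy to probe empirically (a sketch assuming \texttt{numpy}; the dimensions and number of trials are arbitrary): the fraction of draws whose spectral norm exceeds $\sqrt{M}+\sqrt{N}+t$ should not exceed $e^{-t^2/2}$, and in practice is far smaller.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, t, trials = 60, 40, 3.0, 200
exceed = 0
for _ in range(trials):
    A = rng.standard_normal((M, N))
    if np.linalg.norm(A, 2) >= np.sqrt(M) + np.sqrt(N) + t:
        exceed += 1
# bound: exp(-t^2/2) ~ 0.011; the empirical fraction should be below it
print(exceed / trials, np.exp(-t ** 2 / 2))
```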
\section{Proof of Theorems \ref{thm:main} and \ref{thm:main3}}\label{sec:proofmain}
In this section we prove Theorem \ref{thm:main}
and Theorem \ref{thm:main3}, assuming that Theorem
\ref{thm:corr} holds. The proof of the latter can be found in Section
\ref{sec:proofcorr}.
\subsection{Proof of Theorem \ref{thm:main} }
Fix any ${\varepsilon}>0$ and assume $\sum_q k_q \le \sqrt{n\log\tau/\tau^3}$,
where $\tau = \tau({\varepsilon}, {\underline{\beta}}, \alpha)$ as per Theorem \ref{thm:corr}.
Then we have for every $q$, $\norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q} \le {\varepsilon}$
with probability at least $1-C/n^4$ for some constant $C>0$.
Throughout the proof, we will work on this favorable event of Theorem
\ref{thm:corr}, namely use
\begin{align}
{\mathbb P}\Big({\widehat{\sf Q}} \neq \cup_q{\rm supp}({\mathbf v}_q)\Big) \le
{\mathbb P}\Big({\widehat{\sf Q}} \neq \cup_q{\rm supp}({\mathbf v}_q) ;\;\; \norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}^2 \le {\varepsilon}^2\Big) +\frac{C}{n^4}
\, ,
\end{align}
hence focusing on bounding the first term on the right hand side.
It is convenient to isolate the following lemma.
\begin{lemma}\label{lem:AboutS}
Assume $\|\mathbf{\widehat{v}}_q-{\mathbf v}_q\|^2\le {\varepsilon}^2$ and that $\abs{v_{q, i}} \ge \theta/\sqrt{k_q}$ for every $i\in{\rm supp}({\mathbf v}_q)$.
Let ${\sf B}_q \equiv {\rm supp}({\mathbf s}_q)$ with ${\mathbf s}_q$
defined as per Algorithm \ref{alg:ct}, step 7. Then $|{\sf B}_q\triangle{\sf Q}_q|
\le 4{\varepsilon}^2 k_q/\theta^2$ and hence $|{\sf B}_q\cap{\sf Q}_q|\ge (1-4{\varepsilon}^2/\theta^2)k_q$.
(Here $\triangle$ denotes
the symmetric set-difference.) Further $\min(\norm{{\mathbf s}_q - {\mathbf v}_q}^2, \norm{{\mathbf s}_q+{\mathbf v}_q}^2) \le 5{\varepsilon}^2$.
\end{lemma}
\begin{proof}
Recall that $s_{q, i} = \widehat{v}_{q, i}\mathbb{I}(\abs{\widehat{v}_{q,i}}\ge \theta/2\sqrt{k_q})$.
Since $\abs{v_{q, i}} \ge \theta/\sqrt{k_q}$:
\begin{align*}
{\sf B}_q \triangle {\sf Q}_q &\subseteq \left\{i: \abs{v_{q, i} - \widehat{v}_{q, i}} \ge \frac{\theta}{2\sqrt{k_q}}\right\}.
\end{align*}
Thus $\abs{{\sf B}_q\triangle{\sf Q}_q} \le 4k_q\norm{\mathbf{\widehat{v}}_q -{\mathbf v}_q}^2/\theta^2 \le 4{\varepsilon}^2k_q/\theta^2$.
Now we bound the error $\norm{{\mathbf s}_q - {\mathbf v}_q}$, assuming that
$\norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q} \le {\varepsilon}$. The other case is handled in
an analogous fashion:
\begin{align*}
\norm{{\mathbf s}_q - {\mathbf v}_q}^2 &= \sum_{i\in{\sf Q}_q} (\widehat{v}_{q, i}\mathbb{I}(|\widehat{v}_{q, i}|\ge\theta/2\sqrt{k_q}) - v_{q, i})^2 %
+ \sum_{i\in{\sf Q}_q^c} (\widehat{v}_{q, i} )^2\mathbb{I}(\abs{\widehat{v}_{q, i}}\ge \theta/2\sqrt{k_q})\\
&=\sum_{i\in{\sf Q}_q}v_{q, i}^2 \mathbb{I}(\abs{\widehat{v}_{q, i}} \le \theta/2\sqrt{k_q})
+ \sum_{i\in{\sf Q}_q} (\widehat{v}_{q, i} - v_{q, i})^2 \mathbb{I}(\abs{\widehat{v}_{q, i}} \ge \theta/2\sqrt{k_q}) %
+ \sum_{i\in{\sf Q}_q^c} (\widehat{v}_{q, i} )^2\mathbb{I}(\abs{\widehat{v}_{q, i}}\ge \theta/2\sqrt{k_q}) \\
&\le \sum_{i\in{\sf Q}_q} v_{q, i}^2 \mathbb{I}\big(\abs{\widehat{v}_{q, i} -v_{q, i}} \ge \abs{v_{q, i}} - \theta/(2\sqrt{k_q})\big) + \norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}^2 \\
&\le \sum_{i\in{\sf Q}_q} \frac{v_{q, i}^2}{(\abs{v_{q, i}} - \theta/2\sqrt{k_q})^2} (\widehat{v}_{q, i} - v_{q, i})^2 + \norm{\mathbf{\widehat{v}}_q -{\mathbf v}_q}^2 \\
&\le 5\norm{\mathbf{\widehat{v}}_q -{\mathbf v}_q}^2 \le 5{\varepsilon}^2.
\end{align*}
The first inequality above follows from triangle inequality as
$\abs{\widehat{v}_{q, i}} \ge \abs{v_{q, i}} - \abs{\widehat{v}_{q, i} - v_{q, i}}$.
The second inequality employs $\mathbb{I}(z\ge z') \le (z/z')^2$. The final
inequality uses the fact that $\abs{v_{q, i}} \ge \theta/\sqrt{k_q}$
implies $\abs{v_{q, i}}/(\abs{v_{q, i}} - \theta/(2\sqrt{k_q})) \le 2$.
\end{proof}
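The thresholding step of Lemma \ref{lem:AboutS} can be simulated directly (an illustrative sketch assuming \texttt{numpy}; the parameter values are arbitrary): given an estimate within ${\varepsilon}$ of a $k$-sparse unit vector whose nonzero entries have magnitude $\theta/\sqrt{k}$, hard thresholding at $\theta/(2\sqrt{k})$ recovers the support up to the stated symmetric difference.

```python
import numpy as np

rng = np.random.default_rng(3)
p, k, theta, eps = 500, 20, 1.0, 0.1

# k-sparse unit vector v with |v_i| = theta / sqrt(k) on its support
v = np.zeros(p)
supp = rng.choice(p, size=k, replace=False)
v[supp] = rng.choice([-1.0, 1.0], size=k) * theta / np.sqrt(k)

# estimate vhat with ||vhat - v|| = eps / 2 <= eps
noise = rng.standard_normal(p)
vhat = v + noise * (eps / 2) / np.linalg.norm(noise)

# hard-threshold at theta / (2 sqrt(k)), as in the lemma
s = vhat * (np.abs(vhat) >= theta / (2 * np.sqrt(k)))
B, Q = set(np.flatnonzero(s)), set(supp)
print(len(B ^ Q), 4 * eps**2 * k / theta**2)      # symmetric difference bound
print(np.linalg.norm(s - v) ** 2, 5 * eps**2)     # error bound
```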
Now we are in position to prove the main theorem. Without loss of
generality, we will
assume that $\<\mathbf{\widehat{v}}_q, {\mathbf v}_q\> >0$ for every $q$. The other case is treated in the same way.
Recall that ${\mathbf{\widehat{\Sigma}}}'$ was formed from the samples $({\mathbf x}_i)_{n< i\le 2n}$, which
are independent of $\mathbf{\widehat{v}}_q$ and hence of ${\sf B}_q$. We let ${\mathbf X}'\in\mathbb{R}^{n\times p}$
denote the matrix with rows $({\mathbf x}_i)_{n<i\le 2n}$; in the same fashion
as \myeqref{eq:model2}, ${\mathbf X}' = \sum_q\sqrt{\beta_q}{\mathbf u}^{\prime}_q({\mathbf v}_q)^{\sf T} + {\mathbf Z}'$. We let ${\mathbf{\tilde{z}}}'_i$, $1\le i\le p$,
denote the columns of ${\mathbf Z}'$.
For any $i$:
\begin{align*}
({\mathbf{\widehat{\Sigma}}}'{\mathbf s}_1)_i &= \frac{\beta_1\norm{{\mathbf u}^{\prime}_1}^2\<{\mathbf v}_1, {\mathbf s}_1\>v_{1,i}}{n} %
+ \sum_{q\ne 1} \frac{\beta_q\norm{{\mathbf u}^{\prime}_q}_2^2\<{\mathbf v}_q, {\mathbf s}_1\>v_{q,i}}{n} %
+ \sum_{q\ge 1} \frac{\sqrt{\beta_q}}{n} (\<{\mathbf Z}^{\prime {\sf T}}{\mathbf u}^{\prime}_q, {\mathbf s}_1\> v_{q,i} + \<{\mathbf v}_q, {\mathbf s}_1\>({\mathbf Z}^{\prime {\sf T}}{\mathbf u}^{\prime}_q)_i) \\%
& + \sum_{q'> q} \frac{\sqrt{\beta_q\beta_{q'}}}{n}\<{\mathbf u}^{\prime}_q, {\mathbf u}^{\prime}_{q'}\> (v_{q, i} \<{\mathbf v}_{q'}, {\mathbf s}_1\> + v_{q',i} \<{\mathbf v}_q, {\mathbf s}_1\>) %
+\frac{1}{n} \sum_{j\in{\sf B}_1, j\ne i} \<{\mathbf{\tilde{z}}}^{\prime}_j, {\mathbf{\tilde{z}}}^{\prime}_i\>s_{1,j} %
+ \bigg( \frac{\lVert{\mathbf{\tilde{z}}}^{\prime}_i\rVert^2}{n} - 1 \bigg)s_{1, i}.
\end{align*}
Let $T_1, T_2, \dots, T_6$ denote the six terms above, in order.
Firstly, by a standard calculation $n/2\le \norm{{\mathbf u}^{\prime}_q}_2^2 \le 2n$
and $\max_{q\ne q'} |\<{\mathbf u}^{\prime}_q, {\mathbf u}^{\prime}_{q'}\>| \le \sqrt{Cn\log n}$ with probability
at least $1-rn^{-10}$ for some constant $C$. Further, using Lemma \ref{lem:AboutS}
and Cauchy-Schwarz we have that $\<{\mathbf v}_1, {\mathbf s}_1\> \ge (1 - 5{\varepsilon}^2)$ and, for $q\ne 1$, $|\<{\mathbf v}_q, {\mathbf s}_1\>| \le \norm{{\mathbf v}_1 - {\mathbf s}_1} \le 3{\varepsilon}$.
This
implies that:
\begin{align*}
\abs{T_1} &\ge \frac{\beta_1(1-5{\varepsilon}^2)\abs{v_{1,i}}}{2},\\
\abs{T_2} &\le 6{\varepsilon}\sum_{q>1}\beta_q\abs{v_{q, i}}, \\
\abs{T_4} &\le C((\beta_q)_{q\le r})\sqrt{\frac{\log n}{n}}.\\
\end{align*}
Now consider the term
$T_5 = \sum_{j\in{\sf B}_1\backslash i }\<{\mathbf{\tilde{z}}}'_i, {\mathbf{\tilde{z}}}'_j\>s_{1, j}/n = \<{\mathbf{\tilde{z}}}'_i,\sum_{j\in{\sf B}_1\backslash i}s_{1, j}{\mathbf{\tilde{z}}}'_j\>/n$.
Thus, $T_5 {\,\stackrel{\mathrm{d}}{=} \,} Y_{ij}\equiv \norm{{\mathbf s}_1}\<{\mathbf{\tilde{z}}}_i', {\mathbf{\tilde{z}}}_j'\>/n$ for any $j\ne i$. Conditional on ${\mathbf{\tilde{z}}}'_j$,
$Y_{ij}\sim{\sf N}(0, \lVert{{\mathbf{\tilde{z}}}'_j}\rVert^2\norm{{\mathbf s}_1}^2/n^2)$. Using the Chernoff bound,
$\norm{{\mathbf{\tilde{z}}}'_j}^2\le 2n$ with probability at least $1-\exp(-n/8)$ and, conditional on this event,
$\abs{Y_{ij}} \le \sqrt{C'\log n/n}$ with probability at least $1 -n^{-10}$ for some absolute constant $C'$. It
follows from the union bound that $\abs{T_5} \le \sqrt{C'\log n/n}$ with probability at least $1-2n^{-10}$
for $n$ large enough. A similar calculation yields $\abs{T_3} \le \sqrt{C'((\beta_q)_{q\le r})\log n/n}$ with
probability exceeding $1-n^{-10}$. Finally, using Proposition \ref{prop:diag} below, we have that
\begin{align*}
\abs{T_6} &\le \norm{{\mathbf s}_1} \max_i \bigg|\frac{\norm{{\mathbf{\tilde{z}}}'_i}^2}{n} - 1\bigg| \\
&\le \sqrt\frac{C''\log n}{n},
\end{align*}
with probability at least $1-n^{-10}$. Here we used the fact that $\norm{{\mathbf s}_1} \le \norm{\mathbf{\widehat{v}}_1} = 1$.
By Assumption {\sf A2}, and the above estimates, we have with probability
at least $1-n^{-9}$:
\begin{align*}
\text{For }i\in{\sf Q}_1, \quad\abs{({\mathbf{\widehat{\Sigma}}}'{\mathbf s}_1)_i} &\ge \frac{\beta_1}{2} (1-5{\varepsilon}^2 - 12{\varepsilon}\gamma \sum_q \beta_q)\abs{v_{1, i}} - \sqrt{C\log n/n} \\
&\ge \frac{\beta_1(1-5{\varepsilon}^2 - 12{\varepsilon}\gamma \sum_q \beta_q)\theta}{4\sqrt{k_1}} - \sqrt{\frac{C\log n}{n}}, \\
\text{For }i\in [p]\backslash(\cup_q{\sf Q}_q), \quad \abs{({\mathbf{\widehat{\Sigma}}}'{\mathbf s}_1)_i} &\le \sqrt\frac{C\log n}{n}.
\end{align*}
Choosing ${\varepsilon} = {\varepsilon}((\beta_q)_{q\le r}, r, \theta, \gamma)$
small enough and using threshold $\rho = \min_q(\beta_q\theta/4\sqrt{k_q})$ we
have that ${\sf Q}_1\subseteq {\widehat{\sf Q}}$ and ${\widehat{\sf Q}}\subseteq\cup_q{{\sf Q}_q}$.
The analogous guarantees for all $1\le q\le r$ imply Theorem \ref{thm:main}.
\subsection{Proof of Theorem \ref{thm:main3}}
Analogously to the previous proof, we fix ${\varepsilon}>0$ and assume that
$\sum_q{k_q} \le \sqrt{n\log\tau/\tau^3}$,
where $\tau = \tau({\varepsilon}, {\underline{\beta}}, \alpha, \theta)$ is as per
Theorem \ref{thm:corr}. Then we have that $\norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}^2 \le {\varepsilon}/20$
with probability at least $1-C/n^4$ for some constant $C>0$.
We then use
\begin{align}
{\mathbb P}\Big({\widehat{\sf Q}} \neq \cup_q{\rm supp}({\mathbf v}_q)\Big) \le
{\mathbb P}\Big({\widehat{\sf Q}} \neq \cup_q{\rm supp}({\mathbf v}_q) ;\;\; \norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}^2 \le \frac{{\varepsilon}}{20r}\Big) +\frac{C}{n^4}
\, ,
\end{align}
and bound the first term.
The key change with respect to the proof of Theorem \ref{thm:main}
is that we need to replace Lemma \ref{lem:AboutS} with the following lemma,
whose proof follows exactly the same argument as that of Lemma \ref{lem:AboutS}.
\begin{lemma}
Assume $\|{\mathbf v}_q-\mathbf{\widehat{v}}_q\|^2\le {\varepsilon}/20$, and let ${\sf B}'\equiv {\rm supp}({\mathbf s}')$ with ${\mathbf s}'$
defined as per Eq. (\ref{eq:Sprime}). Further assume $k\le k_0\le 20\,k$. Then $\norm{{\mathbf s}_q -{\mathbf v}_q}^2 \le 5{\varepsilon}^2$.
\end{lemma}
The rest of the proof of Theorem \ref{thm:main3} is identical to the
one of Theorem \ref{thm:main} in the previous section.
\section{Proof of Theorem \ref{thm:corr}}\label{sec:proofcorr}
Since ${\mathbf{\widehat{\Sigma}}} = {\mathbf X}^{{\sf T}}{\mathbf X}/n - {\rm I}_p$, we have: %
\begin{align}\label{eq:empcov}
{\mathbf{\widehat{\Sigma}}} &= \sum_{q=1}^r \left\{\frac{\beta_q\norm{{\mathbf u}_q}^2}{n}{\mathbf v}_q({\mathbf v}_q)^{\sf T} + \frac{\sqrt{\beta_q}}{n}({\mathbf v}_q({\mathbf u}_q)^{\sf T}{\mathbf Z} + {\mathbf Z}^{\sf T}{\mathbf u}_q({\mathbf v}_q)^{\sf T})\right\} \nonumber\\
&\quad+ \sum_{q\ne q'} \left\{\frac{\sqrt{\beta_q\beta_{q'}}\<{\mathbf u}_q,
{\mathbf u}_{q'}\>}{n}{\mathbf v}_q({\mathbf v}_{q'})^{\sf T} \right\} + \frac{{\mathbf Z}^{\sf T}{\mathbf Z}}{n} - {\rm I}_p.
\end{align}
We let ${\sf D} =\{(i, i) : i\in[p]\backslash\cup_q{\sf Q}_q\}$ be the diagonal
entries not included in any support and ${\sf Q}=\cup_q{\sf Q}_q$ denote the
union of the supports. Further let ${\sf E} = \cup_{q}({\sf Q}_q\times{\sf Q}_q)$, ${\sf F} = ({\sf Q}^c\times{\sf Q}^c)\backslash{\sf D}$,
and ${\sf G} = ([p]\times[p])\backslash({\sf D}\cup{\sf E}\cup{\sf F})$. Since these sets are disjoint we have:
\begin{align}%
\eta({\mathbf{\widehat{\Sigma}}}) &= \underbrace{{\cal P}_{{\sf E}}\left\{ \eta({\mathbf{\widehat{\Sigma}}}) \right\}}_{{\mathbf S}} %
+\underbrace{{\cal P}_{{\sf F}}\left\{ \eta\left( \frac{1}{n}{\mathbf Z}^{\sf T} {\mathbf Z} \right) \right\}}_{{\mathbf N}} %
+\underbrace{{\cal P}_{{\sf G}}\left\{ \eta({\mathbf{\widehat{\Sigma}}}) \right\}}_{{\mathbf R}_1}%
+\underbrace{{\cal P}_{{\sf D}}\left\{ \eta( {\mathbf{\widehat{\Sigma}}}) \right\}}_{{\mathbf R}_2}. \label{eq:decomp}
\end{align}
The first term corresponds to the `signal' component while
the last three terms correspond to the `noise' component.
Theorem
\ref{thm:corr} is a direct consequence of the next four propositions.
The first of these proves that the signal component is preserved, while
the others demonstrate that the noise components are small.
\begin{proposition}\label{prop:signal}
Let ${\mathbf S}$ denote the first term in
\myeqref{eq:decomp}:
\begin{align}
{\mathbf S} &= {\cal P}_{\sf E}\left\{\eta({\mathbf{\widehat{\Sigma}}})\right\}.
\end{align}
Then with probability at least $1-3\exp(-n^{2/3}/4)$:
\begin{align*}
\norm{ {\mathbf S} - \sum_{q=1}^r \beta_q {\mathbf v}_q({\mathbf v}_q)^{\sf T} }_2 &\le \frac{\tau\sum_q{k_q}}{\sqrt{n}}+ \kappa_n.
\end{align*}
Here $\kappa_n = 16(\sqrt{r\alpha}+ r\sqrt{\beta_1})n^{-1/6}$.
\end{proposition}
\begin{proposition} \label{prop:noise}
Let ${\mathbf N}$ denote the second term
of \myeqref{eq:decomp}:
\begin{align*}
{\mathbf N} &= {\cal P}_{{\sf F}}\left\{\eta\left( \frac{1}{n}{\mathbf Z}^{\sf T} {\mathbf Z} \right)\right\}.
\end{align*}
Then there exists $\tau_1=\tau_1(\alpha)$ such that
for any $\tau \ge\tau_1$ and all $p$ large enough, we have
\begin{align}
\norm{{\mathbf N}}_2 \le C_1(\alpha)\sqrt{\frac{\log \tau}{\tau}}\, ,
\end{align}
with probability at least $1-2\exp(-c_1(\tau) p)$. The constants
can be taken as $\tau_1 = 100\max(1, \alpha^2\log\alpha)$, $c_1(\tau) =
1/4\tau$ and $C_1(\alpha) = 5000\max(1, \alpha^{3/2})$.
\end{proposition}
\begin{proposition}\label{prop:cross}
Let ${\mathbf R}_1$ denote the matrix corresponding to the third term
of \myeqref{eq:decomp}:
\begin{align*}
{\mathbf R}_1 &= {\cal P}_{{\sf G}}\left\{\eta({\mathbf{\widehat{\Sigma}}})\right\}.
\end{align*}
Then there exists $\tau_2 = \tau_2(\alpha, \beta_1, r)$ such that for $\tau\ge \tau_2$
and every $p$ large enough we have:
\begin{align}
\norm{{\mathbf R}_1}_2 &\le C_2(\alpha, r, \beta_1)\sqrt\frac{\log \tau}{{\tau}} .
\end{align}
with probability at least $1-\exp(-c_2(\tau) p)$. Here we may take
$c_2(\tau) = c_1(\tau)=1/4\tau$.
\end{proposition}
\begin{proposition}\label{prop:diag}
Let ${\mathbf R}_2$ denote the matrix corresponding to the fourth term
of \myeqref{eq:decomp}:
\begin{align*}
{\mathbf R}_2 &= {\cal P}_{{\sf D}}\left\{\eta({\mathbf{\widehat{\Sigma}}})\right\}.
\end{align*}
Then with probability at least $1-\alpha n^{-C/6 +1}$ for every $n$ large enough:
\begin{align}
\norm{{\mathbf R}_2}_2 &\le \sqrt{\frac{C\log n}{n}}.
\end{align}
\end{proposition}
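Since ${\mathbf R}_2$ is diagonal, its spectral norm is simply the largest fluctuation $|\norm{{\mathbf{\tilde{z}}}_i}^2/n - 1|$ over the relevant coordinates (thresholding can only shrink entries). A quick simulation of this bound (a sketch assuming \texttt{numpy}; the constant $C$ below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, C = 2000, 1000, 12.0
Z = rng.standard_normal((n, p))
diag = (Z ** 2).sum(axis=0) / n - 1      # fluctuations ||z_i||^2 / n - 1
# R2 is diagonal, so its spectral norm is the largest |entry|
R2_norm = np.max(np.abs(diag))
print(R2_norm, np.sqrt(C * np.log(n) / n))
```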
We defer the proofs of Propositions \ref{prop:signal},
\ref{prop:noise}, \ref{prop:cross} and \ref {prop:diag} to
Sections \ref{subsec:proofsignal}, \ref{subsec:proofnoise},
\ref{subsec:proofcross} and \ref{subsec:proofdiag}
respectively.
\begin{proof}[Proof of Theorem \ref{thm:corr}]
Using these results we now proceed to prove Theorem \ref{thm:corr}.
We will assume that the events in these propositions hold, and control the probability of their complements
via the union bound.
Denote by $k$ the sum of the support sizes, i.e. $\sum_q k_q$.
From Propositions \ref{prop:signal}, \ref{prop:noise}, \ref{prop:cross}, \ref{prop:diag}
and the triangle inequality we have:
\begin{align*}
\norm{\eta({\mathbf{\widehat{\Sigma}}}) - \sum_q \beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}} &\le \frac{k\tau}{\sqrt{n}} + \max(C_1, C_2)\sqrt\frac{\log\tau}{\tau},
\end{align*}
for every $\tau \ge \max(\tau_1, \tau_2)$ with probability at least $1 - \alpha n^{-4}$.
Setting $k \le \sqrt{n \log\tau /\tau^3}$, the right hand side above is bounded by $\delta(\tau) = 2\max(C_1, C_2)\sqrt{\log\tau/\tau}$.
Further define ${\underline{\beta}} \equiv \min_{q \ne q' \le r} (\beta_q, \abs{\beta_q - \beta_{q'}})$.
Employing the Davis-Kahan $\sin\theta$ theorem
\cite{davis1970sin} we have:
\begin{align*}
\min(\norm{\mathbf{\widehat{v}}_q - {\mathbf v}_q}, \norm{\mathbf{\widehat{v}}_q +{\mathbf v}_q}) &\le \sqrt{2}\sin\theta(\mathbf{\widehat{v}}_q,{\mathbf v}_q) \\
&\le \frac{\sqrt{2}\delta(\tau)}{{\underline{\beta}} - \delta(\tau)}.
\end{align*}
Choosing $\tau \ge (8\max(C_1, C_2)/{\underline{\beta}}{\varepsilon})^4$ yields that $\delta(\tau)/({\underline{\beta}}-\delta(\tau)) \le {\varepsilon}$.
Letting $\tau$ be the largest of $\tau_1$, $\tau_2$ and $(8\max(C_1, C_2)/{\underline{\beta}}{\varepsilon})^4$ gives the desired result.
\end{proof}
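The Davis--Kahan step above can be reproduced numerically (a sketch assuming \texttt{numpy}; the spike strengths and perturbation size are arbitrary): for a rank-$r$ matrix perturbed by a symmetric matrix of norm $\delta < {\underline{\beta}}$, the top eigenvector moves by at most $\sqrt{2}\,\delta/({\underline{\beta}}-\delta)$ up to sign.

```python
import numpy as np

rng = np.random.default_rng(6)
p, beta = 50, np.array([3.0, 1.5])
V = np.linalg.qr(rng.standard_normal((p, 2)))[0]     # orthonormal spikes v_q
Sigma0 = (V * beta) @ V.T                            # sum_q beta_q v_q v_q^T

D = rng.standard_normal((p, p)); D = (D + D.T) / 2
D *= 0.1 / np.linalg.norm(D, 2)                      # perturbation, delta = 0.1
delta = np.linalg.norm(D, 2)

vhat = np.linalg.eigh(Sigma0 + D)[1][:, -1]          # top eigenvector
v = V[:, 0]
gap = min(beta[0], beta[0] - beta[1])                # underline{beta} for q = 1
err = min(np.linalg.norm(vhat - v), np.linalg.norm(vhat + v))
print(err, np.sqrt(2) * delta / (gap - delta))       # error vs the DK bound
```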
\subsection{Proof of Proposition \ref{prop:signal}}\label{subsec:proofsignal}
The proof proceeds in two steps. In the first lemma we bound $\norm{\E\{{\mathbf S}\} - \sum_{q}\beta_q {\mathbf v}_q({\mathbf v}_q)^{\sf T}}$
and in the second we control $\norm{{\mathbf S} - \E\{{\mathbf S}\}}$.
\begin{lemma}
\label{lem:exptrq2}
Consider ${\mathbf S}$ as defined in Proposition \ref{prop:signal}. Then
\begin{align*}
\norm{\E\{{\mathbf S}\} - \sum_{q}\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}} &\le \frac{\tau\sum_{q}k_q}{\sqrt{n}}.
\end{align*}
\end{lemma}
\begin{proof}
Notice that $\E\{{\mathbf S}\}$ is supported on a set of indices
$\cup_q{\sf Q}_q\times\cup_q{\sf Q}_q$ which has
size at most $(\sum_{q}k_q)^2$. Hence
\begin{align*}
\norm{\E\{{\mathbf S}\} - \sum_q {\beta_q} {\mathbf v}_q({\mathbf v}_q)^{\sf T}} &\le (\sum_q k_q)\norm{\E\{{\mathbf S}\} - \sum_{q}\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}}_\infty,
\end{align*}
where the last term denotes the entrywise $\ell_\infty$ norm of the matrix. Since ${\mathbf S}$ and
$\sum_{q}\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}$ have common support and since $|\eta(z;
\tau/\sqrt{n}) -z|\le \tau/\sqrt{n}$
we obtain that:
\begin{align*}
\norm{\E\{{\mathbf S}\} - \sum_{q}\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T} }_\infty &\le\norm{\E\{{\cal P}_{{\sf E}}(\eta({\mathbf{\widehat{\Sigma}}}))\} - \sum_q \beta_q {\mathbf v}_q {\mathbf v}_q^{\sf T}}_{\infty}\\
&\le \frac{\tau}{\sqrt{n}}.
\end{align*}
The thesis then follows directly.
\end{proof}
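The only property of the (hard) thresholding function used above is $|\eta(z;\tau/\sqrt{n}) - z| \le \tau/\sqrt{n}$; a one-line numerical check (sketch assuming \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(7)
thr = 0.3                                    # stands in for tau / sqrt(n)
z = rng.standard_normal(10_000)
eta = z * (np.abs(z) >= thr)                 # hard thresholding eta(z; thr)
print(np.max(np.abs(eta - z)))               # never exceeds thr
```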
\begin{lemma}
\label{lem:sigconc}
Let ${\mathbf S}$ be as defined in Proposition \ref{prop:signal}. Then:
\begin{align*}
\norm{{\mathbf S} - \E\{{\mathbf S}\}} &\le \kappa_n,
\end{align*}
with probability at least $1 - \exp(-n^{2/3}/4)$ where we define $\kappa_n \equiv 16(\sqrt{r\alpha} + r\sqrt{\beta_1})n^{-1/6}$.
\end{lemma}
Proposition \ref{prop:signal} follows directly from these two lemmas
since we have by triangle inequality:
\begin{align*}
\norm{{\mathbf S} -\sum_{q}\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}} &\le \norm{{\mathbf S} - \E\{{\mathbf S}\}} + \norm{\E\{{\mathbf S}\} - \sum_q\beta_q{\mathbf v}_q({\mathbf v}_q)^{\sf T}}.
\end{align*}
This completes the proof of Proposition \ref{prop:signal} conditional
on Lemma \ref{lem:sigconc}. In the next subsection we prove Lemma
\ref{lem:sigconc}.
\subsubsection{Proof of Lemma \ref{lem:sigconc}}
Let ${\mathbf y} \in\mathbb{R}^p$ denote a unit-norm vector supported on
$\cup_q{\sf Q}_q$. Recall that
${\sf Q} = \cup_{q}{\sf Q}_q$. Fix an $\ell \in{\sf Q}$. The gradient of the Rayleigh quotient
${\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}, {\mathbf S}{\mathbf y}\>$ reads:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}, {\mathbf S}{\mathbf y}\> &= \frac{1}{n}\sum_{i:(i, \ell)\in
\cup_q({\sf Q}_q\times{\sf Q}_q)}2\partial\eta\left({\mathbf{\widehat{\Sigma}}}_{i\ell} ; \frac{\tau}{\sqrt{n}}\right)%
({\mathbf{\tilde{z}}}_i + \sum_q\sqrt{\beta_q}v_{q,i} {\mathbf u}_q)y_iy_\ell.
\end{align*}
Define the vector ${\boldsymbol{\sigma}}^{\ell}({\mathbf y})\in\mathbb{R}^p$ as follows:
\begin{align*}
\sigma^{\ell}_i({\mathbf y}) &= \begin{cases}
\partial\eta\left( {\mathbf{\widehat{\Sigma}}}_{i\ell}; \frac{\tau}{\sqrt{n}} \right)y_i, &\text{
if } (i, \ell)\in\cup_q({\sf Q}_q\times{\sf Q}_q) \\
0, &\text{ otherwise,}
\end{cases}
\end{align*}
where the left hand side denotes the $i^{\text{th}}$ entry of ${\boldsymbol{\sigma}}^{\ell}({\mathbf y})$.
Recall that ${\mathbf Z}_E$ is the matrix obtained from ${\mathbf Z}$ by setting to zero all columns with indices
outside $E\subseteq[p]$. Using this, we can now rewrite the
gradient in the following form:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}, {\mathbf S}{\mathbf y}\> &= \frac{2y_\ell}{n} ({\mathbf Z}_{{\sf Q}} + \sum_q\sqrt{\beta_q}{\mathbf u}_q({\mathbf v}_q)^{\sf T}){\boldsymbol{\sigma}}^\ell({\mathbf y}). %
\end{align*}
Since $\partial\eta(\cdot;\cdot)\in\{0, 1\}$,
we see that $\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}\le \norm{{\mathbf y}} =1$.
Consequently, we have that:
\begin{align*}
\norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}, {\mathbf S}{\mathbf y}\>} &\le \frac{2\abs{y_\ell}}{n} \norm{{\mathbf Z}_{{\sf Q}} + %
\sum_q\sqrt{\beta_q}{\mathbf u}_q({\mathbf v}_q)^{\sf T}} \norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})} \\
&\le \frac{2\abs{y_\ell}}{n} \left(\norm{{\mathbf Z}_{{\sf Q}}} + \sum_q\sqrt{\beta_q}\norm{{\mathbf u}_q({\mathbf v}_q)^{\sf T}}\right) \\
&= \frac{2\abs{y_\ell}}{n} \left(\norm{{\mathbf Z}_{{\sf Q}}} + \sum_q\sqrt{\beta_q}\norm{{\mathbf u}_q}\right).
\end{align*}
Squaring and summing over $\ell$:
\begin{align*}
\norm{{\nabla}_{{\mathbf Z}_{\sf Q}}\<{\mathbf y}, {\mathbf S}{\mathbf y}\>}^2 &\le \frac{4}{n^2} \Big(\norm{{\mathbf Z}_{{\sf Q}}} + \sum_{q}\sqrt{\beta_q}\norm{{\mathbf u}_q}\Big)^2.
\end{align*}
The gradient above is with respect to all the variables ${\mathbf{\tilde{z}}}_\ell$, $\ell\in{\sf Q}$, and
the norm is the standard vector $\ell_2$ norm.
Let $G \equiv \{({\mathbf Z}, ({\mathbf u}_q)_{q\le r}) : \norm{{\mathbf Z}_{{\sf Q}}} \le
(2+\sqrt{r\alpha})\sqrt{n},\; \norm{{\mathbf u}_q} \le 4\sqrt{n}\;\forall q\}$. Clearly $G$ is a
closed, convex set. Further, using Lemma \ref{lem:gaussianmatnorm} we can bound
the probability of $G^c$: $\norm{{\mathbf Z}_{{\sf Q}}} \le (\sqrt{n} + \sqrt{\sum_qk_q} + \sqrt{n}) \le (2+\sqrt{r\alpha})\sqrt{n}$
with probability at least $1-\exp(-n/2)$. Also, with probability at least $1-r\exp(-n/2)$, we have $\norm{{\mathbf u}_q} \le 4\sqrt{n}$ for every $q$.
Thus, on the set $G$ we have:
\begin{align*}
\norm{{\nabla}\<{\mathbf y}, {\mathbf S}{\mathbf y}\>}^2\mathbb{I}\{({\mathbf Z}, {\mathbf u}_1 \cdots {\mathbf u}_r)\in G\} &\le \frac{64}{n}(2 +\sqrt{r\alpha} + r\sqrt{\beta_1})^2\\
{\mathbb P}\{G^c\} &\le 2\exp\left( -\frac{n}{4} \right).
\end{align*}
Define $L$ and $\kappa_n$ as follows:
\begin{align*}
L &\equiv \frac{8(2+\sqrt{r\alpha}+r\sqrt{\beta_1})}{\sqrt{n}}\\
\kappa_n &\equiv 16(2+\sqrt{r\alpha}+r\sqrt{\beta_1})n^{-1/6} = 2Ln^{1/3}.
\end{align*}
Also let $F_L({\mathbf Z}_{{\sf Q}})$ denote the $G, L$-Lipschitz extension of $\<{\mathbf y}, {\mathbf S}{\mathbf y}\>$.
We prove the following remark in Appendix \ref{sec:App}:
\begin{remark} \label{rem:exptsignal}
For every $n$ large enough, $|\E\left\{ \<{\mathbf y}, {\mathbf S}{\mathbf y}\> - F_L({\mathbf Z}_{{\sf Q}}) \right\}| \le n^{-1}$.
\end{remark}
Now employing Lemma \ref{lem:basicConc}:
\begin{align*}
{\mathbb P}\left\{ |\<{\mathbf y}, {\mathbf S}{\mathbf y}\> - \E\<{\mathbf y}, {\mathbf S}{\mathbf y}\>| \ge \kappa_n/2 \right\} &\le 2\exp\left( -\frac{n^{2/3}}{2} \right) + 2r\exp\left( -\frac{n}{4} \right) \\
&\le 3\exp\left( -\frac{n^{2/3}}{2} \right),
\end{align*}
for every $n$ large enough. Then, taking ${\mathbf y}$ to range over the $1/4$-net $T^{1/4}_{|{\sf Q}|}$ embedded
in $\mathbb{R}^p$ via the union of supports ${\sf Q}$, we use Lemma \ref{lem:specnormbnd} to obtain that:
\begin{align*}
\norm{{\mathbf S} - \E\{{\mathbf S}\}} \le \kappa_n,
\end{align*}
with probability at least $1 - 3\cdot 9^{\abs{{\sf Q}}} \exp (-n^{2/3}/2 ) \ge 1- \exp(-n^{2/3}/4)$
since $\abs{{\sf Q}} \le \sum_q{k_q} = O(\sqrt{n}) \le n^{2/3}/2$ for large enough
$n$.
\subsection{Proof of Proposition \ref{prop:noise}}\label{subsec:proofnoise}
It suffices to bound the norm of ${\mathbf{\widetilde{N}}}$ defined as
\begin{align*}
{\mathbf{\widetilde{N}}} &= {\mathcal{P}_{\sf nd}}\left\{\eta\left( \frac{1}{n}{\mathbf Z}^{\sf T}{\mathbf Z} \right)\right\}.
\end{align*}
We use a variant of the ${\varepsilon}$-net argument.
For a set of indices $E\subseteq [p]$, recall that
${\mathbf y}_E\in\mathbb{R}^p$ denotes the vector coinciding
with ${\mathbf y}$ on $E$, and zero outside $E$. By decomposing
the
Rayleigh quotient:
\begin{align*}
{\mathbb P}\left\{ \norm{{\mathbf{\widetilde{N}}}}_2 \ge \Delta \right\}
&\le {\mathbb P}\left\{\sup_{{\mathbf y}\in T^{\varepsilon}_p} \<{\mathbf y}, {\mathbf{\widetilde{N}}}{\mathbf y}\> \ge \Delta(1-2{\varepsilon})\right\} \\
&\le {\mathbb P}\left\{ \sup_{{\mathbf y}\in T^{\varepsilon}_p} \<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_E\> + %
\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> + 2\<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge \Delta(1-2{\varepsilon}) \right\}.
\end{align*}
We let $E = \{i\in [p] : |y_i| > \sqrt{A/p}\}$ for the
constant $A = A(\tau) = \tau\log\tau$. Since $\norm{{\mathbf y}} =1$, it follows that
$|E| \le p/A$. The following lemma allows us to bound the term
$\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_E\>$ uniformly over all subsets $E$ smaller
than $p/A$.
\begin{lemma}
Fix $A\ge180\max(\sqrt{\alpha}, 1)$. Then, for every $p$ large enough,
the following holds with probability at least $1 - \exp(-p\log A/4A)$:
\begin{align*}
\sup_{E\subseteq[p], |E|\le p/A}\norm{ {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}})}_2 &\le 32\sqrt{\alpha\frac{\log A}{A}}.
\end{align*}
\label{lem:smallset}
\end{lemma}
The proof of this lemma is provided in subsection \ref{sec:proofsmallset}.
Denoting by ${\cal E}$ the favorable event
of Lemma \ref{lem:smallset}, we obtain:
\begin{align*}
{\mathbb P}\left\{\norm{{\mathbf{\widetilde{N}}}}_2 \ge \Delta \right\} &\le {\mathbb P}\left\{ {\cal E}^c \right\} + %
{\mathbb P}\left\{ \sup_{{\mathbf y}\in T^{\varepsilon}_p}\left( \<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_E\> + \<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> + 2\<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>\right) \ge \Delta(1-2{\varepsilon}) , {\cal E} \right\}\nonumber\\
&\le {\mathbb P}\left\{ {\cal E}^c \right\} + {\mathbb P}\left\{\sup_{{\mathbf y}\in T^{\varepsilon}_p} \left(\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> + 2\<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \right)\ge \widetilde{\Delta} \right\},
\end{align*}
where $\widetilde{\Delta} = \Delta(1-2{\varepsilon}) - 32\sqrt{\alpha\log A/A}$. Further, using
the union bound and Lemma \ref{lem:epsnetcard}:
\begin{align}
{\mathbb P}\left\{ \norm{{\mathbf{\widetilde{N}}}}_2 \ge \Delta \right\} &\le {\mathbb P}\left\{ {\cal E}^c \right\} + %
\abs{T^{\varepsilon}_p}\sup_{{\mathbf y}\in T^{\varepsilon}_p}{\mathbb P}\left\{ \<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge \frac{\widetilde{\Delta}}{3} \right\} \nonumber \\ %
&\quad+\abs{T^{\varepsilon}_p}\sup_{{\mathbf y}\in T^{\varepsilon}_p}{\mathbb P}\left\{ \<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge \frac{\widetilde{\Delta}}{3} \right\}. \label{eq:baseZbnd}%
\end{align}
${\mathbb P}\left( {\cal E}^c \right)$ is bounded in Lemma \ref{lem:smallset}. We
now proceed to bound the latter two terms. For the second term,
the gradient ${\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>$
reads, for any fixed $\ell\in E^c$:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> &= %
\frac{2y_\ell}{n}\sum_{i\in E^c\backslash\ell}\partial \eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)y_i {\mathbf{\tilde{z}}}_i.
\end{align*}
Let ${\boldsymbol{\sigma}}^\ell({\mathbf y})\in\mathbb{R}^p$ be a vector defined by:
\begin{align*}
\sigma_i^\ell({\mathbf y}) &= \begin{cases}
\partial \eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)y_i &\text{ if } i\in E^c\backslash\ell,\\
0 &\text{ otherwise.}
\end{cases}
\end{align*}
With this definition we can represent the norm of the gradient as:
\begin{align*}
\norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>} &= \frac{2\abs{y_\ell}}{n}\norm{{\mathbf Z}_{E^c}{\boldsymbol{\sigma}}^\ell({\mathbf y})} \\
&\le \frac{2\abs{y_\ell}}{n}\norm{{\mathbf Z}_{E^c}}\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})} \\
&\le \frac{2\abs{y_\ell}}{n}\norm{{\mathbf Z}}\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}.
\end{align*}
For ${\boldsymbol{\sigma}}^\ell({\mathbf y})$:
\begin{align*}
\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 &= \sum_{i\in E^c\backslash\ell}\partial \eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)^2 y_i^2 \\
&\le \sum_{i\in E^c\backslash\ell} \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>^2}{n\tau^2} y_i^2
\\
&\le\frac{A}{\tau^2 np} \<{\mathbf{\tilde{z}}}_\ell, {\mathbf Z}_{E^c\backslash\ell}^{\sf T}
{\mathbf Z}_{E^c\backslash\ell}{\mathbf{\tilde{z}}}_\ell\> \\
&\le \frac{A \norm{{\mathbf{\tilde{z}}}_\ell}^2 \norm{{\mathbf Z}}^2}{np\tau^2}.
\end{align*}
Here the first line follows from $\partial \eta(x ; y) = \mathbb{I}(\abs{x}\ge y ) \le
\abs{x}/y$. The second line follows from the choice of $E$, whereby
$\abs{y_i}\le \sqrt{A/p}$, and the last line from bounding the quadratic form by the spectral norm of ${\mathbf Z}$.
For any $\ell \in E$, ${\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> = 0$.
Now,
fix $\Gamma=5$, $\gamma = \Gamma\max(\alpha^{-1}, 1)\ge \Gamma$
and let $G = \{{\mathbf Z} : \norm{{\mathbf Z}} \le 2\sqrt{\gamma p};\;
\norm{{\mathbf{\tilde{z}}}_\ell} \le \sqrt{2\gamma p}\;\forall \ell\}$. Clearly, $G$ is a closed,
convex set. Furthermore, on the set $G$, we obtain
from the gradient and ${\boldsymbol{\sigma}}^\ell({\mathbf y})$ estimates above that:
\begin{align}
\norm{{\nabla}_{{\mathbf Z}}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &= \sum_{\ell\in E^c} \norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 \nonumber\\
&\le \sum_{\ell\in E^c}\frac{4y_\ell^2}{n^2}\norm{{\mathbf Z}}^2\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 \nonumber\\
&\le \frac{4\norm{{\mathbf Z}}^2}{n^2}\max_{\ell\in E^c} \norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 \nonumber\\
&\le \frac{4A\norm{{\mathbf Z}}^4 \max_{\ell \in E^c}\norm{{\mathbf{\tilde{z}}}_\ell}^2}{n^3 p
\tau^2} \\
&\le\frac{128 A\gamma^3\alpha^3}{p\tau^2}. \label{eq:lipbnd}
\end{align}
Here we treat ${\nabla}_{{\mathbf Z}}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>$ as a vector in $\mathbb{R}^{np}$,
hence the norm above is the standard $\ell_2$ norm on vectors. We also write the
gradient as ${\nabla}_{({\mathbf{\tilde{z}}}_{\ell})_{\ell\in[p]}}\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>$
to avoid ambiguity in specifying the norm.
We now bound ${\mathbb P}\{G^c\}$ as follows.
Lemma \ref{lem:gaussianmatnorm} implies that with probability at least $1-\exp(-\Gamma p/2)$:
\begin{align}
\norm{{\mathbf Z}}_2 &\le (1+ \sqrt{\Gamma} + \alpha^{-1/2})\sqrt{p}\nonumber\\
&\le 2\sqrt{\gamma p}, \label{eq:Znormbnd}
\end{align}
since $\gamma\ge (1+\alpha^{-1/2})^2$.
Further, the standard Chernoff bound
implies that, for a fixed $\ell$, $\norm{{\mathbf{\tilde{z}}}_\ell}^2 \le 2\gamma\alpha n = 2\gamma p$ with
probability at least $1 - \exp(-\gamma p/2)$. By the union bound,
we then obtain that ${\mathbb P}\{G^c\} \le p\exp(-\gamma p/2)+\exp(-\Gamma p/2) \le (p+1)\exp(-\Gamma p/2)$.
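A small Monte Carlo experiment is consistent with the operator norm bound \eqref{eq:Znormbnd}; the sizes $n,p$, the value $t=1$, and the use of NumPy are our choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, t = 200, 100, 1.0            # illustrative sizes (alpha = p/n = 0.5)
trials, violations = 200, 0
for _ in range(trials):
    Z = rng.standard_normal((n, p))
    # Gaussian matrix bound: ||Z|| <= sqrt(n) + sqrt(p) + t*sqrt(p)
    # fails with probability at most exp(-p t^2 / 2) -- tiny here.
    if np.linalg.norm(Z, 2) > np.sqrt(n) + np.sqrt(p) + t * np.sqrt(p):
        violations += 1
assert violations == 0
```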
Define $K = \sqrt{128 A\gamma^3 \alpha^3 / (p \tau^2)}$.
Let $F_K({\mathbf Z})$ denote the $G, K$-Lipschitz extension of $F({\mathbf Z}) = \<{\mathbf y}_{E^c},{\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>$. We
have the following remark for $F_{K}({\mathbf Z})$ which is proved in Appendix \ref{sec:App}.
\begin{remark}\label{rem:exptnoise1}
We have $\E\{\<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>\}=0$. Further, for every $p$ large enough, $|\E\{F_K({\mathbf Z})\}| \le p^{-1}$.
\end{remark}
We can now use Lemma \ref{lem:basicConc} for $F({\mathbf Z})$. Thus, for any $\Delta_2\ge 2/p$:
\begin{align}
{\mathbb P}\left\{ \<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge \Delta_2 \right\} &\le %
\exp\left(-\frac{\Delta_2^2}{4K^2} \right) + %
2p\exp\left( -\frac{\Gamma p}{2} \right).
\end{align}
Using $\Delta_2 = \sqrt{2\Gamma p}K = 16\sqrt{A\Gamma\gamma^3\alpha^3}/\tau$ we obtain:
\begin{align}
{\mathbb P}\left\{ \<{\mathbf y}_{E^c}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge 16\frac{\sqrt{A\Gamma\gamma^3\alpha^3}}{\tau}\right\} &\le %
(2p+2)\exp(-\Gamma p/2).
\label{eq:quadformbnd2}
\end{align}
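Lemma \ref{lem:basicConc} rests on Gaussian concentration of Lipschitz functions: an $L$-Lipschitz function of a standard Gaussian vector has sub-Gaussian tails of order $\exp(-t^2/2L^2)$. A toy numerical illustration with the $1$-Lipschitz function $f({\mathbf g})=\norm{{\mathbf g}}$ (our choice, unrelated to the $F$ above):

```python
import numpy as np

rng = np.random.default_rng(1)
d, trials, t = 500, 5000, 2.0
# Euclidean norm of a standard Gaussian vector: a 1-Lipschitz function.
samples = np.linalg.norm(rng.standard_normal((trials, d)), axis=1)
mean = samples.mean()
# Empirical upper tail vs. the Gaussian concentration bound exp(-t^2/2).
empirical = np.mean(samples - mean >= t)
assert empirical <= np.exp(-t ** 2 / 2)
```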
Now we can use essentially the same strategy on the term $\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>$.
For $\ell \in E$ we have as before:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_{\ell}}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> &= \frac{y_{\ell}}{n}\sum_{i\in E^c} \partial\eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)y_i{\mathbf{\tilde{z}}}_i, \\
\norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &\le \frac{y_\ell^2 A\norm{{\mathbf Z}}^{4}\max_{i\in E^c}\norm{{\mathbf{\tilde{z}}}_i}^2}{\tau^2pn^{3}}.
\end{align*}
Hence:
\begin{align}
\sum_{\ell\in E}\norm{{\nabla}_{{\mathbf{\tilde{z}}}_{\ell}}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &\le \frac{A\norm{{\mathbf Z}}^4 \max_{i}\norm{{\mathbf{\tilde{z}}}_i}^2}{\tau^2pn^3}. \label{eqn:EEcomppartgrad1}
\end{align}
Analogously, for $\ell\in E^c$:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_{\ell}}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> &= \frac{y_\ell}{n}\sum_{i\in E} \partial\eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)y_i {\mathbf{\tilde{z}}}_i \\
&= \frac{y_\ell}{n}{\mathbf Z}_{E}{\boldsymbol{\sigma}}^\ell_E({\mathbf y}),
\end{align*}
where we define the vector ${\boldsymbol{\sigma}}^\ell_E({\mathbf y}) \in \mathbb{R}^{E}$ as:
\begin{align*}
\forall i\in E, \quad \sigma^\ell_E({\mathbf y})_i &= y_i\partial\eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right).
\end{align*}
By Cauchy-Schwarz:
\begin{align*}
\norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &\le \frac{y_\ell^2}{n^2}\norm{{\mathbf Z}_E}^2 \norm{{\boldsymbol{\sigma}}^\ell_E({\mathbf y})}^2.
\end{align*}
Summing over $\ell \in E^c$:
\begin{align*}
\sum_{\ell\in E^c } \norm{{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &\le \frac{\norm{{\mathbf Z}_E}^2}{n^2}\sum_{\ell\in E^c}y_\ell^2 \norm{{\boldsymbol{\sigma}}^\ell_E({\mathbf y})}^2 \\
&\le \frac{A\norm{{\mathbf Z}}^2}{pn^2}\sum_{\ell\in E^c} \norm{{\boldsymbol{\sigma}}^\ell_E({\mathbf y})}^2\\
&= \frac{A\norm{{\mathbf Z}}^2}{pn^2} \sum_{\ell \in E^c}\sum_{i\in E} y_i^2 \partial\eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}} \right)^2 \\
&\le \frac{A\norm{{\mathbf Z}}^2}{pn^2} \sum_{i\in E} y_i^2 \sum_{\ell \in E^c} \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>^2}{\tau^2 n} \\
&= \frac{A\norm{{\mathbf Z}}^2}{\tau^2 pn^3} \sum_{i\in E}y_i^2 \<{\mathbf{\tilde{z}}}_i, {\mathbf Z}_{E^c}^{\sf T} {\mathbf Z}_{E^c}{\mathbf{\tilde{z}}}_i\> \\
&\le \frac{A\norm{{\mathbf Z}}^2 \norm{{\mathbf Z}_{E^c}}^2 \max_{i\in E}\norm{{\mathbf{\tilde{z}}}_i}^2}{\tau^2 pn^3} \\
&\le \frac{A\norm{{\mathbf Z}}^{4}\max_{i\in [p]}\norm{{\mathbf{\tilde{z}}}_i}^2}{\tau^2 pn^3}.
\end{align*}
This bound along with \myeqref{eqn:EEcomppartgrad1} gives:
\begin{align*}
\norm{{\nabla}_{({\mathbf{\tilde{z}}}_{\ell})_{\ell\in[p]}}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2 &\le \frac{2A\norm{{\mathbf Z}}^4\max_{i\in [p]}\norm{{\mathbf{\tilde{z}}}_i}^2}{\tau^2 pn^3}.
\end{align*}
On the set $G$ defined before, we have that:
\begin{align*}
\norm{{\nabla}_{({\mathbf{\tilde{z}}}_{\ell})_{\ell\in[p]}}\<{\mathbf y}_E, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\>}^2
&\le\frac{64 A\gamma^3\alpha^3}{p\tau^2 }.
\end{align*}
Proceeding as before, applying Lemma \ref{lem:basicConc} we have:
\begin{align}
{\mathbb P}\left\{ \<{\mathbf y}_{E}, {\mathbf{\widetilde{N}}}{\mathbf y}_{E^c}\> \ge 16\frac{\sqrt{A\Gamma\gamma^3\alpha^3}}{\tau} \right\} &\le %
2p\exp\left( -\frac{\Gamma p}{2} \right).
\label{eq:quadformbnd3}
\end{align}
We can now use Eqs.\eqref{eq:quadformbnd2}, \eqref{eq:quadformbnd3} in
\myeqref{eq:baseZbnd}:
\begin{align*}
{\mathbb P}\left\{ \norm{{\mathbf{\widetilde{N}}}}_2 \ge (1-2{\varepsilon})^{-1}\left( 32\sqrt{\frac{\alpha\log A}{A}} + 48\sqrt{\frac{{A\Gamma\gamma^3\alpha^3}}{\tau^2}}\right) \right\} %
&\le \exp\left( -\frac{p\log A}{4A} \right)\\
&\quad+|T^{\varepsilon}_p| (4p+4)\exp\left( \frac{-\Gamma p}{2} \right). %
\end{align*}
We first simplify the probability bound. Since $A = \tau\log \tau$, $\log A/A \ge 1/\tau$ when $\tau\ge \exp(1)$. Further,
choosing ${\varepsilon}=1/4$, with Lemma \ref{lem:epsnetcard} we get that $|T^{{\varepsilon}}_p| \le (1+2/{\varepsilon})^p= 9^p$.
Since $\log 9 = 2.19\dots < \Gamma/2 = 5/2$, we
have $(4p+4)|T^{\varepsilon}_p|\exp(-\Gamma p/2) \le \exp(-p/20)$ for large enough $p$. Thus the right-hand side is bounded
above by $2\exp\left( -p/(4\max(\tau, 5)) \right)$ for every $p$ large enough.
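The constants in this simplification are easy to double-check numerically (an informal verification; the grid of $p$ values is arbitrary):

```python
import math

Gamma, eps = 5, 0.25
assert 1 + 2 / eps == 9            # net cardinality base: |T^eps_p| <= 9^p
assert math.log(9) < Gamma / 2     # log 9 = 2.197... < 5/2
# Hence (4p+4) * 9^p * exp(-Gamma*p/2) <= exp(-p/20) for large p:
for p in [200, 500, 1000]:
    log_lhs = math.log(4 * p + 4) + p * math.log(9) - Gamma * p / 2
    assert log_lhs <= -p / 20
```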
Now we simplify the operator norm bound. As $A = \tau\log\tau$, $\log A/A \le \log \tau/\tau$
since $\log z/z$ is decreasing. Further $\alpha \le \max(1, \alpha^3)$ and $\Gamma=5$ imply:
\begin{align*}
(1-2{\varepsilon})^{-1}\left( 32\sqrt{\frac{\alpha\log A}{A}} +
64\sqrt\frac{{A\Gamma\gamma^3\alpha^3}}{\tau^2}\right)
&\le 2(32+64\Gamma^{2})\sqrt{\frac{\max(1, \alpha^3)\log\tau}{\tau}}\\
&\le 5000\sqrt{\frac{\max(1, \alpha^3)\log\tau}{\tau}}.
\end{align*}
Our conditions on $\tau$ and $A$ were: $(i)$ $\tau \ge \max(4\sqrt{\Gamma\gamma\alpha}, \exp(1)) = 20\max(1, \sqrt{\alpha})$
and $(ii)$ $A\ge 180\max(\sqrt{\alpha}, 1)$. Using $\tau \ge 100\max(1, \alpha^2\log \alpha)$ satisfies both conditions.
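The arithmetic behind these conditions, with $\Gamma=5$ and $\gamma=\Gamma\max(\alpha^{-1},1)$, can be checked directly (the grid of $\alpha$ values is arbitrary):

```python
import math

Gamma = 5
for alpha in [0.1, 0.5, 1.0, 2.0, 10.0]:
    gamma = Gamma * max(1.0 / alpha, 1.0)
    # Condition (i): 4*sqrt(Gamma*gamma*alpha) equals 20*max(1, sqrt(alpha)),
    # since max(1/alpha, 1) * alpha = max(1, alpha).
    lhs = 4 * math.sqrt(Gamma * gamma * alpha)
    rhs = 20 * max(1.0, math.sqrt(alpha))
    assert abs(lhs - rhs) < 1e-9
```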
\subsubsection{Proof of Lemma \ref{lem:smallset}}\label{sec:proofsmallset}
This proof also follows an ${\varepsilon}$-net argument. Let $a$ denote the size of
the set $E$. For notational simplicity, we will permute the rows and columns of
${\mathbf{\widetilde{N}}}$ to ensure $E = [a]$ (i.e. $E$ is the first $a$ entries of $[p]$).
For a fixed ${\mathbf y}\in T^{{\varepsilon}}_a$, we bound the Rayleigh quotient $\<{\mathbf y},{\widetilde{\cal P}}_{E,E}({\mathbf{\widetilde{N}}}){\mathbf y}\>$ with
high probability. Note that $\<{\mathbf y},{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>$ is a function of ${\mathbf{\tilde{z}}}_\ell, \ell\in E$.
The gradient of this function with respect to ${\mathbf{\tilde{z}}}_\ell$ is:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_\ell}\<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\> &= \frac{2y_\ell}{n}\sum_{i\in E\backslash\ell} \partial \eta\left( \frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n};\frac{\tau}{\sqrt{n}} \right)y_i {\mathbf{\tilde{z}}}_i = \frac{2y_\ell}{n}{\mathbf Z}_E{\boldsymbol{\sigma}}^\ell({\mathbf y}),
\end{align*}
where ${\boldsymbol{\sigma}}^\ell({\mathbf y})\in\mathbb{R}^p$ is the vector defined as:
\begin{align*}
\sigma_i^\ell({\mathbf y}) &= \begin{cases}
y_i\partial \eta\left(\frac{\<{\mathbf{\tilde{z}}}_i, {\mathbf{\tilde{z}}}_\ell\>}{n}; \frac{\tau}{\sqrt{n}}\right) &\text{ when } i\in E\backslash\ell\\
0 & \text{ otherwise.}
\end{cases}
\end{align*}
The (square of the) total gradient is thus given by:
\begin{align*}
\norm{{\nabla} \<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>}^2 &= \frac{4}{n^2}\sum_{\ell\in E} \norm{ {\mathbf Z}_E {\boldsymbol{\sigma}}^\ell({\mathbf y})}_2^2 y_\ell^2 \\
&\le \frac{4}{n^2}\sum_{\ell\in E } \norm{{\mathbf Z}_{E}}_2^2 \norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 y_\ell^2\\
&\le \left( \frac{2\norm{{\mathbf Z}_E}_2}{n} \right)^2\sum_{\ell\in E}\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 y_\ell^2.
\end{align*}
Since $|\partial\eta(\cdot; \tau/\sqrt{n})|\le1$ we have that $\norm{{\boldsymbol{\sigma}}^\ell({\mathbf y})}^2 \le \norm{{\mathbf y}}^2 \le 1$.
Consequently we obtain the bound:
\begin{align*}
\norm{{\nabla}\<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>}^2 &\le \left( \frac{2\norm{{\mathbf Z}_E}_2}{n} \right)^2.
\end{align*}
From Lemma \ref{lem:gaussianmatnorm} we have that:
\begin{align*}
\norm{{\mathbf Z}_E}_2 &\le \sqrt{n} + \sqrt{a} + t\sqrt{p},
\end{align*}
with probability at least $1 - \exp(-pt^2/2)$. Let $G = \{{\mathbf Z}_E: \norm{{\mathbf Z}_E}_2 \le \sqrt{n} + \sqrt{a}+ t\sqrt{p}\}$.
Then:
\begin{align}
\norm{{\nabla}\<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>}^2 &\le \frac{4\alpha}{p}\Bigg( 1 + \sqrt{\frac{a\alpha}{p}} + t\sqrt{\alpha} \Bigg)^2 \equiv L^2 \label{eq:Glipschitz}\\
\text{and } {\mathbb P}(G^c) &\le e^{-pt^2/2}. \label{eq:Gprob}
\end{align}
We let $F_L({\mathbf Z}_E)$ denote the $G,L$-Lipschitz extension of $\<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>$.
The following remark is proved in Appendix \ref{sec:App}:
\begin{remark}\label{rem:exptsmallset}
Firstly, $\E\{\<{\mathbf y},{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\>\}=0$. Secondly, for every $p$ large enough: $|\E(F_L({\mathbf Z}))| \le p^{-1}$.
\end{remark}
Let $\widetilde{\Delta} = \Delta(1-2{\varepsilon})$ and $\nu = 1+ \sqrt{\alpha a/p}$.
We choose $t= \left(\sqrt{\nu^2 + \widetilde{\Delta}/2\sqrt{\alpha}} - \nu\right)/2$
and apply Lemma \ref{lem:basicConc} and Remark \ref{rem:exptsmallset}. This choice
of $t$ ensures that the two unfavorable events of Lemma \ref{lem:basicConc} are
both bounded above by $\exp(-pt^2/2)$. Thus,
\begin{align*}
{\mathbb P}\{\<{\mathbf y}, {\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}}){\mathbf y}\> \ge \widetilde{\Delta}\} &\le 2e^{-pt^2/2},
\end{align*}
for $p$ large enough. Further, our choice of $t$ implies:
\begin{align*}
t^2 &= \frac{1}{4}\left( \sqrt{\nu^2 + \frac{\widetilde{\Delta}}{2\sqrt{\alpha}}} - \nu \right)^2 \\
&= \frac{\nu^2}{2}\left( 1 + \frac{\widetilde{\Delta}}{4\nu^2\sqrt{\alpha}} - \sqrt{1+ \frac{\widetilde{\Delta}}{2\nu^2\sqrt{\alpha}}} \right)\\
&\ge \frac{\widetilde{\Delta}^2}{128\nu^2\alpha},
\end{align*}
where the last inequality follows from the fact that $g(x) = 1+x/2 - \sqrt{1+x} \ge x^2/16$ when
$x \le 2$; this requires $\widetilde{\Delta} \le 4\nu^2\sqrt{\alpha}$. Now, Lemmas \ref{lem:epsnetcard}
and \ref{lem:specnormbnd} imply:
\begin{align*}
{\mathbb P}\left\{\norm{{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}})}_2 \ge \Delta\right\} %
&\le 2\left( 1+\frac{2}{{\varepsilon}} \right)^a\exp\left( -\frac{p\widetilde{\Delta}^2}{256\nu^2\alpha} \right) \\
&\le \exp\left( -\frac{p\widetilde{\Delta}^2}{256\alpha\nu^2} + a \log\left(2+ \frac{4}{{\varepsilon}} \right) \right).
\end{align*}
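The elementary inequality invoked above, $g(x)=1+x/2-\sqrt{1+x}\ge x^2/16$ on $[0,2]$, is easy to confirm on a fine grid (a numerical sanity check, not part of the argument):

```python
import math

# Check g(x) = 1 + x/2 - sqrt(1+x) >= x^2/16 on a fine grid of [0, 2].
for k in range(2001):
    x = 2.0 * k / 2000
    g = 1 + x / 2 - math.sqrt(1 + x)
    assert g >= x * x / 16 - 1e-12
```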
There are $\binom{p}{a} \le (pe/a)^a$ possible choices for the set $E$. Using the union bound we
have that:
\begin{align*}
{\mathbb P}\left\{\sup_{E\subseteq [p], |E| = a} \norm{{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}})}_2 \ge \Delta\right\} &\le %
\exp\left\{ -\frac{p\widetilde{\Delta}^2}{256\alpha\nu^2} + a \log\left(2+ \frac{4}{{\varepsilon}} \right) + a\log\left( \frac{pe}{a} \right) \right\}.
\end{align*}
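The bound $\binom{p}{a}\le (pe/a)^a$ used for the union over supports can likewise be verified exhaustively for a small $p$:

```python
import math

p = 100
for a in range(1, p + 1):
    # Standard entropy-type bound on binomial coefficients.
    assert math.comb(p, a) <= (p * math.e / a) ** a
```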
Since $a \le p/A$, $\nu = 1 + \sqrt{a\alpha/p} \le 2$ when $A\ge\max(\sqrt{\alpha}, 1)$. Using ${\varepsilon}=1/4$ we obtain that
\begin{align*}
{\mathbb P}\left\{\sup_{E\subseteq [p], |E| = a} \norm{{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}})}_2 \ge \Delta\right\} &\le %
\exp\left( - p\left(\frac{\Delta^2}{1024\alpha} - \frac{\log\left(18eA \right)}{A}\right) \right).
\end{align*}
We required $\widetilde{\Delta} \le 4\nu^2\sqrt{\alpha}$, and $\widetilde{\Delta} = \Delta/2$.
Hence we require $\Delta\le 8\sqrt{\alpha} \le 8\nu^2\sqrt{\alpha}$.
Choosing $\Delta=32\sqrt{\alpha\log A/A}$, where $A\ge 180\max(\sqrt{\alpha}, 1)$ satisfies
this condition. Further, with this choice of $A$, $\log(18eA)\le 1.75\log A$.
Consequently:
\begin{align*}
{\mathbb P}\left\{\sup_{E\subseteq [p], |E| = a} \norm{{\widetilde{\cal P}}_{E, E}({\mathbf{\widetilde{N}}})}_2 \ge 32\sqrt{\frac{\alpha\log A}{A}}\right\} &\le \exp\left(- \frac{p\log A}{4 A}\right).%
\end{align*}
\subsection{Proof of Proposition \ref{prop:cross}} \label{subsec:proofcross}
We explicitly write the $(i, j)^\mathrm{th}$ entry of ${\mathbf R}_1$ (when $(i, j)\in
{\sf G}$) as:
\begin{align*}
(R_1)_{ij} &= \eta\left( \frac{\<\sum_{q} \sqrt{\beta_q}{\mathbf u}_q(v_q)_i + {\mathbf{\tilde{z}}}_i,
\sum_{q} \sqrt{\beta_q}{\mathbf u}_q (v_q)_j + {\mathbf{\tilde{z}}}_j \>}{n} ; \frac{\tau}{\sqrt{n}}\right).
\end{align*}
Since ${\sf G}$ is a symmetric set of entries excluding the diagonal, it suffices to consider the case
$i < j$ above. Denote by ${\mathbf R}$ the upper triangle of ${\mathbf R}_1$.
Let $g$ denote the number of nonzero rows in ${\mathbf R}$. By the definition of $g$, $ g \le \sum_q
\abs{{\sf Q}_q} = k$. We wish to bound (with slight abuse of notation) the quantity:
$ \sup_{{\mathbf x}\in {S}^{g-1}}\sup_{{\mathbf y}\in {S}^{p-1}} \<{\mathbf x}, {\mathbf R}{\mathbf y}\>$.
The proof follows an ${\varepsilon}$-net argument entirely analogous to
the proof of Proposition \ref{prop:noise}. The only difference is the
further dependence on the Gaussian random vectors ${\mathbf u}_q$. Hence we only
give a proof sketch, highlighting the difference with the proof of Proposition
\ref{prop:noise}.
Fix a vector ${\mathbf y}\in T^{1/4}_{p}$ and ${\mathbf x} \in T^{1/4}_{g}$, and let $E$ be the subset of indices
$E = \{i \in [p] : \abs{y_i} \ge \sqrt{A/p}\}$ for some constant $A$ to
be fixed later in the proof. As before, we split the Rayleigh quotient
$\<{\mathbf x}, {\mathbf R}{\mathbf y}\> = \<{\mathbf x}, {\mathbf R}{\mathbf y}_E\> + \<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\> \le
\norm{{\cal P}_{[p]\times E}({\mathbf R})} + \<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>$. By the condition on
$E$, we have that $\abs{E} \le p/A$. Consequently:
\begin{align*}
{\mathbb P}\left\{ \norm{{\mathbf R}} \ge \Delta \right\} &\le
\sup_{{\mathbf x}\in T^{1/4}_g, {\mathbf y} \in T^{1/4}_{p}} {\mathbb P}\left\{ \<{\mathbf x}, {\mathbf R}{\mathbf y}\> \ge \Delta/2 \right\} \\
&\le
{\mathbb P}\left\{ \max_{\abs{E}\le p/A } \norm{{\cal P}_{[p]\times E}\left( {\mathbf R} \right)} \ge \frac{\Delta}{4} \right\} +
\sup_{{\mathbf x}\in T^{1/4}_g, {\mathbf y}\in T^{1/4}_p}{\mathbb P}\left\{ \<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\> \ge \frac{\Delta}{4} \right\}.
\end{align*}
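The reduction from $\norm{{\mathbf R}}$ to suprema over finite nets loses at most a factor $(1-2{\varepsilon})^{-1}$ (the two-sided net lemma). A toy two-dimensional check, using an angular grid as the net (our own construction, unrelated to the paper's constants):

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.standard_normal((2, 2))
true_norm = np.linalg.norm(R, 2)
eps = 0.25
# An angular grid with spacing <= eps is an eps-net of the unit circle
# (chord length is at most arc length).
thetas = np.arange(0.0, 2 * np.pi, eps)
net = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
sup_net = max(abs(x @ R @ y) for x in net for y in net)
assert sup_net <= true_norm + 1e-9           # the net only underestimates
assert true_norm <= sup_net / (1 - 2 * eps)  # two-sided net lemma
```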
We first concentrate on the second term, whose gradient with respect to a fixed ${\mathbf{\tilde{z}}}_i$ is given by:
\begin{align*}
{\nabla}_{{\mathbf{\tilde{z}}}_i}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\> &= \frac{x_i}{n}\sum_{j> i, (i, j)\in
{\sf G}} (y_{E^c})_i \partial\eta\left( \<\sum_q\sqrt{\beta_q}{\mathbf u}_q (v_q)_i+ {\mathbf{\tilde{z}}}_i,
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \>; \tau\sqrt{n}\right) \left(
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \right) \\
&\quad+ \frac{(y_{E^c})_i}{n}\sum_{j< i, (i, j)\in
{\sf G}} x_j \partial\eta\left( \<\sum_q\sqrt{\beta_q}{\mathbf u}_q (v_q)_i+ {\mathbf{\tilde{z}}}_i,
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j\> ; \tau\sqrt{n}\right) \left(
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \right).
\end{align*}
Defining ${\boldsymbol{\sigma}}^i({\mathbf y})$ and ${\boldsymbol{\sigma}}^i({\mathbf x})$ similarly to Proposition
\ref{prop:noise}, we have by
Cauchy-Schwarz:
\begin{align*}
\norm{{\nabla}_{{\mathbf{\tilde{z}}}_i}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>}^2 &\le
\frac{2\norm{\sum_{q}\sqrt{\beta_q}{\mathbf u}_q{\mathbf v}_q^{\sf T} + {\mathbf Z}}^2}{n^2} \left( x_i^2
\norm{{\boldsymbol{\sigma}}^i({\mathbf y})}^2 + (y_{E^c})^2_i \norm{{\boldsymbol{\sigma}}^i({\mathbf x})}^2
\right).
\end{align*}
Summing over $i$, and writing ${\mathbf X} = \sum_{q}\sqrt{\beta_q}{\mathbf u}_q{\mathbf v}_q^{\sf T} + {\mathbf Z}$:
\begin{align*}
\sum_{i} \norm{{\nabla}_{{\mathbf{\tilde{z}}}_i}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>}^2 &\le
\frac{2\norm{{\mathbf X}}^2}{n^2}\sum_{i}\left(x_i^2 \norm{{\boldsymbol{\sigma}}^i({\mathbf y})}^2 +
(y_{E^c})^2_i \norm{{\boldsymbol{\sigma}}^i({\mathbf x})}^2\right)\\
&\le \frac{2\norm{{\mathbf X}}^2}{n^2} \sup_{i}\norm{{\boldsymbol{\sigma}}^i({\mathbf y})}^2 +
\frac{2\norm{{\mathbf X}}^2}{n^2}\sum_i(y_{E^c})_i^2 \norm{{\boldsymbol{\sigma}}^i({\mathbf x})}^2.
\end{align*}
Let $G = \{(({\mathbf u}_q)_{q\le r}, {\mathbf Z}): \forall q\ \norm{{\mathbf u}_q}\le
C'\sqrt{n},\ \norm{{\mathbf Z}} \le C'(\sqrt{p}+\sqrt{n}),\ \forall i\ \norm{{\mathbf{\tilde{z}}}_i}\le C'\sqrt{n}\}$. It is clear that
$G$ is closed and convex, and that ${\mathbb P}\{G^c\} \le p\exp(-C''p)$ for some $C''$ depending on $C'$.
It is not hard to show that:
\begin{align}
\sum_{i} \norm{{\nabla}_{{\mathbf{\tilde{z}}}_i}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>}^2 &\le \frac{AC(\alpha,
(\beta_q)_{q\le r}, r)}{p\tau^2}, \label{eq:gradboundz}
\end{align}
for some constant $C$, when $C'$ is large enough.
Similarly, taking derivatives with respect to ${\mathbf u}_q$ for a fixed $q$, we have:
\begin{align*}
{\nabla}_{{\mathbf u}_q}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\> &= \frac{1}{n}\sum_{(i, j)\in
{\sf G}} \partial\eta\left( \<\sum_q\sqrt{\beta_q}{\mathbf u}_q (v_q)_i+ {\mathbf{\tilde{z}}}_i,
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \>; \tau\sqrt{n}\right)
\quad \cdot \quad \\
&\quad \left(x_i (y_{E^c})_j \sqrt{\beta_q} (v_q)_i(\sum_{q'
}\sqrt{\beta_{q'}}{\mathbf u}_{q'}(v_{q'})_j
+ {\mathbf{\tilde{z}}}_j) +
x_j (y_{E^c})_i \sqrt{\beta_q} (v_q)_j(\sum_{q'
}\sqrt{\beta_{q'}}{\mathbf u}_{q'}(v_{q'})_i
+ {\mathbf{\tilde{z}}}_i)
\right)\\
&= \frac{{\mathbf X}({\boldsymbol{\sigma}}^1_{{\sf G}}({\mathbf x}, {\mathbf y}) + {\boldsymbol{\sigma}}^2_{{\sf G}}({\mathbf x},
{\mathbf y}))}{n},
\end{align*}
where we define the vectors ${\boldsymbol{\sigma}}_{\sf G}^1({\mathbf x}, {\mathbf y}), {\boldsymbol{\sigma}}_{\sf G}^2({\mathbf x},
{\mathbf y})$ appropriately.
By Cauchy-Schwarz:
\begin{align*}
\norm{{\nabla}_{{\mathbf u}_q}\<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>}^2 &\le \frac{2\norm{{\mathbf X}}^2}{n^2}
(\norm{{\boldsymbol{\sigma}}_{\sf G}^1({\mathbf x}, {\mathbf y})}^2 + \norm{{\boldsymbol{\sigma}}_{\sf G}^2({\mathbf x}, {\mathbf y})}^2).
\end{align*}
We now bound the first term above, and the second term follows from a similar
argument.
\begin{align*}
\norm{{\boldsymbol{\sigma}}_{\sf G}^1({\mathbf x}, {\mathbf y})}^2 &= \sum_{j}(y_{E^c})_j^2 \left( \sum_{i: (i, j)\in
{\sf G}} \sqrt{\beta_q}x_i(v_q)_i\partial\eta\left(
\<\sum_q\sqrt{\beta_q}{\mathbf u}_q (v_q)_i+ {\mathbf{\tilde{z}}}_i,
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \>; \tau\sqrt{n}
\right) \right)^2
\end{align*}
For simplicity of notation, define $D_{ij} = \partial\eta\left( \<\sum_q\sqrt{\beta_q}{\mathbf u}_q (v_q)_i+ {\mathbf{\tilde{z}}}_i,
\sum_{q}\sqrt{\beta_q}{\mathbf u}_q(v_q)_j + {\mathbf{\tilde{z}}}_j \>; \tau\sqrt{n}
\right)$.
The above sum then can be reduced to:
\begin{align*}
\norm{{\boldsymbol{\sigma}}_{\sf G}^1({\mathbf x}, {\mathbf y})}^2 &= \sum_{i_1, i_2}\beta_q x_{i_1}x_{i_2}
(v_q)_{i_1}(v_q)_{i_2}\sum_{j: (i_1, j)\in {\sf G}\text{ or } (i_2, j)\in {\sf G}} (y_{E^c})_j^2
D_{i_1 j}D_{i_2 j}.
\end{align*}
We first bound the inner summation uniformly in $i_1, i_2$ as follows:
\begin{align*}
\sum_{j: (i_1, j)\in {\sf G} \text{ or } (i_2, j)\in {\sf G}} (y_{E^c})_j^2 D_{i_1, j}D_{i_2, j}
&\le \frac{A}{p} \sum_{j} \frac{\abs{\<{\mathbf{\tilde{x}}}_{i_1}, {\mathbf{\tilde{x}}}_j\>\<{\mathbf{\tilde{x}}}_{i_2}, {\mathbf{\tilde{x}}}_j\>}}{n\tau^2} \\
&\le \frac{A}{p}\sum_{j} \frac{\<{\mathbf{\tilde{x}}}_{i_1}, {\mathbf{\tilde{x}}}_j\>^2 + \<{\mathbf{\tilde{x}}}_{i_2},{\mathbf{\tilde{x}}}_j\>^2 }{2n\tau^2} \\
&\le \frac{A}{n\tau^2 p} \norm{{\mathbf X}}^2 (\norm{{\mathbf{\tilde{x}}}_{i_1}}^2 + \norm{{\mathbf{\tilde{x}}}_{i_2}}^2)
\end{align*}
Employing a similar strategy for the other term, it is not hard to show that:
\begin{align*}
\norm{{\boldsymbol{\sigma}}_{\sf G}^1({\mathbf x}, {\mathbf y})}^2 &\le \frac{A\beta_q \norm{{\mathbf X}}^2 \sup_{i} \norm{{\mathbf{\tilde{x}}}_{i}}^2}{pn\tau^2}.
\end{align*}
Thus, on the set $G$, we obtain that:
\begin{align}
\sum_{q}\norm{{\nabla}_{{\mathbf u}_q} \<{\mathbf x}, {\mathbf R}{\mathbf y}_{E^c}\>}^2 &\le \frac{AC(\alpha, \beta_1, r)}{p\tau^2},\label{eq:gradboundu}
\end{align}
for every $\tau$ sufficiently large. Indeed the same bound, with a modified value for $C$ holds
for the gradient with respect to all the variables $( ({\mathbf u}_q)_{q\le r}, ({\mathbf{\tilde{z}}}_i)_{i\le p})$ using
Eqs.\eqref{eq:gradboundz}, \eqref{eq:gradboundu}. Lemma \ref{lem:basicConc} then
implies that
\begin{align*}
\sup_{{\mathbf x}\in T^{1/4}_g, {\mathbf y}\in T^{1/4}_{p}}{\mathbb P}\left\{ \<{\mathbf x}, {\mathbf R}{\mathbf y}\> \ge \frac{\sqrt{A C(\alpha, \beta_1, r)}}{\tau} \right\} \le \exp(-c p),
\end{align*}
for an appropriate $c$.
We omit the proof of the following remark, which uses techniques similar to those above,
followed by a union bound.
\begin{remark}
For every $A\ge A_0(\alpha, \beta_1, r)$ we have that:
\begin{align*}
{\mathbb P}\left\{ \sup_{\abs{E} \le p/A} \norm{{\cal P}_{[p]\times E}({\mathbf R})} \ge C(\alpha, \beta_1, r)\sqrt{\frac{\log A}{A}} \right\} &\le \exp(-c_2(\tau) p).
\end{align*}
Here $c_2(\tau) = 1/(4\tau)$ suffices.
\end{remark}
Using $A = \tau\log \tau$ for $\tau$ large enough completes the proof.
\subsection{Proof of Proposition \ref{prop:diag}}\label{subsec:proofdiag}
Since ${\mathbf R}_2$ is a diagonal matrix, its spectral norm is bounded by the maximum of its entries.
This is easily done as, for every $i\in {\sf Q}^c$:
\begin{align*}
\abs{({\mathbf R}_2)_{ii}} &= \abs{\eta\left( \frac{\norm{{\mathbf{\tilde{z}}}_i}^2}{n} - 1;\frac{\tau}{\sqrt{n}} \right)} \\
&\le \abs{\frac{\norm{{\mathbf{\tilde{z}}}_i}^2-n}{n}}.
\end{align*}
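The bound $\abs{\eta(x;\lambda)}\le\abs{x}$ used in the last step holds for soft thresholding (our reading of $\eta$); a quick check:

```python
import random

def eta(x, lam):
    # Soft threshold: shrink x toward zero by lam (assumed form of eta).
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

lam = 0.3
for _ in range(10000):
    x = random.uniform(-2.0, 2.0)
    assert abs(eta(x, lam)) <= abs(x)
```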
By the Chernoff bound for $\chi^2$ random variables, followed by the union bound,
the event
\begin{align*}
\max_{i}\abs{\frac{\norm{{\mathbf{\tilde{z}}}_i}^2}{n}-1} \ge t
\end{align*}
holds with probability at most $p(\exp(n(-t + \log(1+t))/2) + \exp(n(t + \log(1-t))/2))$. Setting
$t = \sqrt{C\log n/n}$ and using $\log(1+t) \le t - t^2/3$ and $\log(1-t) \le -t -t^2/3$ for
every $t$ small enough, we obtain the probability bound $2pn^{-C/6} = 2\alpha n^{-C/6 +1}$.
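A small simulation is consistent with this column norm concentration; the sizes and the constant $C$ below are our choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, C = 1000, 500, 12.0
t = np.sqrt(C * np.log(n) / n)
Z = rng.standard_normal((n, p))
# Maximal relative deviation of the squared column norms from 1.
dev = np.max(np.abs((Z ** 2).sum(axis=0) / n - 1))
# Chernoff + union bound: failure probability ~ p * n^{-C/6}, tiny here.
assert dev < t
```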
\section*{Acknowledgements}
We are grateful to David Donoho for his feedback on this manuscript.
This work was
partially supported by the NSF CAREER award CCF-0743978, the NSF grant CCF-1319979, and
the grants AFOSR/DARPA FA9550-12-1-0411 and FA9550-13-1-0036.
\label{sec_introduction}
For a long time, statistical mechanics was restricted to systems
interacting via short-range forces. For example, the case of
self-gravitating systems is almost never considered in standard
textbooks of statistical mechanics and these systems have been studied
exclusively in the context of astrophysics. In the sixties, Antonov
\cite{antonov}, Lynden-Bell \cite{lb} and Thirring \cite{thirring}
realized that self-gravitating systems have a very special
thermodynamics marked by the non-equivalence of statistical ensembles
(microcanonical, canonical, grand canonical,...). This is related to
the non-additivity of the energy and to the presence of negative
specific heats in the microcanonical ensemble. Furthermore, these
systems experience a rich diversity of phase transitions
(microcanonical and canonical first order phase transitions, zeroth
order phase transitions,...) associated with their natural tendency to
undergo {\it gravitational collapse}
\cite{paddy,ijmpb}. Recently, several researchers have started to
consider the dynamics and thermodynamics of systems with long-range
interactions at a more general level (see the books \cite{houches,assise} and
references therein) and to discuss the numerous analogies (and
differences) between these systems: self-gravitating systems,
two-dimensional vortices, neutral and non-neutral plasmas, the HMF
model, free electron lasers, Bose-Einstein condensates, atomic
clusters, chemotaxis of bacterial populations etc. These analogies
have also suggested interesting experiments. For example, in the
physics of ultra-cold gases, some authors \cite{artemiev} have
proposed to generate an attractive $1/r$ interaction between atoms by
using a clever configuration of laser beams. This leads to the
fascinating possibility of reproducing, in the laboratory, the {\it
isothermal collapse} (in the canonical ensemble) of a self-gravitating
Fermi gas \cite{ht,pt} leading to a ``white dwarf star''. These
examples illustrate the importance of studying the statistical
mechanics of systems with long-range interactions at a general level
and to develop the analogies between different systems that may seem
{\it a priori} of a very different nature.
\begin{table*}[ht]
\begin{tabular}
{||c||c|c||c||}%
\hline\hline
Index & Temperature & Bounded domain & Unbounded domain \\
\hline \hline
& $T>T_c$ & Metastable equilibrium state & $\bullet$ Evaporation \cite{virial1}: \\
& & (local minimum of free energy): & asymptotically free normal \\
$n=\infty$ & & box-confined isothermal sphere \cite{brenner,crs,sc} & diffusion (gravity negligible)\\
\cline{2-3}
& $T<T_c$ & Self-similar collapse with $\alpha=2$ \cite{brenner,crs,sc} & $\bullet$ Collapse: \\
& & followed by a self-similar post-collapse leading & pre-collapse and post-collapse as \\
& & to the formation of a Dirac peak of mass $M$ \cite{post} & in a bounded domain \cite{brenner,crs,sc} \\
\hline \hline
& $\Theta>\Theta_c$ & Equilibrium state: & Equilibrium state: \\
$0<n<n_3$ & & box-confined (incomplete) polytrope \cite{lang} & complete polytrope \\
\cline{2-3}
& $\Theta<\Theta_c$ & Equilibrium state: & (compact support) \cite{lang} \\
& & complete polytrope (compact support) \cite{lang}& \\
\hline \hline
& $\Theta>\Theta_c$ & Metastable equilibrium state & $\bullet$ Evaporation [P]: \\
& & (local minimum of free energy): & asymptotically free anomalous \\
$n_3<n<\infty$ & & box-confined polytropic sphere \cite{lang}& diffusion (gravity negligible) \\
\cline{2-3}
& $\Theta<\Theta_c$ & Self-similar collapse with $\alpha=2n/(n-1)$ \cite{lang}& $\bullet$ Collapse: \\
& & followed by a post-collapse leading to the & pre-collapse and post-collapse \\
& & formation of a Dirac peak of mass $M$ [N] & as in a bounded domain \cite{lang} \\
\hline \hline
& $\Theta>\Theta_c$ & Equilibrium state: & Self-similar evaporation \\
$n=n_3$ & & box-confined (incomplete) polytrope \cite{lang} & modified by self-gravity [P] \\
\cline{2-4}
& $\Theta<\Theta_c$ & Pseudo self-similar collapse & Collapse [N]\\
& & leading to a Dirac peak of & \\
& & mass $(\Theta/\Theta_c)^{d/2}M$ $+$ halo [P]. & \\
& & This is followed by a post-collapse & \\
& & leading to a Dirac peak of mass $M$ [N] & \\
\cline{2-4}
& $\Theta=\Theta_c$ & Infinite family of steady states [P] & Infinite family of steady states [P] \\
\hline \hline
\end{tabular}
\vspace{.3cm}
{\caption{Summary of the different regimes of the GSP system in $d>2$ with references to the physical literature ([P]: present paper; [N]: not done). The case of negative indices is considered in \cite{logotropes}. The links to the mathematical literature are indicated in the main text. {\it Note:} for $(n=\infty, T>T_{c})$ and for $(n_{3}<n<\infty, \Theta>\Theta_{c})$ in a bounded domain, the system can either reach a metastable equilibrium state or collapse depending on a notion of basin of attraction (see \cite{crs} for more details).} \label{Table1}}
\end{table*}
\begin{table*}[ht]
\begin{tabular}
{||c||c|c||c||}%
\hline\hline
Index & Temperature & Bounded domain & Unbounded domain \\
\hline \hline
& $T>T_c$ & Equilibrium state: & Self-similar evaporation \\
$n=\infty$ & & analytical solution \cite{sc} & modified by self-gravity \cite{virial1} \\
\cline{2-4}
& $T<T_c$ & Pseudo self-similar collapse & Collapse [N] \\
& & leading to a Dirac peak of & \\
& & mass $(T/T_c)M$ $+$ halo \cite{herrerobio,sc}. & \\
& & This is followed by a post-collapse & \\
& & leading to a Dirac peak of mass $M$ [N] & \\
\cline{2-4}
& $T=T_c$ & Self-similar collapse leading to& Self-similar collapse leading to \\
& & a Dirac peak of mass $M$ with & a Dirac peak of mass $M$ with \\
& & exponential growth of $\rho(0,t)$ \cite{sc,souplet} & logarithmic growth of $\rho(0,t)$ \cite{virial1} \\
\hline \hline
& $T>T_c$ & Equilibrium state: & Equilibrium state: \\
$0<n<\infty$ & & box-confined (incomplete) polytrope \cite{lang} & complete polytrope (compact support) \\
\cline{2-3}
& $T\le T_c$ & Equilibrium state: & \cite{lang} \\
& & complete polytrope (compact support) \cite{lang}& \\
\hline \hline
\end{tabular}
\vspace{.3cm}
{\caption{Summary of the different regimes of the GSP system in
$d=2$. In $d=1$, the GSP system always relaxes towards a statistical
equilibrium state so that there is no evaporation or collapse
\cite{sc,acedo,mt}.} \label{Table2}}
\end{table*}
In a series of papers \cite{crs,sc,post,tcoll,multi,virial1,virial2},
we have investigated the dynamics and thermodynamics of a system of
self-gravitating random walkers. The basic idea is to couple the usual
Brownian motion (as introduced by Einstein and Smoluchowski) to the
gravitational interaction. In our general model \cite{virial2}, the
microscopic dynamics of the particles is described by $N$ coupled
stochastic equations including a friction force and a stochastic force
in addition to the gravitational interaction. The friction force and
the stochastic force model the interaction of the system with a thermal bath of
non-gravitational origin. Then, the proper statistical description of
this {\it dissipative} system is the canonical ensemble. In order to
simplify the problem, we have considered a strong friction limit in
which the motion of the particles is overdamped. We have also
considered a mean field approximation which becomes exact in a proper
thermodynamic limit $N\rightarrow +\infty$ in such a way that the
volume $V\sim 1$ is of order unity and the coupling constant $G\sim
1/N$ goes to zero (alternatively, we can consider that the mass of the
individual particles scales like $m\sim 1/N$ so that the total mass
$M\sim Nm$ and the gravity constant $G$ remain of order unity). These
approximations lead to the Smoluchowski-Poisson (SP) system. The
steady states correspond to isothermal distributions associated with
the Boltzmann statistics. When coupled to the Poisson equation, we
obtain density profiles similar to {\it isothermal stars} in astrophysics
\cite{emden,chandrab}. In the course of our study, we realized that the SP system is
isomorphic to the standard Keller-Segel (KS) model \cite{ks,jl}
introduced in mathematical biology to describe the chemotaxis of
bacterial populations \cite{murray}. The SP system and the KS model
have now been extensively studied by physicists
\cite{crs,sc,post,tcoll,multi,virial1,virial2,acedo} and applied
mathematicians \cite{nanjundiah,cp,childress,nagai,biler95,herrero96,herrerobio,othmer,herrero97,herrero98,biler,brenner,nagai2,rosier,bn,horstmann,dolbeault,biler4,corrias,biler1,biler2,blanchet1,blanchet2,souplet} with different methods and motivations.
We have also studied a generalized Smoluchowski-Poisson (GSP) system
[see Eqs. (\ref{gsp1})-(\ref{gsp2}) of this paper] including an
arbitrary barotropic equation of state $P(\rho)$. This model has been
introduced by Chavanis in \cite{gen}. The GSP system can be viewed as
a generalized mean field Fokker-Planck equation (for a review of
nonlinear Fokker-Planck equations, see
\cite{frank,gfp}). It can be obtained from generalized stochastic
processes and it is associated with a notion of effective generalized
thermodynamics (E.G.T). These equations can also provide a
generalized Keller-Segel (GKS) model of chemotaxis with a density
dependent diffusion coefficient
\cite{gfp}. For an isothermal equation of state $P=\rho k_{B}T/m$, we
recover the standard SP system and KS model (with appropriate
notations). Apart from the isothermal equation of state, the GSP
system and GKS model have been studied for: (i) a polytropic equation
of state $P=K\rho^{\gamma}$ \cite{lang}; (ii) a logotropic equation of
state $P=A \ln\rho$ \cite{logotropes}; (iii) a Fermi-Dirac equation of
state $P=P_{F.D.}(\rho)$ \cite{crrs,bln}; and (iv) an equation of state
$P=-T\rho_{max}\ln(1-\rho/\rho_{max})$ taking into account
excluded volume effects \cite{degrad}. These are standard equations
of state introduced in astrophysics and statistical mechanics so that
it is natural to consider these equations of state in connexion to the
GSP system and GKS model.
Specializing on the polytropic equation of state $P=K\rho^{\gamma}$
with $\gamma=1+1/n$ \cite{lang}, the steady states of the GSP system correspond
to polytropic distributions associated with the Tsallis statistics
\cite{tsallis}. When coupled to the Poisson equation, we obtain
density profiles similar to {\it polytropic stars} in astrophysics
\cite{emden,chandrab}. For $d\ge 2$, there exists a critical index
$\gamma_{4/3}=2(d-1)/d$, i.e. $n_{3}=d/(d-2)$ \cite{lang}. For
$0<n<n_3$, the GSP system relaxes towards a stable steady state with a
compact support, similar to a classical white dwarf star (classical
white dwarf stars are equivalent to polytropes with index $n=3/2$ in
$d=3$ \cite{fowler}). For $n>n_3$, there is no stable equilibrium in
an unbounded domain so that the system can either collapse or
evaporate (see Fig. \ref{evap3} for an illustration). These different
regimes have been studied in
\cite{lang}. For $n=n_3$, the dynamics is critical. At this index,
there exists a critical mass $M_c(d)$ (for a given polytropic
constant $K$) \cite{lang} which is connected to the Chandrasekhar
mass of relativistic white dwarf stars (ultra-relativistic white
dwarf stars are equivalent to polytropes with index $n=3$ in $d=3$
\cite{chandra}). The object of the present paper is to study
numerically and, when possible, analytically this critical dynamics.
For $M<M_c$, we find that the system evaporates and we construct a
self-similar solution. For $M>M_c$, we find that the system
collapses. In a finite time $t_{coll}$, it forms a Dirac peak with
mass $M_c$ surrounded by a halo that has a pseudo self-similar
evolution. For $d=2$, the critical index $n_3\rightarrow +\infty$ so
that we recover the case of isothermal spheres whose dynamics is
known to be critical in $d=2$ \cite{sc}.
When we apply this model in the context of chemotaxis \cite{csmasse},
we find the existence of a critical mass $M_{c}(d)$ at the critical
index $n_{3}=d/(d-2)$. For $d=2$, we recover the well-known result
$M_{c}(d=2)=8\pi$ obtained within the standard Keller-Segel model (see
\cite{mt} and references therein) and for $d=3$, the critical mass
associated with the GKS model is $M_{c}(d=3)=202.8956...$ (in usual
dimensionless variables). This is similar to the Chandrasekhar
limiting mass of white dwarf stars. The existence of a limiting mass
for bacterial populations at the critical index $n_{3}$ and its
connexion to the Chandrasekhar mass were pointed out in
\cite{wd,csmasse} (and implicitly in \cite{lang}).
This is another illustration of the numerous analogies that exist
between self-gravitating systems and bacterial populations
\cite{crrs}.
The paper is organized as follows. In Sec. \ref{sec_wdp}, we briefly
recall the connexion between white dwarf stars and gaseous
polytropes. In Sec. \ref{sec_sglp}, we recall the basic properties of
the SP and GSP systems and describe the behavior of the solutions
depending on the index $n$ and the dimension of space $d$. As the
problem is very rich, involving many different cases ($\sim 30$), a summary of
previously obtained results, completed by new results and new
discussion, is required to understand the place of the present study
in the general problem (see also Tables \ref{Table1} and
\ref{Table2} for an overview). Then, we consider more specifically
the particular index $n=n_{3}$ which presents a critical dynamics that
was mentioned, but not studied, in our previous paper \cite{lang}. In
Sec. \ref{sec_dim}, we show that this critical value can be understood
from a simple dimensional analysis. In Sec.
\ref{sec_collapse}, we study the critical collapse dynamics and
extend the results obtained in $d=2$ for isothermal ($n=+\infty$)
systems \cite{sc} to the case of {\it critical polytropes}
($n=n_{3}$) in $d>2$. In Sec. \ref{sec_evaporation}, we study the
evaporation dynamics in unbounded space. We show that for $n>n_3$,
self-gravity becomes negligible for large times so that the
evaporation is eventually controlled by the pure (anomalous)
diffusion. For $n=n_3$, gravity remains relevant at any time so that
there exists a self-similar solution for which all the terms of the
GSP system scale the same way. Finally, in Sec. \ref{sec_analogy},
we transpose our main results to the context of chemotaxis using
notations and parameters adapted to this problem (this is to
facilitate the comparison with the results obtained in mathematical
biology).
Our numerical and analytical study was conducted in parallel to a
mathematical work by Blanchet {\it et al.}
\cite{bcl} who obtained rigorous results for the critical dynamics of
the GSP system and GKS model introduced in our paper
\cite{lang}. These two independent studies have different motivations
and use very different methods so they are complementary to each
other.
\section{White dwarf stars and polytropes}
\label{sec_wdp}
In this section, we briefly recall the connexion between the maximum
mass of white dwarf stars (Chandrasekhar's mass
\cite{chandra}) and the theory of self-gravitating polytropic
spheres \cite{emden,chandrab}.
In the simplest models of stellar structure, a white dwarf star can be
viewed as a degenerate gas sphere in hydrostatic equilibrium. The
pressure is entirely due to the quantum pressure of the electrons
(resulting from Pauli's exclusion principle for fermions) while the
density of the star is dominated by the mass of the protons. The condition of
hydrostatic equilibrium coupled to the Poisson equation reads
\begin{equation}
\nabla P=-\rho\nabla \Phi,\qquad \Delta\Phi=4\pi G\rho,
\label{wdp1}
\end{equation}
and the equation of state of a degenerate gas of relativistic fermions
at $T=0$ can be written parametrically as follows \cite{chandraM}
\begin{equation}
P=A_{2}f(x), \qquad \rho=Bx^{3},
\label{wdp2}
\end{equation}
where
\begin{equation}
A_{2}={\pi m^{4}c^{5}\over 3h^{3}}, \qquad
B={8\pi m^{3}c^{3}\mu H\over 3h^{3}}, \label{wdp3}
\end{equation}
\begin{eqnarray}
f(x)=x(2x^{2}-3)(1+x^{2})^{1/2}+3\ {\sinh}^{-1}x,
\label{wdp4}
\end{eqnarray}
where $m$ is the mass of the electrons, $H$ is the mass of the protons
and $\mu$ is the molecular weight. The function $f(x)$ has the
asymptotic behaviors $f(x)\simeq (8/5) x^{5}$ for $x\ll 1$ and
$f(x)\simeq 2 x^{4}$ for $x\gg 1$. The classical limit corresponds to
$x\ll 1$ and the ultra-relativistic limit to $x\gg 1$. In these
limits, the white dwarf star is equivalent to a polytropic gas sphere
with an equation of state $P=K\rho^{\gamma}$. The index $n$ of the
polytrope is defined by $\gamma=1+1/n$. In $d=3$ dimensions,
polytropes are self-confined for $n<5$ and they are stable (with
respect to the Euler-Poisson system) for $n\le 3$ (for $n=3$ they are
marginally stable). The mass-radius relation is given by \cite{chandrab}:
\begin{eqnarray}
M^{(n-1)/n}R^{(3-n)/n}=\frac{K(1+n)}{G(4\pi)^{1/n}}\omega_{n}^{(n-1)/n},
\label{wdp5}
\end{eqnarray}
where $\omega_{n}$ is a constant (depending only on the index $n$ of
the polytrope) that can be expressed in terms of the solution of the
Lane-Emden equation \cite{emden}.
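As a quick numerical check (our own sketch, not part of the original analysis), the asymptotic behaviors of $f(x)$ quoted above can be verified directly from Eq. (\ref{wdp4}):

```python
import math

def f(x):
    # Eq. (wdp4): f(x) = x (2x^2 - 3) (1 + x^2)^{1/2} + 3 sinh^{-1}(x)
    return x * (2*x**2 - 3) * math.sqrt(1 + x**2) + 3 * math.asinh(x)

# classical limit x << 1: f(x) ~ (8/5) x^5
print(f(0.01) / ((8/5) * 0.01**5))   # close to 1

# ultra-relativistic limit x >> 1: f(x) ~ 2 x^4
print(f(50.0) / (2 * 50.0**4))       # close to 1
```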
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{chandra3new.eps}
} \caption[]{Mass-radius relation for relativistic white dwarf stars
at $T=0$ \cite{chandra}. The radius vanishes for a limiting mass
$M_{Chandra}$ corresponding to the ultra-relativistic limit (R). The
dashed line corresponds to the classical limit (C).} \label{chandra3}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{massedensiteD3.eps}
} \caption[]{Mass versus central density for relativistic white dwarf
stars at $T=0$. Equilibrium states only exist for $M<M_{Chandra}$. For
$M=M_{Chandra}$, the density profile is a Dirac peak. For
$M>M_{Chandra}$, the system is expected to collapse and form a neutron
star or a black hole. The corresponding density
profiles are represented in Fig. 4 of \cite{chandraM}.}
\label{massedensiteD3}
\end{figure}
In the classical case $x\ll 1$, the equation of state
takes the form
\begin{equation}
P=K_{1}\rho^{5/3},
\label{wdp6}
\end{equation}
with
\begin{equation}
K_{1}={1\over 5}\biggl ({3\over 8\pi}\biggr )^{2/3} {h^{2}\over m
(\mu H)^{5/3}}.
\label{wdp7}
\end{equation}
Therefore a classical white dwarf star is equivalent to a polytrope of
index $n=3/2$. The mass-radius relation is given by
\begin{equation}
M^{1/3}R={1\over 2}\biggl ({3\over 32 \pi^{2}}\biggr
)^{2/3}{h^{2}\over mG(\mu H)^{5/3}}\ \omega_{3/2}^{1/3},
\label{wdp8}
\end{equation}
with $\omega_{3/2}=132.3843...$. It exhibits the familiar
$MR^{3}\sim 1$ scaling.
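As an illustration, Eq. (\ref{wdp8}) can be evaluated numerically (a hedged sketch in SI units with CODATA-like constants; the one-solar-mass test case is our own choice):

```python
import math

# Eq. (wdp8): M^{1/3} R = (1/2) (3/(32 pi^2))^{2/3} h^2/(m G (mu H)^{5/3}) omega_{3/2}^{1/3}
h = 6.62607e-34       # Planck constant (J s)
m = 9.10938e-31       # electron mass (kg)
G = 6.674e-11         # gravitational constant (SI)
H = 1.6726e-27        # proton mass (kg)
mu = 2.0              # molecular weight
Msun = 1.989e30       # solar mass (kg)
omega32 = 132.3843

M13R = 0.5 * (3.0 / (32.0 * math.pi**2))**(2.0/3.0) \
       * h**2 / (m * G * (mu * H)**(5.0/3.0)) * omega32**(1.0/3.0)
R = M13R / Msun**(1.0/3.0)
print(R / 1e3)        # radius (km) of a classical 1 M_sun white dwarf
```

The familiar order of magnitude, a few thousand kilometers, is recovered.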
In the ultra-relativistic limit $x\gg 1$, the equation of state takes
the form
\begin{equation}
P=K_{2}\rho^{4/3},
\label{wdp9}
\end{equation}
with
\begin{equation}
K_{2}={1\over 4}\biggl ({3\over 8\pi}\biggr )^{1/3}{hc\over (\mu H)^{4/3}}.
\label{wdp10}
\end{equation}
Therefore, an ultra-relativistic white dwarf star is equivalent to a
polytrope of index $n=3$. For this index, the relation (\ref{wdp5})
leads to a unique value of the mass
\begin{equation}
M_c=\biggl ({3\over 32\pi^{2}}\biggr )^{1/2}\omega_{3}\biggl ({hc\over
G}\biggr )^{3/2}{1\over (\mu H)^{2}},
\label{wdp11}
\end{equation}
with $\omega_{3}=2.01824...$. This is the Chandrasekhar mass
\begin{equation}
M_c=0.196701...\biggl ({hc\over G}\biggr )^{3/2}{1\over (\mu
H)^{2}}\simeq 5.76 M_{\odot}/\mu^{2}.
\label{wdp12}
\end{equation}
Considering the general mass-radius relation of partially relativistic
white dwarf stars (see Fig. \ref{chandra3}), we note that, for this
limiting value, the radius $R$ of the configuration tends to
zero. This leads to a Dirac peak with mass $M_c$. Thus, the
Chandrasekhar mass represents the maximum mass of white dwarf stars
(see Fig. \ref{massedensiteD3}). There is no hydrostatic equilibrium
configuration for $M>M_{c}$.
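For orders of magnitude, the Chandrasekhar mass (\ref{wdp12}) can be evaluated in the same spirit (again a sketch with CODATA-like SI values and the choice $\mu=2$):

```python
h = 6.62607e-34       # Planck constant (J s)
c = 2.99792458e8      # speed of light (m/s)
G = 6.674e-11         # gravitational constant (SI)
H = 1.6726e-27        # proton mass (kg)
mu = 2.0              # molecular weight
Msun = 1.989e30       # solar mass (kg)

# Eq. (wdp12): M_c = 0.196701 (hc/G)^{3/2} / (mu H)^2
Mc = 0.196701 * (h * c / G)**1.5 / (mu * H)**2
print(Mc / Msun)      # about 1.44, i.e. 5.76 M_sun / mu^2
```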
If we extend Chandrasekhar's theory to a $d$-dimensional universe
\cite{wd}, we find that white dwarf stars become unstable in a
universe with $d\ge 4$ dimensions (in $d=4$, classical white dwarf
stars exist for a unique value of the mass
$M=M_c=0.0143958...h^4/(m^2G^2\mu^3 H^3)$ and they are
marginally stable). Therefore, the dimension $d=3$ of our universe is
very special regarding the laws of gravity. This is the largest
dimension of space for which the whole series of equilibria of white
dwarf stars (from classical to ultra-relativistic) is stable. This may
have implications regarding the Anthropic
Principle.
\section{Self-gravitating Langevin particles}
\label{sec_sglp}
\subsection{The generalized Smoluchowski-Poisson system}
\label{sec_gsp}
In this paper, we shall study a dynamical model of self-gravitating
systems whose steady states reproduce the condition of hydrostatic
equilibrium Eq. (\ref{wdp1}). Specifically, we consider the generalized
Smoluchowski-Poisson system \cite{gen}:
\begin{equation}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left\lbrack
\frac{1}{\xi}\left (\nabla P+\rho\nabla\Phi\right )\right\rbrack,
\label{gsp1}
\end{equation}
\begin{equation}
\Delta\Phi=S_{d}G\rho,
\label{gsp2}
\end{equation}
where $P(\rho)$ is a barotropic equation of state, i.e. the pressure
$P({\bf r},t)$ depends only on the density of particles $\rho({\bf
r},t)$. This model describes a {\it dissipative} gas of
self-gravitating Langevin particles in an overdamped limit
$\xi\rightarrow +\infty$ (where inertial effects are neglected) and in
the thermodynamic limit $N\rightarrow +\infty$ (where the mean field
approximation becomes exact) \cite{virial2,lemou}. The GSP system is
a particular example of generalized mean field Fokker-Planck equation
\cite{gfp}. It is associated to a stochastic process of the form
\begin{equation}
\frac{d{\bf r}}{dt}=-\frac{1}{\xi}\nabla \Phi+\sqrt{\frac{2
P(\rho)}{\rho\xi}}{\bf R}(t),\label{gsp3}
\end{equation}
where ${\bf R}(t)$ is a white noise with $\langle {\bf
R}(t)\rangle={\bf 0}$ and $\langle
R_i(t)R_j(t')\rangle=\delta_{ij}\delta(t-t')$. This stochastic process
describes the evolution of each of the $N$ Langevin particles
interacting through the mean field potential $\Phi({\bf r},t)$. For
the sake of generality, we have allowed the strength of the noise term in
Eq. (\ref{gsp3}) to depend on the local distribution of particles.
This gives rise to anomalous diffusion and generalized pressure laws
as discussed in
\cite{frank,gfp}.
The Lyapunov functional (or generalized free energy) associated with the GSP system is
\begin{equation}
F=\int\rho\int^{\rho}\frac{P(\rho')}{\rho'^{2}}\,d\rho' d{\bf
r}+\frac{1}{2}\int\rho\Phi\, d{\bf r}.
\label{gsp4}
\end{equation}
A straightforward calculation leads to
\begin{equation}
\dot F=-\int\frac{1}{\xi\rho}(\nabla P+\rho\nabla\Phi)^{2}d{\bf r}\le 0.
\label{gsp5}
\end{equation}
The GSP system has the following properties: (i) the total mass is
conserved. (ii) $\dot F\le 0$. (iii) $\dot F=0$ $\Leftrightarrow$
$\nabla P+\rho\nabla\Phi={\bf 0}$ (hydrostatic equilibrium)
$\Leftrightarrow$ $\partial_{t}\rho=0$. (iv) $\rho_{eq}({\bf r})$ is a
steady state of the GSP system iff it is a critical point of $F[\rho]$
at fixed mass. (v) A steady state of the GSP system is linearly
dynamically stable iff it is a (local) minimum of $F[\rho]$ at fixed
mass \footnote{Since the free energy $F[\rho]$ coincides with the
energy functional ${\cal W}[\rho]$ of a barotropic gas (up to a
positive macroscopic kinetic term) \cite{aaantonov}, we conclude that
$\rho_{eq}({\bf r})$ is linearly dynamically stable with respect to
the GSP system iff it is formally nonlinearly dynamically stable with
respect to the barotropic Euler-Poisson system
\cite{aaantonov}.}. By Lyapunov's direct
method \cite{frank}, we know that if $F[\rho]$ is bounded from below,
the GSP system will relax towards a (local) minimum of $F[\rho]$ at
fixed mass for $t\rightarrow +\infty$. If $F[\rho]$ has several
minima, the choice of the selected minimum will depend on a notion of
basin of attraction: if the initial condition is sufficiently ``close'' to
the minimum $\rho_{eq}({\bf r})$, the distribution $\rho({\bf r},t)$
will converge towards $\rho_{eq}({\bf r})$ for $t\rightarrow
+\infty$. Finally, if $F[\rho]$ has no global minimum (as can be the
case for self-gravitating systems), the system can either tend to a
local minimum (metastable) if it exists, or undergo collapse or
evaporation.
We are not claiming that this simple model accurately describes the
dynamics of white dwarf stars or other astrophysical systems. However,
we have undertaken a systematic study of the GSP system for different
equations of state that have been considered in astrophysics. The main
interest of this model is its simplicity (while being still very rich)
which enables an accurate numerical and analytical treatment. This can
be viewed as a first step before considering other, more realistic,
dynamical models of self-gravitating systems. On the other hand, in a
completely different context, this model is isomorphic to the standard
Keller-Segel model describing the chemotaxis of bacterial populations
(see Sec. \ref{sec_analogy}). This is a further motivation to study
this type of equations at a general level \footnote{We shall study the
problem in $d$ dimensions because: (i) We have found that the structure
of the mathematical problem with the dimension of space is very rich
\cite{sc,lang,wd}, exhibiting several characteristic dimensions. (ii)
In gravity, the usual dimension is $d=3$ but, in biology (see
Sec. \ref{sec_analogy}), the bacteria (or cells) are compelled to lie
on a plane so that $d=2$.}.
\subsection{Isothermal spheres}
\label{sec_i}
For an isothermal equation of state $P=\rho
k_{B}T/m$, we recover the standard Smoluchowski-Poisson system \cite{crs}:
\begin{equation}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left\lbrack
\frac{1}{\xi}\left (\frac{k_{B}T}{m}\nabla \rho+\rho\nabla\Phi\right
)\right\rbrack,
\label{i1}
\end{equation}
\begin{equation}
\Delta\Phi=S_{d}G\rho.
\label{i2}
\end{equation}
Equation (\ref{i1}) is an ordinary
mean-field Fokker-Planck equation associated with a Langevin dynamics
of the form
\begin{eqnarray}
\label{i3}
\frac{d{\bf r}}{dt}=-\frac{1}{\xi}\nabla \Phi+\sqrt{\frac{2k_{B}T}{\xi m}}{\bf R}(t),
\end{eqnarray}
where the strength of the noise is constant.
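To make the link between Eqs. (\ref{i1}) and (\ref{i3}) concrete, here is a minimal Euler-Maruyama integration of the Langevin dynamics (\ref{i3}) for $N$ particles in $d=2$, with the mean field force replaced by a softened pairwise attraction. This is purely illustrative: the parameter values and the softening length `eps` are our own choices and are not part of the model.

```python
import math, random

def em_step(pos, dt, kT_over_m=1.0, fric=1.0, Gm=0.05, eps=0.1):
    """One Euler-Maruyama step of Eq. (i3): overdamped drift -grad(Phi)/fric
    plus Gaussian noise of constant strength sqrt(2 k_B T/(xi m))."""
    sig = math.sqrt(2.0 * kT_over_m * dt / fric)
    out = []
    for i, (x, y) in enumerate(pos):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if j == i:
                continue
            dx, dy = xj - x, yj - y
            # attraction from the 2d (logarithmic) potential, softened by eps
            r2 = dx*dx + dy*dy + eps*eps
            fx += Gm * dx / r2
            fy += Gm * dy / r2
        out.append((x + dt * fx / fric + sig * random.gauss(0.0, 1.0),
                    y + dt * fy / fric + sig * random.gauss(0.0, 1.0)))
    return out

random.seed(1)
pos = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(200):
    pos = em_step(pos, dt=1e-3)
```

Averaging the empirical density over many such realizations would approximate the solution of the SP system in the mean field limit.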
The Lyapunov functional of the SP system can be written
\begin{equation}
F=k_{B}T\int \frac{\rho}{m}\ln\frac{\rho}{m} \,d{\bf
r}+\frac{1}{2}\int\rho\Phi \,d{\bf r}.
\label{i4}
\end{equation}
This is the Boltzmann free energy $F_B=E-TS_B$ where $E=(1/2)\int
\rho\Phi \,d{\bf r}$ is the energy and $S_B=-k_{B}\int ({\rho}/{m})\ln
({\rho}/{m}) \,d{\bf r}$ is the Boltzmann entropy. The stationary
solutions of the SP system are given by the Boltzmann distribution
\begin{equation}
\rho=A e^{-\beta m \Phi},
\label{i5}
\end{equation}
where $A$ is a constant determined by the mass $M$. These steady
states can also be obtained by extremizing $F$ at fixed mass, writing
$\delta F-\alpha\delta M=0$, where $\alpha$ is a Lagrange
multiplier. The equilibrium distribution is obtained by substituting
Eq. (\ref{i5}) into Eq. (\ref{i2}) leading to the Boltzmann-Poisson
equation. Specializing to spherically symmetric distributions and
defining
\begin{equation}
\rho=\rho_{0}e^{-\psi(\xi)}, \quad \xi=r/r_{0}=(S_{d}\beta Gm\rho_0)^{1/2}r,
\label{i6}
\end{equation}
where $\rho_0$ is the central density, we find after simple algebra
that $\psi$ is solution of the Emden equation
\begin{equation}
\frac{1}{\xi^{d-1}}\frac{d}{d\xi}\left
(\xi^{d-1}\frac{d\psi}{d\xi}\right )=e^{-\psi},
\label{i7}
\end{equation}
with $\psi=0$ and $\psi'=0$ at $\xi=0$. The Emden equation can also be
obtained from the fundamental equation of hydrostatic equilibrium for
an isothermal equation of state \cite{emden,chandrab,sc}. Note that the
isothermal spheres have a self-similar structure
$\rho(r)/\rho_{0}=e^{-\psi(r/r_0)}$: if we rescale the central density
and the radius appropriately, they have the same profile
$e^{-\psi(\xi)}$. This property is called homology \cite{chandrab}.
For $d=1$, the SP system is equivalent to the Burgers equation
\cite{acedo,mt} and it relaxes towards the Camm distribution \cite{camm}
which is a global minimum of free energy for any temperature. For
$d>2$, there is no steady state with finite mass in an unbounded
domain because the density of an isothermal self-gravitating system
decreases as $\rho\sim r^{-2}$ for $r\rightarrow +\infty$
\cite{chandrab}. We shall thus enclose the system within a box of radius
$R$ \footnote{It may appear artificial to put the system in a
``box''. In gravity, the box delimitates the region of space where the
system can be assumed isolated from the surrounding and where
statistical mechanics applies. In biology (see Sec. \ref{sec_analogy}),
the box has a physical meaning since it represents the container in
which the bacteria (or cells) are confined.}. For box-confined
systems, we must integrate the Emden equation (\ref{i7}) until the
normalized box radius $\xi=\alpha$ with
\begin{eqnarray}
\alpha=(S_{d}\beta Gm\rho_0)^{1/2}R.
\label{i8}
\end{eqnarray}
It is useful to define a
dimensionless control parameter
\begin{eqnarray}
\eta=\frac{\beta GMm}{R^{d-2}}.
\label{i9}
\end{eqnarray}
Using the conservation of mass or the Gauss theorem, we get \cite{sc}:
\begin{eqnarray}
\eta=\alpha\psi'(\alpha).
\label{i10}
\end{eqnarray}
This equation relates the central density to the mass and the
temperature. More precisely, the relation $\eta(\alpha)$ gives the
mass $M$ as a function of the central density (for a fixed
temperature $T$) or the temperature $T$ as a function of the density
contrast ${\cal R}\equiv \rho(0)/\rho(R)=e^{\psi(\alpha)}$ (for a
fixed mass $M$). The curve $\eta(\alpha)$ is plotted in Fig. 3 of
\cite{sc}. For $2<d<10$, the series of equilibria $\eta(\alpha)$
oscillates and presents a first turning point at
$\eta_{c}=\eta(\alpha_{1})$ (for $d\ge 10$, the series of equilibria
does not display any oscillation). According to Poincar\'e's turning
point argument \cite{katz,ijmpb}, configurations with
$\alpha>\alpha_{1}$ are unstable (saddle points of free energy at
fixed mass). This concerns in particular the singular isothermal
sphere corresponding to $\alpha\rightarrow +\infty$. Configurations
with $\alpha<\alpha_{1}$ are metastable (local minima of free energy
at fixed mass) and they exist only for $\eta\le\eta_c$. There is no
global minimum of free energy for self-gravitating isothermal
spheres. For $\eta\le
\eta_{c}$, depending on the form of the initial density profile, the
SP system can either relax towards a box-confined isothermal sphere
(metastable) or collapse. This behavior has been illustrated
numerically in Fig. 16 of \cite{crs}. For $\eta>\eta_{c}$ the SP
system undergoes gravitational collapse. This self-similar collapse,
followed by the formation of a Dirac peak, has been studied in detail
in \cite{sc,post}. If we remove the box, the SP system can either
collapse or evaporate depending on the initial condition (this
behavior will be illustrated numerically in Sec. \ref{sec_sup}).
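The value of $\eta_c$ can be recovered numerically. The sketch below (our own illustration; the RK4 integrator and step size are arbitrary choices) integrates the Emden equation (\ref{i7}) in $d=3$ and locates the first maximum of $\eta(\alpha)=\alpha\psi'(\alpha)$, which is known to be $\eta_c\simeq 2.52$:

```python
import math

def emden_eta_max(d=3, h=1e-3, alpha_max=50.0):
    """Integrate psi'' + (d-1)/xi psi' = exp(-psi), psi(0) = psi'(0) = 0,
    with RK4 and return the maximum of eta(alpha) = alpha psi'(alpha)."""
    def rhs(xi, psi, dpsi):
        return dpsi, math.exp(-psi) - (d - 1) * dpsi / xi
    # start slightly off-center with the series psi ~ xi^2/(2d)
    xi = 1e-3
    psi, dpsi = xi**2 / (2 * d), xi / d
    eta_max = 0.0
    while xi < alpha_max:
        k1 = rhs(xi, psi, dpsi)
        k2 = rhs(xi + h/2, psi + h/2 * k1[0], dpsi + h/2 * k1[1])
        k3 = rhs(xi + h/2, psi + h/2 * k2[0], dpsi + h/2 * k2[1])
        k4 = rhs(xi + h, psi + h * k3[0], dpsi + h * k3[1])
        psi += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dpsi += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
        eta_max = max(eta_max, xi * dpsi)
    return eta_max

eta_c = emden_eta_max()
print(eta_c)   # first turning point, close to 2.52
```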
The dimension $d=2$ is critical and has been studied in detail in
\cite{sc,virial1}. The solution of the Emden equation is known analytically \cite{ostriker}:
\begin{equation}
e^{-\psi}=\frac{1}{\left (1+\frac{1}{8}\xi^{2}\right )^{2}}.
\label{i11}
\end{equation}
In an unbounded domain, the density profile extends to infinity but
the total mass is finite because the density decreases as $r^{-4}$ for
$r\rightarrow +\infty$. The total mass $M=\int_{0}^{+\infty}\rho 2\pi r dr$
is given by
\begin{equation}
M=\frac{1}{\beta Gm}\int_{0}^{+\infty} e^{-\psi}\xi
d\xi=\frac{1}{\beta Gm}\lim_{\xi\rightarrow +\infty} \xi \psi'(\xi),
\label{i12}
\end{equation}
where we have used the Emden equation (\ref{i7}) to get the last
equality. Using Eq. (\ref{i11}), we find that $\xi\psi'\rightarrow 4$
for $\xi\rightarrow +\infty$. This yields a unique value of the mass
(for a fixed temperature), or equivalently a unique value of the
temperature (for a fixed mass) given by
\begin{eqnarray}
M_{c}=\frac{4k_{B}T}{Gm}, \qquad k_{B}T_{c}=\frac{GMm}{4}.
\label{i13}
\end{eqnarray}
For $T=T_{c}$ or $M=M_{c}$, we have an infinite family of steady states
\begin{eqnarray}
\rho(r)=\frac{\rho_{0}}{\left (1+\frac{1}{8}(r/r_0)^2\right
)^2},\quad \rho_{0}r_0^2=\frac{k_{B}T}{2\pi Gm}, \label{i14}
\end{eqnarray}
parameterized by the central density $\rho_{0}$. For
$\rho_{0}\rightarrow +\infty$, we obtain a Dirac peak with mass
$M_c$. The steady states (\ref{i14}) have the same value of the free
energy, independently of the central density $\rho_{0}$ (see Appendix
\ref{sec_virial}) and they are marginally stable ($\delta^{2}F=0$). For $T\neq T_{c}$ or $M\neq M_c$, there is no steady state in an infinite domain.
For $T>T_c$ or $M<M_c$, the solution of the SP system evaporates and
for $T<T_c$ or $M>M_c$, the solution of the SP
system collapses. These different regimes have been discussed in
detail in \cite{sc,virial1}.
If we consider box confined configurations in $d=2$, we observe that
the control parameter (\ref{i9}) is independent of the box radius
and can be written
\begin{eqnarray}
\eta=\beta GMm=4\frac{M}{M_{c}}=4 \frac{T_{c}}{T}.
\label{i15}
\end{eqnarray}
Using Eqs. (\ref{i10}) and (\ref{i11}), we obtain the relation
$\eta(\alpha)=(\alpha^2/2)/(1+\alpha^2/8)$ between the central
density, the mass and the temperature. The density profiles are given
by Eq. (\ref{i14}) with $8(r_0/R)^2=(T/T_c-1)=(M_c/M-1)$ so the
central density is now determined by the mass $M$ or the temperature
$T$. Equilibrium states exist only for $\eta\le \eta_{c}=4$,
i.e. $M\le M_{c}$ or $T\ge T_{c}$ and, since the series of
equilibria is monotonic, they are fully
stable (global minima of free energy at fixed mass). In that case, the
SP system tends to a box-confined isothermal sphere. For
$\eta=\eta_{c}=4$, i.e. $M=M_{c}$ or $T=T_{c}$, the steady state is a
Dirac peak containing all the mass. For $\eta>\eta_{c}=4$ the SP
system undergoes gravitational collapse (see Sec. \ref{sec_collapse}).
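These relations are simple enough to check in a few lines (an illustrative sketch based on the analytic solution (\ref{i11})):

```python
def eta(alpha):
    # eta(alpha) = alpha psi'(alpha) with psi(xi) = 2 ln(1 + xi^2/8)
    return (alpha**2 / 2.0) / (1.0 + alpha**2 / 8.0)

# eta(alpha) increases monotonically and saturates at eta_c = 4,
# i.e. xi psi'(xi) -> 4 for xi -> +infinity (hence M_c = 4 k_B T/(G m))
print(eta(10.0), eta(1e4))

# inverting eta(alpha) reproduces 8 (r_0/R)^2 = 8/alpha^2 = 4/eta - 1 = T/T_c - 1
alpha = 4.0
print(8.0 / alpha**2, 4.0 / eta(alpha) - 1.0)   # equal
```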
The mass versus central density relation (for a fixed temperature) of
two-dimensional isothermal spheres is plotted in Fig. \ref{mrho}. We note the striking
analogy with the mass versus central density curve of white dwarf stars in
Fig. \ref{massedensiteD3}. Therefore, the critical mass (\ref{i13}) of
isothermal spheres in two dimensions shares some resemblance with the
Chandrasekhar mass. We shall show in the next section that this
analogy (which is not obvious a priori) bears more significance than
is apparent at first sight.
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{mrho.eps}
} \caption[]{Mass as a function of the central density for
two-dimensional box-confined self-gravitating isothermal spheres with
fixed temperature. Equilibrium states exist only for $M\le M_{c}$. For
$M=M_{c}$, the density profile is a Dirac peak and for $M>M_c$ the
system undergoes gravitational collapse. More precisely, this curve
represents $\eta(\alpha)/4$ so it also gives the inverse temperature
$T_{c}/T$ as a function of the density contrast ${\cal
R}=\rho_{0}/\rho(R)={\cal R}(\alpha)$ for a fixed
mass. The corresponding density profiles are
represented in Fig. 1 of \cite{mt}. }
\label{mrho}
\end{figure}
\subsection{Complete polytropes}
\label{sec_up}
If we consider a polytropic equation of state $P=K\rho^{\gamma}$ with
$\gamma=1+1/n$, we get the polytropic Smoluchowski-Poisson system \cite{lang}:
\begin{equation}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left\lbrack
\frac{1}{\xi}\left (K\nabla \rho^{\gamma}+\rho\nabla\Phi\right
)\right\rbrack,
\label{up1}
\end{equation}
\begin{equation}
\Delta\Phi=S_{d}G\rho.
\label{up2}
\end{equation}
Equation (\ref{up1}) is a generalized mean field Fokker-Planck equation
associated with the stochastic process
\begin{eqnarray}
\label{up3} \frac{d{\bf r}}{dt}=-\frac{1}{\xi}\nabla
\Phi+\sqrt{\frac{2K}{\xi}}\rho^{(\gamma-1)/2}{\bf R}(t),
\end{eqnarray}
where the strength of the noise depends on the local density as a power law
\cite{borland}. The Lyapunov functional of the polytropic SP system can be written
\begin{equation}
F=\frac{K}{\gamma -1}\int (\rho^{\gamma}-\rho) \,d{\bf
r}+\frac{1}{2}\int\rho\Phi \,d{\bf r}.
\label{up4}
\end{equation}
It can be interpreted as a generalized free energy of the form
$F=E-T_{eff} S$ where $E=(1/2)\int \rho\Phi \,d{\bf r}$ is the energy,
$T_{eff}=K$ is an effective temperature (polytropic temperature) and
$S=-1/(\gamma-1)\int (\rho^{\gamma}-\rho)\,d{\bf r}$ is the Tsallis
entropy (the polytropic index $\gamma$ plays the role of the Tsallis
$q$ parameter). For $\gamma=1$, i.e. $n\rightarrow +\infty$, the
polytropic equation of state $P=K\rho^{\gamma}$ reduces to
$P=K\rho$. It coincides with an isothermal equation of state $P=\rho
k_{B}T/m$ with temperature $K=k_{B}T/m$ leading to the standard
Smoluchowski-Poisson system (\ref{i1})-(\ref{i2}).
The stationary solutions of the GSP system (\ref{up1}) are
given by the Tsallis distributions
\begin{equation}
\rho=\left \lbrack
\lambda-\frac{\gamma-1}{K\gamma}\Phi\right\rbrack_{+}^{1/(\gamma-1)},
\label{up5}
\end{equation}
where $\lambda$ is a constant determined by the mass $M$ (by
definition $[x]_{+}=x$ if $x\ge 0$ and $[x]_{+}=0$ if $x<0$). These
steady states can also be obtained by extremizing $F$ at fixed mass,
writing $\delta F-\alpha\delta M=0$, where $\alpha$ is a Lagrange
multiplier. The equilibrium distribution is obtained by substituting
Eq. (\ref{up5}) into Eq. (\ref{up2}) leading to the Tsallis-Poisson
equation. Specializing to spherically symmetric solutions and defining
\begin{equation}
\rho=\rho_0\theta^{n}(\xi), \quad \xi=r/r_{0}, \quad r_0=\left\lbrack
\frac{K(1+n)}{S_{d}G\rho_0^{1-1/n}}\right\rbrack^{1/2},
\label{up6}
\end{equation}
where $\rho_0$ is the central density, we find after
simple algebra that $\theta$ is solution of the Lane-Emden equation
\begin{equation}
\frac{1}{\xi^{d-1}}\frac{d}{d\xi}\left
(\xi^{d-1}\frac{d\theta}{d\xi}\right )=-\theta^{n},
\label{up7}
\end{equation}
with $\theta=1$ and $\theta'=0$ at $\xi=0$. The Lane-Emden equation
can equivalently be derived from the fundamental equation of
hydrostatic equilibrium with a polytropic equation of state
\cite{emden,chandrab,lang}. Note that the polytropic spheres have a self-similar
structure $\rho(r)/\rho_{0}=\theta^{n}(r/r_0)$: if we rescale the
central density and the radius appropriately, they have the same
profile $\theta^{n}(\xi)$. This property is called homology \cite{chandrab}.
In this paper, we restrict ourselves to $n>0$. Let us first discuss
the case $d>2$. For $n>n_{5}=(d+2)/(d-2)$, unbounded self-gravitating
polytropes have infinite mass because their density profile decreases
like $r^{-\alpha}$ for $r\rightarrow +\infty$, with
$\alpha=2n/(n-1)$. For $n<n_{5}=(d+2)/(d-2)$, they are
self-confined. In that case, the function $\theta$ vanishes at
$\xi=\xi_{1}$ and the density vanishes at $R_{*}=r_{0}\xi_{1}$ which
defines the radius of the polytrope. The relation between the radius
and the central density is
\begin{equation}
R_{*}=\left\lbrack
\frac{K(1+n)}{S_{d}G\rho_0^{1-1/n}}\right\rbrack^{1/2}\xi_{1}.
\label{up8}
\end{equation}
The total mass $M=\int_{0}^{R_{*}}\rho S_{d}r^{d-1}dr$ can be written as
\begin{equation}
M=S_{d}\rho_0
r_{0}^{d}\int_{0}^{\xi_{1}}\theta^{n}\xi^{d-1}d\xi=-S_{d}\rho_0
r_{0}^{d}\xi_{1}^{d-1}\theta'_{1}, \label{up9}
\end{equation}
where we have used the Lane-Emden equation (\ref{up7}) to get the last
equality. Therefore, the relation between the mass and the central
density is
\begin{equation}
M=-S_{d}\rho_0\left\lbrack \frac{K(1+n)}{S_{d}G\rho_0^{1-1/n}}
\right\rbrack^{d/2}\xi_{1}^{d-1}\theta'_{1}.
\label{up10}
\end{equation}
Eliminating the central density between Eqs. (\ref{up8}) and
(\ref{up10}) and introducing the index
\begin{equation}
n_{3}=\frac{d}{d-2},
\label{up11}
\end{equation}
we get the mass-radius relation
\begin{eqnarray}
M^{(n-1)/n}R_{*}^{\lbrack (d-2)(n_3-n)
\rbrack/n}=\frac{K(1+n)}{GS_{d}^{1/n}}\omega_{n}^{(n-1)/n},
\label{up12}
\end{eqnarray}
where
\begin{eqnarray}
\omega_{n}=-\xi_{1}^{(n+1)/(n-1)}\theta'_{1}.
\label{up13}
\end{eqnarray}
Let us introduce the polytropic temperature
\begin{eqnarray}
\Theta=\frac{K(1+n)}{nS_{d}^{1/n}}.
\label{up14}
\end{eqnarray}
For $0<n<n_{3}$ there is one, and only one, steady state for each
mass $M$ and temperature $\Theta$ and it is fully stable (global
minimum of $F$ at fixed mass). The GSP system will relax towards
this complete polytrope (note that for $n=1$ the radius $R_{*}$ of
the polytrope is independent of the mass). For $n_{3}<n<n_{5}$
there is one, and only one, steady state for each mass $M$ and
temperature $\Theta$ but it is unstable (saddle point of $F$ at
fixed mass). In that case, the system will either collapse or
evaporate. The index $n_{3}$ is {\it critical}. For $n=n_{3}$, there
exist steady solutions for a unique value of the mass (at fixed
temperature $\Theta$):
\begin{eqnarray}
M_{c}=\left (
\frac{n_{3}\Theta}{G}\right )^{n_3/(n_3-1)}\omega_{n_3},
\label{up15}
\end{eqnarray}
or for a unique temperature (at fixed mass $M$):
\begin{eqnarray}
\Theta_{c}=\frac{G}{n_3}\left (\frac{M}{\omega_{n_3}}\right )^{(n_3-1)/n_3}.
\label{up16}
\end{eqnarray}
For $d=3$, we have
$M_{c}=({3\Theta}/{G})^{3/2}\omega_{3}=10.487... ({\Theta}/{G})^{3/2}$
and
$\Theta_{c}=({G}/{3})({M}/{\omega_{3}})^{2/3}=0.20872...\, G M^{2/3}$. As
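The two numerical prefactors quoted above are tied together through $\omega_3$, which gives a quick arithmetic cross-check (an illustrative sketch, not in the original text):

```python
# M_c = (3 Theta/G)^{3/2} omega_3, so the quoted prefactor 10.487 fixes omega_3;
# the Theta_c prefactor must then equal 1/(3 omega_3^{2/3}).
m_coeff = 10.487
omega3 = m_coeff / 3**1.5                     # ~ 2.018
theta_coeff = 1.0 / (3.0 * omega3**(2.0 / 3.0))
print(round(theta_coeff, 5))                  # ~ 0.20872
```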
we have seen in Sec. \ref{sec_wdp}, the Chandrasekhar limiting mass of
relativistic white dwarf stars is connected to the limiting mass
(\ref{up15}) of critical polytropes. For a polytropic equation of state with
critical index $n=n_3$, and for $M=M_c$, we get an infinite family of
steady solutions
\begin{eqnarray}
\rho(r)=\rho_{0}\theta^{n_3}(r/r_0), \quad
\rho_{0}r_{0}^{d}=\frac{1}{S_d}\left (\frac{\Theta n_{3}}{G}\right
)^{d/2}, \label{up17}
\end{eqnarray}
parameterized by the central density $\rho_{0}$. For
$\rho_{0}\rightarrow +\infty$, the density profile tends to a Dirac
peak with mass $M_{c}$. These solutions have the same equilibrium free energy
$F[\rho_{eq}]=-dKM/(d-2)$ independently of the central density
$\rho_{0}$ (see Appendix \ref{sec_virial}) and they are marginally stable
($\delta^{2}F=0$). For $M<M_{c}$ (at fixed
temperature) or $\Theta>\Theta_c$ (at fixed mass), the solutions of
the GSP system evaporate and for $M>M_{c}$ (at fixed temperature) or
$\Theta<\Theta_c$ (at fixed mass), they collapse. These different
regimes will be studied in detail in Secs. \ref{sec_collapse} and \ref{sec_evaporation}.
For $d=2$, we find that $n_{3}\rightarrow +\infty$, so that
isothermal systems ($n=+\infty$) in two dimensions are similar to
critical polytropes ($n=n_{3}$) in higher dimensions $d>2$. {\it This
is why the critical mass of isothermal spheres in $d=2$ shares some
analogies with the Chandrasekhar mass in $d=3$ since they both
correspond to critical polytropes with index $n=n_{3}$}
\cite{wd}. Comparing Eq. (\ref{i12}) with Eq. (\ref{up9}) we find that
for $d\rightarrow 2$ and $n=n_{3}\rightarrow +\infty$, we have the
limit
\begin{eqnarray}
\lim_{n_{3}\rightarrow +\infty} n_{3}\omega_{n_{3}}=4.
\label{up18}
\end{eqnarray}
This limit can also be obtained from Eq. (79) of \cite{lang}. With
this relation, we find that the critical mass and the critical
temperature in $d=2$ given by Eq. (\ref{i13}) are particular cases of
Eqs. (\ref{up15}) and (\ref{up16}).
Finally, for $d=1$ with $n>0$ (and for $d=2$ with $0<n<+\infty$), the
GSP system always relaxes towards a complete polytrope which is a global
minimum of free energy. Thus there is no critical dynamics for $d<2$
(and for $d=2$ with $n\neq +\infty$).
\subsection{Box confined polytropes}
\label{sec_box}
For systems confined within a box of radius $R$, we need to integrate
the Lane-Emden equation (\ref{up7}) until the normalized box radius
$\xi=\alpha$ with
\begin{eqnarray}
\alpha=R/r_0=\left\lbrack\frac{S_{d}G\rho_0^{1-1/n}}{K(n+1)}\right\rbrack^{1/2}R.
\label{box1}
\end{eqnarray}
It is useful to define a dimensionless control parameter (the
definition of this parameter has been slightly changed with respect
to our previous paper \cite{lang}):
\begin{eqnarray}
\eta=M\left \lbrack \frac{nS_{d}^{1/n}G}{K(1+n)}\right
\rbrack^{n/(n-1)}\frac{1}{R^{(d-2)(n-n_{3})/(n-1)}}.
\label{box2}
\end{eqnarray}
In terms of the polytropic temperature (\ref{up14}), it can be rewritten
\begin{equation}
\eta={G^{n/(n-1)}M\over \Theta^{n/(n-1)}R^{(d-2)(n-n_{3})/
(n-1)}}. \label{box3}
\end{equation}
Note that for $n\rightarrow +\infty$, we have $\Theta=K=k_{B}T/m$
and the definitions (\ref{i9}) and (\ref{box3}) coincide. Using the
conservation of mass or the Gauss theorem, we get \cite{lang}:
\begin{eqnarray}
\eta=-n^{{n}/({n-1})}\alpha^{(n+1)/(n-1)}\theta'(\alpha), \quad (\alpha<\xi_1).
\label{box4}
\end{eqnarray}
This equation relates the central density to the mass (at fixed
temperature and box radius). In fact, this relation is valid only
for {\it incomplete polytropes} whose density profile is arrested by
the box (i.e. $\rho(R)>0$). For $n\ge n_5$, this is always the case.
For $0<n<n_5$, the identity
\begin{eqnarray}
\frac{\alpha}{\xi_{1}}=\frac{R}{R_{*}},
\label{box5}
\end{eqnarray}
shows that the polytrope is confined by the box if $R_{*}\ge R$,
i.e. $\alpha\le \xi_{1}$. For $R_{*}<R$, i.e. $\alpha>\xi_{1}$, we
have {\it complete polytropes} whose density profile vanishes before
the wall. In that case, we need to integrate the
Lane-Emden equation until the natural polytropic radius $\xi=\xi_{1}$.
For $\alpha>
\xi_{1}$, the relation (\ref{box4}) is replaced by
\begin{eqnarray}
\eta=n^{n/(n-1)}\omega_{n} \left (\frac{R_{*}}{R}\right
)^{(d-2)(n-n_{3})/(n-1)}\ (\alpha>\xi_1), \label{box6}
\end{eqnarray}
which is equivalent to the mass-radius relation (\ref{up12}). Using
Eq. (\ref{box5}), it can be expressed in terms of $\alpha$, giving the
relation between the mass and the central density (at fixed
temperature) for complete polytropes. Finally, the intermediate case
is $R_*=R$, i.e. $\alpha=\xi_{1}$, at which the density profile
vanishes precisely at the box radius. In that case, we have
\begin{eqnarray}
\eta=n^{n/(n-1)}\omega_{n}\quad (\alpha=\xi_1).
\label{box7}
\end{eqnarray}
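The series of equilibria $\eta(\alpha)$ can be traced numerically. The sketch below (illustrative Python; the step size and sample points are ad hoc choices) integrates the Lane-Emden equation for the critical index $n=n_3=3$ in $d=3$ and samples Eq. (\ref{box4}) along the incomplete branch; the mass increases monotonically with the central density and reaches $\eta_c=n_3^{3/2}\omega_{n_3}\approx 10.487$ at $\alpha=\xi_1$.

```python
def eta_branch(n, d, alphas, h=1e-4):
    # eta(alpha) = -n^{n/(n-1)} alpha^{(n+1)/(n-1)} theta'(alpha), alpha < xi_1
    p, q = n / (n - 1.0), (n + 1.0) / (n - 1.0)
    xi, th, dth = 1e-3, 1.0 - 1e-6 / (2.0 * d), -1e-3 / d
    todo, eta = sorted(alphas), {}

    def f(x, t, dt):
        return dt, -max(t, 0.0)**n - (d - 1.0) / x * dt

    while th > 0.0:
        k1 = f(xi, th, dth)
        k2 = f(xi + h/2, th + h/2 * k1[0], dth + h/2 * k1[1])
        k3 = f(xi + h/2, th + h/2 * k2[0], dth + h/2 * k2[1])
        k4 = f(xi + h, th + h * k3[0], dth + h * k3[1])
        th += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dth += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
        while todo and xi >= todo[0]:
            eta[todo.pop(0)] = -(n**p) * xi**q * dth
    return eta, xi          # xi ~ xi_1 on exit

eta, xi1 = eta_branch(3, 3, [1.0, 3.0, 6.0])
eta_c = 3**1.5 * 2.01824    # n_3^{n_3/(n_3-1)} omega_{n_3} in d = 3
```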
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{alphaeD3.eps}
} \caption[]{Series of equilibria for box-confined polytropes with
different index (the figure corresponds to $d=3$). The full lines
($\alpha<\xi_{1}$) correspond to incomplete polytropes whose profile is
arrested by the box and the dashed lines ($\alpha>\xi_{1}$) correspond
to complete polytropes that are self-confined.}
\label{alphaeD3}
\end{figure}
The relation $\eta(\alpha)$ defines the series of equilibria
containing incomplete (for $\alpha<\xi_{1}$) and complete (for
$\alpha>\xi_{1}$) polytropes. It gives the mass $M$ as a function of
the central density (for a fixed temperature $\Theta$ and box radius
$R$) or the temperature $\Theta$ as a function of the density contrast
(for a fixed mass $M$ and radius $R$). Different examples of curves
$\eta(\alpha)$ are represented in Fig. \ref{alphaeD3} for various
indices in $d=3$:
$\bullet$ For $n<n_{3}$, the series of equilibria $\eta(\alpha)$ is
monotonic. Since polytropic spheres are stable in the absence of gravity
(corresponding to $\alpha\rightarrow 0$) and since there is no turning
point, the Poincar\'e argument implies that all the polytropes are
stable. It can be shown furthermore that they are fully stable (global
minima of free energy at fixed mass) so that the GSP system will tend
to a steady state for $t\rightarrow +\infty$. For
$\eta<\eta_{1}=\eta(\xi_{1})=n^{n/(n-1)}\omega_{n}$, the GSP system tends to
an incomplete polytrope confined by the box. For $\eta>\eta_{1}$, the
GSP system tends to a complete polytrope with radius $R_{*}<R$. This has been
illustrated numerically in Fig. 21 of
\cite{lang} for $n=3/2$ in $d=3$. This index corresponds to a
classical white dwarf star in astrophysics. If we remove the box, the
GSP system always tends to the complete polytrope.
$\bullet$ For $n>n_{3}$, the series of equilibria $\eta(\alpha)$
presents a turning point at $\eta_{c}=\eta(\alpha_{1})$. According to
the Poincar\'e turning point argument, configurations with
$\alpha>\alpha_{1}$ are unstable (saddle points of free energy at
fixed mass). This concerns in particular the case of complete
polytropes for $n_3<n<n_5$ (corresponding to $\alpha=\xi_{1}$), the
Schuster polytrope $n=n_{5}$ and the singular polytropic spheres for
$n\ge n_{5}$ (corresponding to $\alpha=+\infty$). Configurations with
$\alpha<\alpha_{1}$ are metastable (local minima of free energy at
fixed mass) and they exist only for $\eta\le\eta_c$. There is no
global minimum of free energy for $n>n_{3}$. For $\eta\le \eta_{c}$,
depending on the form of the initial density profile, the GSP system
can either relax towards an incomplete polytrope confined by the box
(metastable) or collapse. For $\eta>\eta_{c}$, the GSP system undergoes
gravitational collapse. This self-similar collapse has been studied in
detail in \cite{lang}. It is very similar to the self-similar collapse
of isothermal systems in $d>2$ corresponding to $n\rightarrow
+\infty$. If we remove the box, the GSP system can either collapse or
evaporate depending on the initial condition (this will be illustrated
numerically in Sec. \ref{sec_evaporation}).
$\bullet$ The case $n=n_{3}$ is critical and will be studied in detail in this
paper. For the critical index $n=n_{3}$, the control parameter is
independent of the box radius and can be written
\begin{eqnarray}
\eta= M\left ( \frac{G}{\Theta}\right )^{n_3/(n_3-1)}.
\label{box8}
\end{eqnarray}
In terms of the critical mass (\ref{up15}) or critical temperature
(\ref{up16}), we have
\begin{equation}
\eta=n_{3}^{n_3/(n_3-1)}\omega_{n_{3}}\frac{M}{M_{c}}=n_{3}^{n_3/(n_3-1)}\omega_{n_{3}}
\left (\frac{\Theta_{c}}{\Theta}\right )^{n_{3}/(n_{3}-1)}.
\label{box9}
\end{equation}
For incomplete polytropes with $\alpha<\xi_{1}$, the relation
$\eta(\alpha)$ between the central density, the mass and the
temperature is given by Eq. (\ref{box4}). Their density profile is
given by Eq. (\ref{up17}) where $r_0$ is determined by
$(\Theta_c/\Theta)^{d/2}=M/M_c=-(1/\omega_{n_3})(R/r_0)^{d-1}\theta'(R/r_0)$,
equivalent to relation (\ref{box4}), so the central density is now
determined by the mass $M$ or the temperature $\Theta$. Complete
polytropes with $\alpha\ge
\xi_{1}$ exist for a unique value of the control parameter
\begin{eqnarray}
\eta_{c}=n_{3}^{n_3/(n_3-1)}\omega_{n_{3}}.
\label{box10}
\end{eqnarray}
This corresponds to the critical mass $M=M_c$ or critical temperature
$\Theta=\Theta_c$. Equilibrium states exist only for $\eta\le
\eta_{c}$, i.e. $M\le M_c$ or $\Theta\ge \Theta_c$. For $\eta<\eta_{c}$,
they are fully stable (global minima of free energy at fixed mass). In
that case, the GSP system relaxes towards an incomplete polytrope
confined by the box. For $\eta=\eta_{c}$, i.e. $M=M_c$ or
$\Theta=\Theta_c$, we have an infinite family of steady states
parameterized by their central density (i.e. by $\alpha\ge\xi_1$) or equivalently
by their radius $R_{*}\le R$. They are marginally stable
($\delta^{2}F=0$). For $\eta>\eta_{c}$, i.e. $M>M_c$ or
$\Theta<\Theta_c$, the GSP system undergoes gravitational
collapse. The collapse dynamics is expected to be similar to the
critical collapse of isothermal systems with $n\rightarrow +\infty$ in
$d=2$ (see below). If we remove the box, the solution of the GSP
system evaporates for $\eta<\eta_c$, i.e. $M<M_{c}$ or
$\Theta>\Theta_c$ and collapses for $\eta>\eta_c$, i.e. for $M>M_{c}$
or $\Theta<\Theta_c$. These different regimes will be studied in
detail in Secs. \ref{sec_collapse} and
\ref{sec_evaporation}.
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{alphaeta3.eps}
} \caption[]{Mass as a function of the central density for
box-confined self-gravitating polytropic spheres with critical index
$n=n_{3}=3$ in $d=3$. Incomplete polytropes with $\rho(R)>0$ are
represented by a solid line and complete polytropes with $R_{*}\le R$
are represented by a dashed line. For $\rho_0\rightarrow +\infty$, the
density profile tends to a Dirac peak. Equilibrium states exist only
for $M\le M_c$. For $M>M_c$ the system undergoes gravitational
collapse. The curve represents $\eta(\alpha)/\lbrack
n_3^{n_3/(n_3-1)}\omega_{n_3}\rbrack$ so it also gives the inverse
temperature $(\Theta_c/\Theta)^{n_3/(n_3-1)}$ as a function of the
density contrast ${\cal R}(\alpha)$ for a fixed mass. }
\label{alphaeta3}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{box.eps}
} \caption[]{Density profiles of complete and incomplete polytropes
for the critical index $n_{3}=3$ in $d=3$. We have considered three
values of the central density
$\rho_{0}=(M_c/S_{d}R^d\omega_{n_3})\alpha^d$ corresponding to
$\alpha=3<\xi_{1}$ (incomplete polytrope: $R_{*}>R$, $M<M_{c}$),
$\alpha=\xi_{1}=6.89685...$ (limit polytrope: $R_{*}=R$, $M=M_{c}$),
and $\alpha=20>\xi_{1}$ (complete polytrope: $R_{*}<R$, $M=M_{c}$).}
\label{box}
\end{figure}
The mass-central density relation (for a fixed temperature) of
box-confined self-gravitating polytropic spheres with critical index
$n=n_{3}$ is plotted in Fig. \ref{alphaeta3} and the corresponding
density profiles (illustrating the notion of complete and incomplete
polytropes) are plotted in Fig. \ref{box}. We note the striking
analogy with the mass-central density relation of white dwarf stars in
Fig. \ref{massedensiteD3}. Indeed, ultra-relativistic white dwarf
stars are equivalent to polytropes with critical index $n=n_{3}=3$ in
$d=3$. In this context, the critical mass $M_c$ corresponds to the
Chandrasekhar limit. We emphasize, however, that we are considering
here pure critical polytropes enclosed within a box while in
Sec. \ref{sec_wdp} we considered self-confined {\it partially}
relativistic white dwarf stars for which a box is not needed. It is
only when $M\rightarrow M_{Chandra}$ (ultra-relativistic limit) that
they become equivalent to pure polytropes. Furthermore, at
$M=M_{Chandra}$ for white dwarf stars, the only steady state is a
Dirac peak while at $M=M_c$ for pure critical polytropes, we have an
infinite family of steady states with different central densities (the
same difference holds between critical polytropes $n=n_{3}$ in $d>2$
and isothermal spheres $n=n_3=+\infty$ in $d=2$; compare Figs.
\ref{alphaeta3} and \ref{mrho}). Finally, in Fig. \ref{etaalpha},
we plot the mass as
a function of the central density for different dimensions of space
$d$. This figure illustrates in particular the connection between the
critical mass in $d=3$ reached for a finite value of the central
density and the critical mass in $d=2$ reached for an infinite value
of the central density.
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{etaalpha.eps}
} \caption[]{Mass as a function of the central density for critical
polytropes $n=n_3$ for different
dimensions of space. We have plotted
$\eta=n_{3}^{d/2}\omega_{n_{3}}M/M_c$ as a function of
$\alpha=(\rho_{0}S_{d}R^{d}\omega_{n_3}/M_c)^{1/d}$. The maximum mass
is reached at the bullet corresponding to $\alpha=\xi_{1}$,
$\eta=\eta_{c}=n_{3}^{d/2}\omega_{n_{3}}$.}
\label{etaalpha}
\end{figure}
\section{The critical index from dimensional analysis}
\label{sec_dim}
It is instructive to understand the origin of the critical index
$\gamma_{4/3}=2(d-1)/d$ or $n_{3}=d/(d-2)$ from simple dimensional
analysis. Here we consider unconfined systems in $d$ dimensions with
arbitrary value of $\gamma$. The polytropic Smoluchowski-Poisson
system can be written
\begin{equation}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left \lbrack
\frac{1}{\xi}\left (K\gamma\rho^{\gamma-1}\nabla
\rho+\rho\nabla\Phi\right )\right\rbrack \equiv -\nabla\cdot {\bf
J}, \label{d1}
\end{equation}
\begin{equation}
\Delta\Phi=S_{d}G\rho.
\label{d2}
\end{equation}
The current ${\bf J}={\bf J}_{d}+{\bf J}_{g}$ appearing in the
Smoluchowski equation is the sum of two terms: a diffusion current ${\bf
J}_{d}=-K\gamma\rho^{\gamma-1}\nabla \rho$ and a gravitational
drift ${\bf J}_{g}=-\rho\nabla\Phi$. Based on dimensional analysis, the
diffusion current can be estimated by
\begin{equation}
J_{d}\sim + K\gamma
(M/L^d)^{\gamma-1}(\rho/L)\sim + (1/L)^{d(\gamma-1)+1},
\label{d3}
\end{equation}
and the drift term by
\begin{equation}
J_{g}=-\rho
GM/L^{d-1}\sim - (1/L)^{d-1},
\label{d4}
\end{equation}
where $M$ is the mass of the system and $L$ is the characteristic size
of the system.
The system will collapse to a point if gravity overcomes
(anomalous) diffusion, i.e. $|J_{g}|\gg |J_{d}|$, when $L\rightarrow
0$. This will be the case if $d-1>d(\gamma-1)+1$,
i.e. $\gamma<\gamma_{4/3}$. Conversely, if $\gamma>\gamma_{4/3}$, the
diffusion term can stabilize the system against gravitational collapse
so that the system can be in stable equilibrium. The system will
evaporate to infinity if (anomalous) diffusion overcomes gravity,
i.e. $|J_{d}|\gg |J_{g}|$, when $L\rightarrow +\infty$. This will be
the case if $d(\gamma-1)+1<d-1$, i.e. if
$\gamma<\gamma_{4/3}$. Conversely, if $\gamma>\gamma_{4/3}$, the
gravitational attraction can prevent evaporation so that the system
can be in stable equilibrium. In conclusion, we find that the system
can be in a stable equilibrium state iff $\gamma>\gamma_{4/3}$,
i.e. $1/n>1/n_3$. In the opposite case, the system can either collapse
to a point or evaporate to infinity. By this very simple argument, we
recover the stability criterion of self-gravitating polytropic spheres
obtained by other methods (see Appendix B of \cite{wd}).
The critical case is obtained when $J_d\sim J_g$ implying
$d(\gamma-1)+1=d-1$, i.e. $\gamma=\gamma_{4/3}$ or, equivalently,
$n=n_{3}$. In that case, the stability of the system will depend on
its mass. The system will collapse to a point if gravity overcomes
diffusion, i.e. $|J_{g}|\gg |J_{d}|$, when $L\rightarrow 0$. This will
be the case if $M>M_{c}$, where $M_{c}\sim (K/G)^{d/2}$ is a critical
mass. The system will evaporate to infinity (in an unbounded domain)
if (anomalous) diffusion overcomes gravity, i.e. $|J_{d}|\gg |J_{g}|$,
when $L\rightarrow +\infty$. This will be the case if $M<M_{c}$.
Therefore, at the critical index $\gamma=\gamma_{4/3}$ i.e. $n=n_{3}$,
the system collapses if $M>M_c$ and evaporates if $M<M_c$. Again, this
is fully consistent with the results obtained in Appendix B of
\cite{wd}.
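The exponent balance used above can be captured in a few lines of exact rational arithmetic (an illustrative sketch, not from the original text):

```python
from fractions import Fraction

def critical_indices(d):
    # balance |J_d| ~ L^{-(d(gamma-1)+1)} against |J_g| ~ L^{-(d-1)}:
    # d(gamma - 1) + 1 = d - 1  =>  gamma_{4/3} = 2(d-1)/d and n_3 = d/(d-2)
    gamma = Fraction(2 * (d - 1), d)
    n3 = 1 / (gamma - 1)     # from gamma = 1 + 1/n
    return gamma, n3

print(critical_indices(3))   # gamma = 4/3, n_3 = 3
```

In $d=3$ this recovers the familiar Chandrasekhar values $\gamma=4/3$ and $n=3$.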
\section{Collapse dynamics}
\label{sec_collapse}
For $0<n<n_3$ in a space with $d\ge 2$ dimensions, the GSP system
tends to an equilibrium state. For $n\ge n_{3}$, it can undergo
gravitational collapse. For $n>n_{3}$ with $d>2$, the collapse is
self-similar as studied in \cite{lang} (the case of negative indices
$n<0$ is studied in \cite{logotropes}). In the present section, we consider
the collapse dynamics of self-gravitating Langevin particles
associated with the {\it critical} index $n_3=d/(d-2)$ in $d\ge 2$
dimensions, which presents nontrivial features.
\subsection{Generalities: self-similar analysis}
\label{sec_g}
From now on, we adopt normalized variables such that
$G=M=R=\xi=1$. The unique control parameter is the temperature
$\Theta$. For spherically symmetric solutions, using the Gauss theorem, the GSP
system can be written in the form of an integrodifferential equation
\begin{eqnarray}
\frac{\partial\rho}{\partial t}=\frac{1}{r^{d-1}}
\frac{\partial}{\partial r}\biggl\lbrace r^{d-1}\biggl\lbrack
(S_{d}\rho)^{1/n}\Theta \frac{\partial\rho}{\partial r}\nonumber\\
+\frac{\rho}{r^{d-1}}\int_{0}^{r}\rho(r')S_{d}r'^{d-1}\,dr'
\biggr\rbrack\biggr\rbrace. \label{g1}
\end{eqnarray}
Introducing
the mass within a sphere of radius $r$
\begin{equation}
M(r,t)=\int_{0}^{r}\rho(r')S_{d}r'^{d-1}\,dr', \label{g2}
\end{equation}
the GSP system can be formulated through a unique
non-linear dynamical equation for $M(r,t)$:
\begin{eqnarray}
\frac{\partial M}{\partial t}=&&\Theta \biggl ({1\over
r^{d-1}}{\partial M\over\partial r}\biggr )^{1/n}\biggl\lbrack
{\partial^{2}M\over\partial r^{2}}-{d-1\over r}{\partial
M\over\partial r}\biggr\rbrack\nonumber
\\&& +{M\over r^{d-1}}{\partial M\over\partial r}. \label{g3}
\end{eqnarray}
If the system of total mass $M=1$ is confined within a box of radius
$R=1$, the appropriate boundary conditions are
\begin{equation}
M(0,t)=0,\qquad M(1,t)=1. \label{g4}
\end{equation}
If the system is not confined, the second condition should be replaced by
\begin{equation}
M(\infty,t)=1. \label{g5}
\end{equation}
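Equation (\ref{g3}) is straightforward to integrate with an explicit scheme. The following sketch (illustrative Python; the grid, time step and initial profile are ad hoc choices, and no special care is taken near $r=0$ beyond the boundary condition) advances $M(r,t)$ in the box-confined case:

```python
def step_gsp(M, h, dt, Theta, n, d):
    # one explicit Euler step of Eq. (g3) on the grid r_i = i h,
    # holding M(0) = 0 and M(1) = 1 fixed (box of radius R = 1)
    new = M[:]
    for i in range(1, len(M) - 1):
        r = i * h
        Mr = (M[i + 1] - M[i - 1]) / (2.0 * h)
        Mrr = (M[i + 1] - 2.0 * M[i] + M[i - 1]) / h**2
        diff = Theta * max(Mr / r**(d - 1), 0.0)**(1.0 / n) \
            * (Mrr - (d - 1) / r * Mr)
        drift = M[i] * Mr / r**(d - 1)
        new[i] = M[i] + dt * (diff + drift)
    return new

d, n, Theta = 3, 3.0, 0.25          # Theta > Theta_c ~ 0.209: stable case
N, dt = 50, 1e-4
h = 1.0 / N
M = [(i * h)**d for i in range(N + 1)]   # initially uniform density
for _ in range(100):
    M = step_gsp(M, h, dt, Theta, n, d)
```

With $\Theta>\Theta_c$ the profile relaxes towards the incomplete polytrope; for $\Theta<\Theta_c$ the same scheme (with a decreasing time step) exhibits the collapse studied below.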
It is also convenient to introduce the function $s(r,t)=M(r,t)/r^{d}$
which has the same dimension as the density and which satisfies
\begin{equation}
{\partial s\over\partial t}=\Theta\biggl (r{\partial
s\over\partial r} +d s\biggr )^{1/n}\biggl
({\partial^{2}s\over\partial r^{2}}+{d+1\over r}{\partial
s\over\partial r}\biggr )+\biggl (r{\partial s\over\partial
r}+ds\biggr )s. \label{g6}
\end{equation}
For $n\rightarrow +\infty$, these equations reduce to those
studied in Refs. \cite{crs,sc} in the isothermal case.
When the system collapses, it is natural to look for self-similar solutions
of the form
\begin{equation}
\rho(r,t)=\rho_{0}(t)f\biggl ({r\over r_{0}(t)}\biggr ), \qquad
r_{0}=\biggl ({\Theta\over \rho_{0}^{1-1/n}}\biggr )^{1/2}.
\label{g7}
\end{equation}
The relation between the core radius $r_0$ and $\rho_0$
(proportional to the central density \footnote{The reader should be
aware that, in the sections dealing with the dynamics, $\rho_0$ and
$r_0$ do not exactly coincide with the quantities of the same name
introduced in the sections dealing with the statics (they usually
differ by a factor of proportionality).}) is obtained by requiring
that the diffusive term and the drift term in Eq. (\ref{g1}) scale
in the same way. This relation can be rewritten $\rho_0 r_0^{\alpha}\sim 1$
with
\begin{equation}
\alpha=\frac{2n}{n-1}.
\label{g8}
\end{equation}
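The exponent $\alpha$ follows from eliminating $\rho_0$ between the definition of $r_0$ and the condition $\rho_0 r_0^{\alpha}\sim 1$; a short rational-arithmetic check (illustrative, not from the original text):

```python
from fractions import Fraction

def collapse_alpha(n):
    # r_0^2 = Theta rho_0^{1/n - 1}  =>  rho_0^{1 - 1/n} r_0^2 = Theta,
    # and raising to the power n/(n-1) gives rho_0 r_0^{2n/(n-1)} = const
    return Fraction(2 * n, n - 1)

print(collapse_alpha(3))   # 3, i.e. alpha = d for n = n_3 = 3 in d = 3
```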
In terms of the
mass profile, we have
\begin{equation}
M(r,t)=M_{0}(t)g\biggl ({r\over r_{0}(t)}\biggr ), \qquad {\rm
with}\qquad M_{0}(t)=\rho_{0}r_{0}^{d}, \label{g9}
\end{equation}
and
\begin{equation}
g(x)=\int_{0}^{x}f(x')S_{d}x'^{d-1}\,dx'. \label{g10}
\end{equation}
In terms of the function $s$, we have
\begin{equation}
s(r,t)=\rho_{0}(t)S\biggl ({r\over r_{0}(t)}\biggr ), \qquad {\rm
with}\qquad S(x)={g(x)\over x^{d}}. \label{g11}
\end{equation}
Inserting the ansatz (\ref{g11}) in Eq. (\ref{g6}) and using
Eq. (\ref{g7}), we obtain
\begin{equation}
\frac{1}{\rho_0^2}\frac{d\rho_0}{dt}=\alpha, \label{g12}
\end{equation}
and
\begin{equation}
\alpha S+xS'=(xS'+dS)^{1/n} \biggl (S''+{d+1\over x}S'\biggr
)+(xS'+dS)S. \label{g13}
\end{equation}
Assuming that Eq. (\ref{g13}) has a solution so that the
self-similar solution exists, Eq. (\ref{g12}) is readily integrated,
giving
\begin{equation}
\rho_0(t)=\frac{1}{\alpha}(t_{coll}-t)^{-1}, \label{g14}
\end{equation}
implying a finite time singularity. On the other hand, the invariant
profile has the asymptotic behavior $f(x)\sim x^{-\alpha}$ for
$x\rightarrow +\infty$.
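One can check directly that Eq. (\ref{g14}) solves Eq. (\ref{g12}); a small numerical verification (an illustrative sketch, with arbitrary values of $\alpha$ and $t_{coll}$):

```python
alpha, t_coll = 3.0, 1.0        # e.g. n = n_3 in d = 3 gives alpha = d = 3
rho0 = lambda t: 1.0 / (alpha * (t_coll - t))    # Eq. (g14)

# central finite difference of d(rho0)/dt versus alpha * rho0^2, Eq. (g12)
t, eps = 0.5, 1e-6
lhs = (rho0(t + eps) - rho0(t - eps)) / (2.0 * eps)
rhs = alpha * rho0(t)**2
print(lhs, rhs)   # both ~ 4/3
```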
\subsection{The two-dimensional isothermal case}
\label{sec_td}
In $d=2$ dimensions, the critical index is $n_{3}=+\infty$
corresponding to the isothermal case studied in \cite{sc} (in that case
$\Theta=T$). Since the study of the critical dynamics is rather
complicated, it can be useful to summarize our results, with some
complements and amplifications, before treating the case $d>2$.
In $d=2$, there exists a critical temperature $T_{c}=1/4$. If the
system is enclosed within a box and $T>T_c$, it relaxes to an
equilibrium distribution confined by the box. If the system is not
confined and $T>T_c$, an evaporation process develops which has been
studied in \cite{virial1}. For $T=T_c$, the system undergoes
gravitational collapse. The evolution is self-similar and leads to a
Dirac peak containing the whole mass $M=1$ for $t\rightarrow
+\infty$. In a bounded domain, the central density grows exponentially
rapidly with time \cite{sc} and in an unbounded domain, the central
density increases logarithmically with time \cite{virial1} (and a tiny
fraction of mass is ejected at large distances to satisfy the moment
of inertia constraint at $T=T_{c}$). Note that the Dirac peak is also
the stationary solution of the SP system at $T=T_c$.
For $T<T_c$, and irrespective of the presence of a confining box,
there is no steady state and the system collapses. Looking for an exact
self-similar solution of the form (\ref{g7}) we obtain $\rho_0 r_0^2=T$,
$\alpha=2=d$ and a scaling equation
\begin{equation}
\biggl (S''+{3\over x}S'\biggr
)+(xS'+2S)(S-1)=0. \label{td1}
\end{equation}
However, this equation does not have any physical solution for large
$x$. In fact, this could have been anticipated from the fact that the
scaling functions $s(x)$ and $f(x)$ should decay as $x^{-2}=x^{-d}$
for large $x$. Then, the total mass in the profile is of order $\rho_0
r_0^2\int^{1/r_0}x^{-2} x\,dx\sim \ln(1/r_0)$, which unphysically
diverges when $r_0$ goes to zero. Said differently, the scaling
profile at $t=t_{coll}$ is $\rho\propto r^{-2}$ so that the mass
$M=\int \rho(r)2\pi r dr$ diverges logarithmically for $r\rightarrow
0$. This logarithmic divergence is symptomatic of the formation of a
Dirac peak resulting from a pseudo self-similar collapse. In the
case $d=2$, this situation can be analyzed analytically in great
detail.
To that purpose, we note that the profile which cancels out the
r.h.s. of the SP system is exactly given by
\begin{equation}
M_1(r,t)=4T\frac{(r/r_0(t))^2}{1+(r/r_0(t))^{2}},
\label{td2}
\end{equation}
\begin{equation}
\rho_1(r,t)=\frac{4\rho_0(t)}{\pi}\frac{1}{(1+(r/r_0(t))^{2})^2},
\label{td3}
\end{equation}
with
\begin{equation}
\rho_{0}(t)r_{0}(t)^{2}=T.
\label{td4}
\end{equation}
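One can verify that the pair (\ref{td2})-(\ref{td3}) is a consistent mass/density pair, i.e. that $\partial M_1/\partial r=2\pi r\rho_1$ under the constraint (\ref{td4}); a short numerical check (illustrative, with arbitrary values of $T$ and $r_0$):

```python
import math

T, r0 = 0.125, 0.3         # any T < T_c = 1/4 and instantaneous core radius
rho0 = T / r0**2           # constraint (td4)

M1 = lambda r: 4.0 * T * (r / r0)**2 / (1.0 + (r / r0)**2)
rho1 = lambda r: (4.0 * rho0 / math.pi) / (1.0 + (r / r0)**2)**2

# dM1/dr must equal the mass of a thin annulus, 2 pi r rho1(r), in d = 2
r, eps = 0.7, 1e-6
dM = (M1(r + eps) - M1(r - eps)) / (2.0 * eps)
print(dM, 2.0 * math.pi * r * rho1(r))
# total mass M1(r -> infinity) = 4T = T/T_c, the mass of the Dirac peak
```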
If we consider time independent solutions ($\partial\rho/\partial
t=0$) and impose the conservation of mass, we recover the steady
solutions which exist for $T\ge T_{c}$ in a bounded domain (in that case
$r_0=(T/T_c-1)^{1/2}$) and for $T=T_c$ only in an infinite domain
(in that case we get a family of distributions parameterized by
$r_0$). However, we now consider the case $T<T_c$ and
seek the temporal evolution of $\rho_{0}(t)$ and $r_{0}(t)$. We argue
that the solution (\ref{td3}) gives the leading contribution of the density
profile in the core. This profile contains a mass $T/T_c$. We expect
that the collapse will lead to $\rho_{0}(t)\rightarrow +\infty$ and
$r_0(t)\rightarrow 0$ for $t\rightarrow t_{coll}$ (finite time
singularity). Then, we see that the profile (\ref{td3}) leads to a Dirac peak
with mass $T/T_c$, i.e.
\begin{equation}
\rho_1({\bf r},t)\rightarrow \frac{T}{T_{c}}\delta({\bf r}).
\label{td5}
\end{equation}
The excess of mass will be contained in the profile extending in the
halo. Therefore, we look for solutions of the form
\begin{eqnarray}
\label{td6}
\rho(r,t)&=&\rho_1(r,t)+\rho_2(r,t),\nonumber\\
&=&\rho_0(t) f_1(r/r_0(t))+\rho_0(t)^{\alpha(t)/2}f_2(r/r_0(t)).\nonumber\\
\end{eqnarray}
The first component has a scaling behavior and dominates in the center
of the collapse region. It leads to a Dirac peak containing a fraction
$M_c=T/T_{c}$ of the total mass $M=1$ at $t=t_{coll}$. The second component
obeys a pseudo-scaling and $f_2(x)\sim x^{-\alpha(t)}$ for large $x$,
with an effective scaling exponent $\alpha(t)$ which very slowly
approaches the value $2$ (expected from the naive self-similar analysis) when
$t\rightarrow t_{coll}$. Thus, at $t=t_{coll}$, we get
\begin{equation}
\label{td7}\rho({\bf r},t)\rightarrow M_{c}\delta({\bf r})+\chi({\bf r},t),
\end{equation}
where $\chi({\bf r},t)$ is singular at $r=0$, behaving roughly as $r^{-2}$.
In Fig. \ref{D2SEUL}, we illustrate this decomposition of the
density profile into two components. It is shown in \cite{sc} that
the central density satisfies an equation of the form
\begin{equation}
\frac{1}{\rho_0}\frac{d\rho_0}{dt}\propto \rho_{0}^{\alpha(t)/2}, \label{td8}
\end{equation}
instead of Eq. (\ref{g12}), and that the effective scaling exponent
$\alpha(t)$ depends on the central density as
\begin{equation}
\epsilon(t) \equiv 1-\frac{\alpha(t)}{2}\sim \sqrt{\frac{\ln \ln
\rho_{0}(t)}{2\ln\rho_{0}(t)}}. \label{td9}
\end{equation}
This yields $\rho_{0}\sim (t_{coll}-t)^{-1+\epsilon(t)}$ or equivalently
\begin{equation}
\ln(\rho_{0}\tau)\sim -2\ln(r_{0}/\sqrt{\tau})\sim
\sqrt{\frac{|\ln\tau| \ln|\ln\tau|}{2}}, \label{td10}
\end{equation}
where we have noted $\tau=t_{coll}-t$.
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{scaling_indivi_D2SEUL.eps} }
\caption[]{For $d=2$, $n=n_{3}=+\infty$, and deep into the collapse
regime for $T=T_c/2=1/8$, we plot the density profile (full line),
emphasizing its two components: the core is dominated by the
invariant scaling profile (dotted line) given analytically by
Eq.~(\ref{td3}) containing a mass $M_c=T/T_c$, and the halo obeys
pseudo-scaling (dashed line) with an exponent $\alpha(t)$ tending
slowly to $d=2$ as $\rho_{0}\rightarrow +\infty$.} \label{D2SEUL}
\end{figure}
Prior to our work \cite{sc}, and unknown to us at that time, Herrero
\& Velazquez
\cite{herrerobio} had investigated the same problem in the context of
chemotaxis using a different method based on matched asymptotics. For
$T<T_c$ (as far as we know, they did not consider the case $T=T_c$
treated in \cite{sc}), they showed that the system forms a Dirac peak
of mass $M_c=T/T_c$ (within our notations) surrounded by a halo containing
the excess of mass. From a qualitative point of view, the two scenarios are
consistent. From a quantitative point of view, however, the scaling laws
\begin{eqnarray}
\ln(\rho_{0}\tau)\sim -2\ln(r_{0}/\sqrt{\tau})\sim \sqrt{2|\ln\tau|}\nonumber\\
+\frac{1}{2}\left (1-\frac{1}{\sqrt{|\ln\tau|}}\right )\ln|\ln\tau| \label{td11}
\end{eqnarray}
obtained by Herrero \& Velazquez (HV) are slightly different from
ours (SC). They lead to an effective exponent given by
\begin{eqnarray}
1-\frac{\alpha(t)}{2}\sim \sqrt{\frac{2}{\ln\rho_0}}+\frac{1}{2}\left
(1-\frac{1}{\sqrt{\ln\rho_0}}\right )\frac{\ln
\ln\rho_0}{\ln\rho_0},\label{td12}
\end{eqnarray}
instead of Eq. (\ref{td9}). For the densities accessible numerically,
one gets $\alpha_{SC}(\rho_0=10^3)=1.252...$ while
$\alpha_{HV}(\rho_0=10^3)=0.751...$ and
$\alpha_{SC}(\rho_0=10^5)=1.348...$ while
$\alpha_{HV}(\rho_0=10^5)=1.017...$. Numerical simulations performed
in \cite{sc} show a good agreement with the predicted values of
$\alpha_{SC}$ for the densities accessible. However, in view of the
complexity of the problem, and of the logarithmic (and sub-logarithmic!)
corrections, it is difficult to understand the origin of the (slight)
discrepancy between the two approaches. In any case, they both show
that the collapse is not exactly self-similar but that the apparent
scaling exponent $\alpha(t)$ is a very slowly varying function of the
central density.
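For reference, both effective-exponent predictions are easy to evaluate (a direct transcription of Eqs. (\ref{td9}) and (\ref{td12}) into illustrative Python):

```python
import math

def alpha_sc(rho0):
    # Eq. (td9): 1 - alpha/2 = sqrt( ln ln rho0 / (2 ln rho0) )
    L = math.log(rho0)
    return 2.0 * (1.0 - math.sqrt(math.log(L) / (2.0 * L)))

def alpha_hv(rho0):
    # Eq. (td12): the Herrero-Velazquez effective exponent
    L = math.log(rho0)
    eps = math.sqrt(2.0 / L) \
        + 0.5 * (1.0 - 1.0 / math.sqrt(L)) * math.log(L) / L
    return 2.0 * (1.0 - eps)

print(alpha_sc(1e3), alpha_hv(1e3))   # ~ 1.252 and ~ 0.751
```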
\subsection{The critical polytropic case with $d>2$}
\label{sec_cp}
We now consider the critical index $n=n_{3}=d/(d-2)$ with $d>2$. There
exists a critical temperature $\Theta_{c}=1/\lbrack
n_3\omega_{n_3}^{(n_3-1)/n_3}\rbrack$ (in $d=3$, we have
$\Theta_{c}=0.20872...$). If the system is
confined within a box and $\Theta>\Theta_c$, it relaxes to an
incomplete polytrope. This is illustrated in Fig.~\ref{coll1}. If the
system is not confined and $\Theta>\Theta_c$, an evaporation process
develops which will be studied in the next section. In the confined
case, when the generalized temperature $\Theta$ reaches the value
$\Theta_c$, the equilibrium density profile vanishes exactly at
$R=1$. For $\Theta<\Theta_c$, and irrespective of the presence of a
confining box, the system collapses.
\begin{figure}[htbp]
\centerline{ \includegraphics[width=8cm,angle=0]{densite_125Tc.eps} }
\caption[]{In $d=3$ and for $n_3=3$ and $\Theta=0.25>\Theta_c$ in a finite box
($R=1$), we show the density at successive times, illustrating the
convergence to the equilibrium density profile (dashed line). The
insert illustrates the exponentially fast saturation of the central
density for $\Theta>\Theta_c$, whereas a slower algebraic saturation is expected right at $\Theta=\Theta_c$.}
\label{coll1}
\end{figure}
We can naively look for self-similar solutions of the form described
in Sec. \ref{sec_g}. For $n=n_3$, we find $\alpha=d$,
$\rho_{0}r_{0}^d=\Theta^{d/2}$ and the scaling equation
\begin{equation}
S''+{d+1\over x}S'+(xS'+dS)^{2/d}(S-1)=0. \label{cp1}
\end{equation}
As in the case ($d=2$, $n_3=\infty$), this equation
does not have any physical solution for large $x$. Again, this could
have been anticipated from the fact that the scaling functions $s(x)$
and $f(x)$ should decay as $x^{-2n_{3}/(n_{3}-1)}=x^{-d}$, for large
$x$. Then, the total mass in the profile is of order
\begin{equation}
\rho_0
r_0^d\int^{1/r_0}x^{-d} {\times} x^{d-1}\,dx\sim \ln(1/r_0)
\label{cp2}
\end{equation}
which unphysically diverges when $r_0$ goes to zero. Said
differently, the scaling profile at $t=t_{coll}$ is $\rho\propto
r^{-d}$ so that the mass $M=\int \rho(r)S_{d} r^{d-1} dr$ diverges
logarithmically for $r\rightarrow 0$ \footnote{More generally, for a
polytrope of index $n$ we have $\rho\propto r^{-\alpha}$ at
$t=t_{coll}$, with $\alpha=2n/(n-1)$ so that the self-similar
solution exists provided that $\alpha-d+1<1$ leading to $1/n<1/n_3$
(i.e. $n>n_3$ for $d>2$). This is precisely the range of indices for
which the complete polytropes are dynamically unstable
\cite{lang}.}.
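The inequality in this footnote is elementary but easy to check mechanically; a small sketch in exact rational arithmetic (the helper name is ours, not from the paper):

```python
from fractions import Fraction as F

def alpha(n):
    """Profile exponent at t = t_coll: rho ~ r^(-alpha), alpha = 2n/(n-1)."""
    return F(2 * n, n - 1)

d = 3                          # any d > 2 works the same way
n3 = F(d, d - 2)               # critical index: n3 = 3 in d = 3
assert 4 > n3 > 2 and n3 == 3

# Mass convergence at r -> 0 requires alpha - d + 1 < 1, i.e. alpha < d.
assert alpha(4) < d and alpha(10) < d   # n > n3: self-similar collapse exists
assert alpha(2) > d                     # n < n3: mass would diverge
assert alpha(3) == d                    # n = n3: marginal, logarithmic divergence
```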
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{scaling_indivi_075TcSEUL.eps} }
\caption[]{For $d=3$, $n=n_{3}=3$, and deep into the collapse regime
for $\Theta=0.75\Theta_c$, we plot the density profile (full line),
emphasizing its two components: the core is dominated by the bounded
invariant scaling profile (complete polytrope of index $n_3$)
containing a mass $M_c=(\Theta/\Theta_c)^{3/2}$ (dotted line), and
the halo obeys pseudo-scaling (dashed line) with an exponent
$\alpha(t)$ tending slowly to $d=3$ as $\rho_{0}\rightarrow
+\infty$.} \label{coll2}
\end{figure}
Hence, for $n=n_3$ in $d>2$, we expect a situation similar to what
was obtained for ($d=2$, $n_3=\infty$). However, the situation is
more difficult to analyze because the stationary profile is not
known analytically in the present case (this analytical profile was
the basis of our analysis in \cite{sc}). Using the results of
Sec. \ref{sec_up}, the profile which cancels out the r.h.s. of the
GSP system is given by
\begin{equation}
\rho_1(r,t)=\frac{n_3^{d/2}}{S_{d}}\rho_0(t)\theta_{3}^{n_{3}}(r/r_{0}(t)),
\label{cp3}
\end{equation}
with
\begin{equation}
\rho_{0}(t)r_{0}(t)^{d}=\Theta^{d/2}.
\label{cp4}
\end{equation}
If we consider time independent solutions $(\partial\rho/\partial
t=0)$ and impose the conservation of mass, we recover the steady
solutions that exist for $\Theta\ge \Theta_{c}$ in a bounded domain
(in that case, we have
$(\Theta_c/\Theta)^{d/2}=-(1/\omega_{n_3})(R/r_0)^{d-1}\theta'(R/r_0)$)
and for $\Theta=\Theta_c$ only in an infinite domain (in that case we
get a family of distributions parameterized by $r_0$). However, in the
present case, we consider the case $\Theta<\Theta_c$ and seek the
temporal evolution of $\rho_{0}(t)$ and $r_{0}(t)$. We argue that the
solution (\ref{cp3}) gives the leading contribution of the density profile in
the core. This profile vanishes at $R_{*}(t)=\xi_{1}r_{0}(t)$,
has a central density $(n_3^{d/2}/S_{d})\rho_0(t)$ and contains a mass
(see Sec. \ref{sec_up}):
\begin{equation}
M_c=\left( \frac{\Theta}{\Theta_c}\right)^{d/2}.
\label{cp5}
\end{equation}
We expect that collapse will lead to $\rho_{0}(t)\rightarrow
+\infty$ and $r_{0}(t)\rightarrow 0$. Then, we see that the profile
(\ref{cp3}) tends to a Dirac peak with mass $M_c$, i.e.
\begin{equation}
\rho_1({\bf r},t)\rightarrow \left( \frac{\Theta}{\Theta_c}\right)^{d/2}\delta({\bf r}).
\label{cp6}
\end{equation}
The excess of mass will be contained in the profile extending in the
halo. Therefore, we look for solutions of the form
\begin{eqnarray}
\rho(r,t)&=&\rho_1(r,t)+\rho_2(r,t),\nonumber\\
&=&\rho_0(t) f_1(r/r_0(t))+\rho_0(t)^{\alpha(t)/d}f_2(r/r_0(t)).\nonumber\\
\label{cp7}
\end{eqnarray}
The first component \footnote{Defining $\rho_c(x)$ as the equilibrium
density profile at $\Theta=\Theta_c$, then the first component can be
written $\rho_1(r,t)=\frac{M_c}{r_0^d}\rho_c(r/r_0)$.} has a
scaling behavior and dominates in the center of the collapse
region. It leads to a Dirac peak containing a fraction
$M_c=(\Theta/\Theta_{c})^{d/2}$ of the total mass at $t=t_{coll}$. The
second component obeys a pseudo-scaling and $f_2(x)\sim
x^{-\alpha(t)}$ for large $x$, with an effective scaling exponent
$\alpha(t)$ which very slowly approaches the value $d$ (expected from
the naive self-similar analysis) when $t\rightarrow t_{coll}$. At
$t=t_{coll}$, the first component $\rho_1(r,t)$ tends to a Dirac peak
at the origin containing the mass $M_c$, whereas the second
component develops a singularity at $r=0$. Thus, we have
\begin{equation}
\label{cp8}\rho({\bf r},t)\rightarrow M_{c}\delta({\bf r})+\chi({\bf r},t),
\end{equation}
with $\chi(r)$ behaving roughly as $r^{-d}$.
In Fig.~\ref{coll2}, we
illustrate this decomposition of the density profile into two
components.
\begin{figure}[htbp]
\centerline{ \includegraphics[width=8cm,angle=0]{densite_075Tc.eps}
} \caption[]{For $\Theta=0.75\Theta_c$ (here $d=3$, $n=n_{3}=3$), we
plot $\rho_0^{-1}(t)\frac{d\rho_0}{dt}$ (top full line) and
$\hat\rho_0(t)$ (bottom full line) as a function of $\rho_0(t)$.
Both grow with an effective exponent $\alpha/3\approx 0.93$ (dotted
lines), which slowly increases and should saturate to unity (the
dashed line has slope unity).} \label{coll3}
\end{figure}
In Fig.~\ref{coll3}, we show that perfect scaling, which would imply
$\rho_0^{-1}(t)\frac{d\rho_0}{dt}\sim \rho_0$, is not obeyed.
Instead, in the accessible density range,
$\rho_0^{-1}(t)\frac{d\rho_0}{dt}$ grows as an apparent power law
of $\rho_0$, with an exponent that increases very slowly with time but
remains less than unity. We expect to have a relation of the form
\begin{equation}
\frac{1}{\rho_0}\frac{d\rho_0}{dt}\propto \rho_{0}^{\alpha(t)/d}, \label{cp9}
\end{equation}
which is indeed confirmed by the numerics. In Fig.~\ref{coll3}, we also plot
the central density in the pseudo-scaling component
\begin{equation}
\hat\rho_0(t)=\rho_0^{\alpha(t)/d}(t),
\label{cp10}
\end{equation}
which shows that the effective exponent $\alpha(t)$ slowly converges
to $\alpha=d$.
\begin{figure}[htbp]
\centerline{ \includegraphics[width=8cm,angle=0]{scaling_075Tc.eps}
} \caption[]{For $\Theta=0.75\Theta_c$ (here $d=3$, $n=n_{3}=3$), we
plot $f_2^{(\alpha)}(x)$ (as defined in the text) as a function of
$x=r/r_0$, for different times for which the central density evolves
from $10^2$ to $10^7$. Pseudo-scaling is observed. The envelope of
the tails decays with an apparent exponent $\alpha\approx 2.8$
(right dashed line), while the small $x$ behavior is quadratic (left
dashed line).} \label{coll4}
\end{figure}
Finally in Fig.~\ref{coll4}, we display the apparent scaling
behavior of $\rho_2(r,t)=\rho_0(t)^{\alpha(t)/d}f_2(r/r_0(t))$,
associated to a value of $\alpha\approx 2.8$, fully compatible with
the value obtained in Fig.~\ref{coll3} (in $d=3$).
\section{Evaporation dynamics in unbounded space}
\label{sec_evaporation}
\subsection{The case $n>n_3$}
\label{sec_sup}
When the system is not confined to a finite box, the nature of the
dynamics crucially depends on the value of the polytropic index $n$
with respect to $n_3$. As before, we consider $d\ge 2$ and $n>0$. If
$n<n_3$, there exist equilibrium solutions (fully stable complete
polytropes) which are reached for any initial density profile. If
$n>n_3$, depending on the initial density profile and on the
temperature, the system can collapse or evaporate. If $R_0$ is the
typical extension of the initial density profile containing a mass
$M$, one can form a quantity with the dimension of $\Theta$:
\begin{equation}
\label{sup1}
\Theta_*=\frac{G M^{(n-1)/n}}{R_0^{(d-2)(n-n_3)/n}},
\end{equation}
which plays the role of an effective critical temperature. If
$\Theta\ll\Theta_*$, the system should collapse as it would do if
confined in a box of typical radius $R_0$ \cite{lang}. If
$\Theta\gg\Theta_*$, the system should evaporate in the absence of an
actual confining box. Hence, for a given initial profile, there exists
a non-universal $\Theta_*$ separating these two regimes. We present
numerical simulations for the case $n>n_3$. In Fig.~\ref{evap3}, and
for a particular initial profile, we illustrate the fact that
depending on the value of $\Theta$ with respect to a non-universal
$\Theta_*$, the system can collapse or evaporate. In the evaporation
regime and for $n>n_3$, a scaling analysis shows that gravity becomes
gradually irrelevant and that this process becomes exclusively
controlled by free (anomalous) diffusion. This fact is illustrated in
Fig.~\ref{evap4}. Indeed, when the evaporation length
$r_{0}(t)\rightarrow +\infty$, we see from Eq. (\ref{g3}) that the
gravitational term becomes negligible compared to the diffusion term:
\begin{eqnarray}
{M\over r^{d-1}}{\partial M\over\partial r}\ll \Theta \biggl ({1\over
r^{d-1}}{\partial M\over\partial r}\biggr )^{1/n}
{\partial^{2}M\over\partial r^{2}}, \label{sup2}
\end{eqnarray}
if $d>d/n+2$, i.e. $n>n_3$. Therefore, for $t\gg 1$, the GSP system
reduces to the pure anomalous diffusion equation
\begin{equation}
\frac{\partial\rho}{\partial t}\simeq
\frac{K}{r^{d-1}}\frac{\partial}{\partial r}\left
(r^{d-1}\frac{\partial \rho^{\gamma}}{\partial r}\right
),\label{sup3}
\end{equation}
with $K=S_{d}^{\gamma-1}\Theta/\gamma$. This equation has self-similar
solutions that were first discovered by Barenblatt \cite{barenblatt}
in the context of porous media. These solutions are closely related to
the form of generalized thermodynamics introduced by Tsallis \cite{tsallis}.
\begin{figure}[htbp]
\centerline{ \includegraphics[width=8cm,angle=0]{diff_coll_d3n5.eps}
} \caption[]{In $d=3$ and $n=5>n_3$, and for a given initial density
profile ($M(r)=r^3/({\rm e}^{-r^2}+r^2)^{3/2}$; fat line), we show
the collapse dynamics observed at $\Theta=0.15$ (full lines for
different times before $t_{coll}$) and the evaporation dynamics
observed at $\Theta=1$ (dashed lines for different times). For this
particular initial condition, we find $\Theta_*\approx 0.206$.}
\label{evap3}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{diff_domine_d3n5.eps} }
\caption[]{In $d=3$ and $n=5>n_3$, we present the evaporation
density data collapse at $\Theta=1$. As time proceeds, the effect of
gravity becomes negligible, and the scaling profiles converge to the
one corresponding to free diffusive evaporation (full line). This
is the Barenblatt solution whose invariant profile is a Tsallis
distribution of Eq.~(\ref{sup14}) with index $\gamma$.}
\label{evap4}
\end{figure}
Using the original idea of Plastino \& Plastino \cite{pp}, we look for
a solution of Eq. (\ref{sup3}) in the form of a Tsallis distribution
with index $\gamma$ and time dependent coefficients
\begin{equation}
\rho(r,t)=\frac{1}{Z}\rho_0(t)\left\lbrack
1-(\gamma-1)(r/r_0(t))^2\right\rbrack_{+}^{1/(\gamma-1)}.\label{sup4}
\end{equation}
For $\gamma>1$, i.e. $n>0$, we have a profile with compact support
where the density vanishes at
$r_{max}(t)=r_{0}(t)/\sqrt{\gamma-1}$. For $\gamma<1$, i.e. $n<0$, the
density decreases like $\rho\sim r^{-2/(1-\gamma)}$ and the total mass
is finite provided that $\gamma>\gamma_{1/3}\equiv (d-2)/d$,
i.e. $n<-d/2$. Requiring that the profile (\ref{sup4}) contains all the mass $M=1$,
and imposing
\begin{equation}
\rho_{0}(t)r_{0}(t)^d=1,\label{sup5}
\end{equation}
we find the normalization factor
\begin{equation}
Z\equiv \int_{0}^{+\infty}\lbrack
1-(\gamma-1)x^2\rbrack_{+}^{1/(\gamma-1)}S_{d}x^{d-1}dx.\label{sup6}
\end{equation}
Then, substituting the ansatz (\ref{sup4}) with Eq. (\ref{sup5}) in
Eq. (\ref{sup3}), we obtain
\begin{equation}
\dot \rho_0=-2dS_{d}^{\gamma-1}\Theta Z^{1-\gamma}\rho_0^{\gamma+2/d}.\label{sup7}
\end{equation}
Solving this equation with the initial condition $\rho({\bf
r},t=0)=\delta({\bf r})$, we get
\begin{equation}
\rho_0(t)=\frac{1}{\lbrack
2d(\gamma-\gamma_{1/3})S_{d}^{\gamma-1}\Theta Z^{1-\gamma}
t\rbrack^{1/(\gamma-\gamma_{1/3})}}.\label{sup8}
\end{equation}
This is valid for $\gamma>\gamma_{1/3}$, i.e. $n>0$ or $n<-d/2$. We
note the scaling laws for large times:
\begin{equation}
\rho_0(t)\sim t^{-dn/(d+2n)}, \qquad r_0(t)\sim t^{n/(d+2n)}. \label{sup9}
\end{equation}
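The ansatz (\ref{sup4}) with (\ref{sup5}) and (\ref{sup8}) can be checked directly against the anomalous diffusion equation (\ref{sup3}). The following sketch does this by finite differences for $d=3$, $n=5$ ($\gamma=6/5$); $Z$ is set to $1$, which is legitimate here since $Z$ cancels from the PDE and is only fixed by the mass normalization (\ref{sup6}):

```python
import math

# d = 3, n = 5 > n3 (gamma = 1 + 1/n = 6/5), Theta = 1, Z = 1 (see lead-in).
d, gamma, Theta, Z = 3, 1.2, 1.0, 1.0
S_d = 4.0 * math.pi                     # surface of the unit sphere in d = 3
g13 = (d - 2) / d                       # gamma_{1/3} = (d-2)/d
K = S_d**(gamma - 1) * Theta / gamma    # diffusion constant of Eq. (sup3)

def rho0(t):
    """Central-density factor, Eq. (sup8)."""
    a = 2 * d * (gamma - g13) * S_d**(gamma - 1) * Theta * Z**(1 - gamma)
    return (a * t) ** (-1.0 / (gamma - g13))

def rho(r, t):
    """Barenblatt/Tsallis ansatz, Eqs. (sup4)-(sup5), with rho0 r0^d = 1."""
    p0 = rho0(t)
    u = 1.0 - (gamma - 1.0) * r**2 * p0**(2.0 / d)
    return (p0 / Z) * max(u, 0.0) ** (1.0 / (gamma - 1.0))

def lhs(r, t, h=1e-6):
    """d(rho)/dt by central differences."""
    return (rho(r, t + h) - rho(r, t - h)) / (2 * h)

def rhs(r, t, h=1e-4):
    """K r^{1-d} d/dr ( r^{d-1} d(rho^gamma)/dr ) by nested differences."""
    def flux(rr):
        dg = (rho(rr + h, t)**gamma - rho(rr - h, t)**gamma) / (2 * h)
        return rr**(d - 1) * dg
    return K * (flux(r + h) - flux(r - h)) / (2 * h) / r**(d - 1)

r, t = 1.0, 1.0   # a point well inside the compact support
assert abs(lhs(r, t) - rhs(r, t)) < 1e-3 * abs(lhs(r, t))
```

The two sides of Eq. (\ref{sup3}) agree to finite-difference accuracy, and the density vanishes identically outside the support radius $r_{max}(t)=r_0(t)/\sqrt{\gamma-1}$.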
It is instructive to re-derive this solution in a different manner,
without presupposing the form of the solution. We look for general
self-similar solutions of the form
\begin{equation}
\rho(r,t)=\rho_{0}(t)f(r/r_{0}(t)). \label{sup10}
\end{equation}
We require that all the mass is in the profile (\ref{sup10}) and
impose the relation (\ref{sup5}), implying that
\begin{equation}
\int_{0}^{+\infty}f(x)S_{d}x^{d-1}\,dx=1.\label{sup11}
\end{equation}
Substituting the ansatz (\ref{sup10}) with Eq. (\ref{sup5}) in Eq.
(\ref{sup3}), and imposing the condition (\ref{sup7}) where $Z$ is
for the moment an arbitrary constant, we obtain the differential
equation
\begin{equation}
\frac{1}{x^{d-1}}\frac{d}{dx}\left
(x^{d-1}f^{\gamma-1}\frac{df}{dx}\right
)=-2Z^{1-\gamma}(xf'+df).\label{sup12}
\end{equation}
Noting the identity $x^{d-1}(xf'+df)=(x^d f)'$, this equation can be integrated into
\begin{equation}
f^{\gamma-2}\frac{df}{dx}+2Z^{1-\gamma}x=0. \label{sup13}
\end{equation}
This first order differential equation can again be readily
integrated. We can choose the constant of integration so as to
obtain a solution of the form
\begin{equation}
f(x)=\frac{1}{Z}\left\lbrack 1-(\gamma-1)x^2\right\rbrack_{+}^{1/(\gamma-1)}.\label{sup14}
\end{equation}
Finally, the normalization condition (\ref{sup11}) implies that $Z$
is given by Eq. (\ref{sup6}). It is interesting to realize that the
$q$-exponential function $e_{q}(x)=\lbrack
1+(q-1)x\rbrack_{+}^{1/(q-1)}$ introduced in the context of Tsallis
generalized thermodynamics stems from the simple differential
equation (\ref{sup13}) related to the anomalous diffusion equation
(\ref{sup3}). Indeed, the scaling solution of this equation can be
written
\begin{equation}
f(x)=\frac{1}{Z}e_{\gamma}(-x^2),\label{sup15}
\end{equation}
which generalizes the Gaussian distribution obtained for the
ordinary diffusion equation, recovered for $\gamma=1$.
The moments $\langle r^k\rangle$ of the distribution (\ref{sup10}) are
given by
\begin{equation}
\langle r^k\rangle(t)= r_0(t)^k\int_{0}^{+\infty}f(x)x^{k+d-1}S_d \,dx.\label{sup16}
\end{equation}
They exist provided that $k>-d$ for $\gamma\ge 1$ and provided that
$-d<k<2/(1-\gamma)-d$ for $\gamma<1$. They scale like $\langle
r^k\rangle\propto r_0^k\propto t^{nk/(d+2n)}$.
The Tsallis entropy is finite for $\gamma>\gamma_{3/5}=d/(d+2)$ and it scales like
\begin{equation}
S(t)-nM=-n\rho_0^{1/n}\int_0^{+\infty}f(x)^{\gamma}S_d
x^{d-1}dx\propto t^{-d/(d+2n)}. \label{sup17}
\end{equation}
On the other hand, for $d>2$, the potential energy $W=-1/(2S_d) \int
(\nabla\Phi)^2 d{\bf r}$ scales like
\begin{equation}
W\propto \int_{0}^{+\infty} \left \lbrack
\frac{M(r)}{r^{d-1}}\right \rbrack^{2}r^{d-1}dr\propto
\frac{1}{r_{0}^{d-2}}\propto t^{-n(d-2)/(d+2n)}.\label{sup18}
\end{equation}
Comparing Eqs. (\ref{sup17}) and (\ref{sup18}), we see that the
potential energy is always negligible with respect to the entropy
for $n>n_{3}$. Therefore, the Tsallis free energy behaves like
\begin{equation}
F(t)+nKM\propto t^{-d/(d+2n)},\label{sup19}
\end{equation}
for $t\rightarrow +\infty$. Note that for $n_3<n<+\infty$, the free
energy tends to a finite value $-nKM$ as the system spreads to
infinity. Alternatively, for the isothermal case $n= +\infty$, the free energy
is given by Eq. (95) of \cite{virial1} and it tends to $-\infty$.
We can use the identity (\ref{m4}) to derive the first correction in
the evolution of the moments $\langle r^k\rangle$ due to
self-gravity. For that purpose, we insert the zeroth order
solution (\ref{sup4}) into the equation
\begin{eqnarray}
\label{sup20}
\frac{d\langle r^k\rangle}{dt}=k(k+d-2)\int P r^{k-2} \,d{\bf r}\nonumber\\
-k \int_{0}^{+\infty} r^{k-d}M(r)\frac{\partial M}{\partial r}\,dr.
\end{eqnarray}
The first term gives, after integration, the pure anomalous scaling
\begin{eqnarray}
\label{sup21}
\langle r^{k}\rangle_{0} \propto t^{nk/(d+2n)}.
\end{eqnarray}
The second term gives, after integration, the first correction due
to gravity. If we write $\Delta \langle r^{k}\rangle =\langle
r^{k}\rangle -\langle r^{k}\rangle_{0}$, we get
\begin{eqnarray}
\label{sup22}
\Delta \langle r^{k}\rangle \propto t^{\frac{n(k-d)}{d+2n}+1}.
\end{eqnarray}
Let us consider some particular cases: (i) For $n\rightarrow +\infty$,
we obtain $\Delta \langle r^{k}\rangle
\propto t^{(k-d)/2+1}$. If we furthermore consider the second moment $k=2$
(moment of inertia), we recover the scaling $\Delta \langle
r^{2}\rangle \propto t^{2-{d}/{2}}$ of \cite{virial1}. (ii) For $k=d$,
we find that $\Delta \langle r^{d}\rangle=-(d/2)t\propto t$ regardless of
the index $n$ and the dimension of space $d$. (iii) For $n=n_3$,
gravitational effects are of the same order as diffusive effects and
$\langle r^k\rangle_{0}\propto \Delta \langle r^k\rangle\propto
t^{k/d}$. This case will be studied in detail in the next
section. (iv) Finally, let us introduce the number $k_0\equiv
d-d/n-2$. For $k<k_0$, $\Delta\langle r^k\rangle\rightarrow 0$; for
$k=k_0$, $\Delta\langle r^k\rangle\propto 1/t$; for $k>k_0$,
$\Delta\langle r^k\rangle\rightarrow +\infty$.
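These exponent identities can be checked mechanically; a small sketch in exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction as F

def base_exp(k, n, d):
    """Exponent of <r^k>_0 ~ t^{nk/(d+2n)}, Eq. (sup21)."""
    return F(n * k, d + 2 * n)

def corr_exp(k, n, d):
    """Exponent of Delta<r^k> ~ t^{n(k-d)/(d+2n) + 1}, Eq. (sup22)."""
    return F(n * (k - d), d + 2 * n) + 1

# (ii) k = d: the gravitational correction grows linearly in t for any n, d
assert all(corr_exp(d, n, d) == 1 for d in (2, 3, 4) for n in (1, 2, 5, 10))

# (iii) n = n3: both exponents reduce to k/d (take d = 4, where n3 = 2)
d, k = 4, 3
n3 = d // (d - 2)
assert base_exp(k, n3, d) == corr_exp(k, n3, d) == F(k, d)

# (iv) threshold k0 = d - d/n - 2: the correction decays iff k < k0
def k0(n, d):
    return F(d * n - d - 2 * n, n)   # = d - d/n - 2
assert corr_exp(0, 10, 3) < 0 and F(0) < k0(10, 3)   # k < k0: decays
assert corr_exp(1, 10, 3) > 0 and F(1) > k0(10, 3)   # k > k0: grows
```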
\subsection{The critical case $n=n_3$}
\label{sec_crit}
Finally, for $n=n_3$, and since a critical $\Theta_c$ exists
irrespective of the presence of a confining box, the system
collapses for $\Theta<\Theta_c$ and evaporates for
$\Theta>\Theta_c$. In the latter regime, gravity remains relevant
and evaporation is controlled by both gravity and the diffusion
process (see Fig.~\ref{evap1}). Mathematically, this arises from the
fact that there is an evaporation scaling solution for which all the
terms of Eq.~(\ref{g1}) scale in the same way. More specifically, we
expect an evaporation density profile of the form
\begin{equation}
\rho(r,t)=\rho_{0}(t)f\biggl ({r\over r_{0}(t)}\biggr ), \qquad
\rho_{0}(t)r_{0}(t)^d=1.
\label{crit1}
\end{equation}
The relation between the evaporation length $r_0$ and $\rho_0$
(proportional to the central density) is obtained by requiring that
the diffusive term and the drift term in Eq. (\ref{g1}) scale the same. In
terms of the mass profile, we have
\begin{equation}
M(r,t)= g\biggl ({r\over r_{0}(t)}\biggr )\quad {\rm with}\quad
g(x)=\int_{0}^{x}f(x')S_{d}x'^{d-1}\,dx',
\label{crit2}
\end{equation}
and in terms of the function $s$, we have
\begin{equation}
s(r,t)=\rho_{0}(t)S\biggl ({r\over r_{0}(t)}\biggr ), \qquad {\rm
with}\qquad S(x)={g(x)\over x^{d}}. \label{crit3}
\end{equation}
We require that all the mass is contained in the self-similar
profile \footnote{Looking for a self-similar solution of the form
(\ref{g7}) for any index $n$ and requiring that the diffusion and
gravity scale the same, we find that $\rho_0 r_0^{\alpha}\sim 1$
where $\alpha$ is given by Eq. (\ref{g8}). The profile will contain
all the mass provided that $\rho_0 r_0^d\sim 1$ which implies
$\alpha=d$ leading to $n=n_{3}$. Thus, it is only for the critical
index that we can have a self-similar solution where the diffusion
and gravity scale the same and which contains all the mass.}, which
implies that
\begin{equation}
g(+\infty)=\int_{0}^{+\infty}f(x)S_{d}x^{d-1}\,dx=1.
\label{crit4}
\end{equation}
Inserting the ansatz (\ref{crit3}) in Eq. (\ref{g6}), using Eq.
(\ref{crit1}), and imposing
\begin{equation}
\frac{1}{\rho_{0}^{2}}\frac{d\rho_{0}}{dt}=-d\Theta, \quad {\rm
i.e.}\quad r_{0}^{d-1}\frac{dr_{0}}{dt}=\Theta,\label{crit5}
\end{equation}
we obtain the scaling equation (note the change of sign
compared to Eq.~(\ref{g13})) \footnote{{The scaling
equations (\ref{g13}) and (\ref{crit6}) have a very different
mathematical structure. The scaling equation for collapse
(\ref{g13}), valid for $n>n_3$, leads to an eigenvalue problem for $S(x)$
\cite{sc,lang}. Indeed, it admits a physical solution for a unique
value of $S(0)$ equal to $S_{*}$ (say). For $S(0)<S_{*}$, the solution
becomes negative at some point, and for $S(0)>S_*$,
it diverges at a finite $x_0$. By contrast,
the scaling equation for evaporation (\ref{crit6}), valid for $n=n_3$,
admits a one parameter family of solutions parameterized by
$S(0)$. Then, the suitable value $S_{*}$ is selected by the
normalization condition (\ref{crit4}).}}:
\begin{equation}
S''+{d+1\over x}S'+(xS'+dS)^{2/d}\left (\frac{1}{\Theta}S+1\right )=0. \label{crit6}
\end{equation}
The evaporation radius is given by
\begin{equation}
r_0(t)=(d\Theta t)^{1/d}.\label{crit7}
\end{equation}
The moments scale like $\langle r^k\rangle\propto r_0^k\propto
(d\Theta t)^{k/d}$ and the free energy scales like $F(t)+n_3KM\propto
t^{-(d-2)/d}$.
If we consider the large temperature limit $\Theta\gg 1$ where the
diffusion term dominates over the gravitational drift, the foregoing
differential equation reduces to
\begin{equation}
S''+{d+1\over x}S'+(xS'+dS)^{2/d}=0. \label{crit8}
\end{equation}
In terms of the function $f$ it can be written
\begin{equation}
f^{-2/d} f'+\frac{x}{S_d^{(d-2)/d}}=0,\label{crit9}
\end{equation}
which is consistent with Eq.~(\ref{sup13}) up to the changes of
notations in Eqs. (\ref{sup7}) and (\ref{crit5}). We can either solve
this equation and impose the normalization condition (\ref{crit4}) or
make simple transformations in order to directly use the results of
Sec. \ref{sec_sup}. Indeed, let us set $\rho_{0}=\sigma\rho_{*}$ and
$r_{0}=\mu r_{*}$. We impose $\rho_* r_{*}^d=1$ leading to
$\sigma\mu^{d}=1$. On the other hand, we choose
$\sigma=2(S_{d}/Z)^{(d-2)/d}$ where $Z$ is defined by Eq. (\ref{sup6})
so that
$\dot{\rho}_{*}=-2d(S_{d}/Z)^{(d-2)/d}\Theta\rho_{*}^{2}$. Then,
$\rho=\rho_{*}f_{*}(r/r_{*})$ with $f_*(x)=\sigma f(x/\mu)$. Now,
$\rho_*$, $r_*$ and $f_*$ have been defined so as to coincide with the
functions $\rho_0$, $r_0$ and $f$ of Sec. \ref{sec_sup}. Thus, we get
$f(x)=(1/\sigma)f_*(\mu x)$ where $f_*$ is the function
(\ref{sup14}). Therefore, the normalized solution of Eq. (\ref{crit9})
with the present notations can be written
\begin{equation}
f(x)=\frac{1}{\sigma Z}\left\lbrack 1-\frac{d-2}{d}\mu^{2} x^2\right
\rbrack_{+}^{d/(d-2)},\label{crit10}
\end{equation}
with
\begin{equation}
\sigma \mu^{d}=1, \qquad \sigma=2\left (\frac{S_{d}}{Z}\right )^{(d-2)/d},\label{crit11}
\end{equation}
and where $Z$ is given by Eq. (\ref{sup6}). Proceeding along the
lines of \cite{virial1}, we could expand the solutions of
Eq. (\ref{crit6}) (or of the equivalent equation for $f$) in powers of
$\Theta^{-1}$ in the limit $\Theta\rightarrow +\infty$.
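One can check numerically, in $d=3$, that the profile (\ref{crit10}) with the constants (\ref{crit11}) solves Eq. (\ref{crit9}). A minimal sketch (the quadrature for $Z$ is a plain midpoint rule; in fact $Z$ cancels from the residual, so its accuracy is not critical):

```python
import math

d = 3
gamma = 1.0 + (d - 2.0) / d      # gamma = 1 + 1/n3 = 4/3 in d = 3
S_d = 4.0 * math.pi

# Z from Eq. (sup6), by a midpoint rule over the compact support
xmax = 1.0 / math.sqrt(gamma - 1.0)
N = 20000
hq = xmax / N
Z = sum((1.0 - (gamma - 1.0) * ((i + 0.5) * hq)**2) ** (1.0 / (gamma - 1.0))
        * S_d * ((i + 0.5) * hq)**(d - 1) for i in range(N)) * hq

sigma = 2.0 * (S_d / Z)**((d - 2.0) / d)   # Eq. (crit11)
mu = sigma**(-1.0 / d)                     # from sigma mu^d = 1

def f(x):
    """Profile of Eq. (crit10)."""
    u = 1.0 - (d - 2.0) / d * (mu * x)**2
    return max(u, 0.0) ** (d / (d - 2.0)) / (sigma * Z)

# Residual of Eq. (crit9): f^{-2/d} f' + x / S_d^{(d-2)/d}
x, h = 0.5, 1e-6
fp = (f(x + h) - f(x - h)) / (2 * h)
residual = f(x)**(-2.0 / d) * fp + x / S_d**((d - 2.0) / d)
assert abs(residual) < 1e-6
```

The residual vanishes to finite-difference accuracy, confirming that the change of variables above maps the solution (\ref{sup14}) onto Eq. (\ref{crit9}).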
\begin{figure}[htbp]
\centerline{
\includegraphics[width=8cm,angle=0]{crit_T1_grav_nograv.eps} }
\caption[]{In $d=3$ and for $n=n_3=3$, we compare the evaporation
profiles at different times for $\Theta=1>\Theta_c$, for
self-gravitating particles (full lines), to the faster evaporation
dynamics when gravity is switched off (dashed lines). }
\label{evap1}
\end{figure}
\begin{figure}[htbp]
\centerline{ \includegraphics[width=8cm,angle=0]{crit_exp_sca.eps} }
\caption[]{In $d=3$ and for $n=n_3=3$, we compare the scaling
profiles for $\Theta=0.21$ near $\Theta_c\approx 0.20872$,
$\Theta=1$, and $\Theta=100$ (top to bottom full lines; for clarity,
the $\Theta=0.21$ profile has been scaled down by a factor $150$).
For $\Theta\gg 1$, the invariant profile corresponds to the
Barenblatt solution (pure anomalous diffusion) which is a Tsallis
distribution with index $\gamma_{4/3}=1+1/n_3$. For
$\Theta\rightarrow \Theta_{c}$ the invariant profile tends to the
profile of a steady polytrope with index $n_3$. For an intermediate
temperature $\Theta=1$, we illustrate the perfect observed data
collapse by plotting $r_{0}^{d}(t)\rho(r,t)$ as a function of
$r/r_0(t)$, for $t=1.5^n$ $(n=0,...,13)$. These 14 curves are
indistinguishable from the theoretical scaling profile. In the
insert, we illustrate the scaling relation of Eq.~(\ref{scathc})
obtained for different values of $\varepsilon=(\Theta-\Theta_c)/\Theta_c\to 0$.}
\label{evap2}
\end{figure}
In Fig.~\ref{evap2}, we show the form of the evaporation density
profile $f$ as a function of $\Theta>\Theta_c$. As $\Theta$
approaches $\Theta_c$, the central density diverges, whereas the
profile tends to the one corresponding to free diffusion for large
$\Theta$. In addition, we present numerical simulations for an
intermediate $\Theta$, showing that dynamical scaling is perfectly
obeyed. Moreover, when $\Theta\rightarrow \Theta_c$, we find that
the scaling function itself obeys a scaling relation (see insert of
Fig.~\ref{evap2}). Defining
$\varepsilon=(\Theta-\Theta_c)/\Theta_c$, we find
\begin{equation}
f(\Theta,x)=\varepsilon^{-1}F(x/\varepsilon^{1/d}),\label{scathc}
\end{equation}
where $F$ takes the form of a steady polytropic profile of index
$n_3$. This scaling relation implies that close to $\Theta_c$, the
$d$-th moment of $r$ scales as
\begin{equation}
\langle r^d(t)\rangle\sim (\Theta-\Theta_c)t,
\end{equation}
which is a generalization of our exact result for $d=2$
($n_3=+\infty$, $T_c=1/4$) \cite{virial1},
\begin{equation}
\langle r^2(t)\rangle= 4(T-T_c)t+\langle r^2(0)\rangle.
\end{equation}
\section{Analogy between the limiting mass of white dwarf stars
and the critical mass of bacterial populations}
\label{sec_analogy}
The generalized Smoluchowski-Poisson (GSP) system describing the
dynamics of self-gravitating Langevin particles shares many analogies
with the generalized Keller-Segel (GKS) model describing the
chemotaxis of bacterial populations. Below, we briefly review the
basic equations of chemotaxis and show the close link with the present
work.
\subsection{The generalized Keller-Segel model}
\label{sec_gks}
The original Keller-Segel model has the form \cite{ks}:
\begin{eqnarray}
\label{gks1}
\frac{\partial\rho}{\partial t}=\nabla\cdot
(D_{2}(\rho,c)\nabla\rho)-\nabla\cdot (D_{1}(\rho,c)\nabla c),
\end{eqnarray}
\begin{equation}
\label{gks2}\epsilon {\partial c\over\partial t}=-k(c) c+h(c) \rho+D_{c}\Delta c.
\end{equation}
The drift-diffusion equation (\ref{gks1}) governs the evolution of the
density of bacteria $\rho({\bf r},t)$ and the reaction-diffusion
equation (\ref{gks2}) governs the evolution of the secreted chemical
$c({\bf r},t)$. The bacteria diffuse with a diffusion coefficient
$D_{2}$ and they also move in the direction of a positive gradient of
the chemical (chemotactic drift). The coefficient $D_{1}$ is a measure
of the strength of the influence of the chemical gradient on the flow
of bacteria. On the other hand, the chemical is produced by the
bacteria with a rate $h(c)$ and is degraded with a rate $k(c)$. It
also diffuses with a diffusion coefficient $D_{c}$. In the primitive
Keller-Segel model, $D_{1}=D_{1}(\rho,c)$ and $D_{2}=D_{2}(\rho,c)$
can both depend on the concentration of the bacteria and of the
chemical. This can take into account microscopic constraints, like
close-packing effects \cite{ph,degrad,kin} or anomalous diffusion \cite{lang}.
If we assume a constant diffusion coefficient $D_2=D$ and a constant
mobility $D_1/\rho=\chi$ (we also consider a constant production
rate $\lambda$ and a constant degradation rate $k^2$ of the chemical),
we obtain the standard
Keller-Segel (KS) model
\begin{eqnarray}
\label{gks3}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left
(D\nabla\rho-\chi \rho\nabla c \right ),
\end{eqnarray}
\begin{equation}
\label{gks4}\epsilon {\partial c\over\partial t}=\Delta c-k^2c+\lambda\rho.
\end{equation}
If we now assume that the diffusion coefficient and the mobility
depend on the concentration of the bacteria, and if we set $D_2=D
h(\rho)$ and $D_1=\chi g(\rho)$, where $h$ and $g$ are positive
functions, we obtain the generalized Keller-Segel (GKS) model
\cite{ph,degrad,kin}:
\begin{eqnarray}
\label{gks5}
\frac{\partial\rho}{\partial t}=\nabla\cdot
(Dh(\rho)\nabla\rho-\chi g(\rho)\nabla c),
\end{eqnarray}
\begin{equation}
\label{gks6}\epsilon {\partial c\over\partial t}=\Delta c-k^2c+\lambda\rho.
\end{equation}
Equation (\ref{gks5}) can be viewed as a nonlinear mean field
Fokker-Planck (NFP) equation \cite{gfp} associated with a stochastic
process of the form
\begin{equation}
\frac{d{\bf r}}{dt}=\chi(\rho)\nabla c+\sqrt{2D(\rho)}{\bf R}(t),\label{gks7}
\end{equation}
with a diffusion coefficient
$D(\rho)=(D/\rho)\int^{\rho}h(\rho')d\rho'$ and a mobility
$\chi(\rho)=\chi g(\rho)/\rho$. These equations are associated with a
notion of effective generalized thermodynamics \cite{frank,gfp}. The Lyapunov
functional of the NFP equation (\ref{gks5})-(\ref{gks6}) can be
written in the form of a generalized free energy $F=E-T_{eff}S$ where
\begin{eqnarray}
\label{gks8}
E=\frac{1}{2\lambda}\int \left\lbrack (\nabla c)^{2}+k^{2} c^{2}\right \rbrack
\, d{\bf r}-\int \rho c \, d{\bf r},
\end{eqnarray}
is the energy, $T_{eff}=D/\chi$ is an effective temperature given by
an Einstein-like relation and
\begin{eqnarray}
\label{gsk9}
S=-\int C(\rho)\, d{\bf r}, \qquad C''(\rho)=\frac{h(\rho)}{g(\rho)},
\end{eqnarray}
is a generalized entropy. A straightforward calculation shows that
\begin{eqnarray}
\label{gks10}
\dot F=-\frac{1}{\lambda\epsilon}\int
(-\Delta c+k^{2}c-\lambda\rho)^{2} \,d{\bf r}\nonumber\\
-\int \frac{1}{\chi g(\rho)}(Dh(\rho)\nabla\rho-\chi g(\rho)\nabla c)^{2}\,d{\bf r}\le 0,
\end{eqnarray}
which is the expression of the $H$-theorem in the canonical ensemble
adapted to dissipative systems. If we consider the particular case of
a constant mobility $g(\rho)=\rho$ and a power law diffusion
$h(\rho)=\gamma\rho^{\gamma-1}$, with $\gamma=1+1/n$, we obtain the
polytropic Keller-Segel model \cite{lang}:
\begin{eqnarray}
\label{gsk11}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left
(D\nabla\rho^{\gamma}-\chi \rho\nabla c \right ),
\end{eqnarray}
\begin{equation}
\label{gks12}\epsilon {\partial c\over\partial t}=\Delta c-k^2c+\lambda\rho.
\end{equation}
The standard Keller-Segel model is recovered for $\gamma=1$. Finally,
if we neglect the degradation of the chemical ($k=0$) and consider a
limit of large diffusivity of the chemical (implying $\epsilon=0$), we
obtain for sufficiently large concentrations (see
Appendix C of \cite{kin}):
\begin{eqnarray}
\label{gsk13}
\frac{\partial\rho}{\partial t}=\nabla\cdot \left
(D\nabla\rho^{\gamma}-\chi \rho\nabla c \right ),
\end{eqnarray}
\begin{equation}
\label{gsk14}\Delta c=-\lambda\rho.
\end{equation}
These equations are isomorphic to the generalized
Smoluchowski-Poisson (GSP) system (\ref{up1})-(\ref{up2}) provided
that we set
\begin{equation}
\label{gks15}D=K/\xi,\quad \chi=1/\xi,\quad c=-\Phi, \quad \lambda=S_{d}G.
\end{equation}
Therefore, the results of the present paper apply to the chemotactic
problem provided that the parameters are properly re-interpreted.
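As an illustration of the entropy functional (\ref{gsk9}): for the polytropic choice $h(\rho)=\gamma\rho^{\gamma-1}$, $g(\rho)=\rho$ one has $C''(\rho)=\gamma\rho^{\gamma-2}$, which is reproduced by the Tsallis entropy density $C(\rho)=(\rho^{\gamma}-\rho)/(\gamma-1)$ (up to a term linear in $\rho$, which does not affect $C''$). A quick finite-difference check:

```python
import math

gam = 1.5    # gamma = 1 + 1/n with n = 2 (illustrative value)

def C(rho):
    """Candidate entropy density: C(rho) = (rho^gamma - rho)/(gamma - 1)."""
    return (rho**gam - rho) / (gam - 1.0)

def C2(rho, h=1e-4):
    """Numerical second derivative of C."""
    return (C(rho + h) - 2.0 * C(rho) + C(rho - h)) / h**2

for rho in (0.5, 1.0, 2.0):
    hg = gam * rho**(gam - 1.0) / rho    # h(rho)/g(rho) = gamma rho^(gamma-2)
    assert abs(C2(rho) - hg) < 1e-5 * hg
```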
\subsection{Formulation of the results with the biological variables}
\label{sec_form}
In the gravitational context, we usually fix the coefficients $\xi$,
$G$ and $M$ and use the temperature $\Theta$ as a control parameter.
In the biological context, the coefficients $D$, $\chi$ and $\lambda$
are assumed given and the control parameter is the mass $M$.
Therefore, it may be useful to briefly reformulate the previous
results in terms of the mass, using notations adapted to the
chemotactic problem.
For the critical index $n=n_{3}=d/(d-2)$ in $d\ge 2$, the steady
states (polytropes) of the GKS model (\ref{gsk13})-(\ref{gsk14})
exist, in an unbounded domain, for a unique value of the mass given by \cite{csmasse}:
\begin{equation}
\label{f1}M_{c}=S_{d}\left \lbrack
\frac{D(1+n_3)}{\chi\lambda} \right\rbrack^{n_3/(n_3-1)}\omega_{n_3}.
\end{equation}
For $d=3$, we have
\begin{equation}
\label{f2}M_{c}=32\pi\omega_{3}\left (
\frac{D}{\chi\lambda} \right )^{3/2}\simeq 202.8956...\left (
\frac{D}{\chi\lambda} \right )^{3/2}.
\end{equation}
For $d=2$, using the identity (\ref{up18}), we recover the critical
mass
\begin{equation}
\label{f3}M_{c}=\frac{8\pi D}{\chi\lambda},
\end{equation}
associated with the two-dimensional standard Keller-Segel (KS) model
(see \cite{mt} and references therein). It is convenient to
introduce rescaled variables so that $D=\lambda=\chi=1$. With this
system of units the critical mass is $M_c(d)=S_d
(1+n_3)^{n_3/(n_3-1)}\omega_{n_3}=S_d\lbrack
2(d-1)/(d-2)\rbrack^{d/2}\omega_{d/(d-2)}$. For example, $M_c(d=2)=8\pi$ and
$M_c(d=3)=32\pi\omega_3=202.8956...$. Using the approximate
expression of $\omega_n$ obtained in Eq. (B72) of \cite{wd}, we can
derive an approximate expression of the critical mass in the form
\begin{equation}
\label{f4}M_{c}^{approx}(d)=\frac{S_d}{d}\lbrack d(d+2)\rbrack^{d/2}.
\end{equation}
For $d=2$, it recovers the exact result
$M_{c}^{approx}(2)=M_c=8\pi$. On the other hand,
$M_c^{approx}(d=3)=243$ and $M_c^{approx}(d=4)=2842$. Using
$S_d=2\pi^{d/2}/\Gamma(d/2)$ we find that $M_c^{approx}(d)\sim
2\pi^{d/2}d^d/\Gamma(d/2)$ for $d\rightarrow +\infty$.
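These values are easy to check numerically. The short Python snippet below (an illustrative check, not part of the original derivation; the function names are ours) evaluates Eq. (\ref{f4}) with $S_d=2\pi^{d/2}/\Gamma(d/2)$:

```python
import math

def S(d):
    # surface of the unit sphere in d dimensions: S_d = 2 pi^(d/2) / Gamma(d/2)
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def Mc_approx(d):
    # approximate critical mass, Eq. (f4): M_c^approx(d) = (S_d / d) [d (d + 2)]^(d/2)
    return S(d) / d * (d * (d + 2)) ** (d / 2)

print(abs(Mc_approx(2) - 8 * math.pi) < 1e-9)   # exact in d = 2
print(round(Mc_approx(3)), round(Mc_approx(4))) # 243 2842
```

For $d=2$ the formula is exact, while $d=3$ and $d=4$ reproduce the rounded values $243$ and $2842$ quoted above.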
Let us briefly discuss the critical dynamics of the GKS system with
index $n=n_{3}=d/(d-2)$ for $d\ge 2$, depending on the total mass of the
bacteria. For $M<M_c$, a box-confined system tends to an incomplete
polytrope confined by the walls of the box. In an unbounded domain,
the system evaporates in a self-similar way as discussed in
Sec. \ref{sec_crit}. For $M>M_c$, the system undergoes finite time
collapse as discussed in Sec. \ref{sec_collapse}. In a finite time
$t=t_{coll}$, it forms a Dirac peak containing a mass $M_c$ surrounded
by a collapsing halo evolving quasi self-similarly with a time-dependent
exponent $\alpha(t)$ tending extremely slowly to $\alpha=d$ as
$t\rightarrow t_{coll}$. Thus,
\begin{equation}
\label{f5}\rho({\bf r},t)\rightarrow M_{c}\delta({\bf r})+\chi({\bf r},t),
\end{equation}
where the halo profile $\chi({\bf r},t)$ behaves roughly as $r^{-d}$ for $r\rightarrow 0$. For
$M=M_c$, the situation is delicate and depends on the dimension of
space. For $d=2$, in a bounded domain, the steady state of the KS
model is a Dirac peak ($\rho_{0}=+\infty$). We have constructed in
\cite{sc} a self-similar solution tending to this Dirac peak in
infinite time. The central density increases exponentially rapidly. In
an infinite domain, the KS model admits an infinite family of steady
state solutions parameterized by their central density but the Dirac
peak ($\rho_{0}=+\infty$) is selected dynamically (the other solutions
have an infinite moment of inertia and, since the moment of inertia is
conserved when $M=M_{c}$, they cannot be reached from an initial
condition with a finite moment of inertia). We have constructed in
\cite{virial1} a self-similar solution tending to this Dirac peak in
infinite time (and ejecting a small amount of mass at large distances
so as to satisfy the moment of inertia constraint). The central
density increases logarithmically rapidly. For $d>2$ and $M=M_c$, in a
bounded domain, the GKS model admits an infinite family of steady
state solutions parameterized by their central density or,
equivalently, by their natural radius $R_*$. We have found numerically
that the system tends to the polytrope where the density reaches zero
at the box radius ($R_*=R$).
Due to the analogy between gravity and chemotaxis \cite{crrs}, we find
that the critical mass of bacterial populations in the standard
Keller-Segel model in $d=2$ and in the generalized Keller-Segel model
in $d>2$ for the critical index $n=n_3$ shares some resemblance with
the Chandrasekhar mass of white dwarf stars. For example, the curves
of Figs. \ref{mrho} and \ref{alphaeta3} also represent the mass of the
bacterial aggregate as a function of the central density. As we have
seen, they are strikingly similar to the mass-central density relation
of white dwarf stars in Fig. \ref{massedensiteD3}. Therefore,
bacteria and white dwarf stars share deep analogies despite their very
different physical nature \cite{csmasse}.
\section{Conclusions and perspectives: the GSP system with a
relativistic equation of state}
In this paper, we have studied the critical dynamics, at the index
$n=n_3$, of the GSP system and GKS model describing self-gravitating
Langevin particles and bacterial populations. This study completes our
previous investigation \cite{lang} that was restricted to the cases
$n<n_3$ and $n>n_3$. We have seen that, at the index $n=n_3$, there
exists a critical mass $M_c$ (independent of the size of the system)
that is connected to the Chandrasekhar limiting mass of white dwarf
stars \cite{csmasse}. In order to strengthen this analogy, it would be
interesting to study the GSP system (\ref{gsp1})-(\ref{gsp2}) with the
equation of state (\ref{wdp2}) corresponding to relativistic white
dwarf stars. In fact, we can already describe qualitatively the
behavior of the solutions by using the results obtained here for
polytropes (see also the stability results obtained in \cite{wd} for
relativistic white dwarf stars).
For $d=1$ and $d=2$, there exists an equilibrium state (global minimum
of free energy) for all values of the mass $M$. Therefore, the GSP
system relaxes towards that steady state.
For $d=3$, there exists a critical mass
$M_{Chandra}=0.196701...({hc/G})^{3/2}/(\mu H)^{2}$. For
$M<M_{Chandra}$, the GSP system tends to a partially relativistic
white dwarf star (global minimum of free energy). For $M\ll
M_{Chandra}$, the density is small so that the equation of state
reduces to that of a polytrope of index $n=3/2$ (classical
limit). Therefore, the GSP system relaxes towards a classical white
dwarf star as described in Fig. 21 of \cite{lang}. For $M=M_{Chandra}$,
the density becomes large so that the equation of state reduces to
that of a critical polytrope of index $n=3$ (ultra-relativistic
limit). We expect that the GSP system forms a Dirac peak of mass
$M_{Chandra}$ in infinite time. For $M>M_{Chandra}$, there is no
equilibrium state and the system collapses. When the density reaches
high values, the system becomes equivalent to a polytrope of index
$n=3$. Therefore, according to the present study, it forms in a finite
time a Dirac peak of mass $M_{Chandra}$ surrounded by a halo evolving
quasi self-similarly with an exponent $\alpha(t)$ converging very
slowly to $\alpha=3$.
For $d=4$, there exists a critical mass
$M_c=0.0143958...h^4/(m^2G^2\mu^3 H^3)$ discovered in \cite{wd}. For
$M<M_c$, the steady states are unstable and the system can either
collapse or evaporate (depending on the form of the initial density
profile and on the basin of attraction of the solution). In case of
evaporation, when the density reaches low values, the system becomes
equivalent to a polytrope of critical index $n_{3/2}=n_{3}=2$
(classical limit). In that case, it undergoes a self-similar
evaporation similar to that described in Sec. \ref{sec_crit} where
diffusion and gravity scale the same way. In case of collapse, when
the density reaches high values, the system becomes equivalent to a
polytrope of index $n_{3}'=4>n_3=2$ (ultra-relativistic limit). In
that case, it undergoes a self-similar collapse similar to that
described in \cite{lang}. For $M>M_c$, there is no steady state and
the system collapses in the way discussed previously (energy
considerations developed in \cite{wd} show that there is no
evaporation in that case).
For $d\ge 5$, there is no steady state and the system can either
collapse or evaporate. In case of evaporation, when the density
reaches low values, the system becomes equivalent to a polytrope of
index $n_{3/2}>n_{3}$. In that case, it undergoes a self-similar
evaporation similar to that described in Sec. \ref{sec_sup} where
gravity becomes asymptotically negligible. In case of collapse, when
the density reaches high values, the system becomes equivalent to a
polytrope of index $n_{3}'>n_3$. In that case, it undergoes a
self-similar collapse similar to that described in \cite{lang}.
As we have already mentioned, the real dynamics of white dwarf stars
is not described by the GSP system, but is much more complicated.
However, we think that the study of this simple dynamical model is an
interesting first step before considering more complicated models. At
least, it reveals the great richness of the problem. A next step
would be to take into account inertial effects and study the
(generalized) Kramers-Poisson system and the corresponding
hydrodynamic equations \cite{virial2}.
\section{Introduction}
\label{sec:intro}
Solving differential equations has been of great interest since the invention of calculus by
Newton and Leibniz. About two and a half centuries after the invention of calculus, Schr\"{o}dinger introduced the differential equation which now carries his name. It has been applied widely to many physical (quantum) systems. One general class of these physical systems is bound systems, such as the hydrogen atom, molecules, and nucleons in a nucleus. Solving such systems means obtaining their eigenvalues and (or) eigenfunctions.
The asymptotic iteration method (AIM) is one of the various methods for solving the corresponding differential equations of bound systems; it was developed by Ciftci, Hall, and Saad \cite{Ciftci2003_JPA,Ciftci2005_JPA,Ciftci2005_PRA,Ciftci2005_PLA} and has many successful applications in the literature. Depending on the nature of a differential equation, AIM can give analytical or numerical solutions.
The Schr\"{o}dinger, Dirac, Klein-Gordon, and Duffin-Kemmer-Petiau (DKP) wave equations, like any others, have to be brought into the form of a second-order homogeneous linear differential equation,
\begin{equation}
\label{eq:aim}
y'' = \lambda_0(x) y' + s_0(x) y,
\end{equation}
as mentioned by Ciftci \emph{et al}. \cite{Ciftci2003_JPA}, to be solvable by AIM.
In Refs. \cite{Boztosun2006_JMP,Yasuk2006_JMP}, one can find successful applications of AIM to the DKP equation, which give analytical solutions for the harmonic oscillator and Coulomb potential cases, while the anharmonic potential case needs a perturbative treatment within the framework of AIM. Another study, performed on the Schr\"{o}dinger equation for the Makarov potential \cite{Bayrak2008_IJTP}, also resulted in analytical solutions, and there is an approximate analytical solution of the Dirac equation with the Hulthen potential, as presented in \cite{Soylu2007_JMP}. There is also a study \cite{Boztosun2007_CPL} which shows that AIM can be simplified for a group of differential equations, so that analytical solutions can be obtained almost immediately.
Although AIM can solve many differential equations analytically, in many other cases this might not be possible due to the nature of the particular differential equation; in such cases, AIM can still offer numerical solutions. Cases of numerically obtained eigenvalues can be found in a Schr\"{o}dinger equation study with a Yukawa potential \cite{Karakoc2006_IJMPE}, a DKP equation study of the sextic oscillator potential \cite{Yasuk_2008}, and the references therein. An interesting numerical application of AIM has been made to Kerr-(A)dS black holes using the Kerr-(A)dS angular separation equation \cite{Cho2009_PRD}.
Almost all of the examples given above (to the best of my knowledge) were investigated using codes written in Mathematica, Maple, or some similar closed-source software. In contrast, \emph{AIMpy} \cite{aimpy} is an open-source \cite{opensource} code based on the Python language \cite{Python}; it is as fast as an unpublished alternative code written in Mathematica by the author of this study.
The \emph{AIMpy} \cite{aimpy} code is designed to solve numerical cases, and it relies on the improved version of AIM developed by Cho \textit{et~al}. \cite{Cho2010_CQG}. The following sections are dedicated to a short summary of AIM and improved AIM, and to some examples recalculated with \emph{AIMpy} \cite{aimpy} for problems chosen from the literature \cite{Karakoc2006_IJMPE,Yasuk_2008,Cho2010_CQG,Bayrak2007_IJQC}.
\section{Model}
\label{sec:model}
\subsection{AIM}
\label{sec:aim}
If one focuses on solving the 1-dimensional Schr\"{o}dinger equation with a smoothly changing potential,
this equation can be represented in the following form,
\begin{equation}
\label{eq:1dSch}
\frac{d^2\psi(x)}{dx^2} + V(x)\psi(x) = E \psi(x)~.
\end{equation}
To be able to perform AIM on Eq. \ref{eq:1dSch}, it needs to be transformed into the form of Eq. \ref{eq:aim}. This can be achieved by proposing the following relation,
\begin{equation}
\label{eq:aimFunc}
\psi(x) = f(x) y(x).
\end{equation}
When inserted into Eq. \ref{eq:1dSch}, this relation produces an equation of the form of Eq.~\ref{eq:aim} with
\begin{eqnarray}
\label{eq:l0s0}
\lambda_0(x) &=& - \frac{2 \frac{d}{d x} f{\left (x \right )}}{f{\left (x \right )}}, \quad \textrm{and} \nonumber \\
s_0(x) &=& E - V{\left (x \right )} - \frac{\frac{d^{2}}{d x^{2}} f{\left (x \right )}}{f{\left (x \right )}}.
\end{eqnarray}
The process of obtaining $\lambda_0$ and $s_0$ is very similar to the 1D~Schr\"{o}dinger case for any differential equation, once it is transformed into the form of Eq.~\ref{eq:aim}. The choice of the function $f(x)$ is important for obtaining the solution; usually one should analyze the asymptotic behavior of the differential equation. Examples of how $f(x)$ is proposed can be seen in Refs. \cite{Ciftci2003_JPA,Ciftci2005_JPA,Ciftci2005_PLA,Ciftci2005_PRA,Bayrak2008_IJTP,Karakoc2006_IJMPE,Yasuk_2008,Bayrak2007_IJQC}.
At this point, the iterative part of AIM starts: one takes derivatives of Eq.~\ref{eq:aim} and substitutes the previously obtained lower-order derivatives into the new higher-order differential equation. This leads to the iterative relations for $\lambda_k$ and $s_k$:
\begin{eqnarray}
\label{eq:lksk}
\lambda _ {k} (x) & = &
\lambda _ {k - 1} ^ {\prime} (x) + s _ {k - 1} (x) + \lambda _ {0} (x) \lambda _ {k - 1} (x) \\
s _ {k} (x) & = &
s _ {k - 1} ^ {\prime} (x) + s _ {0} (x) \lambda _ {k - 1} (x) , \quad \textrm{where } k = 1,2,3 , \dots \nonumber
\end{eqnarray}
The asymptotic behavior of the method comes into play when $k$ is large enough, and it gives the following relation,
\begin{equation}
\label{eq:alpha_x}
\frac { s _ { k } (x) } { \lambda _ { k } (x) } = \frac { s _ { k - 1 } (x) } { \lambda _ { k - 1 } (x) } \equiv \alpha (x) .
\end{equation}
From this relation, a new quantity $\delta_k$ is defined,
\begin{equation}
\label{eq:quantization}
\delta _ { k } \equiv \lambda _ { k }(x) s _ { k - 1 } ( x ) - \lambda _ { k - 1 } (x) s _ { k } (x),
\end{equation}
and $\delta_k=0$ is called the `quantization condition' \cite{Cho2010_CQG}. The roots of the quantization condition give the eigenvalues of any differential equation which obeys the AIM conditions. Finally, the following expression, whose details are given in Ref. \cite{Ciftci2003_JPA}, gives the general solution of Eq.~\ref{eq:aim},
\begin{eqnarray}
\label{eq:aim_gen_sol}
y(x) &=& \exp \left( - \int ^ {x} \alpha \left(x^ { \prime } \right) dx^ { \prime } \right) \times \nonumber \\
&&\left [ C _ { 2 } + C _ { 1 } \int ^ {x} \exp \left( \int ^ {x^ { \prime }}
\left[ \lambda _ { 0 } \left(x^ { \prime \prime } \right) + 2 \alpha \left(x^ { \prime \prime } \right) \right] dx^ { \prime \prime } \right) dx^ { \prime } \right].
\end{eqnarray}
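As an illustration of the recursion (\ref{eq:lksk}) and the quantization condition (\ref{eq:quantization}), the following self-contained Python sketch (not part of \emph{AIMpy}; the names are illustrative) carries out the iteration with exact rational polynomial arithmetic for the standard harmonic-oscillator test case $\lambda_0=2x$, $s_0=1-E$, whose quantization condition vanishes at the eigenvalues $E=1,3,5,\dots$:

```python
from fractions import Fraction as F

# Polynomials in x are coefficient lists [a0, a1, ...] with exact Fraction entries.
def dpoly(p):                      # derivative d/dx
    return [i * a for i, a in enumerate(p)][1:] or [F(0)]

def mul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) + (q[i] if i < len(q) else F(0))
            for i in range(n)]

def aim_delta(l0, s0, n_iter):
    """Iterate Eq. (lksk) n_iter >= 1 times and return
    delta_n = lambda_n s_{n-1} - lambda_{n-1} s_n evaluated at x = 0."""
    lk, sk = l0, s0
    for _ in range(n_iter):
        lp, sp = lk, sk                               # lambda_{k-1}, s_{k-1}
        lk = add(add(dpoly(lp), sp), mul(l0, lp))     # lambda_k
        sk = add(dpoly(sp), mul(s0, lp))              # s_k
    return lk[0] * sp[0] - lp[0] * sk[0]

# Harmonic-oscillator test case: lambda_0 = 2x, s_0 = 1 - E.
delta_E = lambda E, n: aim_delta([F(0), F(2)], [1 - F(E)], n)
print([delta_E(E, 3) == 0 for E in (1, 3, 5)])  # -> [True, True, True]
print(delta_E(2, 3) != 0)                       # -> True (not an eigenvalue)
```

After $n$ iterations the condition vanishes exactly at $E=1,3,\dots,2n+1$, in line with the discussion above.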
\subsection{Improved AIM}
\label{sec:imp_aim}
An apparent `weakness' of AIM is its iterative nature: one has to take derivatives of $\lambda_{k-1}$ and $s_{k-1}$ to obtain $\lambda_k$ and $s_k$. New derivatives are added at every iteration, and for sufficiently large $k$, $\lambda_k$ and $s_k$ become huge mathematical expressions that may be represented by polynomials of very high order. Depending on the case, this brings difficulties for both symbolic and numerical calculations: the symbolic calculations become time and memory consuming, while the numerical ones suffer from precision and accuracy loss.
To overcome these issues, Cho \textit{et~al}. \cite{Cho2010_CQG} developed an `improved' version of AIM. In the improved version, $\lambda_k$ and $s_k$ are written as Taylor series,
\begin{eqnarray}
\lambda _ {n} ( x ) &= & \sum _ { i = 0 } ^ { \infty } c _ {n} ^ { i } ( x - x_0 ) ^ { i }, \\
s _ {n} ( x ) &= & \sum _ { i = 0 } ^ { \infty } d _ {n} ^ { i } ( x - x_0 ) ^ { i },
\end{eqnarray}
where $c _ {n} ^ { i }$ and $d _ {n} ^ { i }$ are the Taylor coefficients. Using these series in Eqs.~\ref{eq:lksk} gives the following recursion relations for the coefficients:
\begin{eqnarray}
c _ {n} ^ { i } &=& ( i + 1 ) c _ {n-1} ^ { i + 1 } + d _ {n-1} ^ { i } + \sum _ { j = 0 } ^ { i } c _ { 0 } ^ { j } c _ {n-1} ^ { i - j },\\
d _ {n} ^ { i } &=& ( i + 1 ) d _ {n-1} ^ { i + 1 } + \sum _ { j = 0 } ^ { i } d _ { 0 } ^ { j } c _ {n-1} ^ { i - j }.
\end{eqnarray}
The quantization condition in Eq. \ref{eq:quantization} can be rewritten with these new recursion relations in the following form,
\begin{equation}
\label{eq:newquantization}
d_ {n} ^ { 0 } c_{n-1} ^ { 0 } - d _ {n-1} ^ { 0 } c _ {n} ^ { 0 } = 0
\end{equation}
and, apart from the first iteration, there is no need to take derivatives when using the new condition. Therefore, only the derivatives of $\lambda_0$ and $s_0$ are taken, to obtain the $c _ {0} ^ { i }$ and $d _ {0} ^ { i }$ coefficients which are necessary to start the new recursion relation calculations.
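These recursion relations are simple enough to sketch in a few lines of plain Python. The snippet below (an illustrative sketch with exact rational arithmetic, not the actual \emph{AIMpy} implementation) iterates the coefficient recursions and evaluates the condition of Eq. (\ref{eq:newquantization}) for the harmonic-oscillator test case $\lambda_0=2x$, $s_0=1-E$ expanded about $x_0=0$, an exactly solvable check with eigenvalues $E=1,3,5,\dots$:

```python
from fractions import Fraction

def improved_aim_delta(c0, d0, n_iter, terms=40):
    """Iterate the improved-AIM coefficient recursions n_iter >= 1 times and
    return the quantization condition d_n^0 c_{n-1}^0 - d_{n-1}^0 c_n^0."""
    pad = lambda a: list(a) + [Fraction(0)] * (terms - len(a))
    c0, d0 = pad(c0), pad(d0)
    cp, dp = c0[:], d0[:]                    # c_{n-1}^i, d_{n-1}^i
    for _ in range(n_iter):
        cn = [(i + 1) * (cp[i + 1] if i + 1 < terms else 0) + dp[i]
              + sum(c0[j] * cp[i - j] for j in range(i + 1))
              for i in range(terms)]
        dn = [(i + 1) * (dp[i + 1] if i + 1 < terms else 0)
              + sum(d0[j] * cp[i - j] for j in range(i + 1))
              for i in range(terms)]
        delta = dn[0] * cp[0] - dp[0] * cn[0]
        cp, dp = cn, dn
    return delta

# lambda_0 = 2x -> c_0 = [0, 2];  s_0 = 1 - E -> d_0 = [1 - E].
delta_ho = lambda E, n: improved_aim_delta([Fraction(0), Fraction(2)],
                                           [1 - Fraction(E)], n)
print([delta_ho(E, 4) == 0 for E in (1, 3, 5, 7)])  # -> [True, True, True, True]
print(delta_ho(2, 4) != 0)                          # -> True
```

Note that no derivatives appear after $c_0^i$ and $d_0^i$ are fixed; every iteration is just an array update, which is precisely the gain of the improved method.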
\section{The Code}
\label{sec:thecode}
\emph{AIMpy} \cite{aimpy} is designed in an open-source environment (the Python language), and I believe the open-source \cite{opensource} philosophy is one of the driving forces of the progress of science. Therefore, \emph{AIMpy} \cite{aimpy} is an open-source code with a GPL \cite{GPL} license, which allows anyone to modify it and develop on top of it.
It is designed in two parts: one includes all the necessary libraries and Python functions, while the other includes only the AIM calculations. The first part is a Python program called \emph{`asymptotic.py'}; the second is a Jupyter notebook \cite{Jupyter}, which can be given any name, although the form \emph{`AIMpy\_[name of the problem].ipynb'} is preferred for the examples of this paper. The code, the examples, and the installation and usage manuals can be found at https://github.com/mkarakoc/aim.
\emph{`asymptotic.py'} uses the Python libraries IPython \cite{IPython}, \emph{SymPy} (sympy) \cite{SymPy}, \emph{symengine} \cite{SymEngine} and Python-FLINT (flint) \cite{PythonFlint}.
IPython is used to display mathematical terms in a user-friendly way. For example, a parameter named `beta' will be shown in the outputs as $\beta$.
\emph{SymPy} and \emph{symengine} are both used for symbolic calculations, mainly for the derivatives of $\lambda_0$ and $s_0$. Both can handle arbitrary-precision numbers, which makes them the right tools for AIM, since arbitrary-precision (or high-precision) calculation is one of the key points of AIM. Both libraries are capable of taking derivatives, but \emph{symengine} is significantly faster than \emph{SymPy}. Therefore, \emph{symengine} is preferred for the derivatives, while \emph{SymPy} is mostly used to convert symbolic variables into \LaTeX~so that the outputs can be shown in a user-friendly way with the IPython tools.
Once the $c _ {k} ^ { i }$ and $d _ {k} ^ { i }$ coefficients are obtained using these two libraries, the quantization condition in Eq. \ref{eq:newquantization} is obtained in a polynomial form depending on the parameters $x$ (equation variable) and $E$ (eigenvalue variable). The roots of this polynomial are calculated using \emph{Python-FLINT}, or shortly flint. As mentioned on its reference page \cite{PythonFlint}, \emph{Python-FLINT} is a \emph{`Python extension module wrapping FLINT (Fast Library for Number Theory) and Arb (arbitrary-precision ball arithmetic)'}. The roots can have arbitrary precision, since both \emph{FLINT} \cite{flint} and \emph{Arb} \cite{arb} are arbitrary-precision libraries. Therefore, the eigenvalues calculated with \emph{AIMpy} \cite{aimpy} can be claimed to be very reliable.
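For readers without \emph{Python-FLINT}, the root-finding step can be mimicked, at ordinary double precision only and without Arb's certified arbitrary precision, by a simple scan-and-bisect search over the real eigenvalue axis. The sketch below is purely illustrative and uses a toy cubic in place of an actual quantization polynomial:

```python
def find_real_roots(f, a, b, n=600, tol=1e-12):
    """Locate real roots of f on [a, b]: scan a grid for sign changes, then bisect."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    roots = []
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        if f0 == 0:                      # exact hit on a grid point
            roots.append(x0)
        elif f0 * f1 < 0:                # bracketed root: bisect
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    if f(b) == 0:                        # right endpoint
        roots.append(b)
    return roots

# Toy stand-in for a quantization condition with eigenvalues 1, 3 and 5:
delta = lambda E: (E - 1) * (E - 3) * (E - 5)
print([round(r, 8) for r in find_real_roots(delta, 0.0, 6.0)])  # -> [1.0, 3.0, 5.0]
```

A certified arbitrary-precision solver such as Arb remains preferable for the high iteration orders used by \emph{AIMpy}, where double precision can lose accuracy.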
The second part of the code is in a Jupyter notebook \cite{Jupyter}, which is an environment that provides a user-friendly interface and lets the user focus on the physics problem of interest rather than on the code itself.
\section{Examples}
\label{sec:examples}
The problems solvable by AIM can be classified as analytically and numerically solvable ones; the present version of \emph{AIMpy} does not yet have an analytical solver. Therefore, the examples below are selected from the literature \cite{Karakoc2006_IJMPE,Bayrak2007_IJQC,Yasuk_2008,Cho2010_CQG} to present \emph{AIMpy} on numerical applications of AIM. The full details of the examples are not presented in this study; the details of the equations, the potentials, and the definitions of their parameters can be found in the reference papers, since these examples are meant to confirm that \emph{AIMpy} reproduces the results of the earlier studies very well.
The first example is given in a more detailed fashion to show what the \emph{AIMpy} code looks like and how it is used to solve eigenvalue problems. The other examples are kept more compact, since they only serve as proof of the success of \emph{AIMpy}.
\subsection{The Yukawa Potential}
\label{sec:yukawa}
\begin{equation}
\label{eq:Yukawa}
V(r)=-\frac{A}{r} \exp (-\alpha r)
\end{equation}
The first example is the solution of the radial Schr\"{o}dinger equation (Eq.~\ref{eq:RadSch}) for a Yukawa-type potential (Eq.~\ref{eq:Yukawa}), which was studied in Ref. \cite{Karakoc2006_IJMPE} using standard AIM through the unpublished Mathematica code written by the author. The eigenvalue solution of the system is given below as it appears in a Jupyter notebook; it also gives an idea of how the other examples have been solved with \emph{AIMpy} \cite{aimpy}.
\begin{equation}
\label{eq:RadSch}
\frac{d^{2} R_{n}(r)}{d r^{2}}+\frac{2 m}{\hbar^{2}}\left(E_{nL}-\frac{L(L+1) \hbar^{2}}{2 m r^{2}} - V(r)\right) R_{n}(r)=0
\end{equation}
\begin{lstlisting}
# Python program to use AIM tools
from asymptotic import *
# symengine (symbolic) variables
# for lambda_0 and s_0
En,m,hbar,L,r,r0 = se.symbols("En,m,hbar,L,r,r0")
beta,alpha,A = se.symbols("beta,alpha,A")
\end{lstlisting}
In the first step, one should import \emph{``asymptotic.py"} and then define the necessary parameters as symbolic variables of \emph{symengine}. The next step is writing $\lambda_0$ and $s_0$ (Eq. \ref{eq:yukawa_l0s0}) in terms of these variables, as seen below. It should be noted here that $\lambda_0$ and $s_0$ must be obtained by the user, as explained in Section \ref{sec:aim}.
\begin{eqnarray}
\label{eq:yukawa_l0s0}
\lambda_0 &=& 2 \beta - \frac{2}{r} \nonumber \\
s_0 &=& - \frac{2m }{\hbar^{2}}\left(E_{nL} -\frac{A e^{- \alpha r}}{r}\right) + \frac{L(L+1)}{r^{2}} + \frac{2 \beta}{r} - \beta^{2}
\end{eqnarray}
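For completeness, these expressions can be checked directly: writing the ansatz of Eq. (\ref{eq:yukawa_wave}) as $R_n(r)=g(r)f(r)$ with $g(r)=r e^{-\beta r}$, one finds
\begin{eqnarray*}
\frac{g'}{g}=\frac{1}{r}-\beta
&\Rightarrow& \lambda_0=-\,2\,\frac{g'}{g}=2\beta-\frac{2}{r},\\
\frac{g''}{g}=\beta^{2}-\frac{2\beta}{r}
&\Rightarrow& -\frac{g''}{g}=\frac{2\beta}{r}-\beta^{2},
\end{eqnarray*}
which reproduces the $2\beta-2/r$ term of $\lambda_0$ and the $2\beta/r-\beta^{2}$ terms of $s_0$ in Eq. (\ref{eq:yukawa_l0s0}).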
\begin{lstlisting}
# 1st step:
# lambda_0 and s_0
l0 = 2*beta - 2/r
s0 = -2*m/hbar**2*(En-A*se.exp(-alpha*r)/r) + L*(L+1)/r**2 + (2*beta)/r - beta**2
\end{lstlisting}
The user should be aware that all numbers in l0~$\equiv\lambda_0$ and s0~$\equiv s_0$ are kept in infinite precision until the iterative calculation process starts. Putting the numerical values of the symbolic parameters directly into l0 and s0 should be avoided; in this way, the symbolic forms of l0 and s0 remain available later on if calculations for different numerical values of the parameters are needed.
\begin{lstlisting}
# 2nd step:
## Case: A = 4, L=0
# values of variables
nA = o*4
nL = o* 0
nalpha = o* 2/10
nhbar, nm = o* 1, o*1/2
nbeta = o* 3
nr0 = o* 1/nbeta
# parameters of lambda_0 (pl0) and s_0 (ps0)
pl0 = {beta: nbeta}
ps0 = {beta: nbeta, alpha: nalpha, A:nA, m: nm, L: nL, hbar: nhbar, r0: nr0}
\end{lstlisting}
The next step is the preparation of the numerical values of the parameters.
The variable names that start with \emph{``n"} hold the numerical values. These values are connected with the symbolic variables through pl0 and ps0, which are Python dictionaries; the benefit of this usage is that l0 and s0 are still kept in their symbolic forms. As mentioned earlier, all numerical values are kept in infinite precision; to ensure this, an object named \emph{``o"} is created. Its value is {\bf 1} with infinite precision, and it has a \emph{symengine} integer type. For example, \emph{o * 1/2} guarantees that \emph{1/2} always stays as an exact rational until its precision is changed to a finite number.
The parameters $A, L, \alpha, \hbar, m$ are self-explanatory, and their numerical values are taken from Ref. \cite{Karakoc2006_IJMPE}. The $\beta$ parameter comes from the asymptotic form of the proposed wave function (Ref. \cite{Karakoc2006_IJMPE}) in Eq. \ref{eq:yukawa_wave}. Its value is an arbitrary number, usually chosen so that the iterative process converges to the eigenvalues as early as possible. $r_0 = 1/\beta$ is chosen as the maximum of the asymptotic part of the wave function, as suggested in Ref. \cite{Karakoc2006_IJMPE}.
\begin{equation}
\label{eq:yukawa_wave}
R_{n}(r)=r \exp (-\beta r) f(r)
\end{equation}
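The role of the infinite-precision object \emph{``o"} can be illustrated with the standard library's \emph{fractions} module (an analogy only; \emph{AIMpy} itself uses a \emph{symengine} integer):

```python
from fractions import Fraction

o = Fraction(1)        # plays the role of AIMpy's symengine integer "o"

exact = o * 1 / 2      # stays an exact rational
naive = 1 / 2          # becomes a binary float immediately

print(type(exact).__name__, exact)   # -> Fraction 1/2
print(o * 1 / 3 == Fraction(1, 3))   # -> True (exact)
print(1 / 3 == Fraction(1, 3))       # -> False (the float 0.333... is not exact)
```

Multiplying by the exact unit first keeps the whole expression rational, which is exactly why the listing writes values such as \emph{o* 1/nbeta} instead of plain floating-point divisions.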
\begin{lstlisting}
# 3rd step:
# pass lambda_0, s_0 and variable values to aim class
yukawa_A4L0 = aim(l0, s0, pl0, ps0)
yukawa_A4L0.display_parameters()
yukawa_A4L0.display_l0s0(0)
yukawa_A4L0.parameters(En, r, nr0, nmax=201, nstep=10, dprec=500, tol=1e-101)
\end{lstlisting}
The next step is to create a \emph{Python object} using a \emph{Python class} named \emph{aim}. This class takes four inputs: l0 and s0 are the symbolic representations of $\lambda_0$ and $s_0$ in the Python language, while pl0 and ps0 are Python dictionaries containing the numerical values for l0 and s0, respectively. The numerical values are passed to l0 and s0 inside the \emph{aim} class. In this step, the Python object created by \emph{aim} is assigned to the variable \emph{``yukawa\_A4L0"}. This name can be any arbitrary name that obeys the Python naming rules; in this example, \emph{``yukawa"} stands for the potential, and the rest of the name indicates which case is being studied. The \emph{``yukawa\_A4L0.display\_parameters()"} and \emph{``yukawa\_A4L0.display\_l0s0(0)"} lines are not necessary for the solution; they show the numerical values of the parameters and of $\lambda_0$ and $s_0$ in a much more human-readable representation. The last line contains the names of the important variables, whose definitions are given in Table \ref{tab:var1}.
\begin{table}[!hbtp]
\begin{center}
\caption{The variable names can be any valid Python names, but they can also be chosen to resemble the variables of the differential equation of interest.}
\label{tab:var1}
\begin{tabular}{|l|l|}
\hline
Variable & Definitions\\
Name& \\
\hline
En & eigenvalue of the differential equation\\
r & the differential equation variable which is used for derivatives\\
nr0 & a particular value for r, usually minimum of the potential or \\
& the maximum of the asymptotic part of the wave-function \\
nmax& the maximum iteration number\\
nstep& skip iterations according to the value given to nstep\\
dprec& the finite precision of all numerical values\\
tol & tolerance for convergence to eigenvalues \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{lstlisting}
# 4th step:
# create coefficients for improved AIM
yukawa_A4L0.c0()
yukawa_A4L0.d0()
yukawa_A4L0.cndn()
\end{lstlisting}
Next come the calculations of the Taylor series coefficients, according to the improved version of AIM (Ref. \cite{Cho2010_CQG}). $c _ {0} ^ { i }$ and $d _ {0} ^ { i }$ are calculated with the lines \emph{``yukawa\_A4L0.c0()"} and \emph{``yukawa\_A4L0.d0()"}, respectively. $c _ {n} ^ { i }$ and $d _ {n} ^ { i }$ (where $n = 1, 2, 3, \ldots$) are calculated with the last line (\emph{``yukawa\_A4L0.cndn()"}).
The last step is the following code line. As mentioned earlier, the \emph{``Arb"} library is used through \emph{``Python-FLINT"} to obtain the eigenvalues; hence, the corresponding method is named \emph{``get\_arb\_roots"}. In this example, the eigenvalues are real negative numbers whose fractional part is printed with 20 digits. There might be other roots, real positive or complex, but they are filtered out with \emph{``showRoots='-r'"}. The size of the fractional part is defined with \emph{``printFormat"}; \emph{.20f} in the example stands for 20 digits.
\begin{lstlisting}
# 5th step:
yukawa_A4L0.get_arb_roots(showRoots='-r', printFormat="{:25.20f}")
\end{lstlisting}
Finally, the results of the calculation are given below. The outputs show the fast and successful convergence of the iterations. In the earlier study to which I contributed \cite{Karakoc2006_IJMPE}, fewer fractional digits were presented, which was enough for the goal of that study (see Table \ref{tab:Yukawa}); the present study aims to show how accurate \emph{AIMpy} \cite{aimpy} is.
\lstset{
basicstyle=\footnotesize,
backgroundcolor=\color{bg},
commentstyle=\color{mygreen},
stringstyle=\color{mymauve},
keywordstyle=\color{blue},
breaklines=false,
language=Python,
frame=single
}
\begin{lstlisting}
# 6th (last) step:
iteration E00 E10 E20
001 -2.22608382037941285277
021 -3.25646424490443058235 -0.39503930505322610330
041 -3.25646424490722525404 -0.39942480823044205382 -0.00752426627166027709
061 -3.25646424490722525404 -0.39942617037004934161 -0.02379348447266779420
081 -3.25646424490722525404 -0.39942617065529516410 -0.02544763075010264581
101 -3.25646424490722525404 -0.39942617065535110306 -0.02563879250512245213
121 -3.25646424490722525404 -0.39942617065535111388 -0.02566135365381434051
141 -3.25646424490722525404 -0.39942617065535111388 -0.02566401769819825872
161 -3.25646424490722525404 -0.39942617065535111388 -0.02566433195250032477
181 -3.25646424490722525404 -0.39942617065535111388 -0.02566436900553236619
201 -3.25646424490722525404 -0.39942617065535111388 -0.02566437337375074847
CPU times: user 21.1 s, sys: 24.8 ms, total: 21.2 s, Wall time: 21.2 s
\end{lstlisting}
\begin{table}[!htbp]
\caption{The energy eigenvalues ($E_{nL}$) of the Yukawa potential (Eq. \ref{eq:Yukawa}) where $\hbar=2m=1$, $\alpha=0.2~fm^{-1}$, $\beta=3$ and $n=0$ for all $L$ values. The results of Refs. \cite{Gonul2006_PS,Chakrabarti2001_PLA} are given only to show the results of other methods; the results of Ref. \cite{Karakoc2006_IJMPE} are the ones with which the present study should really be compared.}
\label{tab:Yukawa}
\begin{center}
\begin{tabular}{rrrrrrr}
\hline
$A$ &$L$& Present study ($E_{nL}$) & AIM\cite{Karakoc2006_IJMPE} & SUSY \cite{Gonul2006_PS}& Numerical\cite{Chakrabarti2001_PLA} & Analytical\cite{Chakrabarti2001_PLA}\\
\hline \vspace{4pt}
4 & 0 & -3.25646424490722525404 & -3.256464 & -3.2563 & -3.2565 & -3.2199 \\
8 & 0 & -14.45812571278417740340 & -14.458126 & -14.4581 & -14.4571 & -14.4199 \\ \vspace{4pt}
& 1 & -2.58369238520910751079 & -2.583692 & -2.5830 & -2.5836 & -2.4332 \\
16 & 0 & -60.85903282302551371170 & -60.859033 & -60.8590 & -60.8590 & -60.8193 \\ \vspace{4pt}
& 1 & -12.99103533706039481539 & -12.991035 & -12.9908 & -12.9910 & -12.8375 \\
24 & 0 & -139.25934814287272696744 &-139.259348 & -139.2590 & -139.2594 & -139.2201 \\
& 1 & -31.39381360113816006395 & -31.393814 & -31.3937 & -31.3938 & -31.2385 \\
& 2 & -11.59594912057730331414 & -11.595949 & -11.5951 & -11.5959 & -11.2456 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{The Exponential Cosine Screened Coulomb (ECSC) Potential}
\label{sec:exppot}
\begin{equation}
\label{eq:ecsc}
V(r)=-\frac{A}{r} e^{-\delta r} \cos (\delta r)
\end{equation}
The ECSC potential in Eq. \ref{eq:ecsc} for the radial Schr\"{o}dinger equation (Eq.~\ref{eq:RadSch}) has been analyzed in Refs. \cite{Bayrak2007_IJQC,Ikhdair1993_ZPD,Meyer1985_JPA,Ikhdair2007_JMC} with different solution methods. Ref. \cite{Bayrak2007_IJQC} is followed here to redo the calculations with AIMpy \cite{aimpy}; the derivation of $\lambda_0$ and $s_0$ (Eqs. \ref{eq:ecsc_l0s0}) can therefore be found in Bayrak \textit{et~al}. \cite{Bayrak2007_IJQC}:
\begin{eqnarray}
\label{eq:ecsc_l0s0}
\lambda_0 &=& 2 \beta - \frac{2 \left(L + 1\right)}{r},\\
s_0 &=& - \frac{2m }{\hbar^{2}}E_{nL} - \beta^{2} + \frac{2 \beta (L+1)}{r} -\frac{A_{1}}{r}+ A_{2} - A_{3} r^{2} + A_{4} r^{3} - A_{5} r^{4} + A_{6} r^{6}, \nonumber
\end{eqnarray}
where,
\begin{equation}
A_{1}=\frac{2 m}{\hbar^{2}} A, \quad
A_{2}=A_{1} \delta, \quad
A_{3}=\frac{A_{1}}{3}\delta^{3}, \quad
A_{4}=\frac{A_{1}}{6}\delta^{4}, \quad
A_{5}=\frac{A_{1}}{30}\delta^{5}, \quad
A_{6}=\frac{A_{1}}{630}\delta^{7}
\end{equation}
These constants arise from the series expansion of the ECSC potential around $r=0$.
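As a quick sanity check (assuming SymPy is available; this is independent of AIMpy itself), the constants $A_2,\ldots,A_6$ are, up to the overall factor $A_1=2mA/\hbar^2$, the Taylor coefficients of $e^{-x}\cos x$:

```python
# Sanity check of the A_i coefficients (assumes SymPy; independent of AIMpy):
# V(r) = -(A/r) exp(-delta*r) cos(delta*r), so up to the overall factor
# A_1 = 2mA/hbar^2 the A_i are Taylor coefficients of exp(-x)*cos(x).
import sympy as sp

x = sp.symbols('x')
expansion = sp.series(sp.exp(-x) * sp.cos(x), x, 0, 8).removeO()

# Coefficients read off from A_2..A_6 above (note the vanishing x^2 and x^6 terms):
expected = 1 - x + x**3/3 - x**4/6 + x**5/30 - x**7/630
assert sp.expand(expansion - expected) == 0
```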
The results of the present study using AIMpy \cite{aimpy} are presented in Table \ref{tab:ExpCos}. The results of the reference paper, Ref. \cite{Bayrak2007_IJQC}, are not repeated here, since the present results are almost identical to them, differing only in the last one or two digits of the fractional parts of the eigenvalues.
\begin{table}[!htbp]
\caption{The energy eigenvalues ($E_{nL}$) of the ECSC potential (Eq. \ref{eq:ecsc}) where $A, \hbar=m=1$, $\beta=6/10$. Compare the results with Ref. \cite{Bayrak2007_IJQC}.}
\label{tab:ExpCos}
\begin{center}
\begin{tabular}{cccccc}
\hline
$\delta$ & $E_{nL}$ & $E_{1L}$ & $E_{2L}$ & $E_{3L}$ & $E_{4L}$ \\
\hline
0.01 & $s$ & -0.49000099 & -0.11501346 & -0.04561908 & -0.02143746 \\
& $p$ & & -0.11500966 & -0.04561104 & -0.02142437 \\
& $d$ & & & -0.04559484 & -0.02139798 \\ \vspace{4pt}
& $f$ & & & & -0.02135784 \\
0.02 & $s$ & -0.48000780 & -0.10510359 & -0.03602510 & -0.01257152 \\
& $p$ & & -0.10507464 & -0.03596760 & -0.01248554 \\
& $d$ & & & -0.03585066 & -0.01231013 \\ \vspace{4pt}
& $f$ & & & & -0.01203814 \\
0.06 & $s$ & -0.44020051 & -0.06742086 & -0.00534922 & \\
& $p$ & & -0.06677740 & -0.00438279 & \\ \vspace{4pt}
& $d$ & & & -0.00226187 & \\
0.10 & $s$ & -0.40088477 & -0.03491583 & & \\
& $p$ & & -0.03245501 & & \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{The Sextic Oscillator Potential}
\label{sec:sectpot}
The following homogeneous second-order differential equation for the DKP anharmonic (sextic) oscillator:
\begin{equation}
\left[\frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}}-\frac{J(J+1)}{r^{2}}+\frac{3 m w}{\hbar}-\left(\frac{m^{2} w^{2}}{\hbar^{2}}+\frac{5 q}{\hbar}\right) r^{2}
+\frac{2 m w q}{\hbar^{2}} r^{4}-\frac{q^{2}}{\hbar^{2}} r^{6} \right] F(r)=\frac{1}{\hbar^{2} c^{2}}\left(m^{2} c^{4}-E^{2}\right) F(r),
\end{equation}
can be obtained when the process in Ref. \cite{Yasuk_2008} is followed. This second-order differential equation has a mathematical form very similar to that of the radial Schr\"{o}dinger equation (Eq.~\ref{eq:RadSch}). Therefore, following Ref. \cite{Yasuk_2008} and proposing an asymptotic form for the function $F(r)$, it is straightforward to convert it into an equation solvable by AIM through AIMpy \cite{aimpy}. In this way one obtains the following $\lambda_0$ and $s_0$:
\begin{eqnarray}
\lambda_0 &=& 2 \beta_{1} + 4 \beta_{2} r - \frac{2}{r},\\
s_{0}&=&-\beta_{1}^{2}+6 \beta_{2}+\frac{2 \beta_{1}}{r}-4 \beta_{1} \beta_{2} r-4 \beta_{2}^{2} r^{2}-\frac{1}{\hbar^{2} c^{2}}\left(E_{nJ}^{2}+A_{0}-\frac{A_{1}}{r^{2}}-A_{2} r^{2}+A_{3} r^{4}-A_{4} r^{6}\right), \nonumber
\end{eqnarray}
where
\begin{equation}
A_{0}=m c^{2}\left(3 \hbar w-m c^{2}\right), \quad
A_{1}=\hbar^{2} c^{2} J(J+1), \quad
A_{2}=c^{2}\left(m^{2} w^{2}+5 \hbar q\right), \quad
A_{3}=2 m c^{2} q w, \quad
A_{4}=q^{2} c^{2}.
\end{equation}
The eigenvalues obtained in the present study are presented in Table \ref{tab:Sectic}; they are exactly the same as the ones in the reference paper \cite{Yasuk_2008}.
\begin{table}[!htbp]
\caption{Ground and excited state energies of the sextic oscillator where $\hbar = c = m = 1$ and $q = w = 0.1$. Compare the results with Ref. \cite{Yasuk_2008}.}
\label{tab:Sectic}
\begin{center}
\begin{tabular}{ccccc}
\hline
$E_{nJ}$ & $n=0$ & $n=1$ & $n=2$ & $n=3$ \\
\hline
$s$ & 1.72356712431419 & 2.59853197666838 & 3.39500190327295 & 4.14085218155052 \\
$p$ & 2.15692634351980 & 2.98966248683064 & 3.76218401610683 & 4.49009342299632 \\
$d$ & 2.54665944322791 & 3.35618211593500 & 4.11089275949862 & 4.82457020376795 \\
$f$ & 2.91026031614511 & 3.70392637878655 & 4.44477489149058 & 5.14682205155551 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Black Hole Quasinormal Modes}
\label{sec:blackhole}
In this example, the results of the reference paper, Ref. \cite{Cho2010_CQG}, for black hole quasinormal modes (QNMs) have been reproduced. The improved AIM, which is one of the important components of AIMpy \cite{aimpy}, is also presented by Cho \textit{et~al}. \cite{Cho2010_CQG} in the same paper. Cho \textit{et~al}. \cite{Cho2010_CQG} focus on a field equation of the form
\begin{equation}
\frac{\mathrm{d}^{2} \psi(x)}{\mathrm{d} x^{2}}+\left[\omega^{2}-V(x)\right] \psi(x)=0,
\end{equation}
where $V(x)$ is a master potential of the form \cite{Cho2010_CQG,Berti2009_CQG}
\begin{equation}
V(r)=f(r)\left[\frac{\ell(\ell+1)}{r^{2}}+\left(1-s^{2}\right)\left(\frac{2 M}{r^{3}}-\frac{\left(4-s^{2}\right) \Lambda}{6}\right)\right],
\end{equation}
and $\mathrm{d} x=\mathrm{d} r / f(r)$ where
\begin{equation}
f(r)=1-\frac{2 M}{r}-\frac{\Lambda}{3} r^{2}.
\end{equation}
$\lambda_0(\xi)$ and $s_0(\xi)$ can be obtained with the $\xi=1 / r$ variable change \cite{Cho2010_CQG,Moss2002_CQG} and following the process in Cho~\textit{et~al}.'s~\cite{Cho2010_CQG} study as
\begin{eqnarray}
\lambda_0(\xi) &=& -{1\over p} \left[ p' - {2 i\omega_{n\ell} \over \kappa_1(\xi - \xi_1)} -2 i \omega_{n\ell} \right], \\
s_0(\xi) &=& {1\over p} \left[ \ell(\ell +1) +
(1-s^2) \left( 2M\xi - (4-s^2){\Lambda \over 6\xi^2} \right) +
{i\omega_{n\ell} \over \kappa_1(\xi - \xi_1)^2} \left( {i\omega_{n\ell} \over \kappa_1} +1 \right) + (p' - 2i\omega_{n\ell}) {i\omega_{n\ell} \over \kappa_1(\xi - \xi_1)} \right]. \nonumber
\end{eqnarray}
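In these expressions, $\xi_1=1/r_1$ where $r_1$ is the black hole event horizon, i.e. the smallest positive root of $f(r)=0$, and $\kappa_1=f'(r_1)/2$ is the corresponding surface gravity \cite{Cho2010_CQG}. As a small illustration (assuming SymPy; the values $M=1$ and $\Lambda=0.02$ below are sample inputs chosen for illustration), the horizon and surface gravity can be located numerically:

```python
# Sketch (assumes SymPy): locate the event horizon r_1 (hence xi_1 = 1/r_1)
# and the surface gravity kappa_1 = f'(r_1)/2 for sample values M = 1,
# Lambda = 0.02 (illustrative inputs, not tied to a table normalization).
import sympy as sp

r = sp.symbols('r', positive=True)
M, Lam = sp.Integer(1), sp.Rational(1, 50)
f = 1 - 2*M/r - Lam*r**2/3

# Horizons are the positive roots of the cubic r*f(r) = 0.
poly = sp.Poly(sp.expand(r*f), r)
horizons = sorted(ro for ro in poly.nroots() if ro.is_real and ro > 0)
r1 = horizons[0]          # event horizon (the cosmological horizon is larger)
xi1 = 1/r1
kappa1 = sp.diff(f, r).subs(r, r1)/2
print(float(xi1), float(kappa1))
```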
The QNMs ($\omega_{n\ell}$) presented in Table \ref{tab:Blackhole} belong to the present study; as with the previous examples, the results are almost identical to those of the reference paper, Ref. \cite{Cho2010_CQG}.
\begin{table}[!htbp]
\caption{All QNMs ($\omega_{n\ell}$) with $s=2$ in this table obtained for $\ell=2, 3$ and various $\Lambda$ cosmological constants. Compare the results with Ref. \cite{Cho2010_CQG}.}
\label{tab:Blackhole}
\begin{center}
\begin{tabular}{ccccc}
\hline
$\ell=2$ & $\Lambda$ & n=1 & n=2 & n=3 \\
\hline
& 0.00 & 0.3736717 - 0.0889623i & 0.3467110 - 0.2739149i & 0.3010535 - 0.4782770i \\
& 0.02 & 0.3383914 - 0.0817564i & 0.3187587 - 0.2491966i & 0.2827322 - 0.4294841i \\
& 0.04 & 0.2988947 - 0.0732967i & 0.2858409 - 0.2217241i & 0.2599919 - 0.3770922i \\
& 0.06 & 0.2532892 - 0.0630425i & 0.2457420 - 0.1897910i & 0.2300764 - 0.3191573i \\
& 0.08 & 0.1974823 - 0.0498773i & 0.1941148 - 0.1497866i & 0.1871198 - 0.2502570i \\
& 0.09 & 0.1626104 - 0.0413665i & 0.1607886 - 0.1241522i & 0.1570423 - 0.2071172i \\
& 0.10 & 0.1179164 - 0.0302105i & 0.1172432 - 0.0906409i & 0.1158764 - 0.1511018i \\ \vspace{4pt}
& 0.11 & 0.0372699 - 0.0096157i & 0.0372493 - 0.0288470i & 0.0372081 - 0.0480784i \\
\hline
$\ell=3$ & $\Lambda$ & n=1 & n=2 & n=3 \\
\hline
& 0.00 & 0.5994433 - 0.0927030i & 0.5826438 - 0.2812981i & 0.5516849 - 0.4790928i \\
& 0.02 & 0.5431149 - 0.0844957i & 0.5307443 - 0.2553631i & 0.5070153 - 0.4320588i \\
& 0.04 & 0.4800575 - 0.0751464i & 0.4716583 - 0.2263948i & 0.4550106 - 0.3807731i \\
& 0.06 & 0.4071752 - 0.0641396i & 0.4021706 - 0.1928074i & 0.3920528 - 0.3227693i \\
& 0.08 & 0.3178048 - 0.0503821i & 0.3154946 - 0.1512490i & 0.3108033 - 0.2524505i \\
& 0.09 & 0.2618425 - 0.0416439i & 0.2605716 - 0.1249688i & 0.2579976 - 0.2084119i \\
& 0.10 & 0.1899943 - 0.0303145i & 0.1895170 - 0.0909507i & 0.1885554 - 0.1516089i \\
& 0.11 & 0.0600915 - 0.0096189i & 0.0600766 - 0.0288567i & 0.0600469 - 0.0480945i \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions and outlook}
\label{sec:conclusions}
I have presented the \emph{AIMpy} code, an eigenvalue solver for Schr\"{o}dinger-like differential equations using AIM. \emph{AIMpy} is an open-source \cite{opensource} code using the power of fast and high-precision open-source symbolic and numeric calculation libraries. The example cases given in the paper prove the reliability of the code. Some more features can be added to the code in later versions: eigenfunction calculations, an analytical solver, direct input to AIMpy of any differential equation solvable by AIM to obtain $\lambda_0$ and $s_0$, \emph{etc.} As a last word, I would like to encourage the community to contribute, or to write their own versions of the code, by forking the repository on GitHub \cite{aimpy}.
\bibliographystyle{plain}
\section{Introduction}
\subsubsection*{Context} Being able to predict the impact of a new infrastructure on the traffic in a transportation network is an old but still important objective for transport planners. In 1952, \cite{Wa52} noted that after some while the traffic arranges itself to form an equilibrium and formalized principles characterizing this equilibrium. With the terminology of game theory, the equilibrium is a Nash equilibrium for a congestion game with nonatomic users. In 1956, \cite{Be56} translated these principles as a mathematical program which turned out to be convex, opening the door to the tools from convex optimization. The currently most commonly used algorithm for such convex programs is probably the Frank-Wolfe algorithm~\citep{FW56}, because of its simplicity and its efficiency, but many other algorithms with excellent behaviors have been proposed, designed, and experimented.
One of the main assumptions used by Beckmann to derive his program is the fact that all users are equally impacted by the congestion. With the transportation terminology, it means that there is only one {\em class}. In order to improve the prediction of traffic patterns, researchers started in the 70s to study the {\em multiclass} situation where each class has its own way of being impacted by the congestion. Each class models a distinct mode of transportation, such as cars, trucks, or motorbikes. \cite{Da72,Da80} and \cite{Sm79} are probably the first who proposed a mathematical formulation of the equilibrium problem in the multiclass case. However, even if this problem has been the topic of many research works, an efficient algorithm for solving it is not known, except in some special cases~\citep{F77,H88,MM88,MW04}. In particular, there are no general algorithms in the literature for solving the problem when the cost of each arc is in an affine dependence with the flow on it.
Our purpose is to discuss the existence of such algorithms.
\subsubsection*{Model}
We are given a directed graph $D=(V,A)$ modeling the transportation network. The set of all paths (resp. $s$-$t$ paths) is denoted by $\mathcal{P}$ (resp. $\mathcal{P}_{(s,t)}$). The population of {\em players} is modeled as a bounded real interval $I$ endowed with the Lebesgue measure $\lambda$, the {\em population measure}. The set $I$ is partitioned into a finite number of measurable subsets $(I^k)_{k\in K}$ -- the {\em classes} -- modeling the players with same characteristics: they share a same collection of cost functions $(c_a^k:\mathbb{R}_+\rightarrow\mathbb{R}_+)_{a\in A}$, a same origin $s^k$, and a same destination $t^k$. A player in $I^k$ is said to be of {\em class $k$}. We define $V^k$ (resp. $A^k$) to be the set of vertices (resp. arcs) reachable from $s^k$ in $D$.
A {\em strategy profile} is a measurable mapping $\sigma:I\rightarrow\mathcal{P}$ such that $\sigma(i)\in\mathcal{P}_{(s^k,t^k)}$ for all $k\in K$ and all $i\in I^k$. We denote by $x_a^k$ the number of class $k$ players $i$ such that $a$ is in $\sigma(i)$: $$x_a^k=\lambda\{i\in I^k:\,a\in\sigma(i)\}.$$ The vector $\boldsymbol{x}^k=(x_a^k)_{a\in A^k}$ is an $s^k$-$t^k$ flow of value $\lambda(I^k)$: for each $v\in V^k\setminus\{s^k,t^k\}$, we have $$\sum_{a\in\delta^+(v)}x_a^k=\sum_{a\in\delta^-(v)}x_a^k$$ and $$\sum_{a\in\delta^+(s^k)}x_a^k-\sum_{a\in\delta^-(s^k)}x_a^k=\sum_{a\in\delta^-(t^k)}x_a^k-\sum_{a\in\delta^+(t^k)}x_a^k=\lambda(I^k).$$ The vector $(\boldsymbol{x}^k)_{k\in K}$ is thus a multiflow. The total number of players $i$ such that $a$ is in $\sigma(i)$ is $\sum_{k\in K}x_a^k$ and is denoted $x_a$. We denote by $\boldsymbol{x}$ the vector $(x_a)_{a\in A}$.
The cost of arc $a$ for a class $k$ player is $c_a^k(x_a)$. For a player, the cost of a path $P$ is defined as the sum of the costs of the arcs contained in $P$. Each player wants to select a minimum-cost path.
A strategy profile is a (pure) Nash equilibrium if each path is only chosen by players for whom it is a minimum-cost path. In other words, a strategy profile $\sigma$ is a Nash equilibrium if for each class $k\in K$ and each player $i\in I^k$ we have
$$\sum_{a\in\sigma(i)}c_a^k(x_a)=\min_{P\in\mathcal{P}_{(s^k,t^k)}}\sum_{a\in P}c_a^k(x_a).$$
This game falls into the category of {\em nonatomic congestion games with player-specific cost functions}, see \cite{Mi96}. Under mild conditions on the cost functions, a Nash equilibrium is known to always exist. The original proof of the existence of an equilibrium is due to \cite{Sc70} and uses a fixed point theorem. A proof of this result is also given in \citet{Mi00}, and it can be deduced from more general results \citep{Ra92}.
The problem of finding a Nash equilibrium for such a game is called the {\em Multiclass Network Equilibrium Problem}.
\subsubsection*{Contribution}
Our results concern the case when the cost functions are affine and strictly increasing: for all $k\in K$ and $a\in A^k$, there exist $\alpha_a^k\in\mathbb{Q}_+\setminus\{0\}$ and $\beta_a^k\in\mathbb{Q}_+$ such that $c_a^k(x)=\alpha_a^kx+\beta_a^k$ for all $x\in\mathbb{R}_+$.
First, we prove the existence of a polynomial algorithm solving the Multiclass Network Equilibrium Problem when the number of classes and the number of vertices are fixed. The core idea of the algorithm relies on properties of hyperplane arrangements. A corollary of this theorem is that the {\em parallel-link} case (graph with parallel arcs between two vertices) is polynomially solvable for a fixed number of classes. This special case, even with only two classes, does not seem to have been known before.
Second, we show that there exists a pivoting algorithm solving the problem. This algorithm, inspired by the classical Lemke algorithm solving linear complementarity problems, is reminiscent of the network simplex algorithm that solves the minimum cost flow problem, in the sense that we exploit the presence of a graph to build the pivoting algorithm. The experiments show its efficiency.
On our track, we extend slightly the notion of basis used in linear programming and linear complementarity programming to deal directly with unsigned variables (hence without replacing them by twice their number of signed variables).
To our knowledge, these two algorithms are the first specially designed to solve this problem.
We emphasize that the exact complexity of the problem remains unknown. The fact that it can be modeled as a linear complementarity problem implies that it belongs to the so-called PPAD class. The PPAD class is a class of problems for which an object is sought, here an equilibrium, while being sure that the object exists by an {\em a priori} argument equivalent to the following one: in a graph without isolated vertices and whose vertices all have at most one predecessor and at most one successor, if there is a vertex with at most one neighbor, there is another such vertex. This class was defined by~\cite{P94} and contains complete problems. An example of a PPAD-complete problem is the problem of computing a mixed Nash equilibrium in a bimatrix game (\cite{CDT09}). We do not know whether the Multiclass Network Equilibrium Problem with affine costs is PPAD-complete or not.
\subsubsection*{Related works} The single-class case is polynomially solvable since as soon as the cost functions are nondecreasing, the problem turns out to be a convex optimization problem, see~\cite{Be56}. This case has already been mentioned at the beginning of the introduction.
We are not aware of any algorithm with a controlled complexity for solving the Multiclass Network Equilibrium Problem, even with affine cost functions. There are however some papers proposing practical approaches. In general, the proposed algorithm is a Gauss-Seidel type diagonalization method, which consists in sequentially fixing the flows for all classes but one and solving the resulting single-class problem by methods of convex programming, see \cite{F77,FS82,H88,MM88} for instance. For this method, a condition ensuring the convergence to an equilibrium is not always stated, and, when there is one, it requires that ``the interaction between the various users classes be relatively weak compared to the main effects (the latter translates a requirement that a complicated matrix norm be less than unity)''~\citep{MM88}. Such a condition does clearly not cover the case with affine cost functions. Another approach is proposed by \cite{MW04}. For cost functions satisfying the ``nested monotonicity'' condition -- a notion developed by \cite{CC88} -- they design a descent method for which they are able to prove the convergence to a solution of the problem. However, we were not able to find any paper with an algorithm solving the problem when the costs are polynomial functions, or even affine functions.
We can also mention generalization of the Lemke algorithm -- see for instance \cite{AV11,AEP79,CF96,CPS92,E73,SPS12} -- but none of them is specially designed for solving our problem, nor exploits a graph structure of any kind.
\subsubsection*{Structure of the paper} In Section~\ref{sec:formulation}, we provide mathematical features of the Multiclass Network Equilibrium Problem. In particular, we show with an elementary proof how to write it as a linear complementarity problem.
Section~\ref{sec:poly} is devoted to one of our main results, namely the existence of a polynomial algorithm when the number of vertices and the number of classes are fixed. This section is subdivided into four subsections. The first subsection -- Section~\ref{subsec:main} -- states the result and gives a general description of the algorithm. We provide then a brief introduction to the concept of hyperplane arrangement, which is used in the proofs (Section~\ref{subsec:hyperplan}). The last two subsections (Sections~\ref{subsec:determine} and~\ref{subsec:compute}) are devoted to the two parts of the proof.
Section~\ref{sec:lemke} is devoted to the network Lemke-like algorithm. The first subsection -- Section~\ref{subsec:opt} -- shows how to rewrite the linear complementarity problem formulation as an optimization program. Section~\ref{subsec:tools} presents the notions underlying the algorithm. All these notions, like {\em basis}, {\em secondary ray}, {\em pivot}, and so on, are classical in the context of the Lemke algorithm. They require however to be redefined in order to be able to deal with the features of our optimization program. The algorithm is then described in Section~\ref{subsec:lemke}. Section~\ref{subsec:experiments} is devoted to the experiments and shows the efficiency of the proposed approach.
\begin{remark}
A preliminary version of Section~\ref{sec:lemke} has been presented at the conference WINE 2013 \citep{MP13Lemke}.
\end{remark}
\section{Mathematical properties of the equilibrium}\label{sec:formulation}
Let $\boldsymbol{y} = (y_a)_{a\in A}$ be a flow. We define its {\em support} as the set of arcs with a positive flow: $$\operatorname{supp}(\boldsymbol{y}) = \{ a \in A:\; y_a >0\}.$$ The cost of a minimum-cost $s^k$-$v$ path when the arc costs are given by the $c_a^k(y_a)$'s is denoted $\pi_v^k(\boldsymbol{y})$.
We denote by $\boldsymbol{\pi}^k(\boldsymbol{y})$ the vector $(\pi_v^k(\boldsymbol{y}))_{v\in V^k}$. We define $\operatorname{mincost}^k(\boldsymbol{y})$ to be the set of arcs in $A^k$ that are on some minimum cost paths originating at $s^k$. Formally, we have $$\operatorname{mincost}^k(\boldsymbol{y})=\{a=(u,v)\in A^k:\;\pi_v^k(\boldsymbol{y})-\pi_u^k(\boldsymbol{y})=c_a^k(y_a)\}.$$
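As a concrete illustration, the potentials $\pi_v^k(\boldsymbol{y})$ and the set $\operatorname{mincost}^k(\boldsymbol{y})$ can be computed for a fixed flow $\boldsymbol{y}$ by a standard Bellman-Ford relaxation; the instance below (one class, affine costs) is made up for illustration:

```python
# Made-up instance illustrating pi_v^k(y) and mincost^k(y) for one class
# with affine costs c_a(x) = alpha_a * x + beta_a, at a fixed flow y.
arcs = {('s', 'u'): (1.0, 0.0),   # arc -> (alpha_a, beta_a)
        ('s', 'v'): (1.0, 1.0),
        ('u', 'v'): (1.0, 0.0),
        ('u', 't'): (1.0, 2.0),
        ('v', 't'): (1.0, 0.0)}
y = {a: 1.0 for a in arcs}        # some fixed total flow on each arc

cost = {a: alpha * y[a] + beta for a, (alpha, beta) in arcs.items()}

# Bellman-Ford relaxation: pi[v] = cost of a minimum-cost s-v path.
pi = {v: float('inf') for a in arcs for v in a}
pi['s'] = 0.0
for _ in range(len(pi) - 1):
    for (u, v), c in cost.items():
        pi[v] = min(pi[v], pi[u] + c)

# Arcs on some minimum-cost path from s are exactly the tight arcs.
mincost = {a for a, c in cost.items() if pi[a[1]] - pi[a[0]] == c}
print(pi['t'], sorted(mincost))   # ('u','t') is the only non-tight arc here
```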
\begin{proposition}\label{prop:inclusions}
The multiflow $(\boldsymbol{x}^k)_{k\in K}$ is an equilibrium multiflow if and only if $$\operatorname{supp}(\boldsymbol{x}^k)\subseteq\operatorname{mincost}^k(\boldsymbol{x})\quad\mbox{for all $k\in K$}.$$
\end{proposition}
\begin{proof}
If some class $k$ players use an arc $a$ at equilibrium, it means that this arc is on a minimum-cost $s^k$-$t^k$ path. We have thus the inclusion $\operatorname{supp}(\boldsymbol{x}^k)\subseteq\operatorname{mincost}^k(\boldsymbol{x})$.
Conversely, suppose that $\operatorname{supp}(\boldsymbol{x}^k)\subseteq\operatorname{mincost}^k(\boldsymbol{x})$ for all $k$. Take any $s^k$-$t^k$ path $P$ chosen by a non-negligible amount of class $k$ players. Each arc $a=(u,v)$ in this path is in the support of $\boldsymbol{x}^k$, and thus is such that $\pi_v^k(\boldsymbol{x})-\pi_u^k(\boldsymbol{x})=c_a^k(x_a)$. The cost of the path is therefore $\pi_{t^k}^k(\boldsymbol{x})$, which implies that $P$ is a minimum-cost $s^k$-$t^k$ path. Hence $(\boldsymbol{x}^k)_{k\in K}$ is an equilibrium multiflow.
\end{proof}
With a similar proof, we can get an alternate formulation of the equilibrium. Consider the following system, where $\boldsymbol{b}=(b_v^k)$ is a given vector with $\sum_{v\in V^k}b_v^k=0$ for all $k$.
\begin{equation}\label{pb:MNEP-gen}\tag{$MNEP_{gen}$}
\begin{array}{lr}
\displaystyle{\sum_{a\in \delta^+(v)} x_a^k = \sum_{a\in \delta^-(v)} x_a^k + b_v^k} & v\in V^k, k\in K \\ \\
c^k_{uv}(x_{uv}) + \pi^k_u - \pi^k_v - \mu^k_{uv} = 0 & (u,v)\in A^k, k\in K \\ \\
x_a^k \mu_a^k = 0 & a\in A^k, k\in K \\ \\
\pi_{s^k}^k=0 & k\in K\\ \\
x_a^k \geq 0, \mu_a^k \geq 0, \pi_v^k \in \mathbb{R} & v\in V^k, a \in A^k, k\in K .
\end{array}
\end{equation}
Finding solutions for systems like~\eqref{pb:MNEP-gen} is a {\em complementarity program}, the word ``complementarity'' coming from the condition $x_a^k \mu_a^k = 0$ for all $(a,k)$ such that $a \in A^k$.
\begin{proposition}\label{prop:equilibrium}
Suppose that $b_v^k=0$ for $v\in V^k\setminus\{s^k,t^k\}$, $b_{s^k}^k=\lambda(I^k)$, and $b_{t^k}^k=-\lambda(I^k)$ for all $k$. Then
$(\boldsymbol{x}^k)_{k\in K}$ is an equilibrium multiflow if and only if there exist $\boldsymbol{\mu}^k\in\mathbb{R}_+^{A^k}$ and $\boldsymbol{\pi}^k\in\mathbb{R}^{V^k}$ for all $k$ such that $(\boldsymbol{x}^k,\boldsymbol{\mu}^k,\boldsymbol{\pi}^k)_{k\in K}$ is a solution of the complementarity program~\eqref{pb:MNEP-gen}.
\end{proposition}
\begin{proof}
Let $(\boldsymbol{x}^k)_{k\in K}$ be a multiflow equilibrium. We define $\pi_v^k$ to be $\pi_v^k(\boldsymbol{x})$. Finally, $\mu^k_{uv}$ is defined to be $c^k_{uv}(x_{uv}) + \pi^k_u - \pi^k_v$ for all $k\in K$ and $(u,v)\in A^k$. This solution is a feasible solution of the program~\eqref{pb:MNEP-gen} (using Proposition~\ref{prop:inclusions} to get the complementary conditions).
Conversely, take a feasible solution of the program~\eqref{pb:MNEP-gen}. Let $P$ be any $s^k$-$t^k$ path. We have $\sum_{a\in P}c_a^k(x_a)=\pi_{t^k}^k+\sum_{a\in P}\mu_a^k$. Thus $\sum_{a\in P}c_a^k(x_a)\geq\pi_{t^k}^k$, with equality when the path $P$ is in $\operatorname{supp}(\boldsymbol{x}^k)$. Any $s^k$-$t^k$ path in $\operatorname{supp}(\boldsymbol{x}^k)$ is thus a minimum-cost $s^k$-$t^k$ path. Hence $(\boldsymbol{x}^k)_{k\in K}$ is an equilibrium multiflow.
\end{proof}
When the cost functions are affine $c_a^k(x)=\alpha_a^kx+\beta_a^k$, solving the Multiclass Network Equilibrium Problem amounts thus to solve the following linear complementarity problem
\begin{equation}\tag{$MNEP$}\label{pb:MNEP}
\begin{array}{lr}
\displaystyle{\sum_{a\in \delta^+(v)} x_a^k = \sum_{a\in \delta^-(v)} x_a^k + b_v^k} & v\in V^k, k\in K\\ \\
\displaystyle{\alpha_{uv}^kx_{uv} + \pi^k_u - \pi^k_v - \mu^k_{uv} = - \beta_{uv}^k} & (u,v) \in A^k, k\in K \\ \\
x_a^k \mu_a^k = 0 & a\in A^k, k\in K \\ \\
\pi_{s^k}^k=0 & k\in K\\ \\
x_a^k \geq 0, \mu_a^k \geq 0, \pi_v^k \in \mathbb{R} & v\in V^k, a\in A^k, k\in K
\end{array}
\end{equation}
with $b_v^k=0$ for $v\in V^k\setminus\{s^k,t^k\}$, and $b_{s^k}^k=\lambda(I^k)$ and $b_{t^k}^k=-\lambda(I^k)$ for all $k$.
\section{A polynomial algorithm}\label{sec:poly}
\subsection{The algorithm}\label{subsec:main}
We describe the algorithm solving the Multiclass Network Equilibrium Problem in polynomial time when the number of classes and the number of vertices are fixed.
Let $\AA=\{(S^k)_{k\in K}:\;S^k\subseteq A^k\}$. The algorithm consists in two steps.
\begin{enumerate}
\item It computes a set $\mathcal S \subseteq \AA$ of polynomial size such that for any equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$, there is a $(S^k)_{k\in K} \in \mathcal S$ with $\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$ for all $k$.
\item It tests for every $(S^k)_{k\in K} \in \mathcal S$ whether there exists an equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$ with $\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$
for all $k$, and compute it if it exists.
\end{enumerate}
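The two steps above can be sketched in the simplest setting, a two-arc parallel-link network with two classes, where $\mathcal S$ can simply be taken to be all support pairs (the full algorithm constructs a polynomial-size $\mathcal S$ instead); the instance data below is made up for illustration:

```python
# Support enumeration on a two-arc parallel-link network with two classes and
# affine costs c_a^k(x) = alpha[k][a]*x + beta[k][a]; the instance is made up.
# For this size, letting S range over all support pairs is enough.
import itertools
import numpy as np

alpha = np.array([[1.0, 2.0],     # class 0
                  [2.0, 1.0]])    # class 1
beta = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
demand = np.array([1.0, 1.0])
K, A = 2, 2

def try_supports(supports):
    """Solve for a multiflow supported on `supports`; None if none exists."""
    M, b = [], []
    for k in range(K):
        row = np.zeros(K * A)
        row[k*A:(k+1)*A] = 1.0            # class demand constraint
        M.append(row); b.append(demand[k])
        if len(supports[k]) == 2:         # both arcs used: equal costs
            row = np.zeros(K * A)
            for kk in range(K):
                row[kk*A + 0] += alpha[k, 0]
                row[kk*A + 1] -= alpha[k, 1]
            M.append(row); b.append(beta[k, 1] - beta[k, 0])
        else:                             # the unused arc carries no flow
            row = np.zeros(K * A)
            row[k*A + (1 - supports[k][0])] = 1.0
            M.append(row); b.append(0.0)
    try:
        x = np.linalg.solve(np.array(M), np.array(b))
    except np.linalg.LinAlgError:
        return None
    if np.any(x < -1e-9):
        return None
    x = x.reshape(K, A)
    load = x.sum(axis=0)
    for k in range(K):                    # used arcs must have minimum cost
        costs = alpha[k] * load + beta[k]
        if any(x[k, a] > 1e-9 and costs[a] > costs.min() + 1e-9
               for a in range(A)):
            return None
    return x

for supports in itertools.product([(0,), (1,), (0, 1)], repeat=K):
    x_eq = try_supports(supports)
    if x_eq is not None:
        print(supports, x_eq)             # first equilibrium support found
        break
```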
For fixed $|K|$ and $|V|$, each step can be done in polynomial time according respectively to Proposition~\ref{prop:poly_determine} and Proposition~\ref{prop:poly_compute}.
\begin{proposition} \label{prop:poly_determine} Assume $|K|$ and $|V|$ being fixed. We can determine in polynomial time a set $\mathcal S\subseteq \AA$ of polynomial size such that for any equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$, there is a $(S^k)_{k\in K} \in \mathcal S$ with $\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$ for all $k$.
\end{proposition}
Both the size of $\mathcal S$ and the time complexity to compute it are actually $O\left((K^2|A|)^{K(|V|-1)}\right)$.
In the next proposition, $|K|$ and $|V|$ are not required to be fixed. As we will see in the proof, it amounts to solving a system of linear equalities and inequalities, which is polynomially solvable thanks to the interior point method.
\begin{proposition} \label{prop:poly_compute}
Let $(S^k)_{k\in K} \in \AA$. In polynomial time, we can \begin{itemize}
\item decide whether there exists an equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$ with $$\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$$ for all $k$,
\item compute such a multiflow if it exists.
\end{itemize}
\end{proposition}
An equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$ is known to exist, see the Section ``Model'' of the Introduction. Thus, when the algorithm terminates, it has necessarily found an equilibrium.
To summarize, we have the following theorem.
\begin{theorem}\label{thm:poly_main}
For a fixed number of classes and vertices, there exists an algorithm solving the Multiclass Network Equilibrium Problem with affine costs in polynomial time with respect to the number of arcs.
\end{theorem}
The complexity is $O\left((K^2|A|)^{K(|V|-1)}\right)$ times the complexity of solving a system of linear equalities and inequalities with $\sum_{k\in K}(|A^k|+|V^k|-1)$ variables.
\subsection{Preliminaries on hyperplane arrangements}\label{subsec:hyperplan}
A {\em hyperplane} $h$ in $\mathbb{R}^d$ is a $(d-1)$-dimensional affine subspace of $\mathbb{R}^d$. It partitions $\mathbb{R}^d$ into three regions: $h$ itself and the two open half-spaces having $h$ as boundary. We give an orientation for $h$ and denote the two half-spaces by $h^\oplus$ and $h^\ominus$, the former being on the positive side of $h$ and the latter one on the negative side. The closed half-spaces are denoted by $\overline{ h^\oplus} = h^\oplus \cup h$ and $\overline{ h^\ominus} = h^\ominus \cup h$. Given a finite set $H$ of hyperplanes, an {\em arrangement} is a partition of $\mathbb{R}^d$ into relatively open convex subsets, called {\em cells}. A $k$-cell is a cell of dimension $k$. A $0$-cell is called a point. The {\em hyperplane arrangement} $\mathcal A(H)$ associated to the set of hyperplanes $H$ is defined as follows. The $d$-cells are the connected components of $\mathbb{R}^d \setminus H$. For $0\leq k \leq d-1$, a $k$-{\em flat} is the intersection of exactly $d-k$ hyperplanes of $H$. Then, the $k$-cells of the arrangement are the connected components of $L \setminus \{ h \in H, L \nsubseteq h \}$ for every $k$-flat $L$.
Given an arrangement of $n$ hyperplanes, the number of $k$-cells is bounded by $$\sum_{i=0}^{k} \binom{d-i}{k-i} \binom{n}{d-i}.$$ The total number of cells is thus $O(n^d)$. In a breakthrough paper, \cite{EdORSe86} proved that the set of cells (determined by the relative positions with respect to the hyperplanes) can be computed in $O(n^d)$ as well, given the equations of the hyperplanes (and assuming that the coefficients involved in the equations are in $\mathbb Q$).
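As a toy illustration of these counts (assuming SciPy is available; this is not the algorithm of \cite{EdORSe86}), the full-dimensional cells of a small line arrangement can be enumerated by testing every sign pattern for feasibility with a linear program, and compared with the formula above:

```python
# Toy check (assumes SciPy; not the algorithm of Edelsbrunner et al.): count
# the 2-cells of an arrangement of n = 3 lines in general position in R^2 by
# testing every sign pattern for feasibility, and compare with the bound.
import itertools
from math import comb
import numpy as np
from scipy.optimize import linprog

lines_A = np.array([[1.0, 0.0],   # x = 0
                    [0.0, 1.0],   # y = 0
                    [1.0, 1.0]])  # x + y = 1
lines_b = np.array([0.0, 0.0, 1.0])

eps = 1e-6
cells = 0
for signs in itertools.product([1, -1], repeat=3):
    # Feasibility of sign_i * (a_i . x - b_i) >= eps for all i.
    A_ub = np.array([-s * lines_A[i] for i, s in enumerate(signs)])
    b_ub = np.array([-s * lines_b[i] - eps for i, s in enumerate(signs)])
    res = linprog([0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None)])
    cells += (res.status == 0)

d, n, k = 2, 3, 2
bound = sum(comb(d - i, k - i) * comb(n, d - i) for i in range(k + 1))
print(cells, bound)  # 7 7
```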
Further details on hyperplane arrangements can be found in \cite{Ed87} or \cite{Ma02} for example.
\subsection{Proof of Proposition~\ref{prop:poly_determine}}\label{subsec:determine}
For each class $k$ and each arc $a=(u,v)\in A^k$, we define the oriented half-space of $\prod_{j\in K}\mathbb{R}^{V^j\setminus\{s^j\}}$:
$$h_{a}^{k,\ominus} = \left\{ \vec{\boldsymbol{y}}=(y_v^j) \in\prod_{j\in K}\mathbb{R}^{V^j\setminus\{s^j\}}:\; y_v^{k}-y_u^k> \beta_a^{k} \right\}.$$
For each pair of distinct classes $k\neq k'$ and each arc $a=(u,v)\in A^k\cap A^{k'}$, we define moreover the following oriented half-space, still of $\prod_{j\in K}\mathbb{R}^{V^j\setminus\{s^j\}}$:
$$h_a^{k,k',\ominus} = \left\{ \vec{\boldsymbol{y}}=(y_v^j) \in\prod_{j\in K}\mathbb{R}^{V^j\setminus\{s^j\}}:\; \alpha_a^{k'} \left(y_v^{k}-y_u^k - \beta_a^{k}\right) > \alpha_a^k \left(y_v^{k'}-y_u^{k'} - \beta_a^{k'}\right) \right\}.$$
We define the convex polyhedron $$P_a^k =\overline{h_{a}^{k,\ominus}}\cap \bigcap_{k'\neq k:\;A^k\cap A^{k'}\neq\emptyset} \overline{h_a^{k,k',\ominus}}.$$
The $P_a^k$'s have a useful property that links the cost at an equilibrium to the support. Let $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in\prod_{k\in K}\mathbb{R}^{V^k\setminus\{s^k\}}$ be the vector $(\boldsymbol{\pi}^k(\boldsymbol{x}))_{k\in K}$.
\begin{lemma}\label{lem:1}
Let $(\boldsymbol{x}^k)_{k\in K}$ be an equilibrium multiflow. For any class $k$ and arc $a$, if $a \in \operatorname{supp}( \boldsymbol{x}^k)$, then $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^{k}$.
\end{lemma}
\begin{proof}
Let $a=(u,v) \in \operatorname{supp}(\boldsymbol{x}^k)$. According to Proposition~\ref{prop:inclusions}, we have $$x_a = \frac{\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})-\beta_a^{k}}{\alpha_a^{k}}.$$ In particular, since $ x_a \geq 0$, we have $\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x}) \geq \beta_a^k$ and thus $ \vec{\boldsymbol{\pi}}(\boldsymbol{x})\in \overline{h_{a}^{k,\ominus}} $.
For any other class $k'$ such that $a\in A^{k'}$, we have
$$\alpha_a^{k'} \left(\frac{\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})-\beta_a^k}{\alpha_a^k}\right) + \beta_a^{k'} \geq \pi_v^{k'}(\boldsymbol{x})-\pi_u^{k'}(\boldsymbol{x})$$ according to the definition of $\boldsymbol{\pi}^{k'}(\boldsymbol{x})$. It implies that $\vec{\boldsymbol{\pi}}(\boldsymbol{x}) \in \overline{h_a^{k,k',\ominus}}.$
Therefore, $\vec{\boldsymbol{\pi}}(\boldsymbol{x}) \in P_a^k.$
\end{proof}
\bigskip
In order to prove the proposition, we consider the set of hyperplanes $$H = \left\{ h_a^{k,k'}:\; k\neq k' \in K, a\in A^k\cap A^{k'}\right\} \cup \left\{ h_{a}^{k}:\; k \in K, a\in A^k\right\}.$$ We consider then the associated arrangement $\mathcal{A}(H)$.
\begin{proof}[Proof of Proposition~\ref{prop:poly_determine}]
We start by building $\mathcal{A}(H)$. The number of cells and the time complexity to build them are $O\left( (K^2|A|)^{K(|V|-1)}\right)$ (see Section~\ref{subsec:hyperplan}).
Define the map $\varphi : \{\mbox{cells of }\mathcal{A}(H)\} \to \prod_{k\in K}2^{A^k}$ in the following way: for every cell $P$ and class $k \in K$,
$$ \varphi(P)_k = \{ a \in A^k:\; P \cap P_a^k \neq \emptyset \}.$$ This map can easily be built in polynomial time.
Let then $\mathcal S = \varphi(\{\mbox{cells of }\mathcal{A}(H)\})$. The size of $\mathcal S$ is at most $O\left( (K^2|A|)^{K(|V|-1)}\right)$.\\
It remains to show that for any equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$, there exists $(S^k)_{k\in K} \in \mathcal{S}$ such that $\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$ for all $k$.
Let $(\boldsymbol{x}^k)_{k\in K}$ be an equilibrium multiflow. Since the cells of $\mathcal A (H)$ partition $\prod_{k\in K}\mathbb{R}^{V^k\setminus\{s^k\}}$, there is a cell $P_0$ such that $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_0$.
Let $k \in K$ and $a \in \operatorname{supp}( \boldsymbol{x}^k)$. Lemma~\ref{lem:1} ensures that $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^{k}$, and in particular that $P_0 \cap P_a^k \neq \emptyset$, i.e. $a \in \varphi(P_0)_k$. We have thus $\operatorname{supp}(\boldsymbol{x}^k) \subseteq \varphi(P_0)_k$ for every $k\in K$. Defining $S^k = \varphi(P_0)_k$, we have $\operatorname{supp}(\boldsymbol{x}^k)\subseteq S^k$ for all $k$, as required. \\
We prove now that $S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$ for all $k$. Consider a class $k$ and an arc $a\in S^k$. We have already proved that
$\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^k$.
Suppose first that $x_a>0$. If $a \in \operatorname{supp}(\boldsymbol{x}^k)$, Proposition~\ref{prop:inclusions} implies that $a \in \operatorname{mincost}^k(\boldsymbol{x})$. Otherwise, there is at least one class $k_0\neq k$ such that $a\in\operatorname{supp}(\boldsymbol{x}^{k_0})$. Lemma~\ref{lem:1} then gives $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^{k_0}$. We have thus $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^{k_0}\cap P_a^k$, which implies $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in h_a^{k,k_0}$. This translates into
$$ \alpha_a^k \left( \pi_v^{k_0}(\boldsymbol{x})-\pi_u^{k_0}(\boldsymbol{x}) - \beta_a^{k_0}\right) = \alpha_a^{k_0} (\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})-\beta_a^{k}),$$ i.e.
$\alpha_a^kx_a+\beta_a^k=\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})$. Hence, $a \in \operatorname{mincost}^k(\boldsymbol{x})$.
Suppose then $x_a=0$. Since $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in P_a^k$, we have in particular $\vec{\boldsymbol{\pi}}(\boldsymbol{x})\in \overline{h_{a}^{k,\ominus}}$. It implies that $\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})\geq\beta_a^{k}$. The reverse inequality is a consequence of the definition of $\boldsymbol{\pi}^{k}(\boldsymbol{x})$. We have thus $\pi_v^{k}(\boldsymbol{x})-\pi_u^{k}(\boldsymbol{x})=\beta_a^{k}$, which implies again $a \in \operatorname{mincost}^k(\boldsymbol{x})$.
\end{proof}
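The construction in the proof above rests on encoding each cell of $\mathcal{A}(H)$ by its relative position (its sign vector) with respect to the hyperplanes of $H$. A minimal Python sketch of this encoding (function name and tolerance are ours; the actual algorithm works exactly over $\mathbb{Q}$):

```python
def sign_vector(point, hyperplanes, eps=1e-9):
    """Relative position of `point` with respect to each hyperplane a.x = b,
    given as (a, b) pairs.  The cells of the arrangement correspond exactly
    to the realizable sign vectors (+1 / -1 / 0 per hyperplane)."""
    signs = []
    for a, b in hyperplanes:
        v = sum(ai * xi for ai, xi in zip(a, point)) - b
        signs.append(0 if abs(v) < eps else (1 if v > 0 else -1))
    return tuple(signs)

# Two lines in the plane: x = 0 and y = 0.
H = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
print(sign_vector((2.0, -3.0), H))  # -> (1, -1): the open quadrant x > 0, y < 0
print(sign_vector((0.0, 5.0), H))   # -> (0, 1): a 1-cell on the line x = 0
```

Testing $P \cap P_a^k \neq \emptyset$ for a cell $P$ then amounts to checking that the sign vector of $P$ is compatible with the closed half-spaces defining $P_a^k$.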
\subsection{Proof of Proposition~\ref{prop:poly_compute}}\label{subsec:compute}
\begin{proof}
There exists an equilibrium multiflow $(\boldsymbol{x}^k)_{k\in K}$ with $\operatorname{supp}(\boldsymbol{x}^k) \subseteq S^k \subseteq \operatorname{mincost}^k(\boldsymbol{x})$ for all $k$ if and only if there is a solution of the program~\eqref{pb:MNEP} with $\mu_a^k=0$ for all $k\in K$ and $a\in S^k$, and $x_a^k=0$ for all $k\in K$ and $a\notin S^k$. This gives rise to a system of linear equalities and inequalities with $\sum_{k\in K}(|A^k|+|V^k|-1)$ variables, which can be solved in polynomial time
by the interior point method (see \cite{Wr97} for example).
\end{proof}
\begin{remark}
We can reduce the size of $\mathcal S$. We know without any computation that there is no solution as soon as there is a class $k$ with $S^k = \emptyset$. It means that we can consider only the cells $P$ such that for every class $k$ there exists an arc $a$ with $P \cap P_a^k \neq \emptyset$. We can remove from $\mathcal A (H)$ the cells belonging to $$\bigcup_{k\in K} \bigcap_{a=(u,v) \in A^k} \left( \bigcup_{k'\neq k:\;A^k\cap A^{k'}\neq\emptyset}h_a^{k,k',\oplus} \cup h_{a}^{k,\oplus} \right).$$
However, this reduction is in general negligible with respect to the total size of $\mathcal S$.
\end{remark}
\section{A network Lemke-like algorithm}\label{sec:lemke}
\subsection{An optimization formulation}\label{subsec:opt}
As in the classical Lemke algorithm, we rewrite the problem as an optimization problem, which is the starting point of the algorithm. This problem is called the {\em Augmented Multiclass Network Equilibrium Problem}. Let $\boldsymbol{e}=(e_a^k)$ be any vector defined for all $k\in K$ and $a\in A^k$. Consider the following optimization program.
\begin{equation}\tag{$AMNEP(\boldsymbol{e})$}\label{pb:AMNEP}
\begin{array}{rlr}\min & \omega \\
\mbox{s.t.} & \displaystyle{\sum_{a\in \delta^+(v)} x_a^k = \sum_{a\in \delta^-(v)} x_a^k + b_v^k} & k\in K,v \in V^k\\ \\
& \displaystyle{\alpha_{uv}^k\sum_{k'\in K}x^{k'}_{uv} + \pi^k_u - \pi^k_v - \mu^k_{uv} +e_{uv}^k\omega= - \beta_{uv}^k} & k\in K,(u,v) \in A^k \\ \\
& x_a^k \mu_a^k = 0 & k\in K, a\in A^k \\ \\
& \pi_{s^k}^k=0 & k\in K \\ \\
& x_a^k \geq 0, \mu_a^k \geq 0, \omega\geq 0, \pi_v^k \in \mathbb{R} & k\in K, a \in A^k, v\in V^k.
\end{array} \end{equation}
A key remark is
\begin{quote}
{\em Solving \eqref{pb:MNEP} amounts to finding an optimal solution of \eqref{pb:AMNEP}\\ with $\omega=0$.}
\end{quote}
Indeed, a solution with $\omega=0$ can easily be completed to provide a solution of \eqref{pb:MNEP}, and conversely, a solution of \eqref{pb:MNEP} provides a solution with $\omega=0$ of \eqref{pb:AMNEP}. Some choices of $\boldsymbol{e}$ allow us to find feasible solutions to this program easily. In Section~\ref{subsec:tools}, $\boldsymbol{e}$ will be chosen in such a way.
We write the program~\eqref{pb:AMNEP} under the form
$$
\begin{array}{rl}\min & \omega \\
\mbox{s.t.} &
\overline{M}^{\boldsymbol{e}}\left(\begin{array}{c}\boldsymbol{x} \\ \boldsymbol{\mu} \\ \omega\end{array}\right)+
\left(\begin{array}{c}\boldsymbol{0} \\ M^T\end{array}\right)\boldsymbol{\pi}=\left(\begin{array}{c}\boldsymbol{b}\\-\boldsymbol{\beta}\end{array}\right) \\
& \boldsymbol{x}\cdot\boldsymbol{\mu}=0 \\
& \boldsymbol{x}\geq\boldsymbol{0},\,\boldsymbol{\mu}\geq\boldsymbol{0},\,\omega\geq 0,\,\boldsymbol{\pi}\in\prod_{k\in K}\mathbb{R}^{V^k\setminus\{s^k\}},
\end{array}
$$ where $\overline{M}^{\boldsymbol{e}}$ and $C$ are defined as follows. (The matrix $\overline{M}^{\boldsymbol{e}}$ is denoted with a superscript $\boldsymbol{e}$ in order to emphasize its dependency on $\boldsymbol{e}$).
We define $M=\operatorname{diag}((M^k)_{k\in K})$ where $M^k$ is the incidence matrix of the directed graph $(V^k,A^k)$ from which the $s^k$-row has been removed:
$$M^k_{v,a}=\left\{ \begin{array}{ll} 1 & \mbox{ if $a\in\delta^+(v)$,} \\ -1 & \mbox{ if $a\in\delta^-(v)$,} \\ 0&\mbox{ otherwise}. \end{array}\right.$$
We also define $C^k=\operatorname{diag}((\alpha_a^k)_{a\in A^k})$ for $k\in K$, and then the real matrix $C=\left(\,\underbrace{(C^k,\cdots,C^k)}_{|K|\mbox{\tiny{ times}}}\,\right)_{k\in K}$ obtained by stacking, for each $k\in K$, the row block made of $|K|$ copies of $C^k$. Then let $$\overline{M}^{\boldsymbol{e}}= \left(\begin{array}{ccc} M & \boldsymbol{0} & \boldsymbol{0} \\ C & -I & \boldsymbol{e}\end{array}\right).$$
For $k\in K$, the matrix $M^k$ has $|V^k|-1$ rows and $|A^k|$ columns, while $C^k$ is a square matrix with $|A^k|$ rows and columns. Then the whole matrix $\overline{M}^{\boldsymbol{e}}$ has $\sum_{k\in K}(|A^k|+|V^k|-1)$ rows and $2\left(\sum_{k\in K}|A^k|\right)+1$ columns.
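The block structure of $\overline{M}^{\boldsymbol{e}}$ can be sanity-checked on a toy instance. The following Python sketch (ours) builds the incidence matrix $M^k$ with the $s^k$-row removed and verifies the row and column counts stated above:

```python
def incidence_without_root(vertices, arcs, root):
    """Incidence matrix M^k of a digraph with the root (s^k) row removed:
    entry is +1 if the arc leaves the vertex, -1 if it enters it, 0 otherwise."""
    rows = [v for v in vertices if v != root]
    M = [[0] * len(arcs) for _ in rows]
    for j, (u, w) in enumerate(arcs):
        if u != root:
            M[rows.index(u)][j] = 1    # arc in delta^+(u)
        if w != root:
            M[rows.index(w)][j] = -1   # arc in delta^-(w)
    return M

V = ["s", "a", "t"]
A = [("s", "a"), ("a", "t"), ("s", "t")]
M = incidence_without_root(V, A, "s")
print(len(M), len(M[0]))  # -> 2 3  (|V|-1 rows, |A| columns)
print(M)                  # -> [[-1, 1, 0], [0, -1, -1]]
```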
\subsection{Bases, pivots, and rays}\label{subsec:tools}
\subsubsection{Bases}\label{subsec:bases} We define $\mathcal{X}$ and $\mathcal{M}$ to be two disjoint copies of $\{(a,k):\,k\in K,\,a\in A^k\}$. We denote by $\phi^x(a,k)$ (resp. $\phi^{\mu}(a,k)$) the element of $\mathcal{X}$ (resp. $\mathcal{M}$) corresponding to $(a,k)$. The set $\mathcal{X}$ models the set of all possible indices for the `$x$' variables and $\mathcal{M}$ the set of all possible indices for the `$\mu$' variables for the program~\eqref{pb:AMNEP}. We consider moreover a dummy element $o$ as the index for the `$\omega$' variable.
We define a {\em basis} for the program~\eqref{pb:AMNEP} to be a subset $B$ of the set $\mathcal{X}\cup\mathcal{M}\cup\{o\}$ such that the square matrix of size $\sum_{k\in K}\left(|A^k|+|V^k|-1\right)$ defined by
$$\left(\begin{array}{c|c}\overline{M}^{\boldsymbol{e}}_B & \begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\end{array}\right)$$ is nonsingular. Note that this definition is not standard. In general, a basis is defined in this way but without the submatrix $\left(\begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\right)$ corresponding to the `$\pi$' columns. We use this definition in order to be able to deal directly with the unsigned variables `$\pi$'. We will see that this approach is natural (and could be used for linear programming as well). However, we are not aware of a previous use of such an approach.
As a consequence of this definition, since $M^T$ has $\sum_{k\in K}(|V^k|-1)$ columns, a basis is always of cardinality $\sum_{k\in K}|A^k|$.
\begin{remark}\label{rem:size}
In particular, since the matrix is nonsingular and since $M^T$ has $\sum_{k\in K}|A^k|$ rows, the first $\sum_{k\in K}(|V^k|-1)$ rows of $\overline{M}^{\boldsymbol{e}}_B$ each have a nonzero entry. This property is used below, especially in the proof of Lemma~\ref{lem:nosecondary}.
\end{remark}
The following additional notation is useful: given a subset $Z\subseteq\mathcal{X}\cup\mathcal{M}\cup\{o\}$, we denote by $Z^x$ the set $\left(\phi^{x}\right)^{-1}(Z\cap\mathcal{X})$ and by $Z^{\mu}$ the set $\left(\phi^{\mu}\right)^{-1}(Z\cap\mathcal{M})$.
In other words, $(a,k)$ is in $Z^x$ if and only if $\phi^x(a,k)$ is in $Z$, and similarly for $Z^{\mu}$.
\subsubsection{Basic solutions and non-degeneracy}\label{subsec:basic}
Let $B$ be a basis. If it contains $o$, the unique solution
$(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$
of
\begin{equation}\label{eq:basicsol}
\left\{\begin{array}{l}
\left(\begin{array}{c|c}\overline{M}^{\boldsymbol{e}}_B & \begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\end{array}\right)\left(\begin{array}{c}\boldsymbol{x}_{B^x} \\ \boldsymbol{\mu}_{B^{\mu}} \\ \omega \\ \boldsymbol{\pi}\end{array}\right)=\left(\begin{array}{c}\boldsymbol{b} \\ -\boldsymbol{\beta} \end{array}\right) \\
x_a^k = 0\quad\mbox{ for all $(a,k)\notin B^x$} \\
\mu_a^k = 0\quad\mbox{ for all $(a,k)\notin B^{\mu}$}.
\end{array}\right.
\end{equation} is called the {\em basic solution} associated to $B$. If $B$ does not contain $o$, we define similarly its associated {\em basic solution}. It is the unique solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$ of
\begin{equation}\label{eq:basicsol_opt}
\left\{\begin{array}{l}
\left(\begin{array}{c|c}\overline{M}^{\boldsymbol{e}}_B & \begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\end{array}\right)\left(\begin{array}{c}\boldsymbol{x}_{B^x} \\ \boldsymbol{\mu}_{B^{\mu}} \\ \boldsymbol{\pi}\end{array}\right)=\left(\begin{array}{c}\boldsymbol{b} \\ -\boldsymbol{\beta} \end{array}\right) \\
x_a^k = 0\quad\mbox{ for all $(a,k)\notin B^x$}\\
\mu_a^k = 0\quad\mbox{ for all $(a,k)\notin B^{\mu}$} \\
\omega = 0.
\end{array}\right.
\end{equation}
A basis is said to be {\em feasible} if the associated basic solution is such that $\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega}\geq 0$. \\
The program~\eqref{pb:AMNEP} is said to {\em satisfy the non-degeneracy assumption} if, for any feasible basis $B$, the associated basic solution $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$ is such that
$$\left((a,k)\in B^x\Rightarrow\bar{x}_a^k>0\right)\mbox{ and }\left((a,k)\in B^{\mu}\Rightarrow\bar{\mu}_a^k>0\right).$$ Note that if we had defined the vector $\boldsymbol{b}$ to be $0$ on all vertices $v\notin\{s^k,t^k\}$ -- as required by the original formulation of the Multiclass Network Equilibrium Problem -- the program would not in general satisfy the non-degeneracy assumption. Our network Lemke-like algorithm actually solves the program~\eqref{pb:AMNEP} under the non-degeneracy assumption, but, as explained in Section~\ref{subsec:lemke}, it can be used to solve the degenerate case as well -- and thus the original formulation when the costs are affine -- via a perturbation argument.
An example of a basis for which the assumption fails to be satisfied is the basis $B^{ini}$ defined in Section~\ref{subsec:init}. Remark~\ref{rem:degeneracy} in that section details the example.
\subsubsection{Pivots and polytope}
The following lemmas are key results that eventually lead to the Lemke-like algorithm. They are classical for the usual definition of bases. Since we have extended the definition, we have to prove that they still hold.
\begin{lemma}\label{lem:pivotout}
Let $B$ be a feasible basis for the program~\eqref{pb:AMNEP} and assume non-degeneracy. Let $i$ be an index in $\mathcal{X}\cup\mathcal{M}\cup\{o\}\setminus B$. Then there is at most one feasible basis $B'\neq B$ in the set $B\cup\{i\}$.
\end{lemma}
\begin{proof}
Let $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$ be the basic solution associated to $B$ and let $Y=B\cup\{i\}$.
The set of solutions
$$\left\{\begin{array}{l}
\left(\begin{array}{c|c}\overline{M}^{\boldsymbol{e}}_Y & \begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\end{array}\right)\left(\begin{array}{c}\boldsymbol{x}_{Y^x} \\ \boldsymbol{\mu}_{Y^{\mu}} \\ \omega \\ \boldsymbol{\pi}\end{array}\right)=\left(\begin{array}{c}\boldsymbol{b} \\ -\boldsymbol{\beta} \end{array}\right) \\
x_a^k = 0\quad\mbox{ for all $(a,k)\notin Y^x$} \\
\mu_a^k = 0\quad\mbox{ for all $(a,k)\notin Y^{\mu}$}\end{array}\right.$$ is a one-dimensional line in $\mathbb{R}\times\prod_{k\in K}\left((\mathbb{R}^2)^{A^k}\times\mathbb{R}^{V^k\setminus\{s^k\}}\right)$ (the space of all variables) passing through $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$. The bases in $Y$ correspond to intersections of this line with the boundary of $$Q=\{(\boldsymbol{x},\boldsymbol{\mu},\omega,\boldsymbol{\pi}):\,x_a^k\geq 0, \mu_a^k\geq 0, \omega\geq 0,\mbox{ for all $k\in K$ and $a\in A^k$}\}.$$ This latter set being convex (it is a polyhedron), the line intersects its boundary at most twice under the non-degeneracy assumption.
\end{proof}
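Computationally, the pivot operation of Lemma~\ref{lem:pivotout} reduces to a min-ratio test along the one-dimensional line of solutions: the leaving index is the first nonnegative variable driven to zero. A hedged Python sketch (the function name and the representation by value/direction vectors are ours):

```python
def leaving_position(values, direction):
    """Min-ratio test of a pivot step: starting from a feasible point
    `values` >= 0 and moving along `direction`, return the position of the
    first coordinate driven to 0 (the leaving variable), or None if no
    coordinate ever hits 0 (the line leaves along an infinite ray)."""
    best_i, best_t = None, None
    for i, (v, d) in enumerate(zip(values, direction)):
        if d < 0:  # only decreasing coordinates can reach zero
            t = v / (-d)
            if best_t is None or t < best_t:
                best_i, best_t = i, t
    return best_i

print(leaving_position([2.0, 1.0, 3.0], [-1.0, -1.0, 0.5]))  # -> 1
print(leaving_position([2.0, 1.0], [0.0, 1.0]))              # -> None (infinite ray)
```

The `None` case is exactly the situation described by Lemma~\ref{lem:infiniteray} below.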
The operation of computing $B'$ given $B$ and the {\em entering index} $i$ is called the {\em pivot operation}. If we are able to determine an entering index in $\mathcal{X}\cup\mathcal{M}\cup\{o\}\setminus B$ for any basis $B$, Lemma~\ref{lem:pivotout} leads to a ``pivoting'' algorithm. At each step, we have a current basis $B^{curr}$, we determine the entering index $i$, and we compute the new basis in $B^{curr}\cup\{i\}$, if it exists, which becomes the new current basis $B^{curr}$; and so on. The next lemma characterizes the situations where there is no new basis, i.e. the situations in which the algorithm gets stuck.
The feasible solutions of \eqref{pb:AMNEP} belong to the polytope
\begin{align*}
\mathcal{P}(\boldsymbol{e})=\left\{(\boldsymbol{x},\boldsymbol{\mu},\omega,\boldsymbol{\pi}):\,
\overline{M}^{\boldsymbol{e}}\left(\begin{array}{c}\boldsymbol{x} \\ \boldsymbol{\mu} \\ \omega\end{array}\right)+\left(\begin{array}{c}\boldsymbol{0} \\ M^T\end{array}\right)\boldsymbol{\pi}=\left(\begin{array}{c}\boldsymbol{b}\\-\boldsymbol{\beta}\end{array}\right),\, \right. \\
\boldsymbol{x}\geq\boldsymbol{0},\,\boldsymbol{\mu}\geq\boldsymbol{0},\,\omega\geq 0,\,\boldsymbol{\pi}\in\prod_{k\in K}\mathbb{R}^{V^k\setminus\{s^k\}} \Bigg\}.
\end{align*}
\begin{lemma}\label{lem:infiniteray}
Let $B$ be a feasible basis for the program~\eqref{pb:AMNEP} and assume non-degeneracy. Let $i$ be an index in $\mathcal{X}\cup\mathcal{M}\cup\{o\}\setminus B$. If there is no feasible basis $B'\neq B$ in the set $B\cup\{i\}$, then the polytope $\mathcal{P}(\boldsymbol{e})$ contains an infinite ray originating at the basic solution associated to $B$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lem:pivotout}, whose notions and notation we keep. If $B$ is the only feasible basis in $B\cup\{i\}$, then the line intersects the boundary of $Q$ exactly once. By the non-degeneracy assumption, it follows that there is an infinite ray originating at $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$ whose points are all feasible.
\end{proof}
\subsubsection{Complementarity and twin indices}\label{subsec:comp}
A basis $B$ is said to be {\em complementary} if for every $(a,k)$ with $a\in A^k$, we have $(a,k)\notin B^x$ or $(a,k)\notin B^{\mu}$: for each $(a,k)$, one of the components $x_a^k$ or $\mu_a^k$ is not activated in the basic solution. In case of non-degeneracy, it coincides with the condition $\boldsymbol{x}\cdot\boldsymbol{\mu}=0$. An important point to be noted for a complementary basis $B$ is that if $o\in B$, then there is $(a_0,k_0)$ with $a_0\in A^{k_0}$ such that
\begin{itemize}
\item $(a_0,k_0)\notin B^x$ and $(a_0,k_0)\notin B^{\mu}$, and
\item for all $(a,k)\neq(a_0,k_0)$ with $a\in A^k$, exactly one of the relations $(a,k)\in B^x$ and $(a,k)\in B^{\mu}$ is satisfied.
\end{itemize}
This is a direct consequence of the fact that there are exactly $\sum_{k\in K}|A^k|$ elements in a basis and that each $(a,k)$ is not present in at least one of $B^x$ and $B^{\mu}$. In case of non-degeneracy, this point amounts to saying that $x_a^k=0$ or $\mu_a^k=0$ for all $(a,k)$ with $a\in A^k$ and that there is exactly one such pair, denoted $(a_0,k_0)$, for which both are equal to $0$.
We say that $\phi^x(a_0,k_0)$ and $\phi^{\mu}(a_0,k_0)$ for such $(a_0,k_0)$ are the {\em twin indices}.
\subsubsection{Initial feasible basis}\label{subsec:init}
A good choice of $\boldsymbol{e}$ gives an easily computable initial feasible complementary basis to the program~\eqref{pb:AMNEP}.
An {\em $s$-arborescence} in a directed graph is a spanning tree rooted at $s$ containing a directed path from $s$ to every vertex of the graph.
We arbitrarily define a collection $\mathcal{T}=(T^k)_{k\in K}$ where $T^k\subseteq A^k$ is an $s^k$-arborescence of $(V^k,A^k)$. Then the vector $\boldsymbol{e}=(e_a^k)_{k\in K, a\in A^k}$ is chosen with the help of $\mathcal T$ by
\begin{equation}\label{eq:defe}e_a^k=\left\{\begin{array}{ll} 1 & \mbox{if $a\notin T^k$} \\ 0 & \mbox{otherwise}.\end{array}\right.\end{equation}
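A Python sketch (ours) of the two ingredients used here: checking that a set of arcs is an $s$-arborescence, and reading off the vector $\boldsymbol{e}$ of Equation~\eqref{eq:defe} for a single class:

```python
def is_arborescence(vertices, arcs, root):
    """Check that `arcs` form an s-arborescence: |V|-1 arcs, each non-root
    vertex entered exactly once, every vertex reachable from the root."""
    if len(arcs) != len(vertices) - 1:
        return False
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for u, w in arcs:
        indeg[w] += 1
        out[u].append(w)
    if indeg[root] != 0 or any(indeg[v] != 1 for v in vertices if v != root):
        return False
    seen, stack = {root}, [root]
    while stack:                       # directed reachability from the root
        for w in out[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def covering_vector(arcs, tree_arcs):
    """e_a = 0 on the arcs of the arborescence, 1 elsewhere (one class shown)."""
    tree = set(tree_arcs)
    return {a: 0 if a in tree else 1 for a in arcs}

V = ["s", "u", "t"]
T = [("s", "u"), ("u", "t")]
print(is_arborescence(V, T, "s"))  # -> True
print(covering_vector([("s", "u"), ("u", "t"), ("s", "t")], T))
# -> {('s', 'u'): 0, ('u', 't'): 0, ('s', 't'): 1}
```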
\begin{lemma}\label{lem:initbasis}
Let the set of indices $Y\subseteq\mathcal{X}\cup\mathcal{M}\cup\{o\}$ be defined by
$$Y=\{\phi^x(a,k):\,a\in T^k, k\in K\}\cup\{\phi^{\mu}(a,k):\,a\in A^k\setminus T^k, k\in K\}\cup\{o\}.$$
Then, one of the following situations occurs:
\begin{itemize}
\item[$\bullet$] $Y\setminus \{o\}$ is a complementary feasible basis providing an optimal solution of the program \eqref{pb:AMNEP} with $\omega=0$.
\item[$\bullet$] There exists $(a_0,k_0)$ such that $B^{ini}=Y\setminus\{\phi^{\mu}(a_0,k_0)\}$ is a feasible complementary basis for the program~\eqref{pb:AMNEP}.
\end{itemize}
\end{lemma}
\begin{proof}
The subset $Y$ has cardinality $\sum_{k\in K}|A^k|+1$. To show that $Y$ contains a feasible complementary basis, we proceed by studying the solutions of the system
\begin{equation}\tag{$S^{\boldsymbol{e}}$}\label{eq:Y}
\left\{\begin{array}{l}
\left(\begin{array}{c|c}\overline{M}^{\boldsymbol{e}}_Y & \begin{array}{c} \boldsymbol{0} \\ M^T \end{array}\end{array}\right)\left(\begin{array}{c}\boldsymbol{x}_{Y^x} \\ \boldsymbol{\mu}_{Y^{\mu}} \\ \omega \\ \boldsymbol{\pi}\end{array}\right)=\left(\begin{array}{c}\boldsymbol{b} \\ -\boldsymbol{\beta} \end{array}\right) \\
x_a^k = 0\quad\mbox{ for all $(a,k)\notin Y^x$} \\
\mu_a^k = 0\quad\mbox{ for all $(a,k)\notin Y^{\mu}$}.
\end{array}\right.
\end{equation}
It is convenient to rewrite the system~\eqref{eq:Y} in the following form.
\begin{align}\mbox{For}&\mbox{ all } k\in K, \nonumber\\
&\left\{\begin{array}{ll}
M_{T^k}^k x_{T^k}^k = b^k & \\
\alpha_{uv}^k\displaystyle{\sum_{k'\in K}x^{k'}_{uv} + \pi^k_u - \pi^k_v - \mu^k_{uv} +e_{uv}^k\omega= - \beta_{uv}^k} & \mbox{ for all } (u,v) \in A^k \\
x_a^k = 0 & \mbox{ for all } a\notin T^k \\
\mu_a^k = 0 & \mbox{ for all } a\in T^k.
\end{array}\right.\label{eq:S}\end{align}
The matrix $M_{T^k}^k$ is nonsingular (see the book by \cite{AMO93}). This gives a unique solution $x_{T^k}^k$ of the first equation of \eqref{eq:S}, and since $x_a^k = 0$ for $a\notin T^k$, we get a unique solution $\boldsymbol{x}$ to system~\eqref{eq:Y}.
We look now at the second equation of \eqref{eq:S} for $k$ and $(u,v)$ such that $(u,v)\in T^k$. We get that any solution of system~\eqref{eq:Y} satisfies the equalities
$$\alpha_{uv}^k\sum_{k'\in K}x_{uv}^{k'} + \pi^k_u - \pi^k_v = - \beta_{uv}^k, \quad \mbox{ for all $k\in K$ and $(u,v)\in T^k$}.$$
Indeed, if $(u,v)\in T^k$, we have $e_{uv}^k=0$ and $\mu_{uv}^k=0$. Recall that we defined $\pi_{s^k}^k=0$. Since $T^k$ is a spanning tree of $(V^k,A^k)$ for all $k$, these equations completely determine $\boldsymbol{\pi}$.
We look then at the second equation of \eqref{eq:S}, this time for $k$ and $(u,v)$ such that $(u,v)\notin T^k$. We get that any solution of system~\eqref{eq:Y} satisfies the equalities
\begin{equation}\label{eq:mu}
\alpha_{uv}^k\sum_{k'\neq k}x_{uv}^{k'} -\mu_{uv}^k+\omega+\pi^k_u-\pi^k_v=-\beta_{uv}^k, \quad \mbox{ for all $k\in K$ and $(u,v)\notin T^k$}.
\end{equation} Indeed, if $(u,v)\notin T^k$, we have $e_{uv}^k=1$ and $x_{uv}^k=0$.
If $\alpha_{uv}^k x_{uv}+\beta_{uv}^k+\pi^k_u-\pi^k_v\geq 0$ for all $k\in K$ and $(u,v)\notin T^k$, then we have an optimal solution of the program~\eqref{pb:AMNEP} with $\omega=0$, and we get the first point of Lemma~\ref{lem:initbasis}. We can thus assume that $\alpha_{uv}^k x_{uv}+\beta_{uv}^k+\pi^k_u-\pi^k_v<0$ for at least one triple $u,v,k$. Let $u_0,v_0,k_0$ be such a triple minimizing $\alpha_{uv}^k x_{uv}+\beta_{uv}^k+\pi^k_u-\pi^k_v$ and let $a_0=(u_0,v_0)$. Note that Equation~\eqref{eq:mu} implies that
\begin{equation}\label{eq:mupositive}
\mu_{uv}^k \geq \mu_{u_0v_0}^{k_0}, \quad \mbox{ for all $k\in K$ and $(u,v)\notin T^k$}.
\end{equation}
We finish the proof by showing that $B^{ini}$, defined as $Y\setminus\{\phi^{\mu}(a_0,k_0)\}$, is a feasible complementary basis for the program~\eqref{pb:AMNEP}. For $B^{ini}$, system~\eqref{eq:basicsol} has a unique solution. Indeed, the first part of the proof devoted to the solving of~\eqref{eq:Y} has shown that $\boldsymbol{x}$ and $\boldsymbol{\pi}$ are uniquely determined, without having to compute the values of the $\mu_a^k$'s. By definition of $(a_0,k_0)$, since $\phi^{\mu}(a_0,k_0)$ is not in $B^{ini}$, we have $$\mu_{u_0v_0}^{k_0}=0\quad\mbox{and}\quad\omega = -\alpha_{u_0v_0}^{k_0} x_{u_0v_0}- \beta_{u_0v_0}^{k_0}-\pi^{k_0}_{u_0}+\pi^{k_0}_{v_0}.$$ Finally, Equation~(\ref{eq:mu}) determines the values of the $\mu_{uv}^k$ for $k\in K$ and $(u,v) \notin T^k$, and Equation~(\ref{eq:mupositive}) ensures that these values are nonnegative. Therefore, $B^{ini}$ is a basis, and it is feasible because all $x_a^k$ and $\mu_a^k$ in the solution are nonnegative. Furthermore, for each $(a,k)$ with $a\in A^k$, at least one of $\phi^x(a,k)$ and $\phi^{\mu}(a,k)$ is not in $B^{ini}$. Hence, the subset $B^{ini}$ is a feasible complementary basis.
\end{proof}
We emphasize that $B^{ini}$ depends on the chosen collection $\mathcal{T}$ of arborescences.
Note that the basis $B^{ini}$ is polynomially computable.
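The first step of the computation of $B^{ini}$ -- solving $M_{T^k}^k x_{T^k}^k = b^k$ -- simply pushes each demand along the unique tree path from the root. A single-class Python sketch (ours; `demand[v]` stands for the amount that must reach $v$ from $s$):

```python
def tree_flow(vertices, tree_arcs, root, demand):
    """Unique flow on an s-arborescence meeting the vertex demands:
    the flow on arc (u, v) is the total demand of the subtree hanging at v."""
    children = {v: [] for v in vertices}
    for u, w in tree_arcs:
        children[u].append(w)
    flow = {}
    def subtree_demand(v):
        d = demand.get(v, 0)
        for w in children[v]:
            f = subtree_demand(w)
            flow[(v, w)] = f           # everything below w transits on (v, w)
            d += f
        return d
    subtree_demand(root)
    return flow

print(tree_flow(["s", "u", "t"], [("s", "u"), ("u", "t")], "s", {"t": 2}))
# -> {('u', 't'): 2, ('s', 'u'): 2}
```

The potentials $\boldsymbol{\pi}$ are then obtained by propagating the tree equations from the root, exactly as in the proof above.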
\begin{remark}\label{rem:base}
A short examination of the proof makes clear that the following claim is true: {\em Assuming non-degeneracy, if $B$ is a feasible basis such that $B^x=\{(a,k):\,a\in T^k,\,k\in K\}$, then $B=B^{ini}$.}
The fact that the $T^k$ are arborescences completely fixes $\boldsymbol{x}$, and then $\boldsymbol{\pi}$. The fact that $B$ is a feasible basis forces $\omega$ to equal the maximal value of $-\alpha_{uv}^k x_{uv}- \beta_{uv}^{k}-\pi^{k}_{u}+\pi^{k}_{v}$ (except of course if this value is nonpositive, in which case we have already solved our problem), which in turn fixes the values of the $\mu_{uv}^k$.
\end{remark}
\begin{remark}\label{rem:degeneracy}
As already announced in Section~\ref{subsec:basic}, if we had defined the vector $\boldsymbol{b}$ to be $0$ on all vertices $v\notin\{s^k,t^k\}$, the problem would not satisfy the non-degeneracy assumption as soon as there is $k\in K$ such that $T^k$ has a vertex of degree $3$ (which happens when $(V^k,A^k)$ has no Hamiltonian path). In this case, the basis $B^{ini}$ shows that the problem is degenerate. Since the unique solution $\boldsymbol{x}^k_{T^k}$ of $M_{T^k}^k \boldsymbol{x}^k_{T^k} = \boldsymbol{b}^k$ consists in sending the whole demand on the unique path in $T^k$ from $s^k$ to $t^k$, we have for all arcs $a \in T^k$ not belonging to this path $x_a^k =0$ while $(a,k)\in B^{ini,x}$.
\end{remark}
\subsubsection{No secondary ray} \label{subsec:ray}
Let $(\bar{\boldsymbol{x}}^{ini},\bar{\boldsymbol{\mu}}^{ini},\bar{\omega}^{ini},\bar{\boldsymbol{\pi}}^{ini})$ be the feasible basic solution associated to the initial basis $B^{ini}$, computed according to Lemma~\ref{lem:initbasis} and with $\boldsymbol{e}$ given by Equation~\eqref{eq:defe}.
The following infinite ray
$$\rho^{ini}=\left\{(\bar{\boldsymbol{x}}^{ini},\bar{\boldsymbol{\mu}}^{ini},\bar{\omega}^{ini},\bar{\boldsymbol{\pi}}^{ini})+t(\boldsymbol{0},\boldsymbol{e},1,\boldsymbol{0}):\, t\geq0\right\}$$ has all its points in $\mathcal{P}(\boldsymbol{e})$. This ray with direction $(\boldsymbol{0},\boldsymbol{e},1,\boldsymbol{0})$ is called the {\em primary ray}. In the terminology of the Lemke algorithm, another infinite ray originating at a solution associated to a feasible complementary basis is called a {\em secondary ray}. Recall that we defined $\pi_{s^k}^k=0$ for all $k \in K$ in Section~\ref{sec:formulation} (otherwise we would have a trivial secondary ray). System~\eqref{pb:AMNEP} has no secondary ray for the chosen $\boldsymbol{e}$, as the following lemma shows.
\begin{lemma}\label{lem:nosecondary}
Let $\boldsymbol{e}$ be defined by Equation~\eqref{eq:defe}. Under the non-degeneracy assumption, there is no secondary ray in $\mathcal{P}(\boldsymbol{e})$.
\end{lemma}
\begin{proof}
Suppose that $\mathcal{P}(\boldsymbol{e})$ contains an infinite ray $$\rho=\left\{(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})+t(\boldsymbol{x}^{dir},\boldsymbol{\mu}^{dir},\omega^{dir},\boldsymbol{\pi}^{dir}):\, t\geq0\right\},$$ where $(\bar{\boldsymbol{x}},\bar{\boldsymbol{\mu}},\bar{\omega},\bar{\boldsymbol{\pi}})$ is a feasible complementary basic solution associated to a basis $B$. \\
We first show that $\boldsymbol{x}^{dir}=0$. For a contradiction, suppose that it is not the case and let $k$ be such that $\boldsymbol{x}^{dir,k}$ is not zero. Since the points of $\rho$ must satisfy the system~\eqref{pb:AMNEP} for all $t\geq 0$, we have that $(\boldsymbol{x}^{dir},\boldsymbol{\mu}^{dir},\omega^{dir},\boldsymbol{\pi}^{dir})$ must satisfy for all $v\in V^k$
$$\sum_{a\in \delta^+(v)} x_a^{dir,k} = \sum_{a\in \delta^-(v)} x_a^{dir,k},$$ which shows that $\boldsymbol{x}^{dir,k}$ is a circulation in the directed graph $(V^k,A^k)$. Moreover, we must have for all $(u,v)\in A^k$
\begin{equation}\label{eq:cost}
\alpha_{uv}^k\sum_{k'\in K}x^{dir,k'}_{uv} + \pi^{dir,k}_u - \pi^{dir,k}_v - \mu^{dir,k}_{uv} +e_{uv}^k\omega^{dir}=0,
\end{equation} where $\pi_{s^k}^{dir,k}=0$ since $\pi_{s^k}^k=0$ for any feasible solution of~\eqref{pb:AMNEP}, see Section~\ref{sec:formulation}. The following relations must also be satisfied:
\begin{equation}\label{eq:xmu}
\boldsymbol{x}^{dir}\cdot\boldsymbol{\mu}^{dir}=0,
\end{equation} and
\begin{equation}\label{eq:omega}
\boldsymbol{x}^{dir}\geq\boldsymbol{0},\boldsymbol{\mu}^{dir}\geq\boldsymbol{0},\omega^{dir}\geq 0.
\end{equation}
Now take any circuit $C$ of $(V^k,A^k)$ in the support of $\boldsymbol{x}^{dir,k}$. Since we have supposed that $\boldsymbol{x}^{dir,k}$ is not zero and since it is a circulation, such a circuit necessarily exists. According to Equations~\eqref{eq:xmu} and~\eqref{eq:omega}, we have $\mu_a^{dir,k}=0$ for each $a\in C$.
The sum $\sum_{a\in C}e_a^k$ is nonzero since the arborescence $T^k$ contains no circuit and hence cannot contain all arcs of $C$. Summing Equation~\eqref{eq:cost} over the arcs of $C$ (the `$\pi$' terms telescope), we get $$\omega^{dir}=-\frac{\sum_{a\in C}\alpha_a^k\sum_{k'\in K}x_a^{dir,k'}}{\sum_{a\in C}e_a^k}<0.$$ This contradicts Equation~\eqref{eq:omega}. Hence $x_a^{dir,k}=0$ for all $k\in K$ and $a\in A^k$. \\
We show now that $\boldsymbol{\pi}^{dir}=0$. We start by noting that Equation~\eqref{eq:cost} becomes $$\pi_u^{dir,k}-\pi_v^{dir,k}-\mu_{uv}^{dir,k}=0, \quad \mbox{ for all $k\in K$ and $(u,v)\in T^k$}.$$ Since $T^k$ is an $s^k$-arborescence, we have $0=\pi_{s^k}^{dir,k}\geq\pi_v^{dir,k}$ for all $v\in V^k$, according to Equation~\eqref{eq:omega}.
Define now $F^k$ to be the set of arcs $a\in A^k$ such that $(a,k)\in B^x$. By Remark~\ref{rem:size} of Section~\ref{subsec:bases}, the matrix $\overline{M}^{\boldsymbol{e}}_B$ has a nonzero entry in each of its first $\sum_{k\in K}(|V^k|-1)$ rows, which implies that the arcs of $F^k$ cover all vertices in $V^k\setminus\{s^k\}$.
According to the non-degeneracy assumption, $\bar{x}_a^k$ is nonzero on all arcs of $F^k$. The complementarity condition for all points of the ray gives $\bar{\boldsymbol{x}}\cdot\boldsymbol{\mu}^{dir}+\boldsymbol{x}^{dir}\cdot\bar{\boldsymbol{\mu}}=0$, and since $\boldsymbol{x}^{dir} = \boldsymbol{0}$, we have
$\bar{\boldsymbol{x}}\cdot\boldsymbol{\mu}^{dir}=0$. Hence $\mu_{uv}^{dir,k}=0$ for all $(u,v)\in F^k$, and Equation~\eqref{eq:cost} becomes
\begin{equation}\label{eq:cost_bis}\pi_u^{dir,k}-\pi_v^{dir,k}+e_{uv}^k\omega^{dir}=0 \quad \mbox{ for all $k\in K$ and $(u,v)\in F^k$}.\end{equation} Thus, according to Equation~\eqref{eq:omega}, we have $0=\pi_{s^k}^{dir,k}\leq\pi_v^{dir,k}$ for all $v\in V^k$. Since we have already shown the reverse inequality, we have $\pi_v^{dir,k}=0$ for all $v\in V^k$. \\
Now, if $T^k\neq F^k$ for at least one $k$, we get the existence of an arc $(u,v)\in F^k$ for which $e_{uv}^k=1$, while $\pi_u^{dir,k}=\pi_v^{dir,k}=0$. Equation~\eqref{eq:cost_bis} then implies that $\omega^{dir}=0$. Still using $\boldsymbol{x}^{dir}=\boldsymbol{0}$, Equation~\eqref{eq:cost} then gives $\boldsymbol{\mu}^{dir}=\boldsymbol{0}$, which contradicts the fact that $\rho$ is an infinite ray.
Therefore, we have $T^k=F^k$ for all $k$. Using Remark~\ref{rem:base} of Section~\ref{subsec:init}, we are at the initial basic solution: $B=B^{ini}$. According to Equation~\eqref{eq:cost}, and since $\boldsymbol{x}^{dir}=\boldsymbol{0}$ and $\boldsymbol{\pi}^{dir}=\boldsymbol{0}$, we have $\mu_{uv}^{dir,k}=e_{uv}^k\omega^{dir}$ for all $k\in K$ and $(u,v)\in A^k$. Thus $(\boldsymbol{x}^{dir},\boldsymbol{\mu}^{dir},\omega^{dir},\boldsymbol{\pi}^{dir})=\omega^{dir}(\boldsymbol{0},\boldsymbol{e},1,\boldsymbol{0})$ for $\omega^{dir}\geq 0$, and $\rho$ is necessarily the primary ray $\rho^{ini}$.
Then there is no secondary ray, as required.\end{proof}
\subsubsection{A Lemke-like algorithm}\label{sec:algo}
Assuming non-degeneracy, the combination of Lemma~\ref{lem:pivotout} and the point made explicit in Section~\ref{subsec:comp} gives rise to a Lemke-like algorithm. Two feasible complementary bases $B$ and $B'$ are said to be {\em neighbors} if $B'$ can be obtained from $B$ by a pivot operation using one of the twin indices as an entering index, see Section~\ref{subsec:comp}. Note that this is a symmetric notion: $B$ can then also be obtained from $B'$ by a similar pivot operation. The abstract graph whose vertices are the feasible complementary bases and whose edges connect neighboring bases is thus a collection of paths and cycles. According to Lemma~\ref{lem:initbasis}, we can find in polynomial time an initial feasible complementary basis for~\eqref{pb:AMNEP} with the chosen vector $\boldsymbol{e}$. This initial basis has exactly one neighbor according to Lemma~\ref{lem:infiniteray}, since there is a primary ray and no secondary ray (Lemma~\ref{lem:nosecondary}).
Algorithm~\ref{algo:pivot_compl} explains how to follow the path starting at this initial feasible complementary basis. Function \texttt{EnteringIndex}$(B,i')$, defined for a feasible complementary basis $B$ and a twin index $i'\notin B$ of $B$, computes the other twin index $i\neq i'$. Function \texttt{LeavingIndex}$(B,i)$, defined for a feasible complementary basis $B$ and an index $i\notin B$, computes the unique index $j\neq i$ such that $B\cup\{i\}\setminus\{j\}$ is a feasible complementary basis (see Lemma~\ref{lem:pivotout}).
Since there is no secondary ray (Lemma~\ref{lem:nosecondary}), Lemma~\ref{lem:infiniteray} ensures that a pivot operation is possible as long as there are twin indices. By finiteness, a component of the abstract graph having an endpoint necessarily has another endpoint. It follows that the algorithm eventually reaches a basis $B$ without twin indices. For such a basis we have $o\notin B$ (Section~\ref{subsec:comp}), which means that we have a solution of the program~\eqref{pb:AMNEP} with $\omega=0$, i.e., a solution of the program~\eqref{pb:MNEP}, and thus a solution of our initial problem.
\begin{algorithm}
\begin{algorithmic}
\State{\bf Input. }The matrix $\overline{M}^{\boldsymbol{e}}$, the matrix $M$, the vectors $\boldsymbol{b}$ and $\boldsymbol{\beta}$, an initial feasible complementary basis $B^{ini}$\;
\State{\bf Output. }A feasible basis $B^{end}$ with $o\notin B^{end}$\;
\State$\phi^{\mu}(a_0,k_0)\leftarrow$ twin index in $\mathcal{M}$\;
\State$i\leftarrow\mbox{\texttt{EnteringIndex}}(B^{ini},\phi^{\mu}(a_0,k_0))$\;
\State$j\leftarrow\mbox{\texttt{LeavingIndex}}(B^{ini},i)$\;
\State$B^{curr}\leftarrow B^{ini}\cup\{i\}\setminus\{j\}$\;
\While{There are twin indices}
\State$i\leftarrow\mbox{\texttt{EnteringIndex}}(B^{curr},j)$\;
\State$j\leftarrow\mbox{\texttt{LeavingIndex}}(B^{curr},i)$\;
\State$B^{curr}\leftarrow B^{curr}\cup\{i\}\setminus\{j\}$\;
\EndWhile
\State $B^{end}\leftarrow B^{curr}$\;
\State \Return $B^{end}$\;
\end{algorithmic}
\caption{Lemke-like algorithm} \label{algo:pivot_compl}
\end{algorithm}
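Algorithm~\ref{algo:pivot_compl} instantiates the classical complementary-pivot scheme of Lemke. To illustrate the mechanics of the entering/leaving rules, here is a minimal, self-contained sketch of Lemke's method for a standard linear complementarity problem $\boldsymbol{w}=M\boldsymbol{z}+\boldsymbol{q}$, $\boldsymbol{w},\boldsymbol{z}\geq\boldsymbol{0}$, $\boldsymbol{w}\cdot\boldsymbol{z}=0$; this is our own illustration of the technique, not the implementation used in the experiments of Section~\ref{subsec:experiments}:

```python
def lemke(M, q, tol=1e-9, max_iter=1000):
    """Lemke's complementary pivot method for the LCP w = Mz + q."""
    n = len(q)
    # Tableau columns: w_0..w_{n-1} | z_0..z_{n-1} | z0 (artificial) | rhs
    T = [[0.0] * (2 * n + 2) for _ in range(n)]
    for i in range(n):
        T[i][i] = 1.0
        for j in range(n):
            T[i][n + j] = -M[i][j]
        T[i][2 * n] = -1.0            # covering vector
        T[i][2 * n + 1] = q[i]
    basis = list(range(n))            # the w variables start out basic
    if min(q) >= 0:
        return [0.0] * n              # q >= 0: z = 0 already solves the LCP

    def pivot(row, col):
        p = T[row][col]
        T[row] = [v / p for v in T[row]]
        for r in range(n):
            if r != row and abs(T[r][col]) > tol:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[row])]
        left, basis[row] = basis[row], col
        return left                   # index that left the basis

    # z0 enters; the row with the most negative rhs leaves
    row = min(range(n), key=lambda i: T[i][2 * n + 1])
    leaving = pivot(row, 2 * n)
    for _ in range(max_iter):
        # entering index = complement (twin) of the variable that just left
        col = leaving + n if leaving < n else leaving - n
        # leaving index: minimum-ratio test (unique under non-degeneracy)
        rows = [i for i in range(n) if T[i][col] > tol]
        if not rows:
            return None               # secondary ray: no solution found
        row = min(rows, key=lambda i: T[i][2 * n + 1] / T[i][col])
        leaving = pivot(row, col)
        if leaving == 2 * n:          # z0 left the basis: we are done
            break
    z = [0.0] * n
    for i, b in enumerate(basis):
        if n <= b < 2 * n:
            z[b - n] = T[i][2 * n + 1]
    return z
```

With $M=\bigl(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\bigr)$ and $\boldsymbol{q}=(-5,-6)$, the method terminates after three pivots with $\boldsymbol{z}=(4/3,7/3)$, for which $M\boldsymbol{z}+\boldsymbol{q}=\boldsymbol{0}$.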
\subsection{Algorithm and main result} \label{subsec:lemke}
We are now in a position to describe the full algorithm under the non-degeneracy assumption.
\begin{itemize}
\item Compute a collection $\mathcal{T}=(T^k)_{k\in K}$, where, for each $k\in K$, $T^k\subseteq A^k$ is an $s^k$-arborescence of $(V^k,A^k)$.
\item Define $\boldsymbol{e}$ as in Equation~\eqref{eq:defe} (which depends on $\mathcal{T}$).
\item Define $Y=\{\phi^x(a,k):\,a\in T^k, k\in K\}\cup\{\phi^{\mu}(a,k):\,a\in A^k\setminus T^k, k\in K\}\cup\{o\}$.
\item If $Y\setminus \{o\}$ is a feasible complementary basis providing an optimal solution of the program~\eqref{pb:AMNEP} with $\omega=0$, then we have a solution of the program~\eqref{pb:MNEP}, see Lemma~\ref{lem:initbasis}.
\item Otherwise, let $B^{ini}$ be defined as in Lemma~\ref{lem:initbasis} and apply Algorithm~\ref{algo:pivot_compl}, which returns a basis $B^{end}$.
\item Compute the basic solution associated to $B^{end}$.
\end{itemize}
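The first step requires, for each class, an $s^k$-arborescence of $(V^k,A^k)$. Assuming every vertex is reachable from $s^k$, such an arborescence can be extracted by a breadth-first search; a minimal sketch (the function name is ours):

```python
from collections import deque

def arborescence(vertices, arcs, root):
    """Return a set of arcs forming an out-arborescence rooted at `root`,
    i.e. a tree with exactly one arc entering every other vertex."""
    out = {v: [] for v in vertices}
    for (u, v) in arcs:
        out[u].append(v)
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in out[u]:
            if v not in seen:
                seen.add(v)
                tree.add((u, v))      # keep the BFS tree arc (parent, child)
                queue.append(v)
    if len(seen) != len(vertices):
        raise ValueError("some vertex is not reachable from the root")
    return tree
```

The returned set contains exactly one arc entering each vertex other than the root, hence $|T^k|=|V^k|-1$.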
All the elements proved in Section~\ref{subsec:tools} lead to the following result.
\begin{theorem}\label{thm:main}
Under the non-degeneracy assumption, this algorithm solves the program~\eqref{pb:MNEP}.
\end{theorem}
This result actually provides a constructive proof of the existence of an equilibrium for the Multiclass Network Equilibrium Problem when the costs are affine and strictly increasing, even if the non-degeneracy assumption is not satisfied. If we compute $\boldsymbol{b}=(b_v^k)$ strictly according to the model, we have \begin{equation}\label{eq:def_b} b_v^k = \left\{\begin{array}{ll}\lambda(I^k) & \mbox{if $v=s^k$} \\ -\lambda(I^k) & \mbox{if $v=t^k$} \\ 0 & \mbox{otherwise}.\end{array}\right.\end{equation} In this case, the non-degeneracy assumption is not satisfied, as noted at the end of Section~\ref{subsec:init} (Remark~\ref{rem:degeneracy}).
However, we can slightly perturb $\boldsymbol{b}$ and $-\boldsymbol{\beta}$ in such a way that any feasible complementary basis of the perturbed problem is still a feasible complementary basis for the original problem. Such a perturbation exists by standard arguments, see~\cite{CPS92}. Theorem~\ref{thm:main} then ensures the termination of the algorithm on a feasible complementary basis $B$ whose basic solution satisfies $\omega=0$. Therefore, the algorithm solves the Multiclass Network Equilibrium Problem with affine costs in any case.\\
A consequence of Theorem~\ref{thm:main} is the following. Consider the Multiclass Network Equilibrium Problem with affine costs. If the demands $\lambda(I^k)$ and the coefficients involved in the costs are rational numbers, then there exists an equilibrium inducing rational flows on each arc and for each class $k$. This is reminiscent of a similar result for two-player matrix games: if the matrices involve only rational entries, there is an equilibrium involving only rational numbers \citep{Na51}.
\subsection{Computational experiments}\label{subsec:experiments}
\subsubsection{Instances}
The experiments are made on $n \times n$ grid graphs (Manhattan instances). For each pair of adjacent vertices $u$ and $v$, both arcs $(u,v)$ and $(v,u)$ are present. We built several instances on these graphs with various sizes $n$, various numbers of classes, and various cost parameters $\alpha_a^k,\beta_a^k$. The cost parameters were chosen uniformly at random such that for all $a$ and all $k$ $$\alpha_a^k\in[1,10]\quad\mbox{and}\quad\beta_a^k\in[0,100].$$
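Such an instance can be generated as follows (a hypothetical reconstruction for illustration; the actual experimental code is in C++):

```python
import random

def manhattan_instance(n, n_classes, seed=0):
    """Build an n x n grid (Manhattan) instance: both arcs (u,v) and (v,u)
    between adjacent vertices, with cost parameters drawn uniformly,
    alpha_a^k in [1, 10] and beta_a^k in [0, 100]."""
    rng = random.Random(seed)
    vertices = [(i, j) for i in range(n) for j in range(n)]
    arcs = []
    for i in range(n):
        for j in range(n):
            if j + 1 < n:   # horizontal neighbors, both directions
                arcs += [((i, j), (i, j + 1)), ((i, j + 1), (i, j))]
            if i + 1 < n:   # vertical neighbors, both directions
                arcs += [((i, j), (i + 1, j)), ((i + 1, j), (i, j))]
    alpha = {(a, k): rng.uniform(1.0, 10.0)
             for a in arcs for k in range(n_classes)}
    beta = {(a, k): rng.uniform(0.0, 100.0)
            for a in arcs for k in range(n_classes)}
    return vertices, arcs, alpha, beta
```

An $n\times n$ grid has $n^2$ vertices and $4n(n-1)$ arcs; for $n=4$ this yields $16$ vertices and $48$ arcs, matching the figures in Table~\ref{tab:result}.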
\subsubsection{Results}
The algorithm has been coded in C++ and tested on a PC Intel{\small\textsuperscript{\textregistered}} Core{\small\textsuperscript{\texttrademark}} i5-2520M clocked at 2.5 GHz, with 4 GB RAM. The computational results are given in Table~\ref{tab:result}. Each row of the table contains average figures obtained on five instances on the same graph and with the same number of classes, but with various origins, destinations, and cost parameters.
\begin{table}
\begin{center}
\begin{tabular}{cc|cc|ccc}
Classes & Grid & Vertices & Arcs & Pivots & Algorithm~\ref{algo:pivot_compl} & Inversion \\
& & & & & (seconds) & (seconds) \\ \hline
2 & 2 $\times$ 2 & 4 & 8 & 2 & $<$0.01 & $<$0.01 \\
& 4 $\times$ 4 & 16 & 48 & 21 & 0.01 & 0.03 \\
& 6 $\times$ 6 & 36 & 120 & 54 & 0.08 & 0.5\\
& 8 $\times$ 8 & 64 & 224 & 129 & 0.9 & 4.0 \\ \hline
3 & 2 $\times$ 2 & 4 & 8 & 4 & $<$0.01 & $<$0.01 \\
& 4 $\times$ 4 & 16 & 48 & 33 & 0.03 & 0.1 \\
& 6 $\times$ 6 & 36 & 120 & 97 & 0.4 & 1.9\\
& 8 $\times$ 8 & 64 & 224 & 183 & 2.6 & 12 \\ \hline
4 & 2 $\times$ 2 & 4 & 8 & 3 & $<$0.01 & $<$0.01 \\
& 4 $\times$ 4 & 16 & 48 & 41 & 0.06 & 0.3 \\
& 6 $\times$ 6 & 36 & 120 & 126 & 0.9 & 4.7 \\
& 8 $\times$ 8 & 64 & 224 & 249 & 5.4 & 25 \\ \hline
10 & 2 $\times$ 2 & 4 & 8 & 11 & $<$0.01 & 0.02 \\
& 4 $\times$ 4 & 16 & 48 & 107 & 0.7 & 4.1 \\
& 6 $\times$ 6 & 36 & 120 & 322 & 15 & 70 \\
& 8 $\times$ 8 & 64 & 224 & 638 & 87 & 385 \\ \hline
50 & 2 $\times$ 2 & 4 & 8 & 56 & 0.3 & 2.6 \\
& 4 $\times$ 4 & 16 & 48 & 636 & 105 & 511 \\
\end{tabular}
\end{center}
\caption{Performances of the complete algorithm for various instance sizes}
\label{tab:result}
\end{table}
The columns ``Classes'', ``Vertices'', and ``Arcs'' contain respectively the number of classes, the number of vertices, and the number of arcs. The column ``Pivots'' contains the number of pivots performed by the algorithm. They are done during Step 5 in the description of the algorithm in Section~\ref{subsec:lemke} (application of Algorithm~\ref{algo:pivot_compl}). The column ``Algorithm~\ref{algo:pivot_compl}'' provides the time needed for the whole execution of this pivoting step. The preparation of this pivoting step requires a first matrix inversion, and the final computation of the solution requires such an inversion as well. The times needed to perform these inversions are given in the column ``Inversion''. The total time needed by the complete algorithm to solve the problem is the sum of the ``Algorithm~\ref{algo:pivot_compl}'' time and twice the ``Inversion'' time, the other steps of the algorithm taking a negligible time.
The number of pivots always remains moderate. Although the time needed to solve large instances can be substantial with respect to the size of the graph, most of the computation time is spent on the two matrix inversions. The program has not been optimized for this task, and several efficient techniques for inverting matrices are known. The results can therefore be considered very encouraging.
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
In Witten's 2+1 gravity\cite{achu,witten1} on spacetimes of the form
${\cal M} = {\Bbb R} \times \Sigma$, where $\Sigma$ is a closed
orientable two-manifold of genus~$g$, the simplest nontrivial case of
$g=1$ is known to differ in several qualitative ways from the generic
case of $g>1$\cite{mess,carlip1,carlip2,carlip3,louma,carlip-water}.
One facet of this is the way that large diffeomorphisms
({\it i.e\/}., diffeomorphisms disconnected from the identity)
appear in the theory.
For $g>1$, the geometrodynamically relevant connected component of
the classical solution space is the cotangent bundle over the Teichm\"uller\
space $T^g$ of~$\Sigma$. The quotient group $G$ of all
diffeomorphisms modulo diffeomorphisms connected to the identity is
the modular group, and the quotient space $T^g/G$ is the Riemann
moduli space, which is a smooth manifold everywhere except at
isolated singularities. Upon quantization, one option is to take the
Hilbert space to be $L^2(T^g)$ with respect to a natural volume
element on $T^g$\cite{witten1,goldman1,goldman2,AAbook2}, and to let
the large diffeomorphisms act on this space as symmetries. Another
option is to treat the large diffeomorphisms as gauge, in which case
they should be factored out from the quantum theory; this can be
achieved by taking the Hilbert space to be~$L^2(T^g/G)$.
For $g=1$ the classical solution space is not a manifold, nor is
the subset that corresponds to conventional
geometrodynamics\cite{carlip1,louma,moncrief1}. The attention has
therefore often been fixed to the so-called ``spacelike sector" of
the theory, where the classical solution space consists of two copies
of the ``square root" geometrodynamical
theory\cite{moncrief1,hosoya-nakao1} glued together by a
lower-dimensional non-geometrodynamical part\cite{louma}. The
configuration space of this sector can be regarded as ${\cal N}_{\rm
S}:=\left({\Bbb R}^2\setminus\{(0,0)\}\right)/{\Bbb Z}_2$, where the
${\Bbb Z}_2$ action on ${\Bbb R}^2\setminus\{(0,0)\}$ is generated by the
map $(x^1,x^2)\mapsto(-x^1,-x^2)$, and the phase space is the
cotangent bundle over~${\cal N}_{\rm S}$. ${\cal N}_{\rm S}$ is
equipped with the volume element $d\mu := dx^1
dx^2$\cite{AAbook2,five-a}. The modular group is now ${{\rm SL}(2,\BbbZ)}$, and
its action on ${\cal N}_{\rm S}$ is induced from the action on
${\Bbb R}^2$ given by
\begin{equation}
\left(
\begin{array}{c}
x^1 \\
x^2 \\
\end{array}
\right)
\longmapsto
M
\left(
\begin{array}{c}
x^1 \\
x^2 \\
\end{array}
\right)
\ \ , \ \
M \in {{\rm SL}(2,\BbbZ)}
\ \ ,
\end{equation}
where $M$ on the right hand side acts on the column vector by usual
matrix multiplication. (Clearly, this ${{\rm SL}(2,\BbbZ)}$ action on ${\cal
N}_{\rm S}$ reduces to an action of the factor group
${{\rm PSL}(2,\BbbZ)}={{\rm SL}(2,\BbbZ)}/\{\openone,-\openone\}$, where $\openone$ stands
for the two by two unit matrix.) If now the large diffeomorphisms are
understood as a symmetry, the Hilbert space can be taken to be
${{\cal H}}_{\rm S} := L^2({\cal N}_{\rm S};
d\mu)$\cite{carlip1,louma,AAbook2,five-a}, and the action of
${{\rm SL}(2,\BbbZ)}$ on ${\cal N}_{\rm S}$ clearly induces a unitary
representation ${\sf T}^{\rm S}_{{\rm SL}(2,\BbbZ)}$ of ${{\rm SL}(2,\BbbZ)}$
on~${{\cal H}}_{\rm S}$. Treating the large diffeomorphisms as
gauge is more problematic, however. One attempt might be to follow
the logic of the higher genus surfaces and regard the quotient space
${\cal N}_{\rm S}/{{\rm SL}(2,\BbbZ)}$ as a configuration space on which the
quantum theory is to be built. However, the action of ${{\rm SL}(2,\BbbZ)}$ on
${\cal N}_{\rm S}$ is not properly discontinuous. In fact, each
half-line with rational $x^2/x^1$ is fixed by an infinite Abelian
subgroup, whereas the half-lines with irrational $x^2/x^1$ are fixed
only by $\pm\openone$\cite{peldan}. This implies that the isomorphism
class of stabilizer subgroups is nowhere locally constant. The
quotient space ${\cal N}_{\rm S}/{{\rm SL}(2,\BbbZ)}$ is thus nowhere locally a
manifold, and it is not obvious whether one could use such a space as
a configuration space for the quantum theory. An alternative attempt
might be to employ the Hilbert space~${\cal H}_{\rm S}$, but to
restrict observables to those that commute with the given unitary
action of ${{\rm SL}(2,\BbbZ)}$. This prevents observables from having non-zero
matrix elements between states belonging to inequivalent
subrepresentations. Therefore, no state that can be decomposed into
the sum of states belonging to inequivalent subrepresentations can be
a pure state. Here, ``state" is understood to mean ``state for the
algebra of observables;" see {\it e.g\/}.\cite{bogolubov}. However,
from the results in Sections \ref{sec:sltwor} and \ref{sec:sltwoz} it
follows that {\it every\/} state can be written in such a form (as
will be discussed more explicitly in the following paragraph). The
attempt to reduce the gauge redundancy in this fashion thus leads to
the somewhat paradoxical result that there are no pure states.
That the gauge interpretation of large diffeomorphisms leads to
the absence of pure states is, in fact, not at all
surprising. This can be seen from a more familiar example:
Consider the Hilbert space ${\cal H}=L^2({\Bbb R}^2; dx^1dx^2)$
and the unitary action of the translation group in
$x^1$-direction. We want to regard the translations
(for simplicity we shall refer to the $x^1$-translations
simply as translations) as gauge, and
hence require all observables to commute with them. Their integral
kernels, $O(x^1,x^2;y^1,y^2)$, thus only depend on $x^1-y^1$, $x^2$,
and~$y^2$. On the other hand, each translation invariant subspace,
${\cal H}_{\Delta}\subset {\cal H}$, is uniquely given
by those functions whose Fourier transform in $x^1$ vanishes
almost everywhere outside the measurable set
$\Delta\subset {\Bbb R}$ in the transformed $x^1$
coordinate\cite{rudin}. Given two disjoint measurable sets,
$\Delta_1$ and~$\Delta_2$, it is easy to see that all matrix
elements of observables between states in
${{\cal H}}_{\Delta_1}$ and ${\cal H}_{\Delta_2}$ vanish.
This is a direct consequence of the fact that translation invariant
operators cannot increase the support of the Fourier transform,
and can also be checked by direct calculation using the
property of integral kernels given above. On the other hand, given a
measurable set $\Delta$ of non-zero measure, it is always possible to
find a measurable subset $\Delta_1\subset\Delta$
of non-zero but strictly smaller measure.
(A~short proof of this fact will be given in
Appendix~\ref{app:lebesgue}.) Denoting by $\Delta_2$ the complement
of $\Delta_1$ in~$\Delta$, we have a decomposition
$\Delta=\Delta_1\cup\Delta_2$ of $\Delta$ into disjoint subsets of
non-zero measures. If we take for $\Delta$ the support in the Fourier
transformed $x^1$ coordinate of an arbitrary element
in~${\cal H}$, we immediately infer that {\it any\/} vector is
the sum of two vectors which lie in different and strictly smaller
invariant subspaces. Hence there are no pure states. A~similar
argument applies to our gravitational case, using the diffeomorphism
invariant subspaces of Theorem \ref{sec:sltwor}.2 in
Section~\ref{sec:sltwor}.
In the above simple example it is clear how the redundant
translations are properly eliminated: instead of ${\cal H}$ one
considers the Hilbert space $L^2({\Bbb R};dx^2)$ of square integrable
functions over the classical reduced configuration space, which may
be identified with the $x^2$ axis. However, in our gravitational case
the analogous option is not at our direct disposal, due to the
complicated structure of ${\cal N}_{\rm S}/{{\rm SL}(2,\BbbZ)}$. In this paper we
shall therefore concentrate on the theory in which the large
diffeomorphisms are treated as symmetries. The Hilbert space is
thus~${\cal H}_{\rm S}$, the large diffeomorphisms act on
${\cal H}_{\rm S}$ by ${\sf T}^{\rm S}_{{\rm SL}(2,\BbbZ)}$, and the algebra
of observables is taken to be the full algebra $B({\cal H}_{\rm
S})$ of bounded operators on~${\cal H}_{\rm S}$. Note that this
means allowing, in principle, observables that do not commute with
the large diffeomorphisms. At the fundamental level this theory has
no superselection rules, and rays in ${\cal H}_{\rm S}$ are in
bijective correspondence to pure states.
The purpose of this paper is to make two observations about the
unitary representation ${\sf T}^{\rm S}_{{\rm SL}(2,\BbbZ)}$ of ${{\rm SL}(2,\BbbZ)}$ on
${{\cal H}}_{\rm S}$. On the one hand, we point out that
${\sf T}^{\rm S}_{{\rm SL}(2,\BbbZ)}$ is reducible, and we exhibit a class of
infinite dimensional closed invariant subspaces. On the other hand,
we demonstrate that ${{\cal H}}_{\rm S}$ contains no nontrivial
finite dimensional invariant subspaces.\footnote{While this paper was
in preparation, an independent argument ruling out nontrivial finite
dimensional invariant subspaces was given in the revised version of
Ref.\cite{peldan}. We thank Peter Peld\'an for discussions on this
issue.}
Our starting point is the decomposition\cite{knapp-prob,howe-tan-ex}
of the standard unitary representation ${\sf T}_{{\rm SL}(2,\BbbR)}$ of ${{\rm SL}(2,\BbbR)}$
on $L^2({\Bbb R}^2)$ into a direct integral of irreducible unitary
representations, all of whom belong to the principal
series\cite{bargmann,lang,knapp,howe-tan}. This decomposition yields
an obvious construction of closed infinite dimensional subspaces of
$L^2({\Bbb R}^2)$ that are invariant under~${\sf T}_{{\rm SL}(2,\BbbR)}$. Projecting
$L^2({\Bbb R}^2)$ to the subspace~${{\cal H}}_{\rm S}$ then
produces closed infinite dimensional subspaces of
${{\cal H}}_{\rm S}$ that are invariant under ${\sf T}_{{\rm SL}(2,\BbbR)}$,
and hence also under ${\sf T}^{\rm S}_{{\rm SL}(2,\BbbZ)}$. This is the first
claim above.
To prove the second claim, let us denote by ${\sf T}_{{\rm SL}(2,\BbbZ)}$ the
restriction of ${\sf T}_{{\rm SL}(2,\BbbR)}$ to ${{\rm SL}(2,\BbbZ)}$. It is known that the
principal series irreducible unitary representations of ${{\rm SL}(2,\BbbR)}$
restrict to irreducible representations of ${{\rm SL}(2,\BbbZ)}$\cite{cowsteg}.
(Further, two representations of ${{\rm SL}(2,\BbbZ)}$ obtained in this fashion
are equivalent only if the corresponding representations of ${{\rm SL}(2,\BbbR)}$
are\cite{cowsteg,bishsteg}.) This means that the direct integral
decomposition of ${\sf T}_{{\rm SL}(2,\BbbR)}$ yields, through restriction to
${{\rm SL}(2,\BbbZ)}$, a decomposition of ${\sf T}_{{\rm SL}(2,\BbbZ)}$ into a direct
integral of irreducible infinite dimensional representations. It
follows immediately that $L^2({\Bbb R}^2)$ has no nontrivial finite
dimensional subspaces that are invariant under~${\sf T}_{{\rm SL}(2,\BbbZ)}$.
This implies the claim.
With the decomposition of~${\sf T}_{{\rm SL}(2,\BbbZ)}$, one can translate the
action of ${{\rm SL}(2,\BbbZ)}$ on the configuration space ${\cal N}_{\rm S}$
into an action of ${{\rm SL}(2,\BbbZ)}$ on~$S^1$, where the $S^1$ arises as the
configuration space of the constituent irreducible representations of
${{\rm SL}(2,\BbbZ)}$ on~$L^2(S^1)$. The problem of building a quantum theory
with the configuration space ${\cal N}_{\rm S}/{{\rm SL}(2,\BbbZ)}$ is thus
translated into the problem of building quantum theories with the
configuration space $S^1/{{\rm SL}(2,\BbbZ)}$. We show that the difficulty
persists: the action of ${{\rm SL}(2,\BbbZ)}$ on $S^1$ is not properly
discontinuous, and $S^1/{{\rm SL}(2,\BbbZ)}$ is nowhere locally a manifold.
The rest of the paper is as follows. In Section \ref{sec:sltwor} we
present the decomposition of the standard representation of ${{\rm SL}(2,\BbbR)}$
on $L^2({\Bbb R}^2)$ into a direct integral of irreducible
representations, and we use this decomposition to construct a class
of closed invariant subspaces. The restriction to the subgroup
${{\rm SL}(2,\BbbZ)}$ is addressed in Section~\ref{sec:sltwoz}. Section
\ref{sec:discussion} contains a brief discussion. Appendix
\ref{app:lebesgue} recalls an elementary property of the Lebesgue
measure, and Appendix \ref{app:Sone} contains an analysis of the
quotient space $S^1/{{\rm SL}(2,\BbbZ)}$.
\section{Representation of ${{\rm SL}(2,\BbbR)}$ on $L^2({\Bbb R}^2)$}
\label{sec:sltwor}
In this section we first review the decomposition of the standard
unitary representation of ${{\rm SL}(2,\BbbR)}$ on $L^2({\Bbb R}^2)$ into
irreducible unitary representations\cite{knapp-prob}. We then note
that the decomposition presents an obvious way of constructing
infinite dimensional closed invariant subspaces of~$L^2({\Bbb R}^2)$.
Let $(x^1,x^2)$ be a pair of global coordinates on~${\Bbb R}^2$.
The group ${{\rm SL}(2,\BbbR)}$ has on ${\Bbb R}^2$ the natural associative
action
\begin{equation}
\left(
\begin{array}{c}
x^1 \\
x^2 \\
\end{array}
\right)
\longmapsto
M
\left(
\begin{array}{c}
x^1 \\
x^2 \\
\end{array}
\right)
\ \ , \ \
M \in {{\rm SL}(2,\BbbR)}
\ \ ,
\label{sltworactionR}
\end{equation}
where $M$ on the right hand side acts on the column vector by usual
matrix multiplication. Denoting a point on ${\Bbb R}^2$ by~$x$, we write
this action as $x\mapsto Mx$.
Let ${{\cal H}} := L^2({\Bbb R}^2)=L^2({\Bbb R}^2;dx^1dx^2)$ be the
Hilbert space of square integrable
functions\footnote{Note that functions represent the same element in
$L^2$ spaces if they differ at most on a set of measure zero.
We shall therefore throughout understand functions to be defined
only almost everywhere~(a.e.), and pointwise equations for the
functions to hold only a.e.}
on ${\Bbb R}^2$ with the inner product
\begin{equation}
(f,g) := \int dx^1 dx^2 \, {\overline f}g
\ \ .
\label{ip}
\end{equation}
We define a representation ${\sf T}$ of ${{\rm SL}(2,\BbbR)}$
on~${\cal H}$,
$f\mapsto {\sf T}(M)f$, by
\begin{equation}
{\sf T}(M)f(x) := f (M^{-1}x)
\ \ .
\label{sltworactionH}
\end{equation}
This representation is clearly unitary,
$({\sf T}(M)f,{\sf T}(M)g)=(f,g)$. Our aim is to decompose this
representation into its irreducible components. As ${{\rm SL}(2,\BbbR)}$ is a
Type~I group, the decomposition is essentially unique\cite{mackey}.
We first rewrite the ${{\rm SL}(2,\BbbR)}$ action on ${\Bbb R}^2$
(\ref{sltworactionR}) in a more convenient manner. This action
clearly leaves the origin invariant.
On ${\Bbb R}^2\setminus\{(0,0)\}$,
introduce the polar coordinates $(r,\theta)$ through
\begin{equation}
\begin{array}{rl}
&x^1 = r \cos\theta
\ \ ,
\\
&x^2 = r \sin\theta
\ \ ,
\label{polarcoords}
\end{array}
\end{equation}
where $r>0$, and $\theta$ is understood periodic with period~$2\pi$.
We parametrize a matrix $M\in{{\rm SL}(2,\BbbR)}$ as
\begin{equation}
M = U
\left(
\begin{array}{rr}
\alpha & \beta \\
{\bar\beta} & {\bar\alpha} \\
\end{array}
\right)
U^{-1}
\ \ ,
\label{suoneonepar}
\end{equation}
where $U$ is the unitary matrix
\begin{equation}
U :=
{1 \over \sqrt{2}}
\left(
\begin{array}{rr}
1 & 1 \\
i & -i \\
\end{array}
\right)
\ \ ,
\end{equation}
and $\alpha$ and $\beta$ are complex numbers satisfying
$\alpha{\bar\alpha}-\beta{\bar\beta}=1$. This parametrization is
one-to-one, and if the matrix
$\pmatrix{ \alpha & \beta \cr
{\bar\beta} & {\bar\alpha} \cr}$
is interpreted as an element of ${{\rm SU}(1,1)}$,
(\ref{suoneonepar}) defines an isomorphism
${{\rm SL}(2,\BbbR)} \simeq {{\rm SU}(1,1)}$\cite{bargmann}.
The ${{\rm SL}(2,\BbbR)}$ action (\ref{sltworactionR}) on
${\Bbb R}^2\setminus\{(0,0)\}$ takes then the form
$(r,\theta) \mapsto (Mr,M\theta)$, where
\begin{mathletters}
\label{sltwoactionpolar}
\begin{eqnarray}
e^{iM\theta} &=& e^{i\theta}
{{\overline{W(M,\theta)}}
\over
|W(M,\theta)|}
\ \ ,
\label{sltwoactiontheta}
\\
\noalign{\bigskip}
Mr &=& r |W(M,\theta)|
\ \ ,
\end{eqnarray}
\end{mathletters}
with
\begin{equation}
W(M,\theta) := \alpha + \beta e^{2i\theta}
\ \ .
\end{equation}
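These formulas can be checked numerically: reading off $\alpha$ and $\beta$ from the conjugation~(\ref{suoneonepar}), the polar action~(\ref{sltwoactionpolar}) must reproduce the linear action~(\ref{sltworactionR}). A short verification sketch (for illustration only):

```python
import cmath
import math

def su11_params(M):
    """alpha, beta of U^{-1} M U for M = (a, b, c, d) in SL(2, R),
    with U = (1 / sqrt(2)) [[1, 1], [i, -i]], computed in closed form."""
    a, b, c, d = M
    return (0.5 * ((a + d) + 1j * (b - c)),
            0.5 * ((a - d) - 1j * (b + c)))

def polar_action(M, r, theta):
    """(M r, M theta): the SL(2, R) action in polar coordinates."""
    alpha, beta = su11_params(M)
    W = alpha + beta * cmath.exp(2j * theta)
    Mr = r * abs(W)
    # e^{i M theta} = e^{i theta} * conj(W) / |W|
    Mtheta = cmath.phase(cmath.exp(1j * theta) * W.conjugate() / abs(W))
    return Mr, Mtheta
```

For a rotation $M$ by an angle $\varphi$, for instance, this reduces to $Mr=r$ and $M\theta=\theta+\varphi$, since then $\beta=0$ and $\alpha=e^{-i\varphi}$.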
Next, we perform a radial Mellin transform on~${{\cal H}}$. For
$f\in{{\cal H}}$, its transform ${\hat f}$ is defined by
\begin{equation}
{\hat f} (s, \theta)
:=
\int_0^\infty dr \, r^{is} f(r,\theta)
\ \ , \ \
s\in{\Bbb R}
\ \ .
\label{mellin}
\end{equation}
Here, and from now on, we understand the argument of a function in
${{\cal H}}$ to be the pair of polar coordinates
$(r,\theta)$~(\ref{polarcoords}). The transform (\ref{mellin})
defines an isomorphism
${{\cal H}}\simeq {\hat{{\cal H}}}
:= L^2({\Bbb R}
\times S^1; {(2\pi)}^{-1} ds d\theta)$:
the inner product (\ref{ip}) can be written as
\begin{equation}
(f,g) =
{1\over 2\pi} \int_{-\infty}^\infty ds
\int_{-\pi}^\pi d\theta \,
{\overline{{\hat f} (s,\theta)}} \,
{\hat g} (s,\theta)
\ \ ,
\label{ip1}
\end{equation}
and the inverse transform is
\begin{equation}
f(r,\theta) =
{1\over 2\pi} \int_{-\infty}^\infty ds \,
r^{-1-is} \, {\hat f} (s, \theta)
\ \ .
\label{imellin}
\end{equation}
These statements follow directly from the observation that in terms
of the logarithmic radial coordinate $t=\ln r$, the inner product
(\ref{ip}) reads \begin{equation}
(f,g) =
\int_{-\infty}^\infty dt
\int_{-\pi}^\pi d\theta \,
{\overline{e^t f(e^t,\theta)}} \,
{e^t g(e^t,\theta)}
\ \ ,
\end{equation}
and the transforms (\ref{mellin}) and (\ref{imellin}) reduce to an
ordinary Fourier transform pair,
\begin{mathletters}
\begin{eqnarray}
{\hat f} (s, \theta)
&=&
\int_{-\infty}^\infty dt \,
e^{ist} \, e^t f(e^t,\theta)
\ \ ,
\\
\noalign{\smallskip}
e^t f(e^t,\theta)
&=&
{1\over 2\pi} \int_{-\infty}^\infty ds \,
e^{-ist} {\hat f} (s, \theta)
\ \ .
\end{eqnarray}
\end{mathletters}
By (\ref{sltwoactionpolar}) and~(\ref{mellin}), the Mellin transform
maps the representation ${\sf T}$ of ${{\rm SL}(2,\BbbR)}$ into a
representation ${\hat {\sf T}}$ on~${\hat{{\cal H}}}$,
given by
\begin{equation}
{\hat {\sf T}}(M){\hat f} (s,\theta)
=
{| W(M, M^{-1} \theta)|}^{1+is}
{\hat f} (s, M^{-1}\theta)
\ \ .
\label{sltworactionHhat}
\end{equation}
The remarkable property of (\ref{sltworactionHhat}) is that the
different values of $s$ are decoupled. To utilize this, we write
${\hat{{\cal H}}}$ as the direct integral
\begin{equation}
{\hat{{\cal H}}}
= \int_{-\infty}^\infty ds \,
{\hat{{\cal H}}}_s
\ \ ,
\end{equation}
where ${\hat{{\cal H}}}_s
\simeq L^2 (S^1; {(2\pi)}^{-1} d\theta)$
with the inner product
\begin{equation}
({\hat f}_s,{\hat g}_s)_s =
{1\over 2\pi} \int_{-\pi}^\pi d\theta \,
{\overline{{\hat f}_s (\theta)}} \,
{\hat g}_s (\theta)
\ \ , \ \
f_s, g_s \in {\hat{{\cal H}}}_s
\ \ .
\label{ips}
\end{equation}
The representation ${\hat {\sf T}}$ (\ref{sltworactionHhat}) then
decomposes into a representation ${\hat {\sf T}}_s$ on
each~${\hat{{\cal H}}}_s$, given by
\begin{equation}
{\hat {\sf T}}_s(M){\hat f}_s (\theta) =
{| W(M, M^{-1} \theta)|}^{1+is}
{\hat f}_s (M^{-1}\theta)
\ \ .
\label{sltworactionHhats}
\end{equation}
It is straightforward to verify that ${\hat {\sf T}}_s$ is a unitary
representation for every~$s$. However, it is not
irreducible.
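Indeed, since $s$ is real, $\bigl|\,|W|^{1+is}\bigr|=|W|$, and $|W(M,\theta)|^{-2}$ is the Jacobian $d(M\theta)/d\theta$ of the action~(\ref{sltwoactiontheta}), so the multiplier exactly compensates the change of variables in the integral~(\ref{ips}). The norm preservation can also be confirmed numerically; an illustrative sketch (reusing the parametrization~(\ref{suoneonepar})):

```python
import cmath
import math

def su11_params(M):
    # alpha, beta of U^{-1} M U for M = (a, b, c, d), in closed form
    a, b, c, d = M
    return (0.5 * ((a + d) + 1j * (b - c)),
            0.5 * ((a - d) - 1j * (b + c)))

def theta_action(M, theta):
    # M theta: e^{i M theta} = e^{i theta} * conj(W) / |W|
    alpha, beta = su11_params(M)
    W = alpha + beta * cmath.exp(2j * theta)
    return cmath.phase(cmath.exp(1j * theta) * W.conjugate() / abs(W))

def rep(M, s, f):
    """(T_s(M) f)(theta) = |W(M, M^{-1} theta)|^{1 + i s} f(M^{-1} theta)."""
    Minv = (M[3], -M[1], -M[2], M[0])      # inverse of a det-1 matrix
    alpha, beta = su11_params(M)
    def Tf(theta):
        psi = theta_action(Minv, theta)    # psi = M^{-1} theta
        W = alpha + beta * cmath.exp(2j * psi)
        return abs(W) ** (1 + 1j * s) * f(psi)
    return Tf

def norm_sq(h, N=4096):
    # (1 / 2 pi) * integral over the circle of |h|^2, by a Riemann sum
    return sum(abs(h(-math.pi + 2 * math.pi * k / N)) ** 2
               for k in range(N)) / N
```

On a smooth test function, `norm_sq(rep(M, s, f))` agrees with `norm_sq(f)` to within discretization error, for any $M\in{{\rm SL}(2,\BbbR)}$ and real $s$.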
To proceed, we write
${\hat{{\cal H}}}_s
={\hat{{\cal H}}}_s^+
\oplus {\hat{{\cal H}}}_s^-$,
where ${\hat{{\cal H}}}_s^\pm$ are the two closed subspaces of
${\hat{{\cal H}}}_s$ where the functions satisfy respectively
${\hat f}_s(\theta+\pi)=\pm{\hat f}_s(\theta)$.
${\hat{{\cal H}}}_s^\pm$ are clearly each invariant
under~${\hat{\sf T}}_s$.
Therefore, ${\hat{\sf T}}_s$ decomposes
into unitary representations of ${{\rm SL}(2,\BbbR)}$
on~${\hat{{\cal H}}}_s^\pm$. We shall show that, with the
exception of~${\hat{{\cal H}}}_0^-$, all these representations
are irreducible.
Let $s$ be fixed, and consider~${\hat{{\cal H}}}_s^+$. For
${\hat f}\in{\hat{{\cal H}}}_s^+$,
we can write ${\hat f}(\theta) =
{\tilde f}(2\theta)$,
where ${\tilde f}$ is periodic in its argument with
period~$2\pi$. (For brevity, we drop the subscript $s$ when there is
no danger of confusion.) This gives an isomorphism
${\hat{{\cal H}}}_s^+
\simeq {\tilde{{\cal H}}}_s^+
:= L^2(S^1; {(2\pi)}^{-1}d\phi)$,
where the inner product induced from (\ref{ips}) is
\begin{equation}
({\tilde f},{\tilde g})_s
=
{1\over 2\pi} \int_{-\pi}^\pi d\phi \,
{\overline{{\tilde f} (\phi)}} \,
{\tilde g} (\phi)
\ \ .
\label{ipts+}
\end{equation}
The resulting representation ${\tilde {\sf T}}_s^+$ of ${{\rm SL}(2,\BbbR)}$
on ${\tilde{{\cal H}}}^+$ is then
given by
\begin{equation}
{\tilde {\sf T}}_s^+(M){\tilde f} (\phi)
=
{| w(M, M^{-1} \phi)|}^{1+is}
{\tilde f} (M^{-1}\phi)
\ \ ,
\label{sltworactionHt+}
\end{equation}
where
\begin{equation}
w(M, \phi) := \alpha + \beta e^{i\phi}
\ \ ,
\end{equation}
and the action of ${{\rm SL}(2,\BbbR)}$ on $\phi$ is determined by
(\ref{sltwoactiontheta}) and takes the form
\begin{equation}
e^{iM\phi} = e^{i\phi}
{{\overline{w(M,\phi)}}
\over
w(M,\phi)}
\ \ .
\label{sltwoactionphi}
\end{equation}
This is recognized as the irreducible unitary representation of the
continuous class $C^0_q$ constructed in
Ref.\cite{bargmann}\footnote{Note that formula (6.11) in
Ref.\cite{bargmann} has a
typographical error and should read
$T_\sigma(a)f(\phi) =
\mu(a, a^{-1}\phi)^{{1\over2}+\sigma} f(a^{-1}\phi)$.},
with the Casimir invariant $q$ taking the value $(1+s^2)/4$.
These representations are known as the principal
series of even parity\cite{lang,knapp,howe-tan}.
Let then $s$ again be fixed, and
consider~${\hat{{\cal H}}}_s^-$. For
${\hat f}\in{\hat{{\cal H}}}_s^-$, we now can write
${\hat f}(\theta) =
e^{i\theta} {\tilde f}(2\theta)$, where ${\tilde f}$ is periodic in
its argument with period~$2\pi$. This gives an isomorphism
${\hat{{\cal H}}}_s^-\simeq {\tilde{{\cal H}}}_s^- :=
L^2(S^1; {(2\pi)}^{-1}d\phi)$,
where the inner product induced from
(\ref{ips}) is again given by~(\ref{ipts+}). The resulting
representation ${\tilde {\sf T}}_s^-$ of
${{\rm SL}(2,\BbbR)}$ on ${\tilde{{\cal H}}}^-$ is then given by
\begin{equation}
{\tilde {\sf T}}_s^-(M){\tilde f} (\phi)
=
{| w(M, M^{-1} \phi)|}^{1+is}
\nu(M, M^{-1}\phi)
{\tilde f} (M^{-1}\phi)
\ \ ,
\label{sltworactionHt-}
\end{equation}
where
\begin{equation}
\nu(M,\phi) :=
{w(M, \phi)
\over
|w(M, \phi)|
}
\ \ ,
\end{equation}
and the rest of the notation is as with~${\tilde {\sf T}}_s^+$. For
$s\ne0$, ${\tilde {\sf T}}_s^-$ is recognized as the irreducible
unitary representation of the continuous class $C^{1/2}_q$
constructed in Ref.\cite{bargmann},
with the Casimir invariant $q$ taking the value $(1+s^2)/4$.
These representations are known as the principal series of odd
parity\cite{lang,knapp,howe-tan}.
The representation ${\tilde {\sf T}}_0^-$ decomposes
into a direct sum of two irreducible unitary representations, denoted
in Ref.\cite{bargmann} by $D^+_{1/2}$ and $D^-_{1/2}$ and known as
the limits of the discrete series\cite{lang,knapp,howe-tan}.
We thus have a complete decomposition of the representation ${\sf T}$
(\ref{sltworactionH}) of ${{\rm SL}(2,\BbbR)}$ on~${{\cal H}}$ into its
irreducible components. We collect the statements into a
theorem\cite{knapp-prob}.
{\bf Theorem \ref{sec:sltwor}.1.}
The Hilbert space ${{\cal H}}= L^2({\Bbb R}^2)$ has a decomposition
\begin{equation}
{{\cal H}} \simeq
\int\limits_{-\infty}^\infty ds \,
\left(
{\tilde{{\cal H}}}^+_s \oplus {\tilde{{\cal H}}}^-_s
\right)
\ \ ,
\label{H-full-decomp}
\end{equation}
where the integral is a direct integral, such that
(i) ${\tilde{{\cal H}}}^\pm_s \simeq L^2(S^1)$ for every~$s$;
(ii) the unitary representation ${\sf T}$ (\ref{sltworactionH}) of
${{\rm SL}(2,\BbbR)}$ on ${{\cal H}}$ decomposes into the unitary
representations~${\tilde{\sf T}}^\pm_s$
on~${\tilde{{\cal H}}}^\pm_s$;
(iii) ${\tilde {\sf T}}^+_s$ is an irreducible unitary representation
in the principal series with even parity, $C^0_q$, with
$q=(1+s^2)/4$;
(iv) ${\tilde {\sf T}}^-_s$ with $s\ne0$ is an irreducible unitary
representation in the principal series with odd parity, $C^{1/2}_q$,
with $q=(1+s^2)/4$.~~$\Box$
Note that the further decomposition of~${\tilde {\sf T}}^-_0$ is not
relevant in Theorem \ref{sec:sltwor}.1, as the point $s=0$ is a set
of measure zero on the real line and therefore does not contribute to
the integral in~(\ref{H-full-decomp}). Note also that if we write
\begin{mathletters}
\label{hpm-sum}
\begin{eqnarray}
{\cal H} &=& {\cal H}^+ \oplus {\cal H}^-
\ \ ,
\label{h-decomp}
\\
\noalign{\smallskip}
{\hat{{\cal H}}} &=& {\hat{{\cal H}}}^+ \oplus
{\hat{{\cal H}}}^-
\ \ ,
\end{eqnarray}
\end{mathletters}%
where ${\cal H}^\pm$ consist of those functions $f$ in
${\cal H}$ that satisfy $f(r,\theta+\pi) = \pm f(r,\theta)$,
and similarly
${\hat{{\cal H}}}^\pm$ consist of those functions ${\hat f}$ in
${\hat{{\cal H}}}$ that satisfy ${\hat f}(s,\theta+\pi) = \pm
{\hat f}(s,\theta)$,
we then have
\begin{equation}
{\cal H}^\pm \simeq
{\hat{\cal H}}^\pm
= \int\limits_{-\infty}^\infty ds \, {\hat{\cal H}}^\pm_s
\> \simeq \int\limits_{-\infty}^\infty ds \,
{\tilde{\cal H}}^\pm_s
\ \ .
\label{hpm-string}
\end{equation}
As ${{\rm SL}(2,\BbbR)}$ has no nontrivial finite dimensional unitary
representations\cite{howe-tan}, ${\cal H}$ cannot have
nontrivial finite dimensional subspaces that are invariant
under~${\sf T}$. There are, however, infinite dimensional closed
subspaces of ${{\cal H}}^\pm$ that are invariant
under~${\sf T}$. We have the following theorem.
{\bf Theorem \ref{sec:sltwor}.2.}
Let $E$ be a measurable subset of~${\Bbb R}$, and let
\begin{equation}
{\hat{{\cal H}}}^\pm_E = \left\{ \, {\hat f} \in
{\hat{{\cal H}}}^\pm \mid \hbox{${\hat f}(s,\theta)=0$ for
a.e.\ $s\notin E$} \, \right\}
\ \ .
\end{equation}
Then ${\hat{{\cal H}}}^\pm_E$ is a closed subspace
of~${\hat{{\cal H}}}^\pm$, and it is invariant under~${\hat
{\sf T}}$.
{\sl Proof\/}. We can assume that $E$ and
${\Bbb R} \setminus E$ both have strictly positive measure
(since otherwise ${\hat{{\cal H}}}^\pm_E = \{0\}$
or ${\hat{{\cal H}}}^\pm_E = {\hat{{\cal H}}}^\pm$).
It is clear that
${\hat{{\cal H}}}^\pm_E$ is an invariant subspace
of~${\hat{{\cal H}}}^\pm$. Closedness follows from the
observation that ${\hat{{\cal H}}}^\pm_E$ is the orthogonal
complement in ${\hat{{\cal H}}}^\pm$ of the subspace
${\hat{{\cal H}}}^\pm_{{\Bbb R}\setminus E}$.~~$\Box$
The construction of ${\hat{{\cal H}}}^\pm_E$ closely parallels
the construction of closed translationally invariant subspaces
of~$L^2({\Bbb R})$. For a measurable set $E\subset{\Bbb R}$, the functions
in $L^2({\Bbb R})$ whose Fourier transform vanishes almost everywhere
outside $E$ constitute a closed translationally invariant subspace
of~$L^2({\Bbb R})$; conversely, every closed translationally invariant
subspace of $L^2({\Bbb R})$ is of this form for some~$E$\cite{rudin}. We
shall not address here the question as to whether the spaces
${\hat{{\cal H}}}^\pm_E$ exhaust all closed ${\hat
{\sf T}}$-invariant subspaces of~${\hat{{\cal H}}}^\pm$.
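The analogy with the translationally invariant subspaces of $L^2({\Bbb R})$ can be made concrete with a small numerical experiment (our own discretized illustration, not part of the argument; the grid size, the band $E$, and the shift are arbitrary choices): projecting a function onto the subspace whose Fourier transform is supported in $E$ commutes with translation, so the subspace is translation invariant.

```python
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.exp(-50.0 * (x - 0.3) ** 2)           # a sample function
freqs = np.fft.fftfreq(n, d=1.0 / n)
E = np.abs(freqs) <= 20.0                    # the measurable set E (here a band)

def project_E(g):
    """Zero the Fourier transform outside E: projection onto the subspace."""
    return np.fft.ifft(np.fft.fft(g) * E)

def translate(g, a):
    """Translate g by a grid points via a phase factor in Fourier space."""
    return np.fft.ifft(np.fft.fft(g) * np.exp(-2j * np.pi * freqs * a / n))

# the projection commutes with translations, so the subspace is invariant
lhs = translate(project_E(f), 17)
rhs = project_E(translate(f, 17))
print(np.allclose(lhs, rhs))                 # True
```

Since both operations act by multiplication in Fourier space, the commutation is exact; the same mechanism underlies the invariance of ${\hat{{\cal H}}}^\pm_E$ under~${\hat{\sf T}}$.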
\section{Representation of ${{\rm SL}(2,\BbbZ)}$ on $L^2({\Bbb R}^2)$}
\label{sec:sltwoz}
We now consider the consequences of the above results
when ${{\rm SL}(2,\BbbR)}$ is restricted to the subgroup ${{\rm SL}(2,\BbbZ)}$.
It is clear that all the unitary representations of ${{\rm SL}(2,\BbbR)}$
appearing in Section \ref{sec:sltwor} restrict to unitary
representations of ${{\rm SL}(2,\BbbZ)}$. The spaces
${\hat{{\cal H}}}^\pm_E$ of Theorem \ref{sec:sltwor}.2 are
therefore invariant also under the representation of ${{\rm SL}(2,\BbbZ)}$ that
is inherited from~${\sf T}$. In the physical terminology introduced
in Section~\ref{sec:intro}, this implies that
${\hat{{\cal H}}}^+_E$ are closed diffeomorphism invariant
subspaces of ${\cal H}_{\rm S} \simeq {\hat{{\cal H}}}^+$.
To examine the possibility of finite dimensional diffeomorphism
invariant subspaces, let us denote by ${\hat{\sf T}}'$ and ${\tilde
{\sf T}}^{\prime \pm}_s$ the representations of ${{\rm SL}(2,\BbbZ)}$ that are
obtained by restriction from respectively ${\hat{\sf T}}$
and~${\tilde {\sf T}}_s^\pm$. Let ${\cal F}^\pm$ be a finite
dimensional subspace of~${\hat{{\cal H}}}^\pm$, and let $\left\{
\, {\hat f}^\pm_k \in {\hat{{\cal H}}}^\pm \mid \hbox{$k =
1,\ldots,N$} \, \right\}$ be a finite set of vectors spanning~${\cal
F}^\pm$. Suppose now that ${\cal F}^\pm$ is invariant
under~${\hat{\sf T}}'$. It follows that for a.e.\ $s\in{\Bbb R}$, the
subspace ${\cal F}_s^\pm$ of ${\hat{{\cal H}}}_s^\pm \simeq
{\tilde{{\cal H}}}_s^\pm$ that is spanned by the functions
$\left\{ \, {\hat f}^\pm_k(s,\theta) \, \right\}$ is invariant
under~${\tilde {\sf T}}^{\prime \pm}_s$. But since
${\tilde{{\cal H}}}_s^\pm$ are infinite dimensional and ${\tilde
{\sf T}}^{\prime \pm}_s$ are irreducible (except for~${\tilde
{\sf T}}^{\prime -}_0$)\cite{cowsteg}, ${\cal F}_s^\pm$ must be
trivial for a.e.~$s$. This implies that every ${\hat f}^\pm_k$ is the
zero vector in~${\hat{{\cal H}}}^\pm$, and hence ${\cal F}^\pm =
\{0\}$.
Therefore, ${\cal H}$ has no nontrivial finite dimensional
subspaces that are invariant under~${\hat{\sf T}}'$. In the physical
terminology of Section~\ref{sec:intro}, this implies that
${\cal H}_{\rm S} \simeq {\hat{{\cal H}}}^+$ has no
nontrivial finite dimensional subspaces invariant under the large
diffeomorphisms.
\section{Discussion}
\label{sec:discussion}
In this paper we have addressed the role of large diffeomorphisms in
Witten's 2+1 gravity on the manifold ${\Bbb R}\times T^2$. We
concentrated on a ``spacelike sector" quantum theory that treats the
large diffeomorphisms as a symmetry. On the one hand, we showed that
the Hilbert space contains no nontrivial finite dimensional subspaces
that are invariant under the large diffeomorphisms. On the other
hand, we constructed explicitly a class of infinite dimensional
closed invariant subspaces. The existence of such subspaces implies,
in particular, that the representation of the large diffeomorphisms
on the Hilbert space is reducible.
These results shed light on both the similarities and differences
between the behavior of Witten's theory on the manifold ${\Bbb R}\times
T^2$ and the manifolds ${\Bbb R}\times \Sigma$, where $\Sigma$ is a
surface of genus $g>1$\cite{witten1,mess,carlip1,goldman2}. For
$g>1$, a geometrodynamically relevant quantum theory that treats the
large diffeomorphisms as a symmetry is obtained by taking the
configuration space to be the Teichm\"uller\ space~$T^g$. The quotient of
$T^g$ under the action of the large diffeomorphisms is the Riemann
moduli space: as $T^g$ contains infinitely many copies of the Riemann
moduli space, the quantum theory has no nontrivial finite dimensional
diffeomorphism invariant subspaces. This is similar to what we have
found for the torus. On the other hand, the differences between the
torus and the higher genus surfaces manifest themselves when one
attempts to treat the large diffeomorphisms as gauge. As the Riemann
moduli space is a manifold except at isolated singularities, a higher
genus theory that treats the large diffeomorphisms as gauge can be
obtained by defining the inner product by an integral over just the
Riemann moduli space instead of all of~$T^g$. In contrast, the
corresponding quotient space for the torus seems too pathological to
be employed in a similar fashion\cite{peldan}. We shall show in
Appendix \ref{app:Sone} that this pathology persists even when the
torus Hilbert space is decomposed by (\ref{hpm-string}) into a direct
integral of Hilbert spaces that carry irreducible representations of
the large diffeomorphism group. An attempt to reduce the gauge
redundancies in the torus theory at the quantum level leads to the
absence of any pure states in the theory, as discussed in the
Introduction.
For ${\Bbb R}\times T^2$, the construction of a connection
representation quantum theory where the large diffeomorphisms are
treated as gauge remains thus an open problem. In the metric-type
representations of
Refs.\cite{carlip1,carlip2,carlip3,carlip-water,peldan2}, the
difficulty does not appear.
Finally, it should be emphasized that we have not attempted to
interpret physically the symmetries generated by the large
diffeomorphisms. Doing so would require, among other things, a
physical interpretation of those observables that do not commute with
the large diffeomorphisms. For noncompact two-manifolds, one
possibility to approach this might be to introduce boundary
conditions that fix additional structure at an asymptotic infinity,
and to interpret the large diffeomorphisms in terms of the structure
at the infinity\cite{carlip-scat}. The infinity would then be
understood as an ambient physical system. However, for compact
two-manifolds no such reference to an outside system is
possible. The interpretational issue is therefore not at all obvious.
\acknowledgments
We would like to thank Chris Bishop for bringing
Refs.\cite{cowsteg,bishsteg} to our attention, and Abhay Ashtekar,
Martin Bordemann, John Friedman, Don Marolf, and Peter Peld\'{a}n for
discussions. D.\thinspace{}G. is grateful to John Friedman and Karel
Kucha\v{r} for their hospitality during the early stages of this
work.
This work was supported in part by the NSF grant PHY91-05935.
\section{Introduction}
The origin of turbulence, and hence of the viscosity parameter $\alpha$,
has been demonstrated for hot Keplerian accretion disks through the
magnetorotational instability (MRI; Balbus \& Hawley 1991; Balbus, Hawley, \&
Stone 1996; Hawley, Balbus, \& Winters 1999), but it is
still not well understood for cold disks, e.g. accretion disks
around quiescent cataclysmic variables (Gammie \& Menou 1998; Menou 2000),
proto-planetary and star-forming disks (Blaes \& Balbus 1994), and
the outer regions of disks in active galactic nuclei (Menou \&
Quataert 2001), even more than three decades after the famous discovery of
the $\alpha$-disk (Shakura \& Sunyaev 1973).
These cold systems are largely charge neutral,
so that MRI-driven turbulence appears to be ruled out.
However, to support accretion there must be some sort of turbulence (most likely
purely hydrodynamic in nature) and a corresponding turbulent viscosity, as
the molecular viscosity is negligible. Indeed, by analyzing the results of
laboratory experiments on the classical Couette-Taylor flow, Richard \& Zahn (1999)
showed that in the case where angular momentum increases outward, as in the
Keplerian flow, hydrodynamic turbulence may be sustained. They then derived the
corresponding turbulent viscosity, which is very useful for astrophysical purposes.
Longaretti (2002) argued that the Keplerian
accretion flow in the shearing sheet limit should be turbulent, and that
the lack of turbulence in simulations is due to their limited resolution.
Lesur \& Longaretti (2005) then showed that the required resolution cannot
be achieved with presently available computer resources.
They also found that the efficiency of turbulent transport is directly correlated
with the critical Reynolds number for the transition to turbulence.
In spite of all these analyses, the actual origin of hydrodynamic turbulence in such systems
still remains unclear. It is important to note that numerical simulations have never been carried out
at the {\it very high} Reynolds numbers that real accretion disks exhibit. Therefore,
hydrodynamic effects may compete with magnetohydrodynamic effects (essentially the MRI)
at high Reynolds number and, in a realistic system, turbulence may be
due to hydrodynamic effects independent
of whether the disk is cold or hot.
Recently, several authors, including ourselves, have
put effort into making progress toward the solution of this
difficult problem (e.g. Tevzadze et al. 2003; Chagelishvili 2003;
Umurhan \& Regev 2004; Yecko 2004; Mukhopadhyay, Afshordi \& Narayan 2005; Afshordi, Mukhopadhyay
\& Narayan 2005; Johnson \& Gammie 2005; Umurhan et al. 2006;
Barranco \& Marcus 2005; Papaloizou 2005).
The main aim of these works is to demonstrate
{\it pure} hydrodynamic turbulence by the transient growth of energy
under a suitable choice of two-dimensional initial perturbation. The idea is that any large growth
could plausibly switch the accretion disk into the nonlinear regime, which might
result in a subcritical transition to turbulence
if the energy growth exceeds the threshold for turbulence.
One might argue that transient growth, even if large, cannot provide much
of a clue about the existence and properties of a turbulent basin of attraction.
Schmid \& Henningson (2001) and Criminale, Jackson \& Joslin (2003) have
described in detail that the transition to turbulence is not a unique process,
but depends on the initial condition/disturbance and the nature
of the flow. In fact, it is known that even in the presence of a secondary instability,
linearly unstable base flows may reach a non-turbulent saturated state.
However, turbulence definitely belongs to the nonlinear regime, and it
is exhibited only in situations when large growth (more precisely, transient
growth for the present purpose) switches the system over to the nonlinear regime.
As our present goal is to understand the possible origin of hydrodynamic turbulence,
we consider those situations when large transient growth governs the nonlinearity.
In the present case, where the accretion time scale is comparable
to (or shorter than) the time scale needed to achieve maximum growth (MAN), the decaying
nature of the growth
at times longer than the accretion time scale does not matter. If the growth (or maximum growth)
becomes large enough to exceed the
threshold that triggers nonlinearity and
turbulence, then it does not matter whether the growth evolves exponentially or transiently.
Dauchot \& Manneville (1997) argued with a toy model that transient growth
does not guarantee the transition to turbulence, as the underlying phase portraits
do not get deformed much in the absence of any linearly unstable mode. However, that does not
rule out the importance of transient growth, as their model is very
simplistic, consisting of two variables only, and the growth is too tiny to
produce any deformation of the phase portraits. Other alternative methods were
proposed to describe the subcritical transition to turbulence, and then
to investigate various underlying aspects in detail, by e.g. Waleffe (1995, 1997),
Brosa \& Grossmann (1999), Waleffe (2003), and Kerswell (2005); the last two works,
which discuss non-rotating Couette flows and pipe flows respectively,
are the most relevant for astrophysical purposes. Although these works bring some new
insight into the subject, the main problem, i.e. understanding subcritical turbulence
in three dimensions, remains unsolved. This is indeed a non-trivial problem not
only in astrophysics but also in fluid dynamics. In the present paper, we suggest
a possible mechanism to drive nonlinear effects in a Keplerian accretion disk.
While our prescription does not solve the problem completely, it certainly
opens up a new avenue for understanding the physics behind this puzzle.
The problem with transient growth, which has been proposed as the mechanism to
generate turbulence
(e.g. Umurhan \& Regev 2004, hereafter UR; Mukhopadhyay, Afshordi \& Narayan 2005
and Afshordi, Mukhopadhyay \& Narayan 2005, hereafter MAN), is that in two dimensions
the underlying perturbations
must ultimately decline to zero
in a viscous flow. To overcome this limitation, it is necessary to invoke
three-dimensional effects. Various kinds of secondary
instability, such as the elliptical instability, are widely
discussed as a possible route to self-sustained turbulence in
linearly perturbed shear flows (see, e.g. Pierrehumbert 1986;
Bayly 1986; Hellberg \&
Orszag 1988; Craik 1989; Le Dize\'s, Rossi \& Moffatt 1996;
Kerswell 2002). Therefore, we are motivated to see
whether these three-dimensional instabilities are present in
Keplerian flows,
which consist of elliptical streamlines under a two-dimensional perturbation.
Goodman (1993) first explored the
possible role of the elliptical instability in an accretion disk,
and Lubow, Pringle \& Kerswell (1993) and Ryu \& Goodman (1994)
showed that angular momentum may be transferred from the disk to
the tidal source by the instability. They essentially
considered the case of forced disks. Later, Ioannou \&
Kakouris (2001) also examined transient growth constantly re-excited
by external noise.
The three-dimensional instability of a two-dimensional
flow with elliptical vortices has been demonstrated by a number of authors
(e.g. Craik \& Criminale 1986; Waleffe 1990) and
has been proposed as a generic mechanism for the breakdown of many two-dimensional
high Reynolds number flows.
This result motivates us to investigate whether a similar
scenario exists in a Keplerian accretion disk.
Therefore, we essentially plan to investigate the
Keplerian flow under two consecutive perturbations. The primary two-dimensional perturbation,
which drives transient growth (MAN), generates elliptical streamlines in the flow.
We then apply a further three-dimensional perturbation,
the secondary one, to this two-dimensional flow, and ask whether it drives
a three-dimensional instability
in the presence of viscosity. Presumably, three-dimensional instabilities
lead to nonlinear feedback and self-sustained turbulence.
Although the maximum growth in a Keplerian disk with finite vertical thickness, and thus with
finite vertical perturbation, is smaller than that in a two-dimensional Keplerian
disk, as Tevzadze et al.
(2003) and MAN argued, vertical stratification may give rise to
a nonvanishing asymptotic growth that is a fraction of the maximum growth. However, they could
not show whether this significantly helps the onset of turbulence.
By three-dimensional hydrodynamic simulations, Barranco \& Marcus (2005) studied
the dynamics and formation of vortices in stably stratified proto-planetary disks
and found that vortices are unstable under perturbation. Theoretically, the origin of these
vortices is understood as a perturbation of plane shear flow
(UR, MAN). Continuing the same line of thought, Umurhan et al. (2006)
have recently shown substantial growth resulting
from initial perturbations of the linearly stable steady state, and have concluded
that significant perturbation energy amplification occurs in accretion disks on a global scale.
In the present paper, we perform a local linear
analysis in the shearing box approximation.
Although a small section of an accretion disk may not reproduce the global
disk properties, we assume that if turbulence is exhibited in any section,
then it is sustained and eventually affects the entire disk. While a local
result does not guarantee its global signature, the absence of local instability,
and thus of possible turbulence, perhaps does guarantee its global absence.
Therefore, from the results of this paper we can determine the future
avenues of the subject, i.e. whether the analysis of
secondary instability on top of transient growth
is a fruitful path toward solving the problem.
In the next section, we outline the basic model considered for this
problem, describing the primary and secondary perturbations.
In \S3, we present the simple solution when the evolution of the
secondary perturbation is much more rapid than that of the primary one. Subsequently, in \S4 we analyze the
general solutions when both perturbations vary simultaneously. Finally, we summarize the results
with a discussion in \S5.
\section{The Model}
We consider a small portion of a Keplerian flow centered on radius $r_0$ (see MAN for details).
The Keplerian flow then locally reduces to a rotating Couette flow in the narrow gap limit,
whose unperturbed velocity vector is $\vec{U}=(0,-x,0)$, where $x\ll r_0$.
Therefore, the Navier-Stokes and continuity equations for
the dynamics of the viscous incompressible disk fluid are
given by
\begin{equation}
\frac{\partial \vec{U}}{\partial t}+\vec{U}.\nabla\vec{U}+\vec{\Omega}
\times\vec{\Omega}\times \hat{x}\,x
+2\vec{\Omega}\times\vec{U}+\nabla(\tilde{p})=\frac{1}{R}\nabla^2\vec{U};\,\,\,\, \nabla.\vec{U}=0,
\label{nevst2}
\end{equation}
where $\nabla=(\partial/{\partial x},\partial/{\partial y},\partial/{\partial z})$,
$R$ is the Reynolds number,
the angular frequency $\vec{\Omega}=(0,0,1/q)$,
$t$ is the time, and $\tilde{p}$ is proportional
to the pressure. For a Keplerian disk $q=1.5$ and for a constant angular
momentum disk $q=2$.
Here, all the variables are expressed in dimensionless units.
The unit of length is the box size in the radial direction, and the unit of velocity
is the maximum relative velocity between two fluid elements in the box.
Below we describe the two subsequent perturbation effects, primary and secondary, one by one.
\subsection{Primary Perturbation}
Under a linear two-dimensional (primary)
perturbation, the velocity vector of the (primary) flow becomes
\begin{equation}
\vec{U}\rightarrow\vec{U}_p=(w_x,-x+w_y,0)={\bf A}.\vec{d},
\label{primper}
\end{equation}
where
\begin{equation}
w_x=\zeta\frac{k_y}{\kappa^2}\sin(k_xx+k_yy),\,\,w_y=-\zeta\frac{k_x}{\kappa^2}\sin(k_xx+k_yy),
\label{primpert}
\end{equation}
${\bf A}$ is a tensor of rank $2$, the position vector $\vec{d}=(x,y,z)$,
$k_x,k_y$ are the components of the wave vector of perturbation,
$\kappa=\sqrt{k_x^2+k_y^2}$, and $\zeta$ is the amplitude of the vorticity perturbation.
Now we concentrate on a further small patch of the primarily perturbed flow, so that the spatial scale
is very small compared to the wavelength of the primary perturbation, $x\ll 1/k_x$,
$y\ll 1/k_y$. Therefore, $\bf A$ comes out to be
\begin{equation}
{\bf A}=A_j^k=\left(\begin{array}{ccc} \zeta\sqrt{\epsilon(1-\epsilon)} &
\zeta(1-\epsilon) & 0\\ -(1+\zeta\epsilon) &
-\zeta\sqrt{\epsilon(1-\epsilon)} & 0\\ 0 & 0 & 0 \end{array}\right),
\label{axy}
\end{equation}
where $\epsilon=(k_x/\kappa)^2$.
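As a consistency check (a symbolic sketch of our own, with sympy; the variable names are ours), expanding the primarily perturbed velocity field (\ref{primper})--(\ref{primpert}) to first order in $x$ and $y$ about the origin recovers the matrix $\bf A$ of eqn. (\ref{axy}):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
zeta, kx, ky = sp.symbols('zeta k_x k_y', positive=True)
kappa2 = kx**2 + ky**2
eps = kx**2 / kappa2

# primary perturbation, eqn (primpert)
wx = zeta * ky / kappa2 * sp.sin(kx * x + ky * y)
wy = -zeta * kx / kappa2 * sp.sin(kx * x + ky * y)

# linearize U_p = (w_x, -x + w_y) about the origin, x, y << 1/k
Up = sp.Matrix([wx, -x + wy])
A = Up.jacobian([x, y]).subs({x: 0, y: 0})

# expected (x, y) block of eqn (axy); note sqrt(eps*(1 - eps)) = kx*ky/kappa^2
A_expected = sp.Matrix([[zeta * kx * ky / kappa2, zeta * (1 - eps)],
                        [-(1 + zeta * eps), -zeta * kx * ky / kappa2]])
print(sp.simplify(A - A_expected))           # zero matrix
```

The identification $\sqrt{\epsilon(1-\epsilon)}=k_xk_y/\kappa^2$ holds for positive $k_x$, $k_y$.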
The above $\bf A$ describes a generalized elliptical flow pattern, compared
to the ordinary elliptical flow discussed in the standard fluid literature (e.g. Bayly 1986; Craik 1989; Kerswell 2000),
which is given by
\begin{equation}
A_j^k=\left(\begin{array}{crr} 0 & 1-\epsilon & 0\\ -(1+\epsilon) & 0 &
0\\ 0 & 0 & 0 \end{array}\right).
\label{axy0}
\end{equation}
\subsection{Secondary Perturbation}
The background flow for the further (secondary)
perturbation also corresponds to eqn. (\ref{nevst2}), except that
$\vec{U}$ is replaced by $\vec{U}_p$.
The secondary perturbation modifies the velocity components
in eqn. (\ref{primper}) and the pressure as $\vec{U}_p\rightarrow \vec{U}_p+\vec{u}$ and
$\tilde{p}\rightarrow\tilde{p}+p$. The perturbation is taken to be of
plane wave type, given by
\begin{equation}
(u_i,\tilde{p})=(v_i(t),p(t)) \exp(ik_m(t) x^m),
\label{pet}
\end{equation}
where $k_m\gg k_x,k_y$. The Latin indices run from $1$ to $3$, such that
e.g. $x^m\equiv (x,y,z)$, and thus the background velocity
can be written as $U_{pi}=A^m_i\,x_m$.
Therefore, from eqns. (\ref{nevst2}), (\ref{axy}) and (\ref{pet}), with $\vec{U}$ replaced
by $\vec{U}_p$,
and after some algebra, we obtain the evolution equation for a linear secondary perturbation
\begin{equation}
\dot{v}_j+A_j^k\,v_k+2\,\epsilon_{mkj}\Omega^m v^k=-ip\,k_j
-\frac{v_j}{R}\,k^2,
\label{perteq}
\end{equation}
along with
\begin{equation}
k_nv^n=0,\,\,\,\,\,
\dot{k}_j=-(A^m_j)^T\,k_m,\,\,\,\,\,
k_n\dot{v}^n=k_m\,A^m_n\,v^n,
\label{keq}
\end{equation}
where the `over-dot' indicates a derivative with respect to $t$,
$\epsilon_{mkj}$ is a Levi-Civita tensor, and $k^2=k_mk^m$.
Two components, $k_1$ and $k_2$, of the wave-vector [$\vec{k}=k_m=(k_1,k_2,k_3)$] of the
secondary perturbation oscillate in time with the angular frequency
$\varpi=\sqrt{\zeta(1-\epsilon)}$ at fixed $\epsilon$, while the third one, $k_3$, remains
constant. As we choose the signature of the background Minkowski space-time to be $[-,+,+,+]$,
it does not matter whether
any Latin index appears as a lower or an upper index. For example,
$A^k_j=A_{jk}$, where $j$ and $k$ indicate the row and column
number respectively.
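The quoted frequency can be checked quickly (a numerical aside of our own; the values of $\zeta$ and $\epsilon$ are arbitrary): the wave-vector evolves as $\dot{k}_j=-(A^m_j)^T\,k_m$, so its eigenfrequencies are the eigenvalues of $-{\bf A}^T$.

```python
import numpy as np

zeta, eps = 0.8, 0.4                       # arbitrary test values
a = zeta * np.sqrt(eps * (1.0 - eps))
A2 = np.array([[a, zeta * (1.0 - eps)],
               [-(1.0 + zeta * eps), -a]]) # (x, y) block of eqn (axy)

lam = np.linalg.eigvals(-A2.T)             # dk/dt = -A^T k
varpi = np.sqrt(zeta * (1.0 - eps))
print(lam)                                 # purely imaginary pair +/- i*varpi
print(np.allclose(np.sort(lam.imag), [-varpi, varpi]))  # True
```

The matrix $-{\bf A}^T$ is traceless with determinant $\zeta(1-\epsilon)$, so its eigenvalues are $\pm i\varpi$ with $\varpi=\sqrt{\zeta(1-\epsilon)}$, and $k_1$, $k_2$ oscillate at that frequency while $k_3$ is untouched.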
Now projecting out eqn. (\ref{perteq}) by $P^j_i=\delta^j_i-k^{-2}k^j\,k_i$ and
using eqn. (\ref{keq}) we obtain
\begin{equation}
\dot{v}_i=\left(2\frac{k^j\,k_i}{k^2}-\delta^j_i\right)A^k_j\,v_k
-2\,\epsilon_{mki}\,\Omega^m\,v^k-\frac{v_i}{R}k^2
+\left(2\,\epsilon_{mkj}\,\Omega^m\,v^k+\frac{v_j}{R}k^2\right)\frac{k^jk_i}{k^2}.
\label{veq}
\end{equation}
A similar equation was obtained by Bayly (1986), except that here there are additional
terms induced by the Coriolis and viscous effects.
As $R$ is very large in an accretion disk,
we neglect the viscous term in eqn. (\ref{veq}) compared with the others and
rewrite the equation as
\begin{equation}
\dot{v}_i=\Lambda^j_i\,v_j,
\label{vmat}
\end{equation}
where
\begin{equation}
\Lambda_{ij}=\left(\begin{array}{ccc} \left(\frac{2k_1^2}{k^2}-1\right)A_{11}+
\frac{2k_1k_2}{k^2}\left(A_{21}+\frac{1}
{q}\right) & \frac{2k_1^2}{k^2}\left(A_{12}-\frac{1}{q}\right)+\frac{2}{q}-A_{12}+\frac{2k_1k_2}{k^2}A_{22}
& 0\\ \frac{2k_1k_2}{k^2}A_{11}+\frac{2k_2^2}{k^2}\left(A_{21}+\frac{1}{q}\right)-A_{21}-\frac{2}{q}
& \frac{2k_1k_2}{k^2}\left(A_{12}-\frac{1}{q}\right)+\left(\frac{2k_2^2}{k^2}-1\right)A_{22} &
0\\ \frac{2k_1k_3}{k^2}A_{11}+\frac{2k_2k_3}{k^2}\left(A_{21}+\frac{1}{q}\right) &
\frac{2k_1k_3}{k^2}\left(A_{12}-\frac{1}{q}\right)+\frac{2k_2k_3}{k^2}A_{22} & 0 \end{array}\right).
\label{mat}
\end{equation}
We essentially need to solve eqn. (\ref{vmat}) to determine the behavior of the perturbations.
Next we describe the solution in various possible situations.
\section{Perturbation Solution at a fixed $\epsilon$}
The solution at constant $\epsilon$
\footnote{Recall that
the radial wave-vector of the primary perturbation, $k_x$, varies with $t$ (MAN);
therefore, in reality the background for the secondary perturbation is time-dependent.}
gives the result at a particular instant of the evolution of the primary perturbation,
and thus provides the {\it instantaneous growth rate}.
The underlying idea is to focus on
the situation when the evolution of the secondary perturbation, and the corresponding
development of growth, is much more rapid than that due to the primary perturbation, so that
$\epsilon$ practically remains constant during the evolution of the secondary perturbation.
This also allows us to compare the result with the standard fluid literature
(e.g. Bayly 1986; Kerswell 2002).
The general solution of
eqn. (\ref{vmat}) can be written as a linear superposition of Floquet modes
\begin{equation}
v_i(t)=\exp(\sigma\,t)\,f_i(\phi),
\label{flo}
\end{equation}
where $\phi=\varpi\,t$, $f_i(\phi)$ is a periodic function with
time period $T=2\pi/\varpi$, and $\sigma$ is the Floquet exponent,
which is the eigenvalue of the problem. Clearly, if $\sigma$ is positive then
the perturbation grows with time.
Applying the periodicity condition, $f_i(2\pi)=f_i(0)$,
in eqn. (\ref{flo}), we obtain
\begin{equation}
v_i(T)=\exp(\sigma\,T)\,f_i(2\pi)=\exp(\sigma\,T)\,f_i(0)=\exp(\sigma\,T)\,v_i(0).
\label{veigen}
\end{equation}
Therefore, $\exp(\sigma\,T)$ serves as an evolution operator.
To determine the exponential growth rate, $2\sigma$, we
strictly follow e.g. Bayly (1986) and Craik (1989). In this method,
one has to evaluate the associated velocity evolution matrix,
whose eigenvalue and eigenvector
at $t=T$ are $e^{\sigma T}$ and $f_i(2\pi)=f_i(0)$ respectively, satisfying
\begin{equation}
\frac{dM_{ji}(t)}{dt}=\Lambda^m_jM_{mi}(t),
\label{matevo}
\end{equation}
where $M_{ji}(0)=\delta_{ji}$. Essentially $M_{ji}(t)$ serves as an evolution operator
such that
\begin{equation}
v_j(t)=M_{ji}(t)v_i(0).
\label{vevo}
\end{equation}
Thus, using the fourth order Runge-Kutta
method, one can easily compute the elements of the $3\times 3$ matrix $M_{ji}(T)$.
An interesting feature to note is that $<Tr(\Lambda_{ij})>=0$ \footnote{`$Tr$'
denotes the sum of the diagonal elements of the matrix, $<...>$ indicates
the averaged value, and `det' refers to the determinant of the matrix.}
over $0\le t\le T$. Therefore, $\det(M_{ji}(T))=1$.
Moreover, $d(k_jM^j_i(t))/dt=0$, and therefore $k_i(0)=k^j(0)M_{ji}(T)$, which
indicates that one eigenvalue of this $3\times 3$ matrix is always unity.
The remaining two eigenvalues of $M_{ji}(T)$ must be either real and
reciprocal to each other, or complex conjugates of each other with unit modulus.
This property helps us to check the accuracy of the results: the
product of all three eigenvalues must be unity.
Therefore, our problem reduces to evaluating the two non-trivial
eigenvalues, $\mu_1$ and $\mu_2$, of the
matrix $M_{ji}(T)$. If $\mu_1$ or $\mu_2$ is real and positive, then the Floquet exponent
is $\sigma_i=\log(\mu_i)/T$.
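This procedure can be sketched as follows (a minimal illustration of our own, not the production code; it uses the ideal elliptical background of \S3.1 with Keplerian rotation, and the step number, $\epsilon$, and $\vec{k}(0)$ are arbitrary choices). It evolves $k_j(t)$ and $M_{ji}(t)$ together with a fourth order Runge-Kutta scheme over one period $T=2\pi/\varpi$ and verifies the stated checks $\det(M_{ji}(T))=1$ and $k_i(0)=k^j(0)M_{ji}(T)$:

```python
import numpy as np

q, eps = 1.5, 0.3                        # Keplerian rotation, arbitrary eccentricity
A = np.array([[0.0, 1.0 - eps, 0.0],
              [-(1.0 + eps), 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # ideal elliptical background, eqn (axy0)
W = np.array([[0.0, -1.0 / q, 0.0],
              [1.0 / q, 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # Coriolis operator: (Omega x v) = W v

def Lam(k):
    """Velocity evolution matrix of eqn (mat) in the inviscid limit."""
    P = 2.0 * np.outer(k, k) / k.dot(k)
    I = np.eye(3)
    return (P - I) @ A + (P - 2.0 * I) @ W

def rhs(k, M):
    """dk/dt = -A^T k and dM/dt = Lambda(k) M, eqns (keq) and (matevo)."""
    return -A.T @ k, Lam(k) @ M

T = 2.0 * np.pi / np.sqrt(1.0 - eps * eps)   # one period, varpi = sqrt(1 - eps^2)
n = 4000
h = T / n
k0 = np.array([0.3, 0.4, 1.0])               # arbitrary initial wave vector
k, M = k0.copy(), np.eye(3)
for _ in range(n):                           # classical fourth order Runge-Kutta
    dk1, dM1 = rhs(k, M)
    dk2, dM2 = rhs(k + 0.5 * h * dk1, M + 0.5 * h * dM1)
    dk3, dM3 = rhs(k + 0.5 * h * dk2, M + 0.5 * h * dM2)
    dk4, dM4 = rhs(k + h * dk3, M + h * dM3)
    k = k + (h / 6.0) * (dk1 + 2.0 * dk2 + 2.0 * dk3 + dk4)
    M = M + (h / 6.0) * (dM1 + 2.0 * dM2 + 2.0 * dM3 + dM4)

print(np.linalg.det(M))         # ~1: <Tr Lambda> vanishes over a period
print(np.allclose(k0 @ M, k0))  # True: k(0) is a left eigenvector, eigenvalue 1
mu = np.linalg.eigvals(M)       # Floquet multipliers; exponents = log(mu_i)/T
print(np.abs(mu.prod()))        # ~1: product of all three eigenvalues is unity
```

Here $\Lambda(k)$ is assembled in matrix form as $(2\vec{k}\vec{k}^T/k^2-{\bf I}){\bf A}+(2\vec{k}\vec{k}^T/k^2-2{\bf I}){\bf W}$, which reproduces eqn. (\ref{mat}) component by component; one can check, e.g., that it reduces to eqn. (\ref{mat2}) for $\vec{k}=(0,0,1)$.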
When the initial value of $\vec{k}$ is $\vec{k}(0)=k_m(0)\equiv(0,0,1)$,
$\vec{k}(t)$ remains conserved throughout, as follows from eqn. (\ref{keq}).
This is the {\it pure} vertical perturbation.
In this case, $\Lambda_{ji}$ in eqn. (\ref{mat}) is
a constant matrix.
Therefore, $e^{\sigma t}$ and $f_i(t)$ are
the eigenvalue and eigenvector respectively of the matrix $M_{ji}(t)$ at any time $t$, and
thus the Floquet exponent can be evaluated at any instant.
In fact, for this initial condition there is an exact analytical solution for
the Floquet exponents,
which are the eigenvalues of the matrix $\Lambda_{ji}$.
\subsection{Ideal elliptical flow}
Before investigating our accretion disk solutions in detail, let us recapitulate the nature of the standard
elliptical flow, and of the corresponding instability, long discussed in the fluid dynamics
literature. The velocity of a two-dimensional fluid element with elliptical
streamlines is given by
$\vec{U}_p={\bf A}{\bf .}\vec{d}$ (see, e.g. Kerswell 2002), with ${\bf A}$
defined as in eqn. (\ref{axy0}).
Now, following the perturbation technique described in \S2.2, we obtain
$\Lambda_{ij}$ given by eqn. (\ref{mat}), with the components of $\bf A$ as in
eqn. (\ref{axy0}). If $\vec{k}$ is constant, i.e. the perturbation is vertical, then $\Lambda_{ij}$
is also a constant matrix, given by
\begin{equation}
\Lambda_{ji}=\left(\begin{array}{ccc} 0 & 2\Omega_z+\epsilon-1
& 0\\ -2\Omega_z+\epsilon+1 & 0 &
0\\ 0 & 0 & 0 \end{array}\right),
\label{mat2}
\end{equation}
whose non-trivial eigenvalues are the velocity growth rates (Floquet exponents), given by
\begin{equation}
\sigma=\pm\sqrt{{\epsilon}^2-(1-2\Omega_z)^2}.
\label{eliec}
\end{equation}
For a Keplerian flow, $\sigma=\pm\sqrt{\epsilon^2-1/9}$. Therefore, the vertical
perturbation gives rise to the positive {\it instantaneous} growth rate in a Keplerian disk for
$\epsilon >1/3$. For any other perturbation, $k_1$ and $k_2$ oscillate in time
with the angular frequency $\varpi=\sqrt{1-\epsilon^2}$ and $k_3$ remains constant
(for detailed descriptions, see, e.g. Bayly 1986; Craik 1989; Kerswell 2002).
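The closed form of eqn. (\ref{eliec}) can be cross-checked numerically against the eigenvalues of the constant matrix in eqn. (\ref{mat2}). The following is only an illustrative sketch of ours (the function name and the sample value of $\epsilon$ are not from the paper):

```python
import numpy as np

def floquet_exponents(eps, Omega_z):
    # Constant matrix Lambda_{ji} for a pure vertical perturbation
    # of an ideal elliptical flow (eqn. mat2 in the text).
    L = np.array([[0.0, 2.0 * Omega_z + eps - 1.0, 0.0],
                  [-2.0 * Omega_z + eps + 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    return np.linalg.eigvals(L)

# Keplerian rotation, Omega_z = 2/3, so sigma = +/- sqrt(eps^2 - 1/9);
# growth therefore requires eps > 1/3.
eps = 0.5
sigma = np.max(np.real(floquet_exponents(eps, 2.0 / 3.0)))
closed_form = np.sqrt(eps**2 - 1.0 / 9.0)
```

For $\epsilon<1/3$ in the Keplerian case the eigenvalues become purely imaginary, i.e. the vertical perturbation oscillates rather than grows, consistent with the text.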
In Fig. \ref{ellip}, we compare the variation of maximum velocity growth rate as a function of
eccentricity parameter for a Keplerian flow with that of a non-rotating flow. By ``maximum''
we refer to the quantity obtained by maximizing over $\vec{k}$. Clearly the growth rate
is significantly large at high $\epsilon$ for a Keplerian system, which is interesting for
astrophysical purposes \footnote{Recall that our rotating Keplerian
flow described in \S2 is
highly eccentric at large $k_x$, i.e. at the early stage of the evolution of primary perturbation.}.
This is understood physically from eqn. (\ref{eliec}).
When $\Omega_z=0$, from
eqn. (\ref{eliec}), $\sigma=0$ at $\epsilon=1$. However, for a rotating flow,
$\sigma=\pm 2\sqrt{q-1}/q$ at $\epsilon=1$. This result motivates us to study the elliptical
streamline (vortex) effects in an actual Keplerian flow in accretion disks,
as described by eqn. (\ref{axy}).
\subsection{Flow in a Keplerian disk}
Here the velocity, $\vec{U}_p$, of the background flow (primarily perturbed flow) is defined according
to $\bf A$ given by eqn. (\ref{axy}).
When does our primarily perturbed Keplerian flow
reduce to conventional flows?
(1) When $k_x\rightarrow\infty$ i.e. $\epsilon\rightarrow 1$ and $\zeta\rightarrow 1$,
$\bf A$ in eqn. (\ref{axy}) is same as that in eqn. (\ref{axy0}).
This is the case of a maximally eccentric flow.
(2) When $k_x\rightarrow\infty$ and $\zeta\rightarrow 0$, ${\bf A}$ in eqn. (\ref{axy}) reduces to
that of plane shear flow (MAN).
(3) When $k_x\rightarrow 0$ i.e. $\epsilon \rightarrow 0$ and $\zeta\rightarrow 1$,
the form of $\bf A$ in both equations is again the same. This is the case of a circular flow.
(4) When $k_x\rightarrow 0$ and $\zeta\rightarrow 0$, ${\bf A}$ in eqn. (\ref{axy}) again reduces to
that of plane shear flow (MAN).
Now for a pure vertical perturbation, $\Lambda_{ji}$ in eqn. (\ref{mat}) reduces to
\begin{equation}
\Lambda_{ji}=\left(\begin{array}{ccc} -\zeta\sqrt{\epsilon(1-\epsilon)} & 2\Omega_z-\zeta(1-\epsilon)
& 0\\ -2\Omega_z+\zeta\epsilon+1 & \zeta\sqrt{\epsilon(1-\epsilon)} &
0\\ 0 & 0 & 0 \end{array}\right),
\label{mat1}
\end{equation}
where $\Omega_z=1/q$. As before, the above $\Lambda_{ji}$ is a constant matrix, and thus we evaluate
the Floquet exponent as
\begin{equation}
\sigma=\pm\sqrt{\zeta\epsilon-(2\Omega_z-1)(2\Omega_z-\zeta)}.
\label{sigcon}
\end{equation}
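For a quick numerical cross-check (a sketch of ours; the parameter values below are illustrative), the eigenvalues of the matrix in eqn. (\ref{mat1}) can be compared with the closed form of eqn. (\ref{sigcon}):

```python
import numpy as np

def sigma_numeric(eps, zeta, q):
    # Largest real eigenvalue of Lambda_{ji} (eqn. mat1) for a pure
    # vertical perturbation; Omega_z = 1/q.
    Oz = 1.0 / q
    d = zeta * np.sqrt(eps * (1.0 - eps))
    L = np.array([[-d, 2.0 * Oz - zeta * (1.0 - eps), 0.0],
                  [-2.0 * Oz + zeta * eps + 1.0, d, 0.0],
                  [0.0, 0.0, 0.0]])
    return np.max(np.linalg.eigvals(L).real)

def sigma_closed(eps, zeta, q):
    # eqn. (sigcon); zero when the argument is negative (oscillation).
    Oz = 1.0 / q
    s2 = zeta * eps - (2.0 * Oz - 1.0) * (2.0 * Oz - zeta)
    return np.sqrt(s2) if s2 > 0.0 else 0.0

# Keplerian flow corresponds to q = 3/2, i.e. Omega_z = 2/3.
```

When the argument of the square root is negative, the numerical eigenvalues are purely imaginary and the largest real part is zero, so both routines agree in the stable regime as well.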
Most of the work in the fluid literature, so far,
has been carried out for non-rotating systems without rigorously focusing on
a Keplerian flow. Therefore, eqn. (\ref{sigcon}) can be seen as
an extension of those results to an actual rotating Keplerian flow.
We now understand the following points from eqn. (\ref{sigcon}).
(1) When $\Omega_z=0$, $\sigma=\sqrt{\zeta(\epsilon-1)}$. This verifies
that non-rotating two-dimensional plane shear flow is always hydrodynamically
stable under a {\it pure} vertical perturbation.
(2) When $\Omega_z=1/2$, $\sigma=\sqrt{\zeta\epsilon}$. Therefore, the constant angular
momentum accretion flow is always hydrodynamically unstable.
The energy growth rate of perturbation increases with the strain rate i.e.
the eccentricity of the flow.
(3) When $\Omega_z=2/3$, $\sigma=\sqrt{\zeta\epsilon-(4-3\zeta)/9}$.
Therefore, a Keplerian flow with elliptical streamlines
gives rise to unbounded growth under a {\it pure} vertical perturbation,
at least in a certain time interval when the growth
and growth rate due to primary perturbation are very small, only if $\zeta>1/3$.
However, there are some other three-dimensional perturbations \footnote{By vertical perturbation
we mean that only the vertical component of initial perturbation wave-vector is
non-zero,
while any other perturbation with a non-zero vertical component of initial wave-vector
is called three-dimensional perturbation.}
which can generate a positive growth rate, $\sigma$,
in a Keplerian flow with $\zeta < 1/3$; we describe these below by
numerical solutions.
As primary perturbation evolves with time,
eccentricity decreases, and then the energy growth rate due to
secondary perturbation changes.
Figure \ref{kepcon}a shows the
variation of maximum growth rate, $\sigma_{max}$, as a function of
eccentricity parameter, $\epsilon$ \footnote{Note that $\epsilon$ is a parameter
that carries the information of eccentricity of the system but is not the
eccentricity itself.}. By ``maximum'' we refer to the
quantity obtained by maximizing over the vertical component of
wave-vector, $k_{3}$.
Clearly, for $\zeta < 1/3$, growth rate maximizes for
three-dimensional perturbations with $k_3<1$. At small $\epsilon$
and large $\zeta$, the streamlines of the flow essentially become circular
(see eqn. (\ref{axy})), and thus the growth rate severely decreases
due to the lack of a significant elliptical vortex. On the other hand, when $\epsilon$
and $\zeta$ both are small, the background structure reduces to plane shear
and therefore any growth arises due to primary perturbation only.
Figure \ref{kepcon}b shows the variation of optimum growth rate,
$\sigma_{opt}$, as a function of $k_3$.
By ``optimum'' we refer to the quantity obtained by maximizing over
$\epsilon$. An interesting fact to note is that the optimum growth rate is
always obtained for three-dimensional perturbation with
significant vertical component.
Moreover, as $\zeta$ increases, the best growth rate is obtained
at high $\epsilon$ with large $k_3$.
Therefore, three-dimensional growth is more prompt
at larger $\zeta$.
\section{Computation of growth with simultaneous evolution of both perturbations}
The above results verify that in some parameter range the
three-dimensional growth rate due to
secondary perturbation in rotating shear flow is expected to be real and positive,
which motivates us to analyze the simultaneous evolution of
both perturbations. As primary perturbation evolves, $k_x$
varies with time, and therefore so do $\epsilon$ and $\bf A$. Thus,
although eqn. (\ref{keq}) remains still valid, in general
the wave-vector of secondary disturbance is not periodic and the solution of
eqn. (\ref{vmat}) cannot be expressed exactly by Floquet modes.
Therefore, to compute
growth in energy, one has to compute the elements of the energy evolution matrix
\begin{equation}
{\cal M}_{ik}(t)=M_{im}(t)^T M_{mk}(t)
\label{m2}
\end{equation}
whose largest eigenvalue is the growth in energy at time $t$. As $M_{mk}(t)$ can be obtained
from eqn. (\ref{matevo}), the computation of ${\cal M}_{ik}(t)$ is trivial.
The actual time variance of $\epsilon(t)$ in eqn. (\ref{matevo})
(and in eqn. (\ref{mat})) is now considered. Clearly,
${\cal M}_{ik}(t)$ gives the instantaneous energy of the perturbation of the flow, while we recall that
$M_{im}(t)$ gives its instantaneous velocity.
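This step can be sketched as follows (the matrix $M$ below is an arbitrary placeholder of ours; in the actual calculation $M_{im}(t)$ comes from integrating eqn. (\ref{matevo}) numerically):

```python
import numpy as np

def energy_growth(M):
    # Largest eigenvalue of the energy evolution matrix M^T M (eqn. m2);
    # equivalently, the square of the largest singular value of the
    # velocity evolution matrix M.
    E = M.T @ M
    return np.max(np.linalg.eigvalsh(E))

# Placeholder velocity evolution matrix at some time t (illustrative only).
M = np.array([[1.5, 0.3, 0.0],
              [0.1, 0.8, 0.0],
              [0.0, 0.0, 1.0]])
G = energy_growth(M)
```

Since $M^TM$ is symmetric, the symmetric eigensolver can be used, and the result equals the squared largest singular value of $M$.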
Figure \ref{pert} depicts the evolution of best growing secondary perturbation.
It is clear that perturbation at $t=0$ is a leading wave with
a large negative $k_1$ (as well as $k_x$, although $k_x \ll k_1$) and therefore the flow is
highly eccentric at the beginning. With time, $k_1$ (as well as $k_x$) decreases in
magnitude and finally becomes zero at $t=t_{\rm max}$ when growth maximizes.
With further increase of time, the wave becomes trailing and growth starts to decrease.
Figure \ref{grow}a shows that as $\zeta$ increases, the (first) peak value of growth increases and
that occurs at an earlier time. Comparing with the variation of $k_1$ as a function of $t$,
as shown in Fig. \ref{grow}c, it is very clear that a peak in growth
appears when $k_1$ approaches
zero. As $k_1$ in the cases with $\zeta=0.05$ and $0.1$ becomes zero twice,
corresponding growth maximizes twice too. The second peak appears at $t\sim 1000$ when
$k_x\sim k_1\rightarrow 0$. Moreover, for $\zeta=0.4$, $k_1$ becomes zero thrice
(it cuts the zero line at $t\sim 1000$ and becomes negative, but
immediately turns up and cuts the zero
again). Therefore, the corresponding growth curve attains two peaks
at $t\sim 1000$ very close to each other, apart from the first one at $t=503$.
The maximization of growth at the minimization of the radial component of perturbation
wave vector was explained in MAN. This is essentially due to the fact that
$G\propto 1/k^2$.
If $\zeta=0$, then $k_1$ and $k_x$ both become zero simultaneously as shown in Fig. \ref{grow}c.
However for $\zeta>0$, $k_1$ increases faster, as follows from
eqn. (\ref{keq}), and becomes zero earlier than $k_x$.
The interesting fact to note is that the underlying growth,
although apparently of transient kind as shown in Fig. \ref{grow}a, diverges
asymptotically
for any $\zeta >1/3$, while it converges for $\zeta \le 1/3$.
The asymptotic divergence of growth is similar to the instability
one obtains in linear perturbation analysis
for Poiseuille flow at $R\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 5772$ (e.g. Reddy \& Henningson 1993).
The significant asymmetry around $t=t_{\rm max}$ and non-zero asymptotic value in growth curves
are due to the vertical structure.
Figure \ref{grow}b shows the variation of peak
growth as a function of $k_3/k_2$. The quantity $k_3/k_2$ carries
information of how three-dimensional the flow is.
It is interesting to note that the maximum growth for $\zeta >1/3$ is unbounded,
increasing with the vertical structure, while it is bounded for $\zeta \le 1/3$. We know that
in a two-dimensional flow, the maximum growth, $G_{max}$, scales
with $k_{x0}^2$ (MAN). However for $\zeta>0$, $G_{max}$
decreases at small $k_3/k_2$ but increases at large $k_3/k_2$, compared to that for $\zeta=0$.
At around $k_3/k_2=0.5$, which corresponds to the marginally
two-dimensional perturbation, growth due to secondary perturbation is
comparable to that of primary perturbation. This verifies that secondary perturbation
can govern significant transient growth in a geometrically thin accretion disk with finite
vertical thickness. However, in three-dimension, when
$k_3/k_2\sim 1$, the secondary perturbation effect always dominates over the primary one.
This indicates that three-dimensional secondary perturbation
enhances energy growth and thereafter any possible non-linear feedback effects
with its elliptical base state.
As $k_{x0}$ and $k_{10}$ ($\sim R^{1/3}$, see UR, MAN)
increase, the vertical structure plays more effective roles to govern growth. When $k_{x0}=k_{10}/10=-10^3$
and $k_y=k_{20}/10=1$, $G_{max}\sim 4\times 10^4$ at $k_3/k_2=1$ for $\zeta=0.1$, which
is an order of magnitude larger compared to that for $\zeta=0$.
If we consider a smaller $R$ with $k_{x0}=k_{10}/10=-10^2$, then
$G_{max}$ at $k_3/k_2=1$ decreases to $\sim 2\times 10^3$ for
$\zeta=0.1$, which is still larger by a factor of two compared to
that for $\zeta=0$.
Therefore, three-dimensional effects efficiently enhance growth and then presumably
help to trigger turbulence in shear flows.
\section{Discussions}
We find that significant energy growth of perturbations is possible in shear flow with
the Coriolis force. This system is an idealized local analog of an accretion disk which,
under secondary perturbation,
definitely exhibits
three-dimensional large growth of transient kind and, in addition,
sometimes exhibits unbounded late-time growth
at a large amplitude of primary perturbation.
We have explicitly demonstrated
the perturbation effects one by one. First, primary two-dimensional perturbation induces
vortex into the flow that can be locally seen as elliptical streamlines. This system, which
does not have any exponential growing eigenmode but does exhibit significant transient growth,
has been extensively studied already (e.g. UR; Yecko 2004; MAN).
In this situation, a plane wave perturbation,
that is frozen into the fluid, is sheared along with the
background flow. At $t=0$, the effective wave vector of
perturbation is in the $x$ direction ($k_x \gg k_y$) and
is negative, which provides very asymmetric leading waves. Therefore, the flow
at this stage is highly eccentric. As time goes on, the
wavefronts are straightened out by the shear and $|k_x|$
decreases and transient growth increases. When $k_x\sim 0$, i.e. the wavefronts become almost radial,
transient growth is maximum. At this time, the streamlines of
the flow are almost circular. At yet later
time, growth decreases and the wave becomes trailing.
Then it has been argued that if the maximum growth
exceeds the threshold for inducing turbulence,
then this mechanism could drive the system to a turbulent state.
Presumably, once the system becomes turbulent, it remains turbulent
as a result of nonlinear interactions and feedback among the
perturbations. We recall that our present aim is to understand and establish the origin of
viscosity in the flow that must be due to turbulence. The
transfer of mass inward and angular
momentum outward in an accretion disk
is difficult to explain in the absence of turbulence. However, the accretion disk
is quite a complex system with, possibly, continuous perturbed flow. Therefore, if we can address
a mechanism to generate turbulence, then its recycling is not difficult.
Second, we consider further perturbation, namely secondary perturbation, into the
flow described above. The primarily perturbed shear flow serves as a background for secondary
perturbation whose eccentricity naturally varies with time due to the evolution
of primary perturbation. In this paper, we have especially
demonstrated the evolution of this secondary perturbation which exhibits three-dimensional large
transient growth in a local Keplerian accretion disk. While primary perturbation itself can produce
large transient growth at a high Reynolds number that might drive the non-linear effects into
the system, the best perturbation responsible for this effect is two-dimensional. However, it is understood that
perturbations must ultimately decline to zero in the presence of viscosity (see e.g. UR),
unless three-dimensional effects are invoked. Therefore, we have addressed
the possible origin of three-dimensional effects that shows a clear route to three-dimensional
hydrodynamic growth and then possible non-linear feedback and turbulence in accretion flows.
Underlying
growth arises due to the elliptical vortices present in the background,
rather than due to the plane shear which exhibited growth under primary perturbation.
In the standard fluid literature, the elliptical instability has been widely
discussed as a possible route to self-sustained turbulence in linearly perturbed shear flows,
as mentioned in \S1. However, usual emphasis of those investigations is on non-rotating
flows. Craik (1989), while
discussing the elliptical instability in rotating shear flows, did not focus
on a Keplerian flow, which is of astrophysical interest. Therefore, in the present paper,
we have first discussed the growth rate in standard elliptical flows and
compared the results for
non-rotating flows with that of rotating ones in \S3.1. We have shown that the
growth rate in a Keplerian flow with
constant elliptical streamlines is significantly large compared to that in a non-rotating flow,
particularly at high eccentricity.
However, in reality, when a small section of a Keplerian accretion
disk is considered under a two-dimensional linear
perturbation, the flow exhibits distorted elliptical streamlines whose structure varies with time.
Therefore, the growth rate due to secondary perturbation at a fixed $\epsilon$ (instantaneous
growth rate) as described in \S3.2 decreases much compared to
that in the flow with idealized elliptical streamlines,
unless the amplitude of primary perturbation, $\zeta$, is very large.
At $\zeta>1/3$, a pure vertical secondary perturbation produces the best growing eigenmode.
On the other hand, at $\zeta\le 1/3$,
the best growing eigenmode arises due to other three-dimensional perturbations with a
significant, but not sole, vertical effect. Although the instantaneous growth rate appears
to be small for a small $\zeta$
(which is of particular interest), at least compared to
the case with idealized elliptical streamlines,
actual growth which is the result of simultaneous
evolution of both perturbations as described in \S4 can be large enough to
exhibit non-linear effects
if the time-scale for the evolution of perturbation is large. The time for the evolution
of perturbation scales
with the Reynolds number of the flow as $R^{1/3}$ (MAN). As perturbation evolves,
$k_x$ varies from $-\infty$ to $0$ and thus the eccentricity of the flow decreases from
$1$ to $0$. Most of the important three-dimensional
growing modes are generated at the high eccentricity regime when $0.995\le\epsilon
\le 1$ and therefore $10\le |k_x|\le \infty$. An important fact to note is that growth maximizes
for three-dimensional perturbation with a significant vertical effect.
We therefore conclude with an important caveat. UR already showed via
two-dimensional simulations that chaotic motions can persist for a time much longer than
the time scale needed for linear growth. However, the corresponding vorticity decays unless
the vertical structure is there. In the present paper,
we have shown the existence of three-dimensional
perturbation effects and corresponding eigenmodes which govern large
energy growth and thus suggest the possible existence of non-linear effects and self-sustained
hydrodynamic turbulence in accretion disks. Now one will have to verify our suggestion
by numerical simulation with proper resolution and possibly also by nonlinear
analytic asymptotic methods.
\begin{acknowledgements}
The author is grateful to Ramesh Narayan for suggesting this problem and for
extensive discussion and encouragement
throughout the course of the work. The author is also thankful to the referee for
his/her constructive suggestions that helped to improve the presentation of the paper.
This work was supported in part by NASA grant NNG04GL38G and NSF grant
AST 0307433.
\end{acknowledgements}
\section{Introduction}
\label{s.introduction}
NGC 3256 is an infrared-luminous merger with a bolometric luminosity of $L_{\rm bol}=4\times 10^{11} L_{\odot}$
($D = 35$ Mpc, see Table \ref{t.4418param} for other parameters).
Its two nuclei with a projected separation of 5\arcsec\ = 850 pc \citep{Zenner93,Norris95} and
two long tidal tails of stars and \ion{H}{1} gas \citep{English03}
indicate that the system is in a late stage of merging between two disk galaxies \citep{Toomre77}.
NGC 3256 belongs to the sequence of `most luminous galaxies within their distance ranges', which are, beyond the local group,
NGC 253, M82, NGC 1068, NGC 3256, Arp 299, and Arp 220 in the catalogue of \citet{Sanders03}.
It is therefore among the best targets to explore luminosity-related phenomena in local galaxies, although its location at
Dec. = $-43\arcdeg$ has impeded its study compared to other galaxies in the sequence.
\citet[hereafter SHP06]{Sakamoto06} made the first interferometric imaging of a CO line emission in NGC 3256
soon after the commissioning of the Submillimeter Array (SMA) and
discovered wide CO line wings underlying the much brighter narrow component in previous observations
\citep[e.g.,][]{Sargent89,Aalto02}.
The wing CO emission was attributed to a molecular outflow from the face-on merger.
The detection of a galactic molecular outflow from faint and wide CO line wings became possible at that time
owing in part to the new wide-band capabilities of the SMA.
Many extragalactic molecular outflows have been detected since then
through broad CO line wings caught with wide-band spectrometers
\citep[e.g.,][]{Feruglio10,Chung11,Alatalo11}\footnote{Detection of molecular outflows from broad OH lines
dates back much further \citep[and references therein]{Baan89}.
Galactic outflows of cold molecular gas have been also found
from off-plane molecular gas of edge-on galaxies
[e.g., \citet{Nakai87} toward M82 and \citet{Turner85}, \citet{GarciaBurillo00}, and \citet{Bolatto13a} toward NGC 253]
and
from blueshifted molecular absorption lines against nuclear continuum
\citep[e.g.,][for Arp 220 and Mrk 231]{Baan89,Sakamoto09,Fisher10}.
All galaxies mentioned here belong to the above-mentioned elite sequence of luminous nearby galaxies.}
Such molecular outflows coexist with outflows of ionized and atomic gas and are expected to
have significant impact on the luminosity-generation activities in galaxies and the evolution of galaxies themselves \citep[for reviews]{Veilleux05,Carilli13}.
We have used the new Atacama Large Millimeter/sub-millimeter Array (ALMA)
in its first open-use (Cycle 0) to further study NGC 3256.
We aimed at the structure and properties of the molecular gas around the luminous merger nuclei
including the high-velocity molecular gas.
Although the broad CO wings had been confirmed and found to be even broader
in the ALMA commissioning and science verification data \citep{Sakamoto13a},
their structure was still largely unconstrained.
We therefore observed the galaxy in the 3 and 0.8 mm bands in ALMA Cycle 0 and
also made supplemental 1.3 mm observations with the SMA.
These new observations provide much higher spatial resolution than before for the circumnuclear molecular gas,
up to about 1\arcsec\ for CO(1--0), 0\farcs8 for CO(2--1), and 0\farcs5 for CO(3--2).
We also obtained high-resolution high-sensitivity data of CN(1--0), \propyne(6--5), \thirteenCO(2--1), \HCOplus(4--3), and 3 and 0.8 mm continuum.
In this paper, we report these new observations and
give an overall account of the spatial and kinematical structure of the molecular gas in the center of NGC 3256.
We found the high-velocity gas to be two bipolar molecular outflows from the two nuclei
and that the two outflows have distinctively different properties from each other.
We describe our observations and data reduction in Section \ref{s.obs}
and present our observational results in Section \ref{s.result}.
We use the data in Section \ref{s.configuration} to constrain the merger configuration,
which is critical to interpret the observed gas motion.
In Section \ref{s.twoOutflows} we present our two-outflow model for the observed velocity structure and gas distribution.
The one from the southern nucleus has remarkable properties in its velocity field, high velocity, high collimation, and large energy.
We discuss its driving mechanism in Section \ref{s.Snucleus}.
Section \ref{s.conclusions} compares our findings in NGC 3256 with similar objects and phenomena in galaxies
and then summarizes our conclusions.
\section{Observations}
\label{s.obs}
\subsection{ALMA}
\label{s.obs-alma}
Our ALMA observations in Cycle 0 were made in 2011--2012 using
up to twenty-three 12 m-diameter antennas as summarized in Table \ref{t.obslog}.
We observed in the 3 mm band (Band 3) and the 0.85 mm band (Band 7)
each in two array configurations jointly covering projected baselines between 15 m and about 370 m.
The Band 3 observations were for a single pointing at a position between the two nuclei.
The primary beam of the ALMA 12 m antennas has a full width at half maximum (FWHM) of 53\arcsec\
at the frequency of the redshifted CO(1--0) line\footnote{The FWHM size of the primary beam is assumed
to be $1.17 (\lambda/12)$ for ALMA and $1.15 (\lambda/6)$ for SMA, where $\lambda$ is the wavelength in meters.}.
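The quoted beam widths follow directly from the footnote formula; a minimal sketch of ours (the unit conversion and input frequencies are computed here, not quoted from the paper):

```python
import math

C = 299792458.0  # speed of light, m/s

def primary_beam_fwhm_arcsec(freq_hz, coeff, dish_m):
    # FWHM = coeff * (lambda / D) in radians, converted to arcsec;
    # coeff = 1.17 for the ALMA 12-m and 1.15 for the SMA 6-m antennas.
    lam = C / freq_hz
    return coeff * (lam / dish_m) * (180.0 / math.pi) * 3600.0

# Redshifted CO(1-0): 115.271 GHz rest frequency shifted by
# Vsys = 2775 km/s (radio definition: nu = nu0 * (1 - V/c)).
nu_co10 = 115.271e9 * (1.0 - 2775.0e3 / C)
fwhm_alma = primary_beam_fwhm_arcsec(nu_co10, 1.17, 12.0)  # ~53 arcsec
```

The same formula with the SMA coefficient gives roughly 52\arcsec\ at the redshifted CO(2--1) frequency, matching the value quoted in Section \ref{s.obs-sma}.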
In Band 7 we made a seven-point hexagonal mosaic with the same central position and a 7\farcs3 spacing
between adjacent pointings.
The FWHM of the individual primary beam is 18\arcsec\ and that of the mosaicked primary beam is about 25\arcsec\
for the redshifted CO(3--2) line.
We used a correlator setup having 0.488 MHz channel spacing and about 3.5 GHz continuous coverage in each sideband.
We also combined with our Band 3 data an earlier ALMA dataset obtained
through the Commissioning and Science Verification (CSV) program carried out by the Joint ALMA Observatory.
The CSV observations, also listed in Table \ref{t.obslog}, had about the same on-source integration time of 3 hr as
our Cycle 0 observations albeit with 7 or 8 antennas.
They provide dense sampling of short projected baselines between 12 m and 90 m.
The CSV observations were made toward a slightly offset position (4\farcs2 from our Cycle 0 observations)
with a correlator setup of 15.6 MHz channel spacing
and with almost the same frequency coverage as our Cycle 0 observations (Table \ref{t.freqCoverage}).
The CSV and Cycle 0 data were combined as a mosaic because of the pointing offset.
All ALMA data were calibrated from the raw data\footnote{in ALMA Science Data Model format}
in a uniform manner using the CASA\footnote{Common Astronomy Software Applications} reduction package versions 4.0 and 4.1.
Most notably, we used the `Butler-JPL-Horizons 2012' model for Titan and Mars in our flux calibration, measured
and accounted for the spectral slopes of our bandpass and gain calibrators in our calibration, and checked
the flatness of our spectral bandpass by looking at the spectrum of the gain calibrator after all calibrations.
Data showing non-linear baselines were flagged.
The versions of CASA that we used do not allow flagging only one of the two linear polarizations on individual baselines
because the two polarizations share a flagging variable in the data structure.
Therefore, when one of the two linear polarizations on a baseline was found faulty, the remaining one was copied over
the corrupted one and both the original and copied visibilities were down-weighted to conserve the net weights of the
rescued data.
Imaging and basic data analysis were also made in CASA.
For lines we binned our data to spectral resolutions of 4, 10, and 20 MHz for Cycle 0 Band 3, 15.6 MHz for the combined
CSV\plus Cycle 0 data in Band 3, and 10 and 30 MHz for Band 7.
Table \ref{t.linelist} lists the six lines detected in our ALMA data as well as two notable non-detections.
We made continuum data after carefully inspecting the full widths of these lines and by summing up line-free channels.
The continuum has been subtracted from our line data in the \uv\ domain for the CSV\plus Cycle 0 data
and in the image domain for the rest.
This is because we are most interested in weak and broad line emission near the phase center in the former dataset
while better subtraction across the imaging area is more desired for other datasets.
Parameters of our reduced data are summarized in Tables \ref{t.data_cont_properties} and \ref{t.data_line_properties}.
Compared to our previous SMA observations the new ALMA observations improved
spatial resolution by about a factor of three and
sensitivity in line brightness temperature by about an order of magnitude.
Our CO(1--0) data cubes made from the Cycle 0 data alone recovered 76--87\% of the single-dish flux
measured with a 43\arcsec\ beam (FWHM) by \citet{Aalto95}.
The fraction is highest in the data cube made with lower weights to longer baselines.
We recovered 91--97\% of the single-dish CO(1--0) flux in the cubes made from the CSV\plus Cycle 0 data.
We expect similar or higher recovery rates for other Band 3 lines and continuum
but the recovery rate must be lower for emission in Band 7
because the central hole in the \uv\ plane is larger in Band 7.
\subsection{SMA}
\label{s.obs-sma}
We added to our SMA 1.3 mm observations reported in \citest{Sakamoto06} new data taken in 2008,
increasing the maximum projected baseline from 179 m to 509 m and doubling the total on-source time
from 6.9 hr to 12.9 hr.
The new observations in two nights had 7 antennas and excellent weather with the 220 GHz zenith opacity between 0.04 and 0.06.
We observed the same position as in our previous observations (as well as our ALMA observations) using the
tuning for the same three $J=2$--1 lines as before, namely, \twelveCO, \thirteenCO, and \CeighteenO\
although only the first two were bright enough to be imaged at high angular resolutions.
The primary beam of the SMA 6-m antennas has a FWHM size of 52\arcsec\ at the frequency of the redshifted CO(2--1) line.
The data were reduced with the same steps as before using the MIR reduction package.
\subsection{Conventions}
The offset coordinates in this paper are with respect to our SMA and ALMA Cycle 0 phase-tracking center
in Table \ref{t.4418param}.
We adopt radio positions for the two merger nuclei, namely,
R.A. = 10\hr27\mn51\fs23, Dec. = \minus43\arcdeg54\arcmin14\farcs0 (J2000) for the northern (N) nucleus
and
R.A. = 10\hr27\mn51\fs22, Dec.~=~\minus43\arcdeg54\arcmin19\farcs2 (J2000) for the southern (S) nucleus
\citep{Neff03}.
Our phase tracking center is the midpoint of these nuclei with the last RA digit rounded up.
We use radio-defined velocity with respect to the Local Standard of Rest (LSR) throughout this paper
(LSRK in the ALMA terminology).
We adopt 2775 \kms\ (radio, LSR) for the systemic velocity of the galaxy and measure offset
velocities from this \Vsys\ (e.g., in presenting channel maps).
Our previous SMA observations found this to be a good fiducial velocity not only for the whole system but also
for individual nuclei because they almost align on the kinematical minor axis of the merger \citesp{Sakamoto06}.
\section{Observational Results}
\label{s.result}
\subsection{Continuum}
\label{s.result-continuum}
The 3 mm continuum emission shown in Fig. \ref{f.contmaps} (a) peaks at the two nuclei
and
is extended to a radius of at least 20\arcsec\ (3 kpc) with arcs and arm-like features in the region.
Millimeter continuum at 1.3 mm also peaks at the two nuclei \citesp{Sakamoto06}.
The nuclear peaks and the extended emission at 3 mm are morphologically similar to those
previously observed at 6 and 3.6 cm \citep[in their Fig. 1]{Norris95,Neff03}.
Both nuclei are resolved in our 0.86 mm continuum images in Fig. \ref{f.contmaps} (b), (c).
The northern nucleus has a high-intensity plateau with a diameter of about 2\arcsec\ (0.3 kpc)
and the southern nucleus has a compact (\about0\farcs5, 80 pc) peak
with a linear feature elongated by about 3\arcsec\ (0.5 kpc) in the east-west direction through the nucleus.
The extent of the northern nucleus agrees with that in X-rays \citep[FWHM \about1\farcs5 in 0.5--10 keV measured by][]{Lira02}.
Our highest-resolution continuum image in Fig. \ref{f.contmaps} (c) hints at a (broken) ring
in the plateau around the northern nucleus.
It is comparable in size to an optical ring-like structure noted by \citet{Laine03}.
There is also conspicuous bridge-like emission between the two nuclei.
It emanates from the circumnuclear region of the northern nucleus and curves toward the western side
of the elongated continuum emission across the southern nucleus.
This feature and the near-linear emission across the southern nucleus
are present in the 3.6 cm continuum data of \citet[their Figs. 1, 2]{Neff03}.
The peak brightness temperatures in Fig. \ref{f.contmaps} are between 0.08 and 0.24 K.
The compact southern nucleus shows higher peak brightness temperatures
when the northern circumnuclear plateau is spatially resolved.
We measured the spectral slope of the continuum emission at 2.8 mm and 0.86 mm by comparing the data in
the upper and lower sidebands separated by about 12 GHz from each other.
We used for this single-sideband continuum images that have a common spatial resolution and
were made with common \uv\ baseline lengths; the shortest baselines in the LSB and the longest in the USB were
flagged for this.
Unfortunately, we cannot reliably compare the 2.8 mm and 0.86 mm data to estimate the spectral index between them
because the difference in their \uv\ coverages is too large.
The spectral index $\alpha$ of the continuum emission (for $S_\nu \propto \nu^\alpha$ where $\nu$ is frequency
and $S_\nu$ is flux density)
is found to be $-0.1$ at 2.8 mm and $+3$ at 0.86 mm in the central 20\arcsec.
The spectral indexes measured at the individual nuclei are listed in Table \ref{t.contFluxSpix}.
For each nucleus, $\alpha$ is significantly larger at 0.86 mm than at 2.8 mm.
Moreover, $\alpha$ is larger at the northern nucleus than at the southern nucleus, clearly at 0.86 mm and also at 2.8 mm.
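As an illustrative cross-check (not part of our measurement pipeline), the two-sideband estimate reduces to a two-point power-law fit. The flux densities and sideband frequencies below are hypothetical placeholders, not our measured values:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha for S_nu proportional to nu^alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Hypothetical single-sideband flux densities (mJy) at sideband centers
# roughly 12 GHz apart in Band 3 (all numbers illustrative only):
alpha = spectral_index(100.0, 102.0, 98.8, 114.0)  # close to -0.1
```

Because the sidebands are only \about12 GHz apart, small relative calibration errors translate into large errors in $\alpha$, which is why the matched \uv\ coverage described above matters.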
The spectral index $\alpha$ is 3--4 for optically thin thermal dust emission because
the dust mass opacity coefficient has a power-law index of 1--2.
Free-free emission from thermal electrons often has a spectral index around $-0.1$.
Synchrotron emission from galaxy nuclei often has a spectral index of about $-1$ ($\pm 0.5$).
The smaller $\alpha$ of the southern nucleus is consistent with the nucleus having a larger fraction
of free-free or synchrotron emission and less of dust thermal emission than the northern nucleus.
\subsection{Line}
\label{s.result-line}
Line maps are shown in Figures \ref{f.maps.CO102132} and \ref{f.maps.nonCO} for the eight lines that we imaged.
Plots shown are: the integrated intensity, intensity-weighted mean velocity, intensity-weighted velocity dispersion, and
peak brightness temperature.
Also shown are line channel maps in Figs. \ref{f.chans.CO10.br}, \ref{f.chans.CN10.br10MHz}, \ref{f.chans.CH3C2H.Na10MHz},
\ref{f.chans.CO32.br}, and \ref{f.chans.HCOp43.na30MHz}.
For reasons of space they are shown at low velocity resolution, although we also made data cubes with higher velocity resolution.
Note that contour levels are switched between channels with and without strong signals in the CO channel maps
in order to display both faint high-velocity emission and strong emission near the systemic velocity.
\subsubsection{Spatial Distribution}
All the molecular lines have emission peaks at or near the two nuclei, as does the continuum emission.
The degree of concentration and the relative strengths of the two nuclei vary among the lines.
The bridge-like feature between the northern and southern nuclei is also visible in line emission,
most clearly in CO(2--1), (3--2) and \HCOplus(4--3) integrated intensity images.
There are other arc features; some are seen in continuum and some are visible only in the line data.
The near-linear feature crossing the southern nucleus in the east-west direction is also visible
in line emission, most clearly in CO(3--2) and \HCOplus(4--3).
\subsubsection{Velocity Field}
\label{s.result.line.velocity_field}
{\em Large scale:}
The CO(1--0) velocity map in Fig. \ref{f.maps.CO102132} shows
overall rotation in the central \about5 kpc with the receding major axis at position angle \about70\arcdeg.
Significant deviations from circular motion at this scale are visible mostly at the locations of the arm-like features.
The apparent kinematical major axis is at p.a. \about90\arcdeg\ within about 1 kpc from the nuclei.
Both nuclei are therefore approximately on the apparent kinematical minor axis at this scale, as seen in
e.g., the 1st moment maps of CO(1--0), CO(2--1) and CN(1--0).
These large scale kinematics of molecular gas are consistent with those in \citest{Sakamoto06}.
{\em N nucleus:}
Further inside and around the northern nucleus, our data show rotation within about 300 pc from the nucleus.
This clearly appears as a butterfly-like pattern of isovelocity contours
in the mean velocity maps of \HCOplus(4--3) (Fig. \ref{f.maps.nonCO}) and CO(3--2) (Fig. \ref{f.maps.CO32nuc} b),
the latter of which was made only with brighter circumnuclear emission.
We fitted the velocity field to estimate the kinematical major axis to be at ${\rm p.a.} \approx 75\arcdeg$
and the disk inclination to be $ i \approx 30\arcdeg$ for a region with a 3\arcsec\ major axis diameter.
This kinematical major axis reasonably agrees with the morphological major axis of the circumnuclear high-intensity region
in CO, \HCOplus, and 0.86 mm continuum emission around the northern nucleus.
The kinematical major axis gradually changes its position angle, in the sense
that it is about 60\arcdeg\ at larger radii and about 90\arcdeg\ closer to the nucleus.
This may be due to a warp of the northern nuclear disk or to non-circular motions of the gas in the disk.
{\em S nucleus:}
The southern nucleus has in its vicinity a velocity gradient in the east-west direction (${\rm p.a.} \approx 90\arcdeg$)
in the mean velocity maps.
The isovelocity contours, however, do not show a clear butterfly pattern there.
Also, the largest gradient of mean velocity is about 0\farcs5 east of the southern radio nucleus (white plus sign).
This is in contrast to the peaks of integrated line intensity, which are often slightly west or northwest of the nucleus
by about 0\farcs3--0\farcs5 (e.g., in CO, \thirteenCO, and CN, but not in \HCOplus).
In the following we model the near-linear feature running east-west across the S nucleus as
a nearly edge-on circumnuclear disk of radius \about300 pc.
The lack of a clear butterfly pattern around the southern nucleus is attributed to this edge-on viewing angle.
{\em Between the Nuclei:}
Conspicuously, the most redshifted CO(3--2) emission is located about 2\arcsec\ south of the northern nucleus,
as seen in the CO(3--2) mean velocity map in Fig. \ref{f.maps.CO102132}.
This is due to the high-velocity wing of CO emission and is the reason for the very large line width at the same location in the
CO(3--2) line-width map.
This feature does not show up in Fig. \ref{f.maps.CO32nuc}b because the wing emission is faint and below the cutoff used for the moment analysis.
The high-velocity emission is separately described in \S\ref{s.result.highV} along with the line width information
in Figs. \ref{f.maps.CO102132} and \ref{f.maps.nonCO}.
\subsubsection{Peak \Tb\ and Integrated Intensity}
The three \twelveCO\ lines have peak integrated intensities on the order of $2\times 10^3$ K \kms\
and maximum brightness temperatures of about 20 K, both at about 1\arcsec\ resolution.
The maxima are 22.4 K and 2730 K \kms\ in CO(3--2) at our highest spatial resolution
($0\farcs58\times0\farcs39 \approx 80$ pc).
The peaks of line emission are in the vicinity of the two nuclei,
the spiral feature running between the two nuclei,
and in the linear feature across the southern nucleus, particularly on its western side.
At least for these regions, our data show neither
a significant decline of (integrated) intensity in higher transitions, which would suggest significantly subthermal excitation,
nor
a significant increase in (integrated) intensity, which would arise from optically thin emission from thermalized warm molecular gas.
Other lines are much weaker than the \twelveCO\ lines, having peak brightness temperatures at about 1 K or lower.
Possible reasons for this include that these lines are optically thin,
have lower excitation temperatures than \twelveCO\ (i.e., are subthermally excited),
or are emitted from smaller regions than the \twelveCO\ lines.
\subsubsection{Line Flux, Gas Mass, and Surface Density }
\label{s.obs.line.flux}
The total flux of CO line emission is measured to be $1.0\times10^3$, $3.0\times10^3$, and $5.7\times10^3$ Jy \kms\
for J=1--0, 2--1, and 3--2 transitions, respectively,
in a 20\arcsec\ diameter aperture centered at the midpoint of the two nuclei.
The CO(1--0) flux in the concentric 40\arcsec\ diameter aperture is $1.6\times10^3$ Jy \kms.
These are corrected for the primary beam (and mosaic) responses but not for any missing flux in the interferometric data.
The fluxes above are measured in data cubes with resolutions
\about2\farcs7, \about0\farcs6, and \about0\farcs6 for CO(1--0), (2--1), and (3--2), respectively.
The CO(2--1) flux in the same 20\arcsec\ aperture is measured to be $4.0 \times10^3$ Jy \kms\ in a \about3\farcs0 resolution
data cube.
We note that the CO(2--1) to CO(1--0) flux ratio
at about 3\arcsec\ resolution, 4.0 with \about10\% calibration uncertainty,
is what is expected for thermalized, optically thick gas at $\gtrsim 30$ K.
The two data sets have about the same ranges of baseline length in units of wavelength, and hence
the ratio should be little affected by missing flux.
The CO(3--2) to (2--1) ratio at about 0\farcs6 resolution is 1.9 in flux and 0.86 in brightness temperature.
It is fully compatible with thermalized optically thick CO at $\gtrsim30$ K considering the calibration uncertainties and
the probably larger missing flux in the CO(3--2) data.
On the whole, the data are consistent with the CO being thermalized at least up to $J = 3$ and optically thick.
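The expected flux ratios for thermalized, optically thick lines can be checked with the Planck radiation temperature. This is a minimal sketch, assuming both lines fill the same solid angle, a 2.73 K background, and standard CO rest frequencies; it is not part of our analysis:

```python
import math

H_OVER_K = 0.0479924  # h/k in K per GHz

def j_nu(temp_k, nu_ghz):
    """Planck radiation temperature J_nu(T) in K at frequency nu."""
    x = H_OVER_K * nu_ghz / temp_k
    return H_OVER_K * nu_ghz / math.expm1(x)

def flux_ratio_thick(t_k, nu_hi=230.538, nu_lo=115.271, t_bg=2.73):
    """Flux-density ratio of two optically thick, thermalized lines that
    fill the same solid angle; the ratio approaches (nu_hi/nu_lo)^2 = 4
    for CO(2-1)/CO(1-0) as T becomes large."""
    tb_hi = j_nu(t_k, nu_hi) - j_nu(t_bg, nu_hi)
    tb_lo = j_nu(t_k, nu_lo) - j_nu(t_bg, nu_lo)
    return (nu_hi / nu_lo) ** 2 * tb_hi / tb_lo

# flux_ratio_thick(30.0) gives about 3.7 and the ratio approaches 4.0
# at higher temperatures, consistent with the observed 4.0 +- 10%.
```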
We estimate the mass of molecular gas using the CO(1--0) to \HH\ mass
conversion factor $\Xco \equiv N_{\rm H_2}/I_{\rm CO(1-0)} = 1 \times 10^{20}$ \unitofX\ and
36\% mass contribution from He.
We do not know the true \Xco\ in NGC 3256, nor do we have a strong reason to believe that \Xco\ is constant across the galaxy.
Therefore we give our molecular gas masses with the parameter \Xtwenty\ that is \Xco\ in units of $1 \times 10^{20}$ \unitofX.
While \Xtwenty\ is unity for our assumed (i.e., fiducial) conversion factor,
this parameterization allows our mass estimates to be easily rescaled when a more plausible value of \Xco\ is given.
The conversion factors estimated with various methods in galaxies at solar metallicities or higher are usually
in the range of \Xtwenty = 0.3 -- 3 with high values for `normal' galaxies such as our Galaxy in its disk
and low values for luminous infrared galaxies \citep[see][for a review]{Bolatto13b}.
\citet{Bolatto13b} recommend \Xtwenty=0.4 with an uncertainty of 0.5 dex for luminous starburst galaxies
and \citest{Sakamoto06} obtained a value within 10\% of it for the central 3 kpc of NGC 3256 after
averaging various estimates.
In this paper, however, we adopt the normalization with \Xtwenty=1 partly for simplicity
and also because the gas to dynamical mass ratios
that we later calculate for the nuclei and for a larger area appear more reasonable with \Xtwenty=1.
In any case, we expect a factor of 3 uncertainty in our adopted \Xtwenty\ of 1.0 and therefore \Xtwenty=0.4 is within the uncertainty.
The conversion factor between CO(1--0) integrated intensity and molecular gas surface density is
$\alphaco \equiv \Sigma_{\rm mol} / I_{\rm CO(1-0)} = 2.2 \Xtwenty$ \unitofalpha.
The molecular gas mass estimated from our CO(1--0) line flux is
$\Mmol(r \leq 10\arcsec ) = 7\times10^{9} \Xtwenty\, \Msol$ in the central 20\arcsec\ diameter aperture
and
$\Mmol(r \leq 20\arcsec ) = 1\times10^{10} \Xtwenty\, \Msol$ for the central 40\arcsec\ (7 kpc).
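These masses follow from the standard CO line-luminosity relation of Solomon \& Vanden Bout (2005). The sketch below assumes a luminosity distance of \about35 Mpc, which is inferred from the angular-to-linear scales quoted in this paper rather than stated explicitly, together with the fiducial $\alphaco = 2.2\,\Xtwenty$:

```python
def co_mass(flux_jykms, d_mpc, z, alpha_co=2.2, nu_rest_ghz=115.271):
    """Molecular gas mass (Msun) from a CO(1-0) flux via the standard
    line luminosity L'_CO = 3.25e7 S_dV nu_obs^-2 D_L^2 (1+z)^-3
    (K km/s pc^2; Solomon & Vanden Bout 2005), scaled by alpha_co."""
    nu_obs = nu_rest_ghz / (1.0 + z)
    l_co = 3.25e7 * flux_jykms * nu_obs**-2 * d_mpc**2 * (1.0 + z)**-3
    return alpha_co * l_co

# With the 1.0e3 Jy km/s measured in the central 20", an assumed
# D_L ~ 35 Mpc, and z ~ 0.00926 (Vsys = 2775 km/s), this returns
# ~6.5e9 Msun, consistent with the quoted 7e9 Msun given the rounding
# and the distance assumption.
```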
The scaling parameter \Xtwenty\ should be read as the average value for each region in consideration.
The peak molecular gas surface densities toward individual nuclei are
$4\times 10^3 \Xtwenty$ and $3 \times 10^3 \Xtwenty$ \Msol\ \mbox{pc$^{-2}$}
for the northern and southern nuclei, respectively,
at \about1\farcs4 (240 pc) resolution on the basis of the CO(1--0) data in Fig. \ref{f.maps.CO102132}.
The southern nucleus has the highest CO integrated intensity in the merger and its peak gas column density is
$\Sigma_{\rm mol}({\rm S}) = 6\times 10^3 \Xtwenty$ \Msol\ \mbox{pc$^{-2}$}
in our \about0\farcs5 (80 pc) resolution CO(3--2) data in Fig. \ref{f.maps.CO102132}.
Here we do not correct for the different transition because of the CO excitation inferred above.
Converting this peak molecular gas column density to the peak hydrogen and proton column densities,
we obtain toward the southern nucleus
$\log (N_{\rm H, equiv.}/{\rm cm^{-2}}) = 23.9$
and
$\log (N_{\rm p}/{\rm cm^{-2}}) = 23.8$.
The former converts \HH\ and He with hydrogen atoms of equivalent mass
and the latter gives proton column density.
Both have 0.5 dex uncertainty inherited from the uncertainty in \Xtwenty.
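The conversion from the peak surface density to the two quoted column densities can be reproduced as follows; this sketch assumes the 36\% He mass contribution adopted above and counts two protons per He-4 nucleus:

```python
import math

MSUN_G = 1.989e33    # solar mass in g
M_H_G = 1.6726e-24   # hydrogen-atom mass in g
PC_CM = 3.086e18     # parsec in cm

def column_densities(sigma_mol_msun_pc2, he_mass_frac=0.36):
    """Return (log N_H_equiv, log N_p) in cm^-2 for a molecular surface
    density (Msun/pc^2) that includes He as he_mass_frac of the H2 mass."""
    sigma_cgs = sigma_mol_msun_pc2 * MSUN_G / PC_CM**2  # g cm^-2
    # equivalent-hydrogen column: all mass counted as H atoms
    n_h_equiv = sigma_cgs / M_H_G
    # proton column: 1 proton per H atom (from H2), 2 per He-4 nucleus
    sigma_h2 = sigma_cgs / (1.0 + he_mass_frac)
    sigma_he = sigma_cgs - sigma_h2
    n_p = sigma_h2 / M_H_G + 2.0 * sigma_he / (4.0 * M_H_G)
    return math.log10(n_h_equiv), math.log10(n_p)

# column_densities(6e3) returns approximately (23.9, 23.8),
# matching the values quoted for the southern nucleus.
```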
\subsection{High Velocity Emission}
\label{s.result.highV}
We detected wide faint line wings in our data, most clearly in CO(1--0) and (3--2) and also in CN(1--0).
The new sensitive ALMA data not only confirm the previous detection of \citest{Sakamoto06}
but also better constrain the velocity extent and spatial distribution of the high velocity gas.
\subsubsection{Channel Maps}
\label{s.result.highV.channelMaps}
Figures \ref{f.HVchans.CO10.tp.VHV} and \ref{f.HVchans.CN10.tp.VHV}
show our CO(1--0) and CN(1--0) channel maps, respectively,
made with \uv\ tapering (i.e., spatial smoothing) and wide channel widths to better detect high-velocity emission.
These Band 3 images use the CSV and Cycle 0 data combined to maximize sensitivity.
Continuum was determined at more than 750 (400) \kms\ from \Vsys\ for the CO (CN) lines
and has been subtracted from each data cube.
The CO(1--0) data show $>3 \sigma$ emission from $-650$ \kms\ to $+650$ \kms\ around the northern nucleus,
up to about 500 \kms\ from \Vsys\ between the two nuclei,
and up to \about$\Vsys \pm400$ \kms\ around the southern nucleus.
At offsets of about 300 \kms\ from \Vsys, the redshifted emission is stronger than the blueshifted emission, and the former
peaks between the two nuclei.
The same is observed in CN(1--0) and was also the case in the CO(2--1) observations of \citest{Sakamoto06} in which
the wing emission was first found up to $\Vsys \pm 300$ \kms.
Figure \ref{f.HVchans.CO32.RedBlueGray} shows CO(3--2) channel maps displaying blueshifted and redshifted emission
on the same panel for the same absolute offset from \Vsys. The background image in gray scale is continuum.
The upper panels (Fig. \ref{f.HVchans.CO32.RedBlueGray}a) are our 1\farcs1 resolution data.
Emission stronger than $4\sigma$ is detected up to 450 \kms\ from \Vsys\ in both blueshifted and redshifted velocities.
The northern nucleus has emission up to this largest offset velocity
and the centroid of the blueshifted emission is on the northwestern side of the nucleus
while the redshifted emission centroid is on the southeastern side.
Around the southern nucleus, redshifted and blueshifted emission are roughly symmetrical about the nucleus, redshifted
to the north and blueshifted to the south, except for the redshifted emission extending east from the southern nucleus
at the leftmost channel.
It is also notable that emission more than about 300 \kms\ from \Vsys\ is clearly detached from the southern nucleus
unlike the high velocity emission around the northern nucleus.
The lower panels (Fig. \ref{f.HVchans.CO32.RedBlueGray}b) are our 0\farcs5 resolution channel maps for the high velocity emission.
They more clearly show the symmetry around the southern nucleus.
Notable new observations here are that the high velocity gas has clumps in the extended structures
and
that the blueshifted gas slightly curves toward west at larger distances from the southern nucleus.
The highest-velocity emission is again clearly detached, by about 1\farcs8 (310 pc), from the southern nucleus.
Little CO emission is detected around the northern nucleus in these higher resolution data
indicating that the high velocity CO(3--2) emission around the
northern nucleus is more extended than that around the southern nucleus.
The extent of the high-velocity blueshifted gas is larger around the northern nucleus than around the southern nucleus
also in CO(1--0) as seen in Fig. \ref{f.HV.CO10.RedBlues}a.
\subsubsection{High-Velocity Line Flux}
\label{s.result.line.hv.flux}
The flux of the high-velocity CO emission was measured by integrating only the high-velocity channels.
Fig. \ref{f.HV.CO10.RedBlues} shows CO(1--0) maps integrated over \about530 \kms-wide velocity ranges
offset by about 220--750 \kms\ from our fiducial velocity (\Vsys) of 2775 \kms.
The CO(1--0) flux in our 2\farcs7 resolution data (Fig. \ref{f.HV.CO10.RedBlues}b) is
8.9, 3.2, and 1.2 Jy \kms\
for the redshifted emission,
blueshifted emission associated with the northern nucleus, and
blueshifted emission associated with the southern nucleus,
respectively.
The primary-beam response was corrected for in these measurements, and the blueshifted emission about 15\arcsec\ east of
the nuclei was excluded because it is associated with an arm there and is detected only down to about $\Vsys -300$ \kms.
In total, this high-velocity emission amounts to 1.3\% of the total CO(1--0) flux detected in the central 20\arcsec\ diameter aperture given in
\S \ref{s.obs.line.flux}.
The flux of CO(3--2) emission integrated over the same velocity ranges in our 0\farcs6 resolution data is
40, 11, and 21 Jy \kms\
for the redshifted emission,
blueshifted emission associated with the northern nucleus, and
blueshifted emission associated with the southern nucleus,
respectively.
In total, this high-velocity emission amounts to 1.2\% of the total CO(3--2) flux in the central 20\arcsec\ given in \S \ref{s.obs.line.flux}.
\subsubsection{High-Velocity Gas Mass}
\label{s.result.line.hv.mass}
The total mass of the high-velocity molecular gas is calculated to be
$\Mmol(223\, \kms \leq |V-\Vsys| \leq 752\, \kms) = 8.8\times10^7 \Xtwenty\, \Msol$
from the high-velocity CO(1--0) line flux.
The mass of high-velocity molecular gas associated with each nucleus is estimated to be
$6.3\times10^7 \Xtwenty\, \Msol$ for the northern nucleus and
$2.5\times10^7 \Xtwenty\, \Msol$ for the southern nucleus
under the assumption
that the redshifted high-velocity gas is composed of gas associated with the two nuclei with the same fractions
as in the blueshifted high-velocity gas (i.e., 72\% to the north and 28\% to the south).
We use the ratio in our CO(1--0) data rather than the ratio of ${\rm N}:{\rm S}=34:66$ in our CO(3--2) data
because the former data suffer less from missing flux.
Also, CO(1--0) is less affected by any excitation issues in the high-velocity gas.
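The mass assignment above is simple proportional bookkeeping, which can be sketched with the CO(1--0) fluxes from \S\ref{s.result.line.hv.flux}:

```python
def split_hv_mass(m_total, f_red, f_blue_n, f_blue_s):
    """Assign a total high-velocity gas mass to the two nuclei, assuming
    the redshifted flux divides between them in the same proportion as
    the blueshifted flux does (as adopted in the text)."""
    f_tot = f_red + f_blue_n + f_blue_s
    frac_n = f_blue_n / (f_blue_n + f_blue_s)  # 0.727 for 3.2 : 1.2
    flux_n = f_blue_n + frac_n * f_red
    flux_s = f_blue_s + (1.0 - frac_n) * f_red
    return m_total * flux_n / f_tot, m_total * flux_s / f_tot

# CO(1-0) fluxes (Jy km/s): red 8.9, blue(N) 3.2, blue(S) 1.2.
# split_hv_mass(8.8e7, 8.9, 3.2, 1.2) gives about (6.4e7, 2.4e7) Msun,
# matching the quoted 6.3e7 and 2.5e7 Msun to rounding.
```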
We assume, unless otherwise noted,
that the \Xtwenty\ value is unity for the high-velocity gas as we did for the bulk CO emission of NGC 3256.
This is partly motivated by our observation
that the fraction of the high-velocity flux with respect to the total flux is almost the same in CO(1--0) and CO(3--2).
This can be the case, though not uniquely so,
if the physical properties of the high-velocity gas and those of the gas at lower velocities are
not drastically different from each other.
Our choice also reflects that we have insufficient information to specify a different value.
A possible alternative choice of \Xco\ for the high-velocity gas, which we suggest below to be high-velocity outflows,
is the one for optically thin CO emission.
This is possible because the peak CO brightness temperature of the high-velocity emission is only
on the order of 0.5 K for CO(1--0) and 1.5 K for CO(3--2) in Figs. \ref{f.chans.CO10.br} and \ref{f.chans.CO32.br}.
In beam-matched data of 1\farcs6 $\times$ 1\farcs2 resolution,
the CO(3--2) to CO(1--0) ratios of peak brightness temperatures around $\Vsys \pm 200$ \kms\ are mostly $\sim1 \pm0.5$
for the high-velocity emission associated with the southern nucleus.
Taken at face value, i.e., assuming that missing flux has little effect on this CO(3--2) ratio because the high-velocity gas
around the southern nucleus is relatively compact, the ratio can be due not only to optically thick emission from thermalized CO
but also to optically thin CO emission.
In the latter case, the ratio corresponds to the excitation temperature of $12 \pm 3$ K in LTE.
If the CO excitation is not in LTE then CO(1--0) can have a higher excitation temperature than this but
the CO(3--2) excitation temperature must be much lower than that.
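The LTE excitation temperature quoted above can be recovered from the optically thin line ratio. This sketch uses standard CO spectroscopic constants (Einstein $A$ coefficients, rest frequencies, and upper-level energies, e.g., from the LAMDA database) and solves for $T_{\rm ex}$ by bisection:

```python
import math

def r31_thin_lte(t_ex):
    """CO(3-2)/(1-0) integrated-intensity ratio (K km/s units) for
    optically thin emission in LTE, from W proportional to
    A_ul N_u / nu^2 with N_u = g_u exp(-E_u/kT) / Q."""
    a32, a10 = 2.497e-6, 7.203e-8     # Einstein A (s^-1)
    nu32, nu10 = 345.796, 115.271     # rest frequencies (GHz)
    g3, g1 = 7.0, 3.0                 # 2J+1 degeneracies
    de = 33.19 - 5.53                 # E_u(3) - E_u(1) in K
    return (a32 / a10) * (nu10 / nu32) ** 2 * (g3 / g1) * math.exp(-de / t_ex)

def tex_from_r31(ratio, lo=3.0, hi=300.0):
    """Invert r31_thin_lte by bisection; the ratio rises monotonically
    with temperature over this range."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if r31_thin_lte(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# tex_from_r31(1.0) is about 12.6 K, consistent with the quoted 12 +- 3 K.
```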
The conversion factor for optically thin CO(1--0) emission is on the order of $\Xtwenty = 0.1$
in both the LTE and non-LTE cases for a CO abundance of $[{\rm CO}/\HH]=10^{-4}$.
The non-LTE conversion factor for optically thin CO(1--0) depends little on gas temperature above \about15 K
provided that CO molecules are well excited up to J=2 but not to J=3 and beyond;
in such a case the CO level populations are determined by the statistical weights of the levels.
As the observed line ratio is consistent with multiple gas conditions
we keep in mind that the conversion factor for the high-velocity gas can be an order of magnitude lower
than our fiducial value of unity.
\subsubsection{Spectra}
\label{s.result.line.hv.spectra}
Figure~\ref{f.spectra.nuclei} shows spectra at the two nuclei.
Each line is fitted with a Gaussian to help highlight the high-velocity wings,
i.e., emission in excess of the Gaussian fit at large offset velocities.
At both nuclei, line centroids are within about 10 \kms\ from our fiducial velocity of 2775 \kms\
and line widths are about 150--200 \kms\ in FWHM.
The CO(1--0) data have the highest signal-to-noise ratio and show a clear redshifted wing at \about3\% level
in the spectrum toward the northern nucleus. There are also high-velocity wings at the level of 1\% of the peak
or less in both redshifted and blueshifted velocities.
The full width at zero intensity of the emission is about 1600 \kms\ toward the northern nucleus.
The southern nucleus also has blue and red-shifted wings visible at the level of 1--2\% of the main line; the
red wing is again stronger.
The fraction of the wing component to the main line is probably larger when the wing features are interpolated
to the systemic velocity.
The full width at zero intensity (FWZI) for the southern nucleus is about 1200 \kms.
Our observations in the spectra are consistent with what we saw above in channel maps (Fig. \ref{f.HVchans.CO10.tp.VHV}) in that
the full line widths exceed 1000 \kms, the line is wider toward the northern nucleus, and the high-velocity emission
is stronger in redshifted velocities at around $\left| V-\Vsys \right| = 300$ \kms.
The CO(3--2) spectra also show the high-velocity wings.
While more emission is in the redshifted wing in the aperture containing the northern nucleus,
fainter and broader wings than this are seen in both blue and redshifted velocities toward both nuclei.
The full width of the CO(3--2) line is about 1000 \kms\ (i.e., $\pm500$ \kms) in our data, consistent with
our observation in channel maps (Fig.~\ref{f.HVchans.CO32.RedBlueGray}a).
Line full width depends on sensitivity because noise can mask faint and wide high-velocity emission.
The smaller full line width in CO(3--2) than in CO(1--0) must be partly due to the lower signal-to-noise ratio (S/N)
in the former data.
In \HCOplus(4--3) we did not detect high-velocity emission.
This may be mostly because \HCOplus(4--3) has the lowest S/N among the lines shown in Fig.~\ref{f.spectra.nuclei}
and also because the J=4 excitation of \HCOplus\ requires a high critical density of $10^7$ \mbox{cm$^{-3}$}.
The \HCOplus\ line profiles have double peaks (or a dip near the line center) on both nuclei.
This is also seen in CO(3--2) toward the southern nucleus with a smaller 2\arcsec\ diameter aperture.
\subsubsection{CN in the High-Velocity Gas}
\label{s.result.line.hv.CN}
The CN(1--0) spectra in Fig.~\ref{f.spectra.nuclei} show the redshifted wing at about the same level as in the CO(1--0) data.
Although each CN line consists of a group of hyperfine lines, the redshifted wing is not due to this hyperfine structure,
because if it were, the redshifted emission in Fig.~\ref{f.HVchans.CN10.tp.VHV} would have peaked on the nuclei
and not between them.
Thus both CN and CO red wings are probably from the same high-velocity gas.
The flux ratio of the two CN lines is 1.8 on both nuclei,
only slightly less than the ratio of 2 from optically thin lines \citep{Turner75}.
The CN emission is therefore mostly optically thin; the opacity of the brighter line is calculated to be 0.4 from
$(1-e^{-\tau})/(1-e^{-\tau/2}) = 1.8$.
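The opacity estimate from the spin-group ratio can be reproduced numerically; a sketch that inverts $(1-e^{-\tau})/(1-e^{-\tau/2})$ by bisection:

```python
import math

def hyperfine_ratio(tau):
    """Intensity ratio of the two CN(1-0) spin groups when the brighter
    one has opacity tau and the fainter tau/2; the optically thin
    limit of the ratio is 2."""
    return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / 2.0))

def tau_from_ratio(ratio, lo=1e-6, hi=50.0):
    """Invert the ratio by bisection; the ratio falls monotonically
    from 2 toward 1 as tau grows."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if hyperfine_ratio(mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hyperfine_ratio(0.4) is about 1.82, and tau_from_ratio(1.8) is about
# 0.45, so a measured ratio of 1.8 corresponds to tau ~ 0.4-0.45,
# consistent with the quoted value to rounding.
```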
Under the safe assumption that the low-velocity CO(1--0) with peak \Tb\ $\gtrsim$ 10 K
is optically thick and has a higher optical depth than the high-velocity CO emission, the fraction of the high-velocity emission
relative to the main low-velocity component should be larger in CN than in CO after the CO opacity correction.
This suggests enhanced CN abundance or excitation in the high velocity gas.
If the CN enhancement is due solely to collisional excitation then the high-velocity gas is denser than the low-velocity gas
because the critical density for CN(1--0) is $10^6$ \mbox{cm$^{-3}$}\ and is $10^3$ times higher than that for CO(1--0).
This CN detection as well as enhancement in the high-velocity gas is noteworthy because the line has not been detected
in galactic molecular outflows before.
\subsubsection{Robustness of the Detection}
We regard our detection of these high-velocity emission components as robust for the following reasons.
Firstly,
the faint and broad wing emission cannot be errors in continuum subtraction,
because continuum in each channel is only at the levels of 30$\sigma$ and 20$\sigma$ in the CO(1--0) and (3--2) data,
respectively, while our passband calibration is much more accurate than 1/30 = 3\%
as seen in the flatness of our spectra sufficiently away from the line in Fig. \ref{f.spectra.nuclei}.
Moreover, much of the high-velocity emission peaks slightly offset from the nuclei, where any passband error
would produce the largest artifacts.
Secondly,
it is unlikely that the high-velocity emission is due to line blending, i.e., to lines other than the target line,
in part because the high-velocity emission is offset from the nuclei where all lines peak,
and also because of the lack of molecules that could plausibly contribute to the observed emission.
Individual line wings of a single CO transition sometimes have possible alternative sources,
such as \HCthreeN(38--37) and \HthirteenCN(4--3) on the red (i.e., low-frequency) side of CO(3--2).
However, these molecules cannot explain the redshifted emission of CO(2--1) or CO(1--0)
because their lower transitions are not adjacent to these CO transitions.
In addition, the peak of the emission on the red side of CO(3--2) is not exactly
at the redshifted frequencies of \HCthreeN(38--37) and \HthirteenCN(4--3).
Fig. \ref{f.spec.noblend} shows this in the spectrum sampled at the midpoint of the two nuclei.
The peak of the redshifted component clearly does not coincide with the expected frequencies of \HCthreeN(38--37) and \HthirteenCN(4--3).
Therefore their contribution to the high-velocity emission should be small, if any.
Finally,
the line wings are unlikely due to the response pattern of the spectral correlator to a strong narrow line,
as this would appear symmetric about the line center.
\subsubsection{Position-Velocity Diagrams}
Figure \ref{f.COpv} shows CO position-velocity diagrams across the nuclei.
The upper panels are for CO(1--0) and the lower for CO(3--2).
The three columns are, from left to right,
p.a.=270\arcdeg\ cuts through the N nucleus,
p.a.=270\arcdeg\ cuts through the S nucleus,
and
p.a.=0\arcdeg\ cuts through the midpoint of the N and S nuclei.
The position angle for the northern nucleus is along the kinematical major axis at the center of the northern circumnuclear disk.
The p.a. for the southern nucleus was chosen because there is a structure extending across the nucleus at p.a.$\approx$90\arcdeg.
The PV diagrams along p.a.=0\arcdeg\ are for the high velocity emission that showed symmetrical velocity structure
around the southern nucleus approximately along this axis.
Rotation of the circumnuclear disk is evident around the northern nucleus in the panels (a) and (d).
The high-velocity emission at the northern nucleus is also clear in these PV diagrams.
Gas motion around the southern nucleus is more complex, in particular in the CO(3--2) data in panel (e), but
an overall velocity gradient within about 5\arcsec\ from the nucleus and presence of high velocity gas at the nucleus are
consistent with what we see in the channel maps.
The most interesting of the PV diagrams are the cuts in the north-south direction across the two nuclei, panels (c) and (f).
There we clearly see two components of high velocity gas.
One is on the northern nucleus and shows little positional shift with velocity.
The other is symmetric about the southern nucleus, blueshifted to the south (left in the plot) and redshifted to the north
within about 4\arcsec\ from the southern nucleus.
The terminal velocity increases with distance from the southern nucleus up to an offset of about 2\arcsec.
It is also notable that the range of emission velocities at each position is large, about 500 \kms, across this region of
a north-south velocity gradient.
\subsection{Comparison with Other Observations}
\label{s.result.comparison}
\subsubsection{HST Optical Images}
\label{s.result.comparison.HST}
Figures \ref{f.hst.L} and \ref{f.hst.M} compare our CO images with multi-color HST images of NGC 3256.
The merger has many dark lanes, particularly in its southern part, as shown in Fig.~\ref{f.hst.L}(a).
Comparison of Fig.~\ref{f.hst.L}(a) with the color excess image in Fig.~\ref{f.hst.L}(b) shows that
the dark lanes are generally redder in color than their adjacent areas.
This suggests that the dark lanes are due to higher dust extinction.
As seen in Fig.~\ref{f.hst.L}(c), there is an overall match between these dark lanes in the optical and the CO(1--0) distribution
shown in Fig.~\ref{f.hst.L}(d).
This is what is expected when the dark lanes are due to obscuration by the interstellar dust.
In addition, there is an interesting match between the dark lanes (i.e., optical color excess) and the CO line widths
as shown in Fig.~\ref{f.hst.L}(b).
Both are enhanced in a roughly triangular area on the south-western side of the binary nucleus.
The similarities between dark (dust) lanes and CO emission are also seen in our
higher resolution CO(3--2) data in Fig. \ref{f.hst.M} (c).
At this higher resolution, however, it becomes evident that the optical color excess (i.e., reddening) and the
CO integrated intensity are not strictly proportional.
Also, the match is poor between the regions of high color excess and the regions of large line width in
the vicinity of the two nuclei (see Fig. \ref{f.maps.CO102132} for our CO(3--2) 2nd moment map),
as was already the case in our CO(1--0) comparison in Fig.~\ref{f.hst.L}(b).
\subsubsection{VLA Radio Continuum Images}
\label{s.result.comparison.VLA}
There are remarkable correlations between our ALMA data and VLA radio continuum data in \citet{Neff03}.
The spatial distribution of 6 and 3.6 cm continuum in their Fig.~1 matches quite well with
that of the sub/millimeter continuum shown in our Fig. \ref{f.contmaps}.
The agreement includes not only the two nuclei and the overall shape of the diffuse emission
but also a short arc (arm) about 5\arcsec\ northeast of the northern nucleus,
a spot about 20\arcsec\ west of the northern nucleus,
the bridge-like arm emanating from the northern nuclear disk to south,
and
a linear feature across the southern nucleus.
The 3.6 cm image also shows a faint spur that emanates from the southern nucleus to the south and curves slightly toward the west.
It has a counterpart in our CO data.
The blueshifted emission in the $|V - \Vsys| = 200$ \kms\ channel of Fig. \ref{f.HVchans.CO32.RedBlueGray} (b)
coincides with the radio spur.
Figure \ref{f.vla-almaHV} compares a higher resolution 3.6 cm image, in Fig. 2 of \citet{Neff03}, with our ALMA data.
The 3.6 cm continuum in black contours and 860 \micron\ continuum in gray scale again show very good correlation.
In the radio emission there is a pair of narrow spurs that emanate from the southern nucleus to north and south;
the one to the south is probably a part of the spur mentioned above.
Although we did not detect these features in submillimeter continuum, our CO data show counterparts.
The highest-velocity CO(3--2) emission, shown in red and blue contours, is located at the tips of these
radio continuum spurs.
We discuss these observations in \S \ref{s.outflow.southern.driver}.
\subsubsection{Spitzer Infrared Images}
\label{s.result.comparison.Spitzer}
Figure~\ref{f.spitzer} shows archival infrared images of NGC 3256
taken with the Spitzer Space Telescope Infrared Array Camera (Program ID. 32).
We show in each panel the same area as in Fig.~\ref{f.contmaps} (a) for our 2.8 mm continuum
and use the same linear intensity scale. The infrared images have 1\arcsec--2\arcsec\ resolutions.
The infrared and millimeter continuum distributions are similar
not only at the two bright nuclei but also in extended features around them,
including the spiral arm to the north of the nuclei,
two bright areas about 5\arcsec\ and 20\arcsec\ east of the northern nucleus,
and a linear feature that protrudes west from the central region by about 15\arcsec\ at about the latitude of the southern nucleus.
The northern nucleus is brighter than the southern nucleus in the Spitzer images
(except at 4.5 \micron\ not shown here) and even more so at 11.5 \micron\ \citep{Lira08}.
This is also the case in millimeter emission.
The northern nucleus has an integrated flux density comparable to or larger than that of the southern nucleus in 1\arcsec--3\arcsec\ apertures
(see Table \ref{t.contFluxSpix}), although the southern nucleus is more compact and has a peak brightness comparable
to or higher than that of the northern nucleus at $\lesssim2\arcsec$ resolutions.
It is very likely that the northern nucleus has larger flux densities also between 11.5 \micron\ and 860 \micron\
and hence a larger bolometric luminosity than the southern nucleus.
\section{Merger Configuration}
\label{s.configuration}
We suggest the merger configuration in Fig. \ref{f.illust} for the reasons given in this section.
There are two nuclei as in the model in \citest{Sakamoto06}.
Their identification as the nuclei of two merging galaxies is strongly supported
by the peaks of line and continuum emission at the two dominant radio sources
and by our detection of large velocity gradients there
(Figs.~\ref{f.contmaps} and \ref{f.maps.CO32nuc}).
Parameters estimated in this section are summarized in Table \ref{t.4418measured.param}.
\subsection{NGC 3256N}
The northern nucleus has a nuclear gas disk that has a low inclination and nearly circular rotation,
showing a clear butterfly pattern in the velocity field (Fig. \ref{f.maps.CO32nuc}).
We measured in \S \ref{s.result.line.velocity_field} that the disk major axis is at ${\rm p.a.(N)} \approx 75\arcdeg$
and inclination is $i_{\rm N} \approx 30\arcdeg$.
The molecular spiral arms around the northern nucleus, those shown in gray in Fig. \ref{f.illust},
must also be nearly face-on based on their morphology.
Since they emanate from the northern nuclear disk or its vicinity, the arms most likely belong to the northern galaxy
and are coplanar with the northern nuclear disk.
The near side of the northern nuclear disk is then its southeastern side assuming
that the molecular spiral arms are trailing.
\subsection{NGC 3256S}
\label{s.configuration.n3256s}
The southern nucleus must be in front of the northern galaxy disk and nearly edge-on.
This deeply obscured nucleus cannot be much behind the northern galaxy disk because, if it were, we would have seen
the foreground northern galaxy disk at its location.
The extinction peak toward the southern nucleus must be due mostly to the southern galaxy itself, because even if
the southern nucleus were slightly behind the northern galaxy disk, the surface gas density of that disk, and hence its extinction, would not peak
at the radius of the southern nucleus.
A nearly edge-on configuration is therefore suggested for the obscured southern nucleus and the southern galaxy.
The high inclination is supported by the shape of the region that has both large optical extinction (reddening)
and large CO line width in Fig.~\ref{f.hst.L}(b).
This region is extended in the east-west direction across the southern nucleus
as expected when the foreground (part of the) southern galaxy has
a high inclination and a major axis at ${\rm p.a.(S)} \approx 90\arcdeg$.
Such a configuration also explains the distribution of large line widths
as due to the overlap of the two galaxies
and also due to the nearly edge-on geometry of the southern disk.
This argument disfavors the possibility that the southern nucleus is on or slightly behind the northern disk
because we do not see any high velocity-dispersion region with little reddening
(i.e., gas behind the northern galaxy disk) in the central few kpc of the southern galaxy.
The further outskirts of the southern galaxy appear to be already strongly disturbed and to be leaving their original orbital plane,
judging from the large scale distribution of color excess in Fig.~\ref{f.hst.L}(b).
Closer to the center, there is a bar-like distribution of molecular gas and dust
across the southern nucleus (Fig. \ref{f.maps.CO32nuc}, HCN in Fig. \ref{f.maps.nonCO}, and Figs. \ref{f.contmaps} (b) and (c)).
This is the edge-on southern nuclear disk in our interpretation.
The lack of a clear butterfly pattern in its velocity field (Fig. \ref{f.maps.CO32nuc}) is consistent with
the proposed large inclination.
For the reasons given in \S \ref{s.outflow.southern}, the near side of the nearly edge-on southern nuclear disk
must be its northern side and the disk inclination is constrained to be
$70\arcdeg < i_{\rm S} \lesssim 85\arcdeg$.
\subsection{Mass Ratio}
\label{s.configuration.massRatio}
The northern nucleus is probably a few times more massive than the southern nucleus.
The ratio of the CO(3--2) line width (FWHM) at the northern nucleus to that at the southern nucleus is 0.77
for the 4\arcsec\ aperture used in Fig. \ref{f.spectra.nuclei}.
This ratio, after correction for the inclinations, reflects the mass ratio of the nuclei at 0.7 kpc scale
because the broad emission wings at the nuclei are too faint to affect FWHM.
For $i_{\rm N} \approx 30\arcdeg$ and $i_{\rm S} \approx 80\arcdeg$ and ignoring the effect of any difference in gas radial distributions,
the mass ratio $M_{\rm N}/M_{\rm S}$ is 2.3 for the line width ratio of 0.77;
the mass ratio is between 2.1 and 2.4 for $70\arcdeg < i_{\rm S} < 90\arcdeg$.
The ratio of line FWHMs appears to increase to about 1 as the sampling area increases; the mass ratio
would be 3.9 for the FWHM ratio of 1 and $i_{\rm S}$ of 80\arcdeg.
This trend can be due to different degrees of mass concentration between the two nuclei
but can also be due to greater contamination of the southern nucleus by the northern galaxy disk.
With these uncertainties in mind
we suggest $\Mdyn({\rm N})/\Mdyn({\rm S}) \sim 2.5$ with an error up to $\pm 1$ for 1 kpc diameters.
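As a sketch of the arithmetic (assuming flat rotation and identical radial gas distributions, and taking $i_{\rm S} = 80\arcdeg$), the inclination-corrected line-width ratio gives
\begin{equation}
\frac{M_{\rm N}}{M_{\rm S}} \approx \left( \frac{\Delta V_{\rm N} / \sin i_{\rm N}}{\Delta V_{\rm S} / \sin i_{\rm S}} \right)^{2}
= 0.77^{2} \left( \frac{\sin 80^{\circ}}{\sin 30^{\circ}} \right)^{2} \approx 2.3 .
\end{equation}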
\subsection{Merger Orbit}
The orbital plane of the two nuclei is probably close to the disk plane of the northern galaxy.
In other words, not only must the southern nucleus be in front of the northern galaxy as argued above,
but it is also probably near the northern galaxy plane.
This is deduced from two observations.
One is that the most prominent molecular arm emanating from the northern nuclear disk extends in the direction of the
southern nucleus as if bridging the two nuclei.
The other is that the molecular arms at larger radii wrap around the two nuclei;
the most notable is the arm starting from the northern nuclear disk and running east of the binary nuclei
by almost 180\arcdeg\ in our CO(3--2) map in Fig. \ref{f.maps.CO102132}.
These features are expected if the southern galaxy has been close to the disk plane of the northern galaxy, exerting
its gravitational force on the disk gas in a direction nearly within the disk plane.
Because the northern galaxy was estimated to be nearly face-on, the merger orbital plane is also close to face-on.
The southern galaxy has a high inclination angle with respect to the orbital plane in the configuration we suggested above.
This high inclination is consistent with much of the gas in the outer disk of the southern galaxy leaving its original galactic plane
because for the southern galaxy the perturber is on a nearly polar orbit.
Direct contact of the gas in the two disks is another plausible reason for the disturbance although this works for both disks.
Figure \ref{f.hst.L} (b) and (c) show a one-arm reddening and CO feature that starts at about 30\arcsec\ east of the
two nuclei and spirals into the southern nucleus after a 270\arcdeg\ clockwise turn.
This may well be material stripped from the southern galaxy tracing its past trajectory
around the center of mass near the northern nucleus.
If this is the case, the southern nucleus must be currently moving from west to east (right to left on our maps).
The right panel of Fig. \ref{f.illust} shows the two nuclear disks viewed from above the merger orbital plane.
As in the sky projection in Fig. \ref{f.illust} (left), the northern nuclear disk is close to face-on and the southern nuclear disk
is close to edge-on, because the orbital plane is estimated to have a low inclination ($\lesssim$30\arcdeg) with respect to our sight line.
However, in our model the apparent sense of rotation of the southern nuclear disk is opposite between the sky projection
and the orbital-plane projection.
It is certainly possible in our model, with the southern nucleus in front of the northern disk,
that both of the two nuclear disks have prograde rotation with respect to the orbital motion of the two nuclei.
Our argument above does not establish whether this is indeed the case, but the prograde--prograde
configuration has been suggested to explain the long tidal tails seen in the optical and \ion{H}{1} \citep{Toomre72,English03}.
\subsection{Merged Gas Disk}
The gas presumably stripped from the southern galaxy and the gas from the northern galaxy appear to be
forming, from larger radii, a merged gas disk that is connected to the northern galaxy disk.
The overall CO(1--0) velocity field in Fig. \ref{f.maps.CO102132} is largely consistent with that of the northern nuclear disk
regarding the kinematical major axis and an apparently low inclination.
The stripped gas that we inferred above from the color index image does not stand out in our CO mean velocity field.
In our proposed configuration, this is mainly because the gas on the large-scale is settling to the merger orbital plane
that is close to the plane of the northern galaxy.
Such a merged gas disk was proposed in \citest{Sakamoto06}.
It is expected to form because gas, unlike stars, is not collisionless and hence cannot remain on the original disks
at the larger radii where the two disks have already collided with each other.
The small visible perturbation in the observed CO velocity field could also be because the northern galaxy,
whose nucleus we found to be more massive than the southern one,
had a dominant fraction of the gas in the system.
However, the presence of two \ion{H}{1} tidal tails makes it unlikely that the large-scale gas disk comes only from the northern galaxy.
\subsection{Comparison with Other Estimates}
The configuration suggested above is consistent with what
\citet{English03} estimated from their \ion{H}{1} imaging of the merger.
They suggested on the basis of the wide \ion{H}{1} tidal tails that the merger orbital plane is almost face-on.
They further attributed the different shapes of the two tails to different inclinations of the progenitor galaxy disks
with respect to the orbital plane.
The spin of each galaxy was estimated to be prograde with respect to the binary orbital motion as mentioned above.
The consistency of these estimates by \citet{English03} from \ion{H}{1} observations
and ours from molecular gas and optical data adds credence to our model in Fig. \ref{f.illust}.
In addition, \citet{Trancho07} made a notable observation from their optical spectroscopy
of young star clusters that while the majority of the clusters follow the rotation of the
main (i.e., northern) gas disk, some clusters about 20\arcsec\ west of the southern nucleus do not.
They deduced that the former belong to the northern galaxy and the latter either belong to the
other (i.e., southern) galaxy or may have formed in tidal-tail gas falling back to the system.
The locations of the out-of-rotation clusters are consistent with them belonging to the nearly edge-on southern galaxy.
The dominance of the northern galaxy over the southern one in the number and motion of the clusters
is consistent with the presumably larger mass of the northern progenitor.
\subsection{\Mdyn\ and Gas-to-Dynamical Mass Ratios}
\label{s.configuration.Mdyn_MgasMdyn}
Here we calculate the dynamical masses of
the two nuclear disks and of an area encompassing the two nuclei,
and compare them with our gas mass estimates to see whether the gas masses are reasonable.
We estimate the dynamical mass of the northern nucleus to be on the order of
$\Mdyn(r_{\rm N} \leq {\rm 200\, pc}) \sim 4\times10^9$ \Msol\
using the line-of-sight rotational velocity of 150 \kms\ inferred from the CO position-velocity plot (Fig. \ref{f.COpv} d)
and $i_{\rm N} \approx 30\arcdeg$ measured above.
We also estimate the dynamical mass of the southern nucleus to be $\Mdyn(r_{\rm S} \leq {\rm 200\, pc}) \sim 2\times10^9$ \Msol\
using the line-of-sight rotational velocity of 200 \kms\ inferred from the CO position-velocity plot (Fig. \ref{f.COpv} e)
and $i_{\rm S}$ of 80\arcdeg.
The ratio between the two dynamical masses within 400 pc diameters is 2.2,
consistent with the ratio of $2.5\pm1$ in 1 kpc diameters estimated in \S \ref{s.configuration.massRatio}.
These dynamical masses have large uncertainties
because we cannot accurately measure the rotational terminal velocity of each nuclear disk in the PV diagrams
contaminated by the faint and broad line wings that we attribute to outflow in the next section.
We also crudely estimate the dynamical mass in the central 20\arcsec\ of the merger to be
$\Mdyn(r \leq {\rm 1.7\, kpc}) \sim 6\times10^{10}$ \Msol\ from a rotational line-of-sight velocity of about 200 \kms\ inferred from the
CO(1--0) channel maps (Fig. \ref{f.chans.CO10.br}) and an inclination of 30\arcdeg.
Although the high-velocity emission near the nuclei does not contaminate rotation at this large scale
this estimate still has a large uncertainty due to the assumed inclination and the possibility
that some gas at this radius may not be on a merged disk.
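These dynamical masses follow from the simple enclosed-mass relation for circular rotation (written here with the numbers for the northern nuclear disk):
\begin{equation}
\Mdyn(r) \approx \frac{r}{G} \left( \frac{v_{\rm los}}{\sin i} \right)^{2}
= \frac{200\,{\rm pc}}{G} \left( \frac{150\,{\rm km\,s^{-1}}}{\sin 30^{\circ}} \right)^{2}
\approx 4\times10^{9}\,\Msol .
\end{equation}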
The molecular gas masses for the same regions are estimated from CO(1--0) to be
$\Mmol(r_{\rm N} \leq {\rm 200\, pc}) \sim 3\times10^8$ \Msol,
$\Mmol(r_{\rm S} \leq {\rm 200\, pc}) \sim 2\times10^8$ \Msol\,
and
$\Mmol(r \leq {\rm 1.7\, kpc}) \sim 6\times10^9$ \Msol\
for $X_{20} = 1$.
The gas-to-dynamical mass ratios are therefore about 6\%, 12\%, and 9\% for the northern nuclear disk, the southern nuclear disk, and
the merger in its central 3.4 kpc, respectively.
These ratios inherit the uncertainties of the adopted \Xco\ and any of its spatial variation and
any error in the dynamical masses.
The reasonable gas mass fractions, on the order of 10\%, however, suggest
that the gas masses above are probably not far off.
The 0.5 dex uncertainty for the adopted $X_{20}=1$ seems reasonable for the bulk (though not all)
of molecular gas in the observed region.
\section{Two Outflows}
\label{s.twoOutflows}
We argue from our observations of high-velocity molecular emission
(in particular Figs. \ref{f.HVchans.CO10.tp.VHV}, \ref{f.HVchans.CO32.RedBlueGray}, and \ref{f.HV.CO10.RedBlues})
that each of the two nuclei has its own bipolar molecular outflow.
In our model illustrated in Fig. \ref{f.illust},
activities in the northern nucleus and its low-inclination nuclear gas disk
are driving a bipolar outflow with a wide opening angle in the direction perpendicular to the northern nuclear disk.
This causes the high-velocity molecular line emission observed around the northern nucleus.
The southern nucleus drives a more collimated bipolar outflow perpendicular to the southern nuclear disk, i.e.,
in the north-south direction on the sky.
The high velocity CO emission along ${\rm p.a.} \sim 0\arcdeg$ and 180\arcdeg\ is due to this outflow.
The redshifted gas of the two outflows overlaps on the sky between the two nuclei,
causing the peak of redshifted high velocity CO found in \citest{Sakamoto06}.
Outflow parameters derived in this section are summarized in Table~\ref{t.4418measured.param}.
Before elaborating on the two outflows we briefly mention two conceivable alternatives
for the southern outflow and why we do not favor them.
An alternative interpretation of the gas motion around the southern nucleus is that the north--south velocity gradient is
due to rotation around the nucleus.
If so, the projected rotation axis of this hypothetical southern nuclear disk is along ${\rm p.a.} \approx 90\arcdeg$.
Then the continuum and line emission features along this p.a. ($\approx$ 90\arcdeg),
e.g., in Figs. \ref{f.contmaps} (b), (c), \ref{f.maps.CO32nuc} (a), and the leftmost channels in Fig. \ref{f.HVchans.CO32.RedBlueGray},
would be polar structures, plausibly a bipolar outflow.
The optical color-excess region across the southern nucleus would also be a polar structure for the southern galaxy.
This model is not favored because it makes the bipolar structures much larger than the base nuclear disk.
Another alternative interpretation of the high-velocity gas around the southern nucleus is that it may be
a merger-driven tidal feature rather than a bipolar outflow.
The tidal force exerted on the southern nucleus by the northern galaxy is along the north-south direction, i.e., the major axis
direction of the high-velocity gas.
We note, however, that the blueshifted high-velocity gas comes out almost directly from the southern nucleus
in Fig.~\ref{f.HVchans.CO32.RedBlueGray} (b).
If the tidal force were strong enough to strip gas in the nucleus from such a small radius
then the gas elongated in the east-west direction across the southern nucleus (Fig. \ref{f.maps.CO32nuc} a) would not be there.
Also, we estimated in the previous section that the merger orbital plane is close to face on.
Since the tidal force vector is along the orbital plane the force cannot give large line-of-sight velocities to the tidally stripped gas.
We therefore regard this alternative as equally unlikely.
\subsection{Northern Outflow: Uncollimated Bipolar Wind}
\label{s.outflow.northern}
\subsubsection{Evidence, Geometry, Driving Mechanism}
The following observations in \S \ref{s.result.highV}
constitute the evidence for a bipolar outflow with a wide opening angle from the northern nuclear disk.
CO(1--0) emission is detected ($\gtrsim 4\sigma$) around this nucleus up to $|\Delta V| = 650$ \kms\ from systemic
in Fig.~\ref{f.HVchans.CO10.tp.VHV} and the full extent of the line to zero intensity is about 1600 \kms\ (Fig. \ref{f.spectra.nuclei}).
The high velocity gas in CO(3--2) is detected on the northern nuclear disk
with its blueshifted emission slightly shifted to northwest and its redshifted counterpart biased toward southeast
in Fig.~\ref{f.HVchans.CO32.RedBlueGray} (a).
These spatial shifts of blueshifted and redshifted high-velocity gas are
along the minor axis of the northern nuclear disk
and the shift of the blueshifted gas is toward the far-side of the nuclear disk.
These observations are consistent with the high-velocity emission being an outflow from the nucleus in the direction
perpendicular to the northern nuclear disk (see Fig. \ref{f.illust}).
Since the northern nuclear disk is nearly face-on the outflow axis is close to our line of sight.
This pole-on viewing angle is consistent with the small spatial offset between the blueshifted and redshifted emission.
This northern outflow must be extended, i.e., must have a wide opening angle, because it is better detected at lower resolution
(in Figs. \ref{f.HVchans.CO32.RedBlueGray} and \ref{f.HV.CO10.RedBlues}).
In particular, the blueshifted emission in Fig. \ref{f.HV.CO10.RedBlues}(a) directly shows that the high-velocity gas
that we attribute to outflows is more extended around the northern nucleus than around the southern nucleus.
We note that the large extent is another reason, in addition to the velocity gradient along the minor axis, why
the northern high-velocity gas is unlikely to be due to rotation.
If the high velocities were rotational the enclosed dynamical mass would be unrealistically large,
although we do not exclude a small fraction of rotational high-velocity gas very close to the dynamical center.
We also note why we model the high velocity gas as outflow rather than inflow.
It would be too much of a coincidence to have polar inflow from both sides of the northern nuclear disk at the same time.
The most plausible driver for the northern molecular outflow is starburst in and around the northern nuclear disk.
The current data do not suggest the outflow originates from a particular single point, such as an active galactic nucleus (AGN), within the nuclear disk.
\subsubsection{Northern Outflow Parameters}
\label{s.outflow.northern.parameter}
We estimate the outflow rate from the northern nucleus to be $\dot{M}_{N} \approx 60 \Xtwenty$ \Msol\ yr$^{-1}$.
For this we assumed the outflow axis to have the same inclination as the northern nuclear disk, i.e., $i_{\rm N,outflow} \approx i_{\rm N} \approx 30\arcdeg$.
The extent of the outflow along its outflow axis is estimated to be 0.8 kpc from this inclination and the 2\farcs4 offset between the
peak of the blueshifted emission and the northern nucleus in Fig. \ref{f.HV.CO10.RedBlues}(b).
The outflow velocity along the outflow axis is $650/\cos(30\arcdeg) = 750$ \kms\ for the largest velocity in Fig. \ref{f.HVchans.CO10.tp.VHV}.
The outflow timescale is therefore 1 Myr.
(This is not necessarily the age of the outflow because the extent of the outflowing molecular gas may be limited by
interaction with ambient gas,
dissociation of the molecules,
gravity of the galaxy,
and our sensitivity.)
Dividing the mass of the high velocity gas around the northern nucleus (\S\ref{s.result.line.hv.mass})
with this timescale gives the outflow rate above.
This outflow rate is a lower limit because it does not account for the mass that is in the outflow
but has lower line-of-sight velocities than the 224 \kms\ cutoff in our flux measurement for the high-velocity emission.
The kinetic luminosity of the outflow is on the order of
$L_{\rm kin, N} \about 4\times 10^{8} \Xtwenty\, \Lsol $ (=$2 \Xtwenty \times 10^{35}$ W)
where we use the mass of the northern high-velocity gas in \S\ref{s.result.line.hv.mass},
300 \kms\ for a characteristic outflow velocity (i.e., 260 \kms\ along our sightline), and the characteristic timescale of 1 Myr.
The lower velocity gas excluded from our outflow mass adds to the luminosity but less so than to the outflow rate.
The outflow kinetic luminosity is about $10\Xtwenty$\% of the mechanical luminosity, $\sim2\times 10^{36}$ W \citep{Leitherer99}, from
half of the star formation in NGC 3256 (25 \Msun\ yr$^{-1}$).
Thus the northern outflow can reasonably be driven by the starburst.
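For reference, the scalings behind these numbers are, with $M_{\rm HV,N}$ the mass of the northern high-velocity gas from \S\ref{s.result.line.hv.mass},
\begin{equation}
t_{\rm N} \approx \frac{l_{\rm N}}{v_{\rm N}} = \frac{0.8\ {\rm kpc}}{750\ {\rm km\,s^{-1}}} \approx 1\ {\rm Myr}, \qquad
\dot{M}_{N} \approx \frac{M_{\rm HV,N}}{t_{\rm N}}, \qquad
L_{\rm kin, N} \approx \frac{M_{\rm HV,N}\, v_{\rm char}^{2}}{2\, t_{\rm N}},
\end{equation}
where $v_{\rm char} = 300$ \kms\ is the characteristic outflow velocity adopted above.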
The gas depletion time from the northern nucleus, a 300 pc diameter region centered at the nucleus,
is calculated to be $3\chi_{N}$ Myr on the basis of our observations.
The parameter $\chi_{N}$ is the ratio of the CO-to-\HH\ conversion factor
for the nuclear disk to that for the high-velocity gas, i.e., $\Xco({\rm nucleus})/\Xco({\rm outflow})$,
for the northern nucleus.
It is unity under our default assumption but can be \about10 if the outflow CO emission is optically thin.
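Written out, with both the nuclear gas mass and the outflow rate evaluated for the default $X_{20}=1$, the depletion time is
\begin{equation}
t_{\rm dep, N} = \chi_{N}\, \left[ \frac{\Mmol({\rm nucleus})}{\dot{M}_{N}} \right]_{X_{20}=1} \approx 3\,\chi_{N}\ {\rm Myr},
\end{equation}
so that it is independent of the absolute conversion factor and scales only with $\chi_{N}$.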
\subsubsection{Comparison with Previous Outflow Observations}
Outflow of ISM around the northern nucleus has been reported and the parameters measured in several previous works
besides our own detection of high-velocity molecular gas in \citest{Sakamoto06}.
\citet{Scarrott96} found with optical imaging polarimetry a dust reflection nebula extending out to 7 kpc (40\arcsec) from the galactic center
and attributed it to dust entrained to the halo by a starburst-driven superwind.
\citet{Moran99} performed optical slit spectroscopy across NGC 3256N and
found LINER-like emission line ratios off the nucleus (up to 30\arcsec\ from the center) coupled
with large line widths (FWHM up to 400 \kms).
They attributed these to shock-induced kinematics and ionization and
concluded the presence of a starburst-driven superwind.
\citet{Heckman00} detected Na~D absorption lines of 550 \kms\ width and 309 \kms\ blueshift
and concluded the presence of an outflowing superwind.
\citet{Lipari00} found blue wings of \Halpha\ and [\ion{N}{2}] lines in their spectroscopy toward the northern nucleus
and deduced an outflow with a velocity of \about350 \kms\ and line width \about130 \kms.
Notably, the minor axis of their outflow at ${\rm p.a.} \approx 70\arcdeg$ agrees with that of our molecular outflow
and so does their wide outflow opening angle (140\arcdeg).
\citet{Leitherer13} detected in their UV spectroscopy blueshifted line absorption of C and Si.
They detected three velocity components at $-126, -447$, and $-867$ \kms\ with the bulk velocity of $-461$ \kms\
at the position of their observations 2\arcsec\ northeast of the northern nucleus.
Our molecular outflow from the northern nucleus agrees with these observations regarding the
magnitude of the outflow velocity, outflow direction and opening angle, and that the outflow has a large spatial extent.
Therefore previous observations reporting an outflow/superwind in NGC 3256 are probably observations of
various aspects of this northern outflow,
although a minor contribution from the southern outflow is likely in \citest{Sakamoto06}.
\subsection{Southern Outflow: Molecular Bipolar Jet}
\label{s.outflow.southern}
\subsubsection{Morphology}
\label{s.outflow.southern.morph}
The southern bipolar outflow is clearly seen as bisymmetric high-velocity emission around the southern nucleus.
It is along a ${\rm p.a.} \sim 0\arcdeg$ and is redshifted to the north and blueshifted to the south of the nucleus
(Fig. \ref{f.HVchans.CO32.RedBlueGray}).
The outflow axis projected onto the sky is orthogonal to the nearly edge-on southern nuclear disk.
We therefore assume that the outflow is along the rotation axis of the southern nuclear disk.
The near side of the southern nuclear disk is then estimated to be its northern side
because the outflow toward us (i.e., blueshifted outflow) is on the south of the nucleus (see Fig. \ref{f.illust}).
The southern outflow appears highly collimated.
It has a narrow base at the southern nuclear disk and is detected to
a projected distance of \about4\arcsec\ (0.7 kpc) from the southern nucleus
(Figs.~\ref{f.HVchans.CO32.RedBlueGray} and \ref{f.chans.CO32.br}).
Its length-to-width ratio is about 5 in Fig.~\ref{f.HVchans.CO32.RedBlueGray}.
This ratio suggests an opening angle of about 20\arcdeg\ for an edge-on cone (i.e., the flow is within $10\arcdeg$ from its central axis).
Because the outflow is well collimated along its axis to about 1 kpc from its origin,
we can reasonably call it a bipolar molecular jet.
Looking at details, the blueshifted outflow gradually curves toward west as it goes further from the nucleus (\S \ref{s.result.highV.channelMaps}).
In our model, this is most likely due to ram pressure because the southern nucleus
is moving from west to east with respect to the northern galaxy, as inferred in \S \ref{s.configuration}.
Similar curvature is unclear in the redshifted outflow to the north in Fig. \ref{f.HVchans.CO32.RedBlueGray}
although blueshifted emission to the north of the southern nucleus in the $-139, -113$, and $-87$ \kms\ channels
in Fig. \ref{f.chans.CO32.br} shows the expected curvature.
The southern cone of the outflow is visible in the integrated intensity maps of CO(3--2), (2--1),
and barely in CO(1--0) in Fig. \ref{f.maps.CO102132}.
It is also hinted at in the integrated intensity map of CN(1--0, 3/2--1/2).
This feature in the integrated maps, in particular in CO(3--2), appears to have little contamination from non-outflowing ambient gas
because the feature in the channel maps, when visible, consistently maintains its spur-like morphology.
This is expected for the almost edge-on southern nuclear disk; little non-outflowing molecular gas is expected to be at high latitudes.
The southern cone of this outflow is also visible in the 2.1 \micron\ line image
of \HH\ 1--0 S(1) in \citet[their Fig. 2a]{Kotilainen96}.
\subsubsection{Velocity Structure}
The velocity structure of the southern outflow is noteworthy in that
the highest velocity emission at $|V - \Vsys| \sim 400$ \kms\ is offset from the nucleus in both the blueshifted and redshifted velocities
by about 1\farcs8 (310 pc on the sky) as we noted in \S \ref{s.result.highV.channelMaps}.
About the same offsets are seen at $|V - \Vsys| \sim 450$ \kms\ in CO(1--0).
This is also seen in the CO(3--2) position-velocity diagram along ${\rm p.a.} = 0\arcdeg$ (Fig.~\ref{f.COpv} f)
in which the terminal velocity increases with distance from the nucleus until this peak.
The symmetry in the blueshifted and redshifted emission suggests this to be systematic rather than a coincidence.
The simplest model is that the molecular outflow accelerates from the nucleus to this distance.
Alternatively, it may be that only the line-of-sight velocity increases along the outflow and peaks at $d \approx 1\farcs8$,
possibly because of a gradual increase of the outflow opening angle,
although at $d \approx 1\farcs8$ the high velocity emission is still compact ($< 1\arcsec$ in extent).
We regard acceleration along the outflow to $d \approx 1\farcs8$ as most likely
but do not rule out other possible causes for the observed velocity structure.
The outflow line-of-sight velocity decreases further out and the true outflow velocity may also do so.
\subsubsection{Inclination Correction}
We estimate the most likely inclination of the southern molecular jet (and the southern nuclear disk)
to be about 80\arcdeg\ with a range of possible values between about 70\arcdeg\ and 85\arcdeg.
We already deduced in \S \ref{s.configuration.n3256s} that the southern nuclear disk is nearly edge-on;
a conservative lower limit of the inclination is 70\arcdeg\ from the observations there.
A sign for a larger inclination is that there is blueshifted emission at the location of the redshifted cone
and redshifted emission at the location of the blueshifted cone
(e.g., at the $-113$ and $+69$ \kms\ channels in Fig. \ref{f.chans.CO32.br}).
The condition required to see both blueshifted and redshifted emission in a conical outflow is
$i_{\rm S, outflow} + \theta_{\rm S, op}/2 > 90\arcdeg$ where
$i_{\rm S, outflow} $ is the inclination of the outflow axis and $\theta_{\rm S, op}$ is the full opening angle of the cone.
For the $\theta_{\rm S, op} \sim 20\arcdeg$ measured above, seeing both blue- and redshifted emission
in both cones requires $i_{\rm S, outflow} \gtrsim 80\arcdeg$.
On the other hand, the data do not support $ i_{\rm S, outflow} \approx 90\arcdeg$
because that would make both blueshifted and redshifted emission almost equally visible in each cone.
These arguments set the above-mentioned range of $i_{\rm S, outflow}$.
Its further refinement is hampered
by uncertainties in the outflow opening angle, its spatial variation (if any), and the curvature of the outflow.
The inclination correction to the line-of-sight velocity is at least a factor of 2.9 ($= 1/\cos 70\arcdeg$) and is
5.8 and 11.4 for the $i_{\rm S, outflow}$ of 80\arcdeg\ and 85\arcdeg, respectively.
With CO detected to at least $\pm 450$ \kms\ along our sightline,
the maximum outflow velocity is therefore $\gtrsim 1000$ \kms\ even allowing for the
jet opening angle of about 20\arcdeg.
It is plausible, though not yet certain, that the maximum velocity is as large as 2600 \kms\
($= 450\, \kms /\cos 80\arcdeg$).
The maximum velocity is very likely larger in this southern molecular outflow than in the northern one.
The large velocity provides additional support to the description of this outflow as a molecular jet.
\subsubsection{Southern Outflow Parameters}
\label{s.outflow.southern.parameter}
The mass outflow rate from the southern nucleus is estimated to be
$\dot{M}_{S} \approx 50 \Xtwenty$ and $25 \Xtwenty$ \Msol\ yr$^{-1}$ for
$i_{\rm S, outflow} = 80\arcdeg$ and 70\arcdeg, respectively, from the projected outflow extent of 4\arcsec,
a characteristic line-of-sight velocity of the outflow of 250 \kms,
and the gas mass estimated in \S \ref{s.result.line.hv.mass}.
The time scale for the outflow to travel 4\arcsec\ on the sky is 0.5 and 1 Myr, respectively,
for $i_{\rm S, outflow} = 80\arcdeg$ and 70\arcdeg.
Adopting the same characteristic velocity, the kinetic luminosity of the southern outflow is on the order
of $L_{\rm kin, S} \about 9\Xtwenty \times 10^{9} \Lsol $ (=$3\Xtwenty \times10^{36}$ W) and $1\Xtwenty \times 10^{9} \Lsol$
for $i_{\rm S, outflow} = 80\arcdeg$ and 70\arcdeg, respectively.
This kinetic luminosity is larger than that of the northern outflow even though the northern nucleus is more luminous
in the mid-infrared and presumably also in total luminosity.
It exceeds the mechanical luminosity of the southern nucleus due to supernovae and stellar winds
if $i_{\rm S, outflow} = 80\arcdeg$ and $\Xtwenty = 1$.
The gas depletion time of the southern nucleus by this outflow is only $0.6\chi_{S}$ Myr for the central 80 pc
for $i_{\rm S, outflow} = 80\arcdeg$.
Here we used the peak gas surface density at the 80 pc resolution for the mass of gas to be depleted by the outflow.
We chose this small size because the base of the bipolar molecular jet appears compact.
Again, the depletion time does not depend on our choice of the CO-to-\HH\ conversion factor
if the same conversion factor applies to the gas at the nucleus and in the outflow (i.e., if $\chi_{S} = 1$),
but the time scale can be ten times longer ($\chi_{S} \sim 10$) if the outflowing CO is optically thin.
The outflow rate and kinetic luminosity above are lower limits in the sense that they do not account for the mass that is in the outflow
but has lower line-of-sight velocities than the 224 \kms\ cutoff in our flux measurement of the high-velocity gas
in \S \ref{s.result.line.hv.flux}. The gas depletion time is an upper limit for the same reason.
The omission of the low-velocity gas is more significant than for the northern outflow
because the de-projected cutoff velocity is larger, 1300 and 650 \kms\ for $i_{\rm S, outflow} = 80\arcdeg$ and 70\arcdeg, respectively.
The total gas mass in the outflow may well be an order of magnitude larger than the gas mass above our cutoff velocity.
In our 0\farcs6 resolution integrated intensity image in Fig. \ref{f.maps.CO102132},
the CO(3--2) flux in the southern cone of the southern outflow is 300 Jy \kms\
at distances from the southern nucleus between 1\arcsec\ and 4\arcsec\ along the outflow.
For comparison, the CO(3--2) flux at velocities above our cutoff is only 21 Jy \kms\ in this outflow cone in the same dataset.
We used the latter value for our outflow rate calculation.
Thus the omission of low-velocity flow may cause an underestimate of the outflow rate by up to an order of magnitude.
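The outflow parameters above follow from simple geometry: the crossing time is the de-projected length divided by the de-projected velocity, the outflow rate is the outflow gas mass divided by that time, and the kinetic luminosity is $\frac{1}{2}\dot{M}v^2$. The sketch below is our own check, assuming a scale of $\sim$170 pc arcsec$^{-1}$ for NGC 3256 (i.e., a distance of $\sim$35 Mpc, an assumption not restated in this section):

```python
import math

PC_KM = 3.0857e13            # km per parsec
YR_S  = 3.156e7              # seconds per year
MSOL  = 1.989e30             # kg
LSOL  = 3.828e26             # W
PC_PER_ARCSEC = 170.0        # scale at ~35 Mpc (assumption)

def outflow_params(extent_arcsec, v_los, i_deg, mass_msol):
    """Crossing time (Myr), mass outflow rate (Msol/yr), and kinetic
    luminosity (Lsol) for an outflow of projected extent extent_arcsec,
    characteristic line-of-sight velocity v_los (km/s), inclination
    i_deg, and gas mass mass_msol."""
    i = math.radians(i_deg)
    length_km = extent_arcsec * PC_PER_ARCSEC / math.sin(i) * PC_KM
    v_km = v_los / math.cos(i)                  # de-projected velocity
    t_yr = length_km / v_km / YR_S              # crossing time
    mdot = mass_msol / t_yr                     # outflow rate
    lkin = 0.5 * mdot * MSOL / YR_S * (v_km * 1e3) ** 2 / LSOL
    return t_yr / 1e6, mdot, lkin

# southern outflow: 4 arcsec extent, 250 km/s, M ~ 2.5e7 Msol (X20 = 1)
t80, mdot80, lkin80 = outflow_params(4.0, 250.0, 80.0, 2.5e7)
t70, mdot70, _ = outflow_params(4.0, 250.0, 70.0, 2.5e7)
```

Under these assumptions the sketch recovers the quoted $\sim$0.5 and $\sim$1 Myr crossing times, $\sim$50 and $\sim$25 \Msol\,yr$^{-1}$ outflow rates, and a kinetic luminosity of order $10^{10}$ \Lsol\ for $i = 80\arcdeg$.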
\subsubsection{Possible Driver: Radio Jet}
\label{s.outflow.southern.driver}
The radio image in Fig. \ref{f.vla-almaHV} suggests
that the southern molecular outflow is associated with a bipolar radio jet from the southern nucleus.
The high velocity CO(3--2) emission at $2775 \pm 380$ \kms\ is at either end of the pair of linear radio features
that emanate to the north and south from the southern nucleus, although the southern radio spur appears to go further
(see \S\ref{s.result.comparison.VLA}).
We also found that the southern cone of the molecular jet is along the southern radio spur,
even following its westward curve.
These configurations allow a model in which a bipolar radio jet from the southern nucleus
entrains the southern molecular outflow.
If so, the apparent acceleration of molecular gas along the outflow to $d\approx 1\farcs8$ is probably due to
continuous dragging of molecular gas by the high-speed plasma jet.
\subsection{Significance of the Outflows}
Both molecular outflows are significant in the mass consumption budget of the individual nuclei
because the outflow rates are comparable to or larger than the star formation rates in the nuclei.
\citet{Lira08} estimated the star formation rates of the northern and southern nuclei to be
\about15 and \about6 \Msun\,yr$^{-1}$, respectively, by modeling their infrared spectral energy distributions.
Our outflow rates are larger than the star formation rates at both nuclei; they are at least comparable considering
their uncertainties.
The total outflow rate, $60 X_{\rm 20, \, N\, outflow} + 50 X_{\rm 20, \,S \, outflow}$ \Msun\,yr$^{-1}$, is also
on the same order as the total star formation rate of NGC 3256, \about50 \Msun\,yr$^{-1}$
(Table \ref{t.4418param}).
This is still so when \Xtwenty\ is \about0.1 in both outflows for optically thin CO emission.
The star formation history of the merger should be influenced by the molecular gas outflow ---
this was a conclusion of \citet{Sakamoto06} and it still holds in our new study.
Part of the outflowing molecular gas, in particular that in the southern molecular jet, will probably escape
from their original galaxy but may not leave the merger.
The ratio of escape velocity to circular orbital velocity is 2.5--3 for extended mass distributions of galaxies \citep{Leitherer13};
it is $\sqrt{2}$ for Keplerian motion.
We estimated in \S\ref{s.configuration.Mdyn_MgasMdyn} the rotational velocities of 300 \kms\ and 200 \kms\
at a radius of 200 pc for the northern and southern galaxies, respectively.
Assuming a flat rotation curve in each galaxy beyond this radius,
the ratio is 2.5 and 13, respectively, for the maximum molecular outflow velocity that we estimated
for $\gtrsim$4$\sigma$ emission in \S\ref{s.outflow.northern.parameter} and \ref{s.outflow.southern.parameter}
(i.e., 750 \kms\ for N and 2600 \kms\ for S).
The ratio is 3.3 for the \about1000 \kms\ maximum velocity obtained from the FWZI of CO(1--0) spectrum on the northern nucleus.
It is 7.5 (3.2) for our 1300 (650) \kms\ de-projected cutoff velocity used for the southern jet
with $i_{\rm S,\, outflow} =80\arcdeg\; (70\arcdeg)$.
A tiny fraction of the molecular gas in the northern outflow and most of the high-velocity molecular gas in the southern
outflow are therefore above their respective escape velocities from respective galaxies.
Whether the molecular gas will escape from the merging system is a different problem and not certain.
For one thing, the escape velocity from the merger is larger than that from a constituent galaxy
because the former is more massive.
Moreover, hydrodynamical effects on the outflowing molecular gas, already implied by the curvature of the southern molecular jet,
are likely significant and can decelerate the outflow through interaction with ambient gas in the system.
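The escape-velocity comparison above reduces to ratios of the de-projected maximum outflow velocities to the circular velocities at $r = 200$ pc. The short check below is our own illustration of that arithmetic (input velocities taken from the text):

```python
# circular velocities at r = 200 pc (from the rotation-curve estimates
# in the text) and de-projected maximum outflow velocities, in km/s
cases = {
    "N, 4-sigma max":  (750.0, 300.0),
    "N, CO(1-0) FWZI": (1000.0, 300.0),
    "S, 4-sigma max":  (2600.0, 200.0),
}
ESC_OVER_CIRC = (2.5, 3.0)   # escape/circular ratio for extended mass profiles

ratios = {label: v_out / v_circ for label, (v_out, v_circ) in cases.items()}
for label, r in ratios.items():
    # gas exceeding ESC_OVER_CIRC can escape its host galaxy
    print(f"{label}: v_out/v_circ = {r:.1f}")
```

The southern jet's ratio of 13 is well above the 2.5--3 escape threshold, whereas the northern outflow sits at or just above it, matching the conclusion in the text.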
\section{Dormant? AGN in the Southern Nucleus}
\label{s.Snucleus}
The most plausible driver of the southern bipolar molecular jet is an AGN in the southern nucleus if the outflow is
entrained by a bipolar radio jet.
This is because only AGNs are known to drive well-collimated radio jets of several 100 pc to several 100 kpc.
The contrast between the northern and southern outflows in terms of the outflow opening angle, velocity, and kinetic luminosity
also implies different driving mechanisms between them.
Since the northern outflow is in all likelihood a starburst-driven superwind, the southern outflow is left with an AGN,
barring a starburst with very unusual parameters.
\subsection{Constraints on Current AGN Activities}
\label{s.Snucleus.AGNconstraints}
Despite the likely radio jet that we identified, recent searches for an AGN in the southern nucleus as well as in NGC 3256 as a whole
have been generally negative though not unanimously so.
\citet{AlonsoHerrero12} modeled the Spitzer mid-IR spectra at \about5--38 \micron\ from the central 13\arcsec\ including both nuclei
and concluded that any AGN contribution to the bolometric luminosity of NGC 3256 is less than 1\%.
In X-rays, \citet{Lira02} detected the southern nucleus, in addition to the brighter northern nucleus, with
long (28 ks) Chandra observations but found no evidence for an AGN in either nucleus.
Their absorption-corrected X-ray luminosity for the southern nucleus in the 0.5--10 keV range was at least two orders of magnitude
below that of classical Seyfert nuclei.
They concluded that only a low luminosity AGN comparable to that in M81 is possible.
\citet{PereiraSantaella11} analyzed 126 ks observations with XMM-Newton and concluded the absence of a luminous Compton-thick AGN.
Although they confirmed the weak 6.4 keV Fe K$\alpha$ line marginally detected by \citet{Jenkins04},
the line equivalent width was found to be too small for a luminous AGN.
On the positive side, \citet{Neff03} found that the radio-to-X-ray ratios of both nuclei are indicative of low-luminosity AGNs.
Our ALMA observations set constraints on any hidden AGN in the southern nucleus regarding the column density
and the spatial extent of the obscuring material as well as on the AGN luminosity.
The mean absorbing column density is as high as $\log (N_{\rm H, equiv.}/{\rm cm^{-2}}) \approx 23.5$
toward the central 80 pc of the southern nucleus; here we use half of the total column density.
Although this does not make the nucleus Compton thick, it is an order of magnitude larger than the column density
that \citet{Lira02} used for absorption correction.
The true column density toward the AGN, if any, can be much higher (or lower) than this mean value because
an AGN could be shrouded at a much smaller scale.
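The conversion from a molecular gas mass surface density to an equivalent hydrogen column density can be sketched as follows. This is our own illustration: the input of $3\times 10^3$ \Msol\,pc$^{-2}$ (half of an assumed $\sim$$6\times 10^3$ \Msol\,pc$^{-2}$ total, comparable to the nuclear surface density quoted later in this section) and the helium mass factor of 1.36 are assumptions, not values restated from the paper:

```python
import math

MSOL_G = 1.989e33     # solar mass in g
PC_CM  = 3.0857e18    # parsec in cm
M_H    = 1.673e-24    # hydrogen mass in g
HE_FAC = 1.36         # helium mass correction (assumption)

def surface_density_to_NH(sigma_msol_pc2):
    """Equivalent hydrogen column density (cm^-2) for a total gas mass
    surface density in Msol/pc^2, removing the helium contribution."""
    sigma_cgs = sigma_msol_pc2 * MSOL_G / PC_CM**2   # g cm^-2
    return sigma_cgs / HE_FAC / M_H

# half of an assumed ~6e3 Msol/pc^2 total toward the embedded source
log_NH = math.log10(surface_density_to_NH(3e3))
```

Under these assumptions $\log N_{\rm H} \approx 23.4$--23.5, of the order quoted above.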
Regardless of the heating source behind it, the very large obscuration toward the southern nucleus is consistent with
its very deep 9.7 \micron\ silicate absorption observed by \citet{MartinHernandez06} and \citet{DiazSantos10}.
The absorption index is $S_{\rm 9.7 \mu m} = \ln (f_{\rm 9.7 \mu m, obs}/f_{\rm 9.7 \mu m, cont}) < -3.0$ according to the 0\farcs36 aperture
data in Fig. 3 of \citet{DiazSantos10}.
This absorption index is comparable to those of Arp 220 and NGC 4418 both of which
have been suspected to host hidden Compton-thick AGNs \citep{Roche86, Spoon07}.
Their nuclei have compact and bright dusty cores with sizes of tens of parsecs, high opacities at submillimeter wavelengths,
and $\gtrsim 100$ K brightness temperatures at 860 \micron\ \citep{Sakamoto08,Sakamoto13}.
Interestingly, we did {\it not} detect such a compact and bright continuum core toward the southern (as well as northern) nucleus
nor did we detect lines from vibrationally excited molecules (Table \ref{t.linelist}) unlike toward Arp 220 and NGC 4418 \citep{Costagliola10,Sakamoto10,Martin11}.
This indicates that any Compton thick and warm absorber around an AGN in the southern nucleus must be very compact.
For example,
a dust shroud having an 860 \micron\ opacity of 0.3 (i.e., an X-ray Compton opacity of \about 10) and a temperature of 100 K
would have to be 6 pc (0\farcs03) in size to reproduce the observed peak 860 \micron\ brightness temperature of 0.24 K at our 0\farcs43 resolution.
The bolometric luminosity of this core would be $2\times 10^9$ \Lsun.
It is an upper limit because
only a part of the 860 \micron\ continuum is thermal dust emission (\S \ref{s.result-continuum})
and
probably only a part of the observed 860 \micron\ dust continuum is from the central 6 pc.
The absence of a bright submillimeter core in our data is therefore consistent with
the mid-IR estimate of the low luminosity of any AGN in NGC 3256.
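The size of the hypothetical dust shroud follows from a simple beam-dilution argument: the observed peak brightness temperature is roughly $T_{\rm d}(1-e^{-\tau})(\theta_{\rm s}/\theta_{\rm beam})^2$. The sketch below is our own first-order check; it uses the Rayleigh--Jeans limit with no CMB or Planck corrections and an assumed scale of 170 pc arcsec$^{-1}$, so it recovers $\sim$7 pc rather than exactly the 6 pc quoted above:

```python
import math

PC_PER_ARCSEC = 170.0   # approximate scale at NGC 3256 (assumption)

def source_size_pc(tb_obs, t_dust, tau, beam_arcsec):
    """Size (pc) of a beam-diluted dust core that reproduces the
    observed peak brightness temperature tb_obs (K), assuming
    tb_obs ~ T_dust * (1 - exp(-tau)) * (theta_src / theta_beam)^2."""
    tb_src = t_dust * (1.0 - math.exp(-tau))        # intrinsic Tb
    theta_src = beam_arcsec * math.sqrt(tb_obs / tb_src)
    return theta_src * PC_PER_ARCSEC

# 0.24 K peak at 0.43" resolution, T = 100 K, tau(860 um) = 0.3
size = source_size_pc(0.24, 100.0, 0.3, 0.43)
```

The agreement at the factor-of-unity level supports the order-of-magnitude nature of the luminosity limit derived in the text.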
\subsection{AGN Activities in Recent Past?}
\label{s.Snucleus.recent}
The molecular bipolar jet plausibly driven by an AGN combined with the absence of luminous AGN could
be explained in two ways.
One is that the low-luminosity AGN is very efficient in driving the radio and molecular jets;
the other is that the AGN was previously active but is currently inactive, possibly due to the quenching effect of the outflow.
There are indeed observations that arguably suggest a luminous AGN in NGC 3256 some $10^4$ yr ago.
\citet{Moran99} found, in addition to signs of a several 100 \kms\ superwind,
broad \Halpha\ line emission with FWZI $\approx$ 4000--6000 \kms\ at off-center positions.
The broad line was not detected on the northern nucleus but was detected $\gtrsim$10\arcsec\ from it in a 2\farcs5 slit along
${\rm p.a.} = 155\arcdeg$.
No velocity shift of the broad line was found between the two sides of the nucleus.
Although the unusually large line widths and the lack of velocity shift alone could be attributed to our southern outflow,
the locations where the broad line is detected are not in the 20\arcdeg\ opening angle of the outflow.
\citet{Moran99} deemed it implausible
that the broad line emission is reflected light of an AGN broad line region
citing the lack of a luminous AGN that can illuminate the scattering ISM several kpc away.
However, it is possible, given our detection of a high-velocity molecular jet from the southern nucleus,
that the southern nucleus had a luminous AGN until very recently.
If the broad line emission at least 3 kpc away from the southern nucleus is a light echo of the past activity,
the nucleus was (much more) active $10^4$ yr ago.
Similar variations of AGN luminosity at $10^3$--$10^5$ yr time scales have been found
in a growing number of galaxies \citep{Keel12} and,
in our Galaxy, the X-ray luminosity of Sgr A$^\ast$ dropped from its `high' state of the last 500 yr by 4--6 orders of magnitude within
the last 100 years \citep{Ryu13}.
A caveat for the scenario that the southern outflow was driven by a radio jet from a recently deactivated AGN is
that AGN radio jets are not preferentially aligned with the galaxy rotation axes \citep{Kinney00,Gallimore06}
though a good alignment was recently reported for Sgr A$^\ast$ \citep{Li13}.
The southern outflow and the southern nuclear disk are apparently aligned, at least in projection onto the sky.
Unless this is another case of intrinsic galaxy-jet alignment, this probably suggests collimation by the nuclear disk.
This alignment may arise through the interaction of a radio jet with the nuclear gas concentration, perhaps the process through which
the jet is loaded with molecular gas.
Alternatively, the alignment might be because the outflow is not entrained by a radio jet but driven by some other mechanisms
including a compact starburst, AGN, and their combination where the nuclear disk works as a collimator.
Star formation in the southern nucleus is active,
proceeding at one third of the rate in the northern nucleus \citep{Lira08}
and fueled by the high surface-density gas of
$\Sigma_{\rm mol}({\rm S}) = 6\times 10^3 \Xtwenty$ \Msol\,pc$^{-2}$.
It is therefore reasonable to expect some contribution of star formation to the southern outflow.
If the southern outflow is driven mainly by a starburst, then the kinetic luminosity of the outflow must be much lower
than that calculated from our fiducial conversion factor and outflow inclination;
these parameters must then be lower than we assumed.
On the whole, we regard an AGN jet-driven outflow as more plausible than the alternatives as the main mechanism
for the southern molecular jet, with some contribution from the starburst being all but certain.
This model is, however, not yet proven and needs further studies for verification and
to determine the true driving mechanism(s).
\section{Discussion and Conclusions}
\label{s.conclusions}
We have reported our ALMA and SMA observations of molecular line and continuum emission in the center of NGC 3256.
We constrained the configuration of the two merger nuclei and their nuclear molecular disks much better than before
and resolved for the first time the high-velocity molecular gas in the merger into two molecular outflows from the two nuclei.
We have suggested the southern molecular outflow from NGC 3256S to be driven by an AGN bipolar jet.
If confirmed, it joins a small group of outflows that share the same driving mechanism and have been imaged in molecular line(s).
They include the molecular outflows in M51 \citep{Matsushita04}, NGC 1266 \citep{Alatalo11}, and NGC 1433 \citep{Combes13}.
Compared with these outflows, the bipolar molecular jet of NGC 3256S is better collimated and more energetic
for a common \Xco.
This may be because the AGN radio `jets' in the other galaxies are wider radio plumes.
Mainly because of the large outflow velocity, the kinetic luminosity of the southern outflow approaches
that of local ultraluminous infrared galaxies and quasar hosts observed by \citet{Cicone14}, who obtained
outflow kinetic luminosities on the orders of $10^{36}$--$10^{37}$ W with a conversion factor 3 times lower than ours.
The large maximum velocity of the southern outflow is also comparable to or larger than those in their survey
but this is probably because ours is helped much by the high ALMA sensitivity and the proximity of NGC 3256.
The overall significance of AGN-driven, jet-entrained molecular outflows is an open question.
AGN time variability similar to the one we suggested for NGC 3256S may reduce the apparent AGN contribution to
galactic molecular outflows.
Regarding jet-entrained outflows, on one hand, radio jets have been found only in a minority of AGNs.
For instance, \citet{Ho01} found ``linear'' structures of radio continuum in 14/52 = 27\% of optically selected, nearby Seyfert galaxies.
On the other hand, the parameters of our southern outflow imply that a jet-entrained outflow can be more powerful and efficient
than other outflows when normalized by the source bolometric luminosity.
It is possible therefore that the small number and/or short lifetime of the outflows driven by AGN radio jets are offset to some extent
by their efficiencies and luminosities.
The two molecular outflows in NGC 3256 are excellent targets for such assessment
because we can simultaneously study properties and driving mechanisms of two powerful molecular outflows of different natures.
Our observations have added two similarities between NGC 3256 and Arp 220
in addition to both being late stage mergers with large infrared luminosities.
One is the presence of outflows from both of the two merger nuclei; for Arp 220
blueshifted molecular line absorption indicative of outflow has been detected toward both nuclei
\citep{Sakamoto09}.
The other is that the two merger nuclei with less than 1 kpc projected separation still retain
their nuclear gas disks with misaligned rotational axes; for Arp 220 this was first imaged by \citet{Sakamoto99}.
Our submillimeter observations also revealed a clear difference between the two mergers.
Namely, the nuclei of NGC 3256 are less obscured than the Arp 220 nuclei in terms of
gas and dust column density averaged at 100 pc scale.
This is most clearly seen in the submillimeter continuum emission whose opacity due to dust is almost unity at 860 \micron\
toward the nuclei of Arp 220 but about two orders of magnitude lower toward the nuclei of NGC 3256.
In order for NGC 3256 to evolve into Arp 220, therefore, significant gas accretion is needed to the nuclei
despite the ongoing strong molecular outflows that would deplete the gas in the nuclei in Myrs.
Such evolution may indeed occur because Arp 220 is probably more advanced as a merger than NGC 3256 judging from their nuclear separations.
NGC 3256 may become more luminous in that process, perhaps as luminous as Arp 220, because
there is a statistical trend for larger nuclear obscuration (i.e., more gas funneling to the nuclei) and
larger total luminosities in more advanced mergers \citep{Haan11,Stierwalt13}.
Further studies on NGC 3256 are warranted also for the purpose of tracing the late evolutionary path of a merger
that is plausibly about to become ultraluminous.
Finally we reemphasize our caution on \Xco\ in particular for the high-velocity molecular outflows.
The large line widths of the outflow gas reduce the CO column density per line width and hence
may well result in optically thin CO emission.
The conversion factor for that case is $\Xtwenty \sim 0.1$.
Such a low conversion factor for optically-thin CO has been adopted, for example, for the molecular outflow in NGC 1266
on the basis of multi-line CO excitation analysis \citep{Alatalo11}.
The outflows in NGC 3256 may have a similar situation and \Xco.
Alternatively, the outflowing gas may consist of an ensemble of optically-thick (in CO) clouds that spread in a wide velocity range.
Partial support for this comes from our detection of CN(1--0) lines, with likely enhancement relative to CO(1--0),
in the high-velocity gas (\S\ref{s.result.line.hv.spectra}).
Although CN may be subthermally excited, the detection of a line with a $10^6$ cm$^{-3}$ critical density
implies gas clumping for dense gas to exist in the high velocity outflows.
Even if individual clumps are not virialized as assumed for the standard \Xco, the conversion factor for optically thick
clumps will be larger than that for optically thin CO (and lower than that for virialized CO-thick clouds).
Similar clumping and presence of dense gas in a galactic molecular outflow have been deduced for Mrk 231
by \citet{Aalto12} from their detection of broad line wings in HCN, \HCOplus, and HNC lines.
Because most outflow parameters in Table~\ref{t.4418measured.param} depend on \Xtwenty,
followup studies on the physical and chemical properties of the high-velocity gas are highly desired.
Our primary findings are:
1. Each of the two merger nuclei has its own nuclear disk where molecular line and continuum emission peak.
The northern nuclear disk is nearly face-on ($i$ \about30\arcdeg),
has a \about200 pc characteristic radius,
and clearly rotates around the northern nucleus.
The southern nucleus has a more compact emission peak and a linear structure extending \about200 pc on either side.
It is deduced to be a nearly edge-on nuclear disk rotating around the southern nucleus.
The mean molecular gas surface densities of both nuclei are about 3$X_{20} \times10^4$ \Msol\,pc$^{-2}$
at 240 pc resolution, where $X_{20}$ is the CO-to-\HH\ conversion factor in the unit of $10^{20}$ \unitofX.
The peak gas surface density is $6 X_{20} \times10^4$ \Msol\,pc$^{-2}$ at the southern nucleus at 80 pc resolution.
2. The high velocity molecular gas previously found at the center of the merger is resolved into two molecular outflows
associated with the two nuclei.
We detected not only CO but also CN lines with enhancement in these outflows.
The CN detection in a galactic outflow is, to our knowledge, the first.
The total molecular outflow rate of the two outflows is on the same order as the total star formation rate in NGC 3256.
3. The molecular outflow from the northern nuclear disk is a bipolar flow with a wide opening angle
and a nearly pole-on viewing angle.
It has de-projected outflow velocities up to 750 \kms\ at $\gtrsim$4$\sigma$
and an outflow time scale (crossing time) of 1 Myr.
Its molecular gas mass is $6 X_{20} \times10^7$ \Msol,
mass outflow rate $60 X_{20}$ \Msol\,yr$^{-1}$,
and kinetic luminosity on the order of $4 X_{20} \times 10^8$ \Lsun.
The last three are for the gas at de-projected velocities above 260 \kms.
At the current rate the outflow would deplete molecular gas in the northern nuclear disk in 3 Myr
if the same conversion factor applies to the nuclear disk and the outflow.
Most of the outflow/superwind signatures found so far at other wavelengths in NGC 3256
must be from this outflow.
4. The molecular outflow from NGC 3256S is a well collimated bipolar jet
with a \about$20\arcdeg$ opening angle and is nearly edge on.
It has a de-projected maximum velocity of 2600 \kms\ for the favored inclination angle of 80\arcdeg\
or 1300 \kms\ for $i=70\arcdeg$.
The line-of-sight outflow velocity increases with distance up to 300 pc from the nucleus.
This molecular jet has a 0.5 Myr crossing time,
a mass of $2.5 X_{20} \times10^7$ \Msol,
a mass outflow rate $50 X_{20}$ \Msol\,yr$^{-1}$, and a kinetic luminosity
on the order of $90 X_{20} \times 10^8$ \Lsun\ for $i=80\arcdeg$.
These are for gas at projected velocities above 220 \kms\ and the lower velocity gas in the outflow
may be an order of magnitude larger in mass.
The gas depletion time for the central 80 pc is \about0.6 Myr
under the same assumption about the conversion factor as above and ignoring the lower velocity flow.
5. The northern outflow is a starburst driven superwind in all likelihood.
The southern outflow is most likely entrained by a radio jet from a weak or recently dimmed AGN in the southern nucleus.
Pieces of evidence for the latter outflow driver are the large differences in the outflow parameters from the northern superwind,
off-nuclear broad \Halpha\ lines in NGC 3256,
and a pair of radio spurs from the southern nucleus that matches in shape the southern molecular bipolar jet.
6. Continuum spectral indexes are negative at 3 mm and positive at 0.86 mm for both nuclei.
The index is lower, in particular at 0.86 mm, for the southern nucleus, suggesting
significant synchrotron and/or free-free emission even at 860 \micron.
Neither nucleus has a bright ($\Tb > 10$ K) dust continuum core of several tens of parsecs in size at 860 \micron\
such as those found in Arp 220 and NGC 4418.
This disfavors presence of a highly Compton-thick and currently luminous AGN in the nuclei of NGC 3256.
The new observations presented in this paper contain more information than we could fit in a single paper.
Further analysis will be reported elsewhere.
\vspace{5mm}
\acknowledgements
We are grateful to the people who worked on or supported the effort to make ALMA a reality.
We also thank the ALMA and SMA staff who carried out our observing runs or made the data assessments.
This paper made use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00002.SV and ADS/JAO.ALMA\#2011.0.00525.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan),
together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
This paper also uses observations made with the Submillimeter Array, which is a joint project
between the Smithsonian Astrophysical Observatory and the
Academia Sinica Institute of Astronomy and Astrophysics, and is
funded by the Smithsonian Institution and the Academia Sinica.
This research is also partly based on observations made with the NASA/ESA Hubble Space Telescope,
and obtained from the Hubble Legacy Archive,
which is a collaboration between the Space Telescope Science Institute (STScI/NASA),
the Space Telescope European Coordinating Facility (ST-ECF/ESA)
and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National Aeronautics and Space Administration.
The authors also made use of the NASA/IPAC Extragalactic Database (NED),
NASA's Astrophysics Data System (ADS),
and
the splatalogue database for astronomical spectroscopy.
KS was supported by the Taiwanese NSC grants 99-2112-M-001-011-MY3 and 102-2119-M-001-011-MY3.
{\it Facilities:} \facility{ALMA, SMA, HST, Spitzer}
\clearpage
\section{Introduction}
Ecology studies the relationship between living organisms and their
physical environment, including the
habitat requirements of certain species, the interactions in a community of species, both positive
(mutualism and commensalism) and negative
(predation and competition),
and the stability of the resulting relationships across space and time.
Species distribution models (SDMs) for these phenomena
describe the impact of environmental variables on the distribution of a
species. These models have developed into popular tools for predicting the occurrence or
abundance of a species in a certain habitat, which in turn provides insights into how that
species reacts to changes in important environmental variables, such as
temperature and precipitation. Predictions from SDMs are valuable not only in
basic ecology
but also in applied ecology, especially for conservation of
endangered species and the management of invasive species.
Binary regression models and
classification machine learning are often used to predict the
presence or absence of a species in a certain habitat
\citep[\textit{e.g.},~][]{Elith_Leathwick_Hastie_2008,hothorn2011decomposing}.
Models describing the abundance of a species rely on observed numbers of individuals instead
of presence/absence information and thus yield more
detailed information on a species' distribution. However, despite early
work in this direction \citep{Death_2002}, most contemporary SDMs ignore the
relationships between members of a community of species in a certain
habitat.
Therefore, more elaborate models covering both the community and the environmental aspect
have been proposed over the past decade.
\cite{Kissling_Dormann_Groeneveld_2012} reviewed several ad hoc approaches, such
as the inclusion of the presence/absence information of one species as a
predictor variable in the SDM for another species.
\cite{Ingam_Vukcevic_Golding_2020} categorized more sophisticated model-based approaches into
multi-species distribution models (MSDMs) and joint species distribution
models (JSDMs). MSDMs are used to jointly estimate multiple individual SDMs
by assuming that the impact of the environment on the species
distribution is similar for similar species. \cite{Ovaskainen_Soininen_2011}
proposed a binary regression model, and \cite{Ingam_Vukcevic_Golding_2020} a more
flexible Gaussian process model for presence/absence data in this context.
Recently, \cite{norberg2019comprehensive} published an empirical comparison of
33 models for presence/absence data.
JSDMs consider relationships between species as
a priori unstructured correlations expressed in a residual term.
\cite{Ovaskainen_Hottola_Siitonen_2010}
proposed a logistic model for presence/absence data,
\cite{Pollock_Tingley_Morris_2014} a probit model, and
\cite{Warton_Blanchet_OHara_2015} mixed Poisson models for the abundance
data of multiple species. Similar models with a priori structured correlations have also
been introduced, especially in phylogenetic
generalized linear mixed models \citep{Ives_Helmus_2011}.
The modeling approaches implemented by MSDMs and JSDMs are mutually exclusive:
MSDMs estimate marginal models for each species but lack an explicit
assessment of their relationships, whereas JSDMs allow the identification of
species relationships without providing interpretable marginal models.
While marginal SDMs can be obtained from JSDMs by
integrating over species numerically, this destroys the simple structure
of the conditional JSDM \citep{Lee_Nelder_2004,Muff_Held_Keller_2016}.
Moreover, these models typically lack the flexibility needed to estimate complex changes in
the relationship between species across space and time or in different
habitats.
In the following, we present a novel perspective on models describing the joint
distribution of multiple
target species based on abundance data, \textit{i.e.},~ the number of individuals of all
target species observed at the same time and place. The correlation
between each pair of species is described explicitly in terms of Spearman's rank
correlation through dedicated model parameters. As this parameter
may also change in response to changes in environmental conditions, space or
time, the complexities that are known to occur in nature can be modeled as well
\citep[for example][]{petren1998habitat,bakker2006herbivore}.
Single SDMs,
describing the marginal distribution for each target species as a function
of environmental variables and space or time can be derived
from this joint model. With the number of individuals $Y_j \in \{0, 1, 2, 3, \dots\}$
for species $j = 1, \dots, J$ in an environment characterized by the
configuration $\text{\boldmath$x$}$, we model the joint conditional distribution
function $\mathbb{P}(Y_1 \le y_1, Y_2 \le y_2, Y_3 \le y_3, \dots,
Y_J \le y_J \mid \text{\boldmath$x$})$ for all $J$ species such that the relationship between
each pair of species (for example, species $j = 1,2$)
in environment $\text{\boldmath$x$}$ is characterized by parameters describing
its joint distribution
\begin{eqnarray*}
\mathbb{P}(Y_1 \le y_1, Y_2 \le y_2 \mid \text{\boldmath$x$}) =
\mathbb{P}(Y_1 \le y_1, Y_2 \le y_2, Y_3 \le \infty, \dots, Y_J \le \infty \mid
\text{\boldmath$x$}).
\end{eqnarray*}
In addition, the marginal distribution of each target species
(species $j = 1$, for example)
\begin{eqnarray*}
\mathbb{P}(Y_1 \le y_1 \mid \text{\boldmath$x$}) =
\mathbb{P}(Y_1 \le y_1, Y_2 \le \infty, Y_3 \le \infty, \dots, Y_J \le \infty \mid \text{\boldmath$x$})
\end{eqnarray*}
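This reduction of the joint distribution to a lower-dimensional margin can be
checked numerically. The following Python sketch uses a trivariate Gaussian as a
stand-in for the latent dependence structure used later in
Section~\ref{sec:methods_multi}; the correlation matrix and cut-off are
illustrative assumptions, and a large finite value replaces the upper limit
$\infty$:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Illustrative latent correlation matrix for J = 3 species (not estimated
# from data): sending the upper limits of all other components to
# (numerical) infinity recovers the marginal of the first component.
corr = np.array([[1.0, 0.4, -0.3],
                 [0.4, 1.0, 0.2],
                 [-0.3, 0.2, 1.0]])
joint = multivariate_normal(mean=np.zeros(3), cov=corr)

y1 = 0.7
big = 40.0  # stands in for the upper limit "infinity"
p_joint = joint.cdf(np.array([y1, big, big]))  # P(Y1 <= y1, Y2 <= inf, Y3 <= inf)
p_marginal = norm.cdf(y1)                      # P(Y1 <= y1)
```

Both probabilities agree up to the accuracy of the numerical normal integral,
illustrating that the marginal model is embedded in the joint model.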
is interpretable as a SDM for a single species. We focus
on the count transformation models introduced by \cite{Siegfried_Hothorn_2020}
as marginal SDMs. The structure of the joint model emerged from
multivariate conditional transformation models
\citep{klein2019multivariate}, in which the dependence
structure is described by correlations in a latent Gaussian copula.
The development of multi-species count transformation models and their
interpretation as species community distribution models for multiple target species
is illustrated by a model for the joint abundance of Great Cormorant
(\emph{Phalacrocorax carbo}), Great Crested Grebe (\emph{Podiceps
cristatus}) and Goosander (\emph{Mergus merganser}) monitored at Seehammer
See (Upper Bavaria, Germany) during the period between 2002 and 2016 \citep{Kinshofer}.
All three bird species feed on fish of approximately the same size within the same
habitat.
In Europe, these birds play a major role in controlling
the abundance of their prey fish species and thus potentially influence
their own abundances. We therefore expected negative interactions between
the three bird species.
These correlations would presumably change according to seasonal variations in the
abundances of the bird species \citep[for seasonal variation of competition see
\textit{e.g.},~][]{wignall2020seasonal,cecala2020seasonal}.
Details on this specific aquatic bird competition scenario are provided
in Section~\ref{sec:methods_data}, and the results of relatively simple models
describing the joint distribution of all three species over time in
Section~\ref{sec:empeval}.
To avoid any bias,
we empirically compared the ability of our model to identify interspecies dependencies
with that of the best performing community model identified in a recent large-scale benchmark
comparison of 33 approaches \citep{norberg2019comprehensive}.
\section{Methods} \label{sec:methods}
\subsection{Competitive interactions in fish-eating aquatic birds} \label{sec:methods_data}
A simple example of distribution models for species communities is illustrated by a
model system of three piscivorous birds whose abundances vary with the season:
Great Cormorant (\emph{Phalacrocorax carbo}), Great Crested Grebe (\emph{Podiceps cristatus})
and Goosander (\emph{Mergus merganser}; Figure~\ref{fig:timeseries}).
\begin{figure}
\begin{center}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{timeseries-1}
\end{knitrout}
\caption{Time series for the three bird species Great Crested Grebe, Great Cormorant and Goosander at lake Seehammer See between May 2002 and November 2016.}
\label{fig:timeseries}
\end{center}
\end{figure}
In the study area, the abundances of all three species in winter increased
during the second half of the 20th century such that carrying capacity was reached
in the 1990s \citep{suter1995cormorants}.
A possible explanation for the logistic growth of the winter populations is
that the food resources for these piscivores were limited.
All three species forage fish of almost the same size
(10–20~cm) by diving to a depth of 5~m.
Thus, the three species show considerable overlap in their feeding niches.
Accordingly, if resources are limiting, interspecific competition would be expected.
In the absence of experimental intervention data, our aim was to detect
interspecific competition via negative correlations in observed multivariate
abundances (for details see \cite{Kinshofer}).
The analysis is based on daily
counts of the three piscivorous bird species sampled at lake Seehammer See, which
is located in the foothills of the Alps in southern Bavaria, about 40~km
southeast of Munich.
It has an area of 1.47~km$^2$, with an average depth of 3.8~m (maximum 12~m).
Thus, the entire water body is accessible
to the three bird species. A local ornithologist, Gerhard Kinshofer, counted
the three species between May 1, 2002 and November 13, 2016
from various locations around the lake by using a spotting scope. He was therefore
able to spot most, if not all, birds on the lake. As the sampling design was
identical across all years, it can be safely assumed that any sampling
error was the same over time.
A total of 5,311 counts were available, with a few missing values either because data
were not collected on that day due to adverse meteorological conditions or
because one of the bird species was not spotted but the ornithologist was
unsure whether it was definitely absent.
Since our model required observations of all three bird species on the same day,
the approximately 6.7\% of days with missing values were excluded,
such that 4,955 observation-days were finally included in our analyses.
\subsection{Single-species models} \label{sec:methods_uni}
As single-species distribution models
we consider count transformation models as introduced by
\cite{Siegfried_Hothorn_2020}. They are the main building block of the novel
species community distribution models constructed for the multivariate abundance
data developed in Section~\ref{sec:methods_multi}. Univariate transformation models
describing the impact of explanatory environmental variables $\text{\boldmath$x$}$ on the
distribution of the abundance of a single species described by the
univariate count response $Y \in \{0, 1, 2, \dots \}$
rely on an unknown transformation of the response, which must be
estimated from the data. On a technical level, these models consist of a
fully parameterized, smooth, monotone non-decreasing transformation $\alpha: \mathbb{R}^+
\rightarrow \mathbb{R}$ of the discrete response and of a function of the environmental
variables
$\eta(\text{\boldmath$x$})$ (for instance, one can consider a linear predictor $\eta(\text{\boldmath$x$}) =
\text{\boldmath$x$}^\top \text{\boldmath$\beta$}$). The discrete conditional distribution function (CDF)
$F_{\rY \mid \rX = \rx}$ is modeled directly through
\begin{eqnarray} \label{eq:count_tm}
F_{\rY \mid \rX = \rx}(y \mid \text{\boldmath$x$}) = \mathbb{P}_{\rY \mid \rX = \rx}(Y \leq y \mid \text{\boldmath$x$}) =
F(\alpha(\lfloor y \rfloor) - \eta(\text{\boldmath$x$}))
\end{eqnarray}
where $y \in \mathbb{R}^+$ is an arbitrary cut-off evaluated at the largest integer
$\lfloor y \rfloor$ at most as large as $y$
and $F$ is an inverse link
function. The choice of $F$ determines the interpretation of
the function $\eta(\text{\boldmath$x$})$ in the model. For example, when $F = \Phi$, then
$\mathbb{E} (\alpha(Y) \mid \text{\boldmath$x$}) = \eta(\text{\boldmath$x$})$ expresses the conditional mean of the
transformed counts. An alternative and attractive way
to interpret the model utilizes a connection to probabilistic
index models. For a reference habitat $\text{\boldmath$x$}_\text{Ref}$, the area under the
curve (AUC) or probabilistic index $\mathbb{P}(Y \preceq Y_\text{Ref} \mid \text{\boldmath$x$},
\text{\boldmath$x$}_\text{Ref})$ describes the probability of observing fewer
individuals in habitat $\text{\boldmath$x$}$ than in the reference habitat. Under
model~(\ref{eq:count_tm}) and $F = \Phi$, this probability is simply $\Phi((\eta(\text{\boldmath$x$}) -
\eta(\text{\boldmath$x$}_\text{Ref})) / \sqrt{2})$ \citep{Thas_Neve_Clement_2012}.
For other choices of $F$, parameter interpretations are listed in Table~1 of \cite{Siegfried_Hothorn_2020}.
The negative sign of $\eta(\text{\boldmath$x$})$ in model~(\ref{eq:count_tm})
ensures that larger values of $\eta(\text{\boldmath$x$})$ correspond, non-linearly, to larger values of the
conditional mean $\mathbb{E} (Y \mid \text{\boldmath$x$})$.
The conditional discrete density for an observed number of individuals $y \in \mathbb{N}$ is
$f_{\rY \mid \rX = \rx}(y \mid \text{\boldmath$x$}) = \mathbb{P}_{\rY \mid \rX = \rx}(Y = y \mid \text{\boldmath$x$})
= \mathbb{P}_{\rY \mid \rX = \rx}(Y \leq y \mid \text{\boldmath$x$}) - \mathbb{P}_{\rY \mid \rX = \rx}(Y \leq y - 1 \mid \text{\boldmath$x$})$, which we can
rewrite as
\begin{eqnarray} \label{eq:count_tm_dens}
f_{\rY \mid \rX = \rx}(y \mid \text{\boldmath$x$}) = F( \alpha(\lfloor y \rfloor) - \eta(\text{\boldmath$x$})) -
F( \alpha(\lfloor y - 1 \rfloor) - \eta(\text{\boldmath$x$}))
\end{eqnarray}
where $\mathbb{P}(Y = y \mid \text{\boldmath$x$}) \equiv 0$ for $y \notin \mathbb{N}$.
The corresponding likelihood is equivalent to the interval-censored
likelihood, in which an observed count $y$ is represented by the interval
$(\ubar{y}, \bar{y}] = (y - 1, y]$ for $y > 0$. \cite{Siegfried_Hothorn_2020}
estimated suitably parameterized functions $\alpha$ and $\eta$ by maximizing this
likelihood.
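As an illustration of this interval-censored likelihood, the following Python
sketch fits a covariate-free version of model~(\ref{eq:count_tm}) with $F =
\Phi$ to simulated Poisson counts. The Bernstein parameterization and the
monotonicity constraint anticipate Section~\ref{sec:param_inf}; the basis
order, the simulated data and the optimizer are illustrative assumptions, not
the authors' implementation (which is available in R):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from scipy.special import comb

rng = np.random.default_rng(1)
y = rng.poisson(lam=4.0, size=500)  # simulated counts standing in for bird data

P = 6                # number of Bernstein basis functions (order P - 1)
ymax = y.max() + 1   # support [0, ymax] used to scale the basis

def bernstein(y, P, ymax):
    """Bernstein basis evaluated at floor(y) / ymax, one row per observation."""
    u = np.clip(np.floor(np.asarray(y, float)) / ymax, 0.0, 1.0)
    k = np.arange(P)
    return comb(P - 1, k) * u[:, None] ** k * (1.0 - u[:, None]) ** (P - 1 - k)

def theta(params):
    """Reparameterization ensuring theta_1 <= ... <= theta_P (monotone alpha)."""
    return params[0] + np.concatenate([[0.0], np.cumsum(np.exp(params[1:]))])

def negloglik(params):
    th = theta(params)
    upper = norm.cdf(bernstein(y, P, ymax) @ th)  # F(alpha(y))
    lower = np.where(y > 0,                       # F(alpha(y - 1)), zero at y = 0
                     norm.cdf(bernstein(y - 1.0, P, ymax) @ th), 0.0)
    return -np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

fit = minimize(negloglik, x0=np.zeros(P), method="BFGS")
alpha_hat = theta(fit.x)
```

Each likelihood contribution $\Phi(\alpha(y)) - \Phi(\alpha(y - 1))$ is exactly
the interval-censored density~(\ref{eq:count_tm_dens}) with $\eta \equiv 0$;
covariates would enter by subtracting $\eta(\text{\boldmath$x$})$ inside both
$\Phi$ terms.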
\subsection{Multi-species models} \label{sec:methods_multi}
In the following, we develop a joint multivariate regression model for multivariate
count data, that is, a model describing the joint distribution of several
count variables conditional on a set of environmental explanatory variables.
In the context of the aquatic bird competition problem
(Section~\ref{sec:methods_data}), the abundances of three species
$\mY = (Y_\text{A}, Y_\text{B}, Y_\text{C})^\top \in \{0, 1,
2, \dots\}^3$ (A = Great Crested Grebe, B = Great Cormorant, C = Goosander) are
modeled jointly, conditional on the time of year. To ensure the interpretability
of the models, our presentation is limited to joint models allowing
the marginal distribution of each species to be understood in terms of
a univariate count transformation model (Section~\ref{sec:methods_uni}):
\begin{equation} \label{eq:mod_shift}
F_{Y_j \mid \mX = \text{\boldmath$x$}}(y_j \mid \text{\boldmath$x$}) =
\mathbb{P}_{Y_j \mid \mX = \text{\boldmath$x$}}(Y_j \leq y_j \mid \text{\boldmath$x$}) =
F( \alpha_j(\lfloor y_j \rfloor) - \eta_j(\text{\boldmath$x$}))
\end{equation}
with counts $y_j \in \mathbb{N}$ for the three species $j = \text{A}, \text{B}, \text{C}$.
We choose $F = \Phi$; alternative choices are discussed in
Section~\ref{sec:discussion}.
Furthermore, pairwise associations between species are made quantifiable
on the scale of the transformed counts via a correlation coefficient.
Multivariate conditional transformation models
\citep{klein2019multivariate} for a continuous response vector $\mY =
(Y_1, Y_2, Y_3, \dots, Y_J)^\top \in \mathbb{R}^J$ feature these types of simple
expressions for the marginal and joint distributions; however, neither those
models nor the inference procedures are directly applicable to count data.
To extend the models to count data and for the sake of simplicity, we restrict
the notation to $J = 3$; the methodology applies to arbitrary dimensions
$J \ge 2$.
In the univariate transformation model for a continuous response, it is assumed
that a transformed version of the response follows a known
distribution, such as a standard normal distribution in models with $F
= \Phi$. Similarly, for a multivariate continuous response, we assume that the
transformed data $\text{\boldmath$h$}(\mY) = (h_1(\mY),
h_2(\mY), h_3(\mY))^\top$ follow a multivariate normal distribution with
mean zero.
The correlation structure is defined by imposing a triangular structure on
each of the component-wise transformation functions
$h_1(\mY), h_2(\mY), h_3(\mY)$ by formulating them as
\begin{equation}
h_1(\mY) = h_1(Y_1), h_2(\mY) = h_2(Y_1, Y_2), \text{ and } h_3(\mY) = h_3(Y_1, Y_2, Y_3).
\end{equation}
The rationale for this choice is given in \cite{klein2019multivariate}.
Each of these transformation functions is decomposed as a linear
combination of marginal, monotone, non-decreasing transformation functions
$\tilde{h}_j: \mathbb{R} \rightarrow \mathbb{R}$
with unknown parameters $\lambda_{21}$, $\lambda_{31}$, and $\lambda_{32}$.
With $h_1(\mY) = \Tilde{h}_1(Y_1)$, we can define
$h_2(\mY) = \lambda_{21} \Tilde{h}_1(Y_1) + \Tilde{h}_2(Y_2)$ and
$h_3(\mY) = \lambda_{31} \Tilde{h}_1(Y_1) + \lambda_{32} \Tilde{h}_2(Y_2) +
\Tilde{h}_3(Y_3)$, or in more compact vector-matrix form,
\begin{equation} \label{eq:mat-vec}
\text{\boldmath$h$}(\text{\boldmath$y$}) = \begin{pmatrix*}[l] h_1(\text{\boldmath$y$}) \\ h_2(\text{\boldmath$y$}) \\
h_3(\text{\boldmath$y$}) \end{pmatrix*} =
\begin{pmatrix*}[l] h_1(y_1) \\ h_2(y_1, y_2) \\
h_3(y_1, y_2, y_3) \end{pmatrix*} =
\begin{pmatrix*}[l] 1 & 0 & 0 \\ \lambda_{21} & 1 & 0 \\
\lambda_{31} & \lambda_{32} & 1 \end{pmatrix*}
\begin{pmatrix*}[l] \tilde{h}_1(y_1) \\
\tilde{h}_2(y_2) \\ \tilde{h}_3(y_3) \end{pmatrix*}
= \mathbf{\Lambda} \tilde{\text{\boldmath$h$}}(\text{\boldmath$y$})
\end{equation}
for a lower triangular matrix $\mathbf{\Lambda} \in \mathbb{R}^{3 \times 3}$ defined by the
$\lambda$-parameters.
The coefficients in
$\mathbf{\Lambda}$ characterize the dependence structure of the transformed
responses via a Gaussian copula, in which the joint (unconditional)
distribution of $\mY$
is given by
\begin{eqnarray*}
\mathbb{P}(\mY \leq \text{\boldmath$y$}) = \Phi_{\mathbf{0}, \mathbf{\Sigma}} \left[
\Phi^{-1}_{0, \sigma_{11}^2} \lbrace \Phi(y_1) \rbrace ,
\Phi^{-1}_{0, \sigma_{22}^2} \lbrace \Phi(y_2) \rbrace ,
\Phi^{-1}_{0, \sigma_{33}^2} \lbrace \Phi(y_3) \rbrace
\right]
\end{eqnarray*}
with variance-covariance matrix $\mathbf{\Sigma} = \mathbf{\Lambda}^{-1}
\mathbf{\Lambda}^{- \top}$.
This allows $\mathbf{\Lambda}$ to be interpreted as the inverse Cholesky factor
of the variance-covariance matrix of the Gaussian copula.
The $\lambda$-parameters thus describe the correlation of the transformed
data $\text{\boldmath$h$}(\mY)$.
Correlation coefficients, the off-diagonal elements of $\text{diag}(\mathbf{\Sigma})^{-1/2}
\, \mathbf{\Sigma} \, \text{diag}(\mathbf{\Sigma})^{-1/2}$, can be computed from $\mathbf{\Lambda}$.
Information about the interaction of species can be described by Spearman's rank
correlations $\rho^{(S)}$: because the transformations $\text{\boldmath$h$}(\mY)$ are monotone
increasing, the rank correlations between species counts $\mY$ can be computed
from the correlations of $\text{\boldmath$h$}(\mY)$, \textit{i.e.},~ from the off-diagonal elements
$\rho_{\tilde{k} k}$ of $\text{diag}(\mathbf{\Sigma})^{-1/2}
\, \mathbf{\Sigma} \, \text{diag}(\mathbf{\Sigma})^{-1/2}$. The rank correlation of $(Y_{\tilde{k}}, Y_k)$ is
given by $\rho^{(S)}(Y_{\tilde{k}}, Y_k) = \frac{6}{\pi}
\arcsin\left(\frac{\rho_{\tilde{k}k}}{2} \right)$.
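The path from $\mathbf{\Lambda}$ to Spearman's rank correlations can be
sketched in a few lines of Python; the $\lambda$ values below are hypothetical
placeholders, not estimates from the bird data:

```python
import numpy as np

# Hypothetical inverse Cholesky factor for J = 3 species.
Lambda = np.array([[ 1.0, 0.0, 0.0],
                   [ 0.6, 1.0, 0.0],
                   [-0.3, 0.4, 1.0]])

Linv = np.linalg.inv(Lambda)
Sigma = Linv @ Linv.T                       # Sigma = Lambda^{-1} Lambda^{-T}
D = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
R = D @ Sigma @ D                           # latent correlation matrix
rho_S = (6.0 / np.pi) * np.arcsin(R / 2.0)  # Spearman's rank correlations
```

On the diagonal, $(6/\pi)\arcsin(1/2) = 1$, so perfect latent correlation maps
to perfect rank correlation, and off-diagonal rank correlations are slightly
attenuated relative to the latent ones.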
Moreover, the Gaussian copula framework allows a formulation of the marginal
distributions
in terms of the transformation functions $\tilde{h}_j$ for $j = 1, 2, 3$:
$F_{Y_j \mid \mX = \text{\boldmath$x$}}(y_j \mid \text{\boldmath$x$}) =
\mathbb{P}(Y_j \leq y_j \mid \text{\boldmath$x$}) = \Phi( \tilde{h}_j(\lfloor y_j \rfloor) - \eta_j(\text{\boldmath$x$}))$,
that is, $\tilde{h}_j$ is the transformation function of the marginal model for
species $j$, which conditions only on the environmental variables $\text{\boldmath$x$}$. These
marginal distribution models correspond to single-species distribution models.
In principle, not only the shift terms $\eta_j(\text{\boldmath$x$})$ but
also the entries of $\mathbf{\Lambda}$ can depend on $\text{\boldmath$x$}$, such that
interactions between species may change with changes in habitat, space, or
time.
This dependence can be formulated entry-wise as
\begin{equation} \label{eq:lambdas}
\lambda_{\tilde{k} k}(\text{\boldmath$x$}) =
\begin{cases}
\tau_{\tilde{k} k} & \text{``Model } \text{M-}\mLambda \text{''} \\
\tau_{\tilde{k} k} + \zeta_{\tilde{k} k}(\text{\boldmath$x$}) & \text{``Model } \text{M-}\mLambda(\rx) \text{''}
\end{cases}
\quad \text{for } 1 \leq k < \tilde{k} \leq J,
\end{equation}
where $\tau_{\tilde{k} k} \in \mathbb{R}$ is a constant and $\zeta_{\tilde{k} k}(\text{\boldmath$x$})$ is a function of the
environmental variables $\text{\boldmath$x$}$.
In the Gaussian copula framework, the coefficients $\lambda_{\tilde{k} k}(\text{\boldmath$x$})$ allow
statements to be made about the independence of the components of the response
vector.
Model $\text{M-}\mLambda$ assumes that
the Spearman's rank correlations between the components of the random vector
$\mY$ are constant, whereas model $\text{M-}\mLambda(\rx)$ allows the Spearman's rank correlations
between species to vary with the environmental variables
$\text{\boldmath$x$}$, space, or time.
\subsubsection{Likelihood inference}
The log-likelihood for continuous observations
$\text{\boldmath$y$}_i \in \mathbb{R}^3, i = 1, \dots, N$ under this continuous
multivariate transformation model is given by
\begin{equation} \label{eq:loglik_continuous}
\ell(\text{\boldmath$\theta$}) = \sum_{i = 1}^N
\sum_{j = 1}^3 \left\{ \log \left( \phi \left( \tilde{h}_{j}(y_{ij}) +
\sum_{\jmath = 1}^{j-1} \lambda_{j \jmath}
\tilde{h}_{\jmath}(y_{i \jmath}) \right) \right) +
\log \left( \tilde{h}_{j}^\prime(y_{ij}) \right) \right\}
\end{equation}
where $\phi = \Phi^\prime$ denotes the standard normal density and
$\text{\boldmath$\theta$} = (\tilde{\text{\boldmath$h$}}, \mathbf{\Lambda})^\top$ contains the parameters of
the model (see Section~\ref{sec:param_inf} for details).
Hence, the parameters of the joint distribution of a continuous response vector
can be readily estimated via maximum likelihood (see
Appendix~\ref{sec:app}). Introducing environmental variables $\text{\boldmath$x$}$ into this
unconditional model is
straightforward: for conditional marginal distributions, the change
$\tilde{h}_{j}(y_{ij}) \rightarrow \tilde{h}_{j}(y_{ij})
- \eta_j(\text{\boldmath$x$}_i)$ is required, and conditional correlations can be estimated
by switching from model $\text{M-}\mLambda$ to model $\text{M-}\mLambda(\rx)$ in Equation~\ref{eq:lambdas}.
Count data $\text{\boldmath$y$}_i \in \mathbb{N}^3$ can be viewed as interval-censored information.
Instead of observing an exact continuous observation $\text{\boldmath$y$}_i \in \mathbb{R}^3$,
the interval $(\underline{\text{\boldmath$y$}}_i, \overline{\text{\boldmath$y$}}_i] = (\text{\boldmath$y$}_i - 1,
\text{\boldmath$y$}_i]$ is observed.
Thus, for a count response vector $\mY = (Y_\text{A}, Y_\text{B}, Y_\text{C})^\top$,
the exact log-likelihood of $\text{\boldmath$y$}_i, i = 1, \dots, N$ in this framework is given by
\begin{equation} \label{eq:exact_ll}
\ell(\text{\boldmath$\theta$}) = \sum_{i = 1}^N \log \left(
\int_{\tilde{\text{\boldmath$h$}}(\underline{\text{\boldmath$y$}}_i)}^{\tilde{\text{\boldmath$h$}}
(\overline{\text{\boldmath$y$}}_i)}
\phi_{\mathbf{0}, \mathbf{\Sigma}} (\tilde{\text{\boldmath$z$}}) \, d\tilde{\text{\boldmath$z$}} \right)
\end{equation}
where $\tilde{\text{\boldmath$h$}}(\underline{\text{\boldmath$y$}}) =
\left(\tilde{h}_\text{A}(\underline{y}_\text{A}), \tilde{h}_\text{B}(\underline{y}_\text{B}),
\tilde{h}_\text{C}(\underline{y}_\text{C}) \right)^\top$,
$\tilde{\text{\boldmath$h$}}(\overline{\text{\boldmath$y$}}) =
\left(\tilde{h}_\text{A}(\overline{y}_\text{A}), \tilde{h}_\text{B}(\overline{y}_\text{B}),
\tilde{h}_\text{C}(\overline{y}_\text{C}) \right)^\top$, and $\phi_{\mathbf{0},
\mathbf{\Sigma}}$ is the density of the trivariate normal distribution with mean zero and
covariance $\mathbf{\Sigma}$.
Maximum-likelihood estimation of the parameters $\text{\boldmath$\theta$}$ in
Equation~\ref{eq:exact_ll} is computationally
extremely challenging, because higher-dimensional normal integrals have to be
evaluated. Moreover, it is not possible to derive analytic expressions for the
scores of the parameters.
For this reason, in Sections~\ref{sec:cont_appr} and \ref{sec:disc_appr}
we introduce two computationally attractive approximations of
the exact log-likelihood $\ell(\text{\boldmath$\theta$})$
to estimate the multi-species count transformation model.
\subsubsection{Continuous approximation} \label{sec:cont_appr}
In the first approximation, all counts are transformed as follows
\begin{equation} \label{eq:ytilde}
\tilde{y} = \begin{cases} y - 0.5 & y \geq 1 \\ y & y = 0. \end{cases}
\end{equation}
That is, this transformation is applied component-wise to a count vector
$\text{\boldmath$y$} = (y_\text{A}, y_\text{B}, y_\text{C})^\top$ and the mid-points of the
intervals $(y_\text{A} - 1, y_\text{A}]$, $(y_\text{B} - 1, y_\text{B}]$, and $(y_\text{C} - 1,
y_\text{C}]$, respectively, are obtained.
Then, because the resulting random vector is no longer a count vector,
a multivariate conditional transformation model is fit maximizing (\ref{eq:loglik_continuous})
as described in \cite{klein2019multivariate}. In the univariate context,
this approximation corresponds to the principle of applying least-squares to
$\log(y + 1)$ for parameter estimation and inference
\citep[\textit{e.g.},~][]{Ives_2015, Dean_Voss_Draguljic_2017,
Gotelli_Ellison_2013, DeFelipe_etal_2019, Mooney_etal_2016}.
The log-likelihood contribution $\ell(\text{\boldmath$\theta$}) = \log(f_\mY (\tilde{\text{\boldmath$y$}} \mid
\text{\boldmath$\theta$}))$ for a transformed datum
$\tilde{\text{\boldmath$y$}}$ is then (with $\phi = \Phi^\prime$):
\begin{eqnarray*}
\ell(\text{\boldmath$\theta$}) &=& \log \left( \phi\left( \tilde{h}_\text{A}(\tilde{y}_{\text{A}}) \right) \right)
+ \log \left( \frac{\partial \tilde{h}_\text{A}(\tilde{y}_{\text{A}})}
{\partial \tilde{y}_\text{A}} \right) + \\
& \quad & \quad \log \left( \phi\left( \tilde{h}_\text{B}(\tilde{y}_{\text{B}})
+ \lambda_{\text{A} \text{B}} \tilde{h}_\text{A}(\tilde{y}_{\text{A}}) \right) \right)
+ \log \left( \frac{\partial \tilde{h}_\text{B}(\tilde{y}_{\text{B}})}
{\partial \tilde{y}_\text{B}} \right) + \\
& \quad & \quad\log \left( \phi\left( \tilde{h}_\text{C}(\tilde{y}_{\text{C}})
+ \lambda_{\text{A} \text{C}} \tilde{h}_\text{A}(\tilde{y}_{\text{A}})
+ \lambda_{\text{B} \text{C}} \tilde{h}_\text{B}(\tilde{y}_{\text{B}}) \right) \right)
+ \log \left( \frac{\partial \tilde{h}_\text{C}(\tilde{y}_{\text{C}})}
{\partial \tilde{y}_\text{C}} \right).
\end{eqnarray*}
Suitably parameterized choices of $\tilde{h}_j$ and $\mathbf{\Lambda}$ can then be
estimated by maximum-likelihood; details are discussed in
Appendix~\ref{sec:app}.
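As a numerical sanity check of this change of variables, the contribution above
must equal the log-density of $\ND(\mathbf{0}, \mathbf{\Sigma})$ with
$\mathbf{\Sigma} = \mathbf{\Lambda}^{-1} \mathbf{\Lambda}^{-\top}$, evaluated
at the transformed datum plus the same Jacobian term, because
$\text{det}(\mathbf{\Lambda}) = 1$. A Python sketch with hypothetical
transformations and $\lambda$ values:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Hypothetical monotone marginal transformations and their derivatives.
h = [lambda y: np.log1p(y), lambda y: 0.5 * y, lambda y: np.sqrt(y + 1.0)]
hp = [lambda y: 1.0 / (1.0 + y), lambda y: 0.5, lambda y: 0.5 / np.sqrt(y + 1.0)]
lam_AB, lam_AC, lam_BC = 0.6, -0.3, 0.4

y_tilde = np.array([3.5, 1.5, 0.0])  # mid-point transformed counts (4, 2, 0)
hA, hB, hC = (h[j](y_tilde[j]) for j in range(3))
jac = sum(np.log(hp[j](y_tilde[j])) for j in range(3))

# Triangular form: three univariate standard normal log-densities.
z = np.array([hA, lam_AB * hA + hB, lam_AC * hA + lam_BC * hB + hC])
ll_triangular = np.sum(norm.logpdf(z)) + jac

# Equivalent multivariate normal form with Sigma = Lambda^{-1} Lambda^{-T}.
Lambda = np.array([[1, 0, 0], [lam_AB, 1, 0], [lam_AC, lam_BC, 1]], float)
Linv = np.linalg.inv(Lambda)
Sigma = Linv @ Linv.T
ll_mvn = multivariate_normal(np.zeros(3), Sigma).logpdf([hA, hB, hC]) + jac
```

The two log-likelihood values coincide, which is the identity exploited when
maximizing the continuous approximation.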
\subsubsection{Discrete approximation} \label{sec:disc_appr}
It is also possible to approximate the integral of Equation~\ref{eq:exact_ll}
directly as follows.
Note that if $\mZ \sim \ND(\mathbf{0}, \mathbf{\Sigma})$ then
$\mathbf{\Lambda} \mZ \sim \ND(\mathbf{0}, \mathbf{\Lambda} \mathbf{\Sigma} \mathbf{\Lambda}^{\top} = \mI)$
with $\text{det}(\mathbf{\Lambda}) = 1$.
A change of variables leads to
\begin{eqnarray} \label{eq:discr_approx}
\exp(\ell(\text{\boldmath$\theta$})) &=&
\int_{\mathbf{\Lambda} \tilde{\text{\boldmath$h$}}(\underline{\text{\boldmath$y$}})}^{\mathbf{\Lambda}
\tilde{\text{\boldmath$h$}}(\overline{\text{\boldmath$y$}})}
\phi_{\mathbf{0}, \mI} (\text{\boldmath$z$}) \, d\text{\boldmath$z$} \notag \\
& \approx &
\left[ \Phi( \tilde{h}_\text{A}(\overline{y}_\text{A})) -
\Phi( \tilde{h}_\text{A}(\underline{y}_\text{A})) \right] \times \\
& & \quad \, \left[ \Phi(\tilde{h}_\text{B}(\overline{y}_\text{B}) + \lambda_{\text{A} \text{B}}
\tilde{h}_\text{A}(\tilde{y}_\text{A})) -
\Phi(\tilde{h}_\text{B}(\underline{y}_\text{B}) + \lambda_{\text{A} \text{B}}
\tilde{h}_\text{A}(\tilde{y}_\text{A})) \right] \times \notag \\
& & \quad \, \Big[ \Phi(\tilde{h}_\text{C}(\overline{y}_\text{C}) +
\lambda_{\text{A} \text{C}} \tilde{h}_\text{A}(\tilde{y}_\text{A}) +
\lambda_{\text{B} \text{C}} \tilde{h}_\text{B}(\tilde{y}_\text{B})) - \notag \\ & & \qquad \quad
\Phi(\tilde{h}_\text{C}(\underline{y}_\text{C}) + \lambda_{\text{A} \text{C}} \tilde{h}_\text{A}(\tilde{y}_\text{A}) +
\lambda_{\text{B} \text{C}} \tilde{h}_\text{B}(\tilde{y}_\text{B})) \Big] \notag
\end{eqnarray}
where $h_\text{A}(\text{\boldmath$y$}), h_\text{B}(\text{\boldmath$y$}), h_\text{C}(\text{\boldmath$y$})$ are defined as
$h_1, h_2, h_3$ in Equation~\ref{eq:mat-vec}.
This approximation relies on a simplifying assumption concerning the integration
limits of the multivariate integral. Technical details are provided in
Appendix~\ref{sec:app}. This approximation to the likelihood mirrors the correspondence between
the exact discrete univariate likelihood~(\ref{eq:count_tm_dens}) and its interval-censored
counterpart.
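The approximate likelihood contribution for a single observation can be
sketched in Python; the transformation and $\lambda$ values are hypothetical
placeholders, and the observation is $\text{\boldmath$y$} = (4, 2, 0)^\top$:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical ingredients: one monotone transformation for all species.
h = lambda y: np.log1p(y)
lam_AB, lam_AC, lam_BC = 0.6, -0.3, 0.4

y = np.array([4.0, 2.0, 0.0])
upper, lower = y, y - 1.0
mid = np.where(y >= 1.0, y - 0.5, y)  # mid-points entering the shift terms

# Product of three univariate interval probabilities, following the
# triangular structure (the lower term vanishes for a zero count).
pA = norm.cdf(h(upper[0])) - (norm.cdf(h(lower[0])) if y[0] >= 1 else 0.0)
sB = lam_AB * h(mid[0])
pB = norm.cdf(h(upper[1]) + sB) - (norm.cdf(h(lower[1]) + sB) if y[1] >= 1 else 0.0)
sC = lam_AC * h(mid[0]) + lam_BC * h(mid[1])
pC = norm.cdf(h(upper[2]) + sC) - (norm.cdf(h(lower[2]) + sC) if y[2] >= 1 else 0.0)

likelihood = pA * pB * pC  # approximate contribution to exp(l(theta))
```

Setting all $\lambda$ values to zero reduces the product to three independent
univariate interval-censored likelihood contributions.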
The quality of the two approximations to the exact likelihood~(Equation~\ref{eq:exact_ll})
for the aquatic bird competition problem is evaluated by a parametric
bootstrap procedure (Appendix~\ref{sec:comp_app}).
\subsection{Parametrization and inference} \label{sec:param_inf}
In this section we discuss the technical aspects of the parameterization of
the transformation functions $\tilde{h}_j = \alpha_j$, the shift functions
$\eta_j$, and the inverse Cholesky factor $\mathbf{\Lambda}$, as well as details of the
implementation and computation of these models.
The univariate transformation
functions $\alpha_j : \mathbb{R}^+ \rightarrow \mathbb{R}$, $j =$ A, B, C, are continuous and
monotonically non-decreasing. They are applied to the largest
integer $\lfloor y \rfloor$ at most as large as $y$ for an arbitrary cut-off point
$y \in \mathbb{R}^+$ \citep{Siegfried_Hothorn_2020}. Following the approach of \cite{Hothorn_Moest_Buehlmann_2017}, the
functions $\alpha_j(y) = \text{\boldmath$a$}_j(y)^\top \text{\boldmath$\vartheta$}_j$ are parameterized in
terms of basis functions $\text{\boldmath$a$}_j : \mathbb{R} \rightarrow \mathbb{R}^P$ and are evaluated only at
integer arguments $y \in \mathbb{N}$. For the models
presented in Section~\ref{sec:empeval}, the $P$-dimensional Bernstein basis,
leading to polynomials in Bernstein form of order $P-1$, was used.
This choice is computationally attractive and monotonicity can be
achieved by imposing the constraint
$\vartheta_{j,1} \leq \dots \leq \vartheta_{j,P}$ on the parameters
$\text{\boldmath$\vartheta$}_j = (\vartheta_{j,1}, \dots, \vartheta_{j,P})^\top \in \mathbb{R}^P$
for species $j$.
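A minimal Python sketch of this parameterization (basis order, support and
coefficient values chosen purely for illustration):

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(y, P, ymax):
    """P Bernstein basis polynomials of order P - 1 at floor(y) / ymax."""
    u = np.clip(np.floor(np.asarray(y, float)) / ymax, 0.0, 1.0)
    k = np.arange(P)
    return comb(P - 1, k) * u[:, None] ** k * (1.0 - u[:, None]) ** (P - 1 - k)

P, ymax = 7, 50
A = bernstein_basis(np.arange(ymax + 1), P, ymax)  # evaluated at integer counts

# Ordered coefficients vartheta_1 <= ... <= vartheta_P imply a monotone
# non-decreasing transformation alpha(y) = a(y)^T vartheta.
vartheta = np.array([-2.0, -1.0, -0.5, 0.0, 0.3, 1.2, 2.5])
alpha = A @ vartheta
```

The basis functions form a partition of unity (each row of the basis matrix
sums to one), and the ordering constraint on the coefficients carries over to
monotonicity of $\alpha$.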
In general, the flexibility of a transformation function $\alpha_j$ can be adjusted
in different ways: first, by choosing the degree of the Bernstein basis appropriately
\citep{Hothorn_2020_JSS}, and second, by allowing the transformation function to depend on both the
counts $y_j$ and the environmental variables $\text{\boldmath$x$}$ through a more sophisticated model
of the form
$\alpha_j(y_j \mid \text{\boldmath$x$}) = \text{\boldmath$a$}_j(y_j)^\top \text{\boldmath$\vartheta$}_j(\text{\boldmath$x$})$.
For model $\text{M-}\mLambda$, the lower
$J \times (J - 1) / 2$ triangular elements of the inverse Cholesky factor $\mathbf{\Lambda}$ are constants. In
the more complex setup of model $\text{M-}\mLambda(\rx)$, each of these lower triangular elements is
formulated as $\lambda_{\tilde{k} k}(\text{\boldmath$x$}) = \tau_{\tilde{k} k} + \text{\boldmath$x$}^\top \text{\boldmath$\zeta$}_{\tilde{k} k}$.
As a consequence, Spearman's rank correlations between the three species are
allowed to vary with the day of the year.
The model formulation and the two approximations of the exact log-likelihood enable
the estimation of all parameters jointly by maximum likelihood. Analytical
expressions for the score functions are available for both approximations
(see Appendix~\ref{sec:app}) and thus standard optimizers can be employed for fast model
inference. The variance-covariance matrix of all model parameters can be obtained from
the numerically evaluated Hessian; corresponding Wald confidence intervals for
selected model parameters are available \citep{klein2019multivariate}.
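The Hessian-based Wald construction can be illustrated on a toy likelihood for
which the observed information is known in closed form; the Poisson model,
data and step size below are illustrative assumptions, not part of the bird
analysis:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.poisson(lam=5.0, size=200)

def negloglik(lam):
    # Poisson negative log-likelihood up to an additive constant.
    return -(np.sum(y) * np.log(lam) - y.size * lam)

lam_hat = y.mean()  # closed-form MLE of the rate

# Numerically evaluated Hessian (second central difference) at the MLE.
eps = 1e-4
hess = (negloglik(lam_hat + eps) - 2 * negloglik(lam_hat)
        + negloglik(lam_hat - eps)) / eps**2
se = np.sqrt(1.0 / hess)                      # Wald standard error
ci = lam_hat + norm.ppf([0.025, 0.975]) * se  # 95% Wald interval
```

For the Poisson rate, the observed information at the MLE is $N / \bar{y}$, so
the numerically obtained standard error can be checked against
$\sqrt{\bar{y} / N}$; in the multi-species models the same construction is
applied to the full parameter vector.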
\subsection{Models for the aquatic birds: in search of competitive interactions} \label{sec:models_birds}
We developed a single model aimed at answering two questions derived from
the hypothesis that the bird species compete for a common and limited resource:
(1) How does the abundance of each species vary over the course of a year?
(2) What is the pairwise correlation among the three species, \textit{i.e.},~ are
large counts of individuals likely to be observed jointly for
pairs of the considered species? Are the
species counts independent, or is a high abundance of one species accompanied by
a low abundance of the other?
We formulated two nested models sharing the same marginal structure
\begin{equation} \label{eq:marginal_flex}
F_{Y_j \mid \mX = \text{\boldmath$x$}}(y_j \mid \text{\boldmath$x$}) =
\Phi( \alpha_j(\lfloor y_j \rfloor) - \eta_j(\text{Year, Day})), \quad j =
\text{A}, \text{B}, \text{C}
\end{equation}
describing the abundance of each species for each day (1 to 365) within each
year of the observation period (2002 to 2016).
Two nested choices for the dependence structure (\ref{eq:lambdas}) were applied:
a multivariate model $\text{M-}\mLambda$ assuming
time-constant Spearman's rank correlations between all three species, and a
multivariate model $\text{M-}\mLambda(\rx)$ that allows the pairwise Spearman's correlations
to change over the course of a year.
The correlations are modeled as a smooth annual function by parametrizing
$\mathbf{\Lambda}(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$ appropriately, details can be found in the Appendix~\ref{sec:app}.
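One concrete parametrization consistent with this description (our illustration; the exact basis is given in Appendix~\ref{sec:app}) expresses each off-diagonal entry of the inverse Cholesky factor through low-order sine--cosine harmonics of the day of the year,
\begin{equation*}
\lambda_{jk}(\text{day}) = \beta_{jk,0} + \sum_{m=1}^{3} \left\{ a_{jk,m} \sin\!\left(\frac{2\pi m \, \text{day}}{365}\right) + b_{jk,m} \cos\!\left(\frac{2\pi m \, \text{day}}{365}\right) \right\}.
\end{equation*}
With three harmonics, each of the three pairs would contribute six parameters beyond the constant model, consistent with the $18$ additional parameters of $\text{M-}\mLambda(\rx)$ reported in Section~\ref{sec:empeval}.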
It is important to note that, because all model parameters were estimated
simultaneously, the marginal models of $\text{M-}\mLambda$ and $\text{M-}\mLambda(\rx)$
implicitly account for correlations among the
three bird species.
Among all models from the literature on species distribution modeling,
we benchmarked our model against the best performing model according to the
review by \cite{norberg2019comprehensive}:
the joint species distribution model Hierarchical Modeling of Species
Communities (Hmsc) \citep{ovaskainen2017make}.
Hmsc was developed to analyze multivariate data from species communities.
After specifying the marginal models (in our case, marginal Poisson models with
log link), the Hmsc model is fitted with Bayesian inference through MCMC sampling.
Hmsc allows the estimation of so-called \emph{species-to-species associations}
by a latent factor approach, that is, including a random effect on the sampling
unit level.
Moreover, it is possible to incorporate covariate, temporal
or spatial dependencies in the random effect, making Hmsc conceptually similar
to model $\text{M-}\mLambda(\rx)$.
\section{Results} \label{sec:empeval}
The latent correlations and Spearman's rank correlations estimated from model $\text{M-}\mLambda$ are
given in Table~\ref{tab:mod3_corr}, together with the 95\% confidence intervals.
The correlations and confidence intervals between
Great Cormorant and
Great Crested Grebe were very similar under the continuous and
discrete approximations. Correlations involving Goosander showed larger
discrepancies, owing to the smaller number of counts for this species.
The estimated time-dependent Spearman's rank correlations from model $\text{M-}\mLambda(\rx)$ are presented in
Figures~\ref{fig:harm_corr_disc} and~\ref{fig:harm_corr_cont} for the discrete
and continuous approximations respectively. All
three pairwise correlations were described by a U-shaped function, with
higher correlations of around $0.5$ in December and January and correlations
close to zero between June and October.
The marginal distributions are given in Figures~\ref{fig:dist_var_marginal_const_disc}
and~\ref{fig:dist_var_marginal_const_cont} for model $\text{M-}\mLambda$,
and in Figure~\ref{fig:dist_var_marginal_var_disc} and
Figure~\ref{fig:dist_var_marginal_var_cont} for model $\text{M-}\mLambda(\rx)$,
for the discrete and the continuous approximations respectively. The annual
pattern was the same as in the other models but discrepancies
between the two approximations were larger.
\begin{figure}
\begin{center}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{legend-1}
\vspace*{-1.8cm}
\end{knitrout}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{mod3_dist_d_const-1}
\end{knitrout}
\caption{Model $\text{M-}\mLambda$, discrete approximation:
univariate marginal distributions conditional on time for the three bird species. The
$x$-axis indicates the time frame when the data were collected. The $y$-axis
indicates the natural logarithm of the counts, augmented by one.}
\label{fig:dist_var_marginal_const_disc}
\end{center}
\end{figure}
\begin{table}
{\centering
{Table 1: Measures of dependence between three bird species from model $\text{M-}\mLambda$}\\ \vspace{.5cm}
\begin{tabular}{lclcl}
& \multicolumn{2}{l}{Continuous approximation} &
\multicolumn{2}{l}{Discrete approximation}
\\
\toprule
$\hat{\lambda}_{\text{A}, \text{B}}$
& $-0.483$
& $[-0.514, -0.452]$
& $-0.467$
& $[-0.500, -0.435]$ \\
$\hat{\lambda}_{\text{A}, \text{C}}$
& $-0.200$
& $[-0.232, -0.168]$
& $-0.202$
& $[-0.241, -0.164]$ \\
$\hat{\lambda}_{\text{B}, \text{C}}$
& $-0.219$
& $[-0.247, -0.190]$
& $-0.244$
& $[-0.284, -0.204]$ \\
\midrule
$\hat{\rho}_{\text{A}, \text{B}}$
& \phantom{$-$}$0.435$
& $[0.411, 0.458]$
& \phantom{$-$}$0.423$
& $[0.398, 0.448]$ \\
$\hat{\rho}_{\text{A}, \text{C}}$
& \phantom{$-$}$0.286$
& $[0.260, 0.310]$
& \phantom{$-$}$0.294$
& $[0.263, 0.324]$ \\
$\hat{\rho}_{\text{B}, \text{C}}$
& \phantom{$-$}$0.309$
& $[0.283, 0.333]$
& \phantom{$-$}$0.330$
& $[0.295, 0.362]$ \\
\midrule
$\hat{\rho}_{\text{A}, \text{B}}^{(S)}$
& \phantom{$-$}$0.419$
& $[0.396, 0.441]$
& \phantom{$-$}$0.407$
& $[0.383, 0.431]$ \\
$\hat{\rho}_{\text{A}, \text{C}}^{(S)}$
& \phantom{$-$}$0.274$
& $[0.249, 0.298]$
& \phantom{$-$}$0.281$
& $[0.252, 0.311]$ \\
$\hat{\rho}_{\text{B}, \text{C}}^{(S)}$
& \phantom{$-$}$0.296$
& $[0.271, 0.320]$
& \phantom{$-$}$0.316$
& $[0.282, 0.348]$ \\
\bottomrule
\end{tabular}\par}
\bigskip
\caption
{Model $\text{M-}\mLambda$: the first three rows of the table show point estimates
for the inverse Cholesky factor $\mathbf{\Lambda}$ and the 95\% Wald confidence intervals.
Rows 4 to 6 show the resulting constant correlations $\rho$ and the 95\%
confidence intervals based on the asymptotic distribution of the parameters of
model $\text{M-}\mLambda$.
The last three rows show the corresponding Spearman's rank correlation $\rho^{(S)}$ with
95\% confidence intervals based on the asymptotic distribution of the parameters of
model $\text{M-}\mLambda$.
A: Great Cormorant, B: Great Crested Grebe, C: Goosander.}
\label{tab:mod3_corr}
\end{table}
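The three blocks of Table~\ref{tab:mod3_corr} are linked deterministically: in the model formulation of \cite{klein2019multivariate}, the latent correlation matrix follows from the inverse Cholesky factor via $\mathbf{\Sigma} = \mathbf{\Lambda}^{-1}\mathbf{\Lambda}^{-\top}$ (rescaled to unit diagonal), and for continuous margins Spearman's rank correlation follows from the latent Pearson correlation as $\rho^{(S)} = \frac{6}{\pi}\arcsin(\rho/2)$. The following sketch (plain Python, with variable names of our choosing) reproduces the continuous-approximation column of the table from the three $\hat{\lambda}$ values:

```python
import math

# Estimated inverse Cholesky factor (continuous approximation, Table 1):
# unit lower triangular with off-diagonal entries for pairs (A,B), (A,C), (B,C).
l21, l31, l32 = -0.483, -0.200, -0.219

# Rows of Lambda^{-1} for a 3x3 unit lower-triangular matrix.
inv = [
    [1.0, 0.0, 0.0],
    [-l21, 1.0, 0.0],
    [l21 * l32 - l31, -l32, 1.0],
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Sigma = Lambda^{-1} Lambda^{-T}, rescaled to unit diagonal.
def corr(i, j):
    return dot(inv[i], inv[j]) / math.sqrt(dot(inv[i], inv[i]) * dot(inv[j], inv[j]))

rho = {("A", "B"): corr(0, 1), ("A", "C"): corr(0, 2), ("B", "C"): corr(1, 2)}

# Gaussian-copula relation, exact for continuous margins.
spearman = {k: 6 / math.pi * math.asin(r / 2) for k, r in rho.items()}
```

Rounded to three decimals, the resulting $\rho$ and $\rho^{(S)}$ values coincide with rows 4 to 9 of the table.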
\begin{figure}
\begin{center}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{mod3_corr_d_-1}
\end{knitrout}
\caption{Model $\text{M-}\mLambda(\rx)$, discrete approximation:
trajectories of the pairwise Spearman's rank correlations $\rho^{(S)}$ across
species over one year and the corresponding 95\% confidence
intervals based on the asymptotic distribution of the parameters.
For comparison, the constant Spearman's rank correlations of model $\text{M-}\mLambda$
(see Table~\ref{tab:mod3_corr}) are plotted in orange with the corresponding
95\% confidence intervals.}
\label{fig:harm_corr_disc}
\end{center}
\end{figure}
Because the two models $\text{M-}\mLambda$ and $\text{M-}\mLambda(\rx)$ are nested, they can be compared in
a likelihood-ratio test. The result is the same for the two approximations,
with strong evidence in favor of the more complex model $\text{M-}\mLambda(\rx)$ over
the simpler model $\text{M-}\mLambda$. For the discrete approximation, the log-likelihoods are
$-42357.4$
for $\text{M-}\mLambda$ and $-42246$ for $\text{M-}\mLambda(\rx)$.
Because the more complex model $\text{M-}\mLambda(\rx)$ has $18$
parameters more than model $\text{M-}\mLambda$,
we compare twice the difference of the log-likelihoods of $\text{M-}\mLambda(\rx)$ and $\text{M-}\mLambda$ with
the upper tail of a chi-squared distribution with 18 degrees of freedom and obtain
strong evidence ($p < 0.0001$) in favor of the more complex model $\text{M-}\mLambda(\rx)$.
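The computation can be retraced directly from the reported quantities (the $0.95$ quantile of $\chi^2_{18}$, approximately $28.87$, is a standard tabulated value):

```python
# Log-likelihoods reported for the discrete approximation.
ll_const, ll_varying = -42357.4, -42246.0

lr_stat = 2 * (ll_varying - ll_const)   # likelihood-ratio statistic, 18 df

# 0.95 quantile of the chi-squared distribution with 18 degrees of freedom.
CHI2_18_95 = 28.87

reject = lr_stat > CHI2_18_95           # far in the tail, hence p < 0.0001
```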
The main motivation for multi-species count transformation models is to identify
interspecies dependencies.
We performed a parametric bootstrap comparison between multi-species count
transformation models and Hmsc: one hundred new data sets were sampled from
a model assuming time-constant correlations between the three aquatic bird species
(model $\text{M-}\mLambda$ fitted to the original data).
For each of these 100 data sets, we compared Spearman's rank correlations estimated
by multi-species count transformation models and residual correlations obtained
from Hmsc to the ground truth, \textit{i.e.},~ the Spearman's rank correlations from model
$\text{M-}\mLambda$ fitted to the original data (Figure~\ref{fig:boxplot_hmsc}).
The same exercise was performed for model $\text{M-}\mLambda(\rx)$.
\begin{figure}
\begin{center}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{hmsc_mcotram_const-1}
\end{knitrout}
\caption{Measures of species-to-species associations for Hmsc
(residual species-to-species association) and \code{mcotram} (Spearman's rank correlation)
obtained from refitting the models on 100 independent data sets simulated from
model $\text{M-}\mLambda$.
A: Great Cormorant, B: Great Crested Grebe, C: Goosander.}
\label{fig:boxplot_hmsc}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}
\includegraphics[width=\maxwidth]{mcotram_var_bootstrap_plot-1}
\end{knitrout}
\caption{Yearly trajectories of Spearman's rank correlations from the models
fitted on the 100 bootstrapped data sets (grey lines) and from the original
model $\text{M-}\mLambda(\rx)$ (red line).}
\label{fig:mcotram_var}
\end{center}
\end{figure}
Multi-species count transformation models detected the true Spearman's correlations
from samples of the original model accurately and with precision.
The uncertainty obtained from the parametric bootstrap was comparable to the
uncertainty reported by the 95\% confidence interval for Spearman's correlation in the
original model (Table~\ref{tab:mod3_corr}). This also indicates correctness of likelihood-based
inference as described in Section~\ref{sec:param_inf}.
Hmsc identified a similar correlation pattern with equally high precision;
however, the model formulation underlying Hmsc does not allow the derivation of
Spearman's correlations for counts, so the residual correlations could not be
expected to be in line with the ground truth in this setting.
For time-varying dependencies, multi-species count transformation models picked up
the signal from the bootstrapped data sets; the uncertainty (Figure~\ref{fig:mcotram_var})
matched the uncertainty
reported by the confidence intervals obtained from the original model
(Figure~\ref{fig:harm_corr_disc}).
Conceptually, a similar analysis can be performed with Hmsc \citep{tikhonov2017using};
however, we did not yet succeed in extracting this information from models fitted
with version 3.0-11 of package \pkg{Hmsc}.
Re-fitting multi-species count transformation models is in general much faster than
re-fitting Hmsc models to the same data.
The computation time for Hmsc depends on parameters for MCMC sampling, which we
defined as in the package's vignette.
On average, fitting the multi-species transformation model
$\text{M-}\mLambda$ to 4955 observations took 15.3 seconds, the more complex model $\text{M-}\mLambda(\rx)$
required 15.7 seconds.
Fitting a Poisson Hmsc model took 424 seconds on average.
\section{Discussion}\label{sec:discussion}
Multi-species count transformation models provide a new perspective on joint
models for species distributions, as they take both the habitat and community
aspects into account. Although the technical details are challenging, we
are confident that at least simple forms of the models will be of value in
ecological modeling. In contrast to JSDMs, \textit{i.e.},~ generalized mixed models in which
the community aspect is represented by an unstructured random residual term,
transformation models do not require strict distributional assumptions but they
still offer interpretability of marginal and joint effects. Pairwise
relationships between species can be understood as correlations on a latent
normal scale, and marginal effects expressed as the means of transformed
counts or as AUCs.
Conceptually, multi-species count transformation models differ from standard
approaches in MSDM and JSDM in two ways. First, we do not make any
assumptions about a parametric distribution of the counts, such as Poisson
or negative binomial. Instead, the distribution is estimated in a
data-driven way. Second, measures of dependence (Pearson's correlation on a
latent scale, or Spearman's rank correlations on the original scale of the
counts) are estimated simultaneously to all other model parameters and can
be computed directly from identifiable quantities estimated in our model.
Most MSDMs and JSDMs do not provide such quantities directly and if they do,
they often resort to the fixed-/random effects dichotomy which inevitably
influences the interpretation of the model parameters. Consequently,
multi-species count transformation models allow a broader class of research
questions to be answered, namely those in which the relation between
different species depends on covariates $\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$}$.
This interesting feature enabling habitat-dependent modeling of pairwise
correlations on a latent scale accommodates spatio-temporal variations in
species interactions in addition to possible spatio-temporal trends in
marginal abundances. In the relatively simple models for aquatic bird
competition presented herein, the marginal abundances of the three
fish-eating bird species were shown to vary systematically over the course
of a year (hypothesis 1, reflecting different habitat requirements
particularly during the breeding period in summer), with moderate positive
correlations determined in winter and early spring, and small and highly
variable correlations during the rest of the year (hypothesis 2). Thus, the
results of the final model indicate that competition is not a driving factor
regulating the abundance of these bird species in the study area. Unlike
the seasonal variation, which is clearly represented in the time-series
plots of the observations, correlations cannot be inferred from the raw data
in the absence of a suitable model.
Multi-species count transformation models are currently limited in three
different ways. (1) We restricted our attention to latent Gaussian models with
$F_Z = \Phi$. From a
marginal perspective, other choices might be more interesting, such as using
the inverse logit link $F_Z = \text{logit}^{-1}$, which allows the
interpretation of the shift terms $\eta_j(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$ as log-odds ratios. Model
estimation is implemented in \pkg{cotram} for a range of alternatives.
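To make the log-odds interpretation explicit: with the inverse logit link, the marginal model reads
$F_{Y_j \mid \text{\boldmath$x$}}(y_j \mid \text{\boldmath$x$}) = \text{logit}^{-1}(\alpha_j(\lfloor y_j \rfloor) - \eta_j(\text{\boldmath$x$}))$, and therefore
\begin{equation*}
\log \frac{\mathbb{P}(Y_j \le y_j \mid \text{\boldmath$x$})}{\mathbb{P}(Y_j > y_j \mid \text{\boldmath$x$})}
= \alpha_j(\lfloor y_j \rfloor) - \eta_j(\text{\boldmath$x$}).
\end{equation*}
A difference $\eta_j(\text{\boldmath$x$}') - \eta_j(\text{\boldmath$x$})$ is thus, simultaneously for every count threshold $y_j$, the log-odds ratio of observing at most $y_j$ individuals under $\text{\boldmath$x$}$ relative to $\text{\boldmath$x$}'$.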
(2) With non-constant parameters $\mathbf{\Lambda}(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$ in the inverse Cholesky
factor, the model is not as lean on assumptions as a model with constant
correlations, \textit{i.e.},~ constant parameters in $\mathbf{\Lambda}$.
Concretely, the parameter estimates resulting from the joint model might depend
on the order in which the marginal models are specified. This effect can be
counterbalanced by accounting for enough flexibility in the marginal models,
as explained in a simple example provided in Appendix~\ref{sec:app}.
(3) Complex models can be specified for the marginal shift terms
$\eta_j(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$ and for components of the inverse Cholesky factor $\mathbf{\Lambda}(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$, for
example by including either nonlinear or lagging effects of explanatory
environmental variables or terms capturing spatio-temporal trends for
correlated observations. However, in such cases the likelihood
approximations proposed here require supplementation with appropriate
smoothness constraints. A mixed version of univariate transformation models
that is also applicable to count transformation models was recently proposed
\citep{Tamasi_Hothorn_2021,Tamasi_Crowther_Puhan_2022} and can handle spatially or otherwise correlated
data. However, the technical challenges associated with mixed
transformation models on the one hand and multi-species count transformation
models on the other clearly demonstrate that significant improvements in
this direction will require substantial research efforts. With respect to
nonlinear habitat effects, a pragmatic approach might be to estimate
univariate SDMs separately for each species, thus allowing a nonlinear
impact of environmental variables. The joint models could then be fitted
with the same functional form as that of the marginal shift terms
$\eta_j(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$.
Based on our empirical results as well as the theoretical and empirical results
presented in \cite{klein2019multivariate} and \cite{Siegfried_Hothorn_2020},
the following can be recommended when using the model: (1) For count
data with large or at least moderate numbers of counts and few zeros,
application of the continuous approximation to multi-species count
transformation models with constant $\mathbf{\Lambda}$ is possible for small and
moderate numbers of species \citep[][report results for up to $J =
10$]{klein2019multivariate}. (2) For small counts and data with many zeros,
the discrete approximation should be employed.
(3) Models with habitat-dependent correlations
$\mathbf{\Lambda}(\text{\boldmath$x$}} \def \mX {\text{\boldmath$X$})$ should be estimated for different permutations of the
species in a sensitivity analysis. Both the ordering of species with
increasing variance and the introduction of habitat-dependencies into
transformation functions as in model $\text{M-}\mLambda(\rx)$ are recommended. In general, a strong
dependency of the parameter estimates on the order of species indicates a severe
lack of model fit.
The approaches discussed herein are quite complex and far removed from those
typically employed for species distribution modeling.
Multivariate transformation models cannot be viewed as extensions of
established models, such as generalized linear mixed models.
Model interpretation requires an understanding of latent, transformed
scales, especially on the marginal scale. Nevertheless, in our
proof-of-concept application, we demonstrated that interesting insights into
the interplay between species can be obtained from a simple
version of the model. The implementation of a similar analysis using
concepts from established MSDMs and JSDMs is likely to be technically much more
difficult. In summary, the extension of single-species distribution models to
models for species communities remains conceptually challenging. While not
offering a one-size-fits-all solution, multivariate transformation models
provide an alternative approach to the problem.
\section*{Computational details}
All computations were performed using \proglang{R} version
4.1.2 \citep{R}.
Multi-species count transformation models are implemented in
the \pkg{cotram} add-on package \citep{pkg:cotram}. The package includes
reproducibility material for the empirical
results presented in Section~\ref{sec:empeval}
and the Appendix~\ref{sec:app}.
Models and figures can be replicated by
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{kframe}
\begin{alltt}
\hlkwd{install.packages}\hlstd{(}\hlstr{"cotram"}\hlstd{)}
\hlkwd{library}\hlstd{(}\hlstr{"cotram"}\hlstd{)}
\hlkwd{demo}\hlstd{(}\hlstr{"aquabirds"}\hlstd{,} \hlkwc{package} \hlstd{=} \hlstr{"cotram"}\hlstd{)}
\end{alltt}
\end{kframe}
\end{knitrout}
The continuous approximation (Section~\ref{sec:cont_appr})
relies on
implementation of the multivariate conditional transformation models in
package \pkg{tram} \citep{pkg:tram}.
The competitor model Hmsc was fitted with the package
\pkg{Hmsc} \citep{Hmsc_pkg}.
\section*{Acknowledgments}
LB and TH developed the models and derived the discrete approximation to the
likelihood. LB wrote the \code{mcotram} function in the \pkg{cotram}
package implementing multi-species count transformation
models. RB contributed the aquatic bird model system. All authors analyzed and
interpreted this data, drafted, and finally revised the manuscript.
LB and TH acknowledge financial support by
Schweizerischer Nationalfonds (grant number 200021\_184603).
The authors thank Wendy Ran for improving the language.
LB thanks the maintainer of the \pkg{Hmsc} package for
support in applying the package to the aquatic bird problem.
\singlespace
\bibliographystyle{natbib}
\section{INTRODUCTION}
Learning to manipulate objects is a fundamental problem in RL and robotics.
An end-to-end learning approach can broaden the range of tasks within reach of future intelligent robots.
Recently, researchers have shown an increased interest in visual affordance~\cite{https://doi.org/10.48550/arxiv.2203.00352, mandikal2020graff,Mo_2021_ICCV, wang2021adaafford, wu2022vatmart}, \emph{i.e.}, a task-specific prior representation of objects. Such representations provide the agents with semantic information of objects, allowing better performance of manipulation.
\begin{figure}[h]
\centering
\includegraphics[trim=150 155 265 80,clip, scale=0.7]{figures/teaser.pdf}
\caption{\textbf{Affordance examples of different manipulation tasks.} (a-b): Agent-to-object affordance map. (c): Agent-to-object and object-to-object affordance map. (d): Dual agent-to-object affordance map.}
\label{fig:teaser}
\vspace{-0.3cm}
\end{figure}
The existing affordance methods for manipulation have two training stages~\cite{Mo_2021_ICCV, wu2022vatmart, wang2021adaafford, zhao2022dualafford}. For example, VAT-Mart~\cite{wu2022vatmart} first trains the affordance map with data collected by an RL agent driven by curiosity, and then fine-tunes both the affordance map and the RL agent. In Where2act~\cite{Mo_2021_ICCV} and many other works~\cite{mo2021o2oafford,wang2021adaafford,zhao2022dualafford}, affordance is associated with a corresponding primitive action for each task, such as pushing and pulling.
Some recent works~\cite{borja2022affordance, wu2022learning} learn affordance through human demonstration.
A significant drawback of those two-stage methods, which first train the affordance map and then propose action sequences based on the learned affordance, is that the success rate of interaction depends heavily on the accuracy of the learned affordance. Any deviation in the affordance predictions significantly reduces task performance.
In this paper, we investigate learning affordance along with RL in an end-to-end fashion by using contact frequency to represent the affordance. Therefore the affordance is not associated with a specific primitive action but rather the contact information from past manipulation experiences of RL training.
In our method, the RL algorithm learns to utilize visual affordance generated from contact information to find the most suitable position for interaction. We also incorporate visual affordance in reward signals to encourage the RL agent to focus on points of higher likelihood.
The advantages of end-to-end affordance learning are two-fold:
1) affordance can inform the agent where to act as an additional observation and can be incorporated into reward signals to improve the manipulation policy;
2) learning affordance and manipulation policy simultaneously, without human demonstration or a dedicated data collecting process, simplifies the learning pipeline and migrates easily to other tasks. Additionally, it helps the affordance and the manipulation policy to adapt to each other, thus producing a more robust affordance representation.
Using contact information as affordance naturally supports multi-stage tasks, such as picking up an object and then placing it in a proper place, as well as multi-agent tasks, such as pushing a chair with two robotic arms~\cite{borja2022affordance,lobbezoo2021reinforcement,vyas2021robotic}. These two types of tasks are difficult for two-stage affordance methods~\cite{Mo_2021_ICCV,wu2022vatmart,borja2022affordance} because they need a different pre-defined data collection process for each human-defined primitive action. Also, by unifying all interactions as contacts, our method can effectively represent both agent-to-object (A2O) and object-to-object (O2O) interactions, which is hard for other methods.
To test if our method can boost visual-based RL in robotic manipulation,
we conducted experiments on eight representative robot tasks, including articulated object manipulation, object pick-and-place and dual arm collaboration tasks. The results showed that our method outperformed all the baselines, including those of RL and the current two-stage affordance methods, and can successfully transfer to the real world.
To the best of our knowledge, we are the first to investigate end-to-end affordance learning for robotic manipulation. Our method can be integrated into visual-based RL to support multi-stage tasks and multi-agent tasks without additional annotations or demonstrations.
\section{Related Work}
\label{abs}
\subsection{Robotic Manipulation Policy Learning}
The recent simulators and benchmarks have boosted the development of manipulation policy learning methods~\cite{DBLP:journals/corr/abs-2003-08515, mu2021maniskill, DBLP:journals/corr/abs-1910-10897, bi-dexhands}. For rigid object manipulation, there are already robust algorithms handling tasks such as grasping~\cite{suctionnet,DBLP:journals/corr/abs-2101-01132,borja2022affordance}, planar pushing~\cite{Li2018PushNetDP,DBLP:journals/corr/YuBFR16} and object hanging~\cite{you2021omnihang}. However, it is yet difficult to manipulate articulated objects with multiple parts despite various attempts to approach this problem from different perspectives.
For example, UMPNet~\cite{Xu2022UMPNetUM} and VAT-Mart~\cite{wu2022vatmart} utilized visual observation to directly propose action sequence, while some other studies~\cite{pmlr-v100-abbatematteo20a,DBLP:journals/corr/abs-1907-09014,eisner2022flowbot3d} achieved robust and adaptive control through model prediction. The multi-stage and multi-agent manipulation settings are also challenging for current methods~\cite{zhu2021hierarchical,mandlekar2020learning,bi-dexhands}.
\subsection{Visual Actionable Affordance Learning}
Till now, several studies have demonstrated the power of affordance representation on manipulation~\cite{Mo_2021_ICCV,wu2022vatmart}, grasping~\cite{mandikal2020graff,lenz2015deep,borja2022affordance,wu2022learning}, scene classification~\cite{dixit2015scene,zhang2014learning}, scene understanding~\cite{fowler2018human,ye2017can} and object detection~\cite{do2018affordancenet}.
The semantic information in affordance is instructive for manipulation.
Some prior affordance learning processes for manipulation, such as Where2Act~\cite{Mo_2021_ICCV}, VAT-Mart~\cite{wu2022vatmart}, AdaAfford~\cite{wang2021adaafford} and VAPO~\cite{borja2022affordance}, have two training stages. Specifically, they need to first collect interacting data to pretrain the affordance, and then train the policy based on the affordance.
Methods which train affordance and policy simultaneously~\cite{https://doi.org/10.48550/arxiv.2203.00352, mandikal2020graff, nagarajan2019grounded}, however, rely on human demonstration for affordance learning.
Unlike them, our method requires neither pre-defined data collection process for different primitive actions\,/\,tasks nor any additional human annotations.
\subsection{Comparison with Related Works}
The related works mentioned above studied robotic manipulation in different problem settings, differing, for example, in the required observations and annotations.
It is hard to compare these works directly given their distinct settings.
Specifically,
in the door-opening task, Maniskill~\cite{mu2021maniskill} utilizes expert demonstrations for imitation learning.
However, expert demonstrations are difficult to obtain since they are usually collected by humans.
Affordance methods, such as Where2Act~\cite{Mo_2021_ICCV}, VAT-Mart~\cite{wu2022vatmart}, and VAPO~\cite{borja2022affordance}, learn the affordance prior to policy training and are not end-to-end. They also output gripper poses as actions, which in reality carry no guarantee that the gripper can reach the proposed position. Additionally, the related affordance studies mentioned here are designed for single-stage, single-agent tasks, such as opening a door or grasping an object, and offer no guarantee for multi-stage or multi-agent tasks. In contrast, using contact information for affordance naturally allows an RL policy to handle multi-stage tasks, such as picking up an object and placing it in a proper place, and multi-agent tasks, such as pushing a chair collaboratively with two robotic arms.
\begin{table}[t!]
\caption{Comparison between our work and related works.}
\begin{tabular}{c|p{0.652cm}<{\centering}|p{0.652cm}<{\centering}|p{0.652cm}<{\centering}|p{0.652cm}<{\centering}|p{0.652cm}<{\centering}|p{0.652cm}<{\centering} }
\toprule
& W2A & VAT & MSkill & VAPO & Hang & \textbf{Ours } \\ \hline
No Demo & \Checkmark & \Checkmark & & & \Checkmark & \CheckmarkBold \\
No Full Obs & \Checkmark & \Checkmark & & \Checkmark & \Checkmark & \CheckmarkBold \\
End-to-End & & &\Checkmark & & & \CheckmarkBold \\
Multi-Stage & & & & & & \CheckmarkBold \\
Multi-Agent & & &\Checkmark & & & \CheckmarkBold \\
\bottomrule
\end{tabular}
\label{table:compare}
\vspace{-0.3cm}
\end{table}
Table~\ref{table:compare} compares our method with five representative related works discussed above.
Listed works are Where2Act (W2A)~\cite{Mo_2021_ICCV}, VAT-Mart (VAT)~\cite{wu2022vatmart}, Maniskill (MSkill)~\cite{mu2021maniskill}, VAPO~\cite{borja2022affordance} and OmniHang (Hang)~\cite{you2021omnihang}.
"\textbf{No Demo}" means the method does not need any expert demonstrations, such as human-collected trajectories, pre-defined primitive actions and human-designed interaction poses. "\textbf{No Full Obs}" indicates the method does not need state observations of objects, such as the coordinates of a door handle, since the accurate state of an object is difficult to obtain in the real world.
"\textbf{End-to-End}" signifies the method trains the policy in an end-to-end fashion, \emph{i.e.}, no multiple training stages are involved and the actions of the policy can be directly applied to agents.
"\textbf{Multi-Stage}" indicates the method
can complete multi-stage tasks, in which the agent needs to finish multiple dependent tasks sequentially. "\textbf{Multi-Agent}" suggests the method can be adapted to multi-agent tasks, where agents need to cooperate with one another to finish the work.
\section{Methods}
\begin{figure*}[t]
\centering
\includegraphics[trim=183.5 155 168 115,clip, scale=1.37]{figures/pipeline.pdf}
\caption{\textbf{Training Pipeline of End-to-End Affordance Learning.} Our pipeline contains two main modules: \textit{Manipulation Module} ($MA$ Module) generating interaction trajectories and \textit{Visual Affordance Module} ($VA$ Module) learning to generate per-point affordance map $M$ based on the real-time point cloud.
The \textit{Contact Predictor} (CP), shared across two modules, serves as a bridge between them: 1) $MA$ Module uses the affordance map (indicated by the blue arrow) and \textit{Max-affordance Point Observation} ($MPO$) (indicated by the upper red arrow) predicted by the CP as a part of the input observation. A \textit{Max-affordance Point Reward} ($MPR$) feedback (indicated by the lower red arrow) is also incorporated in training $MA$ Module; 2) $MA$ Module maintains a $\textit{Contact Buffer}$ ($CB$) by collecting collision information and generating \textit{Dynamic Ground Truth} ($
DGT$) (indicated by the orange arrow), where $VA$ Module uses the $DGT$ as the target for training $CP$.}
\label{fig:pipeline}
\vspace{-0.5cm}
\end{figure*}
\subsection{Method Overview}
Visual-based RL is increasingly valued on robotic manipulation tasks, especially those requiring the agent to manipulate different objects with a single policy. Meanwhile, recent studies~\cite{srinivas2020curl,stooke2021decoupling,wu2022learning} identified the difficulty of
learning observation encoders by RL from high-dimensional inputs such as point clouds and images. In our framework, we tackled this critical problem by exploiting underlying information through a process called "Contact Prediction".
In manipulation settings, contact is the fundamental way humans interact with an object. We believe that physical contact positions during interactions reflect crucial semantic information about the object (\emph{e.g.}, a human grasps a handle to open a door because the handle provides the position to apply force).
We proposed a novel end-to-end RL learning framework for manipulating 3D objects. As shown in Fig.~\ref{fig:pipeline}, our framework is comprised of two parts. 1) $\textit{Manipulation Module}$ ($MA$ Module) is a RL framework which uses the affordance map predicted by a \textit{Contact Predictor} ($CP$) as an additional observation and reward signal; 2) $\textit{Visual Affordance Module}$ ($VA$ Module) is a per-point scoring network, which uses
the contact positions collected from RL training process as the \textit{Dynamic Ground Truth} ($DGT$) to indicate the position of interaction.
Concretely, at every time-step $t$, the $MA$ Module
outputs an action $a_t$
based on
the robotic arm state $s_t$ (\emph{i.e.,} the angle and angular velocity of each joint) and the affordance map $M_t$ predicted by the $VA$ Module.
After each time-step $t$, the contact position in RL training is inserted into the $\textit{Contact Buffer}$ ($CB$).
Every $k$ time-steps, we aggregate the data in the $CB$ to generate the per-point score as the $DGT$ used to update the $VA$ Module.
\begin{algorithm}[t]
\caption{End-to-End Affordance Learning.}\label{alg:DERL_with_arbitrary_policies}
\begin{algorithmic}
\Require $E$: the environment, $CB$: current contact buffer, $RL$: RL pipeline, $i$: current timestep.
\Ensure $a$: an action generated by RL pipeline
\State $c \gets collectContact(E, RL)$
\State $CB \gets insert(CB, c)$
\State $PC \gets getPointCloud(E)$\Comment{Point Cloud}
\If { $i \% k = 0$}\Comment{Update CP every k timesteps}
\State $DGT \gets getMap(PC, CB)$\Comment{Dynamic Ground Truth}
\State $CP \gets update(CP, DGT)$\Comment{Update CP network}
\EndIf
\State $M \gets CP(PC)$\Comment{Affordance map}
\State $RL \gets train(RL, E, M)$\Comment{Update RL network}
\State $a \gets RL(E, M)$\Comment{The action}
\State \textbf{return} $a$
\end{algorithmic}
\end{algorithm}
\subsection{Visual Affordance Module: \emph{Contact as Prior}}
During robotic manipulation, physical contacts naturally happen between agent and object, or between object and object. As contacts do not relate to any human-defined primitive action such as pull or push, the contact position is a general representation, providing a visual prior for manipulation.
The RL training pipeline in the Manipulation Module ($MA$ Module) continuously interacts with the environment to collect 1) the partial point cloud observation $\mathcal{P}$, and 2) the contact positions in the object coordinate frame.
Based on this information, we measure how likely a contact between agent and object (A2O) or between object and object (O2O) is to happen, using the per-point contact \textit{frequency} as the affordance during the current RL training.
The Visual Affordance Module ($VA$ Module) then learns to predict this per-point \textit{frequency}. The training details of the $VA$ Module are as follows.
\textbf{Input:}
Following the prior studies~\cite{wang2021adaafford, wu2022vatmart, Mo_2021_ICCV}, the input for $VA$ Module contains a partial point cloud observation $\mathcal{P}$.
\textbf{Output:}
The output of the $VA$ Module is a per-point affordance map $\textit{M}$, with one score for each point of the input. The map contains both A2O affordance and O2O affordance.
\textbf{Network Architecture:}
The prediction is completed by a \textit{Contact Predictor} ($CP$) that uses a PointNet++~\cite{qi2017pointnetplusplus} to extract a per-point feature $f\in \mathbb{R}^{128}$ from the point cloud observation $\mathcal{P}$; the feature $f$ is then fed through a Multi-Layer Perceptron (MLP) to predict the per-point actionable affordance~\cite{Mo_2021_ICCV}.
\textbf{Dynamic Ground Truth:}
To connect the RL pipeline in the $MA$ Module with the $VA$ Module, we use a \textit{Contact Buffer} $CB$ to keep $l$ records of historical contact points and to compute the $DGT$.
Specifically, each object in the training set has a corresponding $CB$ that records contact positions on the object. To maintain the buffer size, the buffer randomly evicts one record whenever a new contact event is inserted.
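A minimal sketch of such a buffer, assuming uniform random eviction once the capacity $l$ is reached (the class and method names are ours):

```python
import random

class ContactBuffer:
    """Fixed-size buffer of contact positions for a single object.

    Keeps at most `capacity` (the paper's l) records; when full, a
    uniformly random record is evicted before the new one is inserted.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []          # each record: a 3-D contact position

    def insert(self, contact_pos):
        if len(self.records) >= self.capacity:
            # random eviction keeps the buffer an (approximate) sample
            # of the whole contact history, not just the newest events
            self.records.pop(random.randrange(len(self.records)))
        self.records.append(contact_pos)
```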
To provide the training ground truth for $CP$, we compute the $DGT$ by first counting the number of contacts within radius $r$ of each point on the object point cloud, and then applying normalization to obtain the \textit{Dynamic Ground Truth} $DGT$.
The normalization is as follows:
\begin{equation}
DGT_t^i(p)=\frac{\sum_{q\in CB_t^i}I(|p-q|_2<r)}{\max_{p'}\sum_{q\in CB_t^i}I(|p'-q|_2<r) + \epsilon} ,
\end{equation}
where $DGT_t^i$ indicates the \textit{Dynamic Ground Truth} for object $i$ at time-step $t$, $CB_t^i$ is the corresponding \textit{Contact Buffer}.
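The normalization above can be computed directly from the buffer contents; a NumPy sketch follows (the function name and array conventions are ours):

```python
import numpy as np

def dynamic_ground_truth(points, buffer_records, r, eps=1e-8):
    """Per-point DGT: count buffered contacts within radius r of each
    cloud point, then divide by the maximum count plus eps (the
    normalization above), giving scores in [0, 1)."""
    points = np.asarray(points, dtype=float)            # (N, 3) cloud
    contacts = np.asarray(buffer_records, dtype=float)  # (L, 3) buffer
    # pairwise distances between cloud points and buffered contacts
    dists = np.linalg.norm(points[:, None, :] - contacts[None, :, :],
                           axis=-1)                     # (N, L)
    counts = (dists < r).sum(axis=1).astype(float)      # numerator
    return counts / (counts.max() + eps)                # normalized map
```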
\textbf{Training:}
The $CP$ is updated with $DGT^i_t$ as below:
\begin{equation}
\textit{CP}_t^*=\argmin_{\textit{CP}}\sum_i sr_t^i\sum_{p\in \mathcal{P}^i}\left|\left|\textit{CP}(p|\mathcal{P}^i)-DGT_t^i(p)\right|\right|_2 ,
\end{equation}
where $sr_t^i$ is the current manipulation success rate on object $i$, $\mathcal{P}^i$ is the point cloud of the $i$-th object, and $\textit{CP}_t^*$ is the optimal $\textit{CP}$.
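For scalar per-point scores the per-point L2 term reduces to an absolute difference, so the objective can be sketched as a success-rate-weighted sum over points and objects (our reading of the equation, with the sum over points taken outside the norm):

```python
import numpy as np

def cp_loss(pred_maps, dgt_maps, success_rates):
    """Success-rate-weighted per-point loss over a batch of objects.

    pred_maps[i] and dgt_maps[i] hold the predicted affordance scores and
    the DGT for object i; success_rates[i] is sr_t^i. Objects on which
    the policy currently succeeds more often contribute more to the loss.
    """
    total = 0.0
    for pred, dgt, sr in zip(pred_maps, dgt_maps, success_rates):
        diffs = np.abs(np.asarray(pred, float) - np.asarray(dgt, float))
        total += sr * diffs.sum()    # per-point |CP(p) - DGT(p)|, summed
    return total
```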
\subsection{Manipulation Module: \emph{Affordance as Guidance}}
\textit{Manipulation Module} ($MA$ Module) is an RL framework able to learn to manipulate objects from scratch. Different from previous methods~\cite{wu2022vatmart, Mo_2021_ICCV,mu2021maniskill}, our $MA$ Module takes advantage of both the reward and observation generated by the $VA$ Module.
\textbf{Input:} The input for the $MA$ Module includes: 1) a point cloud $\mathcal{P}$ of the real-time environment~\cite{Mo_2021_ICCV,wu2022vatmart}; 2) an affordance map $M$ generated by the $VA$ Module; 3) the state $s$ of the robotic arm, consisting of the position, velocity and angle of each of its joints; 4) a state-based \textit{Max-affordance Point Observation} ($MPO$), which indicates the point with the maximum affordance score on $\mathcal{P}$.
\textbf{Output:} The output of the $MA$ Module is an action $a$, which is then executed by the robotic arm. In our setting, the RL policy controls each joint of the robotic arm directly.
\textbf{Reward from Affordance:} We introduce the \textit{Max-affordance Point Reward} ($MPR$) into our pipeline: the point on the point cloud with the maximum affordance score predicted by the $VA$ Module is selected as guidance for learning the $MA$ Module. We use the distance between the robot end-effector and this selected point to compute an additional reward in the RL process. We found that this reward from affordance benefits RL training and thus improves the overall performance.
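A sketch of the MPO/MPR computation follows. The text states only that the distance to the max-affordance point is used, so the negative-distance reward shaping and the \texttt{scale} factor below are our assumptions:

```python
import numpy as np

def max_affordance_point(points, affordance_map):
    """The point with the highest predicted affordance score (the MPO)."""
    points = np.asarray(points, dtype=float)
    return points[int(np.argmax(affordance_map))]

def mpr_reward(ee_pos, points, affordance_map, scale=1.0):
    """Illustrative MPR: reward the end-effector for approaching the
    max-affordance point (closer -> larger reward, maximum 0)."""
    target = max_affordance_point(points, affordance_map)
    dist = np.linalg.norm(np.asarray(ee_pos, dtype=float) - target)
    return -scale * float(dist)
```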
\textbf{Network Architecture:}
The policy of the $MA$ Module is a neural network $\pi_\theta$ with learnable parameters $\theta$. The network consists of a PointNet~\cite{DBLP:journals/corr/QiSMG16} and an MLP. The PointNet extracts a feature $f\in \mathbb{R}^{128}$ from the point cloud $\mathcal{P}$, the affordance map $M$ and additional masks $m$. The extracted feature $f$ is then concatenated with $s$ and fed to the MLP to obtain actions.
\textbf{Training:}
We use Proximal Policy Optimization (PPO) algorithm~\cite{DBLP:journals/corr/SchulmanWDRK17} to train the $MA$ Module.
To improve training efficiency by exploiting the high parallelism of our simulator, we deploy $k$ different objects in the simulator; each object is replicated $n$ times and assigned one or two robotic arms. Hence, there are a total of $k \times n$ environments, each with a robotic arm (or two robotic arms in our multi-agent tasks) interacting with an object, as shown in Fig.~\ref{fig:map}.
\section{Experiment}
\label{exp}
\subsection{Task Description}
To evaluate our method, we designed three types of manipulation tasks: single-stage, multi-stage and multi-agent.
In all tasks, one or two robotic arms are required to complete a specific manipulation task on different objects.
The first type of tasks are single-stage manipulation tasks, as follows:
\textbf{Close Door:}
A door is initially open to a specific angle.
The agent needs to close the door completely. We increase the difficulty of this task by applying an additional force on the door that attempts to return it to the initial position, and by doubling the friction of the hinge.
\textbf{Open Door:}
A door is initially closed.
The agent needs to open the door to a specific angle.
This task tests whether the agent learns to leverage key parts like the handle to open the door, which is challenging.
\textbf{Push Drawer:}
A drawer is initially open to a specific distance.
Similar to \textit{close door}, the agent needs to close the drawer on a cabinet completely.
\textbf{Pull Drawer:} A drawer is initially closed. Similar to \textit{open door}, the agent needs to open the drawer to a specific distance.
\textbf{Push Stapler:} A stapler is on the desk, initially open. The agent needs to push on the stapler and close it.
\textbf{Lift Pot Lid:} A pot is on the floor with its lid on. The agent needs to lift the lid.
To show the agent can learn a policy in a multi-stage task, we use the following pick-and-place task:
\textbf{Pick and Place:}
An object should be picked up and then placed on a table that already has several random objects on it; both the table and the objects are randomly selected from the given datasets. The agent needs to place the object stably on the table without collision.
To show our method can be generalized to multi-agent settings, we use the following dual-arm-push task:
\textbf{Dual Arm Push:}
Two robotic arms need to be controlled to push a chair to a specific distance while preventing the chair from falling over.
To make the agent better adapt to the environment, we add a movable base to the arm, allowing the arm to move horizontally within a specific range.
The reward designs and other details are listed on our website.
\begin{figure*}[t]
\centering
\includegraphics[trim=52 39 10 5,clip, scale=0.54]{figures/main_figure.pdf}
\caption{\textbf{Experiment Settings and Affordance Learning Visualization.} Top: the tasks settings in simulators. Middle: the change in affordance maps during end-to-end training and the final affordance map examples. Bottom: the real-world experiments.}
\label{fig:map}
\end{figure*}
\subsection{Dataset and Simulator}
\label{dataset}
We performed our experiments using the Isaac Gym simulator~\cite{DBLP:journals/corr/abs-2108-10470}.
We used Franka Panda robot arm as the agent for all tasks.
Our training and testing data are subsets of the PartNet-Mobility dataset~\cite{chang2015shapenet} and the VAPO dataset~\cite{borja2022affordance}.
For tasks \textbf{Close Door} and \textbf{Open Door}, we divided the objects with door handles in the StorageFurniture category into four subcategories: \textit{one door left}, \textit{one door right}, \textit{two door left} and \textit{two door right}.
For tasks \textbf{Pull Drawer} and \textbf{Push Drawer}, we divided the objects with drawers in the StorageFurniture category into two subcategories: \textit{drawer without door} and \textit{drawer with door}.
For tasks \textbf{Push Stapler} and \textbf{Lift Pot Lid}, we chose all \textit{Stapler} and \textit{Pot} from PartNet-Mobility dataset.
For task \textbf{Pick and Place}, we chose three representative categories of objects from the VAPO dataset to pick. We also selected four types of tables: \textit{Round Table}, \textit{Triangle Table}, \textit{Square Table} and \textit{Irregular Table}; three daily items from the PartNet-Mobility dataset were placed randomly on each table.
For task \textbf{Dual Arm Push}, we chose 60 \textit{Chairs} from PartNet-Mobility dataset.
\subsection{Baselines and Ablations}
We compared our method with seven baselines:
\begin{itemize}
\item Where2act~\cite{Mo_2021_ICCV}: the original method only generates single-stage interaction proposals. To use this method as a baseline in our tasks, we implemented a multi-stage Where2act baseline (up to six steps). The object is gradually altered by pushing or pulling interactions produced by Where2act until the task is completed or the maximum number of steps has been taken. Unlike our own setting, this baseline uses a flying gripper instead of a robotic arm.
\item VAT-Mart~\cite{wu2022vatmart}:
We followed the implementation of paper~\cite{wu2022vatmart}. Similar to Where2act, we implemented this method in our environment as a baseline with a flying gripper.
\item RL: we used a point cloud based PPO as our baseline.
\item RL+Where2act: we replaced the Contact Predictor in our method with a pre-trained Where2act model that can output a per-point actionable score.
The parameters of the Where2act model are frozen when training the $MA$ Module.
\item RL+O2OAfford and RL+O2OAfford+Where2act: Similar to RL+Where2act, we replaced the O2O affordance map in our method with the map produced by a pre-trained O2OAfford~\cite{mo2021o2oafford} model.
\item MAPPO: we used a point-cloud-based multi-agent RL (MARL) algorithm, MAPPO~\cite{yu2021surprising}, as our baseline.
\item Multi-Task RL~\cite{DBLP:journals/corr/SchulmanWDRK17}: we adapted PPO to the multi-task setting by providing a one-hot task ID as input. To make this method comparable on the test set, both the test set and the training set were used in the training process, so this is an oracle baseline.
\end{itemize}
\begin{table*}[]
\centering
\setlength\tabcolsep{5.4pt}
\caption{\textbf{Quantitative results of single-stage tasks. (More results on our website.)}}
\label{table:ablation}
\label{table:asr}
\begin{tabular}{c|cccc|cccc|cccc|cccc}
\toprule
\multirow{3}{*}{\diagbox{Methods}{Datasets}} & \multicolumn{4}{c|}{Open Door} & \multicolumn{4}{c|}{Pull Drawer} & \multicolumn{4}{c|}{Push Stapler} & \multicolumn{4}{c}{Open Pot Lid} \\ \cline{2-17}
& \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c|}{MP} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c|}{MP} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c|}{MP} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c}{MP} \\
& train & \multicolumn{1}{c|}{test} & train & test & train & \multicolumn{1}{c|}{test} & train & test & train & \multicolumn{1}{c|}{test} & train & test & train & \multicolumn{1}{c|}{test} & train & test \\ \hline
Where2act &22.8 & \multicolumn{1}{c|}{14.1} &6.8 &8.3 &19.0 & \multicolumn{1}{c|}{12.9} &2.3 &0.0 & 16.4 & \multicolumn{1}{c|}{14.4} & 13.0 & 13.0 & 10.5 & \multicolumn{1}{c|}{5.4} & 8.7 & 4.3 \\
VAT-Mart &23.2 & \multicolumn{1}{c|}{21.9} &31.8 &33.3 &5.5 & \multicolumn{1}{c|}{5.1} &0.0 &0.0 & 21.9 & \multicolumn{1}{c|}{20.9} & 17.4 & 13.0 & 27.4 & \multicolumn{1}{c|}{21.5} & 17.4 & 17.4\\
Multi-task RL & 18.8 & \multicolumn{1}{c|}{9.2}& 11.4 & 5.0 & 0.1 & \multicolumn{1}{c|}{2.4} & 0.0 & 2.8 & 34.9 & \multicolumn{1}{c|}{30.2} & 30.4 & 26.1 & 35.2 & \multicolumn{1}{c|}{32.6} &21.7 &17.4 \\
RL & 21.5 & \multicolumn{1}{c|}{5.5}& 22.7 & 0.0 & 23.1 & \multicolumn{1}{c|}{22.4} & 19.6 & 19.5 &45.5 & \multicolumn{1}{c|}{40.6}
& 34.8 & 30.4 & 32.5 & \multicolumn{1}{c|}{28.6} & 21.7 & 21.7 \\
RL+Where2act & 20.5 & \multicolumn{1}{c|}{8.0}& 19.3 & 9.4 & 25.2 & \multicolumn{1}{c|}{22.2} & 24.4 & 21.9 &48.9 & \multicolumn{1}{c|}{45.2}
& 39.1 & 34.8 & 38.2 & \multicolumn{1}{c|}{30.6} & 26.1 & 21.7 \\\hdashline[1pt/1pt]
\textbf{Ours}& \textbf{52.9} & \multicolumn{1}{c|}{\textbf{32.6}} & \textbf{61.4} & \textbf{41.7} & 59.7 & \multicolumn{1}{c|}{\textbf{58.6}} & 62.8 &\textbf{63.3} & \textbf{69.5} & \multicolumn{1}{c|}{\textbf{53.2}} & \textbf{47.8} & \textbf{39.1} & \textbf{49.5} & \multicolumn{1}{c|}{\textbf{44.6}} & \textbf{34.8} & \textbf{30.4} \\
\hdashline[1pt/1pt]
Ours w/o MPO & 48.0 & \multicolumn{1}{l|}{23.8} & 50.0 & 16.7 & 41.9 & \multicolumn{1}{l|}{42.5} & 38.6 & 43.8 & 60.6 & \multicolumn{1}{l|}{52.5} & 43.5 & \textbf{39.1} & 44.2 & \multicolumn{1}{l|}{40.7} & \textbf{34.8} &\textbf{30.4} \\
Ours w/o MPR &28.2 & \multicolumn{1}{l|}{8.4} &29.5 &8.3 &\textbf{62.3} & \multicolumn{1}{l|}{44.0} &\textbf{65.9} &43.8 & 50.8 & \multicolumn{1}{l|}{39.9} &39.1 & 30.4 & 44.8 & \multicolumn{1}{l|}{40.1} & 30.4 & 26.1\\
Ours w/o E2E & 21.2 & \multicolumn{1}{l|}{12.4} &20.5 &8.3 & 57.7 & \multicolumn{1}{l|}{57.3} & 61.1 & 61.7 & 40.2 & \multicolumn{1}{l|}{36.6} & 39.1 & 34.8 & 32.1 &\multicolumn{1}{l|}{30.6} & 30.4 & 26.1\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[]
\begin{minipage}[t]{.49\linewidth}
\centering
\caption{\textbf{Quantitative results of Pick-and-Place.}}
\vspace{-0.2cm}
\label{table:pap-asrmr}
\label{table:pap-ablation}
\begin{tabular}{c|cc|cc}
\toprule
\multirow{2}{*}{\diagbox{Methods}{Metrics}} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c }{MP} \\
& train & test & train & test \\
\hline
RL &25.2 & 22.1 & 19.2 & 11.5 \\
RL+O2OAfford & 26.1 & 22.2 & 19.2 & 11.5 \\
RL+Where2act & 28.6 & 23.5 & 23.1 & 15.4 \\
RL+O2OAfford+Where2act & 30.5 & 26.2 & 23.1 & 15.4 \\ \hdashline[1pt/1pt]
\textbf{Ours} &\textbf{46.5} &\textbf{39.2} &\textbf{30.7}&\textbf{26.9} \\ \hdashline[1pt/1pt]
Ours w/o A2O Map &26.7 &22.3 &23.1 &19.2 \\
Ours w/o O2O Map &31.9 &26.2 &23.1 &15.4 \\
Ours w/o MPO & 40.1 & 30.2 & 19.2 & 15.4 \\
Ours w/o MPR & 36.2 & 33.5 & \textbf{30.7} & 23.1 \\
Ours w/o E2E & 30.2 & 21.4 & 26.9 & 19.2 \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}[t]{.49\linewidth}
\centering
\caption{\textbf{Quantitative results of dual-arm-push.}}
\vspace{-0.2cm}
\label{table:dap-asrmr}
\label{table:dap-ablation}
\begin{tabular}{c|cc|cc}
\toprule
\multirow{2}{*}{\diagbox{Methods}{Metrics}} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c }{MP} \\
& train & test & train & test \\
\hline
MAPPO &7.8 &9.0 &0.0 &0.0 \\
RL &37.2 &36.1 &36.4 &31.3 \\
Multi-task RL &51.6 &52.9 &54.5 &56.3 \\ \hdashline[1pt/1pt]
\textbf{Ours} &83.9 &78.5 &90.9 &93.8 \\ \hdashline[1pt/1pt]
Ours w/o MPO &\textbf{95.9} &\textbf{96.3} &\textbf{100.0}&\textbf{100.0} \\
Ours w/o MPR &63.9 &55.3 &63.6 &56.3 \\
Ours w/o E2E &53.5 &55.9 &56.8 &50.0 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-0.5cm}
\end{table*}
To further evaluate the importance of different components of our method, we conducted an ablation study by comparing our method with five ablations:
\begin{itemize}
\item Ours w/o MPR: ours without the max-point reward.
\item Ours w/o MPO: ours without the max-point observation.
\item Ours w/o E2E: our method trained by a two-stage procedure. The $VA$ Module is trained upon a fixed pretrained $MA$ Module. The $MA$ Module is then fine-tuned on the frozen $VA$ Module.
\item Ours w/o A2O Map: our method without agent-to-object affordance map in multi-stage tasks.
\item Ours w/o O2O Map: our method without object-to-object affordance map in multi-stage tasks.
\end{itemize}
\subsection{Evaluation Metrics}
For each task, we trained each method (ours, baselines and ablations) on the training set and saved checkpoints every 3200 time-steps within $160,000$ total time-steps. After training, we chose the checkpoint with the largest average success rate on the training set for comparison; each method was tested with eight different random seeds.
We adopted two metrics to measure performance:
\begin{itemize}
\item Average Success Rate (ASR): The ASR
is the average of the algorithm's success rate on all objects in the training\,/\,testing dataset.
\item Master Percentage (MP): We assume a policy is ``stable'' on an object if it has a success rate of more than 50\% on that object. The master percentage
is the percentage of objects on which the algorithm can succeed with a probability greater than or equal to $50\%$. If an algorithm manages to reach a success rate over $50\%$ on a certain object, it is expected to succeed within two trials.
\end{itemize}
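Both metrics can be computed from per-object success rates; a small sketch (the function names are ours):

```python
def average_success_rate(per_object_sr):
    """ASR: mean success rate over all objects in the set."""
    return sum(per_object_sr) / len(per_object_sr)

def master_percentage(per_object_sr, threshold=0.5):
    """MP: fraction of objects on which the policy's success rate is at
    least `threshold` (50% by default), i.e. objects it has 'mastered'."""
    mastered = sum(1 for sr in per_object_sr if sr >= threshold)
    return mastered / len(per_object_sr)
```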
\textbf{Due to the page limit, we listed the results of Close Door and Push Drawer and the variance of the reported metric on our website.}
\subsection{Baseline Comparison and Ablation Study}
From~\Cref{table:ablation,table:pap-ablation}, the results of Where2act and RL show that visual affordance can improve RL performance. However, our method achieves a much more significant improvement over the baselines on both the training and testing sets.
In dual-arm-push, as Table~\ref{table:dap-ablation} shows, our method outperforms both RL and MARL methods.
From all tables, we see that the MPO, MPR and E2E components play important roles in our method, except for MPO on dual-arm-push. The potential reason is that the predicted max-affordance point on the object changes during object movement, which may disturb RL training. This may be worth investigating in the future.
Fig.~\ref{fig:map} shows the change in affordance maps during end-to-end training and examples of final affordance maps. We can see that as the training proceeds, the affordance map gradually concentrates. More qualitative results can be found on our website.
\subsection{Real-world Experiment}
We used a digital twin system~\cite{XIA2021210} for the real-world experiment: the training process was carried out in simulation, and we then used unseen objects to evaluate our method in the real world. The input of the agent is twofold: 1) point cloud input from the simulator, 2) agent state input from the real world.
The actions of the agent were computed upon the combination of the two input sources, and then were applied to the robotic arms both in the simulator and the real world.
The experiment settings are shown in Fig.~\ref{fig:map}.
Experiments show that our trained model can successfully transfer to the real world. The video and more details can be found on our website \url{https://sites.google.com/view/rlafford/}.
\section{Conclusion}
\label{conc}
To the best of our knowledge, this is the first work that proposes an end-to-end affordance RL framework for robotic manipulation tasks.
In RL training, affordance can improve the policy learning by providing additional observation and reward signals. Our framework automatically learns affordance semantics
through RL training without human demonstration or other artificial designs dedicated to data collection.
The simplicity of our method, together with the superior performance over strong baselines and the wide range of applicable tasks, has demonstrated the effectiveness of learning from contact information. We believe our work could potentially open a new way for future RL-based manipulation developments.
\section*{ACKNOWLEDGEMENT}This project was supported by the National Natural Science Foundation of China (No. 62136001). We would like to thank Hongchen Wang, Ruihai Wu, Yan Zhao and Yicheng Qian for the helpful discussion and baseline implementation, and Ruimin Jia for suggestions in paper writing.
{
\bibliographystyle{IEEEtran}
\section{Introduction}
Online spaces are often exploited and misused to spread content that can be degrading, abusive, or otherwise harmful to people. An important and elusive form of such language is {\it hateful speech}: content that expresses hatred of a group in society.
Hateful speech has become a major problem for every kind of online platform where user-generated content appears: from the comment sections of news websites to real-time chat sessions in immersive games. Such content can alienate users and can also support radicalization and incite violence \cite{allan2013harm}. Platform operators recognize that hateful content poses both practical and ethical issues, and many, including Twitter, Facebook, Reddit, and gaming companies such as Riot Games, have tried to discourage it by altering their platforms or policies.
Yet reliable solutions for online hateful speech are lacking. Currently, platforms predominantly rely on users to report objectionable content. This requires labor-intensive review by platform staff and can also entirely miss hateful or harmful speech that is not reported. With the high volume of content being generated on major platforms, an accurate automated method might be a useful step towards diminishing the effects of hateful speech.
Without exception, state-of-the-art computational approaches rely upon either human annotation or manually curated lists of offensive terms to train classifiers \cite{kwok2013locate,ting2013approach}. Recent work has shown that human annotators tasked with labeling hate speech have significant difficulty achieving reasonable inter-coder reliability \cite{kwok2013locate}. Within industry, it is generally acknowledged that keyword lists are also insufficient for accurate detection of hateful speech. However, little work has been done to understand the nature of their limitations and to design viable alternative approaches. This is the topic of the present work.
This paper makes three key contributions. First, we establish why the problem of hateful speech detection is difficult, identifying factors that lead to the poor performance of keyword-based approaches. Second, we propose a new approach to hateful speech detection, leveraging online communities as a source of language models. Third, we show that such a model can perform well both within a platform and {\it across} platforms --- a feature we believe we are the first to achieve.
We are also aware that automated detection of online speech could be misused to suppress constructive and/or dissenting voices by directing the system at individuals or groups that are not dedicated to expressing hatred. Such a use would be antithetical to our intent, which is to explore and illustrate ways in which computational techniques can provide opportunities to observe and contain harmful content online, without impinging on the freedom to speak openly, and even to express unpalatable or unpopular views. We hope that our work can help diminish hatred and harm online. Furthermore, since our method can be trained on and applied to a wide array of online platforms, this work may help to inform the direction of future research in this area.
\section{Background}
\paragraph{Hate and hateful speech.} Legal and academic literature generally defines hate speech as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or group of people because of a characteristic they share, or a group to which they belong \cite{mendel2012does}. There is no consensus definition, however. Definitions of this sort are problematic for a number of reasons \cite{bartlett2014anti}, including that hate speech is defined by prevailing social norms, context, and individual and collective interpretation. This makes it difficult to identify hate speech consistently and yields the paradox (also observed with pornography) that each person seems to have an intuition for what hate speech is, but rarely are two people's understandings the same. This claim is affirmed by a recent study that demonstrated a mere 33\% agreement between coders from different races, when tasked to identify racist tweets \cite{kwok2013locate}.
A particular ambiguity in the term `hate speech' is in ``hate'' itself. That word might refer to the speaker/author's hatred, or his/her desire to make the targets of the speech feel hated, or desire to make others hate the target(s), or the apparent capacity of the speech to increase hatred. Needless to say, we require a rigorous --- and formal --- definition of a type of speech if we are to automate its detection.
Our initial motivation was to find, and work with, a notion of hate speech that can be operationalised. The work of online platform operators (e.g., Twitter, Facebook, and Reddit) helped to focus this aim. Their concern over the capacity of language to do harm --- whether emotional, mental, or physical --- logically focuses more on what is {\it expressed} rather than how it is \textit{intended}. Whereas ``hate speech'' can imply an inquiry or judgment about intent (e.g. what was this person feeling or wishing?), we propose the term ``hateful speech" to focus on the expression of hate --- a nuanced, but useful distinction since expression is easier to detect than intent, and more likely to be linked to language's capacity to cause harm.
This leads to our term \textit{hateful speech}: speech which contains an expression of hatred on the part of the speaker/author, against a person or people, based on their group identity.
Hateful speech is not to be mistaken for ``cyber-bullying,'' another form of troubling online content that has been widely discussed and studied in recent literature. Cyber-bullying is repetitive, intentional, aggressive behavior against an individual, and it either creates or maintains a power imbalance between aggressor and target \cite{tokunaga2010following}. It is often hateful but it does not necessarily denigrate a person based on his or her membership in a particular group, as hateful speech (the subject of the present work) does.
\paragraph{Community-defined speech.} As we will discuss in detail later, we use the language that emerges from self-organized communities (in Reddit and elsewhere) as the basis for our models of hateful speech. Our decision is based on a deep sociological literature that acknowledges that communities both form, and are formed by, coherent linguistic practices \cite{bucholtz2005identity}. Most groups are defined in part by the ``relationships between language choice and rules of social appropriateness'' forming speech communities \cite{gumperz2009speech}. In this way of thinking, the group is defined by the speech and the speech comes to define the group \cite{klein2007social,reicher1995social,spears1992social,spears1994panacea}.
In the context of this study, this means that hate groups and the hateful speech they deploy towards their target community cannot exist without one another, especially online. Therefore, taking the linguistic attributes particular to a community committed to degrading a specific group is a legitimate and principled way of defining a particular form of hateful speech. To our knowledge, this work represents the first effort to explicitly leverage a community-based classification of hateful language.
\paragraph{Existing approaches to detecting hateful speech.} Despite widespread concern about hateful speech online, to our knowledge there have been only three distinct lines of work on the problem of automated detection of hateful speech. One study concerned the detection of racism using a Naive Bayes classifier \cite{kwok2013locate}. This work established the definitional challenge of hate speech by showing annotators could agree only 33\% of the time on texts purported to contain hate speech. Another considered the problem of detecting anti-Semitic comments in Yahoo news groups using support vector machines \cite{warner2012detecting}. Notably, the training data for this classifier was hand-coded. As we will discuss in this paper, manually annotated training data admits the potential for hard-to-trace bias in the speech ultimately detected. A third study used a linguistic rule-based approach on tweets that had been collected using offensive keywords \cite{xiang2012detecting}. Like manually annotated data, keyword-based data has significant biasing effects as well.
In this work we aim to build on these studies in two ways. First, we will consider a definition of hateful speech that could be practically useful to platform operators. Second, we will develop a general method for the detection of hateful speech that does not depend on manually annotated or keyword-collected data.
\paragraph{Reddit and other online sources of hateful speech.} Reddit is currently one of the most actively used social content aggregation platforms. It is used for entertainment, news and social discussions. Registered users can post and comment on content in relevant community discussion spaces called \textit{subreddits}. While the vast majority of content that passes through Reddit is civil, multiple subreddits have emerged with the explicit purpose of posting and sharing hateful content, for example, \texttt{r/CoonTown}, \texttt{r/FatPeopleHate}, \texttt{r/beatingwomen}; all which have been recently banned under Reddit's user-harassment policy \cite{Moreno15}. There are also subreddits dedicated to supporting communities that are the targets of hate speech.
Reddit is an attractive testbed for work on hateful speech both because the community spaces are well-defined (i.e., they have names, complete histories of threaded discussions) and because, until recently, Reddit has been a major online home for both hateful speech communities and supporters for their target groups. For these reasons, throughout this paper, our analyses heavily leverage data from Reddit groups.
Of course, Reddit is not the sole platform for hateful speech. Voat, a recently created competitor to Reddit, along with a vibrant ecosystem of other social content aggregation platforms, provide online spaces for topical discussion communities, hate groups among them. Furthermore, dedicated websites and social networking sites such as Twitter and Facebook are also reservoirs of easily accessible hateful speech.
Important research has investigated the effects of racist speech \cite{nakamura2009don} and sexual harassment \cite{fox2014sexism} in online games. Notably, in this study we have not worked with data from online gaming platforms, primarily because the platforms are generally closed to conventional data collection methods.
\section{The limits of keyword-based approaches} \label{sec:support_hardness}
In the same way that hateful groups have defining speech patterns, communities that consist of the targets of hateful speech also have characteristic language conventions. We will loosely call these \textit{support groups}. Notably, support groups and the groups that espouse hateful speech about them often engage in discourse on similar topics, albeit with very different intent. Fat-shaming groups and plus-size communities both discuss issues associated with high BMI, and women and misogynists both discuss gender equity. This topical overlap can create opportunities for shared vocabulary that may confuse classifiers.
In addition, many keyword-based approaches select established and widely known slurs and offensive terms that are used to target specific groups. While such keywords will certainly catch some hateful speech, it is common to express hate in less explicit terms, without resorting to standard slurs and other offensive terms.
For example, hateful speakers refer to migrants and refugees as ``parasites'' and call African-Americans ``animals.'' While neither of these terms are inherently hateful, in context they strongly denigrate the group to which each term is applied.
We can expect that classifiers trained on overtly hateful keywords will miss such posts that use more nuanced or context-dependent ways of achieving hateful speech.
Furthermore, keywords can also be obscured through misspellings, character substitutions (using symbols as letters), homophones, etc. These practices are commonly employed to circumvent keyword-based filters on online platforms \cite{warner2012detecting}.
In this section, we study the potential impact of topic overlap on data returned by keyword-based queries (we will consider under-sampling issues in the next section). Here our focus will be on the sample that keyword-based filters return and in later sections we will consider the performance of classifiers built from such samples.
\begin{table}[t]
\centering
\begin{tabularx}{.48\textwidth}{l|Xr|Xr}
Target & Hate & \# of & Support & \# of \\
Group & subreddit & comments & subreddit & comments \\
\hline
\hline
Black & CoonTown & 350851 & Racism & 9778 \\
Plus & FPH & 1577681 & LoseIt & 658515 \\
Female & TRP & 51504 & TwoXCr & 66390
\end{tabularx}
\caption{Public comments collected from hate and support subreddits on Reddit, for three target groups. (\textit{FPH: FatPeopleHate, TRP: TheRedPill, TwoXCr: TwoXChromosomes})}
\label{TAB:data_reddit}
\end{table}
\paragraph{Data.} Recently, Reddit user \texttt{Stuck\_In\_the\_Matrix}\footnote{https://www.reddit.com/user/Stuck\_In\_the\_Matrix/} made available large data dumps that contain a majority of the content (posts and comments) generated on Reddit\footnote{http://couch.whatbox.ca:36975/reddit/}. The data dumps, collected using the Reddit API, are organized by month and year. The data date back to 2006 and are regularly updated with new content. We use all comments from January 2006 through January 31, 2016, expanding the dataset with each update. Each file corresponds to a month of Reddit data, and every line is a JSON object of a Reddit comment or post.
For our analysis, we identify three commonly targeted groups on Reddit --- African-American (black), plus-sized (plus) and women. For each of the target groups, we select the most active support and hate subreddits. To create our datasets, we extract all user comments in the selected subreddits from the data dumps described above, in October 2015. The details on the selected subreddits and the number of the extracted comments are provided in Table \ref{TAB:data_reddit}.
\paragraph{Methods.} For each of the selected subreddits, we use labeled Latent Dirichlet Allocation (LLDA) to learn the topics that characterize them, against a baseline Reddit language. This baseline is intended to push the LLDA to remove non-topical vocabulary from the two subreddit topics; it consists of a sample of 460,000 comments taken at random from the Reddit data scrape (none of the posts belonged to any of the subreddits of interest). Prior to topic modeling, stop words, punctuation, URLs, and digits were stripped from the comments and for the purpose of balanced analysis, an equal number of comments was selected from the subreddit and the random sample. We use JGibbLDA for the topic inference \cite{phan2006jgibblda}.
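The LLDA inference itself is performed with JGibbLDA, a Java tool, and is not shown here. As a lightweight, illustrative stand-in for surfacing community-discriminative vocabulary, the sketch below ranks unigrams by a hand-computed $\chi^2$ score between a community sample and a background sample (the paper's own baselines later use $\chi^2$ weights in a similar role). The comments are toy placeholders, not the real data.

```python
# Illustrative stand-in for the topic-discovery step: rank the unigrams
# that most distinguish a community's comments from a background sample,
# using a chi-square score over a 2x2 contingency table.
from collections import Counter

community = ["weight loss diet calorie goal", "calorie counting this week"]
background = ["great movie last night", "patch notes for the game"]

pos = Counter(w for doc in community for w in doc.split())
neg = Counter(w for doc in background for w in doc.split())
n_pos, n_neg = sum(pos.values()), sum(neg.values())

def chi2(word):
    # 2x2 contingency: word vs. not-word, community vs. background
    a, b = pos[word], neg[word]
    c, d = n_pos - a, n_neg - b
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

vocab = set(pos) | set(neg)
top = sorted(vocab, key=chi2, reverse=True)[:5]
print(top)  # "calorie" ranks first: it occurs twice in-community, never in background
```

On real data, the same ranking applied to each subreddit against the 460,000-comment background sample yields term lists comparable to those in Table \ref{TAB:topic_overlap}.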
\begin{table}[]
\centering
\begin{tabularx}{.49\textwidth}{XX|XX|XX}
\multicolumn{2}{c}{Black} & \multicolumn{2}{c}{Plus-size} & \multicolumn{2}{c}{Female} \\
\hline
Coon- & racism & FPH & loseit & TRP & TwoXCr \\
Town & & & & & \\
\hline
\hline
nigger & \textbf{white} & \textbf{weight} & \textbf{weight} & \textbf{women} & \textbf{time} \\
\textbf{white} & racism & \textbf{calorie} & \textbf{calorie} & \textbf{girl} & \textbf{women} \\
\textbf{black} & \textbf{black} & \textbf{time} & \textbf{time} & \textbf{time} & \textbf{feel} \\
shit & \textbf{racist} & \textbf{work} & \textbf{food} & woman & \textbf{work} \\
\textbf{time} & \textbf{race} & \textbf{food} & \textbf{eating} & \textbf{shit} & \textbf{year} \\
fucking & \textbf{time} & \textbf{feel} & \textbf{week} & \textbf{work} & \textbf{fuck} \\
fuck & person & \textbf{eating} & \textbf{work} & \textbf{year} & \textbf{shit} \\
\textbf{race} & point & \textbf{week} & \textbf{feel} & \textbf{life} & weight \\
year & feel & \textbf{lose} & \textbf{lose} & \textbf{fuck} & \textbf{fucking} \\
hate & comment & \textbf{year} & \textbf{diet} & guy & person \\
\textbf{racist} & american & women & \textbf{body} & point & \textbf{life} \\
live & post & \textbf{diet} & exercise & friend & \textbf{girl} \\
work & issue & \textbf{body} & \textbf{goal} & post & love \\
jew & asian & start & loss & \textbf{feel} & pretty \\
crime & color & \textbf{goal} & \textbf{year} & \textbf{fucking}& food \\
\hline
\multicolumn{2}{c}{Jaccard Index: 0.28} & \multicolumn{2}{c}{JI: 0.76} & \multicolumn{2}{c}{JI: 0.50}
\end{tabularx}
\caption{Top discovered topics from support and hate subreddits for the three targets. The bold terms signify those that are present in both the hate and support vocabulary.}
\label{TAB:topic_overlap}
\end{table}
\paragraph{Results.} In Table \ref{TAB:topic_overlap}, we present the 15 most topical words from each subreddit. The top terms in the topics are consistent with the target/support communities. For example, the term ``women'' was ranked highly in subreddits that concern women (whether positively or negatively referenced) and ``weight'' is the highest-ranked term for subreddits discussing plus-sized individuals and lifestyle.
We observe a substantial overlap in vocabulary of hate and support subreddits, across all three target communities (see bold words in Table \ref{TAB:topic_overlap}). While in the case of a black target group, we observe a Jaccard Index ({\it JI}) of 0.28, the overlap is higher in the case of female targets with {\it JI} at 0.50 and much higher for plus-size targets, with a {\it JI} of 0.76.
The implication of this shared vocabulary is that while keywords can be used to detect text relevant to the target, they are not optimal for detecting targeted hateful speech. Shared vocabulary increases the likelihood of tagging content that is related to the target, but not necessarily hateful, as hateful, thereby increasing false positives. We therefore require more robust training data.
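The overlap figures reported above are plain Jaccard indices over the two top-15 term lists. A minimal sketch (with abbreviated, illustrative word sets rather than the full top-15 lists):

```python
# Jaccard index over the top-k topical terms of two communities:
# |A intersect B| / |A union B|.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

hate_terms = {"weight", "calorie", "time", "work", "food"}
support_terms = {"weight", "calorie", "time", "food", "eating"}
print(round(jaccard(hate_terms, support_terms), 2))  # prints 0.67
```

Applied to the full top-15 lists of Table \ref{TAB:topic_overlap}, this yields the reported indices of 0.28, 0.76 and 0.50.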
\section{A community-driven model of hateful speech}
A key objective of our research is to avoid the issues associated with using manual annotation and keyword searches to produce training data for a classifier. As noted previously, sociological literature acknowledges that communities are formed by coherent linguistic practices and are defined, in part, by their linguistic identity \cite{gumperz2009speech}. Thus, the opportunity considered here is to leverage the linguistic practices of specific online communities to empirically define a particular kind of hateful speech.
Since linguistic practices coincide with the identity of a community using them, we can define hateful speech as discourse practiced by communities who self-identify as hateful towards a target group. The members of the community contribute to the denigration of the target and, therefore, share a common linguistic identity. This allows us to develop a language model of hateful speech directly from the linguistic conventions of that community without requiring manual annotation of specific passages or keyword-based searches. This approach has a number of advantages over these practices.
First, a community-based definition removes the interpretive challenge involved in manual annotation. Membership in a self-organized community that is committed to denigration of a target group through the hatred of others is an observable attribute we can use to surface hateful speech events.
Second, unlike prior work, our method does not require a keyword list. We identify communities that conform to the linguistic identity of self-organized hateful groups and use those communities to collect data. These data are used to learn a language model of that linguistic identity for detection. This removes any biases implicit in the construction of a keyword list (i.e., in the words included in or excluded from the list).
Third, a community-based definition provides a large volume of high quality, current, labeled data for training and then subsequent testing of classifiers. Such large datasets have traditionally been difficult to collect due to dependence on either manual annotation (annotation is slow and costly) or keyword searches (stringent keywords may turn up relatively few hits).
This approach generalizes to other online environments (such as Voat and other hateful speech-focused web forums) in which communities declare their identities and intentions and organize their discussions. Any online (or even offline) communication forum in which all participants gather for the understood purpose of degrading a target group constitutes a valid source of training data.
In the following subsections, this approach is validated through three analyses. First, we demonstrate that the hate speech communities identified actually employ distinct linguistic practices: we show that our method can reliably distinguish content of a hateful speech community from the rest of Reddit. We also show that our approach substantially outperforms systems built on data collected through keywords.
Second, we show that our approach is sensitive to the linguistic differences between the language of hateful and support communities. This task is notably difficult given the results we reported above, showing that such communities share many high-frequency words.
Finally, we use our Reddit-trained classifier to detect hateful speech on other (non-Reddit) platforms: on Voat and hateful speech web forums (websites devoted to discussion threads attacking or denigrating a target community). For both, we find that our method performs better than a keyword-based baseline.
\subsection{Data collection}
\noindent {\bf Reddit.} We use Reddit as the primary source for the hateful communities and leverage the linguistic practices of these communities to empirically define and develop language models for target-specific hateful speech. In all three of our studies, we focus on the aforementioned three target groups: black people, plus-sized individuals, and women. For each, we select the most active hateful and support subreddits and collect all the publicly available comments present in the data dumps provided by \texttt{Stuck\_In\_the\_Matrix}. The details on the dataset are provided in Table \ref{TAB:data_reddit}. We also collect a random sample of 460,000 Reddit comments to serve as negative examples.
\noindent {\bf Voat.} Voat, a content aggregator similar to Reddit, also hosts active discussion communities, called \textit{subverses}, a few of which identify as hateful. We select Voat because of its similarity to our original source\footnote{http://thenextweb.com/insider/2015/07/09/what-is-voat-the-site-reddit-users-are-flocking-to/}. Since the two websites cater to a similar user-base, the linguistic identities generated in sub-communities with similar themes should be similar. Therefore, the language model of hateful communities on Reddit should match, to an extent, the language model of similar hateful communities on Voat.
For the three target groups, we identify hateful subverses --- \texttt{v/CoonTown}, \texttt{v/fatpeoplehate} and \texttt{v/TheRedPill} --- sub-communities that share their name with their counterparts on Reddit and target blacks, plus-size individuals, and women, respectively. In the absence of an API, we use web-scraping libraries to retrieve all publicly available comments posted to the selected subverses between July 2015 and January 2016. We also collect a set of 50,000 comments (from the same time period) from a random sample of subverses to serve as negative examples (Table \ref{TAB:data_other_platform}).
\noindent {\bf Web forums.} We also use stand-alone web forums that are dedicated to expressing hate or contempt for the target communities. These web forums are social platforms that provide their users with discussion boards, where users can create threads under predefined topics and other users can then add comments in these threads. We select web forums for their discussion-based communities and user-generated content. Again, due to the lack of APIs, we use comments collected with web-scraping libraries from numerous threads of their discussion boards during October 2015.
For the black target group, we use \texttt{Shitskin.com}: our dataset consists of 3,160 comments posted to 558 threads from three of the website's boards: ``Primal Instinct'', ``Crackin the whip!'' and ``Underground Railroad.'' For the female target group, we use \texttt{mgtowhq.com}: this dataset consists of 20,688 comments posted to 4,597 threads from the ``MGTOW General Discussion'' board. Finally, as a source of negative examples, we use the ``random'' discussion board on \texttt{topix.com}: this dataset consists of nearly 21,000 comments from 2,458 threads. To our knowledge, no large fat-shaming forum exists, thus we do not include this target group in this phase of the study (Table \ref{TAB:data_other_platform}). All comments have posting times between July 2015 and January 2016.
\begin{table}[]
\centering
\begin{tabularx}{.5\textwidth}{llrlr}
Target & Subverse & Comments & Website & Comments \\
\hline
\hline
Black & CoonTown & 3358 & shitskin & 3160 \\
Plus & fatpeoplehate & 31717 & - & \\
Female & TheRedPill & 478 & mgtowhq & 20688 \\
\end{tabularx}
\caption{Target-relevant hateful comments collected from Voat subverses and web forums.}
\label{TAB:data_other_platform}
\end{table}
\begin{table*}[]
\centering
\begin{tabularx}{\textwidth}{l|XXX|XXX|XXX|XXX|XXX}
\multicolumn{10}{l}{(a) Assessing the distinct nature of language emerging from hate groups.}\\
\hline
Target & \multicolumn{3}{c}{Accuracy} & \multicolumn{3}{c}{Precision} & \multicolumn{3}{c}{Recall} & \multicolumn{3}{c}{F1-Score} & \multicolumn{3}{c}{Cohen's $\kappa$} \\
& NB & SVM & LR & NB & SVM & LR & NB & SVM & LR & NB & SVM & LR & NB & SVM & LR \\
\hline
\hline
Black & 0.79 & 0.81 & 0.81 & 0.78 & 0.84 & 0.87 & 0.82 & 0.74 & 0.73 & 0.8 & 0.79 & 0.79 & 0.58 & 0.61 & 0.61 \\
Plus & 0.78 & 0.78 & 0.79 & 0.78 & 0.81 & 0.82 & 0.79 & 0.75 & 0.73 & 0.78 & 0.78 & 0.77 & 0.56 & 0.57 & 0.57 \\
Female & 0.77 & 0.8 & 0.81 & 0.71 & 0.81 & 0.84 & 0.9 & 0.77 & 0.75 & 0.79 & 0.79 & 0.79 & 0.55 & 0.6 & 0.61 \\
\hline
\multicolumn{10}{l}{}\\
\multicolumn{10}{l}{(b) Assessing sensitivity between the language of hate and support groups.}\\
\hline
\hline
Black & 0.8 & 0.79 & 0.79 & 0.8 & 0.8 & 0.78 & 0.85 & 0.82 & 0.86 & 0.82 & 0.81 & 0.82 & 0.57 & 0.56 & 0.55 \\
Plus & 0.83 & 0.85 & 0.85 & 0.85 & 0.84 & 0.84 & 0.79 & 0.86 & 0.86 & 0.82 & 0.85 & 0.85 & 0.66 & 0.69 & 0.7 \\
Female & 0.79 & 0.78 & 0.78 & 0.78 & 0.79 & 0.8 & 0.79 & 0.77 & 0.77 & 0.79 & 0.78 & 0.78 & 0.57 & 0.56 & 0.57
\end{tabularx}
\caption{The performance of the three classification algorithms across the three target groups, with a 10 fold cross-validation. (a) Hateful comments are classified against random comments. (b) Hateful comments are classified against comments from support communities. In both cases, the classifier is able to distinguish hate speech from negative cases. (NB: Naive Bayes, SVM: Support Vector Machines, LR: Logistic Regression) }
\label{TAB:results_inreddit}
\end{table*}
\begin{table*}[]
\centering
\begin{tabularx}{\textwidth}{l|XXX|XXX|XXX|XXX|XXX}
\multicolumn{10}{l}{(a) Baseline performance over Reddit data.}\\
\hline
Target & \multicolumn{3}{c}{Accuracy} & \multicolumn{3}{c}{Precision} & \multicolumn{3}{c}{Recall} & \multicolumn{3}{c}{F1-Score} & \multicolumn{3}{c}{Cohen's $\kappa$} \\
& LDA & $\chi^2$I & $\chi^2$II & LDA & $\chi^2$I & $\chi^2$II & LDA & $\chi^2$I & $\chi^2$II & LDA & $\chi^2$I & $\chi^2$II & LDA & $\chi^2$I & $\chi^2$II \\
\hline
\hline
Black & 0.59 & 0.63 & 0.57 & 0.61 & 0.71 & 0.62 & 0.52 & 0.44 & 0.4 & 0.56 & 0.54 & 0.48 & 0.18 & 0.26 & 0.15 \\
Plus & 0.53 & 0.57 & 0.53 & 0.54 & 0.6 & 0.55 & 0.35 & 0.4 & 0.34 & 0.42 & 0.48 & 0.42 & 0.06 & 0.14 & 0.06 \\
Female & 0.68 & 0.7 & 0.7 & 0.65 & 0.69 & 0.74 & 0.71 & 0.71 & 0.6 & 0.68 & 0.7 & 0.66 & 0.35 & 0.40 & 0.4 \\
\hline
\multicolumn{10}{l}{}\\
\multicolumn{10}{l}{(b) Baseline performance over Voat data.}\\
\hline
\hline
Black & 0.62 & 0.63 & 0.62 & 0.65 & 0.73 & 0.68 & 0.48 & 0.4 & 0.4 & 0.55 & 0.51 & 0.51 & 0.24 & 0.26 & 0.23 \\
Plus & 0.56 & 0.6 & 0.57 & 0.58 & 0.65 & 0.61 & 0.35 & 0.4 & 0.36 & 0.43 & 0.5 & 0.45 & 0.11 & 0.2 & 0.14 \\
Female & 0.67 & 0.69 & 0.67 & 0.68 & 0.71 & 0.74 & 0.63 & 0.63 & 0.5 & 0.65 & 0.67 & 0.6 & 0.35 & 0.38 & 0.34 \\
\hline
\multicolumn{10}{l}{}\\
\multicolumn{10}{l}{(c) Baseline performance over web forum data.}\\
\hline
\hline
Black & 0.66 & 0.62 & 0.57 & 0.72 & 0.77 & 0.67 & 0.53 & 0.35 & 0.31 & 0.61 & 0.48 & 0.42 & 0.32 & 0.24 & 0.15 \\
Female & 0.78 & 0.79 & 0.77 & 0.81 & 0.83 & 0.87 & 0.75 & 0.74 & 0.64 & 0.78 & 0.78 & 0.74 & 0.56 & 0.58 & 0.54
\end{tabularx}
\caption{We calculate the baseline performance on multiple platforms with three keyword-generating methods: LDA, $\chi^2$I and $\chi^2$II. Classification was done using logistic regression.}
\label{TAB:results_baseline}
\end{table*}
\begin{table}[]
\centering
\begin{tabularx}{.48\textwidth}{Xccccc}
Target & Acc & Pre & Rec & F1 & $\kappa$\\
\textit{Voat} & & & & &\\
\hline
\hline
Black & 0.82 & 0.87 & 0.74 & 0.80 & 0.64 \\
Plus & 0.81 & 0.85 & 0.74 & 0.79 & 0.62 \\
Female & 0.74 & 0.76 & 0.71 & 0.73 & 0.49 \\
\textit{Websites} & & & & & \\
\hline
\hline
Black & 0.82 & 0.87 & 0.77 & 0.82 & 0.65 \\
Female & 0.77 & 0.83 & 0.69 & 0.75 & 0.54
\end{tabularx}
\caption{For our targets, we collect comments from hateful communities on Voat and web forums and test the performance of language models learned from Reddit communities.}
\label{TAB:results_crossplatform}
\end{table}
\begin{table}[]
\centering
\begin{tabularx}{.48\textwidth}{llccccc}
Training & Testing & Acc & Pre & Rec & F1 & $\kappa$\\
\hline
\hline
CT & FPH & 0.58 & 0.72 & 0.26 & 0.38 & 0.15 \\
CT & TRP & 0.55 & 0.6 & 0.22 & 0.32 & 0.08 \\
FPH & TRP & 0.58 & 0.65 & 0.3 & 0.41 & 0.15 \\
FPH & CT & 0.54 & 0.61 & 0.23 & 0.34 & 0.08 \\
TRP & CT & 0.51 & 0.53 & 0.28 & 0.36 & 0.03 \\
TRP & FPH & 0.6 & 0.65 & 0.41 & 0.51 & 0.19
\end{tabularx}
\caption{We test the performance of classification systems built on data that belongs to a target community different than the one we test on. (CT: CoonTown)}
\label{TAB:results_Crosstarget}
\end{table}
\subsection{Methods}
Before the classification process, we preprocess all the data by eliminating URLs, stop words, numerals and punctuation. We further lowercase the text and remove platform-specific noise (e.g., comments from housekeeping bots on Reddit such as AutoModerator). The text is finally tokenized and used as input for the classification pipeline.
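A minimal sketch of this preprocessing step, using only the standard library (the stop-word list here is a tiny illustrative subset, not the full list we use):

```python
# Strip URLs, numerals, punctuation and stop words; lowercase; tokenize.
import re
import string

STOP_WORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of"}

def preprocess(comment):
    comment = comment.lower()
    comment = re.sub(r"https?://\S+", " ", comment)   # URLs
    comment = re.sub(r"\d+", " ", comment)            # numerals
    comment = comment.translate(str.maketrans("", "", string.punctuation))
    return [tok for tok in comment.split() if tok not in STOP_WORDS]

print(preprocess("Check https://example.com -- the 2 BEST tips!"))
# prints ['check', 'best', 'tips']
```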
We use multiple machine learning algorithms to generate the language models of hateful communities. From our analysis of prior work, we identify the commonly used algorithms and employ them here: naive Bayes (NB), support vector machines (SVM) and logistic regression (LR). We do this in order to assess the merits of our community-defined approach to data collection.
The algorithms take as input tokenized, preprocessed arrays of user comments, along with the label of the community each belongs to. We use a sparse representation of unigrams with \textit{tf-idf} weights as our feature set. In future investigations, we would like to add part-of-speech tags and sentiment scores as features.
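This pipeline (sparse tf-idf unigrams fed to NB, SVM and LR) can be sketched with scikit-learn; the four comments below are toy placeholders for the community-labeled data, not real training examples:

```python
# Sketch of the classification pipeline: tf-idf unigram features
# fed to naive Bayes, a linear SVM, and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = ["hateful comment text", "more hateful text",
            "random reddit chatter", "another random comment"]
labels = [1, 1, 0, 0]  # 1 = hate-community comment, 0 = random sample

for clf in (MultinomialNB(), LinearSVC(), LogisticRegression()):
    model = make_pipeline(TfidfVectorizer(), clf).fit(comments, labels)
    print(type(clf).__name__, model.predict(["some hateful text"])[0])
```

In our experiments the same pipeline is fit on the full community corpora and evaluated with 10-fold cross-validation.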
For performance evaluation, we use the standard measures: accuracy, precision, recall and F1-score. We also use Cohen's $\kappa$ as a measure of agreement between the observed and expected labels. $\kappa$ helps in evaluating the prediction performance of classifiers by taking into account any chance agreement between the labels.
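Concretely, $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is observed agreement and $p_e$ is agreement expected by chance. A short worked example on toy label vectors:

```python
# Cohen's kappa corrects raw agreement for agreement expected by chance.
expected = [1, 1, 1, 0, 0, 0, 1, 0]
observed = [1, 1, 0, 0, 0, 1, 1, 0]
n = len(expected)

p_o = sum(e == o for e, o in zip(expected, observed)) / n      # 0.75
p_e = sum(expected.count(c) * observed.count(c)
          for c in (0, 1)) / n ** 2                            # 0.5
kappa = (p_o - p_e) / (1 - p_e)
print(kappa)  # prints 0.5
```

Here raw accuracy is 0.75, but half of that agreement is expected by chance, so $\kappa$ is only 0.5, which is why we report it alongside accuracy.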
\paragraph{Baseline comparison.} Our aim is to assess the impact of using community-based text compared with keyword-based text as training data. Due to space limitations, here we report only a logistic regression classifier trained on keyword-collected data (SVM and NB showed comparable performance).
The specific keywords are generated from the comments collected from hateful Reddit communities. For a given target group, we generate three sets of keywords: (1) keywords generated between hate subreddits and a random sample of Reddit comments using LLDA, as in Section 3, (2) keywords generated between hate subreddits and a random sample of Reddit comments using $\chi^2$ weights ($\chi^2$I), and (3) keywords generated between hate and support subreddits using $\chi^2$ weights ($\chi^2$II). To generate the training datasets, we take the top 30 keywords and, from a separate random sample of Reddit comments, collect comments that contain at least one of the keywords as positive samples and comments that contain no keywords as negative samples. For each keyword type and each target, we aggregate 50,000 positive and 50,000 negative samples for training.
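The baseline's sampling rule reduces to a simple membership test: a comment containing any of the top keywords becomes a positive sample, and one containing none becomes a negative sample. A sketch with illustrative placeholder keywords and comments:

```python
# Keyword-based labeling rule used to build the baseline training sets.
keywords = {"weight", "calorie", "diet"}

def label_by_keywords(comment):
    tokens = set(comment.lower().split())
    return 1 if tokens & keywords else 0  # 1 = positive sample

sample = ["counting every calorie today", "nice photo of the sunset"]
print([label_by_keywords(c) for c in sample])  # prints [1, 0]
```

This rule is applied to a random Reddit sample until 50,000 comments of each label are collected per keyword set and target.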
\subsection{Results and Discussion}
\paragraph{Community language vs. hateful speech.} It may seem that, by comparing classifiers on the task of detecting hateful-community posts, we are equating language produced by a hateful community with hateful language. Certainly, the two are not always the same: some content is likely non-hateful chatter. One alternative for excluding such noise is manual coding of the testing data. Given the issues with manually labeled data discussed earlier, we avoid this route. The two approaches also involve different trade-offs. The community definition relies on the assumption that all content in a hateful community is hateful, which is not always true; however, it allows us to generate large training datasets with relative ease. We therefore accept some noise in the training data in exchange for easy training-data generation, favoring recall. Manual annotation, on the other hand, promises less noisy datasets at the expense of time and resources, which limits dataset size; producing datasets as large as those generated with our community approach would be very laborious. Moreover, since manual annotation relies heavily on personal perception, it can itself introduce noise. In other words, manual annotation neither scales to large training sets nor guarantees noise-free data.
Another option, however, is to focus on the precision ($\frac{TP}{TP+FP}$) of the classifier. Precision indicates the classifier's ability to identify only content from the hateful community. The construction of the test datasets is such that hateful speech should only exist in the hateful community posts. Thus, a method that detects hateful content should strongly favor including only content from hateful communities --- yielding high precision. Crucially, in the discussions that follow, we find that a community-based classifier demonstrates much higher precision than keyword-based methods. Thus, by either measure (F1 or precision), our community-based classifier outperforms the baselines.
\paragraph{Hateful groups have distinct linguistic signatures.} In Table \ref{TAB:results_inreddit}(a), we see the performance of the three classifiers when classifying a balanced corpus of hateful posts and randomly selected (non-hateful) Reddit posts with 10-fold cross-validation. The dataset consists of all the comments collected from the relevant hate subreddit (Table \ref{TAB:data_reddit}) as positive samples and an equal number of random Reddit comments as negative samples. The three classifiers perform almost identically: naive Bayes slightly outperforms the others on recall and F1-score, while logistic regression is slightly better on the remaining metrics. The performance of the classifiers is also consistent across the three target groups. Analysis of $\kappa$ suggests that the labels observed after classification are in moderate to substantial agreement with the expected labels.
\noindent {\bf Comparison to baseline.} In all cases considered, a classifier trained on community-based data outperforms a keyword-based classifier. Notably, the keyword-based classifier for the women-target group performed best, suggesting that the hateful community language associated with the keywords used for collection is more representative of hateful speech (compared to the other communities).
From a precision perspective, we find that the community-based classifier outperforms the baselines by between $10\%$ and $20\%$, indicating that the community-based classifier is including far fewer incorrect cases of hateful speech (false positives). When we look at the true positive posts that have been detected exclusively by the community-based classifier (i.e., that the keyword-based approach missed), we find many that are clearly hateful, but in ways that do not use specialized slurs. Several examples from the \texttt{CoonTown} subreddit:
\begin{enumerate}
\item ``I don't see the problem here. Animals attack other animals all the time.''
\item ``Oy vey my grandparents vuz gassed ven dey vaz six years old!''
\item ``DNA is rayciss, or didn't you know?''
\item ``Are they going to burn their own town again? Yawn.''
\end{enumerate}
These examples characterize different (and important) ways in which speech can be hateful without using words that typically operate, largely independent of context, as slurs. In Example 1, African-Americans are described as animals, employing a word that is not usually a slur to denigrate them. In Example 2, historical context (the gas chambers in Nazi concentration camps), culturally stereotyped language (``Oy vey''), and spelling that imitates an accent (``ven dey vaz'') are used to express contempt and hatred without any slur, or even any word that, like ``animals'' in the first example, is sometimes pressed into service as a slur. The third example, like the second, parodies an accent, and here it is notable that while ``racist'' might be a keyword used for collection, it is unlikely that ``rayciss'' would be. Finally, Example 4 achieves its effect by attacking a group through an implication of stereotyped action without naming the group at all (as opposed to Example 1, in which the targets were called ``animals'').
\paragraph{Community-based approach is sensitive to the linguistic differences of hate and support communities.} In Section 3, we showed that hateful and support communities for a target group have a shared vocabulary: the two communities often engage in discourse on similar topics, albeit with quite different intent. Since the shared keywords are not effective in the discrimination process, recognizing the distinction between hate and support communities can be challenging. We set up a classification task for identifying comments from support and hate communities, carried out with a 10-fold cross-validation. The performance of the task is presented in Table \ref{TAB:results_inreddit}(b). We observe that this performance is close to the performance of our system against a random collection of Reddit comments (Table \ref{TAB:results_inreddit}(a)). Therefore, even with shared vocabulary, our system is sensitive to the distinction in linguistic characteristics of hateful and support communities for the same target.
\paragraph{Community-trained systems can be deployed on other platforms.} Often training data for hateful language classification can be hard to obtain on specific platforms. For this reason, methods that work across platforms (trained on one platform, applied on another platform) present significant advantages.
For the analysis, we continue with the same three target groups and train our language model, using logistic regression, with comments from relevant Reddit communities, then test it on data we collected from other platforms. The performance of the system (Table \ref{TAB:results_crossplatform}) is very similar to the results we obtain when testing on Reddit (Table \ref{TAB:results_inreddit}(a)). This said, we must be careful not to overstate our method's generalizability. While the degree of generalizability observed is certainly noteworthy (particularly given past work), these platforms all feature similar posting conventions: posts are not length-restricted, are made within well-defined discussion threads, and have a clear textual context. Our method will likely perform well on any such forum-based system. Platforms with quite different conventions, particularly those predominantly populated by short-text posts (e.g., Twitter and Facebook), will likely require additional work. Nonetheless, we believe that the community-based approach presents opportunities for these other platforms as well.
\paragraph{Hateful classifiers are not target-independent.} Hateful conversations are thematic, and the major topics discovered from them are target-related (Table \ref{TAB:topic_overlap}). Not surprisingly, our system performs poorly when tested across targets: we train the classifier on one target and test it on another. The results (see Table \ref{TAB:results_Crosstarget}) provide a strong indication that hateful speech classification systems require target-relevant training.
\paragraph{Detailed Error Analysis.} In order to better understand the performance of our system, we manually inspect a set of erroneously classified posts from the \texttt{CoonTown} training/testing dataset. We characterize the kinds of issues we observe and discuss them here.
\noindent {\it Type I errors.} These posts arise when non-hate group posts are labeled as hate-group posts. Notably, we observe that some of these errors are actually racist comments that originated from other communities in Reddit.
\begin{enumerate}
\item ``well jeez if u pit a nigger against a cunt what do u expect"
\item ``Triskaid is a fucking nigger."
\end{enumerate}
In both of the cases the comments were in fact racist and were therefore correctly labeled. This, of course, points out a potential (though, we would argue minor) weakness of our approach, which is that hate groups are not the {\it only} source of hateful language --- simply the most high-density source.
More frequently, Type I errors featured non-racist comments which had been mislabeled. This is likely because not all content in a hateful community is hateful: some is simply off-topic banter among community members. This adds noise during the training phase, which manifests as classification errors. While certainly an issue, given the dramatic improvement in overall classification performance, we consider this an acceptable trade-off at this stage of the research. Future work should consider ways of focusing training data further on the distinctly hateful content produced by these communities.
\noindent {\it Type II errors.} In most cases where hateful-speech community posts were incorrectly labeled as non-hateful, we primarily find that these were, in fact, non-racist posts that were made to the hateful subreddit. Here are a few examples:
\begin{enumerate}
\item ``and you're a pale virgin with a vitamin d deficiency."
\item ``Whats the deal with you 2? And besides, we're all on the same side here.."
\item ``IP bans do literally nothing, it only takes a moment to change it."
\item ``I can't believe Digg is still up. I can't believe Reddit is still up."
\end{enumerate}
Posts like these constitute noise, in terms of our community-based definition of hateful speech, discussed above. Nonetheless, our system was able to correctly identify them as non-hateful. Taken together with the Type I errors, it appears that the noise implicit in our community-based definition of hateful speech yields a modest increase in Type I error, but is partly filtered out by the classifier in the form of Type II errors (which are not, in fact, errors).
A very small number of other Type II errors are examples of hateful speech, but that target a community other than blacks (in the cases we saw, primarily Jews):
\begin{enumerate}
\item ``Peace and harmony? Yeah that's why they stole that land (now kikeriel) and killed the civilians that lived there before. Did I mention they STILL kill the Palestinians to this day and cover it up? Fuck them."
\item ``quit kissing kikeass"
\item ``You sound like a jew. In a system ruled by money, money can buy anything. Everything is capitalisms fault. But I get why you'd support capitalism since your ``people" invented the whole shebang"
\item ``Losing weight isn't even hard, stop eating like a fucking landwhale, drink lots of water and move your fatass"
\end{enumerate}
Although these comments are hateful, since they are not directed at black people, the system is technically performing according to specification.
Our system missed some cases of obvious racism, such as the following examples. However, such cases constitute only a small fraction of the comments in Type II error.
\begin{enumerate}
\item ``Ok Korea - you know your duty in the impending `blackification' of the globe? I know where I stand"
\item ``Black people are terrible. "
\item ``Pretty soon we will need a dedicated sub for black-on-senior sexual assaults."
\item ``Who is the target audience? I would think black literacy levels would prevent ``nig lit" from ever being a viable book market."
\end{enumerate}
Overall, our analysis of Type II errors indicated that the vast majority of mislabeled comments are not racist and are, therefore, correctly labeled. This suggests that the actual performance of our method is likely higher than what we report.
\paragraph{Imbalanced Datasets} We use balanced datasets for our analysis. Since this assumption may not hold for different data sources, we also perform some initial analysis on imbalanced datasets. As the actual composition of data sources can be variable, we generate testing sets with ratios of hateful to non-hateful content of 1:10, 1:100, and 1:1000. Our preliminary results are similar to the performance on a balanced test set. These results are encouraging but require further analysis. We hope to address the challenge of dataset shift due to mismatch between the composition of testing and training datasets in future work.
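Such imbalanced test sets can be generated by subsampling the hateful posts, for instance as sketched below. The post lists are hypothetical placeholders, and the paper's exact sampling procedure is not specified, so this is only one plausible construction.

```python
import random

def make_imbalanced_test_set(hateful, non_hateful, ratio, seed=0):
    """Subsample hateful posts so hateful : non-hateful is 1 : ratio."""
    rng = random.Random(seed)
    n_hate = max(1, len(non_hateful) // ratio)
    sampled = rng.sample(hateful, min(n_hate, len(hateful)))
    posts = [(p, 1) for p in sampled] + [(p, 0) for p in non_hateful]
    rng.shuffle(posts)
    return posts

# Hypothetical placeholder data standing in for labeled posts.
non_hateful = [f"benign post {i}" for i in range(1000)]
hateful = [f"hateful post {i}" for i in range(200)]
for ratio in (10, 100, 1000):
    test_set = make_imbalanced_test_set(hateful, non_hateful, ratio)
    print(ratio, sum(label for _, label in test_set))
```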
\section{Conclusion}
The presence of hateful speech on online platforms is a growing problem with a need for robust and scalable solutions. In this work, we investigated the limitations of keyword-based methods and introduced a community-based training method as an alternative. Our work makes two key contributions.
First, we highlight two major mechanisms that hurt the performance of keyword-based methods. The shared vocabulary between hateful and support communities causes positive training examples to contain non-hateful content. Also, because keyword lists focus on more widely known slurs, these lists miss many instances of hateful speech that use less common or more nuanced constructions to express hatred all too clearly.
Our second contribution is the idea of using self-identified hateful communities as training data for hateful speech classifiers. This approach both involves far less effort in collecting training data and also produces superior classifiers.
The promising results obtained in this study suggest several opportunities for future work. Foremost is the extension of this approach to other non-forum-based platforms. Twitter and Facebook, for example, are heavily used platforms which mainly feature short-text messages. Such content presents unique challenges that will require new or modified approaches. Another direction involves looking at other high-signal features (syntax, n-grams, and sentiment scores).
In these and other initiatives, we believe that community-based data may play an essential role in producing both better detectors of hateful speech, and a richer understanding of the underlying phenomenon.
\section{Bibliographical References}
\label{main:ref}
\bibliographystyle{lrec2016}
\section{Introduction}
Video continuity is the length of time the video is being played without
interruptions. A low video continuity would result in a stop-start
video that is known to impact on the viewer perceived video quality
\cite{ref:jitterperceptual}. One of the main causes of video discontinuity in streaming video is buffer underflow, caused by, for example, network delay. The traditional way to reduce buffer underflow
occurrences is to prebuffer video at the decoder. The prebuffering
amount needs to be sufficiently large to maintain video continuity.
However, a large prebuffering introduces unwanted initial delays into
the system.
To reduce the amount of prebuffering required while maintaining video
continuity, current approaches make use of adaptive media playout
(AMP) \cite{ref:adaptiveamp1,ref:ampkalman,ref:ampsmoother3}. AMP
approaches reduce the \emph{playout frame rate} of the decoder in
order to avoid buffer underflows, this is because slowing down the
playout frame rate is preferable to halting the playout \cite{ref:scaleisbetter}.
The playout frame rate is defined as the rate of frames being removed
from the decoder buffer for decoding and playout to the viewer. AMP
has been shown to reduce prebuffering while maintaining video continuity
\cite{ref:ampkalman}. However, it introduces \emph{playout distortion}
to the viewers, because the video is being played slower than its natural playout frame rate (also known as the video capture frame rate). Furthermore, AMP schemes adjust the playout frame rate independently of the encoder strategy, which potentially introduces more playout distortion than required. While the playout distortion can be reduced
by limiting the playout slowdown, it potentially affects the video
continuity. Moreover, slowing down the playout frame rate increases
the viewing latency and introduces additional delay into streaming
system.
In this paper, we aim to reduce the amount of prebuffering required
by dealing with the continuity of the video in a reactive manner.
We realize this aim by proposing a framework that performs frame rate
control at both the encoder and decoder as well as introduce a way
to constrain the viewing latency. The key idea is that, if the network
bandwidth drops and the number of frames in the decoder buffer is
low, the encoder should send more frames to the decoder to prevent
buffer underflow from occurring, thus maintaining playout continuity.
In order to approach this systematically, we formulate an optimization
problem that takes into account the video continuity, video quality
and overall playout delay. The contributions of this paper are:
\begin{enumerate}
\item We propose a frame rate control framework that jointly adjusts the
encoder frame generation rate and the playout frame rate. This distinguishes
our framework from conventional approaches that do not perform frame
rate control and AMP approaches that only adjust the playout frame
rate.
\item We derive an optimization formulation of the frame rate control framework
using the technique of virtual buffer. We then use Lyapunov optimization
\cite{ref:neelybook} on this model to systematically derive the optimization
policies. We show that these policies can be decoupled into separate
encoder and decoder optimization policies with feedback between the
two. This allows for a distributed implementation of the policies.
We demonstrate that this framework produces a very low playout distortion
in addition to a significant reduction in the prebuffering requirements
compared to existing approaches.
\item A delay constraint that reduces the accumulated delay from playout
slowdowns. We then show that the delay constrained framework provides
a superior tradeoff between the video quality and the delay introduced
compared to the existing approach.
\item An analysis of the impact of delayed feedback between the receiver
and the sender. We show that delayed feedback has a minimal impact on the optimization policies.
\end{enumerate}
This paper is organized as follows. Section \ref{sec:Related-Work}
reviews the current approaches. Section \ref{sec:Discontinuity-Penalty}
demonstrates how the frame rate control problem can be modelled. Section
\ref{sec:Delay-Constrained-Frame} demonstrates how a delay constraint
can be introduced into the framework. Section \ref{sec:Network-Delay-Impact}
analyzes the delayed feedback on the optimization policies. Section
\ref{sec:Video-Quality-Functions} describes the video quality functions
that can be used with this framework. Section \ref{sec:Performance-Evaluation}
evaluates the performance of the proposed improved framework. Section
\ref{sec:Conclusions} concludes this paper.
\section{Related Work\label{sec:Related-Work}}
Prebuffering video data has been studied in \cite{ref:bufcalc2,ref:bufcalc1,ref:bufcalc3}.
These techniques are focused on calculating the correct amount of
prebuffered data based on a mathematical model to avoid the occurrence
of a buffer underflow once the video playout has started. However,
these techniques do not take into account the varying playout rates
and the resulting reduction in prebuffering into their models. Furthermore,
prebuffering introduces unwanted delay into the system and is known
to have an impact on the user perceived quality of the video \cite{ref:delaynjitterimpact}.
Sender rate adaptation techniques such as encoder rate control \cite{ref:h264rc,ref:x264rc}
and scalable rate control\cite{ref:h264scalable} achieve video continuity
by ensuring the video rate matches the network bandwidth. However,
these approaches do not take into account the network delay variation
that might occur (e.g. due to network jitter). Moreover, it has been
demonstrated that coupling AMP with sender rate adaptation techniques
can further reduce the prebuffering requirements \cite{ref:ampkalman}.
AMP is another class of techniques that is used to improve video continuity.
The main idea behind AMP is to reduce the playout rate of the received
media. This reduces the departure rate of the media in the decoder
buffer and potentially allows the buffer to be filled up to safer
levels.
AMP has been studied extensively for audio applications, \cite{ref:audioamp1,ref:audioamp2,ref:audioamp3,ref:audioamp4,ref:audioamp5,ref:audioamp6}.
These audio AMP techniques depend on audio scaling techniques such
as WSOLA \cite{ref:wsola}, which allows audio to be scaled without
changing its pitch. Recent work by Damnjanovic et al has shown that
audio and video scaling can be done in real-time \cite{ref:audioscale}.
In this paper, we focus on AMP for video.
AMP for video involves scaling the frame intervals to slowdown or
speedup the playout rate. The slowdown or speedup is triggered by
a threshold set on the buffer occupancy. Once the buffer occupancy
drops below the threshold, AMP will slow down the playout rate and
vice versa. There have been studies conducted on the dynamic adjustment
of this threshold \cite{ref:adaptiveamp1,ref:adaptiveamp2,ref:adaptiveamp3,ref:ampseo}.
The adjustment is normally based on the network condition: the poorer the network condition, the higher the threshold. These techniques mainly base the threshold on the buffer occupancy. We instead record
the difference between the receiving frame interval and the playout
frame interval into a virtual buffer and treat it as a penalty, which
is equivalent to soft thresholding.
AMP has also been integrated into the design of packet schedulers
\cite{ref:ampscheduler1,ref:ampscheduler2,ref:ampscheduler3,ref:ampscheduler4,ref:ampscheduler5}.
These techniques tend to slow down the playout rate for important video packets. This ensures that the more important video packets have a higher probability of meeting their deadlines and thus avoid being dropped.
We do not focus on packet scheduling in this paper and our proposed
framework could complement any packet scheduling scheme.
Another aspect of AMP being studied is the smoothness of transition
between playout rate adjustment \cite{ref:ampsmoother3,ref:ampsmoother1,ref:ampsmoother2}.
The goal of these approaches is to ensure that adjustments made to
the playout rate are made as smoothly as possible so as to reduce any
noticeable effects to the viewers. We do not focus on rate smoothing
in the current paper and again any smoothing scheme can be used within
the framework.
Steinbach et al \cite{ref:ampsteinbach} and Kalman et al \cite{ref:ampkalman}
both examined the trade-off between delay and buffer underflow using
two-state Markov models. Kalman et al further proposed AMP-Initial,
AMP-Robust and AMP-Live. AMP-Initial slows down the playout rate until
the buffer reaches a certain target level, this produces a lower perceived
initial buffering delay to the viewer. AMP-Robust slows down the playout
rate if the current buffer occupancy falls below the target buffer
level while AMP-Live slows down or speeds up the playout rate to maintain
the target buffer level.
These AMP approaches mainly examine only the effects of adjusting
the playout frame rate independently and do not consider any encoder
strategy to reduce playout distortion. In contrast, our proposed approach examines the effects on video quality and viewing delay of adjusting both the encoder frame generation rate and the playout frame rate. Since slowing down the playout frame rate also introduces viewing latency, we will propose a way to constrain this latency in our approach.
\section{Frame Rate Control For Video Continuity}
\begin{figure}[H]
\includegraphics[scale=0.43]{encdec}\caption{Encoding-decoding flow with encoder frame generation rate $i(t)$,
sending frame rate $\mu(t)$, receiving frame rate $\lambda(t)$ and
playout frame rate $o(t)$. All rates are in frames per seconds.\label{fig:encdec}}
\end{figure}
Figure \ref{fig:encdec} shows a typical video transmission scenario.
The encoder generates video frames at a rate of $i(t)$ frames per
second (fps) at time $t$ into the encoder buffer. The network transport
protocol then transmits the video data from the encoder buffer into
the network at a rate of $\mu(t)$ fps. The transmitted video data
will be received at the decoder at a rate of $\lambda(t)$ fps in
its buffer. The decoder then proceeds to playout the received video
data from the decoder buffer at a rate of $o(t)$ fps. Video data
arriving after the playout deadline are assumed to be lost. In the
scenario described in fig. \ref{fig:encdec}, there are two mechanisms
that can be used to maintain video continuity: reducing the playout frame rate $o(t)$ and increasing the encoder frame generation rate $i(t)$.
Our analytical model takes network delay into consideration but does
not consider packet losses %
\footnote{Note that even though we do not consider packet loss explicitly, the
simulation results in section \ref{sec:Performance-Evaluation} show
that our optimisation framework works well in the presence of packet
loss.%
}. In particular, an important factor that affects the performance
is the inter-frame delay, which is defined as the difference of network
delays for consecutive video frames. We assume that the frames are
received into the decoder buffer in display order, so reordering is
done before the frames enter the decoder buffer. Thus, inter-frame
delays can be affected by network jitter and packet losses. Notice
that this model allows us to focus on calculating the amount of frames
arriving into and departing from the decoder buffer such that decoder
buffer underflow can be avoided, and video continuity can be maintained.
We also assume, in this paper, that the encoder is able to generate
frames faster than the natural playout frame rate. This is a reasonable
assumption since there are existing encoders such as x264 \cite{ref:x264}
that can encode frames in real-time faster than a typical natural
playout frame rate of 30 frames per second (fps).
Slowing down the video playout frame rate $o(t)$ is termed in the
literature as AMP. The idea is that the slower video playout allows
time for the buffer to fill up to the required level without the need
to stop the video for rebuffering.
Another way to maintain video continuity is to increase the \emph{encoder
frame generation rate} $i(t)$, which refers to the amount of frames
the encoder actually sends out into the network and it is important
to point out that the encoder frame generation rate is \emph{not}
a temporally scaled frame rate. Note that we purposely chose the term
encoder frame generation rate to differentiate it from the commonly
used term of encoder frame rate because they represent two \emph{different}
concepts. The meaning of encoder frame generation rate is best illustrated
by an example: when no frame rate control is used, the encoder frame
generation rate $i(t)$ is the natural playout frame rate. Let us
assume that to be 30 fps; when frame rate control is used, the encoder
may increase the encoder frame generation rate $i(t)$ to say 60 fps
to quickly fill up the decoder buffer to maintain continuity. Note
that these 60 frames are still encoded using the same natural playout
frame rate of 30 fps, so they form the next 2 seconds of video. This
example illustrates that a higher encoder frame generation rate means
more than one second of video is generated in one second but the video
is always encoded using the same natural playout frame rate.
Increasing $i(t)$ potentially allows more frames to reach the decoder and increases the decoder buffer level. This, in turn, helps to improve
the continuity of the video. However, increasing $i(t)$ is likely
to cause the video bitrate to increase as more frames are produced
per second by the encoder. To ensure that the video bitrate does not
exceed the available bandwidth, we introduce additional compression
to the video such that the higher the encoder frame generation rate
$i(t)$, the higher the compression applied.
For a given encoder generation rate $i(t)$, the higher compression
is obtained by the rate controller adjusting the encoder such that
the average frame size produced by the encoder is:
\begin{equation}
\textrm{average frame size}=r(i(t))=\frac{ABR(t)}{i(t)}\label{eq:fropt_avgfrmsiz}\end{equation}
where $ABR(t)$ is the available bandwidth%
\footnote{Ideally the encoder should use $ABR(t+d_{s})$, where $d_{s}$ is
the sender buffer delay. However, since this is difficult to determine,
we use $ABR(t)$ to approximate $ABR(t+d_{s})$.%
} at time $t$. In practice, the average frame size produced by a rate
control strategy may not satisfy \eqref{eq:fropt_avgfrmsiz}. However,
\eqref{eq:fropt_avgfrmsiz} is what most rate control schemes try
to meet as they use \eqref{eq:fropt_avgfrmsiz} as part of their fluid
flow model to determine the amount of bits to allocate to a frame
\cite{ref:h264rc,ref:x264rc,ref:mpeg4rc}.
It can be shown that when \eqref{eq:fropt_avgfrmsiz} is satisfied
by the rate controller, the resulting video bitrate will not exceed
the available bandwidth for different values of the encoder frame
generation rate $i(t)$. Also, the average frame size decreases when
$i(t)$ is increased which would tend to reduce the frame quality.
Finally, a higher $i(t)$ also allows more frames to be sent to the
decoder, thus it allows us to increase the buffer level and maintain
video continuity.
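A quick numerical sketch of \eqref{eq:fropt_avgfrmsiz} illustrates this trade-off: doubling the encoder frame generation rate halves the target average frame size, so the produced bitrate stays within the available bandwidth. The bandwidth value below is illustrative.

```python
def average_frame_size(abr, i_t):
    """Target average frame size r(i(t)) = ABR(t) / i(t)."""
    return abr / i_t

ABR = 1_200_000.0  # available bandwidth in bits/s (illustrative value)
for i_t in (30.0, 60.0):
    size = average_frame_size(ABR, i_t)
    # The produced bitrate i(t) * r(i(t)) equals ABR(t) by construction,
    # so it never exceeds the available bandwidth.
    print(i_t, size, i_t * size <= ABR)
```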
Reducing $o(t)$ and increasing $i(t)$ increases playout distortion
and reduces frame quality respectively while improving video continuity.
This suggests that an optimal trade-off needs to be found. To do this,
we model the frame rate control problem as an optimization problem
and make use of Lyapunov optimization to obtain policies that help
determine the optimal trade-off.
In this paper, we adjust the encoder frame generation rate $i(t)$
and playout frame rate $o(t)$ by adjusting the encoder frame generation
interval $\frac{1}{i(t)}$ and the playout frame interval $\frac{1}{o(t)}$
respectively. The reason we chose to adjust the intervals is that
it allows the optimization problem to be concave and, therefore, easier
to solve. This issue will be discussed in more detail in Section \ref{sec:Video-Quality-Functions}.
\section{Discontinuity Penalty For Frame Rate Optimization\label{sec:Discontinuity-Penalty}}
\subsection{Buffering Criteria}
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=1.1]{frmmodel}
\par\end{centering}
\caption{Receiver buffer model.}
\label{fig:frmmodel}
\end{figure}
Since the video discontinuity is correlated to decoder buffer underflows,
we first study how buffer underflow occurs. To do this, we make use
of the receiver model illustrated in fig. \ref{fig:frmmodel}. We
assume that the sender, network and receiver all work in slotted time.
At time slot $t$, the receiver receives $\lambda(t)$ frames (receiving
frame rate) and stores it into the receiver buffer. Simultaneously,
$o(t)$ frames are being removed from the buffer and played out to
the viewer (playout rate) at time $t$. Let $b(t)$ be the amount
of buffered video frames in the buffer at time $t$ and $T$ be the
length of the sequence in time slots, then to avoid an underflow the
following condition needs to be met:
\begin{equation}
b(t)\geq\sum_{\tau=t}^{T}o(\tau)-\sum_{\tau=t}^{T}\lambda(\tau)\quad\quad\forall T\geq t\label{eqn:b0}\end{equation}
Equation \eqref{eqn:b0} intuitively means that, to avoid a buffer
underflow, the cumulative buffer drainage for the rest of the sequence
playing time should not exceed the current buffer occupancy $b(t)$.
Now we refine the above model to make use of \emph{frame intervals}.
This is done to build up a system model based on frame intervals. To do so, we set $\lambda(t)=\frac{1}{r(t)}$
and $o(t)=\frac{1}{p(t)}$. Equation \eqref{eqn:b0} then becomes:
\begin{align}
b(t) & \geq\sum_{\tau=t}^{T}\frac{r(\tau)-p(\tau)}{r(\tau)\, p(\tau)}\label{eqn:b1}\end{align}
We assume that $r_{min}\leq r(t)\leq r_{max}$ and $p_{min}\leq p(t)\leq p_{max}$,
i.e. both $r(t)$ and $p(t)$ are bounded. That would mean that we
can approximate \eqref{eqn:b1} as:
\begin{align}
b(t)\, r_{min}\, p_{min} & \geq\sum_{\tau=t}^{T}r(\tau)-p(\tau)\label{eqn:b2}\end{align}
Note that the choice of using $r_{min}$ and $p_{min}$ results in
a more conservative bound. An alternative bound, which is looser,
is to replace the left-hand-side of \eqref{eqn:b2} by $b(t)\, r_{max}\, p_{max}$.
We will show in simulation results (section \ref{sec:Performance-Evaluation})
that these two choices give similar results.
If we divide \eqref{eqn:b2} by $T-t$, the remaining time slots left,
we can estimate the buffer underflow bound \emph{per time slot}. This
will result in:
\begin{equation}
\frac{b(t)\, r_{min}\, p_{min}}{T-t}\geq\hat{r}-\hat{p}\label{eqn:b3}\end{equation}
where $\hat{r}$ and $\hat{p}$ are the averages of $r(t)$ and $p(t)$
respectively. Equation \eqref{eqn:b3} provides us with a way to design
the optimization policy to avoid buffer underflow in a time slot.
Since Lyapunov optimization works on a per time slot basis, we prefer
\eqref{eqn:b3} over \eqref{eqn:b2}. Essentially, we need to design
a policy that produces a \emph{receiving frame interval} $r(t)$ and
a \emph{playout frame interval} $p(t)$ in such a way that the above
bound is met.
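The per-time-slot criterion \eqref{eqn:b3} can be checked directly, as in the following sketch (the buffer occupancy and frame-interval bounds used here are illustrative values, not taken from the paper's experiments):

```python
def underflow_bound(b_t, r_min, p_min, T, t):
    """Per-time-slot bound beta(t) = b(t) * r_min * p_min / (T - t)."""
    return (b_t * r_min * p_min) / (T - t)

def at_risk(r_avg, p_avg, beta):
    """True when the bound is violated, i.e. the average receiving
    interval exceeds the average playout interval by at least beta."""
    return (r_avg - p_avg) >= beta

# 60 buffered frames, 30 fps interval bounds, 300 time slots remaining.
beta = underflow_bound(b_t=60, r_min=1 / 30, p_min=1 / 30, T=300, t=0)
print(at_risk(1 / 30 + 1e-3, 1 / 30, beta))  # frames arriving too slowly
print(at_risk(1 / 30, 1 / 30, beta))         # arrival matches playout
```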
\subsection{System Model\label{sub:fro-imp-System-Model}}
\begin{figure}[H]
\includegraphics[scale=0.5]{sysmodel}
\caption{System model showing the frame intervals.}
\label{fig:sysmodel}
\end{figure}
We now show how $r(t)$ and $p(t)$ are produced in the complete system
model. Fig. \ref{fig:sysmodel} illustrates the system model. In a
time slot $t$, the encoder at the sender produces video frames at
intervals of $f(t)$ and stores them into the sender buffer. The sender
sends the video frames in its buffer at intervals of $s(t)$. Note
that $f(t)$ and $s(t)$ are the \emph{encoder frame generation interval}
and the \emph{sending frame interval} respectively. Simultaneously,
the receiver receives video frames from the sender at intervals of
$r(t)$ and puts them into the decoder buffer. The decoder in the
receiver plays out the video frames at intervals of $p(t)$ to the
viewer, $p(t)$ is the \emph{playout frame interval}. The network
will also produce a forward delay of $d_{f}$ and a backward delay
of $d_{b}$.
The goal of our framework is to jointly adjust the encoder frame generation
interval $f(t)$ and the playout frame interval $p(t)$ to maintain
video continuity. To simplify the model, we assume that $f(t)=s(t)$. While this essentially assumes no delay caused by the sender buffer, such delays are simulated in our experiments later on. We also assume that $f(t)$ is bounded within the range
$[f_{min},f_{max}]$ and $r(t)$ is defined by a network delay variation
function $F(s(t))$ as : $r(t)=F(s(t))=F(f(t))$, since $f(t)=s(t)$.
This means that we can represent the delay variation based on the
encoder frame generation interval $f(t)$. In this paper, we specify
$F(f(t))$ as:
\begin{equation}
F(f(t))=e(t)\times f(t)\label{eqn:Fdefine}\end{equation}
where $e(t)$ is the frame interval scaling factor due to delay variations
from the network. In practice, we estimate $e(t)$ at the receiver
by:
\begin{equation}
e(t)=\frac{r(t)}{f(t-d_{f})}\label{eqn:delayscale}\end{equation}
where $d_{f}$ is the forward delay between the sender and receiver.
Note that if there is no delay (i.e. $d_{b}=d_{f}=0$), it will mean
that $F(f(t))=r(t)$.
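In practice, the receiver-side estimate \eqref{eqn:delayscale} amounts to a single division of the observed receiving interval by the corresponding generation interval; for example (illustrative values):

```python
def estimate_delay_scale(recv_interval, sent_interval):
    """Receiver-side estimate e(t) = r(t) / f(t - d_f)."""
    return recv_interval / sent_interval

# Frames generated every 1/30 s arrive every 1/25 s: the network is
# stretching frame intervals by 20%, so e(t) = 1.2.
print(estimate_delay_scale(1 / 25, 1 / 30))
```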
\subsection{General Optimization Problem\label{sub:General-Optimization-Problem}}
There are three main objectives that we want to optimize: 1. frame
quality, 2. playout distortion and 3. continuity. Frame quality is
defined as the perceived visual quality of the video. This will be
represented as a frame quality function $g(f(t))$, where $g(f(t))$
is an increasing function of $f(t)$. Playout distortion is the perceived
distortion when the playout rate deviates from the natural frame rate
and will be represented by a function $h(p(t))$, where $h(p(t))$
is a convex function of $p(t)$. Both $g(f(t))$ and $h(p(t))$ are
non-negative functions, i.e. $g(f(t))\geq0$ and $h(p(t))\geq0$,
and are assumed to be uncorrelated. We will suggest a specific form
for $g(f(t))$ and $h(p(t))$ later in Section \ref{sec:Video-Quality-Functions}.
Continuity is the length of time the video is played without interruptions
due to buffer underflow. We ensure continuity in this framework by
ensuring that a \emph{virtual buffer stabilizes}, this concept will
be explained further in the next section.
With the system model described in the previous section, we now formulate
a general optimization problem:
\begin{align}
\mathrm{\textrm{Maximize:}} & \quad g(f(t))-h(p(t))\label{eqn:lyapoptobj}\\
\textrm{Subject to:} & \quad U(t)\textrm{ is stable}\label{eqn:lyapoptobj1}\\
& \quad f_{min}\leq f(t)\leq f_{max}\label{eqn:lyapoptobj2}\\
& \quad p_{min}\leq p(t)\leq p_{max}\label{eqn:lyapoptobj3}\end{align}
where \emph{$U(t)$} is the virtual buffer representing the \emph{discontinuity
penalty}. Since the objective \eqref{eqn:lyapoptobj} is separable,
maximizing \eqref{eqn:lyapoptobj} can be seen as maximizing the frame
quality function $g(f(t))$ and minimizing the playout distortion
function $h(p(t))$. The constraint \eqref{eqn:lyapoptobj1} is the
continuity constraint. Constraints \eqref{eqn:lyapoptobj2} and \eqref{eqn:lyapoptobj3}
are the limits set on $f(t)$ and $p(t)$ respectively.
To see how the above general optimization problem is derived, we first
replace \eqref{eqn:lyapoptobj1} with a continuity constraint derived
from \eqref{eqn:b3}:
\begin{equation}
\mathbb{E}\{F(f(t))-p(t)\}<\beta(t)\label{eqn:opts1}\end{equation}
where $\beta(t)=\frac{b(t)\, r_{min}\, p_{min}}{T-t}$. Constraint
\eqref{eqn:opts1} can be satisfied by decreasing $f(t)$ and/or
increasing $p(t)$. However, decreasing $f(t)$ would result in a
lower frame quality given by $g(f(t))$ and increasing $p(t)$ would
result in a higher playout distortion given by $h(p(t))$. The optimization
policy would need to handle these tradeoffs. To solve this optimization
problem, we make use of the concepts of virtual buffer and Lyapunov
optimization.
\subsection{Virtual Buffer And Stability\label{sub:Virtual-Buffer}}
Virtual buffers are a concept introduced by Neely et al \cite{ref:neelyvb}
to replace certain constraints of an optimization problem. To determine
a suitable virtual buffer for our problem, we make an initial virtual
buffer design to represent the continuity of the video. The virtual
buffer will be updated at every time slot $t$, and is updated as:
\begin{equation}
U(t+1)=[U(t)-p(t)]^{+}+F(f(t))\label{eqn:vb1}\end{equation}
where $U(0)=0$. The virtual buffer $U(t)$ is lower bounded by 0
(i.e. always non-negative). This virtual buffer can be seen as the
discontinuity penalty. If $F(f(t))$ is higher than $p(t)$, it means
that the network throughput is lower than the playout rate, i.e. the
rate of video frames received is slower than the amount of video frames
being played out. Thus, the discontinuity penalty $U(t)$ \emph{accumulates}
the difference of $F(f(t))-p(t)$ as a penalty. If the network subsequently
improves and $F(f(t))$ is lower than $p(t)$, the penalty in the
buffer then \emph{reduces} by $p(t)-F(f(t))$. The higher $U(t)$
becomes, the higher the possibility of a buffer underflow. In an ideal
network, where $F(f(t))\leq p(t)$ at all times, $U(t)$ will never
accumulate any penalty.
To ensure that $U(t)$ does not keep increasing, we want $U(t)$ to
\emph{stabilize}. We will show later that by stabilizing $U(t)$,
we can ensure that the video continuity is preserved. The definition
of stability used here is $\mathbb{E}\{U\}\triangleq\limsup_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}\{U(\tau)\}<\infty$.
This intuitively means that the buffer is stable if it does not grow
infinitely large over time. We would also like the virtual buffer to
meet the continuity constraint \eqref{eqn:opts1} when it stabilizes.
To do that, we extend the initial virtual buffer \eqref{eqn:vb1} as:
\begin{equation}
U(t+1)=[U(t)-p(t)-\beta(t)]^{+}+F(f(t))\label{eqn:vb2}\end{equation}
To see how it works, notice that for the discontinuity penalty $U(t)$
to grow infinitely, the following condition needs to be met: $F(f(t))-p(t)>\beta(t)$.
Therefore, in order for $U(t)$ to stabilize the following condition
needs to be true: $F(f(t))-p(t)\leq\beta(t)$. This is the continuity
constraint as defined in \eqref{eqn:opts1}, so when $U(t)$ stabilizes,
the continuity constraint will be met. The discontinuity penalty $U(t)$
is maintained at the receiver in our design.
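As a concrete illustration, the update \eqref{eqn:vb2} can be sketched in a few lines; all the numerical values below are illustrative, not taken from the paper:

```python
# Sketch of the discontinuity-penalty update in \eqref{eqn:vb2}.
# F_val stands for F(f(t)) (arrivals this slot); p and beta are the
# playout interval and slack term; numbers below are illustrative.

def update_U(U, F_val, p, beta):
    """U(t+1) = [U(t) - p(t) - beta(t)]^+ + F(f(t))."""
    return max(U - p - beta, 0.0) + F_val

# When arrivals persistently exceed playout plus slack, the penalty
# grows; when the network recovers, it drains back down.
U = 0.0
for _ in range(5):               # congested period: F > p + beta
    U = update_U(U, F_val=2.0, p=1.0, beta=0.5)
U_congested = U                  # accumulated penalty
for _ in range(20):              # recovery period: F < p + beta
    U = update_U(U, F_val=0.5, p=1.0, beta=0.5)
U_recovered = U
```

The penalty accumulates during the congested slots and drains once $F(f(t)) < p(t) + \beta(t)$, mirroring the behaviour described above.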
Thus, with the stability of the discontinuity penalty $U(t)$ as a
constraint, we then obtain the general optimization problem presented
in Section \ref{sub:General-Optimization-Problem}. We will demonstrate
in Section \ref{sub:Delay-Constrained-Exp}, using simulations, that
$U(t)$ is positively correlated with the video discontinuity.
\subsection{Lyapunov Optimization Derivation\label{sub:Discontinuity-Penalty-Derivation}}
We show here how we convert the optimization problem presented in
Section \ref{sub:Virtual-Buffer} into a separate encoder and decoder
optimization policies using Lyapunov optimization. We assume that
there is no network delay between the sender and receiver (i.e. $d_{f}=d_{b}=0$).
This is to simplify the analysis presented here. We will relax this
assumption in Section \ref{sec:Network-Delay-Impact}.
We define a Lyapunov function $L(U(t))$ to represent the \textquotedbl{}energy\textquotedbl{}
of the discontinuity penalty $U(t)$ at time $t$; it can be any
arbitrary non-negative function. We use the following Lyapunov function
in this paper:
\begin{equation}
L(U(t))\triangleq\frac{U^{2}(t)}{2}\label{eqn:lyap}\end{equation}
We then define the one-step conditional Lyapunov drift $\Delta(U(t))$
as:
\begin{equation}
\Delta(U(t))\triangleq\mathbb{E}\{L(U(t+1))-L(U(t))|U(t)\}\label{eqn:1stepdrift}\end{equation}
Equation \eqref{eqn:1stepdrift} can be understood as the expected
change in the energy over one time slot. The goal of Lyapunov optimization
is to show that this energy reduces or stays the same at each slot
(i.e. \eqref{eqn:1stepdrift} produces a negative or zero drift).
This would ensure the stability of the buffer, which in turn enforces
the continuity of the video playout. To show that $U(t)$ stabilizes,
we need to convert the buffer update equation \eqref{eqn:vb2} into
a one-step drift \eqref{eqn:1stepdrift}.
To do that, we square the buffer update equation \eqref{eqn:vb2},
divide it by two, and take expectations of the result (see Section
\ref{sec:apx-drift1-froimp} in the appendix for details),
we will get the following expression:
\begin{equation}
\Delta(U(t))\leq B-U(t)\mathbb{E}\{\beta(t)+p(t)-F(f(t))|U(t)\}\label{eqn:drift1}\end{equation}
where $B$ is a constant defined as:
\begin{equation}
B=\frac{1}{2}\bigg(r_{max}^{2}+\bigg(p_{max}+\frac{T\, r_{min}\, p_{min}}{T-t}\bigg)^{2}\bigg)\label{eqn:driftB}\end{equation}
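For intuition, the squaring step behind \eqref{eqn:drift1} can be sketched as follows (a condensed form of the appendix argument, writing $\mu(t)=p(t)+\beta(t)$ as shorthand introduced only here):

```latex
\begin{align*}
U^{2}(t+1) &= \left(\left[U(t)-\mu(t)\right]^{+}+F(f(t))\right)^{2}\\
           &\leq U^{2}(t)+\mu^{2}(t)+F^{2}(f(t))-2U(t)\big(\mu(t)-F(f(t))\big)
\end{align*}
```

Dividing by two and taking conditional expectations bounds the drift by $\frac{1}{2}\mathbb{E}\{F^{2}(f(t))+\mu^{2}(t)|U(t)\}-U(t)\mathbb{E}\{\mu(t)-F(f(t))|U(t)\}$; using $F(f(t))\leq r_{max}$ and (assuming $b(t)\leq T$, as in the appendix) $\mu(t)\leq p_{max}+\frac{T\,r_{min}\,p_{min}}{T-t}$ then yields the constant $B$ in \eqref{eqn:driftB}.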
From \eqref{eqn:drift1}, we can use the results in \cite{ref:neelybook}
to prove that $U(t)$ stabilizes (see Section \ref{sub:Discontinuity-Penalty-bnd}
in the appendix). Once $U(t)$ is proven to stabilize,
it can be shown using the results in \cite{ref:neelyvb} that the
continuity constraint \eqref{eqn:opts1} is satisfied. However, to
optimize the frame quality utility $g(f(t))$ and the playout distortion
$h(p(t))$, we need to manipulate the inequality further. Subtracting
from both sides the term $V\mathbb{E}\{g(f(t))-h(p(t))|U(t)\}$, which
is the expectation of \eqref{eqn:lyapoptobj} scaled by a positive
constant $V>0$, and rearranging the terms, we get:
\begin{align}
\Delta(U(t)) & -V\mathbb{E}\{g(f(t))-h(p(t))|U(t)\} & \notag\label{eqn:lyapobj}\\
\leq B & -\mathbb{E}\{U(t)\beta(t)|U(t)\} & \notag\\
& -\mathbb{E}\{Vg(f(t))-U(t)F(f(t))|U(t)\} & \notag\\
& -\mathbb{E}\{U(t)p(t)-Vh(p(t))|U(t)\}\end{align}
The third term on the right hand side of \eqref{eqn:lyapobj} is a
function of the encoder frame generation interval $f(t)$ only and
represents the sender optimization policy. The last term of \eqref{eqn:lyapobj}
is a function of the playout frame interval $p(t)$ and represents
the receiver optimization policy.
To summarize:
\subsubsection{Encoder optimization policy}
From the third term of \eqref{eqn:lyapobj}, the encoder in the sender
uses the frame interval scaling factor $e(t)$ (defined in \eqref{eqn:delayscale})
and the discontinuity penalty $U(t)$ fed back from the receiver to
calculate $F(f(t))$ using \eqref{eqn:Fdefine}, and it
chooses $f(t)$ at each time slot as the solution of the following
optimization:
\begin{align}
\textrm{Maximize:} & \quad Vg(f(t))-U(t)F(f(t))\notag\label{eqn:encobj}\\
\textrm{Subject to:} & \quad f_{min}\leq f(t)\leq f_{max}\end{align}
\subsubsection{Decoder optimization policy }
From the last term of \eqref{eqn:lyapobj}, the decoder in the receiver
will observe the current discontinuity penalty $U(t)$ and choose
$p(t)$ at each time slot as the solution of the following optimization:
\begin{align}
\textrm{Maximize:} & \quad U(t)p(t)-Vh(p(t))\notag\label{eqn:decobj}\\
\textrm{Subject to:} & \quad p_{min}\leq p(t)\leq p_{max}\end{align}
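To make the per-slot policies concrete, here is a minimal sketch. The decoder's closed form follows from differentiating its objective with the quadratic $h(p)=m(p_{n}-p)^{2}$ adopted later in Section \ref{sec:Video-Quality-Functions}; `F_of_f` is a stand-in for \eqref{eqn:Fdefine}, whose exact form is not repeated here:

```python
# Illustrative solvers for the two decoupled per-slot policies.

def decoder_policy(U, V, m, p_n, p_min, p_max):
    """argmax_{p in [p_min, p_max]} of U*p - V*m*(p_n - p)^2.

    Setting the derivative U + 2*V*m*(p_n - p) to zero gives the
    unconstrained maximiser, which is then clipped to the bounds.
    """
    p_star = p_n + U / (2.0 * V * m)
    return min(max(p_star, p_min), p_max)

def encoder_policy(U, V, g, F_of_f, f_min, f_max, steps=1000):
    """Grid search for argmax_{f} of V*g(f) - U*F(f) (1-D, so cheap)."""
    best_f, best_val = f_min, float("-inf")
    for i in range(steps + 1):
        f = f_min + (f_max - f_min) * i / steps
        val = V * g(f) - U * F_of_f(f)
        if val > best_val:
            best_f, best_val = f, val
    return best_f
```

With $U=0$ the decoder simply plays at the natural interval; as $U$ grows it slows down, clipped to $[p_{min},p_{max}]$.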
Notice that the optimization policies are decoupled into separate
optimization subproblems for playout frame and encoder frame generation
intervals. Note that the decoder is responsible for updating $U(t)$
using equation \eqref{eqn:vb2} and sending the value of $U(t)$ to
the encoder. Furthermore, under appropriate conditions, the decoupled
problems are convex. This makes the problem easier and more flexible
to solve. Given that Lyapunov optimization minimizes the right hand
side of \eqref{eqn:lyapobj} instead of the original optimization
problem \eqref{eqn:lyapoptobj} (which is hard to solve), the objective
function value realized by Lyapunov optimization is sub-optimal but
its deviation from the optimal value can be controlled (see Section
\ref{sub:Discontinuity-Penalty-bnd} in the appendix).
\section{Delay Constrained Frame Rate Optimization\label{sec:Delay-Constrained-Frame}}
While slowing down the video playout allows us to reduce the occurrences
of buffer underflows and preserve the video continuity, it introduces
extra viewing latency to the viewers. This becomes an issue when the
system has a delay budget, as frequent playout slowdowns might cause
the delay budget to be exceeded.
In this section, we examine the problem when there is a constraint
imposed on the viewing latency. We focus on constraining the additional
playout latency generated by slowing down the playout. More specifically,
we want to impose a constraint on how often playout slowdowns occur
and how much the playout can be slowed down. This is done by introducing
another virtual buffer, called the \emph{delay accumulator}, into
the problem to represent the constraint on the accumulated delay due
to playout slowdowns. We then use Lyapunov optimization to ensure
that this constraint is met.
\subsection{Delay Constrained Policy Design}
The delay constraint can be described as constraining the accumulated
playout slowdowns used over the lifetime of the whole video sequence.
Specifically, let $\theta$ be the maximum playout slowdown delay
tolerable for the video application and $p_{n}$ represent the natural
playout interval of the video. Then, the constraint could be written
as:
\begin{equation}
\sum_{\tau=0}^{T-1}(p(\tau)-p_{n})\leq\theta\label{eqn:delcon}\end{equation}
Recall that $T$ is the length of the sequence in time slots. Since
Lyapunov optimization works on a per time slot basis, we need to express
the constraint \eqref{eqn:delcon} as a constraint for each time slot.
To do this, note that for the current time slot $t$, the per-slot
maximum tolerable playout slowdown delay will be: $t_{d}=\frac{\theta}{T-t}$,
where $T-t$ represents the number of time slots remaining for the
sequence. With this, we can rewrite the delay constraint \eqref{eqn:delcon}
as:
\begin{equation}
\mathbb{E}\{p(t)-p_{n}\}\leq t_{d}\label{eqn:delcon2}\end{equation}
Then, what remains to be done is to design a policy that ensures that
constraint \eqref{eqn:delcon2} is met at each time slot. It can be
seen that by adding constraint \eqref{eqn:delcon2} to the general
optimization problem presented in Section \ref{sub:Virtual-Buffer},
we obtain a problem that provides video continuity while ensuring
that the delay constraint is met.
\subsection{Delay Accumulator Virtual Buffer}
To apply the delay constraint \eqref{eqn:delcon2} into the framework
using Lyapunov optimization, we again make use of the virtual buffer
concept. We introduce another virtual buffer named as the \emph{delay
accumulator}. The delay accumulator is a virtual buffer that keeps
track of the accumulated delay caused by playout slowdowns. Every time
a playout slowdown is performed, the delay accumulator increases
correspondingly; conversely, it decreases when a playout speedup is
performed. Formally, the buffer dynamics describing the
delay accumulator for each time slot are:
\begin{equation}
X(t+1)=[X(t)-p_{n}-t_{d}]^{+}+p(t)\label{eqn:Xbuff}\end{equation}
Note that the terms $p_{n}$, $p(t)$ and $t_{d}$ are taken from
constraint \eqref{eqn:delcon2} above. Therefore, by showing that
the delay accumulator $X(t)$ stabilizes, we can show that the delay
constraint \eqref{eqn:delcon2} can be satisfied \cite{ref:neelyvb}.
This also means that the delay constraint can be written as: $X(t)\textrm{ is stable}$.
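A sketch of the delay-accumulator bookkeeping; the 25\% slowdown (as in \emph{AMP25} later) and the budget $\theta$ below are illustrative:

```python
# Sketch of the delay-accumulator update in \eqref{eqn:Xbuff}; the
# per-slot budget t_d = theta / (T - t) spreads the total tolerable
# slowdown theta over the remaining slots. Numbers are illustrative.

def update_X(X, p, p_n, t_d):
    """X(t+1) = [X(t) - p_n - t_d]^+ + p(t)."""
    return max(X - p_n - t_d, 0.0) + p

theta, T = 2.0, 100          # total slowdown budget and sequence length
p_n = 1.0 / 30.0             # natural playout interval (30 fps)
X = 0.0
for t in range(50):
    t_d = theta / (T - t)    # per-slot budget for this slot
    p = p_n * 1.25           # constant 25% playout slowdown
    X = update_X(X, p, p_n, t_d)
```

Because the per-slot slowdown $p - p_{n}$ stays below $t_{d}$ here, the accumulator remains bounded, i.e. the delay constraint is respected.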
We now show how Lyapunov optimization can be used to derive optimization
policies to solve the above problem.
\subsection{Lyapunov Optimization Derivation}
Note that there are two buffers in the problem, the discontinuity
penalty $U(t)$ and the delay accumulator $X(t)$. To stabilize both
of these simultaneously, we first redefine the Lyapunov function to
be:
\begin{equation}
L(U(t),X(t))\triangleq\frac{U^{2}(t)+X^{2}(t)}{2}\label{eqn:Xlyap}\end{equation}
The one step conditional drift also needs to consider both buffers
and is defined as:
\begin{align}
\Delta(U(t),X(t))\triangleq\; & \mathbb{E}\{L(U(t+1),X(t+1))\notag\label{eqn:Xonestep}\\
& \quad-L(U(t),X(t))|U(t),X(t)\}\end{align}
To shorten the formulas, we use $\Delta$, $U$, $X$, $p$ and $f$
to represent $\Delta(U(t),X(t))$, $U(t)$, $X(t)$, $p(t)$ and $f(t)$
respectively. By squaring \eqref{eqn:vb2} and \eqref{eqn:Xbuff},
taking expectations and dividing by 2, we get (see Section \ref{sec:apx-Xdrift1-froimp}
in the appendix for details):
\begin{align}
\Delta(U,X)\leq & \; B+C-U\mathbb{E}\{p-F(f)|U,X\}\notag\label{eqn:Xdrift1}\\
& \quad-X\mathbb{E}\{p_{n}+t_{d}-p|U,X\}\end{align}
where $B$ is defined as in \eqref{eqn:driftB} and $C=\frac{1}{2}\bigg(p_{max}^{2}+(p_{n}+t_{d})^{2}\bigg)$.
It can be proven that \eqref{eqn:Xdrift1} results in stability for
both the discontinuity $U$ and the delay accumulator $X$ \cite{ref:neelyvb}
(see Section \ref{sub:Delay-Constrained-bnd} of the supplementary
materials). Furthermore, it can be proven that the stabilization of
$X$ implies that the constraint \eqref{eqn:delcon2} can be met by
using the results from \cite{ref:neelyvb}.
To optimize the frame quality and the playout distortion, we subtract
$V\mathbb{E}\{g(f)-h(p)|U,X\}$ from both sides of \eqref{eqn:Xdrift1}
to obtain:
\begin{align}
\Delta(U,X) & -V\mathbb{E}\{g(f)-h(p)|U,X\} & \notag\label{eqn:Xlyapobj}\\
\leq B+C & -X\mathbb{E}\{p_{n}+t_{d}|U,X\}\notag\\
& -\mathbb{E}\{Vg(f)-UF(f)|U,X\} & \notag\\
& -\mathbb{E}\{Up-Vh(p)-Xp|U,X\}\end{align}
Note that the encoder optimization policy (second last term) remains
the same as \eqref{eqn:encobj}. The last term shows that decoder
optimization policy \eqref{eqn:decobj} has an additional penalty
term $-Xp$. This means that as $X$ increases, the decoder is
penalized more for high playout interval $p$ values, which
encourages the decoder to choose a lower $p$ whenever the accumulated
delay in $X$ is high. Lastly, the performance bound for the Lyapunov
optimization can be derived (see Section \ref{sub:Delay-Constrained-bnd}
of the appendix).
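With the same quadratic $h(p)$, the extra $-Xp$ term only shifts the decoder's closed form; a sketch under the same assumptions as before, not the paper's exact implementation:

```python
# Decoder policy with the extra -X*p penalty from \eqref{eqn:Xlyapobj}.
# With h(p) = m*(p_n - p)^2, the zero of the derivative
# (U - X) + 2*V*m*(p_n - p) shifts the target interval by (U - X).

def decoder_policy_delay(U, X, V, m, p_n, p_min, p_max):
    """argmax_{p in [p_min, p_max]} of U*p - V*m*(p_n - p)^2 - X*p."""
    p_star = p_n + (U - X) / (2.0 * V * m)
    return min(max(p_star, p_min), p_max)
```

When $X$ exceeds $U$, the chosen interval drops below $p_{n}$, i.e. the decoder speeds up to pay back accumulated delay.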
\section{Network Delay Impact\label{sec:Network-Delay-Impact}}
In Sections \ref{sec:Discontinuity-Penalty} and \ref{sec:Delay-Constrained-Frame},
we showed how Lyapunov optimization can derive optimization policies
that help ensure the video continuity and enforce a delay constraint.
However, the assumption made in those sections for the derivations
is that there is no network delay. Specifically, we assumed that the
feedback delay from the receiver to the sender is non-existent. In
this section, we relax this network delay assumption and analyze the
impact of network delay on the optimization policies.
Recall that the network generates a forward delay of $d_{f}$ and
a backward delay of $d_{b}$ from the system model discussed in Section
\ref{sub:fro-imp-System-Model}. With network delay, the discontinuity
penalty $U(t)$ updating in the receiver is performed as:
\begin{equation}
U(t+1)=[U(t)-\gamma(t)]^{+}+F(f(t-d_{f}))\label{eqn:rxbuf}\end{equation}
where $\gamma(t)=p(t)+\beta(t)$ and recall that $\beta(t)=\frac{b(t)\, r_{min}\, p_{min}}{T-t}$.
Note that the difference between the previously presented discontinuity
penalty buffer dynamics in \eqref{eqn:vb1} and above is that \eqref{eqn:rxbuf}
is based on the forward delayed encoder frame generation interval
$f(t-d_{f})$. Furthermore, the sender relies on feedback from the
receiver, so it chooses $f(t)$ based on a backward delayed $U(t-d_{b})$.
There are two possible issues that might impact on the optimization
policies when delays are present:
\begin{enumerate}
\item the delayed discontinuity penalty $U(t)$ feedback from the receiver
to the sender means the encoder optimization policy will need to make
use of $U(t-d_{b})$.
\item \label{enu:fro-imp_issue-2}the current choice of $f(t)$ at the sender
would only affect the receiver $d_{f}$ time slots later.
\end{enumerate}
What we need to do is to derive optimization policies that take the
above two issues into account and show that these policies stabilize
the discontinuity penalty $U(t)$.
To begin, note that \eqref{eqn:rxbuf} can be represented recursively
as:
\begin{align}
U(t+1) & =[U(t)-\gamma(t)]^{+}+F(f(t-d_{f}))\notag\\
U(t) & =[U(t-1)-\gamma(t-1)]^{+}+F(f(t-d_{f}-1))\notag\\
& \vdots\notag\\
U(t-d_{b}+1) & =[U(t-d_{b})-\gamma(t-d_{b})]^{+}+F(f(t-d_{f}-d_{b}))\end{align}
This implies that $U(t)$ can be bounded as:
\begin{equation}
U(t)\leq\left[U(t-d_{b})-\sum_{\tau=t-d_{b}}^{t-1}\gamma(\tau)\right]^{+}+\sum_{\tau=t-d_{b}-d_{f}}^{t-d_{f}-1}F(f(\tau))\label{eqn:ut}\end{equation}
Issue 2 suggests that we need to predict $d_{f}$ time slots in the
future. To do that, we change the buffer updating equation \eqref{eqn:rxbuf}
into a $d_{f}$ slot update:
\begin{equation}
U(t+d_{f}+1)\leq\left[U(t)-\sum_{\tau=t}^{t+d_{f}}\gamma(\tau)\right]^{+}+\sum_{\tau=t-d_{f}}^{t}F(f(\tau))\label{eqn:currfut}\end{equation}
What \eqref{eqn:currfut} means is that the receiver updates the discontinuity
penalty $U(t)$ not only based on the known current values $\gamma(t)$
and $f(t-d_{f})$ but also using the predicted values $d_{f}$ slots
in the future, $\left[\gamma(t+1)\ldots\gamma(t+d_{f})\right]$ and
$\left[f(t-d_{f}+1)\ldots f(t)\right]$. Note that $F(f(\tau))$ in
\eqref{eqn:currfut} is calculated from $d_{f}$ time steps in the
\emph{past}. This is because the current $f(t)$ only affects the
discontinuity penalty $d_{f}$ time slots later which will be $U(t+d_{f}+1)$.
The $d_{f}$ step buffer dynamics can be proved to obtain stability
by using the \emph{T-slot Lyapunov drift} \cite{ref:neelybook}.
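The $d_{f}$-slot update \eqref{eqn:currfut} can be sketched as follows; the naive prediction (repeating the current $\gamma$ value) and all numbers are illustrative:

```python
# Sketch of the d_f-slot update \eqref{eqn:currfut}: the receiver sums
# predicted gamma values over the next d_f slots and the last d_f+1
# known F(f) values. Purely illustrative.

def update_U_df(U, gammas_future, F_past):
    """U(t+d_f+1) <= [U(t) - sum(gamma)]^+ + sum(F)."""
    return max(U - sum(gammas_future), 0.0) + sum(F_past)

d_f = 2
U = 5.0
gammas = [1.5] * (d_f + 1)     # predicted gamma(t) .. gamma(t+d_f)
F_hist = [1.0] * (d_f + 1)     # known F(f(t-d_f)) .. F(f(t))
U_next = update_U_df(U, gammas, F_hist)
```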
To show how the T-slot Lyapunov drift achieves stability, we first
convert the 1-step Lyapunov drift in \eqref{eqn:1stepdrift} into
a $d_{f}$-step drift:
\begin{equation}
\Delta(U(t))\triangleq\mathbb{E}\{L(U(t+d_{f}+1))-L(U(t))|U(t)\}\label{eqn:dfstepdrift}\end{equation}
We now use the following shortened notations to simplify the equations:
$U\triangleq U(t)$, $U_{d_{b}}\triangleq U(t-d_{b})$, $\gamma\triangleq\sum_{\tau=t-d_{b}}^{t-1}\gamma(\tau)$,
$\gamma_{d_{f}}\triangleq\sum_{\tau=t}^{t+d_{f}}\gamma(\tau)$, $F\triangleq\sum_{\tau=t-d_{b}-d_{f}}^{t-d_{f}-1}F(f(\tau))$
and $F_{d_{f}}\triangleq\sum_{\tau=t-d_{f}}^{t}F(f(\tau))$.
Using the same Lyapunov function \eqref{eqn:lyap} as in the previous
section and the drift definition \eqref{eqn:dfstepdrift}, if we square
\eqref{eqn:currfut}, divide it by two and take expectations (see
Section \ref{sec:apx-deldrift1-froimp} of the appendix
for details), we get:
\begin{equation}
\Delta(U)\leq B'-\mathbb{E}\left\{ U\gamma_{d_{f}}\big|U\right\} +\mathbb{E}\left\{ UF_{d_{f}}\big|U\right\} \label{eqn:deldrift1}\end{equation}
where:
\begin{equation}
B'=\frac{d_{f}}{2}\bigg(r_{max}^{2}+\bigg(p_{max}+\frac{T\, r_{min}\, p_{min}}{T-t}\bigg)^{2}\bigg)\label{eqn:driftBdelay}\end{equation}
Equation \eqref{eqn:deldrift1} can be shown to achieve stability
\cite{ref:neelythesis}, effectively settling issue 2. However, to
deal with issue 1, we need to show how the fed-back discontinuity
penalty $U(t-d_{b})$ affects \eqref{eqn:deldrift1}. To do that,
we substitute the recursively defined $U(t)$ \eqref{eqn:ut} into
\eqref{eqn:deldrift1}:
\begin{equation}
\Delta(U)\leq\; B'-\mathbb{E}\left\{ U\gamma_{d_{f}}\big|U\right\} +\mathbb{E}\left\{ \left(\left[U_{d_{b}}-\gamma\right]^{+}+F\right)F_{d_{f}}\big|U\right\} \label{eqn:drift3}\end{equation}
Recall the definitions of $F$ and $F_{d_{f}}$, and given that
$F(.)$ is an increasing function of $f(t)$ (see \eqref{eqn:Fdefine}),
this implies that $F$ and $F_{d_{f}}$ can be bounded as $F\leq d_{b}r_{max}$
and $F_{d_{f}}\leq(d_{f}+1)r_{max}$ respectively. It then follows
that $FF_{d_{f}}$ can be bounded as:
\begin{equation}
FF_{d_{f}}\leq d_{b}(d_{f}+1)r_{max}^{2}\label{eqn:sumbound}\end{equation}
Note that $\left[U_{d_{b}}-\gamma\right]^{+}\leq U_{d_{b}}$. Thus,
using \eqref{eqn:sumbound} in \eqref{eqn:drift3}, we get:
\begin{equation}
\Delta(U)\leq\; B''-U\mathbb{E}\left\{ \gamma_{d_{f}}\big|U\right\} -\mathbb{E}\left\{ -U_{d_{b}}F_{d_{f}}\big|U\right\} \label{eqn:drift4}\end{equation}
where $B''=B'+d_{b}(d_{f}+1)\; r_{max}^{2}$ with $B'$ from \eqref{eqn:driftBdelay}.
Equation \eqref{eqn:drift4} in that form can be proven to stabilize
by using the results from \cite{ref:neelythesis} (see Section \ref{sub:Perf-Bnds-del}
in the appendix), thus settling issue 1.
As in Section \ref{sub:Discontinuity-Penalty-Derivation}, to cater
for utility optimization, we subtract from both sides the term $V\mathbb{E}\{g(f(t))-h(p(t))|U\}$
and rearrange the terms to obtain:
\begin{align}
\notag\Delta(U) & -V\mathbb{E}\{g(f(t))-h(p(t))|U\}\\
\notag & \leq\; B''-\mathbb{E}\left\{ U\gamma_{d_{f}}-Vh(p(t))\big|U\right\} \\
& \quad\quad\quad-\mathbb{E}\left\{ Vg(f(t))-U_{d_{b}}F_{d_{f}}\big|U\right\} \label{eqn:delcon-deriv1}\end{align}
Using the definitions of $\gamma_{d_{f}}$ and $F_{d_{f}}$, and the
fact that $\gamma(t)=p(t)+\beta(t)$, \eqref{eqn:delcon-deriv1} can
be rewritten as:
\begin{align}
\notag\Delta(U) & -V\mathbb{E}\{g(f(t))-h(p(t))|U\}\\
\notag & \leq\; B''-U\mathbb{E}\left\{ \sum_{\tau=t+1}^{t+d_{f}}p(\tau)+\sum_{\tau=t}^{t+d_{f}}\beta(\tau)\Bigg\vert U\right\} \\
\notag & \quad\quad\quad+U_{d_{b}}\mathbb{E}\left\{ \sum_{\tau=t-d_{f}}^{t-1}F(f(\tau))\Bigg\vert U\right\} \\
\notag & \quad\quad\quad-\mathbb{E}\left\{ Up(t)-Vh(p(t))\vert U\right\} \\
& \quad\quad\quad-\mathbb{E}\left\{ Vg(f(t))-U_{d_{b}}F(f(t))\vert U\right\} \label{eqn:delcon-deriv2}\end{align}
It can be seen from \eqref{eqn:delcon-deriv2} that the last two terms
represent the decoder and encoder optimization policies respectively.
The decoder policy is not affected by the network delay while the
only change to the encoder policy is to make use of $U(t-d_{b})$
fed back from the decoder. Moreover, since the delay accumulator $X(t)$
is updated locally within the decoder, the delay
constrained decoder policy \eqref{eqn:Xlyapobj} is not affected.
Lastly, the performance bound for the Lyapunov optimization can be
derived (see Section \ref{sub:Perf-Bnds-del} of the supplementary
materials).
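The only mechanical change on the sender side is that it optimizes with a stale penalty; a sketch of the feedback pipeline (the delay $d_{b}$ and the ramping $U$ values are illustrative):

```python
# Sketch of the backward-delayed feedback: the receiver's U values
# reach the encoder through a FIFO of length d_b, so the encoder
# optimises with U(t - d_b), as in \eqref{eqn:delcon-deriv2}.
from collections import deque

d_b = 3                                  # feedback delay in slots
feedback = deque([0.0] * d_b, maxlen=d_b)

def encoder_sees(U_now):
    """Push the receiver's current U(t); return the delayed U(t - d_b)."""
    delayed = feedback[0]
    feedback.append(U_now)
    return delayed

# Receiver reports U(t) = t; the encoder sees each value d_b slots late.
seen = [encoder_sees(float(u)) for u in range(6)]
```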
\section{\label{sec:Video-Quality-Functions}Video Quality Functions}
In the previous sections, we showed how using the discontinuity penalty
virtual buffer in the system model allows us to express the processes
as frame intervals. We then used Lyapunov optimization to derive optimization
policies that stabilize the discontinuity penalty virtual buffer
and showed that this helps to maintain the video continuity. We also
demonstrated how to add a delay constraint into the framework by using
the delay accumulator virtual buffer. By deriving optimization policies
that stabilize the delay accumulator virtual buffer, we showed that
the delay constraint can be met. Finally, we studied the impact of
network delay on the optimization policies and showed how the optimization
policies can be derived with network delay consideration.
What is lacking thus far is a discussion on the specific choice of
frame quality function $g(f(t))$ and playout distortion function
$h(p(t))$. In this section, we shall examine the specific forms for
$g(f(t))$ and $h(p(t))$.
\subsection{Frame Quality Function\label{sub:Frame-Quality-Function}}
We first look at an appropriate frame quality function for $g(f(t))$;
$g(f(t))$ should ideally be concave so that a solution can be easily
found for the encoder policy \eqref{eqn:encobj}. As mentioned before,
as $f(t)$ decreases, the encoder frame generation rate increases.
This means that more compression is needed to meet the available network
bandwidth and more compression tends to mean that the frame quality
will be reduced. One of the ways to measure frame quality is to measure
the peak signal-to-noise-ratio (PSNR) of the video.
An and Nguyen \cite{ref:vidutilities} have shown that PSNR can
be represented using a log function of the bitrate. We make use of
their result and fit PSNR to the average frame size of the sequence:
\begin{equation}
PSNR(f(t))=a\,\log(ABR(t)\, f(t))+c\label{eqn:frmpsnr}\end{equation}
where $a$ and $c$ are the modeling coefficients, and $ABR(t)$ is
the current available network bandwidth. Note that we make use of
all the available bandwidth to transmit the video frames from the
sender. Fig. \ref{fig:fr_psnr} shows the fitted curve. The video
sequence used for fitting is a concatenation of football, city, crew
and akiyo sequences in that order. This is done to ensure that the
sequence contains several subsequences of different coding complexity.
Notice that if we use the encoder frame generation rate $i(t)$ as
the input variable instead, \eqref{eqn:frmpsnr} becomes:
\begin{equation}
PSNR(i(t))=a\,\log\bigg(\frac{ABR(t)}{i(t)}\bigg)+c\label{eqn:invfrmpsnr}\end{equation}
where $i(t)=\frac{1}{f(t)}$. The resulting PSNR function would no
longer be a concave function of $i(t)$, which would make the optimization
problem more difficult to solve. This is the reason why we use the
encoder frame generation interval $f(t)$ instead of encoder frame
generation rate $i(t)$.
Since $f(t)$ is upper bounded by $f_{max}$, we shift the maximizer of
$PSNR(f(t))$ to $f_{max}$. This makes the function's derivative vanish
at $f_{max}$, so that the maximization over the interval $[f_{min},f_{max}]$
is easy to calculate. We do this by subtracting $\frac{a}{f_{max}}$
from the first derivative of \eqref{eqn:frmpsnr} ($PSNR'(f(t))$):
\begin{equation}
PSNR'(f(t))-\frac{a}{f_{max}}=\frac{a}{f(t)}-\frac{a}{f_{max}}\label{eqn:frmpsnrmax}\end{equation}
Integrating \eqref{eqn:frmpsnrmax} will give us the desired $g(f(t))$:
\begin{equation}
g(f(t))=a\,\log(ABR(t)\, f(t))+c-\frac{a\, f(t)}{f_{max}}\label{eqn:g}\end{equation}
Note that $g(f(t))$ is concave between the range $[f_{min},f_{max}]$
(where $f_{min}>0$).
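A quick numerical sanity check of \eqref{eqn:g}: the fitted $a=4.91$ is taken from Section \ref{sec:Performance-Evaluation}, while $c$, $ABR$ and $f_{max}$ here are illustrative:

```python
# Verify the shape of g in \eqref{eqn:g}: concave on (0, f_max] with
# its unconstrained maximiser at f_max, which is exactly the point of
# subtracting a/f_max from the derivative.
import math

a, c, ABR, f_max = 4.91, 20.0, 1.0e5, 1.0 / 10.0  # f in seconds/frame

def g(f):
    return a * math.log(ABR * f) + c - a * f / f_max

# g'(f) = a/f - a/f_max > 0 for f < f_max and = 0 at f = f_max,
# so g increases up to f_max on the grid below.
grid = [f_max * i / 1000.0 for i in range(1, 1001)]
best_f = max(grid, key=g)

def concave_ok(f1, f2):
    """Midpoint concavity check: g((f1+f2)/2) >= (g(f1)+g(f2))/2."""
    return g(0.5 * (f1 + f2)) >= 0.5 * (g(f1) + g(f2))
```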
\begin{figure}[tbh]
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.3]{fr_psnr}
\caption{PSNR versus average frame size.}
\label{fig:fr_psnr}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.35]{podist}
\caption{Playout distortion function. $p(t)$ is in seconds, $m=1$ and
$p_{n}=\frac{1}{30}$ s.}
\label{fig:podist}%
\end{minipage}
\end{figure}
\subsection{Playout Distortion Function}
In this section, we choose an appropriate playout distortion function
for $h(p(t))$. We use a modified version of the playout distortion
function used in \cite{ref:ampscheduler5}:
\begin{equation}
h(p(t))=m\cdot(p_{n}-p(t))^{2}\label{eqn:h}\end{equation}
where $m$ is the motion intensity of the sequence, calculated using
the technique in \cite{ref:frquality}, and $p_{n}$ is the natural
playout interval. Fig. \ref{fig:podist} shows the playout distortion
function $h(p(t))$. $h(p(t))$ is convex in the range $[p_{min},p_{max}]$.
Eqn. \eqref{eqn:h} is a combination of the quadratic playout distortion
function proposed in \cite{ref:ampscheduler1,ref:quadplayout} and
the motion scaling used in \cite{ref:ampscheduler5}. The idea is
that the playout distortion increases as the playout rate deviates
from the natural playout rate. The playout distortion is also affected
by the motion intensity of the sequence. Intuitively, higher-motion
sequences increase playout distortion more, as the change in motion
is more perceivable when the playout rate deviates from the natural
playout rate.
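For concreteness, \eqref{eqn:h} evaluated at the natural interval and at two slower playout intervals ($m=1$, $p_{n}=1/30$ s):

```python
# The playout distortion \eqref{eqn:h}: zero at the natural interval
# and growing quadratically as the playout interval deviates from it,
# scaled by the motion intensity m.
def h(p, m=1.0, p_n=1.0 / 30.0):
    return m * (p_n - p) ** 2

# Natural playout (30 fps), then 25 fps and 20 fps equivalents.
vals = [h(1.0 / 30.0), h(1.0 / 25.0), h(1.0 / 20.0)]
```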
\section{\label{sec:Performance-Evaluation}Performance Evaluation}
\subsection{Experiment Setup}
We made use of ns-2 \cite{ref:ns2} to simulate a network with time-varying
data bandwidths. We implemented our framework in the x264 encoder
\cite{ref:x264}, an open-source multi-threaded H.264/AVC encoder.
Our implementation in x264 simulates the network transmission using
network traces. The decoder buffer evolution is simulated by tracking
the arrivals of video frames from the simulated network and the removal
of video frames from the buffer for playout. Every time a frame is
encoded, the framework will make a decision on the encoder frame generation
rate and the playout frame rate at the simulated decoder. This is
to simulate the real-time adjustment of parameters as the video is
being encoded for transmission. Network traces obtained from ns-2
are used to simulate the network within the encoder. The ns-2 traces
provide the sending and receiving times as well as the loss status
of each video packet. Every packet produced by x264 is assigned a
sending time, receiving time and a loss status. This information is
used to simulate the sender and receiver buffers at a packet level.
The video packets that arrived at the receiver buffer are then used
to calculate the amount of frames that the decoder buffer contains.
A decoder buffer underflow occurs when the number of frames to be removed
from the decoder buffer exceeds the number of frames in it. Note that our framework
could easily be adapted to multiple pre-encoded copies of the video.
For the network simulations, we made use of a dumbbell network topology
with the bottleneck bandwidth set to 5 Mbits/s. One pair of nodes
in the network simulated video streaming using the TFRC transport
protocol \cite{ref:tfrc}. To generate background traffic, random
webpage sessions were used for the other pairs of nodes. All the random
sessions go through the bottleneck link. The average packet delay
from the encoder to the decoder is 235 ms, while the feedback delay
from the decoder to encoder is 330 ms. These high delay values enable
us to show that our proposed distributed frame rate control algorithm
works in the presence of delay.
The encoder receives feedback from the decoder and solves the optimization
problem to determine the encoder frame generation interval $f(t)$.
The x264 encoder makes use of this encoder frame generation interval
$f(t)$ by setting the target frame rate of the rate controller to
encoder frame generation rate $\frac{1}{f(t)}$. The rate controller's
target frame rate is used to compute the quantization level for each
frame. The lower the target frame rate, the lower the quantization
level and, as a result, the bigger the frame sizes. More details on x264's
rate control can be found in \cite{ref:x264rc}.
The test sequence used is a concatenation of football, city, crew
and akiyo sequences in that order. This is to ensure a mix of high
and low motion within the sequence. A 16 minute test sequence is obtained
by repeating the concatenated sequence.
The model coefficient $a$ from \eqref{eqn:g} is found by curve fitting
to be 4.91. The constant $V$ is set to 1. In our experiments, we
tested the continuity of the video, which is defined here as the fraction
of time spent playing video; specifically:
\begin{equation}
\text{Playout Continuity}=1-\frac{\text{rebuffering time}}{\text{total sequence time}}\label{eqn:cont}\end{equation}
where the rebuffering time is the time needed to refill the buffer
to a certain threshold after an underflow event; this is typical
behaviour of a video player \cite{ref:rebuff}. The rebuffering threshold
is set to half of the prebuffer amount. The prebuffer amount is varied
for each run to determine the performance of each scheme. At the end
of each run, we will calculate the playout continuity using \eqref{eqn:cont}
for each scheme and make a comparison.
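The continuity metric \eqref{eqn:cont} in code; the 16-minute duration matches the test sequence, while the rebuffering time below is made up for illustration:

```python
# Playout continuity \eqref{eqn:cont}: the fraction of the sequence
# time not spent rebuffering.
def playout_continuity(rebuffering_time, total_sequence_time):
    return 1.0 - rebuffering_time / total_sequence_time

# A 16-minute (960 s) run with 9.6 s of total rebuffering.
c = playout_continuity(rebuffering_time=9.6, total_sequence_time=960.0)
```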
\subsection{Discontinuity Penalty Lyapunov Optimization Results\label{sub:Discontinuity-Penalty-Exp}}
\begin{figure}[tbh]
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{ucont}
\caption{Correlation between average discontinuity penalty and continuity.}
\label{fig:ucont}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{prerollcont}
\caption{Prebuffering delay vs continuity. }
\label{fig:prerollcont}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{prerolldist}
\caption{Prebuffering delay vs playout distortion. }
\label{fig:prerolldist}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{prerollpsnr}
\caption{PSNR loss for \emph{LOpt}.}
\label{fig:prerollpsnr}%
\end{minipage}
\end{figure}
We first examine the (negative) correlation between the discontinuity
penalty $U(t)$ and the continuity of the video \eqref{eqn:cont},
which is plotted in Fig. \ref{fig:ucont}. It can be seen from the graph that
as the discontinuity penalty increases, the continuity drops due to
the longer rebuffering time caused by an increased number of buffer
underflows. This shows that the discontinuity penalty $U(t)$ can
be used as an indicator of a lack of video continuity.
We next present the results of the Lyapunov optimization framework
described in Section \ref{sec:Discontinuity-Penalty}, labelled as
\emph{LOpt} here. We also set up our framework with a less conservative
bound than the one described in \eqref{eqn:b3}, which we label
\emph{LOptM}. This is done by setting $\beta(t)=\frac{b(t)\, r_{max}\, p_{max}}{T-t}$
in \eqref{eqn:vb2}. We compare our framework with a typical setup
of x264 with its target frame rate and its playout rate set to a constant
30 fps; we label this scheme \emph{Norm}. We also compared our framework
with the AMP scheme \cite{ref:ampkalman}. We implemented a combination
of AMP-Initial and AMP-Robust. AMP-Initial slows down the playout
rate by a predetermined slowdown factor when the video starts playing
until a certain buffer level has been reached. AMP-Robust slows down
the playout rate of the video by a predetermined slowdown factor when
the buffer level falls below a certain level. In our implementation,
AMP-Initial is used in conjunction with AMP-Robust. We empirically
chose the smallest slowdown factor of AMP such that it achieves a
continuity of 99\% or higher in each test case. This effectively simulates
a close-to-optimal adaptive AMP scheme, which we label \emph{AMP}.
We also added in the results of \emph{AMP} with a constant slowdown
factor of 25\% (used in \cite{ref:ampkalman}) as a reference, which
we label \emph{AMP25}.
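The combined AMP decision logic described above can be sketched as follows; the threshold values, buffer units and the interval-stretching convention are illustrative assumptions, not details taken from \cite{ref:ampkalman}:

```python
def amp_playout_interval(p_n, buffer_level, startup_done,
                         init_target=60, robust_threshold=30,
                         slowdown=0.25):
    """Playout frame interval under combined AMP-Initial/AMP-Robust logic.

    p_n: natural playout frame interval (e.g. 1/30 s).
    buffer_level: receiver buffer occupancy in frames (illustrative units).
    startup_done: True once the initial buffer target has been reached.
    slowdown: fractional reduction of the playout rate (assumed value).
    """
    slowed = p_n / (1.0 - slowdown)  # slower playout => longer frame interval
    # AMP-Initial: slow down during startup until the buffer target is met.
    if not startup_done and buffer_level < init_target:
        return slowed
    # AMP-Robust: slow down whenever the buffer falls below a threshold.
    if buffer_level < robust_threshold:
        return slowed
    # Otherwise play at the natural rate.
    return p_n

# With a low buffer, a 25% slowdown stretches the 1/30 s interval.
interval = amp_playout_interval(1 / 30, buffer_level=10, startup_done=True)
```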
To examine the performance of each scheme, we compare the continuity
of each scheme based on the amount of prebuffering provided. Note
that in the results, the prebuffering delay is on a log base 10 scale.
Continuity is calculated as in \eqref{eqn:cont}.
The continuity results are shown in fig. \ref{fig:prerollcont}. It
can be seen that \emph{LOpt} and \emph{AMP} achieve similar results.
This is expected as \emph{AMP} was tuned to achieve a high continuity.
The performance disparity between \emph{AMP} and \emph{AMP25} shows
that this simulation requires a greater amount of playout slowdown
at the decoder in order to reduce the occurrences of buffer underflows.
However, both \emph{LOpt} and \emph{AMP} require about 100 times
less prebuffering than \emph{Norm} to provide similar continuity.
\emph{AMP25} too requires about 7 times less prebuffering than \emph{Norm}
but still requires about 50 times more prebuffering than \emph{LOpt}
and \emph{AMP}. This suggests that using some form of playout slowdown
would reduce the prebuffering requirements of a video application
significantly.
We next measure the playout distortion of each scheme using \eqref{eqn:h}
($p_{n}=1/30$), i.e. playout distortion will be produced when the
playout interval drops below or goes above $1/30$. Note that while
\eqref{eqn:h} will only produce a non-zero value when the playout
deviates from the natural playout frame interval, playout interruptions
due to buffer underflows are not factored into the playout distortion.
Fig. \ref{fig:prerolldist} shows the playout distortion results.
\emph{Norm} does not have any playout distortion since it has a constant
30 fps playout rate, but suffers from playout discontinuity as discussed
earlier. \emph{LOpt} has a very similar playout distortion characteristic
when compared to \emph{AMP25}. In contrast, \emph{AMP} for most cases
produces twice the amount of playout distortions compared to \emph{LOpt}
and \emph{AMP25}. This is mainly because \emph{AMP} requires a higher
slowdown factor to obtain a better video continuity and, as a consequence,
this results in a higher playout distortion.
\emph{LOpt}'s comparatively low playout distortion is due to the joint
adjustment of both the encoder frame generation rate and playout rate.
By increasing the encoder frame generation rate, the rate of frames received
at the decoder increases. This provides a higher buffer occupancy
and reduces the need to slowdown the playout rate, thus reducing the
playout distortion. This comes at the expense of frame quality, because
increasing the encoder frame generation rate will result in a higher amount
of compression. To examine \emph{LOpt}'s effect on frame quality,
we compare the PSNR of the encoded video before transmission. This
is done to eliminate any possible drops in PSNR due to transmission.
\emph{Norm}, \emph{AMP25} and \emph{AMP} have a constant encoding
rate of 30 fps, which means they all produce an encoded video of the same
PSNR. Thus, we only compared \emph{LOpt} with \emph{Norm}. From fig.
\ref{fig:prerollpsnr}, it is shown that the drop in PSNR is about
0.6 dB for \emph{LOpt}. This is a reasonably small tradeoff in frame
quality given the improvements in playout distortion and continuity.
It can also be seen from the graphs that the performances of \emph{LOpt}
and \emph{LOptM} are very similar. This suggests that the bound in
\eqref{eqn:b3} is not too conservative. We also tested the schemes
on football, see figs. \ref{fig:prerollcont-football}, \ref{fig:prerolldist-football},
\ref{fig:prerollpsnr-football} and akiyo, see figs. \ref{fig:prerollcont-akiyo},
\ref{fig:prerolldist-akiyo}, \ref{fig:prerollpsnr-akiyo}. football
and akiyo were chosen because they have the highest and lowest motion
content, respectively, of the four sequences used. The results show
a similar pattern to the concatenated sequence.
We now examine the complexity of each scheme. \emph{Norm} is the least
complex scheme, while \emph{AMP25} is marginally more complex than
\emph{Norm}. This is because \emph{AMP25} involves some simple logic
at the decoder to slowdown the playout once a certain buffer level
is reached. \emph{LOpt} is more complex than \emph{AMP25} as it involves
more calculations at both the encoder and decoder end. However, it
is not significantly more complex than \emph{AMP25} because the optimization
policies are concave, so the solution search is very efficient. For
example, in our implementation we solve for $f(t)$ and $p(t)$ by
using the first derivatives of the encoder policy \eqref{eqn:encobj}
and decoder policy \eqref{eqn:decobj} respectively. \emph{AMP} is
the most complex in our implementation as it requires several runs
to determine the optimal slowdown factor. In summary, \emph{LOpt}
runs in $O(n)$, where $n$ is the total number of video packets,
while \emph{AMP} runs in $O(n\times s)$, where $s$ is the number
of slowdown factors to consider. However, it should be noted that
complexity-wise, \emph{AMP} is not representative of the complexity
of AMP schemes in practice. Its main purpose in our simulations is
to act as the upper bound in the performance of AMP schemes.
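The empirical tuning of \emph{AMP} described earlier (rerunning the full simulation for each candidate slowdown factor until the continuity target is met) is what yields the $O(n\times s)$ cost. A sketch of this outer search, where \texttt{simulate\_continuity} is a hypothetical stand-in for one full $O(n)$ simulation run:

```python
def smallest_adequate_slowdown(simulate_continuity, factors, target=0.99):
    """Return the smallest slowdown factor reaching the target continuity.

    simulate_continuity: hypothetical callable running one full O(n)
        simulation and returning the resulting continuity in [0, 1].
    factors: candidate slowdown factors, assumed sorted ascending.
    Overall cost is O(n * s) for s candidate factors.
    """
    for f in factors:
        if simulate_continuity(f) >= target:
            return f
    return None  # no candidate met the continuity target

# Toy stand-in: continuity improves monotonically with the slowdown factor.
best = smallest_adequate_slowdown(lambda f: 0.9 + f, [0.05, 0.10, 0.25])
```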
\begin{figure}[tbh]
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerollcont_15mins_football}
\caption{Prebuffering delay vs continuity (football).}
\label{fig:prerollcont-football}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerollcont_15mins_akiyo}
\caption{Prebuffering delay vs continuity (akiyo). }
\label{fig:prerollcont-akiyo}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerolldist_15mins_football}
\caption{Prebuffering delay vs playout distortion (football). }
\label{fig:prerolldist-football}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerolldist_15mins_akiyo}
\caption{Prebuffering delay vs playout distortion (akiyo). }
\label{fig:prerolldist-akiyo}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerollpsnr_15mins_football}
\caption{PSNR loss for \emph{LOpt} (football).}
\label{fig:prerollpsnr-football}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.45]{prerollpsnr_15mins_akiyo}
\caption{PSNR loss for \emph{LOpt} (akiyo).}
\label{fig:prerollpsnr-akiyo}%
\end{minipage}
\end{figure}
\subsection{Delay Constrained Lyapunov Optimization Results\label{sub:Delay-Constrained-Exp}}
\begin{figure}[tbh]
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{livecont}
\caption{Playout delay vs continuity.}
\label{fig:livecont}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{livedist}
\caption{Playout delay vs playout distortion.}
\label{fig:livedist}%
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.45\textwidth}%
\includegraphics[scale=0.5]{livepsnr}
\caption{PSNR loss for \emph{DLOpt}}
\label{fig:livepsnr}%
\end{minipage}
\end{figure}
We now evaluate the performance of our delay constrained Lyapunov optimization
framework, which we call \emph{DLOpt} here. As in the previous section,
we also set up our framework with a less conservative bound $\frac{b(t)\, r_{max}\, p_{max}}{T-t}$,
labelled as \emph{DLOptM}. We compare our scheme with AMP-Live \cite{ref:ampkalman},
labelled as \emph{AMPL}. AMP-Live maintains the buffer level by slowing
down or speeding up the playout based on a predetermined scale factor.
The scale factor of \emph{AMPL} is set to 40\% in our experiments;
this value was found to provide the best overall performance for \emph{AMPL}.
The playout delay of each scheme is measured as: $\text{Total playout delay}=\sum_{\tau=0}^{T}(p(\tau)-p_{n})$.
Recall that $p_{n}$ represents the natural playout frame interval
of the sequence. So every playout slowdown will cause $p(t)$ to be
larger than $p_{n}$, thus accumulating playout delay. To reduce the
total playout delay, the scheme needs to find the proper moment to
increase the playout rate.
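This definition can be evaluated directly from the sequence of playout frame intervals; a small sketch with illustrative values:

```python
def total_playout_delay(intervals, p_n):
    """Accumulated playout delay: the sum of (p(t) - p_n) over all frames.

    Slowdowns (p(t) > p_n) accumulate delay, while speed-ups
    (p(t) < p_n) recover part of it.
    """
    return sum(p - p_n for p in intervals)

p_n = 1 / 30  # natural playout frame interval in seconds
# Two slowed-down frames followed by one sped-up frame (illustrative).
delay = total_playout_delay([1 / 25, 1 / 25, 1 / 40], p_n)
```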
We compared the schemes by varying the delay constraints from 4.27
seconds to 273.07 seconds with the constraint doubled at each run.
At the end of each run, we plot the total playout delay and the performance
metric of each scheme; the playout delay in the results is on a log
base 10 scale. The performance metrics we examined are playout continuity,
playout distortion and PSNR.
We first look at the continuity results with respect to the total
playout delay (fig. \ref{fig:livecont}). It can be seen that \emph{DLOpt}
achieves maximum continuity regardless of the delay constraint, while
\emph{AMPL} requires more than 50 times the amount of total playout
delay compared to \emph{DLOpt} to reach maximum continuity. \emph{AMPL}
manages the total playout delay constraint by maintaining a certain
buffer level \cite{ref:ampkalman}, thus a tighter delay constraint
will result in a lower buffer level, which has a higher probability
of buffer underflows. On the other hand, \emph{DLOpt} makes a trade-off
between the encoder frame generation rate and playout rate to satisfy
the continuity goal and delay constraint.
We next look at the distortion results in fig. \ref{fig:livedist}.
Again, \emph{DLOpt} achieves a much lower playout distortion compared
to \emph{AMPL}. Note that the lower the total playout delay constraint,
the lower the playout distortion. This is because a low playout delay constraint
causes \emph{DLOpt} to adjust the playout rate a lot less, thus causing
a low playout distortion. \emph{AMPL} has an almost constant playout
distortion. This is because, as mentioned before, \emph{AMPL} only
tries to maintain a certain buffer level and does not constrain its
playout rate adjustment in any way.
Lastly, we examine the PSNR trade-offs made, see fig. \ref{fig:livepsnr}.
As before, we compared the PSNR of the encoded video of both schemes
prior to transmission. It can be seen that \emph{DLOpt} sacrifices
at most about 1 dB of PSNR; this PSNR drop is higher than that of \emph{LOpt}
in the previous section. The main cause of this is the additional
delay constraint imposed using the virtual buffer \eqref{eqn:Xbuff}.
Satisfying the delay constraint results in a lower PSNR; however,
it enables the large gains in continuity and distortion.
Again, the performances of \emph{DLOpt} and \emph{DLOptM} are mostly
similar. This reinforces the possibility that the bound in \eqref{eqn:b3}
is not too conservative.
\section{Conclusions\label{sec:Conclusions}}
In this paper, we proposed a frame rate optimization framework.
With this framework, we achieved the following:
\begin{itemize}
\item Performed frame rate control by a joint adjustment of the encoder
frame generation interval and the playout frame interval.
\item Modelled the frame rate control problem using the encoder frame generation
interval, the playout frame interval and the discontinuity penalty
virtual buffer. The model is created in such a way that stabilizing
the discontinuity penalty virtual buffer allows the video to maintain
continuity. We then used Lyapunov optimization on this model to systematically
derive the optimization policies. We showed that these policies can
be decoupled into separate encoder and decoder optimization policies
with feedback between the two. We also showed through experiments
that the proposed discontinuity penalty based on the virtual buffer
is correlated to the video continuity. Finally, simulation results
demonstrate the effectiveness of the discontinuity penalty virtual
buffer approach.
\item A delay constraint imposed on the accumulated delay from playout slowdowns.
We introduced the delay constraint into the framework using the delay
accumulator virtual buffer. We showed, using Lyapunov optimization
analysis, that by stabilizing the delay accumulator virtual buffer,
the delay constraint would be satisfied. Simulation results showed
a superior playout continuity and playout distortion performance with
a reasonable tradeoff in PSNR.
\item An analysis of the impact of delayed feedback from receiver to sender.
We derived two different analyses: the first analysis showed very
little impact on the optimization policies. The alternate analysis
showed that the decoder needed to use an outdated buffer state. Simulation
results demonstrated that using the first analysis results in a better
performance.
\end{itemize}
\bibliographystyle{IEEEtran}
\section{Introduction}
The internal structure of nucleons has attracted much attention in
the contexts of
the nucleon form factors, proton spin
and spin/charge asymmetry in deeply virtual Compton scattering and so on.
For a systematic study
of the nucleon internal structure, generalized parton distributions (GPDs) are
introduced through the off-forward matrix elements of quark-bilinear operators:
\begin{equation}
\int\!\!\frac{d\eta}{4\pi}e^{i\eta x}\langle P'|
\bar{q}({\textstyle -\frac{\eta n}{2}})
\gamma^\mu
{\cal U}
q({\textstyle\frac{\eta n}{2}}) |P\rangle
= \bar{N}(P')\!\!\left(\gamma^\mu
H(x,\xi,t)
+ i{\textstyle \frac{\sigma^{\mu\nu}{\Delta_\nu}}{2M}}
E(x,\xi,t)
\right)\!\! N(P),
\end{equation}
with a light cone vector $n$ and the momentum transfer $\Delta=P'-P$
as functions of the quark momentum fraction $x$, the skewedness
$\xi=-n\cdot\Delta/2$ and the virtuality $t=\Delta^2$.
The axial counterparts are denoted by $\tilde{H}$ and $\tilde{E}$.
Since the GPDs are defined at finite momentum transfer, in contrast to
the conventional parton distribution functions,
they provide information on the hadron structure in the transverse space.
In this contribution,
we report on the first moments of the GPDs, the so-called generalized form
factors, of the nucleon as functions of the virtuality, calculated on the lattice
with unquenched configurations of the QCDSF/UKQCD collaboration.
In the forward limit these generalized form factors provide the total
angular momentum of quarks in the nucleon through Ji's sum rule \cite{Js},
\begin{equation}
J^q=\frac{1}{2}\int_{-1}^1 dx x (H(x,\xi,0)+E(x,\xi,0))
\equiv\frac{1}{2}(A_{20}(t=0)+B_{20}(t=0)).
\end{equation}
Combined with the quark spin contributions to the nucleon
obtained as the forward value of the axial form factor,
\begin{equation}
s^q=\frac{1}{2}\int_{-1}^1 dx \tilde{H}(x,\xi,0)
\equiv\frac{1}{2}\tilde{A}_{10}(t=0),
\end{equation}
we compute the orbital angular momentum of quarks as $L^q=J^q-s^q$.
Using the results of chiral perturbation theory ($\chi$PT)
for chiral extrapolation to the physical point,
we discuss the angular momentum carried by quark in the nucleon.
\section{Generalized form factors on the lattice}
The Mellin moments of the GPDs are known to be expressed by
polynomials in terms of $\xi$ \cite{XJ},
\begin{equation}
\int_{-1}^1 dx x^{n-1} \left[\begin{array}{c}{
{H}(x,\xi,t)} \\ {E(x,\xi,t)} \end{array}\right]
= \sum_{k=0}^{[(n-1)/2]}
(2\xi)^{2k} \left[\begin{array}{c}
{ A_{n,2k}(t)} \\{ B_{n,2k}(t)}\end{array}\right]
\pm \delta_{n,\rm even}(2\xi)^{n}{ C_{n}(t)}.
\end{equation}
The generalized form factors $A_{n,2k},B_{n,2k}$ and
$C_{n}$ are defined from the coefficients of this expansion.
Since the integration over $x$ makes the quark operator local,
the $(n-1)$-th moments can be calculated \cite{QL} on the lattice
through the matrix element of
$\langle P'|\bar{q}\gamma^{\{\mu_1}D^{\mu_2}\cdots D^{\mu_n\}}q|P\rangle$
by taking a ratio of the three- and two-point functions.
To estimate these correlation functions, 400 to 2200 configurations
are used for each $\beta, \kappa$
with two flavors of Wilson fermions with clover improvement.
Simulations are performed with various sets of parameters $\beta$ and $\kappa$
corresponding to lattice spacings of less than 0.09 fm and pion masses
ranging from of order 1 GeV down to 350 MeV, with a reference scale
$r_0=0.467$ fm.
Nonperturbative renormalizations are incorporated to convert the lattice
results into the values in the $\overline{\rm MS}$ scheme at a scale of
$\mu^2=4$GeV$^2$.
The ${\cal O}(a)$ improvement of the quark energy-momentum tensor
is carried out through boosted perturbation theory following
ref.\cite{bpt}
and the tadpole improved version is used for the axial current
following ref.\cite{ga}.
We note that the contributions from disconnected diagrams
are not included in the present lattice results.
\section{Lattice simulation results and chiral extrapolation}
We focus on the generalized form factors $A_{20}$ and $B_{20}$
as well as the axial form factor $\tilde{A}_{10}$ to evaluate the
quark angular momentum in the nucleon.
A typical $t$ dependence of the axial form factor in the isoscalar channel
is shown in Fig.\ref{fig:DS}.
The obtained lattice data agree well with
a fit to the dipole form, $\tilde{A}_{10}(0)/(1-t/m^2)^2$.
The forward values obtained by setting $t=0$ present a smooth pion-mass
dependence as shown in the right panel of Fig.\ref{fig:DS}.
Here we use an expression derived in a heavy baryon $\chi$PT \cite{DMS},
\begin{equation}
\tilde{A}_{n,k}^{\rm u+d}(0)={ \alpha_{n,k}}\left[
1-\frac{3m_\pi^2g_A^2}{16\pi^2F_\pi^2}\left(
\ln\frac{m_\pi^2}{\lambda^2}+1\right)
\right]+{\beta_{n,k}}m_\pi^2+{\cal O}(m_\pi^3),
\end{equation}
for the chiral extrapolation with fitting parameters $\alpha_{10}$ and
$\beta_{10}$ at a scale of $\lambda=1$GeV.
As the heavy baryon formalism is valid only for small pion masses,
we restrict the fit to data points with pion masses less than
500 MeV.
Then it turns out that the chiral log term gives a strong $m_\pi$
dependence for small $m_\pi$ region and the extrapolated
value is eventually comparable with the latest experimental value
of deep-inelastic scattering reported by HERMES \cite{HER}.
With this extrapolation, we obtain the quark spin contribution
in the nucleon as
$\tilde{A}_{10}^{\rm u+d}(0)\equiv 2s^{\rm u+d} = 0.402\pm 0.024$
at the physical pion mass.
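As a quick illustration of the dipole fits used here, the following sketch recovers a dipole mass from synthetic, noiseless data with a crude grid search; the parameter values and the grid are illustrative and do not correspond to the actual lattice data or fitting procedure:

```python
def dipole(t, A0, m):
    """Dipole form A0 / (1 - t/m^2)^2 for spacelike virtuality t < 0."""
    return A0 / (1.0 - t / m**2) ** 2

# Synthetic noiseless data generated from known parameters.
ts = [-0.1 * k for k in range(1, 11)]          # virtualities in GeV^2
data = [dipole(t, 0.4, 1.3) for t in ts]

def sse(A0, m):
    """Sum of squared residuals between the model and the data."""
    return sum((dipole(t, A0, m) - d) ** 2 for t, d in zip(ts, data))

# Crude grid search over the dipole mass; A0 fixed to the forward value.
best_m = min((m / 100 for m in range(50, 300)), key=lambda m: sse(0.4, m))
```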
\begin{figure}[t!]
\centering
{\includegraphics[scale=.33,angle=270]{At10s.eps} \hspace*{1.6em}
\includegraphics[scale=.33,angle=270]{DS.eps}}
\caption{ The isoscalar axial form factor $\tilde A_{10}(t)$
for $\beta=5.29$, $\kappa=0.13632$
with dipole fit (left) and the forward values with $\chi$PT fit (right).
The open star in the right panel represents the latest experimental
value of HERMES.}
\label{fig:DS}
\end{figure}
Next we show a typical $t$ dependence of generalized form factors
$A_{20},B_{20}$ and $C_{2}$ in the isovector channel in Fig.\ref{fig:GFF}.
Up to 1GeV$^2$, the lattice results of $A_{20}$
agree with the dipole fit $A_{20}(0)/(1-t/m_{\rm D}^2)^2$.
The dipole mass $m_{\rm D}$ of $A_{20}$ shows a
smooth $m_\pi$ dependence, as displayed in the right panel of Fig.\ref{fig:GFF}.
We see that the dipole mass seems to extrapolate
to the observed mass of tensor meson $f_2$ at the physical point.
This fact contrasts with a mass scale of the electromagnetic form factors
comparable with the vector meson mass \cite{WS}.
\begin{figure}[t!]
\centering
{\includegraphics[width=.35\textwidth,angle=270]{GFF20v.eps}\hspace*{1.6em}
\includegraphics[width=.35\textwidth,angle=270]{mD0.eps}}
\caption{ Generalized form factors in the isovector channel
for $\beta=5.29$, $\kappa=0.13632$ with dipole fit for $A_{20}$
and the dipole mass of $A_{20}$.
The open star represents the experimental value of $f_2$ tensor meson mass.}
\label{fig:GFF}
\end{figure}
However, the empirical dipole fit
for the generalized form factors has no solid justification from
a theoretical point of view.
Therefore we count on covariant baryon chiral perturbation theory \cite{DGH}
to extract the forward values of $B_{20}$ as well as the chiral
extrapolation of $A_{20}$ and $B_{20}$.
The forward values of $A_{20}$ are identical to
the quark momentum fraction $\langle x\rangle^q$ and
are shown as a function of $m_\pi^2$ in Fig.\ref{fig:A20}.
\begin{figure}[b!]
\centering
{\includegraphics[width=.35\textwidth,angle=270]{xs.eps} \hspace*{1.6em}
\includegraphics[width=.35\textwidth,angle=270]{xv.eps}}
\caption{ The forward values of $A_{20}$ in the
isoscalar (left) and isovector (right) channel with $\chi$PT fits.
The open stars represent the phenomenological values from CTEQ6.}
\label{fig:A20}
\end{figure}
Both in the isoscalar and isovector channel, the lattice results show
a moderate pion mass dependence.
These values are extrapolated to the physical point using
the following expressions derived in baryon $\chi$PT,
\begin{eqnarray}
A_{2,0}^{\rm u+d}(0)&=&{ a_{20}^{\rm s}}+{ c_9}
\frac{4m_\pi^2}{M_0^2}
-{ a_{20}^{\rm s}}
\frac{3g_A^2m_\pi^2}{16\pi^2F_\pi^2}\left[
\frac{m_\pi^2}{M_0^2}+\frac{m_\pi^2}{M_0^2}\left(
2-\frac{m_\pi^2}{M_0^2}\right)\ln\frac{m_\pi}{M_0}
\right.\nonumber \\ & & \hspace*{.3em} \left.
+\frac{m_\pi}{\sqrt{4M_0^2-m_\pi^2}}\left(
2-4\frac{m_\pi^2}{M_0^2}+\frac{m_\pi^4}{M_0^4}
\right)\arccos\frac{m_\pi}{2M_0}
\right] +{\cal O}(p^3),
\end{eqnarray}
for the isoscalar channel and
\begin{eqnarray}
A_{2,0}^{\rm u-d}(0)&=&{ a_{20}^{\rm v}}+{ c_8}
\frac{4m_\pi^2}{M_0^2}
+{ a_{20}^{\rm v}}
\frac{g_A^2m_\pi^2}{16\pi^2F_\pi^2}\left[
-\left(3+\frac{1}{g_A^2}\right)\ln\frac{m_\pi^2}{\lambda^2}+
\frac{m_\pi^2}{M_0^2}-2+\frac{m_\pi^2}{M_0^2}\left(
6-\frac{m_\pi^2}{M_0^2}\right)\ln\frac{m_\pi}{M_0}
\right.\nonumber \\ & & \left.
+\frac{m_\pi}{\sqrt{4M_0^2-m_\pi^2}}\left(
14-8\frac{m_\pi^2}{M_0^2}+\frac{m_\pi^4}{M_0^4}
\right)\arccos\frac{m_\pi}{2M_0}
\right]
\\ &&
+{ \Delta a_{20}^{\rm v}}
\frac{g_A^2m_\pi^2}{48\pi^2F_\pi^2}\left[
2\frac{m_\pi^2}{M_0^2}+\frac{m_\pi^2}{M_0^2}\left(
6-\frac{m_\pi^2}{M_0^2}\right)\ln\frac{m_\pi^2}{M_0^2}
+2m_\pi\frac{(4M_0^2-m_\pi^2)^{3/2}}{M_0^4}\arccos\frac{m_\pi}{2M_0}
\right] \nonumber
\end{eqnarray}
for the isovector channel, where $M_0$ is the nucleon mass in the chiral limit.
We perform 2-parameter fits with $a_{20}^{\rm s}$, $c_9$ for the isoscalar
and $a_{20}^{\rm v}$, $c_8$ for the isovector channel
at a scale of $\lambda=1$GeV and fix the other values
following ref.\cite{DGH}.
A strong $m_\pi$ dependence is observed especially for the isovector channel,
but the extrapolated values in both channels overshoot beyond the
phenomenological values estimated using the CTEQ6 parton distribution
functions.
The chiral extrapolation gives
$A_{20}^{\rm u+d}\equiv\langle x\rangle^{\rm u+d}
= 0.572\pm 0.012$ for the isoscalar channel and
$A_{20}^{\rm u-d}\equiv\langle x\rangle^{\rm u-d}
= 0.198\pm 0.008$ for the isovector channel at the physical pion mass.
See ref.\cite{DP} for discretization effects of these form factors.
\begin{figure}[t!]
\centering
{\includegraphics[width=.35\textwidth,angle=270]{b20s.eps} \hspace*{1.6em}
\includegraphics[width=.35\textwidth,angle=270]{b20v.eps}}
\caption{ The forward values of $B_{20}$ extrapolated
by $\chi$PT in the isoscalar (left) and isovector (right) channel
with $\chi$PT fits.}
\label{fig:B20}
\end{figure}
In contrast to $A_{20}$, the forward values of $B_{20}$ cannot be calculated
directly from the lattice simulation since the kinematic pre-factor
for $B_{20}$ vanishes at zero momentum transfer. Again we use the
expressions of covariant baryon $\chi$PT \cite{DGH},
\begin{eqnarray}
B_{2,0}^{\rm u\pm d}({t}) &=& ({ b_{20}^{\rm s,v}
}+{ \hat{\delta}_B^{\rm s,v}}\, m_\pi^2 +
{ \hat{\delta}_{Bt}^{\rm s,v}}\, { t})\,\frac{M_{\rm N}(m_{\pi})}{M_0}\mp
a_{20}^{\rm s,v} \frac{(2\pm 1)g_A^2M_0^2}{48\pi^2F_{\pi}^2} G({ t}), \\
G({ t})&=&
\int_{-\frac{1}{2}}^{\frac{1}{2}}\!
\frac{du}{\tilde{M}^8}\Bigg[\left(M_0^2-\tilde{M}^2\right)\tilde{M}^6+
9m_{\pi}^2M_0^2\tilde{M}^4
-6m_{\pi}^4M_0^2\tilde{M}^2+6m_{\pi}^2M_0^2
\left(m_{\pi}^4-3m_{\pi}^2\tilde{M}^2+\tilde{M}^4\right)\ln{\frac{m_{\pi}}{\tilde{M}}}
\nonumber \\ && \left. \hspace*{4em}
-\frac{6m_{\pi}^3M_0^2}{\sqrt{4\tilde{M}^2-m_{\pi}^2}}\bigg(
m_{\pi}^4-5m_{\pi}^2\tilde{M}^2+5\tilde{M}^4\bigg)
\arccos{\frac{m_{\pi}}{2\tilde{M}}}\Bigg]
\right|_{{\tilde{M}^2}=M_0^2+\left(u^2-\frac{1}{4}\right) t}, \nonumber
\end{eqnarray}
to fit the lattice results as a function of $t$ and $m_\pi$.
Free fit parameters are $b_{20}^{\rm s,v}$,
${\hat{\delta}_B^{\rm s,v}}$ and ${\hat{\delta}_{Bt}^{\rm s,v}}$
for the isoscalar and isovector channel respectively, and
the parameters of $a_{20}^{\rm s,v}$ are fixed by the fitting
of $A_{20}^{\rm u\pm d}$.
The forward values of $B_{20}$ extracted from this fit
with fixed $m_\pi$ are shown in Fig.\ref{fig:B20}, where
the solid lines represent
the section of fitting surfaces at $t=0$.
Since the forward value of $B_{20}^{\rm u+d}$ is equivalent to
the difference of $2J^q-\langle x\rangle^q$, the small values of
the lattice results indicate
the cancellation between total angular momentum
and momentum fraction of quarks. However, the $\chi$PT fit suggests
a sizeable bending through the chiral extrapolation, which makes
a sharp contrast to the chiral quark soliton model \cite{CQSM}.
We obtain $B_{20}^{\rm u+d}(0)=-0.120\pm 0.023$ at the physical point
for the isoscalar channel and $B_{20}^{\rm u-d}(0)=0.269\pm 0.020$
for the isovector channel.
\begin{figure}[t!]
\centering
{\includegraphics[width=.35\textwidth,angle=270]{Jq.eps} \hspace*{1.6em}
\includegraphics[width=.35\textwidth,angle=270]{JsL.eps}}
\caption{ Total angular momentum of quark in the nucleon with $\chi$PT fit
(left) and spin, orbital angular momentum of quarks (right). The open
symbols represent the extrapolated values to the physical pion mass.}
\label{fig:Jq}
\end{figure}
Combining all these data, we can estimate the angular momentum of quarks
in the nucleon. The pion mass dependence of
the total angular momentum $J^q$ for the u and d quarks is shown in Fig.\ref{fig:Jq}.
The strong $m_\pi$ dependences of $A_{20}$ in the isovector channel and
$B_{20}$ in the isoscalar channel are enhanced for the u quark and give
a significant suppression of $J^{\rm u}$ near the physical point, while these
dependences cancel each other for the d quark and so keep the value of
$J^{\rm d}$ small.
The extrapolation to the physical point gives $J^{\rm u}= 0.230\pm 0.008$
and $J^{\rm d}=-0.004\pm 0.008$.
The quark spin $s^q$, the total and orbital angular momentum $L^q=J^q-s^q$
are shown in the right panel of Fig.\ref{fig:Jq}.
We obtain the values at physical point by the chiral extrapolation
as $J^{\rm u+d}=0.226\pm 0.013$, $s^{\rm u+d}=0.201\pm 0.024$ and
$L^{\rm u+d}=0.025\pm 0.027$.
The results show that the total angular momentum of the
quarks is comparable with the quark spin and hence the orbital angular
momentum is consistent with zero, which agrees with the result of
ref.\cite{PH}.
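Given Ji's sum rule $J^q=\frac{1}{2}(A_{20}(0)+B_{20}(0))$ and $L^q=J^q-s^q$, the central values quoted above can be checked with a short arithmetic script (uncertainties are not propagated in this quick consistency check):

```python
# Central values quoted in the text (uncertainties omitted).
J_u, J_d = 0.230, -0.004
J_total = J_u + J_d               # total quark angular momentum J^{u+d}
s_total = 0.201                   # quark spin contribution s^{u+d}
L_total = J_total - s_total       # orbital part, L^q = J^q - s^q

# Cross-check against the form factors: 2 J^{u+d} = <x>^{u+d} + B_20^{u+d}(0).
x_ud, B20_ud = 0.572, -0.120
J_from_ff = 0.5 * (x_ud + B20_ud)
```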
\section{Conclusions}
We have carried out lattice simulations to calculate
the first moments of GPDs, which play an important role for
the proton spin, quark transverse density and deeply virtual
Compton scattering.
The lattice results of the generalized form factor $A_{20}$
are fitted to the dipole form for small $-t$ region and the dipole
mass turns out to be comparable with the observed tensor meson mass.
Since this empirical fit has no solid justification from a theoretical
point of view,
we employ baryon $\chi$PT
to take the forward limit of $B_{20}$ and to
chirally extrapolate the form factors to the physical pion mass.
The resulting values indicate that the total angular
momentum of quarks in the nucleon is of the same size
as the quark spin contribution, while the orbital
angular momentum is consistent with zero.
Further analyses are needed to estimate the finite size effects \cite{DL},
contributions from disconnected diagrams and so on.
Results with lighter pion masses will be forthcoming.
\acknowledgments{
The numerical calculations have been performed on the Hitachi SR8000
at LRZ (Munich), the BlueGene/L and the Cray T3E at EPCC (Edinburgh),
the BlueGene/Ls at NIC/FZJ (J\"ulich) and KEK (by the Kanazawa group
as part of DIK research program) and on the APEmille and apeNEXT at
NIC/DESY (Zeuthen).
This work was supported in part by the DFG and by
the EU Integrated
Infrastructure Initiative Hadron Physics (I3HP) under contract
number RII3-CT-2004-506078.
}
\section{\label{sec:Introduction}Introduction}
Monte-Carlo methods are the workhorses of statistical physics and lattice field theories providing insights into strongly correlated systems from first principles \cite{gattringerlang, newman1999monte}. In spite of their overall success, these approaches come with a number of downsides: Monte-Carlo methods potentially get trapped in local minima that prevent them from exploring the full configuration space \cite{Kirkpatrick1983:Optimization}. Furthermore, they can suffer from large autocorrelation times, in particular close to criticality, thus making them very costly in certain regions of the parameter space. In these regions, observables at physical parameter values can often only be extrapolated from simulations at unphysical parameter values. Last but not least, observables that explicitly depend on the partition function, such as free energy and entropy, can only be evaluated up to an overall constant by "chaining" the results of a considerable number of Monte-Carlo chains \cite{newman1999monte, bishop2006pattern,nakajima2019vblt}.
Generative neural samplers (GNSs) are machine learning models which allow sampling from probability distributions learned by using deep neural networks. We refer to \cite{goodfellow2016deep} for an accessible overview. GNSs have shown remarkable performance in generating realistic samples capturing complicated probability distributions of real-world data such as images, speech, and text documents. This has inspired the application of GNSs in the context of theoretical physics \cite{Torlai2016LearningTW,Morningstar2017DeepLT,liu2017simulating,huang2017accelerated, li2018neural,koch2018mutual,Urban:2018tqv,zhou2019regressive,mustafa2019cosmogan,nicoli2019comment,hu2019machine,yang2019deep,albergo2019flow,wu2019solving,sharir2019deep,noe2019boltzmann}.
In this work, we focus on a particularly promising subclass of GNSs. Namely, we will consider deep neural networks $q$ that allow sampling of configurations $s \sim q$ from the model and also provide the exact probability $q(s)$ of the sample $s$. A notable example of this type of GNS are Variational Autoregressive Networks (VANs) \cite{wu2019solving}, which sample from a PixelCNN \cite{vanOord2016pixelrnn} to estimate observables. The main advantage of this class of GNSs is that they can be trained without resorting to Monte-Carlo configurations by minimizing the Kullback--Leibler divergence between the model $q$ and a target (Boltzmann) distribution $p$. As a result, they represent a truly complementary approach to existing Monte-Carlo methods.
Observables are often estimated by directly sampling from the GNS and then taking the sample mean. However, as we will discuss in detail, this approach suffers from a mismatch of the sampling distribution $q$ and the target distribution $p$. This mismatch is unavoidable since it cannot be expected that the GNS fully captures the underlying physics. This leads to uncontrolled estimates as both the magnitude and the direction of this bias is in general unknown and scales unfavorably with the system size \cite{nicoli2019comment}.
In this work, we propose a general framework to avoid this serious problem. Our method applies to any GNS with exact sampling probability. Specifically, we will show that it is possible to define asymptotically unbiased estimators for observables along with their corresponding variance estimators. Notably, our method also allows direct estimation of observables that explicitly depend on the partition function, e.g. entropy and free energy. Our proposal therefore greatly enhances the applicability of GNSs to real-world systems.
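The core idea behind such asymptotically unbiased estimators can be illustrated with generic self-normalized importance sampling, which only requires the exact sampling probability $q(s)$. The sketch below is a textbook construction on toy distributions, not the specific estimators developed in this paper:

```python
import math
import random

def snis_estimate(sample_q, log_q, log_p_unnorm, observable, n=100000):
    """Self-normalized importance sampling with an exact sampler q.

    Weights w(s) = exp(log_p_unnorm(s) - log_q(s)) are known only up to
    the partition function; the ratio sum(w*O)/sum(w) is nevertheless
    an asymptotically unbiased estimate of E_p[O].
    """
    samples = [sample_q() for _ in range(n)]
    logw = [log_p_unnorm(s) - log_q(s) for s in samples]
    m = max(logw)                           # stabilize the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * observable(s) for wi, s in zip(w, samples)) / sum(w)

# Toy check: q uniform on {0,1,2}, unnormalized target weights (1,2,3),
# so E_p[s] = (0*1 + 1*2 + 2*3) / 6 = 4/3.
random.seed(0)
est = snis_estimate(lambda: random.randrange(3),
                    lambda s: math.log(1 / 3),
                    lambda s: math.log(s + 1),
                    lambda s: float(s))
```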
The paper is organized as follows: In Section~\ref{sec:Background}, we will discuss the proposed asymptotically unbiased estimators for observables along with corresponding variance estimators. We illustrate the practical applicability of our approach for the two-dimensional Ising model in Section~\ref{sec:Experiments}, discuss the applicability to other GNSs in Section~\ref{sec:applic} and conclude in Section~\ref{sec:conclusion}. Technical details are presented in several appendices.
\section{\label{sec:Background}Asymptotically Unbiased Estimators}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{graphics/figure1}
\caption{Estimates for various observables around $\beta_c$. NMCMC and NIS agree with the reference values provided by the exact analytical solutions as well as the Wolff algorithm. VAN deviates significantly. Observables are: internal energy $U=\mathbb{E}_p[H]$, absolute magnetization $|M|=\sum_i\mathbb{E}_p(|s_i|)$, the free energy $F=\tfrac{-1}{\beta} \ln(Z)$ and the entropy $S=-\mathbb{E}_p[\ln p]$.}\label{fig:observables}
\end{figure*}
\subsection{\label{sec:VAN}Generative Neural Samplers with Exact Probability (GNSEP)}
We will use a particular subclass of GNSs to model the variational distribution $q$ as they can provide the exact probability $q(s)$ of configurations $s$ and also allow sampling from this distribution $s \sim q$. We will henceforth refer to this subclass as generative neural samplers with exact probability (GNSEP). Using these two properties, one can then minimize the inverse Kullback--Leibler divergence between the Boltzmann distribution $p(s)=1/Z \, \exp(-\beta H(s))$ and the variational distribution $q$ without relying on Monte-Carlo configurations for training,
\begin{align}
\textrm{KL} (q | p) &= \sum_s q(s) \, \ln \left(\frac{q(s)}{p(s)}\right) \nonumber \\ &= \sum_s q(s) (\ln(q(s)) + \beta H(s)) + \ln Z \,. \label{eq:loss}
\end{align}
This objective can straightforwardly be optimized using gradient descent since the last summand is an irrelevant constant shift. After the optimization is completed, observables (expectation values of an operator $\mathcal{O}$ with respect to the Boltzmann distribution $p$) are then conventionally estimated by the sample mean
\begin{align}
\langle \mathcal{O}(s) \rangle_{p} \approx \frac{1}{N}\sum_{i=1}^N \mathcal{O}(s_i) \label{eq:samplemean}
\end{align}
using the neural sampler $s_i \sim q $.
Various architectures for generative neural samplers are available. Here, we will briefly review the two most popular ones:
\paragraph{Normalizing Flows (NFs):} Samples from a prior distribution $q_0(z)$, such as a standard normal, are processed by an invertible neural network $f(z)$. The probability of a sample $s=f(z)$ is then given by
\begin{align*}
q(s) = q_0 (f^{-1}(s)) \left|\det \left(\frac{\partial f}{\partial z}\right)\right|^{-1} \,.
\end{align*}
The architecture of $f$ is chosen such that the inverse and its Jacobian can easily be computed. Notable examples of normalizing flows include NICE\cite{dinh2014nice}, RealNVP\cite{dinh2016density} and GLOW\cite{kingma2018glow}. First physics applications of this framework have been presented in \cite{noe2019boltzmann} in the context of quantum chemistry and subsequently in \cite{albergo2019flow} for lattice field theory.
\paragraph{Autoregressive Models (AMs):} In this case, an ordering $s_1, \dots, s_N$ of the components of $s$ is chosen and the conditional distribution $q(s_i| s_{i-1} \dots s_1)$ is modeled by a neural network. The joint probability $q(s)$ is then obtained by multiplying the conditionals
\begin{align}
q(s) = \prod_{i=1}^N q(s_i| s_{i-1} \dots s_1)
\end{align}
and one can draw samples from $q$ by autoregressive sampling from the conditionals. State-of-the-art architectures often use convolutional neural networks (with masked filters to ensure that the conditionals only depend on the previous elements in the ordering). Such convolutional architectures were first proposed in the context of image generation with PixelCNN \cite{vanOord2016pixelrnn,Salimans2017pixelcnn} as most prominent example. In \cite{wu2019solving} these methods were first used for statistical physics applications.
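As a minimal illustration of the chain-rule factorization above, the following sketch draws a spin configuration component by component and accumulates its exact probability $q(s)$. The constant `cond_prob` is a hypothetical stand-in for a trained conditional network; a real model would be a masked CNN evaluated on the prefix.

```python
import random

def cond_prob(prefix):
    # Hypothetical stand-in for a trained network returning
    # q(s_i = +1 | s_{i-1}, ..., s_1); a real model would be a masked CNN.
    return 0.5

def autoregressive_sample(n_sites):
    """Draw one configuration s ~ q and return it with its exact probability q(s)."""
    s, q = [], 1.0
    for _ in range(n_sites):
        p_up = cond_prob(s)
        spin = 1 if random.random() < p_up else -1
        q *= p_up if spin == 1 else 1.0 - p_up
        s.append(spin)
    return s, q
```

Because every conditional is returned by the network itself, the joint probability comes for free during sampling; no separate density-evaluation pass is needed.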
A major drawback of using generative neural samplers is that their estimates are A) (often substantially) biased and B) do not come with reliable error estimates, see Figure~\ref{fig:observables}. Both properties are obviously highly undesirable for physics applications.
The main reason for this is that the mean \eqref{eq:samplemean} is taken over samples drawn from the sampler $q$ to estimate expectation values with respect to the Boltzmann distribution $p$. However, it cannot be expected that the sampler $q$ perfectly reproduces the target distribution $p$. This discrepancy will therefore necessarily result in a systematic error which is often substantial. Furthermore, in all the cases that we are aware of, this error cannot be reliably estimated.
In order to avoid this serious problem, we propose to use either importance sampling or Markov chain Monte Carlo (MCMC) rejection sampling to obtain asymptotically unbiased estimators. We also derive expressions for the variances of our estimators.
\subsection{Sampling Methods}\label{sec:methods}
Here we propose two novel estimators that are asymptotically unbiased and are shown to alleviate the serious issues A) and B) mentioned in the previous section.\\
\emph{Neural MCMC (NMCMC)} uses the sampler $q$ as the proposal distribution $p_0(s|s')$ for a Markov Chain. Samples $s\sim p_0(s|s')$ are then accepted with probability
\begin{align}
\text{min}\left(1, \tfrac{p_0(s'| s) \, p(s)}{p_0(s| s') \, p(s')}\right)
= \text{min}\left(1, \tfrac{q(s') \, \exp(-\beta H(s))}{q(s) \, \exp(-\beta H(s'))}\right) \,.
\end{align}
We note that the proposal configurations do not depend on the previous elements in the chain. This has two important consequences: Firstly, they can efficiently be sampled in parallel. Secondly, the estimates will typically have very small autocorrelation.
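A single step of this chain is the standard Metropolis--Hastings update with an independence proposal. The sketch below assumes a hypothetical `sampler()` returning a configuration together with its log-probability under $q$, and an `energy()` function computing $H$.

```python
import math, random

def nmcmc_step(state, sampler, energy, beta):
    """One independence-Metropolis step; the proposal is drawn i.i.d. from q."""
    s_old, logq_old = state
    s_new, logq_new = sampler()  # proposal does not depend on s_old
    # log acceptance ratio of the Metropolis-Hastings test for
    # target p ~ exp(-beta H) and independent proposal density q
    log_a = (logq_old - beta * energy(s_new)) - (logq_new - beta * energy(s_old))
    if log_a >= 0.0 or random.random() < math.exp(log_a):
        return (s_new, logq_new), True
    return (s_old, logq_old), False
```

Since proposals are independent of the chain state, a whole batch can be generated in parallel up front and then run through the accept/reject test sequentially.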
\emph{Neural Importance Sampling (NIS)} provides an estimator by
\begin{align}
\langle \mathcal{O}(s) \rangle_{p} \approx
\textstyle \sum_{i} w_i \, \mathcal{O}(s_i) &&\text{with} && s_i \sim q \,,
\end{align}
where $w_i = \tfrac{\hat{w}_i}{\sum_i \hat{w}_i}$ with $\hat{w}_i = \tfrac{e^{-\beta H(s_i)}}{q(s_i)}$ are the normalized importance weights. It is important to stress that we can obtain the configurations $s_i$ by independent identically distributed (iid) sampling from $q$. This is in stark contrast to related reweighting techniques in the context of MCMC sampling \cite{gattringerlang}.
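In code, self-normalized importance sampling takes only a few lines. The sketch below works in log-space for numerical stability; `logq` and `energies` are hypothetical arrays holding $\ln q(s_i)$ and $H(s_i)$ for iid samples $s_i \sim q$.

```python
import math

def nis_estimate(samples, logq, energies, observable, beta):
    """Self-normalized importance sampling: <O>_p ~ sum_i w_i O(s_i), s_i ~ q."""
    # unnormalized log-weights: log w^_i = -beta H(s_i) - log q(s_i)
    logw = [-beta * e - lq for e, lq in zip(energies, logq)]
    m = max(logw)  # subtract the maximum before exponentiating
    w = [math.exp(l - m) for l in logw]
    norm = sum(w)
    return sum(wi * observable(si) for wi, si in zip(w, samples)) / norm
```

When $q$ equals $p$ exactly, all weights coincide and the estimator reduces to the plain sample mean.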
We assume that the output probabilities of the neural sampler $q$ are bounded within $[\epsilon, 1 - \epsilon]$ for small $\epsilon >0$. In practice, this can easily be ensured by rescaling and shifting the output probability of the model as explained in Appendix~\ref{app:proofs}.
It then follows from standard textbook arguments that these two sampling methods provide asymptotically unbiased estimators. For convenience, we briefly recall these arguments in Appendix~\ref{app:proofs}.
We note that our asymptotically unbiased sampling methods have the interesting positive side effect that they allow for \emph{transfer across parameter space}, a property they share with conventional MCMC approaches \cite{NewmanBarkemachap8:1999}. For example, we can use a neural sampler trained at inverse temperature $\beta'$ to estimate physical observables at a different target inverse temperature $\beta\neq \beta'$. As we will demonstrate in Section~\ref{sec:Experiments}, this can in some cases result in a significant reduction of runtime.
\subsection{Asymptotically Unbiased Estimators}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{graphics/figure2}
\caption{Zoom around the critical coupling $\beta_c\approx0.4407$ showing the internal energy per lattice site from Figure~\ref{fig:observables} ($16 \times 16$ lattice). We take the internal energy as a reference example; the same considerations hold for the other observables. The estimates from our proposed method match the exact reference. VAN is not shown because it is out of range, as can be seen in Figure~\ref{fig:observables}.}\label{fig:zoom_energy}
\end{figure}
For operators $\mathcal{O}(s)$ which do not explicitly depend on the partition function, such as internal energy $\mathcal{O}_U(s) = H(s)$ or absolute magnetization $\mathcal{O}_{|M|}(s) = \sum_i |s_i|$, both NIS and NMCMC provide asymptotically unbiased estimators as explained in the last section.
However, generative neural samplers are often also used for operators $\mathcal{O}(s, Z)$ explicitly involving the partition function $Z$. Examples for such quantities include
\begin{align}
&\mathcal{O}_F(s,Z) = -\frac{1}{\beta} \ln(Z)\,,\label{eq:obsF} \\ &\mathcal{O}_S(s,Z) = \beta \, H(s) + \ln Z \,, \label{eq:obsS}
\end{align}
which can be used to estimate the free energy $F=\tfrac{-1}{\beta} \ln(Z)=\mathbb{E}_p[ \mathcal{O}_F ]$ and the entropy $S=-\mathbb{E}_p[\ln p]=\mathbb{E}_p[ \mathcal{O}_S ]$, respectively. Since the Kullback--Leibler divergence is greater than or equal to zero, it follows from the optimization objective \eqref{eq:loss} that
\begin{align}
F_q = \frac{1}{\beta} \sum_s q(s) (\ln(q(s)) + \beta H(s)) \ge - \frac{1}{\beta} \ln(Z) = F \,.
\end{align}
Therefore, the variational free energy $F_q$ provides an upper bound on the free energy $F$ and is thus often used as its estimate. Similarly, one frequently estimates the entropy $S=-\mathbb{E}_p(\ln p)$ by simply using the variational distribution $q$ instead of $p$. Both estimators however typically come with substantial biases which are hard to quantify. This effect gets particularly pronounced close to the critical temperature.
Crucially, neural importance sampling also provides asymptotically unbiased estimators for $\mathbb{E}_p[\mathcal{O}(s,Z)]$ by
\begin{align}
\hat{\mathcal{O}}_N = \frac{\tfrac{1}{N}\sum_{i=1}^N \mathcal{O}(s_i, \hat{Z}_N) \, \hat{w}(s_i)}{\hat{Z}_N} && \text{with} && s_i \sim q \,, \label{eq:estOp}
\end{align}
where the partition function $Z$ is estimated by
\begin{align}
\hat{Z}_N = \frac{1}{N} \, \sum_{i=1}^N \hat{w}_i \,. \label{eq:estimatorZ}
\end{align}
In the next section, we will derive the variances of these estimators. Using these results, the errors of such observables can systematically be assessed.
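The estimator \eqref{eq:estimatorZ} translates directly into code. The sketch below returns $\ln \hat{Z}_N$ via a log-sum-exp and, from it, the free-energy estimate; the inputs are hypothetical arrays of $\ln q(s_i)$ and $H(s_i)$ for samples $s_i \sim q$.

```python
import math

def log_partition_estimate(logq, energies, beta):
    """log Z^_N with Z^_N = (1/N) sum_i exp(-beta H(s_i)) / q(s_i), s_i ~ q."""
    logw = [-beta * e - lq for e, lq in zip(energies, logq)]
    m = max(logw)  # log-sum-exp shift for numerical stability
    return m + math.log(sum(math.exp(l - m) for l in logw) / len(logw))

def free_energy_estimate(logq, energies, beta):
    """F^ = -(1/beta) log Z^_N."""
    return -log_partition_estimate(logq, energies, beta) / beta
```

Working with $\ln \hat{Z}_N$ rather than $\hat{Z}_N$ itself avoids overflow of the unnormalized weights at large system sizes.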
\subsection{\label{sec:VarianceEstimators}Variance Estimators}
In the following, we focus on observables of the form
\begin{align}
\mathcal{O}(s,Z) = g(s) + h(Z) \,,
\label{eq:GeneeralObservable}
\end{align}
which includes the estimators for internal energy and magnetization, but most notably also those for free energy \eqref{eq:obsF} and entropy \eqref{eq:obsS}.
As just mentioned, expectation values of these operators can be estimated using \eqref{eq:estOp}.
Let us assume that $h$ is differentiable at the true value of the partition function $Z$.
Then, as shown in Appendix~\ref{app:variance},
the variance of the estimator for large $N$ is given by
\begin{align}
\sigma^2_{\hat{\mathcal{O}}_N}
=&
\frac{\boldsymbol{\psi}^T \mathbb{E}_q [\boldsymbol{\phi} \boldsymbol{\phi}^T] \boldsymbol{\psi}}{N}
+ o_P(N^{-1}) \,, \label{eq:fullvariance}
\end{align}
where
\begin{align}
\boldsymbol{\phi} &=
\begin{pmatrix}
g \hat{w} - \mathbb{E}_p[g] Z\\
\hat{w} - Z
\end{pmatrix},
&
\boldsymbol{\psi} &=
\begin{pmatrix}
1/Z\\
- \mathbb{E}_p[g]/ Z + h'(Z)
\end{pmatrix}.
\end{align}
Note that
$\mathbb{E}_p[g]$ can be estimated by
\begin{align}
\frac{ \frac{1}{N} \sum_{i=1}^N g(s_i) \hat{w}(s_i) }{\hat{Z}_N}
\end{align}
and $Z$ can be estimated by \eqref{eq:estimatorZ}, respectively.
For operators with $h\equiv 0$, it is well-known \cite{isvariance} that Eq.~\eqref{eq:fullvariance} reduces to
\begin{align}
\sigma^2_{\hat{\mathcal{O}}_N}
&=
\frac{\text{Var}_p(g)}{N_{\textrm{eff}}} + o_P(N^{-1}) \,,
\end{align}
where we have defined the effective sampling size
\begin{align}
N_{\textrm{eff}} = \frac{N}{\mathbb{E}_q \left[w^2\right]} \,.
\end{align}
Note that the effective sampling size does not depend on the particular form of $g$. It is however important to stress that for observables with $h\neq 0$, the error cannot be estimated in terms of effective sampling size but one has to use \eqref{eq:fullvariance}. While this expression is more lengthy, it can be easily estimated. Therefore, neural importance sampling allows us to reliably estimate the variances of physical observables --- in particular observables with explicit dependence on the partition function. This is in stark contrast to the usual GNS approach.
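For $h \equiv 0$, the error bar therefore only requires the effective sample size, whose standard sample estimate in terms of the unnormalized weights is $(\sum_i \hat{w}_i)^2 / \sum_i \hat{w}_i^2$. A log-space sketch:

```python
import math

def effective_sample_size(logw):
    """Sample estimate N_eff = (sum_i w^_i)^2 / sum_i w^_i^2 from log-weights."""
    m = max(logw)  # shift before exponentiating; the ratio is shift-invariant
    w = [math.exp(l - m) for l in logw]
    return sum(w) ** 2 / sum(wi * wi for wi in w)
```

Equal weights give $N_{\textrm{eff}} = N$; a single dominant weight drives it toward 1, signalling that the sampler poorly matches the target.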
It is also worth stressing that MCMC sampling does not allow to directly estimate those observables which explicitly involve the partition function. For completeness, we also note that a similar well-known effective sampling size can be defined for MCMC
\begin{align}
N_{\textrm{eff}} = \frac{N}{2 \, \tau_{int, \mathcal{O}}} \,, \label{eq:autocor}
\end{align}
where $\tau_{int, \mathcal{O}}$ is the integrated auto-correlation time of the operator $\mathcal{O}$, see \cite{wolff2004monte, gattringerlang} for more details.
\section{\label{sec:Experiments}Numerical Results}
We will now demonstrate the effectiveness of our method on the example of the two-dimensional Ising model with vanishing external magnetic field. This model has an exact solution and therefore provides us with a ground truth to compare to. The Hamiltonian of the Ising model is given by
\begin{equation}
H(s) = -J \sum_{\langle i,j \rangle} s_i \, s_j \,,
\end{equation}
where $J$ is the coupling constant and the sum runs over all neighbouring pairs of lattice sites. The corresponding Boltzmann distribution is then given by
\begin{align}
p(s) = \frac{1}{Z} \exp(-\beta H(s)) \,,
\end{align}
with partition function $Z=\sum_s \exp(-\beta H(s))$. For simplicity, we will absorb the coupling constant $J$ in $\beta$ in the following. Here, we will only consider the ferromagnetic case for which $J>0$ and the model undergoes a second-order phase transition at $\beta_c\approx0.4407$ in the infinite volume limit.
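For concreteness, the Hamiltonian with $J$ absorbed into $\beta$ can be evaluated as follows. The sketch assumes periodic boundary conditions (as in the finite-lattice reference solution) and spins stored as a list of rows with entries $\pm 1$.

```python
def ising_energy(spins):
    """H(s) = -sum over nearest-neighbour pairs of s_i s_j (J absorbed into beta)."""
    L = len(spins)
    e = 0
    for i in range(L):
        for j in range(L):
            # sum over the right and down neighbours with periodic wrap-around
            e -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return e
```

Summing only over the right and down neighbours visits each bond of the (sufficiently large) periodic lattice exactly once.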
In addition to the exact solution by Onsager for the infinite volume case \cite{onsager}, there also exists an analytical solution for finite lattices \cite{ferdinand1969bounded}, which we review in Appendix~\ref{app:ZIsing} and use for reference values. The exact partition function at vanishing magnetic field is, however, not sufficient to derive expressions for some observables, such as the magnetization. For these observables, we obtain reference values by using the Wolff MCMC clustering algorithm \cite{wolff1989comparison}.
\subsection{Unbiased Estimators for the Ising Model}
For discrete sampling spaces, autoregressive algorithms are the preferred choice as normalizing flows are designed for continuous ones \footnote{However, \cite{tran2019discrete, hoogeboom2019integer} present a recent attempt to apply normalizing flows to discrete sampling spaces.}. It is nonetheless important to stress that our proposed method applies irrespective of the particular choice for the sampler.
We use the standard VAN architecture for the GNS. For training, we closely follow the procedure described in the original publication \cite{wu2019solving}. More detailed information about hyperparameter choices can be found in Appendix~\ref{appendix:hyperparams}. We use VANs, trained for a $16 \times 16$ lattice at various temperatures around the critical point, to estimate a number of observables. The errors for neural importance sampling are determined as explained in Section~\ref{sec:VarianceEstimators}. For Wolff and Neural MCMC, we estimate the autocorrelation time as described in \cite{wolff2004monte}.
Figure~\ref{fig:observables} summarizes the main results of our experiments in terms of estimates for internal energy, absolute magnetization, entropy and free energy around the critical regime. NMCMC and NIS agree with the reference values while VAN deviates significantly. We note that this effect is also present for observables with explicit dependence on the partition function, i.e. for entropy and free energy.
All estimates in Figure~\ref{fig:observables} deviate from the reference value in the same direction. Whereas this is expected for the free energy (for which the true value is a lower bound) also for the other observables the trained GNSs seem to favor a certain direction of approaching the true value. However, as we show in Appendix~\ref{app:directionofbias}, this trend holds only on average and is not a systematic effect.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{graphics/figure3}
\caption{\label{fig:checkpoints} Estimation of observables during training for a single run on a $16\times16$ lattice. The modified sampling procedure leads to accurate predictions at significantly earlier stages of training since it corrects for an imperfect sampler. As before, we look at the internal energy $U=\mathbb{E}_p[H]$, the absolute magnetization $|M|=\sum_i\mathbb{E}_p(|s_i|)$, the free energy $F=\tfrac{-1}{\beta} \ln(Z)$ and the entropy $S=-\mathbb{E}_p[\ln p]$.}
\end{figure}
In Figure~\ref{fig:checkpoints}, we track the evolution of the estimates for the four observables under consideration during training.
This figure clearly demonstrates that our proposed method leads to accurate predictions even at earlier stages of the training process. This is particularly important because the overall runtime for GNS estimates is heavily dominated by the training.
Table~\ref{tab:scaling} summarizes results for a $24 \times 24$ lattice. For this larger lattice, the systematic error of VAN is even more pronounced and the estimated values do not even qualitatively agree with the reference values. Our modified sampling techniques, on the other hand, lead to fully compatible results.
\begin{table*}[ht]
\caption{Comparison of VAN, NMCMC and NIS on $24\times24$ and $16\times16$ lattices, both trained at $\beta_c$. Entropy and free energy cannot be directly estimated using Monte Carlo approaches. Bold numbers denote estimates which are compatible with the ground truth within one standard deviation. Standard deviations are given in parentheses. Observables are: internal energy $U=\mathbb{E}_p[H]$, absolute magnetization $|M|=\sum_i\mathbb{E}_p(|s_i|)$, the free energy $F=\tfrac{-1}{\beta} \ln(Z)$ and the entropy $S=-\mathbb{E}_p[\ln p]$. Details on the runtime performance are reported in Appendix \ref{appendix:hyperparams}.}
\label{tab:scaling}
\centering
\begin{tabular}{c|l|cccc}
Lattice & Sampler & $\nicefrac{U}{L^2}$ & $\nicefrac{|M|}{L^2}$ & $\nicefrac{S}{L^2}$ & $\nicefrac{F}{L^2}$ \\
\colrule
\multirow{ 3}{*}{(24x24)} & \textbf{VAN} & -1.5058 (0.0001) & 0.7829 (0.0001) & 0.26505 (0.00004) & -2.107250 (0.000001) \\
& \textbf{NIS} & \textbf{-1.43} (0.02) & \textbf{0.67} (0.03) & \textbf{0.299} (0.007) & \textbf{-2.1128} (0.0008) \\
& \textbf{NMCMC} & \textbf{-1.448} (0.007) & \textbf{0.68} (0.04) & - & -\\
\colrule
\colrule
\textbf{Reference} & & -1.44025 & 0.6777 (0.0006) & 0.29611 & -2.11215 \\
\colrule
\multirow{ 3}{*}{(16x16)} & \textbf{VAN} & -1.4764 (0.0002) & 0.7478 (0.0002) & 0.28081 (0.00007) & -2.11363 (0.00001) \\
& \textbf{NIS} & \textbf{-1.4533} (0.0003) & \textbf{0.71363} (0.00004) & \textbf{0.2917} (0.0002) & \textbf{-2.11529} (0.00001) \\
& \textbf{NMCMC} & \textbf{-1.4532} (0.0007) & \textbf{0.714} (0.001) & - & -\\
\colrule
\colrule
\textbf{Reference} & & -1.4532 & 0.7133 (0.0008) & 0.29181 & -2.11531
\end{tabular}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=1.\linewidth]{graphics/figure4}
\caption{\label{fig:autocorr} Evolution of the acceptance rate (right) and the integrated autocorrelation time of the internal energy $\tau_{int, U}$ (left) during training. NMCMC runs were performed on a 16$\times$16 lattice at $\beta_c$.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.\linewidth]{graphics/figure5}
\caption{Histogram for the magnetization of the system at $\beta=0.55$. While the Metropolis algorithm is only able to capture one of the two modes of the distribution, NMCMC is able to cover both.}\label{fig:histogram}
\end{figure}
Lastly, our proposed methods allow for transfer across parameter space, as explained in Section~\ref{sec:methods}. In Figure~\ref{fig:transfer}, we summarize a few transfer runs.
We performed a full training procedure for each value of $\beta$ shown in Figure~\ref{fig:transfer}. The trained samplers were then used to estimate observables at $\beta_c$ (i.e. \textit{not} at the training temperature $\beta$).
All predicted values agree with the reference within error bars. As the difference between the model's inverse temperature $\beta$ and the target $\beta_c$ increases, the variance grows as well, as is to be expected. In practice, this limits the difference between model and target inverse temperature. Nevertheless, we can use a model trained at a single $\beta$ value to predict observables in a non-trivial neighbourhood of the model $\beta$. This allows us to probe parameter space more finely at only minimal additional computational cost.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\linewidth]{graphics/figure6}
\caption{\label{fig:transfer} Samplers $q$ are trained at increasingly lower $\beta$ values and used to predict the internal energy $U/L^2$ at the critical coupling $\beta_c$. All results agree with the reference values within error bars. The variance of the estimators increases as the difference between model and target temperature gets larger. Transfer runs for NMCMC and NIS require training only one model, which leads to a significant speed-up since runtime is dominated by training, as illustrated in the inset.}
\end{figure}
\subsection{Neural MCMC}
NMCMC obtains a proposal configuration by independent and identically distributed sampling from the sampler $q$. This can result in a significantly reduced integrated autocorrelation time $\tau_{int, \mathcal{O}}$ for the observables $\langle \mathcal{O} \rangle$. For this reduction, it is not required to perfectly train the sampler. It is however required that the sampler is sufficiently well-trained such that the proposal configuration is accepted with relatively high probability, as illustrated in Figure~\ref{fig:autocorr}. Table~\ref{tab:autocorr} demonstrates a significant reduction in integrated autocorrelation $\tau_{int}$, as defined in \eqref{eq:autocor}, for two observables at $\beta_c$ on a $16 \times 16$ lattice.
\begin{table}[ht]
\begin{center}
\caption{Neural MCMC instead of Metropolis leads to a significant reduction of integrated autocorrelation times $\tau_{int}$ for a $16 \times 16$ lattice at $\beta_c$. The neural sampler was trained for ten thousand steps and the acceptance rate was 69 percent. The observables $U=\mathbb{E}_p[H]$ and $|M|=\sum_i\mathbb{E}_p(|s_i|)$ are the internal energy and the absolute magnetization respectively.}
\label{tab:autocorr}
\begin{tabular}{l|cc}
Observable & Metropolis & NMCMC \vspace{0.1cm}\\
\colrule
\textbf{$\tau_{int, U}$} & 4.0415 & 0.8317 \\\vspace{0.05cm}
\textbf{$\tau_{int, |M|}$} & 7.8510 & 1.3331
\end{tabular}
\end{center}
\end{table}
In NMCMC, the proposal configuration $s \sim p_0(s|s') =q(s)$ is independent of the previous configuration $s'$ in the chain. This is in stark contrast to the Metropolis algorithm, for which the proposal configuration is obtained by a local update of the previous configuration. As a result, NMCMC is less likely to stay confined in (the neighbourhood of) an energy minimum of the configuration space. This is demonstrated in Figure~\ref{fig:histogram}, which shows the magnetization histograms for Metropolis and Neural MCMC. Since the Ising model has a discrete $\mathbb{Z}_2$-symmetry, we expect a bimodal distribution. In contrast to the Metropolis algorithm, NMCMC indeed shows such a behaviour.
\section{Applicability to Other Samplers}\label{sec:applic}
We note that our approach can in parts be applied to other generative models. Table~\ref{tab:taxonomy} summarizes the applicability of neural MCMC (NMCMC) sampling and neural importance sampling (NIS).
Namely, when the employed GNS provides an unnormalized sampling probability, i.e., the exact probability multiplied by a constant, then NMCMC and NIS can again correct the learned sampler $q$ leading to asymptotically unbiased estimators. However, the applicability is limited to the observables that do \emph{not} explicitly depend on the partition function, i.e., $h \equiv 0$ in Eq.~\eqref{eq:GeneeralObservable}.
If the employed GNS allows us to approximate the (normalized or unnormalized) sampling probability, one can apply our approach by using the approximate probability for $q$.
The bias can then be reduced if the gap between the target distribution and the sampling distribution is larger than the approximation error of the sampling probability. In this case, however, the estimator may no longer be asymptotically unbiased.
In summary, our method can be applied broadly to a variety of machine learning models and therefore does not depend on the particular choice of the sampler. Depending on the physical system, a particular architecture may be preferable. For example, whether an autoregressive model or a normalizing flow is the better machine learning tool depends on the nature of the sampling space. The former requires a discrete sampling space, thus being a particularly good match for discrete systems, such as spin chains; the latter finds its applicability in the context of continuous systems, such as lattice field theories.
As shown in Table~\ref{tab:taxonomy}, applying our method to these models provides
asymptotically unbiased estimators.
\section{\label{sec:conclusion}Conclusion and Outlook}
In this work, we presented a novel approach for the unbiased estimation of observables, with well-defined variance estimates, from generative neural samplers that provide the exact sampling probability (GNSEP). Most notably, this also includes observables that explicitly depend on the partition function, such as the free energy or the entropy. The practical applicability of the approach is demonstrated for the two-dimensional Ising model, stressing the importance of unbiased estimators compared to biased estimators from the literature.
In summary, the methods proposed in this paper not only lead to theoretical guarantees but are also of great practical relevance. They are applicable for a large class of generative samplers, easy to implement, and often lead to a significant reduction in runtime. We therefore firmly believe that they will play a crucial role in the promising application of generative models to challenging problems of theoretical physics.
\begin{table*}[ht]
\centering
\caption{Applicability of NMCMC and NIS to various GNSs. $h$ refers to the term of the observable explicitly depending on the partition function $Z$ (see Eq.~\eqref{eq:GeneeralObservable}). Generative Adversarial Networks (GANs) do not provide sampling probabilities and therefore cannot be used for our method. Restricted Boltzmann Machines (RBMs) only provide approximate and unnormalized sampling probabilities and therefore do not lead to asymptotically unbiased estimators using our methods. Because of the lack of normalization, observables with explicit dependence on the partition function cannot be estimated. Variational Autoencoders (VAEs) provide approximate sampling probabilities. Our method can therefore be applied but does not lead to asymptotic guarantees. The cases of Normalizing Flows (NFs) and Autoregressive Models (AMs) were discussed at length before. The applicability is summarized in the table using the following notation: $\checkmark$: the estimator is asymptotically unbiased; (\checkmark): applicable but the estimator is still biased; \text{\sffamily X}: not applicable. Generative neural samplers in the last row are GNSEP as introduced in Sec.~\ref{sec:VAN}. The last column gives example references in which the corresponding type of GNS is applied to a physical system.}
\label{tab:taxonomy}
\begin{tabular}{c|c|c|c|c}
Accessible sampling probability & NMCMC, NIS($h \equiv 0$) & NIS($h \ne 0$) & GNSs & Application in Physics \\
\hline
none & \text{\sffamily X} & \text{\sffamily X} & GAN & \cite{zhou2019regressive,Urban:2018tqv} \\
approximate, unnormalized &(\checkmark) & \text{\sffamily X} & RBM & \cite{carleo2017solving,Morningstar2017DeepLT} \\
approximate, normalized & (\checkmark) & (\checkmark) & VAE & \cite{Cristoforetti2017TowardsMP}\\
exact, unnormalized & \checkmark & \text{\sffamily X} & -- & -- \\
exact, normalized & \checkmark & \checkmark & AM, NF & \cite{wu2019solving, noe2019boltzmann,albergo2019flow} \\
\end{tabular}
\end{table*}
\acknowledgements
This work was supported by the German Ministry for Education and Research as Berlin Big Data Center (01IS18025A) and Berlin Center for Machine Learning (01IS18037I). This work is also supported by the Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (No. 2017-0-001779) and by the DFG (EXC 2046/1, Project-ID 390685689). Part of this research was performed while one of the authors was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (Grant No. DMS-1440415). The authors would like to acknowledge valuable comments by Frank Noe and Alex Tkatchenko to an earlier version of the manuscript.
\section{Introduction}
Entanglement is indispensable in quantum information processing.
Quantum entangled states have been applied in quantum key
distribution and teleportation, entanglement purification,
factorization of integers, and random
searches\cite{chuang}-\cite{ekert}. The generation and preservation of
qubit entanglement are crucial for all quantum information
processing, yet it remains a challenge to control entanglement externally.
Recently, the use of a quantum bus to coherently and controllably
manipulate quantum entanglement was proposed in Ref.\cite{m}, where
the famous Jaynes-Cummings (JC) model is applied as the control
mechanism.
Theoretically, the generation and maintenance of two-qubit
entanglement can be achieved by use of the Jaynes-Cummings (JC) model
Hamiltonian\cite{yu}. Entanglement reciprocation between the field
variables and a pair of qubits in a JC cavity has also been
studied\cite{krau}-\cite{zhou}. Later studies showed that
coherent-state control of non-local atom-atom entanglement
between two spatially separated sites is possible\cite{m}. These
investigations exhibit time-dependent entanglement death and rebirth effects.
The JC model is the main mechanism used to study the controlled
manipulation of quantum qubits; its validity relies on the
assumptions of weak coupling between the quantum oscillator and the qubits
and of near resonance. It is obtained from the Rabi model by
discarding the counter-rotating-wave terms\cite{mand}. The strong and
ultra-strong coupling regimes of the qubit-oscillator system
provide many new and counter-intuitive
results for the Rabi model\cite{iris}-\cite{asha}, which is
treated in Ref.\cite{sorn} with zero detuning. Further, the regime of
ultra-strong coupling and large detuning has been investigated, and novel
results appear, such as frequency modification and the collapse and revival
of Rabi oscillations for one qubit with the initial state of the single-mode
field being thermal or coherent\cite{iris}-\cite{asha}. In
Ref.\cite{agar}, the Rabi model is extended to the two-qubit case, where
the authors studied the death and revival phenomena of the two
qubits' entanglement for an initial coherent state of the
oscillator. However, they only studied the very special case of the
coupling parameter $\beta^2\ll \Omega_{1N}$ (see Ref.\cite{agar} or
the following for details), which greatly simplifies the
eigenvectors and the subsequent calculation. There has been no investigation
of the regime where the coupling strength does not satisfy
$\beta^2\ll \Omega_{1N}$.
Motivated by these works, we investigate control based on the Rabi
mechanism. In our study, we do not impose the
condition $\beta^2\ll \Omega_{1N}$, which in turn makes the
calculation more complicated. Nevertheless, it also offers the chance
to find unexpected phenomena, namely a new way to use a
coherent quantum mode to control one of the fully entangled Bell
states; novel results are obtained from the resulting
calculation, which will be helpful for quantum information
processing. In addition, the condition $\beta^2\ll \Omega_{1N}$ of
Ref.\cite{agar} is not easy to satisfy, because
$\Omega_{1N}$ depends nonlinearly on the parameters $N,\ \beta,\
\frac{\omega_0}{\omega}$. So an investigation without
this condition is crucial for further applications in quantum
information processing. Furthermore, the above-mentioned complexity
unexpectedly enhances our ability to preserve the entanglement of
the two qubits in the state $|I_0\rangle$, one of the four Bell
states. This unexpected result can be exploited in quantum
information processes involving the entanglement of two qubits.
The paper is organized as
follows: we first briefly introduce the two-qubit system with
inter-qubit coupling and give a simple model for it. Then we review
the method used to study the Rabi model in the ultra-strong
coupling regime with large detuning, the adiabatic approximation
(AA) method. The spectrum of the qubits coupled with the quantum mode
field is given in the subsequent section. The evolution of the
system is then investigated, followed by a section on the
preservation of entanglement.
\section{The two qubits with inter-qubit coupling}
Since we will treat the two-qubit system as a three-level system,
applicable both to circuit QED and to cavity QED, we give here a
brief introduction to it. The Hamiltonian of two qubits with
inter-qubit coupling is\cite{jing}-\cite{fice}
\begin{eqnarray}\label{H2 for qutrit}
H_{q}= \frac12\hbar\omega_0(\sigma_z^{(1)}+\sigma_z^{(2)})
+\kappa\hbar\omega_0(\sigma_-^{(1)}\sigma_+^{(2)}+\sigma_-^{(2)}\sigma_+^{(1)}),
\end{eqnarray}
where $\sigma_z^{(i)},\ i=1,2$, are the Pauli matrices of the $i$th
qubit and $\kappa$ is the coupling strength between the qubits.
The operators $\sigma_{\pm}^{(i)}$ of the $i$th qubit are\begin{eqnarray} \sigma_{+}^{(i)} =\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right),\ \
\sigma_{-}^{(i)}=\left(
\begin{array}{cc}
0 & 0 \\
1 & 0 \\
\end{array}
\right).
\end{eqnarray}
The Hamiltonian becomes a diagonal matrix in the basis of the
collective states \[|3\rangle=|\uparrow \uparrow \rangle,\]\[
|s\rangle=\frac1{\sqrt2}(|\uparrow \downarrow\rangle+|\downarrow
\uparrow\rangle),\]\[ |a\rangle=\frac1{\sqrt2}(|\uparrow
\downarrow\rangle-|\downarrow \uparrow\rangle),\]\[
|1\rangle=|\downarrow \downarrow \rangle\]
\cite{jing}-\cite{dick}:
\begin{eqnarray}
H'_q=\hbar\omega_0\left(
\begin{array}{cccc}
1 & 0 & 0 &0 \\
0 & \kappa & 0 &0 \\
0 & 0 & -\kappa&0 \\
0 & 0 & 0& -1 \\
\end{array}
\right).
\end{eqnarray}
There are two transition channels: the symmetric one
$|3\rangle\rightarrow|s\rangle\rightarrow |1\rangle $
and the asymmetric one $|3\rangle\rightarrow|a\rangle\rightarrow
|1\rangle$. The two channels are not correlated
\cite{jing}-\cite{dick}, so we decompose $H_q$ as
\begin{eqnarray}
H_q=\hbar\omega_0\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \kappa & 0 \\
0 & 0 & -1 \\
\end{array}
\right)
\end{eqnarray}
for the symmetric transition channel, or
\begin{eqnarray}
H_q=\hbar\omega_0\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -\kappa & 0 \\
0 & 0 & -1 \\
\end{array}
\right)
\end{eqnarray}
for the asymmetric transition channel. We unite the two cases as
\begin{eqnarray}
H_q=\hbar\omega_0\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & a & 0 \\
0 & 0 & -1 \\
\end{array}
\right)
\end{eqnarray}
with $a$ positive for the symmetric channel and negative for the
asymmetric channel. We also denote $|2\rangle=|s\rangle$ for the
symmetric channel and $|2\rangle=|a\rangle$ for the asymmetric
channel.
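As a quick numerical sanity check (a sketch of our own, not part of the derivation; the value of $\kappa$ is illustrative), one can verify that $H_q$ is indeed diagonal in the collective basis, with entries $\hbar\omega_0(1,\kappa,-\kappa,-1)$, setting $\hbar\omega_0=1$:

```python
import numpy as np

# Build H_q for two coupled qubits and check it is diagonal in the
# collective basis {|3>, |s>, |a>, |1>}.  hbar*omega_0 is set to 1;
# kappa is an illustrative value.
kappa = 0.3
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_+
sm = sp.T                                        # sigma_-
I2 = np.eye(2)

Hq = 0.5*(np.kron(sz, I2) + np.kron(I2, sz)) \
     + kappa*(np.kron(sm, sp) + np.kron(sp, sm))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b3 = np.kron(up, up)
bs = (np.kron(up, dn) + np.kron(dn, up))/np.sqrt(2)
ba = (np.kron(up, dn) - np.kron(dn, up))/np.sqrt(2)
b1 = np.kron(dn, dn)
U = np.column_stack([b3, bs, ba, b1])            # collective basis

Hdiag = U.T @ Hq @ U
print(np.round(Hdiag, 12))
```

The off-diagonal residuals vanish to machine precision, confirming the diagonal form used above.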
The two interacting identical qubits thus correspond to two
three-level systems with the same top and bottom eigenstates
and different middle states. The energies of the top and bottom
states are $\hbar\omega_0$ and $-\hbar\omega_0$. The two middle
states have the same energy if the two qubits do not couple with each
other; in this case, the two three-level systems have identical
energy distributions and can be regarded as just one. With coupling,
however, the two middle states have different energies, one positive
and the other negative, and the corresponding three-level systems
differ even in their energy distributions. Nevertheless, as far as
transitions are concerned, the two three-level systems are not
correlated with each other, so we may consider one at a time.
\section{The Rabi Hamiltonian }
As the Hamiltonian of the two qubits can be represented by a $3\times
3$ matrix, the Rabi model is extended to describe the dynamics of
the two qubits interacting with a single quantum mode field
by\begin{eqnarray}\label{H for qubits}
H=\hbar\omega_{0}S_z+ \hbar\omega a^{\dagger}a
+ \hbar\omega\beta(a+a^{\dagger})S_x , \end{eqnarray} which becomes
the usual Rabi model when $S_x,\ S_z$ reduce to the usual Pauli matrices.
The qubit part $H_q=\hbar\omega_0 S_z$ is as stated before. In
matrix form, $S_x,\ S_z$ are
\begin{eqnarray}
S_x=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 &1 \\
0 & 1 & 0 \\
\end{array}
\right),
S_z=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & a &0 \\
0 & 0 & -1\\
\end{array}
\right).
\end{eqnarray}
The operator $S_x$ is connected with the operators $\sigma^+,\
\sigma^-$
\[S_x=\sigma^++\sigma^-,\]
where\begin{eqnarray} \sigma^+=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0 \\
\end{array}
\right),\
\sigma^-=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{array}
\right)
\end{eqnarray}
in the basis ordering $(|3\rangle,|2\rangle,|1\rangle)$ used for $S_z$.
The eigenstates $|1\rangle, \ |2\rangle,\ |3\rangle $ transform under
the operators $\sigma^+,\ \sigma^-$ as follows:
\[\sigma^+|1\rangle=|2\rangle,\ \sigma^+|2\rangle=|3\rangle ,\ \sigma^+|3\rangle=0,\]\[\sigma^-|1\rangle=0, \
\sigma^-|2\rangle=|1\rangle, \ \sigma^-|3\rangle=|2\rangle.\]
As stated before, the Hamiltonian of the system is not analytically
integrable. The RWA assumes the near-resonance condition
$\omega_0\approx \omega$ and weak coupling $\beta\ll 1$; the model
then becomes completely solvable by discarding the
non-energy-conserving terms $a\sigma^-,\ a^{\dagger}\sigma^+$.
\section{Adiabatic Approximation in Ultra-strong Coupling Range}
Whenever the coupling is strong or the detuning is large, the
counter-RWA terms $a\sigma^-,\ a^{\dagger}\sigma^+ $ cannot be
omitted. This belongs to the regime of the adiabatic approximation,
in which $\omega_0 $ is small relative to the other terms in the
Hamiltonian: one first omits it and studies the rest of the non-RWA
Hamiltonian, and then treats it as a perturbation later. Physically,
this focuses on the quantum oscillator influenced by the term
$\hbar\omega\beta(a+a^{\dagger})S_x$. The Hamiltonian reads
\begin{eqnarray}
H^{0}= \hbar\omega a^{\dagger}a
+ \hbar\omega\beta(a+a^{\dagger})S_x.
\end{eqnarray}
Viewed classically, the oscillator undergoes forced motion driven
by the qubits. The quantum oscillator interacting with one
qubit or with two non-interacting qubits has been solved in the
adiabatic approximation (see Refs.\cite{iris}-\cite{agar}). We
now employ a similar method to solve the non-equal-level
system. The eigenvectors $ |1,1 \rangle,\ |1, 0\rangle,\ |1,-1
\rangle $ of the operator $S_x $ satisfy\begin{eqnarray}
S_x|1,m\rangle=\sqrt2m |1,m\rangle,\ m=0,\pm1,
\end{eqnarray} and are written as
\begin{eqnarray} \left(
\begin{array}{c}
|1,1 \rangle \\ |1, 0\rangle \\
|1,-1 \rangle\\
\end{array}
\right)=\left(
\begin{array}{ccc}
1/2 & 1/\sqrt{2} & 1/2 \\
1/\sqrt{2} & 0 &- 1/\sqrt{2} \\
1/2 & -1/\sqrt{2} & 1/2 \\
\end{array}
\right)
\left(
\begin{array}{c}
|3 \rangle \\ |2\rangle \\
|1 \rangle\\
\end{array}
\right)\label{10,11,1-1vecctor}.
\end{eqnarray}
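These eigenvectors can be checked directly; the short sketch below (our own NumPy illustration, not part of the derivation) verifies $S_x|1,m\rangle=\sqrt2\,m\,|1,m\rangle$ for each row of the transformation matrix:

```python
import numpy as np

# Each row of M expresses |1,m> (m = +1, 0, -1) in the basis
# (|3>, |2>, |1>); S_x must satisfy S_x|1,m> = sqrt(2)*m*|1,m>.
Sx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
M = np.array([[0.5,           1/np.sqrt(2),  0.5],
              [1/np.sqrt(2),  0.0,          -1/np.sqrt(2)],
              [0.5,          -1/np.sqrt(2),  0.5]])
for row, m in zip(M, (1, 0, -1)):
    assert np.allclose(Sx @ row, np.sqrt(2)*m*row)
print("S_x eigenvectors verified for m = +1, 0, -1")
```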
With the help of these vectors, the eigenvectors $|\Psi_{N,m}\rangle$
of the operator $H^0$ can be written as \cite{iris},\cite{agar}
\[|\Psi_{N,m}\rangle=|1,m\rangle|N_m\rangle=|1,m\rangle D(-\sqrt2
m\beta)|N\rangle\] with the corresponding eigenvalues
$E^0_{N,m}=\hbar\omega(N-2\beta^2m^2)$. The displacement operator
$D(\alpha)$ of the quantum oscillator is defined as
$D(\alpha)=\exp{(\alpha a^{\dagger}-\alpha^*a)}$ for an arbitrary
complex number $\alpha$. The interaction with the qubits displaces
the potential well of the quantum oscillator according to the states
of the qubits. From the physical point of view, the interaction term
$\hbar\omega\beta(a+a^{\dagger})S_x$ displaces the equilibrium
position of the oscillator to different points for the different
qubit states $|1,m\rangle,\ m=0,\pm1$, which results in three
displaced number vectors $|N_m\rangle,\ m=0,\pm 1$. From the
mathematical point of view, the eigenstates
$|1,m\rangle|N_m\rangle,\ m=0,\pm1,\ N=0,1,2,\cdots $ constitute a
complete basis for the composite system and are useful for the later
calculation; that is, any vector of the composite system of the
qubits and oscillator can be decomposed in this basis. The basis is
not orthogonal, due to the fact that
\begin{eqnarray}
\langle N_{l}|M_r\rangle\ne 0 \quad \mathrm{for}\ r\ne l.
\end{eqnarray}
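The non-orthogonality can be quantified with the standard identity $\langle N|D(\alpha)|N\rangle=e^{-\alpha^2/2}L_N(\alpha^2)$ for real $\alpha$, which also underlies the matrix elements used below. The following sketch (our own check; parameters illustrative; truncated Fock space) verifies it numerically for $\langle N_1|N_0\rangle$:

```python
import numpy as np

# Truncated-Fock-space check of the displaced number-state overlap
# <N_1|N_0> = <N|D(sqrt(2)*beta)|N> = exp(-beta^2) L_N(2*beta^2).
dim, beta, N = 60, 0.2, 2                        # illustrative values
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)     # annihilation operator
ad = a.T.copy()

def displacement(alpha):
    # D(alpha) = exp(alpha*(a^dag - a)) for real alpha, built from the
    # eigendecomposition of the Hermitian matrix -i*alpha*(a^dag - a)
    w, V = np.linalg.eigh(-1j*alpha*(ad - a))
    return (V*np.exp(1j*w)) @ V.conj().T

def laguerre(n, x):
    # Laguerre polynomial L_n(x) by the three-term recurrence
    if n == 0:
        return 1.0
    L0, L1 = 1.0, 1.0 - x
    for k in range(1, n):
        L0, L1 = L1, ((2*k + 1 - x)*L1 - k*L0)/(k + 1)
    return L1

fockN = np.zeros(dim); fockN[N] = 1.0
overlap = (fockN @ displacement(np.sqrt(2)*beta) @ fockN).real   # <N_1|N_0>
formula = np.exp(-beta**2)*laguerre(N, 2*beta**2)
print(overlap, formula)   # agree up to truncation error
```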
In order to obtain the spectrum of the Hamiltonian $H$, we need to
calculate the terms $\langle 1,l|S_z|1,r\rangle\langle
N_{l}|M_r\rangle, \ l,\ r=0,\pm 1 $. Because $\omega_0\ll
\omega$, the transitions of the qubits generally contribute little
to exciting the quantum oscillator, so the corresponding terms
$\langle 1,l|S_z|1,r\rangle\langle N_{l}|M_r\rangle, \ N\ne M$, can
be omitted. This approximation is called the adiabatic approximation
(AA).
\section{The spectrum of the Hamiltonian by the AA method} Under
this adiabatic approximation, the Hamiltonian $H$ becomes
block-diagonal, with the $N$th diagonal block $\tilde{H}_N$ a
$3\times 3$ matrix
defined in the basis
$|1,m\rangle |N_m\rangle,\ m=1,0,-1$, as
\begin{eqnarray}
\tilde{H}_N=\left(
\begin{array}{ccc}
\tilde{N} & \Omega_{1N} &\Omega_{2N} \\
\Omega_{1N} & N &\Omega_{1N} \\
\Omega_{2N} & \Omega_{1N} &\tilde{N} \\
\end{array}
\right),
\end{eqnarray}
where
\begin{eqnarray}
\tilde{N} &=& N-2\beta^2+\frac{a}2 \frac{\omega_0}{\omega}, \\
\Omega_{1N} &=& \frac{\omega_0}{\omega}\langle 1,1|S_z|1,0\rangle\langle N_{1}|N_0\rangle\nonumber \\
&=& \frac{1}{\sqrt2}\frac{\omega_0}{\omega}\exp{(-\beta^2)}L_N(2\beta^2) ,\\
\Omega_{2N} &=& \frac{\omega_0}{\omega}\langle 1,-1|S_z|1,1\rangle\langle N_{-1}|N_1\rangle\nonumber \\
&=& -\frac{a}2
\frac{\omega_0}{\omega}\exp{(-4\beta^2)}L_N(8\beta^2).
\end{eqnarray}
The parameter $a$ enters $\tilde{H}_N$ through two diagonal
terms in $\tilde{N}$ and two off-diagonal ones in $\Omega_{2N}$,
which give rise to a transition between $|1,-1\rangle
|N_{-1}\rangle$ and $|1,1\rangle |N_{1}\rangle$. This is a new
transition due to the non-equal-level parameter $a\ne 0$, which is
absent in the equal-level case. The solutions of the eigenvalue
problem of the operator $\tilde{H}_N$ are
\begin{eqnarray}
\tilde{E}_{N,0}^0&=& \hbar\omega\left(N-2\beta^2+\frac{a}2 \frac{\omega_0}{\omega}-\Omega_{2N}\right)\nonumber \\
&=& \hbar\omega\left(N+\tilde{T}_0-2\Omega_{2N}\right)\label{e0}, \\
\tilde{E}^0_{N,\pm} &=& \hbar\omega\left(N+\frac{\tilde{T}_0\pm \sqrt{\tilde{T}_0^2+8\Omega^2_{1N}}}2\right),\label{epm} \\
\tilde{T}_0 &=& -2\beta^2+\frac{a}2 \frac{\omega_0}{\omega}+\Omega_{2N} \label{t0}
\nonumber\\ &=&-2\beta^2+\frac{a}2 \frac{\omega_0}{\omega}(1-\exp{(-4\beta^2)}L_N(8\beta^2)),
\end{eqnarray}
and
\begin{eqnarray}
|\tilde{E}_{N,0}^0\rangle &=& \frac1{\sqrt2}\left(
\begin{array}{c}
1 \\
0 \\
-1 \\
\end{array}
\right), |E^0_{N,\pm}\rangle =\frac1{\tilde{L}_{N,\pm}}\left(
\begin{array}{c}
1 \\
\tilde{Y}_{N,\pm} \\
1 \\
\end{array}
\right), \nonumber \\
\label{vpm0}\\
\tilde{Y}_{N,\pm} &=& \left(\frac{-\tilde{T}_0\pm \sqrt{\tilde{T}_0^2+8\Omega^2_{1N}}}{2\Omega_{1N}}\right), \\
\tilde{L}^2_{N,\pm} &=& \tilde{Y}^2_{N,\pm} +2.
\end{eqnarray}
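As a cross-check (our own numerical sketch with illustrative parameters; all energies in units of $\hbar\omega$), one can diagonalize $\tilde H_N$ numerically and compare with the closed-form eigenvalues above:

```python
import numpy as np

# Compare the stated eigenvalues E0, E+/- with a direct diagonalization
# of the AA block H_N (energies in units of hbar*omega).
N, beta, r, a = 2, 0.2, 0.25, 0.2          # r = omega_0/omega (illustrative)

def laguerre(n, x):
    if n == 0:
        return 1.0
    L0, L1 = 1.0, 1.0 - x
    for k in range(1, n):
        L0, L1 = L1, ((2*k + 1 - x)*L1 - k*L0)/(k + 1)
    return L1

O1 = (r/np.sqrt(2))*np.exp(-beta**2)*laguerre(N, 2*beta**2)
O2 = -(a/2)*r*np.exp(-4*beta**2)*laguerre(N, 8*beta**2)
Nt = N - 2*beta**2 + (a/2)*r
HN = np.array([[Nt,       O1,       O2],
               [O1,       float(N), O1],
               [O2,       O1,       Nt]])

T0 = -2*beta**2 + (a/2)*r + O2
root = np.sqrt(T0**2 + 8*O1**2)
E0, Ep, Em = N + T0 - 2*O2, N + (T0 + root)/2, N + (T0 - root)/2
print(np.sort(np.linalg.eigvalsh(HN)), np.sort([E0, Ep, Em]))
```

The two lists agree to machine precision, confirming the closed-form spectrum.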
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{11posi.EPS}
\includegraphics[width=0.38\textwidth] {11zero1.EPS}
\includegraphics[width=0.38 \textwidth]{11nega.EPS}
\caption{Schematic diagram of $P^1_N(t)$ with the
parameters $N=2,\ \frac{\omega_0}{\omega}=0.25,\ \beta=0.2$ and
$a=0.2,\ 0,\ -0.2$ from top to bottom, respectively. The apparent
difference between these three figures strongly implies that the
parameter $a$ influences the qubit dynamics.}\label{fig11-3}
\end{figure}
In Ref.\cite{agar}, the authors discussed the special case
$\Omega_{1N}\gg 2\beta^2$ with $a=0$. The other extreme case is
$\Omega_{1N}\approx 0$: with $a=0$, the spectrum of $H^0$ is then
the same as that of $H$ in the AA method, but the two differ
whenever $a\ne 0$.
All eigenvalues $\tilde{E}^0_{N,m}, \ m=0,\pm $, are influenced by the
parameter $a$ through the quantity $\tilde{T}_0=-2\beta^2+\frac{a}2
\frac{\omega_0}{\omega}(1-\exp{(-4\beta^2)}L_N(8\beta^2))$, and so are
the eigenvectors $|\tilde{E}^0_{N,\pm}\rangle $. The
dynamics of the qubits will therefore definitely differ from that of
the equal-level case.
\section{The Physical impact of the results}
The dynamics of the qubits is important for real systems. The
evolution of the qubits depends crucially on the
initial states of both the qubits and the
quantum oscillator. Here we take the initial states of the qubits
to be $|1,m\rangle,\ m=0,\pm1$, which are applicable in the
strong-coupling regime (see Ref.\cite{agar} for details);
$|1,m\rangle,\ m=0,\pm1$, are different from the states
$|m\rangle,\ m=1,\ 2,\ 3$.
Similarly, the natural initial states for the quantum oscillator are
the displaced number states or the displaced coherent states. In this
report, we treat the simplest case of the initial states
$|\Phi^N_m(0)\rangle=|1,m\rangle |N_m\rangle, \ m=0,\pm1$. We mainly
focus on two kinds of probabilities, namely $P^m_N(t)$, for
the system to remain unchanged, and $T^N_{m\rightarrow l}(t)$, for it
to transit to new states $|1,l\rangle |N_l\rangle,\ l\ne m$. It is
easy to obtain the following:
\begin{eqnarray}
P^1_N(t)&=&\frac14+\frac1{\tilde{L}_{N,+}^4}+\frac1{\tilde{L}_{N,-}^4} +\frac1{\tilde{L}_{N,+}^2}\cos \omega_{N,1}t
\nonumber\\ &&+\frac1{\tilde{L}_{N,-}^2}\cos \omega_{N,2}t+\frac2{\tilde{L}_{N,+}^2\tilde{L}_{N,-}^2}\cos \omega_{N,0}t
\label{p1}, \nonumber \\
\end{eqnarray}
\begin{eqnarray}
&& P^0_N(t)=
\frac{\tilde{Y}_{N,+}^4}{\tilde{L}_{N,+}^4}+\frac{\tilde{Y}_{N,-}^4}{\tilde{L}_{N,-}^4}
+\frac{2\tilde{Y}_{N,+}^2\tilde{Y}_{N,-}^2}{\tilde{L}_{N,+}^2\tilde{L}_{N,-}^2}\cos
\omega_{N,0}t, \nonumber \\
\end{eqnarray}
\begin{eqnarray}\tilde{T}_{1\rightarrow -1}^N(t)&=& \frac14+\frac1{\tilde{L}_{N,+}^4}+\frac1{\tilde{L}_{N,-}^4}
-\frac1{\tilde{L}_{N,+}^2}\cos \omega_{N,1}t \nonumber\\
&& -\frac1{\tilde{L}_{N,-}^2}\cos
\omega_{N,2}t+\frac2{\tilde{L}_{N,+}^2\tilde{L}_{N,-}^2}\cos
\omega_{N,0}t, \nonumber \\
\end{eqnarray}
and
\begin{eqnarray}
&&\tilde{T}_{1\rightarrow
0}^N(t)=\frac{\tilde{Y}_{N,+}^2}{\tilde{L}_{N,+}^4}+\frac{\tilde{Y}_{N,-}^2}{\tilde{L}_{N,-}^4}
+\frac{2\tilde{Y}_{N,+}\tilde{Y}_{N,-}}{\tilde{L}_{N,+}^2\tilde{L}_{N,-}^2}\cos
\omega_{N,0}t, \nonumber \\
\end{eqnarray}
where
\begin{eqnarray} && \omega_{N,1}=\omega\left(\frac{4\Omega_{2N}-\tilde{T}_0+ \sqrt{\tilde{T}_0^2+8\Omega^2_{1N}}}2\right),
\\ && \omega_{N,2}=\omega\left(\frac{4\Omega_{2N}-\tilde{T}_0- \sqrt{\tilde{T}_0^2+8\Omega^2_{1N}}}2\right),
\\ &&\omega_{N,0}=\omega\sqrt{\tilde{T}_0^2+8\Omega^2_{1N}}\label{omega00}.
\end{eqnarray}
We see that the probability $P^0_N(t)$ oscillates with only one
frequency, $\omega_{N,0}$, but the probabilities
$P^1_N(t)=P^{-1}_N(t)$ oscillate with three frequencies,
$\omega_{N,1}, \ \omega_{N,2},\ \omega_{N,0}$. The non-equal-level
parameter $a$ changes the three
frequencies as well as the amplitudes. For $a=0$, the detailed
dynamics of the qubits is given in Ref.\cite{agar}; the special case
with $\beta^2\ll 8\Omega_{1N}^2,\ a=0$, is studied there, where
$\omega_{N,1}=-\omega_{N,2}=\frac12
\omega_{N,0}=\sqrt2|\Omega_{1N}|$.
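A useful consistency check on these expressions (our own numerical sketch with illustrative parameters) is that the three probabilities $P^1_N(t)$, $\tilde{T}^N_{1\rightarrow -1}(t)$ and $\tilde{T}^N_{1\rightarrow 0}(t)$ sum to one at every time, and that $P^1_N(0)=1$:

```python
import numpy as np

# Probability conservation for the initial state |1,1>|N_1>, with
# frequencies in units of omega.  Parameters are illustrative.
N, beta, r, a = 2, 0.2, 0.25, 0.2      # r = omega_0/omega

def laguerre(n, x):
    if n == 0:
        return 1.0
    L0, L1 = 1.0, 1.0 - x
    for k in range(1, n):
        L0, L1 = L1, ((2*k + 1 - x)*L1 - k*L0)/(k + 1)
    return L1

O1 = (r/np.sqrt(2))*np.exp(-beta**2)*laguerre(N, 2*beta**2)
O2 = -(a/2)*r*np.exp(-4*beta**2)*laguerre(N, 8*beta**2)
T0 = -2*beta**2 + (a/2)*r + O2
root = np.sqrt(T0**2 + 8*O1**2)
Yp, Ym = (-T0 + root)/(2*O1), (-T0 - root)/(2*O1)
Lp2, Lm2 = Yp**2 + 2, Ym**2 + 2
w0t, w1, w2 = root, (4*O2 - T0 + root)/2, (4*O2 - T0 - root)/2

def probs(t):
    P1 = 0.25 + 1/Lp2**2 + 1/Lm2**2 + np.cos(w1*t)/Lp2 \
         + np.cos(w2*t)/Lm2 + 2*np.cos(w0t*t)/(Lp2*Lm2)
    Tm = 0.25 + 1/Lp2**2 + 1/Lm2**2 - np.cos(w1*t)/Lp2 \
         - np.cos(w2*t)/Lm2 + 2*np.cos(w0t*t)/(Lp2*Lm2)
    Tz = Yp**2/Lp2**2 + Ym**2/Lm2**2 + 2*Yp*Ym*np.cos(w0t*t)/(Lp2*Lm2)
    return P1, Tm, Tz

for t in (0.0, 3.7, 12.9):
    print(t, sum(probs(t)))            # each sum equals 1
```

The identity holds because $\tilde{Y}_{N,+}\tilde{Y}_{N,-}=-2$ and $1/\tilde{L}_{N,+}^2+1/\tilde{L}_{N,-}^2=\frac12$.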
Fig.(\ref{fig11-3}) shows the general behavior of the probability
for the qubits to remain in their initial states $|1,\pm1\rangle$ for
different values of the parameter
$a=0.2,\ 0, \ -0.2$. Obviously, $P^1_N(t)$ is influenced
by the four parameters $\beta, a, N,
\frac{\omega_0}{\omega}$. In Ref.\cite{iris}, it is shown that the
coupling strength $\beta$ may range from $0.01$ to $1$ for the
adiabatic approximation to apply (weak coupling will not be
discussed here). From
Eqs.(\ref{e0})-(\ref{t0}) and (\ref{p1})-(\ref{omega00}), we see that the
parameter $a$ acts appreciably whenever $\beta\approx
0.01-0.6$. As stated before, the three-level system is equivalent to
a two-qubit system, and the non-equal-energy-level parameter $a$
represents the coupling strength between the two qubits. This shows
that the coupling of the two qubits changes their dynamics
considerably in the range $\beta\approx 0.1-0.6$, where the adiabatic
approximation method applies; this is our limit on the
coupling parameter $\beta$.
\section{Dynamics of the system}
Here we consider the dynamics of the composite system with different
initial conditions. Furthermore, we can also calculate the
probability for the two qubits to stay in their other fully entangled
states. There are four fully entangled Bell states:
\begin{eqnarray}
\Psi_{\pm}&=&\frac1{\sqrt2}(|\uparrow\downarrow\rangle\pm |\downarrow\uparrow\rangle)\\
\Phi_{\pm}&=&\frac1{\sqrt2}(|\uparrow\uparrow\rangle\pm
|\downarrow\downarrow\rangle).
\end{eqnarray}
It is easy to see that $|1,0\rangle=\Phi_-$, while the others are
related to the vectors $|1,1\rangle,\ |1,-1\rangle$. Due to the
facts that \[|2\rangle=\frac1{\sqrt2}(|1,1\rangle-|1,-1\rangle)\]
and \[|2\rangle=\frac1{\sqrt2}(|\uparrow\downarrow\rangle+
|\downarrow\uparrow\rangle)\] for the symmetric case
$a=\kappa>0$ and
\[|2\rangle=\frac1{\sqrt2}(|\uparrow\downarrow\rangle-
|\downarrow\uparrow\rangle)\] for the asymmetric case
$a=-\kappa<0$, we can write them as
\begin{eqnarray}
\Phi_-&=&|1,0\rangle\\
\Phi_{+}&=&\frac1{\sqrt2}(|1,1\rangle+|1,-1\rangle)\\
\Psi_{\pm}&=&|2\rangle=\frac1{\sqrt2}(|1,1\rangle-|1,-1\rangle).
\end{eqnarray}
Note that $\Psi_+$ and $\Psi_-$ correspond to the positive
and negative signs of the parameter $a$, respectively. The initial
fully entangled Bell states of the qubits can thus be written in a
unified form as
$|I_{\delta}\rangle=\frac1{\sqrt2}(|1,1\rangle+\delta |1,-1\rangle)$
($\delta=\pm1$) for $\Phi_+,\ \Psi_{\pm}$ and
$|I_0\rangle=|1,0\rangle$ for $\Phi_-$. We will consider the
dynamics of the system with the two qubits in $|I_{\pm 1}\rangle $
or $|I_0\rangle $ and the quantum mode field in $ |\alpha\rangle$.
\subsection{The qubits are initially in states $|I_{\pm 1}\rangle $}
In the first case, the initial state of the two qubits is
$|I_{\delta}\rangle $ with $\delta=\pm1$, and the quantum mode field
is in $|\alpha\rangle$. Suppose $\beta<0.7$. Then, in the adiabatic
approximation, the qubits evolve into the states
$|I_{\bar{\delta}}\rangle $ with $\bar{\delta}=\pm1$ with
probability
\begin{widetext}
\begin{eqnarray}
&&P(\delta,\bar{\delta},\alpha,t)=\frac12+\sum_{N=0}^{\infty}\frac{\bar{\delta}\delta}{N!}(\alpha^2-\beta^2)^Ne^{-(\alpha^2+\beta^2)}\langle
N_{-1}|N_1\rangle\nonumber\\ &&-\frac12\sum_{N=0}^{\infty}\bigg(p(N,\alpha+\beta)+p(N,\alpha-\beta)
+\frac{2\delta}{N!}(\alpha^2-\beta^2)^Ne^{-(\alpha^2+\beta^2)}\bigg)(1+\langle
N_{-1}|N_1\rangle)
\frac{\Omega^2_{1N}}{\tilde{T}_0^2+8\Omega^2_{1N}}\bigg(1-\cos
(\omega_{N,0}t)\bigg), \nonumber \\
\label{p1pm to p1pm}
\end{eqnarray}
\end{widetext}
where \begin{eqnarray}
p(N,\alpha)=\frac{e^{-|\alpha|^2}|\alpha|^{2N}}{N!}
\end{eqnarray} is the probability of finding the quantum field state
$|\alpha\rangle$ in the number state $|N\rangle$, and $\delta=\pm1,\
\bar{\delta}=\pm1$. From Eq.(\ref{p1pm to p1pm}), we see
that the initial entangled states $|I_{\pm1}\rangle$ of the two
qubits have a large probability ($P(\delta,-\delta,\alpha,t)$
being around $\frac12$ ) of evolving into the states
$|I_{\mp1}\rangle$. So it is hard for the two qubits to remain in
their initial entangled states. In the following subsection, we
examine the initial state $|I_0\rangle$.
\subsection{The qubits are initially in states $|I_{0}\rangle $}
Suppose the qubits are initially in the state $|I_0\rangle$, while
the oscillator naturally stays in its coherent state
$|\alpha\rangle$. This is a very general state for a quantum
processing system.
The system evolves accordingly, and the probability for the qubits to remain in the initial state is
\[P_0(\alpha)=1-T(\alpha,t),\]
where $T(\alpha,t)$ is the probability of the two qubits transiting
to other, non-entangled states, given by
\begin{eqnarray}
T(\alpha,t)&=&\sum_{N=0}^{\infty}2p(N,\alpha)\frac{\tilde{Y}_{N,+}^2}{\tilde{L}_{N,+}^4}\bigg(1-\cos
(\omega_{N,0}t)\bigg),\label{t0-1}\end{eqnarray} where $p(N,\alpha)$ is the
probability of $N$ photons in the coherent state $|\alpha\rangle$.
In the limit $|\alpha|^2\gg1$, the quantity
$T(\alpha,t)$ can be simplified greatly because \begin{eqnarray}
p(N,\alpha)&\approx&\frac{e^\frac{{-(N-|\alpha|^2)^2}}{2|\alpha|^2}}{\sqrt{2\pi|\alpha|^2}}.
\end{eqnarray}
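The quality of this Gaussian approximation to the Poissonian photon distribution is easy to check numerically (our own sketch; the mean photon number is illustrative; log-space evaluation avoids overflow of $|\alpha|^{2N}/N!$):

```python
import math

# Poisson weight p(N, alpha) of a coherent state versus its Gaussian
# approximation for |alpha|^2 >> 1.
alpha2 = 100.0                      # mean photon number |alpha|^2

def p_poisson(N):
    # evaluated in log space to avoid overflow of alpha^(2N) and N!
    return math.exp(N*math.log(alpha2) - alpha2 - math.lgamma(N + 1))

def p_gauss(N):
    return math.exp(-(N - alpha2)**2/(2*alpha2))/math.sqrt(2*math.pi*alpha2)

for N in (80, 100, 120):
    print(N, p_poisson(N), p_gauss(N))
```

Near the mean the two agree to well below one percent, which justifies the replacement above.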
We denote the term $2\frac{\tilde{Y}_{N,+}^2}{\tilde{L}_{N,+}^4}$ in
the above equation by $B(N)$; its general properties for some
typical parameters are shown in Fig.(\ref{fig7}).
Fig.(\ref{fig7}) also gives some examples of
$p(N,\alpha)$ in the case $|\alpha|^2\gg1$. Under the assumption that
$p(N,\alpha)$ falls rapidly to zero as $N$ deviates from its
average $|\alpha|^2$, we can safely approximate $B(N)$ in
Eq.(\ref{t0-1}) as
\[B(N)\approx b_0+b_1(N-\bar{n})+b_2(N-\bar{n})^2,\] where $\bar{n}=[|\alpha|^2]$
is the integer part of $|\alpha|^2$
and \[b_0=2\frac{\tilde{Y}_{\bar{n},+}^2}{\tilde{L}_{\bar{n},+}^4},\ b_1=\bigg[\frac {dB(N)}{dN}\bigg]_{N=\bar{n}},\
b_2=\bigg[\frac {d^2B(N)}{2dN^2}\bigg]_{N=\bar{n}}.\]
It is easy to split $T(\alpha,t)$ into two parts:
\begin{eqnarray} T(\alpha,t)&=&T_1(\alpha)-T_2(\alpha,t)\\
T_1(\alpha)&=&\sum_{N=0}^{\infty}B(N)
\frac{e^\frac{{-(N-|\alpha|^2)^2}}{2|\alpha|^2}}{\sqrt{2\pi|\alpha|^2}}=b_0+b_2\bar{n}\\
T_2(\alpha,t)&=& \sum_{N=0}^{\infty}B(N)
\frac{e^\frac{{-(N-|\alpha|^2)^2}}{2|\alpha|^2}}{\sqrt{2\pi|\alpha|^2}}\cos
(\omega_{N,0}t).
\end{eqnarray}
Under the assumption $|\alpha|^2\gg
1$ and the Gaussian form of $p(N,\alpha)$, it is reasonable to extend the
summation in $T_2(\alpha,t)$ from $0$ to $-\infty$. The
Poisson summation formula then gives
\begin{eqnarray}
T_2(\alpha,t)&=&\sum_{k=-\infty}^{+\infty}\bar{g}(k,t)
\end{eqnarray}
\begin{eqnarray}
\bar{g}(k,t)&=& \int_{-\infty}^{+\infty}B(N)
\frac{e^\frac{{-(N-|\alpha|^2)^2}}{2|\alpha|^2}}{\sqrt{2\pi|\alpha|^2}}\cos
(\omega_{N,0}t)e^{i2\pi kN} dN. \nonumber \\
\end{eqnarray}
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{201304260.EPS}
\includegraphics[width=0.38\textwidth]{20130426.EPS}
\caption{The functions $B(N)$ and $p(N,\alpha)$. Lines
(a),(b),(c),(d) correspond to $B(N)$ with parameters $a=-0.8,\
\beta=0.2,\ \frac{\omega_0}{\omega}=0.24$ and to $p(N,\alpha)$ with
$|\alpha|^2=28,47,70$ in the first figure; lines (b),(c),(d)
correspond to $B(N)$ with the same parameters $a=-0.8,\ \beta=0.2,\
\frac{\omega_0}{\omega}=0.24$ and to $p(N,\alpha)$ with
$|\alpha|^2=320,360,500$ in the second figure.}\label{fig7}
\end{figure}
Generally,
$\omega_{N,0}t$ can be expanded as
\[\omega_{N,0}t=\omega_{\bar{n},0}t+c_1(N-\bar{n})+c_2(N-\bar{n})^2\]
with \[c_1=\bigg[\frac {d\omega_{N,0}t}{dN}\bigg]_{N=\bar{n}},\
c_2=\bigg[\frac {d^2\omega_{N,0}t}{2dN^2}\bigg]_{N=\bar{n}}.\]
So we have
\begin{eqnarray}
\bar{g}(k,t)&=&A(k,t)\cos\theta_1(k,t)\label{gbar1},\\
A(k,t)&=& \frac{e^{-\frac{\bar{n}(2\pi
k+c_1t)^2}{(1+4\bar{n}^2c_2^2t^2)^{\frac12}}\cos\theta(t)}}{\bigg(1+4\bar{n}^2c_2^2t^2\bigg)^{\frac14}}\bigg(b_0
+\gamma\bigg),\\
\theta_1(k,t)&=&\theta_1(t)=(\omega_{\bar{n},0}t+\frac{\theta(t)}2+2\pi
k\bar{n})\nonumber\\
&& -\frac{\bar{n}(2\pi
k+c_1t)^2}{(1+4\bar{n}^2c_2^2t^2)^{\frac12}}\sin\theta(t),\\
\tan\theta(t)&=&2\bar{n}c_2t.\label{gbar2} \\
\gamma&=&b_2(\frac{\bar{n}}{(1+4\bar{n}^2c_2^2t^2)^{\frac12}}-\frac{(2\pi k+c_1t)^2\bar{n}^2}{(1+4\bar{n}^2c_2^2t^2)}). \nonumber \\
\label{gbar3}
\end{eqnarray}
It is clear that $T_2(\alpha,t)$ exhibits collapse and revival
phenomena, with its $k$th term being $\bar{g}(k,t)$. It is most
instructive to delineate them in two special cases: one where
$c_1\ne 0$ and $c_2= 0$, and the other extreme where $c_1=0$
and $c_2\ne 0$. In the first case, we obtain
\begin{equation}\label{111}
A(k,t)= e^{-\bar{n}(2\pi
k+c_1t)^2}\bigg(b_0 +b_2(\bar{n}-(2\pi k+c_1t)^2\bar{n}^2)
\bigg).
\end{equation}
The revival times are \[t_{rev}(k)=2\pi\left|\frac{k}{c_1}\right|\] and the height
of the amplitude is \[A(k,t_{rev})=b_0 +b_2\bar{n}=T_1,\] which is
constant, in contrast to the height decreasing with time in
Ref.\cite{agar}.
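The first case can be illustrated with a toy numerical model (our own sketch; parameters are illustrative and the amplitude factor $B(N)$ is set to one): with frequencies linear in $N$, the sum over the Gaussian photon distribution collapses and then fully revives at $t_{rev}(k)=2\pi|k/c_1|$:

```python
import numpy as np

# Collapse and revival for omega_{N,0} linear in N (c_2 = 0).  The base
# frequency omega_{nbar,0} is set to 1 and B(N) to 1; only the
# interference structure of the sum matters here.
nbar, c1 = 50, 0.1                             # illustrative values
Ns = np.arange(0, 2*nbar + 21)
p = np.exp(-(Ns - nbar)**2/(2.0*nbar))
p /= p.sum()                                   # Gaussian photon weights

def T2(t):
    return float(np.sum(p*np.cos((1.0 + c1*(Ns - nbar))*t)))

t_rev = 2*np.pi/c1
print(T2(0.0), T2(t_rev/2), T2(t_rev))         # full, collapsed, revived
```

At $t=t_{rev}$ every term returns exactly in phase, so the revival height is constant from one revival to the next, as stated above.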
In the second case, it is easy to see that \begin{eqnarray}
A(k,t)&=& \frac{e^{-\frac{4\bar{n}\pi^2
k^2}{(1+4\bar{n}^2c_2^2t^2)}}}{\bigg(1+4\bar{n}^2c_2^2t^2\bigg)^{\frac14}}\nonumber\\
& *& \bigg(b_0
+\frac{\bar{n}b_2}{(1+4\bar{n}^2c_2^2t^2)^{\frac12}}-\frac{4\pi^2
k^2\bar{n}^2b_2}{(1+4\bar{n}^2c_2^2t^2)}
\bigg), \nonumber \\
\end{eqnarray}
where the facts
\[\tan\theta(t)=2\bar{n}c_2t,\ \
\cos\theta(t)=\frac1{(1+4\bar{n}^2c_2^2t^2)^{\frac12}}\] are used.
Obviously, there are no revival phenomena in $A(k,t)$ in this case;
$A(k,t)$ generally decreases with time, apart from an irregular
transient at early times.
Neither of these two cases occurs exactly. Still, $c_2$ may be
very small, with $c_1\gg c_2$; this situation is close to the first
case, where collapse and revival appear in $T_2(\alpha,t)$. However,
the small but nonzero quantity $c_2$ contributes both a
decrease in the heights of the revival amplitudes, as shown in
Eqs.(\ref{gbar1})-(\ref{gbar3}), and a broadening of the revivals as
time grows. The broadening of the revivals also makes the
collapse intervals shorter and shorter until they disappear.
Similarly, $c_2\gg c_1$ with $c_1\ne 0$ means that the revival time
gap is greater than that in the first case. All these features are
shown in Fig.(\ref{fig80}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{collaps.EPS}
\caption{The transition $2T(\alpha,t)$ with $|\alpha|^2=14,\
a=-0.48,\ \beta=0.102,\ \frac{\omega_0}{\omega}=0.21$.}\label{fig80}
\end{figure}
The above discussion may be limited or inapplicable in the
case where $p(N,\alpha)$ does not decrease rapidly
enough, so that the approximations $B(N)\approx
b_0+b_1(N-\bar{n})+b_2(N-\bar{n})^2$ and $
\omega_{N,0}t=\omega_{\bar{n},0}t+c_1(N-\bar{n})+c_2(N-\bar{n})^2 $
fail to hold. New, more detailed approximations must then be added.
As this is rare, we stop here.
\section{The preservation of entanglement of two qubits}
From the previous section, we see that the dynamics of the Rabi
model is much more complicated than that of its RWA counterpart, the
JC model.
\begin{figure}[ht]
\centering
\includegraphics[width=0.38\textwidth]{entanglement11.EPS}
\includegraphics[width=0.38\textwidth]{entanglement22.EPS}
\caption{The function $T(\alpha,t)$ becomes very small for
$|\alpha|^2=36,\ a=-0.8,\ \beta=0.5067,\
\frac{\omega_0}{\omega}=0.22$ (upper figure) and for $|\alpha|^2=55,\
a=-0.6,\ \beta=0.5599,\ \frac{\omega_0}{\omega}=0.24$ (lower
figure). }\label{fig8}
\end{figure}
For the initial entangled state $|I_0\rangle$ of the two qubits with
the control field mode in its coherent state, the evolution depends
on the various parameters in Eq.(\ref{t0-1}); it depends on the
number $N$ in an extremely nonlinear and intricate way. This kind of
complexity makes the Rabi model hard to study; nevertheless, it also
provides the opportunity to preserve the entanglement of the two
qubits by a careful choice of the appropriate parameters. The
coherent state has a Poissonian number distribution, which
can be approximated by a Gaussian distribution if the average number
$|\alpha|^2$ is large enough. The intricacy of the Rabi model can
be utilized to make the quantity
$B(N)=2\frac{\tilde{Y}_{N,+}^2}{\tilde{L}_{N,+}^4}$ in
Eq.(\ref{t0-1}) extremely small when $N$ is in the neighborhood of
$|\alpha|^2$ by a suitable selection of the parameters, which
guarantees that the initial state of the qubits remains unchanged.
This is shown in Fig.(\ref{fig8}). It is easy to see that whenever we
select the parameters appropriately, for example as
$|\alpha|^2=55,\ a=-0.6,\ \beta=0.5599,\
\frac{\omega_0}{\omega}=0.24$, the Bell state
$|I_0\rangle=|1,0\rangle =\frac1{\sqrt2}(|\uparrow\uparrow\rangle
-|\downarrow\downarrow\rangle)$ has a probability of about
$1-0.005=99.5\%$ of remaining unchanged\cite{tian}.
The smallness of $\frac{\tilde{Y}_{N,+}^2}{\tilde{L}_{N,+}^4}$ in
Eq.(\ref{t0-1}) in the neighborhood of $N=|\alpha|^2$ is crucial for
$T(\alpha,t)\approx 0$. We therefore select the zeros ($N_1,N_2,\cdots
$) of $\Omega_{1N}$ as candidate values of $|\alpha|^2$ for
given quantities $\beta,\ a$. A larger $|\tilde{T}_0|^2=|
-2\beta^2+\frac{a}2 \frac{\omega_0}{\omega}+\Omega_{2N}|^2$
around the zeros of $\Omega_{1N}$ then favors $T(\alpha,t)\approx 0$.
As a result, negative $a$ and negative $\Omega_{2N}$ around the zeros
$N=|\alpha|^2$ of $\Omega_{1N}$ are the keys to making
$T(\alpha,t)\approx 0$, that is, to keeping the entangled state
$|I_0\rangle$ unchanged.
There is an alternative method for the realization of $T(\alpha,t)\approx
0$. One can first determine the average number $|\alpha|^2$ of the coherent state of the control
field, and then choose $\beta$ and $a$ by the similar requirement that $|\tilde{T}_0|^2=| -2\beta^2+\frac{a}2
\frac{\omega_0}{\omega}+\Omega_{2N}|^2$ be as large as possible around
$N=|\alpha|^2$.
The parameter $a$ is connected with the inter-qubit coupling
strength $\kappa$ as $a=\pm \kappa$ in the symmetric and asymmetric
transition cases, respectively. Our study also shows that negative
$a$ is favorable for $T(\alpha,t)$ approaching zero, as
Fig.(\ref{fig8}) exhibits. So the inter-qubit coupling favors
the preservation of the initial entanglement, especially in the
asymmetric transition case ($a<0$).
None of the other Bell states has this nice property, because of
the simple factor $\frac12$ in the quantities
$P(\delta,-\delta,\alpha,t)$ in Eq.(\ref{p1pm to p1pm}). Nevertheless,
the preservation of the entangled Bell state $|I_0\rangle$ is
still useful for applications in quantum information processing.
Moreover, the complicated form of $T(\alpha,t)$ makes the appropriate
choice of the parameters much easier and will benefit
experimental applications.
In summary, when coupled strongly to a quantum mode field, the two
qubits' dynamics is influenced by the three parameters $ \beta,\
\frac{\omega_0}{\omega},\ a$ and by the initial conditions in a very
complicated way. We investigate the evolution of the four Bell
entangled states with the control mode in its coherent state. Three
of the four Bell states evolve into combinations of the four
Bell states and cannot remain in their initial entangled form.
Nevertheless, the above-mentioned complexity unexpectedly enhances
our ability to preserve the entanglement of the two qubits in the
Bell state $|I_0\rangle=|1,0\rangle
=\frac1{\sqrt2}(|\uparrow\uparrow\rangle
-|\downarrow\downarrow\rangle)$; that is, it can remain in its
initial state under a suitable choice of the control parameters. It
is shown that negative $a$ is more favorable for maintaining the
state $|I_0\rangle=|1,0\rangle$.
These results will be useful for quantum information processing.
\acknowledgments The work was partly supported by the Major State
Basic Research Development Program of China (973 Program:
No.2010CB923202) and the National Natural Science Foundation of China (No.
10875018).
\section{Introduction}
Expectations formation is a core question in economics. In recent years, a strand of literature in macroeconomics and finance has been collecting empirical regularities using survey data on subjective forecasts. It finds that forecasts largely deviate from the full information model that predominates in economic modelling: forecast errors are biased and predictable using past errors and past revisions. Two types of explanations for this have been put forward. The first one is that the data-generating process (DGP) is simple and known to forecasters, but forecasting rules are irrational but linear, featuring for instance under-reaction \citep{bouchaud_sticky_2019} or overreaction \citep{bordalo_diagnostic_2019, bordalo_over-reaction_2018, afrouzi_overreaction_2020}. The second approach to explaining observed biases is the tenet that the data-generating process is too complex to be known by forecasters. Thus, they use a misspecified model calibrated on the data they observe. This may come from the fact that the DGP is hard to learn \citep[for recent contributions along these lines see][]{veldkamp, nakamura}, or alternatively from bounded rationality of the forecasters. They can only use simple forecasting rules \citep{fuster_natural_2010, gabaix_behavioral_2018}. In any case, forecast errors are predictable because forecasters use an imperfect model. In this paper, we find evidence consistent with the second view, i.e. that, facing complex (non-Gaussian) processes, forecasters use simple rules.
\medskip
We use data on some 63,601 analyst forecasts of corporate revenue growth and their realizations. An advantage of focusing on revenue growth (instead of EPS as the literature typically does) is that revenue is always positive so that growth rate is always well defined. We first show that the relationship between forecast revisions and future forecast error is non-linear, a feature that is not reported in the existing literature. In some settings, revisions linearly and \emph{positively} predict forecast errors, a feature commonly interpreted as evidence of under-reaction \citep{coibion_information_2015}. In others, revisions linearly and \emph{negatively} predict forecast errors, which is considered as evidence of overreaction \citep{bordalo_diagnostic_2019,bordalo_over-reaction_2018}. In our sample, which is much larger than those of existing studies, and which focuses on a rather new object, sales growth, we find evidence of both. For intermediate values of revisions, forecasters underreact to news (an increasing relation between revisions and errors). For large values of revisions, forecasters overreact (a decreasing relation between revisions and errors). This non-linearity is robust. It holds in U.S. data and international data. It holds across most industry groups.
\medskip
The remainder of the paper is dedicated to explaining this fact. Our framework is based on the simple assumption that forecasters use a linear rule to forecast sales growth, but that this rule is misspecified because the true DGP is more complex. Taking inspiration from the literature on firm size distribution \citep[in particular,][]{axtell, bottazzi_explaining_2006}, we posit that sales growth distribution may be modelled by the sum of a low-frequency and a high-frequency shock. The low frequency shock is Gaussian, while the high-frequency shock is non-Gaussian. It may have very large (positive or negative) realizations. With such a model, the optimal forecast of future growth, conditional on current growth, is non-linear. A perfectly rational forecaster anticipates more reversion to the mean when realizations are extreme and more persistence when realizations are intermediate. We assume, however, that agents stick to a linear rule to make their forecasts. The fact that agents use a misspecified model may be grounded in bounded rationality \citep[i.e., agents use a simple rule even if the process is complex, as in][]{fuster_natural_2010} or the difficulty of learning about complex processes (shocks with multiple frequencies are hard to learn \citealp{nakamura}; shocks with fat tails also \citealp{veldkamp}).
\medskip
Combined, these two assumptions (linear forecasting rule but short-term non-Gaussian shocks) are enough to generate the non-linear relation between forecast errors and past revisions that we observe empirically. The mechanism is intuitive. When revisions are large, the rational forecaster should anticipate mean reversion, but the linear forecaster won't. She overreacts to big positive (or negative) news. When fitting her forecasting rule to the data, she does, however, take this overreaction into account, and optimally attenuates the sensitivity of her forecast to recent observations in the bulk of the distribution. As a result, she underreacts to news of lesser significance.
\medskip
We then qualitatively test four additional predictions of the model. We start with two natural predictions of the data-generating process. The first such prediction is that the distribution of sales growths has fat tails, a fact that holds strongly in the data (and previously shown by \citealp{bottazzi_explaining_2006}). In particular, we check that this fact is not driven by an alternative model of firm dynamics, where firms have heterogeneous volatility, but Gaussian dynamics. In such a setting, large growth shocks could be generated by the subset of firms who are more volatile than average \citep{wyart_statistical_2003}. We thus rescale sales growths by estimates of firm-level standard deviation and find that the resulting distribution still has very fat tails, suggesting that growth shocks occur within firms, not across firms.
\medskip
The second prediction from our DGP is that, conditional on past growth, future growth should follow an S-shaped pattern as discussed above. We show that this holds in the data, whether we normalize sales growth by firm-level standard deviation or not.
\medskip
The third prediction is on forecast errors. A natural prediction of our forecasting model is that the autocorrelation of forecast errors should have the same non-linear relation as the relation between errors and lagged revision. In our model, where the forecasting rule is linear, they are the same. Large past errors signal big, and therefore transient, shocks: this leads to overreaction, as in the error-revision relation. We find that this pattern holds in the data: forecast errors are positively correlated for intermediate values and negatively for large absolute values.
\medskip
Our fourth and last prediction is on stock returns. Assuming risk-neutral pricing and that equity cash-flows follow dynamics similar to revenues, it is easy to show that our model predicts that the autocorrelation of returns should have a shape similar to the autocorrelation of forecast errors. For intermediate values of past returns, momentum should dominate, but for extreme values of returns, stock returns should mean revert. We find this pattern to hold in the data. Our findings line up with recent research from \cite{ChristofSchmidhuber}, who also finds evidence of momentum for ``normal past returns'' and mean-reversion for extreme values of returns. We conclude from this analysis that the risk-adjusted performance of momentum strategies would be considerably improved by excluding stocks whose past returns have been large in absolute value.
\medskip
This paper contributes to the recent empirical literature on expectations formation. Most papers in this space focus on linear and Gaussian data-generating processes. Forecasting rules may, or may not, be optimal, but are in general linear, so that the relationship between forecast errors and past revisions (or past errors) is also linear. Our paper emphasizes the non-linearity of such a relation, and as a result, suggests a different modelling approach for the data-generating process to account for this non-linearity. We emphasize non-Gaussian dynamics in firms' growth (as \citealp{veldkamp}, have done in a different setting and in their case with a focus on Bayesian learning).
\medskip
In doing this we also connect the expectations formation literature with the empirical literature on firm dynamics, which has since \cite{axtell} emphasized the omnipresence of power laws in the distribution of firms sizes (see \citealp{gabaix_power_2009}, for a survey of power laws in economics). That sales \emph{growths} (rather than log sales) have fat tails is a less well-known fact, although it was first uncovered by \cite{bottazzi_explaining_2006}.
\medskip
Last, our assumption that forecasters use a simple, linear, forecasting rule that is misspecified is inspired by the literature on bounded rationality, which assumes economic agents have a propensity to use oversimplified models to minimize computation costs \citep{fuster_natural_2010, fuster_natural_2012, gabaix_behavioral_2018}. Such models are correct on average, they are fitted on available data, but their misspecification gives rise to predictability in forecast errors.
\medskip
Section \ref{datasec} describes the data we use: publicly available data on analyst forecasts (IBES) and confidential data on international stock returns from CFM. Section \ref{sec:eform} documents the main fact: future errors are a S-shaped function of past revision. Section \ref{sec:DGP} lays out the simple framework that we build in order to explain this novel pattern. Section \ref{allpredictions} tests four additional predictions from this model. Section \ref{conclu} concludes.
\section{Data}
\label{datasec}
\subsection{Analyst Forecast data}
This paper focuses on firm revenue (sales) forecasts made by analysts. Analyst forecasts come from IBES Adjusted Summary Statistics files, which are available both in the U.S. and internationally. Summary statistics files contain ``current'' estimates as of the third Wednesday of each month. While Earnings per Share forecasts have received greater attention in the literature, sales forecasts are, in fact, better populated in the data than EPS forecasts in recent years. Another advantage of revenue forecasts is that they are never negative, so that we can easily calculate sales growth. A downside of EPS is that it is frequently negative or small, rendering the calculation of EPS growth forecasts impractical. Thus, the literature on EPS forecasts studies a variable that is, in essence, non-stationary (typically, future EPS normalized by current stock price).
For each firm $i$ and each year $t$, we denote sales by $R_{it}$, and $F_{t}R_{it+1}$ the forecast made in year $t$ for the future realization of $R_{it+1}$. We compute $F_{t} R_{it+1}$ as the consensus three months after the end of fiscal year $t$ (i.e. nine months prior to the end of fiscal year $t+1$) to ensure that sales results for fiscal year $t$ are available when the forecast for $t+1$ is formed. Similarly, the two-year ahead forecast $F_{t-1}R_{it+1}$ is measured three months after the end of fiscal year $t-1$. Finally, we retrieve the realization of $R_{it+1}$ from the IBES actual files, which is designed to recover the realization of the quantity actually forecast by analysts.
\medskip
In this paper, we focus on log sales growth and forecast of log sales growth. We define $g_{it+1}=\log R_{it+1}-\log R_{it}$ the log-growth of this quantity. The growth forecast is defined as $F_t g_{it+1} = \log F_t R_{it+1} - \log R_{it}$ for the one-year ahead growth forecast, and $F_{t-1} g_{it+1} = \log F_{t-1} R_{it+1} - \log F_{t-1} R_{it}$ for the two-year ahead forecast of annual growth.
\medskip
Finally, in the spirit of the expectations formation literature \citep{coibion_information_2015,bouchaud_sticky_2019}, we construct two empirical variables: the forecast error $ERR_{it+1}=g_{it+1}-F_t g_{it+1}$ and the forecast \emph{revision} $R_t g_{it+1}=F_t g_{it+1}-F_{t-1} g_{it+1}$. These two variables will be the main focus of our analysis.
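As a concrete illustration, the construction of these two variables from sales levels can be sketched as follows (pure Python; the sales figures are made up for illustration, not taken from IBES):

```python
import math

def growth_vars(R_t, R_t1, F_t_R_t1, F_tm1_R_t, F_tm1_R_t1):
    """Forecast error and revision from sales levels (all inputs positive).

    R_t, R_t1:    realized sales in years t and t+1
    F_t_R_t1:     forecast of R_{t+1} made in year t
    F_tm1_R_t, F_tm1_R_t1: forecasts of R_t and R_{t+1} made in year t-1
    """
    g = math.log(R_t1) - math.log(R_t)                    # realized g_{t+1}
    F_t_g = math.log(F_t_R_t1) - math.log(R_t)            # F_t g_{t+1}
    F_tm1_g = math.log(F_tm1_R_t1) - math.log(F_tm1_R_t)  # F_{t-1} g_{t+1}
    return g - F_t_g, F_t_g - F_tm1_g                     # ERR_{t+1}, R_t g_{t+1}

# hypothetical firm-year: sales go from 100 to 112, the year-t consensus
# was 108, and the year t-1 consensus pair was (104, 110)
err, rev = growth_vars(100.0, 112.0, 108.0, 104.0, 110.0)
```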
\medskip
To ensure forecast quality and improve sample consistency when we examine returns, we restrict our analysis of forecasts to firms that belong to one of the major global stock indexes.\footnote{The list of stock markets used consists of: AEX, AS5, CAC, DAX, HSC, HSI, IBE, IND, KOS, MID, NDX, NIF, NKY, OMX, SMI, SPT, RAY, SX5, TOP, TPX, TWY, UKX} Further, we restrict ourselves to firm-year observations for which both the forecast error $ERR_{it+1}$ and the revision $R_t g_{it+1}$ are available. We give more details about the number of observations and the start date in Table \ref{tab:index_date_exp}.
\subsection{International Data on Stock Returns}
In examining returns we restrict our sample to equities included in a major national index. We rely on proprietary return data purchased and maintained by CFM. The start of data availability differs by index and is shown in Table \ref{tab:index_date_ret}. For all indexes, data have been obtained through January 2022. Each observation is a ticker-month, and returns are log returns.
\begin{table}[htbp!]
\centering
\caption{Sample size by exchange (sales growth)}
\label{tab:index_date_exp}
\begin{tabular}{lrrrrrr}
\toprule
Index & Total & 2000 & 2005 & 2010 & 2015 & 2020 \\
\midrule
AEX & 533 & 0 & 32 & 19 & 30 & 28 \\
AS5 & 3228 & 48 & 122 & 167 & 196 & 161 \\
CAC & 921 & 0 & 40 & 45 & 49 & 48 \\
DAX & 680 & 0 & 29 & 38 & 37 & 35 \\
HSC & 972 & 7 & 24 & 41 & 75 & 74 \\
HSI & 572 & 15 & 24 & 28 & 29 & 29 \\
IBE & 715 & 0 & 35 & 37 & 40 & 35 \\
IND & 746 & 1 & 38 & 38 & 41 & 39 \\
KOS & 1540 & 34 & 29 & 30 & 101 & 124 \\
MID & 13016 & 10 & 586 & 818 & 782 & 646 \\
NDX & 1174 & 1 & 47 & 67 & 72 & 61 \\
NIF & 1037 & 13 & 24 & 47 & 66 & 64 \\
NKY & 4959 & 207 & 206 & 226 & 224 & 233 \\
OMX & 605 & 19 & 26 & 31 & 32 & 31 \\
RAY & 15923 & 4 & 525 & 995 & 1057 & 881 \\
SMI & 479 & 8 & 21 & 21 & 27 & 23 \\
SPT & 998 & 0 & 34 & 56 & 67 & 63 \\
SX5 & 215 & 0 & 10 & 11 & 13 & 11 \\
TOP & 493 & 0 & 18 & 14 & 40 & 32 \\
TPX & 10836 & 372 & 486 & 531 & 504 & 574 \\
TWY & 1314 & 13 & 40 & 71 & 80 & 74 \\
UKX & 2645 & 82 & 110 & 131 & 142 & 119 \\
\bottomrule
\end{tabular}
\end{table}
\section{Motivating Facts}\label{sec:eform}
In this section we describe new evidence on expectations formation and document a strong non-linearity in the link between forecast error and revisions.
Since \cite{coibion_information_2015-1}, many papers in the expectations formation literature estimate the following linear relationship between forecast errors and revisions:
\begin{equation}
\label{CG_eq}
ERR_{it+1} = \alpha + \beta R_t g_{it+1} + \epsilon_{it+1}
\end{equation}
\noindent which is intuitive to interpret. Full information rationality predicts $\beta=0$ for consensus forecasts \citep{coibion_information_2015-1}. Plain rationality predicts $\beta=0$ for individual forecasts. $\beta>0$ is typically interpreted as evidence of information frictions \citep{coibion_information_2015}, or, if run at the forecaster level, plain under-reaction (\citealp{bouchaud_sticky_2019}, study EPS forecasts; \citealp{ma_quantitative_2020}, study the revenue forecasts of managers). In contrast, $\beta<0$ is interpreted as evidence of overreaction (\citealp{bordalo_diagnostic_2019}, study long-term EPS growth forecasts; \cite{bordalo_diagnostic_2018-1} focus on macroeconomic expectations). All these papers restrict their analyses to linear functional forms, as in equation (\ref{CG_eq}).
\begin{figure}[htbp!]
\begin{center}
\caption{Revenue Forecast Error as a Function of Past Revision}
\label{fig:zigzag_sales}
\includegraphics{data_figures/2_1_SAL_err_l_rev_nonorm.png}
\end{center}
\footnotesize Note: In this figure we use our international sample of firm revenue expectations to report the binned scatter plot of future log forecast errors $g_{t+1}-F_t g_{t+1}$ as a function of past revision $F_t g_{t+1} - F_{t-1} g_{t+1}$. The blue line is a local polynomial approximation, centered in the middle of each centile.
\end{figure}
In this section we show that for revenue growth forecasts this relationship is actually non-linear. In Figure \ref{fig:zigzag_sales}, we represent the relationship in a non-parametric way through a binned scatter plot where the x-axis shows the revision $R_t g_{it+1}$ and the y-axis the average forecast error $ERR_{it+1}$. Each black dot corresponds to a centile of the distribution of revisions, with the x coordinate being the average revision in this centile and the y coordinate being the average forecast error. The grey shaded area shows a bootstrapped 95\% confidence interval. The blue line shows the predicted error from a local polynomial regression (or LOESS) model estimated at the center of each percentile of lagged revision. The kernel for this local regression model is Gaussian with the bandwidth set equal to the average of the distances between the centers of the 1st and 2nd, and the 99th and 100th, percentiles.
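The binning and smoothing procedure can be sketched as follows. This is a simplified stand-in: synthetic data replaces the IBES sample, and a Nadaraya-Watson kernel mean replaces the local polynomial fit, but the percentile binning and bandwidth rule follow the description above:

```python
import math, random

random.seed(0)
# synthetic stand-in for (revision, error) pairs
xs = [random.gauss(0.0, 1.0) for _ in range(5000)]
ys = [0.3 * x + random.gauss(0.0, 0.5) for x in xs]

def percentile_centers(xs, n_bins=100):
    """Mean x within each equal-count bin, as in a binned scatter plot."""
    order = sorted(xs)
    size = len(order) // n_bins
    return [sum(order[i * size:(i + 1) * size]) / size for i in range(n_bins)]

def kernel_mean(x0, xs, ys, h):
    """Gaussian-kernel weighted mean of y at x0 (a simplification of the
    local polynomial fit used in the figures)."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

centers = percentile_centers(xs)
# bandwidth rule from the figure note: average of the gaps between the
# two extreme pairs of percentile centers
h = 0.5 * ((centers[1] - centers[0]) + (centers[-1] - centers[-2]))
curve = [kernel_mean(c, xs, ys, h) for c in centers]
```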
\medskip
For revisions of relatively small to moderate magnitude we find that errors are increasing in revision. Thus, forecasters are \textit{under-reacting} in response to moderately-sized news shocks. This is consistent with evidence from \cite{bouchaud_sticky_2019} on EPS forecasts in the United States. \cite{ma_quantitative_2020} find similar evidence on revenue forecasts from managers' expectations in the U.S. (using guidance data) and Italy (using a survey from the Bank of Italy). Their samples are, however, much smaller than ours (a few tens of thousands of observations at most), which precludes observing the tails of the distribution of revisions.
The key difference is in the tails of the distribution of revisions, for which this relationship is reversed. In the face of exceptionally large news, forecasters are \textit{over-reacting}: a large positive revision leads to more negative surprises. A similar non-linearity is marginally observable in U.S. EPS forecasts in \cite{bouchaud_sticky_2019}, but the S shape is not complete there.
\medskip
We then explore the robustness of this relationship across sub-categories in Figure \ref{fig:zigzag_breakdown}. This figure has two panels: one that splits between U.S. and non-U.S. firms (Panel A) and one that splits the sample into industries (Panel B). In both cases we only show the prediction from the flexible polynomial approximation. In both subcategories the S-shaped function emerges. In particular, it is visible in both U.S. and non-U.S. firms, although more pronounced among U.S. firms.
\begin{figure}[htbp!]
\begin{center}
\caption{The Error-Revision relationship: Sample Splits}
\label{fig:zigzag_breakdown}
\includegraphics[scale=.5]{data_figures/2_1_SAL_err_rev_us_intl.png} \includegraphics[scale=.5]{data_figures/2_1_SAL_err_rev_industry.png} \\
\vspace{.2cm}
\footnotesize
Panel A: US v RoW \hspace{1in} Panel B: By Industry
\end{center}
\footnotesize Note: In this figure we use our international sample of firm revenue expectations to report the binned scatter plot of future log forecast errors $g_{t+1}-F_t g_{t+1}$ as a function of past revision $F_t g_{t+1} - F_{t-1} g_{t+1}$. The blue line is a local polynomial approximation, centered in the middle of each centile. In Panel A, we split the sample between U.S. and non U.S. observations. In Panel B, we split the sample into 1 digit GICS industries.
\end{figure}
Overall, the evidence on log forecast errors and revisions points towards a different treatment of large v. smaller shocks. Such evidence is hard to square with established models of expectations formation, which feature linear DGPs (typically, AR(1) models) and linear expectations models. In what follows, we set up a simple model that features extreme (i.e. non-Gaussian) shocks and linear expectations formation in order to capture the above non-linearity.
\section{Model}
In this section we develop a parsimonious model that features extreme shocks and linear expectations in order to capture the non-linear behavior of expectation errors of Figure \ref{fig:zigzag_sales}.
\subsection{Modeling Sales Growth}
\label{sec:DGP}
The first piece of the model is the data-generating process. We will omit the firm index $i$ for clarity's sake and assume that log sales growth, $g_{t+1}$, evolves according to:
\begin{align}
\label{g}
g_{t+1} &= \underline{g}_{t+1} + \epsilon_{t+1} \\
\label{mu}
\underline{g}_{t+1} &= \overline{\underline{g}} + \phi(\underline{g}_t - \overline{\underline{g}}) + u_{t+1}
\end{align}
\noindent where $\underline{g}_{t+1}$ is the unobservable latent state that follows the classic linear-Gaussian AR(1) dynamics. The key difference with most existing models of expectations formation is that $\epsilon_{t+1}$ follows a probability distribution with heavy tails. Because it fits the data quite well (as we document below), we assume that $\epsilon_{t+1}$ follows a Student's t distribution with $\nu$ degrees of freedom. Thus:
\begin{align*}
\epsilon_{t+1} &\sim \text{Student-t}(0,1, \nu) \\
u_{t+1} &\sim \text{Normal}(0,1)
\end{align*}
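A minimal simulation of this DGP can be sketched as follows; the gamma representation of the chi-square lets us draw Student-t shocks for non-integer $\nu$, and the parameter values ($\phi$, $\overline{\underline{g}}$, $\nu$) are illustrative, not estimates from the paper:

```python
import math, random

random.seed(1)

def student_t(nu):
    """Student-t(nu) draw as N(0,1)/sqrt(chi2(nu)/nu); the chi-square is
    drawn via the gamma distribution so non-integer nu works too."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(nu / 2.0, 2.0)
    return z / math.sqrt(chi2 / nu)

def simulate(T, phi=0.6, gbar=0.05, nu=3.0):
    """g_t = latent Gaussian AR(1) + fat-tailed transitory shock,
    as in equations (2)-(3); phi, gbar and nu are illustrative values."""
    latent, g = gbar, []
    for _ in range(T):
        latent = gbar + phi * (latent - gbar) + random.gauss(0.0, 1.0)
        g.append(latent + student_t(nu))
    return g

g = simulate(20000)
```

A quick diagnostic on the simulated series is that its extremes are far larger, relative to its standard deviation, than a Gaussian series of the same length would produce.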
Note that, although we analyze the cross-section of firms, we assume a single process for all firms. In this paper, we do not explore the consequences of firm heterogeneity for forecasting biases. For instance, such biases could arise from forecasters using one single forecasting model for firms following different processes. We believe such an avenue is interesting, but beyond the scope of this paper, which focuses on one single deviation from the classical model, i.e. that temporary shocks have fat tails. In order to bring the data closer to the model, however, we will conduct all of our analysis with ``normalized growth'' data, thereby ensuring that all firms have the same growth volatility. We discuss this adjustment extensively in Section \ref{allpredictions}.
\medskip
In our simple model the conditional expectation $E\left(g_{it+1} | g_{it}\right)$ is non-linear. We show this numerically in Figure \ref{fig:act_sim}. For different values of $\nu$, we numerically simulate the process and compute the conditional expectation $E\left(g_{it+1} | g_{it}\right)$ on simulated data. As shown in Figure \ref{fig:act_sim} this relationship is indeed quite linear in the body of the distribution, but experiences ``reversals'' in the tails. While not visible in Figure \ref{fig:act_sim}, all finite values of $\nu$ lead to such reversals in the tail, but as $\nu$ gets larger (and $\epsilon$ is closer to being Gaussian) they get pushed out farther into the tails and are very sharp and localized.
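The non-linearity of the rational conditional expectation can also be computed directly by numerical Bayes, without simulation: given $g_t = x_t + \epsilon_t$, integrate the posterior of the latent $x_t$ on a grid and scale by $\phi$. The parameter values below are again illustrative (with $\sigma_x = 1/\sqrt{1-\phi^2}$ the stationary standard deviation of the latent state):

```python
import math

PHI, SIG_X, NU = 0.6, 1.25, 3.0  # illustrative: phi, stationary std of the
                                 # latent AR(1) (= 1/sqrt(1-phi^2)), t d.o.f.

def t_pdf(x, nu):
    """Student-t density with unit scale."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

def cond_exp(g):
    """Rational E[g_{t+1} | g_t = g] (with mean growth set to 0): posterior
    mean of the latent x given g = x + eps, times the AR coefficient."""
    num = den = 0.0
    n = 4001
    for i in range(n):
        x = -10.0 + 20.0 * i / (n - 1)
        w = math.exp(-0.5 * (x / SIG_X) ** 2) * t_pdf(g - x, NU)
        num += x * w
        den += w
    return PHI * num / den

bulk, mid, tail = cond_exp(1.0), cond_exp(3.0), cond_exp(30.0)
# the rational forecast rises with g in the bulk, then reverts in the tail
```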
\begin{figure}[htbp!]
\caption{Actual vs. lagged actual in model simulation.}
\label{fig:act_sim}
\begin{center}
\includegraphics{model_figures/1_0_act_l_act_sim.png}
\end{center}
\footnotesize Note: We simulate the model over $T$ periods with $u$ following a normal distribution $N(0,1)$, and $\epsilon$ following $\text{Student-t}(0,1, \nu)$. We then show local polynomial regressions of $g_t$ on $g_{t-1}$ estimated at the center of each percentile of lagged realization $g_{t-1}$. We explore values of $\nu$ from 1.6 (fat tailed) to $\infty$ (Gaussian).
\end{figure}
The economic intuition is simple. Since the underlying state variable is Gaussian, extreme negative or positive realizations are more likely to come from the transitory process $\epsilon$ than the persistent one $\underline{g}$, since it features more extreme shocks. As a result, a large sales growth realization today is unlikely to translate into future large sales growth tomorrow: This suggests the presence of ``reversals'' in the tails, as we see in Figure \ref{fig:act_sim}.
The above process has several predictions about the distribution of growth rates, one of them being that the cross-sectional distribution of growth rates should have fat tails. We will explore these predictions in Section \ref{allpredictions}.
\subsection{Expectations Formation}
\label{sec:expfor}
The second building block of the model is the formation of expectations. Our core assumption is that forecasters fail to perceive the non-linearity of true expectations $E\left(g_{it+1} | g_{it}\right)$ and use a linear rule. This assumption is based on the idea that economic agents use simplified, ``sparse'' models of reality to formulate expectations \citep{fuster_natural_2012,gabaix_behavioral_2018}. Agents assume $g_{it}$ follows a linear AR(p) model, estimate it on data and use this model to form forecasts. One advantage of this representation is that the term structure of forecasts is naturally defined, as agents calculate mathematical expectations under the AR(p) model. Hence, we assume that the forecaster believes growth follows the following AR(p) model:
$$g_{t+1} = \underline{g} + \sum_{k=0}^{p-1}{\beta_k \left(g_{t-k}-\underline{g}\right)} + u_{t+1}$$
\noindent We denote the subjective expectation operator by $F_t g_{t+k} \equiv E^{L}\left(g_{t+k}\,|\,g_t,g_{t-1},\dots\right)$, the mathematical expectation under the estimated linear model.
We assume that this prior is dogmatic. The forecaster is willing to re-estimate the model's parameters as new data comes in, but does not explore models outside of the AR(p) set-up. As a result, the agent does not really formulate rational expectations since she does not estimate the right DGP, as in \cite{fuster_natural_2010}. One foundation for such dogmatism is that learning is extremely slow in non-Gaussian, non-linear environments, so that it takes many periods to modify the prior about the model (in recent literature, see \cite{veldkamp} and \cite{nakamura}).
Thus the agent estimates the parameters of the misspecified model using OLS on expanding windows -- using all information until date $t$. Let $\widehat{\underline{g}}$ and $\widehat{\beta_k}$ be these estimates. The one-period ahead forecast and the revision are given by:
\begin{align*}
F_t g_{t+1} & = \widehat{\underline{g}} + \sum_{k=0}^{p-1}{\widehat{\beta_k} \left(g_{t-k}-\widehat{\underline{g}}\right)}
\end{align*}
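A sketch of this forecasting rule: fit an AR(2) by OLS on all data available up to $t$ (the expanding window) and form the one-step-ahead forecast. The data here are synthetic AR(1) draws, so the fitted AR(2) should approximately recover the true persistence; the OLS solver is a pure-Python normal-equations implementation:

```python
import random

random.seed(2)

def ols(X, y):
    """OLS via normal equations and Gaussian elimination (small k only)."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(y))) for b in range(k)]
         for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(len(y))) for a in range(k)]
    for col in range(k):                      # forward elimination w/ pivoting
        p = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def ar2_forecast(history):
    """Fit g_t = c + b1 g_{t-1} + b2 g_{t-2} on the whole history
    (expanding window) and return the one-step-ahead forecast."""
    X = [[1.0, history[i - 1], history[i - 2]] for i in range(2, len(history))]
    y = history[2:]
    c, b1, b2 = ols(X, y)
    return c + b1 * history[-1] + b2 * history[-2]

# hypothetical AR(1) data with persistence 0.6
phi = 0.6
g = [0.0]
for _ in range(3000):
    g.append(phi * g[-1] + random.gauss(0.0, 1.0))
f_next = ar2_forecast(g)
```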
\subsection{Predictions of the Model: Errors on Revisions}
We now check that our model indeed generates the non-linear relation between revenue forecast errors and revenue forecast revisions shown in Figure \ref{fig:zigzag_sales}.
In Figure \ref{fig:sim_err_l_rev}, we report results from simulations, assuming that forecasts are based on a fitted AR(2) model. We vary the thickness of the tail of the temporary shock $\epsilon$, which is governed by $\nu$. $\nu=+\infty$ corresponds to a normal distribution, while $\nu=1.6$ is the thickest tail we consider.
\begin{figure}[htbp!]
\begin{center}
\caption{Error as a function of lagged revision}
\label{fig:sim_err_l_rev}
\includegraphics{model_figures/1_0_err_l_rev_sim.png}
\end{center}
\footnotesize Note: We simulate the model over $T$ periods with $u$ following a normal distribution $N(0,1)$, and $\epsilon$ following $\text{Student-t}(0,1, \nu)$. We then show local polynomial regressions of error $ERR_{t+1}=g_{t+1} - F_t g_{t+1}$ on revision $R_t g_{t+1}= F_t g_{t+1} - F_{t-1} g_{t+1}$ estimated at the center of each percentile of revision. We explore values of $\nu$ from 1.6 (fat tailed) to $\infty$ (Gaussian). Forecasters are assumed to employ an AR(2) model when predicting growth rates.
\end{figure}
Figure \ref{fig:sim_err_l_rev} shows that, as long as the temporary shock has sufficiently fat tails, the linear expectations model generates predictable forecast errors that display a non-linear pattern similar to Figure \ref{fig:zigzag_sales}. This is quite intuitive. As shown previously, the true conditional expectation is non-linear (see Figure \ref{fig:act_sim}). When the past realization of revenue growth is large, it is likely that it was driven by the temporary fat-tailed process. As a result, the rational forecaster would expect some mean-reversion, but the linear forecaster does not. This creates overreaction to large shocks. In contrast, when past realizations are moderate, there is underreaction. This comes from the fact that the linear forecaster is on average rational: She fits a linear relation on the S-shaped data of Figure \ref{fig:act_sim}. The slope of forecasts for smaller realizations incorporates some of the overreaction in the tails.
To gain further insight, in Figure \ref{fig:sim_err_l_rev} we vary the fatness of the tail $\nu$. The less thick-tailed the innovation process, the less predictable errors are. When $\nu=+\infty$, the temporary shock $\epsilon$ is Gaussian and forecast errors are very close to zero for all lagged realizations (the black dots line up on the x-axis). This is because in this case the linear AR(2) forecasting rule is nearly rational. Indeed, in this case, the rational expectation is a Kalman filter:
$$K_t g_{t+h} =\underline{g} + \phi^h G\sum_{s=0}^{+\infty}{\left(1-G\right)^s \left(g_{t-s}-\underline{g}\right)}$$
\noindent where $G$ is the Kalman ``gain''. The AR(2) forecasting rule is close enough to the above expression that forecast errors are nearly zero in our simulations.
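The steady-state gain $G$ can be obtained by iterating the scalar Riccati recursion. A minimal sketch with illustrative variances follows; note that in this sketch the geometric weights decay at rate $\phi(1-G)$, the factor implied by the filtering recursion $\hat{x}_t = G g_t + \phi(1-G)\hat{x}_{t-1}$:

```python
def steady_state_gain(phi, q, r, iters=500):
    """Steady-state Kalman gain for x' = phi*x + u (var q), g = x + eps
    (var r), found by iterating the scalar Riccati recursion."""
    P = q  # forecast-error variance; any positive start converges
    for _ in range(iters):
        G = P / (P + r)
        P = phi * phi * P * (1.0 - G) + q
    return P / (P + r)

def kalman_forecast(history, phi, gbar, G, h=1):
    """h-step forecast implied by the steady-state filter:
    gbar + phi**h * G * sum_s (phi*(1-G))**s * (g_{t-s} - gbar)."""
    x_hat = 0.0
    for s, g in enumerate(reversed(history)):
        x_hat += G * (phi * (1.0 - G)) ** s * (g - gbar)
    return gbar + phi ** h * x_hat
```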
The bottom line of this analysis is that the non-linear structure of expectations errors can easily arise when forecasters use linear models while the data-generating process has temporary shocks with fat tails. Indeed, in this case, the optimal forecasting rule is non-linear, even though the process is itself linear.
\subsection{An Additional Prediction: Error on Lagged Error}
The empirical expectations literature also investigates a different moment: The autocorrelation of expectation errors (for instance, \cite{ma_quantitative_2020} and \cite{nakamura} among many others).
In our model the autocorrelation of errors is equivalent to the error-revision coefficient. This happens because revisions are directly proportional to current forecast errors:
\begin{equation}
\label{ERRREVlin}
\underbrace{F_t g_{t+1}- F_{t-1} g_{t+1}}_{\equiv R_t g_{t+1}} = \widehat{\beta_0}\cdot\left(g_t-F_{t-1} g_{t}\right)
\end{equation}
\noindent which means that a positive surprise translates into a positive revision about future growth. The fact that the prior is linear makes this relationship linear, whatever the number of lags $p$.
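The proportionality in equation (\ref{ERRREVlin}) can be verified numerically for a fixed AR(2) rule. The coefficients and the growth history below are hypothetical:

```python
def ar_forecasts(history, betas, gbar=0.0):
    """One- and two-step forecasts under a fixed linear AR(p) rule,
    iterating the rule on its own prediction for the second step."""
    dev = [g - gbar for g in history]
    f1 = gbar + sum(b * dev[-1 - k] for k, b in enumerate(betas))   # F_t g_{t+1}
    dev2 = dev + [f1 - gbar]
    f2 = gbar + sum(b * dev2[-1 - k] for k, b in enumerate(betas))  # F_t g_{t+2}
    return f1, f2

betas = [0.5, 0.2]            # hypothetical fitted AR(2) coefficients
past = [0.1, -0.3, 0.4, 0.2]  # g_{t-3}, ..., g_t

f_t_next, _ = ar_forecasts(past, betas)                  # F_t g_{t+1}
f_tm1_curr, f_tm1_next = ar_forecasts(past[:-1], betas)  # F_{t-1} g_t, F_{t-1} g_{t+1}
revision = f_t_next - f_tm1_next                         # R_t g_{t+1}
error = past[-1] - f_tm1_curr                            # g_t - F_{t-1} g_t
# revision should equal betas[0] * error, as in the equation above
```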
As a result, we expect the non-linear relation between errors and lagged revision of Figure \ref{fig:zigzag_sales} to also hold between error and lagged \emph{errors}. We test this additional prediction in Section \ref{allpredictions}.
\subsection{Model Prediction on Returns: Building Intuition}
\label{returns_model}
We also derive predictions on stock returns. Our simple model, as we will see, predicts that momentum occurs for intermediate returns and mean-reversion occurs for extreme returns.
\medskip
In the spirit of \cite{bouchaud_sticky_2019}, we assume stock prices are given by:
\begin{equation}
\label{APequation}
P_t = \sum_{s \geq 1 }{\frac{F_t D_{t+s}}{\left(1+r\right)^s}}
\end{equation}
\noindent where $F_t D_{t+s}$ is based on the forecasting rule described above in Section \ref{sec:expfor}. Hence, the stock is priced by investors who form expectations based on a linear AR(2) model fitted on past realizations, while we assume dividends to follow the process described in Section \ref{sec:DGP}. We also assume for simplicity that investors are risk-neutral, so that the discount rate is fixed at $r$.
In this very simple asset pricing model we expect returns to be a non-linear function of past returns, similar to what we documented for the error-revision relation in Figure \ref{fig:zigzag_sales}. Before we discuss simulation results and economic intuition, it is worth showing the algebra. The standard first-order Campbell-Shiller approximation reads:
$$ r_{t+1} - F_t r_{t+1} \approx \left( F_{t+1}-F_{t} \right) \sum_{s=0}^\infty \rho^s g_{t+1+s} - \left( F_{t+1}-F_{t} \right) \sum_{s=1}^\infty \rho^s r_{t+1+s}$$
\noindent where we denote log dividend growth as $g$ with a slight abuse of notation ($g$ stands for log revenue growth in the rest of the paper). Equation (\ref{APequation}) assumes constant expected returns $ F_t r_{t+k} = r$ (investors may be biased but are risk neutral), so that the CS decomposition simplifies into:
\begin{align*}
r_{t+1} - r &= \left( F_{t+1}-F_{t} \right) \sum_{s=0}^\infty \rho^s g_{t+1+s} \\
&= g_{t+1} - F_t g_{t+1} + \rho \left( F_{t+1}-F_{t} \right) \sum_{s=0}^\infty \rho^s g_{t+2+s}
\end{align*}
It then remains to compute the infinite sum of expected discounted dividend growth.
In this paper, we assume that forecasters (mistakenly) estimate dividend growth as an AR(p) process:
$$ g_t - \underline{g} = \sum_{s=1}^p \beta_s \left(g_{t-s} - \underline{g} \right) + \epsilon_t$$
We can then stack the estimated AR(p) coefficients $\beta_1,\dots,\beta_p$ into ``companion'' form:
$$ \begin{bmatrix}
g_{t} - \underline{g} \\
g_{t-1} - \underline{g} \\
\vdots \\
g_{t-p+1} - \underline{g}
\end{bmatrix}
= \begin{bmatrix}
\beta_1 & \cdots & \beta_{p-1} & \beta_p \\
1 & \cdots & 0 & 0 \\
\vdots & \ddots & 0 & 0 \\
0 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix}
g_{t-1} - \underline{g} \\
g_{t-2} - \underline{g} \\
\vdots \\
g_{t-p} - \underline{g}
\end{bmatrix}
+ \begin{bmatrix}
\epsilon_t \\
0 \\
\vdots \\
0
\end{bmatrix} $$
\noindent or more compactly:
$$ \mathcal{G}_t = \mathbf{B}\mathcal{G}_{t-1} + \boldsymbol{\epsilon}_t $$
Time $t$ forecasts for $g_{t+s}-\underline{g}$ are then given by:
$$F_t (g_{t+s}-\underline{g}) = \mathbf{e}_1' \mathbf{B}^{s}\mathcal{G}_{t} $$
\noindent where $\mathbf{e}_1$ is a ``selector'' vector picking out the first element of the vector it multiplies. The infinite sum of discounted forecast dividend growth is given by:
\begin{align*}
F_t \sum_{s=0}^\infty \rho^s (g_{t+1+s}-\underline{g}) &= \sum_{s=0}^\infty \rho^s \mathbf{e}_1' \mathbf{B}^{s+1}\mathcal{G}_{t} \\
&= \mathbf{e}_1' \mathbf{B}(\mathbf{I}-\rho\mathbf{B})^{-1}\mathcal{G}_{t}
\end{align*}
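The companion-form algebra can be sanity-checked numerically. The sketch below uses illustrative AR(3) coefficients and a discount parameter $\rho$ (neither are the paper's estimates) and verifies that a brute-force truncation of the discounted sum matches the closed form $\mathbf{e}_1'\mathbf{B}(\mathbf{I}-\rho\mathbf{B})^{-1}\mathcal{G}_t$:

```python
import numpy as np

def companion(betas):
    """Stack AR(p) coefficients into the p x p companion matrix B."""
    p = len(betas)
    B = np.zeros((p, p))
    B[0, :] = betas
    B[1:, :-1] = np.eye(p - 1)
    return B

betas = [0.5, 0.2, 0.1]      # illustrative AR(3) coefficients, not estimates
B = companion(betas)
p = len(betas)
rho = 0.97                   # illustrative discount parameter
e1 = np.zeros(p); e1[0] = 1.0
G_t = np.array([0.04, 0.01, -0.02])  # demeaned history (g_t, g_{t-1}, g_{t-2})

def forecast(s):
    """s-step-ahead forecast of demeaned growth: e1' B^s G_t."""
    return e1 @ np.linalg.matrix_power(B, s) @ G_t

# Closed form of the discounted sum: e1' B (I - rho B)^{-1} G_t
closed_form = e1 @ B @ np.linalg.inv(np.eye(p) - rho * B) @ G_t
# Brute-force truncation of sum_{s>=0} rho^s e1' B^{s+1} G_t
truncated = sum(rho**s * forecast(s + 1) for s in range(500))
```

Because the companion matrix and $\rho\mathbf{B}$ are both stable here, 500 terms are more than enough for the truncation to agree with the closed form to machine precision.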
We then plug this formula into the CS decomposition and obtain:
\begin{align}
r_{t+1} - r &= \mathbf{e}_1' \left( \mathcal{G}_{t+1} - \mathbf{B}\mathcal{G}_{t} \right) + \rho\mathbf{e}_1'\mathbf{B}(\mathbf{I}-\rho\mathbf{B})^{-1} \left( \mathcal{G}_{t+1} - \mathbf{B}\mathcal{G}_{t} \right) \\
&= \mathbf{e}_1' \left(\mathbf{I} + \rho\mathbf{B}(\mathbf{I}-\rho\mathbf{B})^{-1} \right) \underbrace{\left( \mathcal{G}_{t+1} - \mathbf{B}\mathcal{G}_{t} \right)}_{=ERR_{t+1}\mathcal{G}_{t+1}}
\end{align}
The above expression shows that under the AR(p) assumption, returns are a linear function of past forecast errors ($ERR_{t+1}\mathcal{G}_{t+1}$ denotes the vector of the past $p$ forecast errors). In this simple asset-pricing model, returns are only predictable if dividend growth forecast errors are predictable. Under rational expectations (i.e. if the true DGP for dividends is an AR(p)), they are not. But if dividends are driven by a thick-tailed state variable, the true DGP is far from an AR(p), as we have documented. Thus, expected forecast errors are non-linear functions of past errors, and the same should hold for returns and past returns. Our model therefore predicts that returns should be a non-linear function of past returns; in other words, momentum should only be present for intermediate values of past returns.
\subsection{Model Prediction on Returns: Simulations}
In order to check that this prediction also holds without the CS approximation, we proceed to simulate our model. On simulated data, we build returns as $R_{t+1}=(P_{t+1}+D_{t+1}-P_t)/P_t$. We then plot average future returns by bins of past returns in Figure \ref{fig:sim_ret_l_ret}. In this very simple asset pricing model, returns are predictable as soon as dividend growth forecast errors are predictable.
\begin{figure}[htbp!]
\begin{center}
\caption{Momentum for Intermediate Past Returns; Reversal in the Tails}
\label{fig:sim_ret_l_ret}
\includegraphics{model_figures/1_0_ret_l_ret_sim.png}
\end{center}
\footnotesize Note: We simulate the model over many periods with $u$ following a normal distribution $N(0,1)$, and $\epsilon$ following $\text{Student-t}(0,1, \nu)$. We then show local polynomial regressions of $R_{t+1}$ on $R_{t}$ estimated at the center of each percentile of lagged returns $R_t$. We explore values of $\nu$ from 1.6 (fat tailed) to $\infty$ (Gaussian). Forecasters are assumed to employ an AR(3) model when predicting dividend growth rates.
\end{figure}
The intuition is the same as before. Very high past returns likely emerge from surprises due to a large thick-tailed temporary shock. Linear forecasters overreact: large past return events are likely to be situations where the dividend realization was a one-off boon. As a result, linear forecasters overestimate the level of future dividends: the stock price rises too much and future returns are lower. Intermediate past returns, however, are likely generated by standard shocks. There, the linear forecaster under-reacts to small dividend news and the price does not respond enough to it. Future returns are thus positively correlated with past returns for ``smaller'' absolute values of returns. Note that when $\nu=+\infty$, temporary shocks are Gaussian and, as discussed previously, linear expectations are quasi optimal -- a true Kalman filter would be perfectly rational -- and past returns do \emph{not} predict future returns.
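The growth side of this simulation can be sketched in a few lines. The sketch below assumes a persistent Gaussian AR(1) component plus a fat-tailed Student-t temporary shock, and a misspecified forecaster who fits a linear AR(3) by OLS; all parameter values are illustrative, and the pricing step via equation (\ref{APequation}) is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-component DGP: persistent Gaussian AR(1) component mu
# plus a heavy-tailed Student-t temporary shock (parameters are assumptions).
T, rho_mu, nu = 20_000, 0.8, 3.0
mu = np.zeros(T)
shocks = rng.normal(size=T)
for t in range(1, T):
    mu[t] = rho_mu * mu[t - 1] + shocks[t]
g = mu + rng.standard_t(nu, size=T)      # observed growth, fat-tailed

# Misspecified linear forecaster: fit an AR(3) with intercept by OLS.
p = 3
X = np.column_stack([np.ones(T - p)] + [g[p - k: T - k] for k in range(1, p + 1)])
y = g[p:]
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
errors = y - X @ coefs                   # one-step-ahead forecast errors
```

The fitted first-lag coefficient is positive (the forecaster picks up the persistent component), the growth series displays excess kurtosis, and OLS residuals average to zero even though, as the paper stresses, they remain predictable by non-linear functions of the past.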
\section{Testing the Model's Predictions}
\label{allpredictions}
\subsection{Predictions of Growth Rate Dynamics}
In this section we discuss two key predictions of our data-generating process (\ref{g})-(\ref{mu}). The first one is that the distribution of firm \emph{revenue growth} should have fat tails. The second one is that the conditional expectation of $g_{t+1}$ given $g_t$ should be non-linear, as in Figure \ref{fig:act_sim}.
\medskip
First, a key prediction of our DGP (\ref{g})-(\ref{mu}) is that the distribution of firm size growth has fat tails. Many variables relevant for finance and economics are not normally distributed \citep{gabaix_power_2009}. It is for instance well-known that the distribution of firm sizes follows a Zipf law \citep{axtell}. Less well known is the fact that the distribution of firm \emph{growth rates} also has fat tails. \cite{bottazzi_explaining_2006} show that the distribution of growth rates of Compustat firms follows a Laplace distribution. Here, we provide similar evidence from our sample.
In Figure \ref{fig:qq_sal} we show the QQ plot of the sales growth distribution in our sample, along with some textbook distributions. This QQ plot focuses on observations above the 90$^{th}$ percentile of the distribution of $|g_{it}|$, the \emph{absolute} value of sales growth (so both negative and positive shocks). Each point of this chart corresponds to one quantile of the data distribution. For a given quantile $q$ and a c.d.f. $F$, the y coordinate of the point is the average value of absolute growth at quantile $q$ of the data distribution. Since we focus on the top 10\% of absolute sales growth, this number is positive (so the y axis does not start at zero). The x coordinate of that point is the value of the \emph{same quantile} of the chosen distribution $F$, i.e. $F^{-1}(q)$. The closer $F$ is to the data distribution, the closer the chart will be to a 45 degree line (the black line on the Figure).
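The construction of one line of such a QQ plot can be sketched as follows. Here a Student-t sample stands in for the sales-growth data (an illustrative assumption); in the paper the y-axis uses the actual $|g_{it}|$ above its 90$^{th}$ percentile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the data: a Student-t(3) sample plays the role of g_it.
data = rng.standard_t(3, size=200_000)

def tail_quantiles(sample, q_grid):
    """Quantiles of the absolute values, after scaling to unit variance."""
    a = np.abs(sample / sample.std())
    return np.quantile(a, q_grid)

q_grid = np.linspace(0.90, 0.999, 50)            # top decile of |g|
x_data = tail_quantiles(data, q_grid)            # y-axis of the QQ plot
x_normal = tail_quantiles(rng.normal(size=200_000), q_grid)  # x-axis

# Plotting x_normal against x_data gives the "Normal" line of the QQ plot:
# points above the 45 degree line indicate heavier-than-normal tails.
```

Since the stand-in data are fat-tailed, the far quantiles of the data exceed those of the normal reference, exactly the pattern the green line displays in the figure.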
\begin{figure}[htbp!]
\begin{center}
\caption{Tail of log sales growth distribution: Fit of various distributions}
\label{fig:qq_sal}
\includegraphics{data_figures/2_1_qqplot_comp_sal.png}
\end{center}
\footnotesize Note: This is a Q-Q plot of log sales growth vs. some textbook distributions (Laplace, Normal and Student). This plot shows the tail above the 90\textsuperscript{th} percentile of the distribution of $|g_{it}|$. On the x-axis, we report the value of the quantile ($F^{-1}(q)$) of the comparison distribution. On the y-axis, we report the value of the same quantile of the data distribution ($F_{\text{data}}^{-1}(q)$). The data distribution is normalized so that its variance is one. The comparison distributions also have unit variance. By design, the ``data'' line is the 45 degree line.
\end{figure}
Looking at Figure \ref{fig:qq_sal}, it is clear that the distribution of growth rates is very different from normal in the tail. The green line increases faster than the 45 degree line, meaning that the sales growth distribution has much heavier tails than the normal distribution. The best fit is obtained with a Student distribution.
\medskip
Our model crucially assumes that this distribution comes from temporary shocks occurring \emph{within} firms. \cite{wyart_statistical_2003} suggest an alternative explanation for such thick tails: sales growth has a normal distribution at the firm level, but the standard deviation of this process varies across firms. In this case, extreme growth rates would typically occur among firms that have very volatile growth rates (for instance, smaller firms). This alternative interpretation does not explain our findings, but it is worthwhile to test its validity.
\medskip
In order to do that, we normalize growth rates by a measure of firm-level ``volatility'': the mean absolute deviation of log sales growth for each firm. This measure of volatility has the advantage of being more immune to fat tails in the growth distribution (since the variance may not exist in such cases). For firm $i$, we thus compute:
$$MAD_i=\frac{1}{T_i}\sum_{t=1}^{T_i} |g_{it}-\overline{g}_i|$$
\noindent where $T_i$ is the number of observations for the firm, and $\overline{g}_i$ is the average sales growth at the firm level.
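In a firm-by-date panel this normalization is a one-liner per firm. The sketch below uses a toy two-firm panel with assumed column names (not the paper's actual schema):

```python
import pandas as pd

# Toy panel standing in for the firm-level sales-growth data.
df = pd.DataFrame({
    "firm": ["A"] * 4 + ["B"] * 4,
    "g":    [0.10, 0.30, -0.10, 0.20, 1.0, -1.0, 2.0, 0.5],
})

def mad(x):
    """Mean absolute deviation of growth around the firm-level mean."""
    return (x - x.mean()).abs().mean()

df["g_bar"] = df.groupby("firm")["g"].transform("mean")
df["MAD"] = df.groupby("firm")["g"].transform(mad)
df["g_norm"] = (df["g"] - df["g_bar"]) / df["MAD"]   # normalized growth
```

By construction, normalized growth has zero mean within each firm and unit mean absolute deviation, which is the sense in which all firms are put on the same volatility scale.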
\medskip
In Figure \ref{fig:qq_sal_norm} we show the QQ plot of the distribution of normalized growth $(g_{it}-\overline{g}_i)/MAD_i$. If heavy tails were driven by firms with larger growth variance, this adjustment should significantly reduce the fat tails of the data. The QQ plot shows that the distribution is still strongly non-normal, though now the fit of the Laplace distribution is much better (consistent with \citealp{bottazzi_explaining_2006}) and that of the Student distribution is nearly perfect.
\begin{figure}[htbp!]
\begin{center}
\caption{Tail of log sales growth distribution (Normalized)}
\label{fig:qq_sal_norm}
\includegraphics{data_figures/2_1_qqplot_comp_sal_norm.png}
\end{center}
\footnotesize Note: This is a Q-Q plot of \emph{normalized} log sales growth vs. some textbook distributions (Laplace, Normal and Student). This plot shows the tail above the 90\textsuperscript{th} percentile of the distribution of $|g_{it}-\overline{g}_i|/MAD_i$. On the x-axis, we report the value of the quantile ($F^{-1}(q)$) of the comparison distribution. On the y-axis, we report the value of the same quantile of the data distribution ($F_{\text{data}}^{-1}(q)$). The data distribution is normalized so that its variance is one. The comparison distributions also have unit variance. By design, the ``data'' line is the 45 degree line.
\end{figure}
From this analysis we draw the conclusion that the sales growth distribution has heavy tails and that the conditional expectation $E\left(g_{it+1} | g_{it}\right)$ is non-linear. Our model of growth dynamics (\ref{g})-(\ref{mu}) fits these two facts, and it explains the non-linear relation between revisions and errors.
\bigskip
From now on we report results using the above normalization by the mean absolute deviation. This allows us to account for fat-tail effects stemming from heterogeneous growth variance. Quite importantly, our model (\ref{g})-(\ref{mu}) assumes homoskedasticity, so it is important to rescale the data so that they share this property.
\bigskip
A second key prediction of our DGP is that the conditional expectation $E_t \left( g_{t+1} | g_{t} \right)$ should be non-linear, as shown in Figure \ref{fig:act_sim}. As mentioned previously, the intuition is that large realizations of growth presumably come from large shocks to firm sales. Since in our model large shocks are transitory, the rational forecaster should not expect them to persist going forward. Smaller shocks are, however, much more likely to stem from the permanent component of revenue, and the rational forecaster should therefore expect them to persist. We now check whether this relationship holds in the data.
\begin{figure}[htbp!]
\begin{center}
\caption{Future Growth as a Function of Past Growth (Normalized)}
\label{fig:act_data}
\includegraphics{data_figures/2_1_SAL_act_l_act.png}
\end{center}
\footnotesize Note: This Figure reports the binned scatter plot of future normalized growth $(g_{it+1}-\overline{g}_i)/MAD_i$ by value of past normalized growth $(g_{it}-\overline{g}_i)/MAD_i$. Each bin corresponds to a centile of the past growth distribution. The blue line is a local polynomial approximation centered around each one of these centiles.
\end{figure}
In Figure \ref{fig:act_data} we construct a binned scatter plot of sales growth against lagged sales growth. To make sure all firms have the same growth volatility (as in the model), we normalize growth by our estimate of the firm level standard deviation $MAD_i$. Each black dot on this figure represents a centile of the distribution of lagged log growth of sales. The x axis shows the average lagged growth and the y axis measures the average current growth. This chart shows that the relationship between current and lagged growth is far from being linear and looks like the S curve shown in Figure \ref{fig:zigzag_sales}. For intermediate levels of growth (between 0 and 1 standard deviation), past growth translates into higher future growth, with a coefficient of about 0.3. The relationship does, however, become much flatter for high growth (with a slightly negative slope in the tail). For negative growth the slope becomes strongly negative. The lower the past growth, the higher the future growth will be, which is consistent with the idea of a rebound. Conditional on survival, very poor past performance predicts strong future growth, as in our model.
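The binned scatter plots used throughout this section follow a standard recipe: cut the conditioning variable into centiles, then average both variables within each bin. A minimal sketch on synthetic data (the linear slope of 0.3 is only an illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50_000)                 # stand-in for lagged normalized growth
y = 0.3 * x + rng.normal(size=50_000)       # stand-in for future normalized growth

def binscatter(x, y, n_bins=100):
    """Average of x and y within each quantile bin of x (one dot per centile)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    xm = np.array([x[idx == b].mean() for b in range(n_bins)])
    ym = np.array([y[idx == b].mean() for b in range(n_bins)])
    return xm, ym

xm, ym = binscatter(x, y)   # the 100 black dots of the figure
```

Averaging within centile bins drastically reduces noise (each dot averages about 500 observations here), so the shape of the conditional expectation, linear in this synthetic example, S-shaped in the data, becomes visible.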
\subsection{Predicting Forecast Errors}
The model was designed to predict that forecast errors are a non-linear function of past revisions. Given that the model assumes that all firms have the same variance of shocks, it is natural to check that our main empirical result holds after rescaling by firm-level volatility. Another reason why it is important to perform such a robustness check is discussed in the previous Section. Assume, following \cite{wyart_statistical_2003}, that large news purely comes from a separate group of firms (those with more volatile, but still Gaussian, shocks). Then, if forecasters use a firm-level linear forecasting rule, their forecast errors should be close to unpredictable (to the extent that the AR(p) model they use mimics the optimal Kalman filter).\footnote{If, however, forecasters were to use a global forecasting rule (a single rule estimated on all firms), the non-linear shape may be predicted. Indeed, assume all shocks are Gaussian, but firms differ in the volatility of their \emph{temporary} shock $\epsilon$. In this case, a unique forecasting rule would overestimate the persistence of large shocks. We do not explore this lead in this paper since we have documented in the previous Section that normalized growth is far from normally distributed.}
Figure \ref{fig:zigzag_sales_norm} shows that normalized error and normalized revisions follow the same relationship as in our headline Figure \ref{fig:zigzag_sales}. In this Figure, we simply show the binned scatter plot of future log forecast errors $\frac{g_{t+1}-F_t g_{t+1}}{MAD_i}$ as a function of past revision $\frac{F_t g_{t+1} - F_{t-1} g_{t+1}}{MAD_i}$. This suggests that the non-linear relationship does not stem from firm volatility heterogeneity.
\begin{figure}[htbp!]
\begin{center}
\caption{Revenue Forecast Error as a Function of Past Revision (Normalized)}
\label{fig:zigzag_sales_norm}
\includegraphics{data_figures/2_1_SAL_err_l_rev.png}
\end{center}
\footnotesize Note: In this figure we use our international sample of firm revenue expectations to report the binned scatter plot of future log forecast errors $\frac{g_{t+1}-F_t g_{t+1}}{MAD_i}$ as a function of past revision $\frac{F_t g_{t+1} - F_{t-1} g_{t+1}}{MAD_i}$. The blue line is a local polynomial approximation, centered in the middle of each centile.
\end{figure}
Another natural prediction of our forecasting model is that current and past forecast errors should follow a similar relationship. This comes from the fact that in our linear forecasting model, errors and revisions are proportional (equation \ref{ERRREVlin}). Thus, it mechanically follows that if error and lag revisions are linked by the S-shaped curve of Figure \ref{fig:zigzag_sales_norm}, then error and lagged error should follow the same relationship.
We look at the relation between error and lagged error in Figure \ref{fig:zigzag_err_err_norm}. It shows the binned scatter plot of future log forecast errors $\frac{g_{t+1}-F_t g_{t+1}}{MAD_i}$ as a function of past error $\frac{g_{t} - F_{t-1} g_{t}}{MAD_i}$, both of them normalized by firm-level volatility. As can be seen from this figure, the forecast errors follow a linear relationship for intermediate values (until about 1 unit of volatility), but the relationship reverses for larger past errors.
\begin{figure}[htbp!]
\begin{center}
\caption{Revenue Forecast Error as a Function of Past Error (Normalized)}
\label{fig:zigzag_err_err_norm}
\includegraphics{data_figures/2_1_SAL_err_l_err.png}
\end{center}
\footnotesize Note: In this figure we use our international sample of firm revenue expectations to report the binned scatter plot of future log forecast errors $\frac{g_{t+1}-F_t g_{t+1}}{MAD_i}$ as a function of past error $\frac{g_{t} - F_{t-1} g_{t}}{MAD_i}$. The blue line is a local polynomial approximation, centered in the middle of each centile.
\end{figure}
\subsection{Evidence from Returns}
\begin{table}[htbp!]
\centering
\caption{Sample size by exchange (returns)}
\label{tab:index_date_ret}
\begin{tabular}{lrrrrrr}
\toprule
Index & Total & 2000 & 2005 & 2010 & 2015 & 2020 \\
\midrule
AEX & 5892 & 0 & 290 & 300 & 278 & 283 \\
AS5 & 47252 & 0 & 2127 & 2340 & 2164 & 2361 \\
CAC & 9946 & 0 & 475 & 474 & 474 & 480 \\
DAX & 6531 & 0 & 360 & 360 & 360 & 351 \\
HSC & 5946 & 0 & 0 & 281 & 435 & 585 \\
HSI & 8949 & 0 & 0 & 522 & 582 & 597 \\
IBE & 8663 & 0 & 398 & 419 & 410 & 419 \\
IND & 5377 & 0 & 0 & 360 & 360 & 348 \\
KOS & 36595 & 0 & 0 & 2381 & 2345 & 2369 \\
MID & 120017 & 4585 & 4773 & 4770 & 4583 & 4750 \\
NDX & 21624 & 0 & 1171 & 1200 & 1234 & 1197 \\
NIF & 6794 & 0 & 0 & 52 & 599 & 600 \\
NKY & 60783 & 2642 & 2674 & 2698 & 2676 & 2698 \\
OMX & 7565 & 0 & 348 & 360 & 360 & 360 \\
RAY & 830673 & 29792 & 32936 & 33833 & 31915 & 33247 \\
SMI & 3715 & 0 & 0 & 233 & 240 & 228 \\
SPT & 10200 & 0 & 0 & 707 & 708 & 720 \\
SX5 & 9744 & 0 & 0 & 600 & 600 & 587 \\
TOP & 5229 & 0 & 0 & 5 & 504 & 452 \\
TPX & 518734 & 16490 & 19074 & 19783 & 21722 & 25588 \\
TWY & 13663 & 0 & 0 & 355 & 1168 & 1028 \\
UKX & 29775 & 1086 & 1180 & 1196 & 1180 & 1187 \\
\bottomrule
\end{tabular}
\end{table}
Our model in Section \ref{returns_model} predicts that past returns should predict future returns in a non-linear way. We now provide evidence on returns based on CFM's international monthly stock returns data described in the Data section of this paper.
In Figure \ref{fig:zigzag_ret} we first show a smoothed binscatter plot of future returns on past returns. Future returns are monthly and past returns are calculated over the past 12 months excluding the most recent month, as is common in the literature on stock momentum. The only difference here with the standard literature is that we take the log of returns (this is done because this analysis tends to focus on extreme past returns).
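The 12-1 past-return signal described above can be sketched with a rolling sum of monthly log returns. The toy one-stock panel below uses assumed column names, not the schema of the CFM data:

```python
import pandas as pd

# Toy one-stock monthly panel of log returns (column names are assumptions).
df = pd.DataFrame({"stock": "A", "month": range(14), "log_ret": 0.01})

# 12-1 momentum signal: cumulative log return over months t-12 .. t-2,
# i.e. the past twelve months excluding the most recent month.
df = df.sort_values(["stock", "month"])
df["mom_12_1"] = (
    df.groupby("stock")["log_ret"]
      .transform(lambda r: r.shift(2).rolling(11).sum())
)
```

The `shift(2)` drops the most recent month (the signal at month $t$ only uses returns up to $t-2$), and the 11-month rolling sum covers $t-12$ through $t-2$; the first twelve months of each stock are therefore missing by construction.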
\begin{figure}[htbp!]
\caption{Binscatter Plot of Returns by Past Returns}
\vspace{.2in}
\begin{center}
\footnotesize
Panel A: Entire Sample \hspace{1.2in} Panel B: US vs International
\includegraphics[scale=.5]{data_figures/3_1_ret_l_ret.png}\hspace{.1in}
\includegraphics[scale=.5]{data_figures/3_1_ret_l_ret_us_intl.png} \\
\vspace{.2in}
Panel C: By Holding Period \hspace{1.2in} Panel D: By Size Quintile
\includegraphics[scale=.5]{data_figures/3_1_ret_l_ret_hold_per.png}\hspace{.1in}
\includegraphics[scale=.5]{data_figures/3_1_ret_l_ret_mkt_cap.png}
\label{fig:zigzag_ret}
\end{center}
\footnotesize Note: These 4 panels represent smoothed binned scatter plots of future log returns as a function of log past cumulative returns of the past 12 months excluding the last month. Panel A is the entire sample, Panel B splits the sample into US and International stocks, Panel C computes future returns using different holding periods, and Panel D by size quintile.
\end{figure}
Figure \ref{fig:zigzag_ret} shows the binned scatter plot for different splits of the data. Panel A looks at the entire dataset and provides a picture consistent with our prediction: there is momentum for most levels of past returns, but for extreme values mean-reversion prevails. Panel B shows that this pattern holds both for US and non-US returns. Panel C investigates the role of various holding periods, i.e. computing future returns over the following 1, 3, 6, 9 and 12 months. We find that the S-shaped curve emerges as soon as the holding period is longer than one month. In Panel D, we sort stocks into market cap quintiles at the index-month level. The S-shaped pattern appears in every size quintile.
This finding suggests that the performance of traditional momentum strategies could be ``boosted'' by allowing for a region of reversal in both tails. To test this, we consider a self-financing strategy that goes long on momentum for moderate values of the momentum signal, and short on momentum (i.e. long on reversal) for more extreme values of the signal. Let $s_{i,t-1}$ be a momentum signal calculated from past returns. Specifically, we calculate the momentum signal as the cross-sectional rank transform of cumulative returns over the 11 months from $t-12$ to $t-1$, normalized such that $s_{i,t-1}=0.5$ for the firm with the greatest past returns, and $s_{i,t-1}=-0.5$ for the firm with the least. A portfolio with weights $w(s_{it})$ is then formed at time $t$ as follows:
$$
w_{it}=\begin{cases}
0.5 - \frac{s_{i,t-1}}{a} & \text{if $s_{i,t-1} \le a$} \\
\frac{s_{i,t-1}-b}{b - a} - 0.5 & \text{if $a<s_{i,t-1} \le b$} \\
0.5 - \frac{s_{i,t-1} - b}{1 - b} & \text{if $s_{i,t-1} > b$}
\end{cases}
$$
where $a$ and $b$ are constant ``inflection'' points at which our strategy flips from reversal to momentum, and then from momentum to reversal.
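A simplified version of this weighting scheme can be sketched as follows. This is not the paper's exact piecewise formula: the sketch keeps momentum weights between two inflection quantiles, flips the sign in both tails, and demeans so that the portfolio is self-financing. The 13th/86th-percentile inflection points follow the values reported below in the text:

```python
import numpy as np

def tail_reversal_weights(past_ret, lo_q=0.13, hi_q=0.86):
    """Sketch: demeaned cross-sectional rank signal s in [-0.5, 0.5];
    momentum weights between the lo_q and hi_q quantiles, reversal
    (sign flipped) in both tails; demeaned to be self-financing."""
    past_ret = np.asarray(past_ret, dtype=float)
    n = len(past_ret)
    s = np.argsort(np.argsort(past_ret)) / (n - 1) - 0.5   # rank signal
    a, b = np.quantile(s, lo_q), np.quantile(s, hi_q)
    w = np.where((s >= a) & (s <= b), s, -s)               # flip the tails
    return w - w.mean()                                    # self-financing

rng = np.random.default_rng(3)
past = rng.normal(size=1000)
w = tail_reversal_weights(past)
```

With these weights the biggest past winners end up shorted and the biggest past losers bought, while stocks with intermediate past returns are held with plain momentum weights.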
\begin{figure}[htbp!]
\begin{center}
\caption{Momentum Strategy with Tail Reversal: Portfolio Weights and Sharpe Ratios}
\label{fig:zigzag_strat}
\includegraphics[scale=.33]{data_figures/3_2_port_wts.png} \includegraphics[scale=.33]{data_figures/3_2_sr_heatmap.png} \\
\vspace{.2cm}
\footnotesize
Panel A: Portfolio weights \hspace{1in} Panel B: Sharpe ratio by inflection point
\end{center}
\footnotesize Note: Portfolio formed across all firms in our equity sample, both U.S. and international. See text for details of momentum signal construction.
\end{figure}
Our findings are depicted in Figure \ref{fig:zigzag_strat}. The inflection points of our strategy, $a$ and $b$, are chosen to maximize the Sharpe ratio of our strategy over the sample period. Panel A shows the Sharpe-ratio-maximizing weighting function, while Panel B shows how the Sharpe ratio varies for different upper and lower inflection points. Note that a strategy with a lower inflection point of 0 and an upper inflection point of 100 corresponds to a traditional momentum strategy, with no tail reversal. Clearly, our approach contains some look-ahead bias, as the coefficients $a$ and $b$ are estimated on the entire sample. A more systematic investigation of these returns is beyond the scope of this paper.
In line with our earlier empirical results, we find that the Sharpe ratio of our strategy is maximized when the lower inflection point is set at the 13th cross-sectional percentile of the normalized momentum signal, and the upper inflection point at the 86th percentile. The maximum Sharpe ratio of this strategy (0.61) is 1.27 times the Sharpe ratio we observe for a pure momentum strategy (0.48).
\section{Conclusion}
\label{conclu}
In this paper we emphasize that boundedly rational agents, when faced with fat-tailed processes, make predictable mistakes. In order to explore such processes, we need large samples. Our empirical research here leverages the international version of IBES, which gives us a large panel of sales growth forecasts. Consistent with the firm demographics literature, we find that sales growth dynamics are well described by the sum of a short-run and a long-run process. The long-run process is a simple Gaussian AR(1) process, but the short-run process has fat tails. As a result, a simple, linear filtering rule is not optimal. This simple model of expectations formation matches many of the key features of the data.
\medskip
Natural extensions of our work consist in exploring alternative forecast data. Macro forecasts are unlikely to provide us with non-Gaussian processes and are in general too sparse to measure the tails of the DGP with enough accuracy. Within IBES, studying EPS forecasts is another natural research direction, although it presents a scaling challenge: because EPS is often negative or close to zero, growth cannot be computed for a large number of firms. Internal sales forecasts from large companies could be another path.
\newpage
\bibliographystyle{ecta}